
Actual Intelligence with Steve Pearlman

Author(s): Steve Pearlman, Ph.D.

About this audio

Dr. Steve Pearlman is one of the world's premier critical thinking experts. You can view his viral Editor's Pick TEDx talk here: https://youtu.be/Bry8J78Awq0?si=08vBAR1710mgQt0i

pearlmanactualintelligence.substack.com
Steve Pearlman, Ph.D.
Social Sciences
Episodes
  • Free Your Brain from ChatGPT "Thinking"
    Oct 27 2025
    If you're someone who values being able to think independently, then you should be troubled by the fact that your brain operates all too much like ChatGPT. I'm going to explain how that undermines your ability to think for yourself, but I'll also give you a key way to change it.

    How ChatGPT "Thinks"

    Let's first understand how ChatGPT "thinks." ChatGPT is one of several artificial intelligences called Large Language Models, or LLMs. All LLMs use bulk sources of language, like articles and blogs they find on the internet, to find trends in which words are most likely to follow other words. To do so, they identify key words that stand out as most likely to lead to other words. Those key words are called "tokens." Tokens are the words that cue the LLM to look for other words.

    So, as a simple example for the sake of argument, let's say we ask an LLM, "What do politicians care about most?" When the LLM receives that question, it creates two tokens: "politicians" and "care." The rest of the words are irrelevant. Then the LLM scours the internet for its two tokens. Though I did not run this through an LLM, it might find that the words most likely to follow the sequence [politicians]>[care] are: "constituents," "money," and "good publicity."

    But because LLMs only return what is probabilistically likely to follow what they identify as their tokens, an LLM probably would not come up with [politicians]>[care about] moon rocks, because the internet does not already have many sentences where the words "moon rocks" follow the token sequence "politicians" and "care."

    Thus LLMs, though referred to as Artificial Intelligence, really are not intelligent at all, at least not in this particular respect. They really just quickly scour the internet for words that are statistically likely to follow other "token" words, and they really cannot determine the particular value, correctness, or importance of the words that follow those tokens. In other words, they cannot drum up smart, clever, unique, or original ideas. They can only lumber their way toward identifying statistically likely word patterns. If we were to write enough articles that said "politicians care about moon rocks," the LLMs would return "moon rocks" as the answer even though that's really nonsensical.

    So, in a nutshell, LLMs just connect words that are statistically likely to follow one another. There's more to how LLMs work, of course, but this understanding is enough for our discussion today. (A small toy sketch of this kind of next-word counting appears after this episode entry.)

    How Your Brain Operates Like ChatGPT

    You're probably glad that your brain doesn't function like some LLM dullard that just fills in word gaps with ready-made phrases, but I have bad news: our brains actually function all too much like LLMs.

    The good news about your brain is that one of the primary ways it keeps you alive is that it is constantly functioning as a prediction engine. Based on whatever is happening now, it is literally charging up the neurons it thinks it will need to use next.

    Here's an example: The other day, my son and I were hiking in the woods. It was a rainy day, so as we were hiking up a steep hill, my son tripped over a great white shark.

    When you read that, it actually took your brain longer to process the words "great white shark" than the other words. That's because when your brain saw the word "tripped," it charged up neurons for other words like "log" and "rock," but did not charge up neurons for the words "great white shark." In fact, your brain is constantly predicting in so many ways that it is impossible to describe them all here. But one additional way is in terms of the visual cues words give it. So, if you read the word "math," your brain actually charges up networks to read words that look similar, such as "mat," "month," and "mast," but it does not charge up networks for words that look very different, like "engineer."

    Ultimately, you've probably seen the brain's power as a prediction engine meet utter failure. If you've ever been to a surprise party where the guest of honor was momentarily speechless, then you've seen what happens to the prediction engine when it was unprepared for what happened next. The guest of honor walked into their house expecting, for the sake of argument, to be greeted by their dog or to go to the bathroom, not by a house full of people. So their brain literally had to switch functions, and it took a couple of seconds to do it.

    But the greater point about how your brain operates like ChatGPT should be becoming clear: If we return to my hiking example, where I said, "my son and I were hiking and he tripped over a ___," then we see that your brain also essentially used "tokens" like ChatGPT to predict the words that would come next. When it saw "hiking" and "tripped," it cued up words like "log" and "rock," but not words like "great white shark," and...
    10 min
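    As a rough companion to the episode's description of words that are "statistically likely to follow other 'token' words," here is a minimal toy sketch in Python of frequency-based next-word prediction. The tiny corpus and the likely_followers helper are invented for illustration only; this counts word co-occurrences in a handful of sentences to make the statistical idea concrete, and is not a description of how ChatGPT is actually implemented.

        from collections import Counter

        # Hypothetical mini-corpus standing in for "articles and blogs on the internet".
        corpus = [
            "politicians care about their constituents",
            "politicians care about money",
            "politicians care about money and good publicity",
        ]

        def likely_followers(cue_words, sentences, top_n=3):
            """Count the words that appear after the cue words ("tokens") in each sentence."""
            counts = Counter()
            for sentence in sentences:
                words = sentence.split()
                # Positions of the cue words that actually occur in this sentence.
                positions = [words.index(w) for w in cue_words if w in words]
                if len(positions) == len(cue_words):
                    # Everything after the last cue word counts as a "likely follower".
                    for follower in words[max(positions) + 1:]:
                        counts[follower] += 1
            return counts.most_common(top_n)

        # The "answer" is whatever most often follows the cues in this corpus.
        print(likely_followers(["politicians", "care"], corpus))

    Run as-is, the top results are simply the most frequent follow-ups, filler words like "about" included, which is the point: the procedure measures how often words co-occur, not whether an answer is correct or sensible. Feed it enough sentences saying "politicians care about moon rocks" and "moon rocks" would rise to the top, exactly as the episode describes.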
  • Is Higher Ed to Collapse from A.I.?
    Sep 9 2025
    Steve Pearlman: Today on Actual Intelligence, we have a very important and timely discussion with Dr. Robert Niebuhr of ASU, whose recent opinion piece in Inside Higher Ed is titled "AI and Higher Ed, and an Impending Collapse." Robert is a teaching professor and honors faculty fellow at the Barrett Honors College at ASU. And the reason that I invited him to speak with us today on Actual Intelligence is his perspective on artificial intelligence and education, and his contention, roughly, that higher ed's rush to embrace artificial intelligence is going to lead us to some rather troubling places. So let's get to it with Dr. Robert Niebuhr.

    Robert, we talked a little bit about this on our pre-call, and I don't usually start a podcast like this, but what you said to me was so striking, so, uh, nauseating, so infuriating, that I think it's a good place to begin, and maybe some of [00:01:00] our listeners who value actual intelligence will also find it as appalling as I do, or at least a point of interest that needs to be talked about. You were in a meeting, and we're not gonna talk about exactly, necessarily, what that meeting was, but you were in a meeting with a number of other faculty members and something interesting arose, and I'll allow you to share that experience with us and we'll use that as a springboard for this discussion.

    Robert Niebuhr: Yeah, sure. Uh, so obviously, as you can imagine, right, I mean, faculty are trying to cope with, um, a perceived notion that students are using AI to create essays. And, uh, you know, where I'm at, one of the backbones, um, in my unit for assessing work is looking at argumentative essays. So there's the idea that this argumentative essay is a backbone of a grade and of assessment. Um, and if we're suspecting that they're using AI, um, you [00:02:00] know, faculty said, well, why should we bother grading essays if they're written by bots? Um, and, you know, I mean, there's a lot to unpack there and a lot of things that are problematic with that. Um, but yeah, the idea that, you know, to combat the perceived threat of student misuse of AI, we just will forgo critical assessment. Um, that was, you know, not a lone voice in the room. That seemed to be something that was reasonably popular.

    Steve Pearlman: Was there any recognition of what might be being sacrificed by not ever having students write another essay just to avoid them using AI? Which, of course, we don't want, uh, we don't want them to just have AI write their essays; that's not getting us anywhere. But was there any conception that there might be some loss in terms of that policy? [00:03:00]

    Robert Niebuhr: I mean, I think so. I mean, I imagine, uh, you know, I think my colleagues come from a place where they're trying to figure out and cope with a change in reality, right? But, um, there is also a subtext, I think, across faculties in the United States of being overworked. And especially with the mantra among, you know, administration of, you know, AI will help us ramp up or scale up our class sizes and we can do more, and all this sort of extra stuff, it would seem like more of faculty's time and more of their effort is the ask here. I think that may have been part of it.

    Um, I don't know that they considered the logical implication of this: that, you know, if we no longer exercise students' brains, if we no longer have them go through a process that encourages critical [00:04:00] thinking and, you know, articulating that through writing, what that means. I don't know that they thought it through beyond, like, well, we could try it and see; that was kind of the mentality that I gauged from the room.

    But, uh, it's, I mean, it's a bigger problem, right? I think the larger aspect is: what do we do, what can we do as faculty, in this sort of broad push for AI all over the place? And then the idea of the mixed messages students get, right? Students get this idea: well, this is the future; if you don't learn how to use it, if you don't, you know, understand it, you're gonna be left behind. And then at the same time, it's like, well, don't use it in my class, right? Learn it, but don't use it here. And that's super unclear for students, and it's unclear for faculty too, right? So, um, it's one of those things that, um, I don't think in the short term it works. And as you implied, right, the long-term solution here of getting rid of essay [00:05:00] assignments in a discussion-based seminar that relies on essays as a critical, I mean, this is not ...
    44 min
  • Get Recognized for Thinking Outside the Box
    Sep 4 2025
    How does your brain tackle a new problem? Believe it or not, it tackles new problems by using old frameworks it created for similar problems you faced before. But if your brain is wired to use old frameworks for new problems, then isn't that a problem? It is. And that's why most people never think outside the box.

    So, how do you get your brain to think innovatively? Divergently? And outside the box, when others don't?

    It's easier than you think, but before we get to that, let's be clear on something. When I talk about frameworks, I'm not speaking metaphorically. I'm speaking about the literal wiring of your brain, something neuropsychologists might refer to as "engrams," and just one engram might be a network of millions of synapses.

    Think of these engrams as your brain's quick-reference book for solving problems. For example, if your brain sees a small fire, it quickly finds the engrams that it has for fire. One engram might be to run out of the house. Another might be to pour water on the problem. Without these existing engrams, you might just stand there staring at the fire trying to figure out what to do. So, you should be thankful that your brain has these pre-existing engrams for problems. If it didn't, every problem would seem as if you were facing it for the first time.

    But there's a serious flaw in the brain's use of engrams. Old engrams don't always really apply to new problems. So, let's say your brain sees a fire, but this time it's an electrical fire. It still sees fire, shuffles through its engrams, and lands on the engram for pouring water on that fire to extinguish it. In its haste, its old engram overlooks the fact that it's an electrical fire. So, pouring water on it only spreads it, if it doesn't also get you electrocuted. Your brain chose the closest engram it had for solving the current problem, but that old engram for extinguishing fire with water was terribly flawed in terms of solving for electrical fires. Old engrams never fully match new problems.

    So, here's why most people cannot think outside the box: They're trapped using old engrams and do not know how to shift their brains into new ones. That's right. Since the brain needs to rely on some kind of existing engram, people who do not know how to break free of their engrams will never think innovatively, creatively, or outside the box.

    But thinking outside the box is easy if you know the trick. When faced with a problem, even if it is similar to one you faced before, or especially if it is similar to one you faced before, you need to force your brain into looking at the problem in a radically different way. Remember, your brain will keep trying to work back to the old engram. That's its default approach. It wants to use templates it already has. And so you have to shock it into a new perspective that does not allow it to revert to the old perspective. I'm talking about something that has nothing to do with the problem at all. I'm talking about an abstract, divergent, and entirely unrelated new perspective.

    For example, when you're facing a problem, or when you're leading a team facing a problem, examine the problem through some kind of radical analogy that seemingly has nothing to do with the problem itself, but something with which you or your team are familiar. You might ask, how's this situation like Star Wars? Who or what is Darth Vader? What's the Force? Who or what is Luke Skywalker? What's a lightsaber in this scenario?

    Or, you might consider how your problem is like what happened to Apollo 13. How are we spiraling through space? How much power do we need to conserve, and how do we do it? Who's inside the capsule? What's outside? Who's mission control? And so on.

    See, you might think that these are trivial or even silly examples, but remember, it is the fact that they are so unrelated and abstract that will jolt your brain out of its existing engrams and force it to look at the problem in entirely new ways. And here's the beauty of it: Because your brain still wants to solve the problem, it will on its own, whether you even want it to or not, find ways to make connections between your abstract idea and the problem itself, and it will do so in innovative, creative ways that will make your thinking, or your team's thinking, stand out.

    Remember, when Einstein was developing his Theory of Relativity, he didn't just sit around doing math. He also spent a lot of time imagining what it would be like to ride on the front of a beam of light.

    So, when it comes down to it, if you know what to do, then thinking outside of the box might be easier than … well … easier than you think.

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit pearlmanactualintelligence.substack.com
    6 min
No reviews yet