
The Intelligence and Rationality of AI and Humans: A Conversation With Steven Pinker

An interview with Steven Pinker by Xiao-Li Meng and Liberty Vittert
Published on Oct 27, 2023

Abstract

Harvard Data Science Review’s Founding Editor-in-Chief, Xiao-Li Meng, and Media Feature Editor, Liberty Vittert, interviewed Dr. Steven Pinker, the Johnstone Family Professor in the Department of Psychology at Harvard University and expert on the human mind, about how artificial intelligence is viewed within his fields of study. Dr. Pinker has researched and published several articles and books on language, cognition, and social relations.

In this conversation, these two data scientists and an experimental cognitive psychologist theorize about the future societal roles of AI platforms like ChatGPT and discuss whether technology has the ability to be rational and intelligent, as well as how those terms might vary in definition between different fields and between the human brain and AI.

This interview is Episode 28 of The Harvard Data Science Review Podcast. The episode was released on April 20, 2023.

HDSR includes both an audio recording and written transcript of the interview below. The transcript that appears below has been edited for purposes of grammar and clarity with approval from all contributors.


Audio recording of episode 28 of The Harvard Data Science Review Podcast.


Liberty Vittert: [00:00:03] Hello and welcome to the Harvard Data Science Review Podcast. I’m Liberty Vittert, feature editor of Harvard Data Science Review, and I’m here with my co-host and editor-in-chief, Xiao-Li Meng. This month we are getting the scoop on an incredibly hot topic right now: artificial intelligence. Will platforms like ChatGPT run the world anytime soon? Does technology have the ability to be intelligent and also rational? In this episode, we discuss these issues with Steven Pinker. He’s an experimental cognitive psychologist and a popular writer on language, mind, and human nature. What happens when an expert on the human mind sits down to discuss intelligent machines with two data scientists? Keep listening to find out.


Xiao-Li Meng: [00:00:48] Well, thank you so much, Steve, for joining us. I know how busy you are, so let’s just get to it. What is intelligence? What are the key components? What do you think constitutes intelligence?


Steven Pinker: [00:01:00] I think intelligence is the ability to use knowledge to attain goals. That is, we tend to attribute intelligence to a system when it can do multiple things, multiple steps or alternative pathways to achieving the same outcome: what it wants. I’m sitting here right now in William James Hall, and my favorite characterization comes from William James himself, the namesake of my building, where he said, ‘You look at Romeo pursuing Juliet, and you look at a bunch of iron filings pursuing a magnet, and you might say, “Oh, same thing.” There’s a big difference. Namely, if you put a card between the magnet and the filings, then the filings stick to the card; if you put a wall between Romeo and Juliet, they don’t have their lips idiotically attached to opposite sides of the wall. Romeo will find a way of jumping over the wall or around the wall or knocking down the wall in order to touch Juliet’s lips.’ So, with a nonintelligent system, like physical objects, the path is fixed and whether it reaches some destination is just accidental or coincidental. With an intelligent agent, the goal is fixed and the path can be modified indefinitely. That’s my favorite characterization of intelligence.

Xiao-Li Meng: [00:02:20] From that perspective, it seems like it doesn’t have to be human. If the system can be made to achieve these goals, is that one way to understand the possibility of moving the intelligence from humans to AIs or whatever you want to call it?


Steven Pinker: [00:02:35] Well, clearly it can’t be restricted to humans, otherwise it would just be a kind of chauvinist description of one of our traits, and then the concept of artificial intelligence would be an oxymoron. So, clearly, we have to have some criterion of intelligence other than what we happened to find in Homo sapiens.


Liberty Vittert: [00:02:53] So much of your work has also fallen into studying rationality. Is there any connection between intelligence and people’s ability or lack thereof to be rational?


Steven Pinker: [00:03:08] Rationality and intelligence, as concepts, are pretty close if not identical, and they both have to be defined with respect to a goal. A machine doesn’t try to touch Juliet’s lips because that’s just not the goal that was programmed into it. But if that were the goal installed in a robot, then we would call it intelligent if it did jump over the wall, or around the wall, or through the wall, or had multiple means of achieving that goal. We can also ask the question, though, among humans who differ in intelligence—that’s why we have intelligence tests—is that the same as differences in rationality? The answer is not necessarily, because when it comes to humans, we have multiple goals. We can talk about whether the person deploys his or her raw brainpower to attain goals that are more consistent with that person’s long-term or overall or lifelong goals. I think that’s usually what we mean by rationality. When we say, ‘smart people can do stupid things,’ that’s not a contradiction. It’s not an oxymoron. What we mean is that in terms of what people value over the course of their lives—esteem, respect, health, stability, and so on—people can do things in the short run that subvert their goals in the long run. We call that being unwise. We often call it being irrational. Now there’s an empirical question: to the extent that you can measure rationality with a separate instrument from intelligence, how correlated are they among individuals? One way of doing that is you can go to the literature in cognitive psychology and behavioral economics on common fallacies and flaws in reasoning—like the sunk cost fallacy. Do you say, ‘I should pursue this project because I’ve already spent so much time and money on it,’ as opposed to asking, ‘What is the likely payoff for the time and money going forward?’ Do people commit a conjunction fallacy? That is, if you give them a stereotype like the famous example from Tversky and Kahneman of Linda, the philosophy major and political activist, is she more likely to be a bank teller or a feminist bank teller? People say it’s more likely that she’s a feminist bank teller, which violates the conjunction rule in probability.
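
For readers who want to see the conjunction rule spelled out, here is a minimal sketch in Python with invented probabilities; whatever numbers you plug in, the conjunction can never be more probable than either conjunct.

```python
# A minimal sketch of the conjunction rule, using made-up probabilities
# for the Linda problem. For any events A and B, P(A and B) <= P(A).

# Hypothetical joint distribution over (bank teller, feminist)
p_teller_and_feminist = 0.05
p_teller_and_not_feminist = 0.02
p_bank_teller = p_teller_and_feminist + p_teller_and_not_feminist

print(f"P(bank teller)              = {p_bank_teller:.2f}")          # 0.07
print(f"P(bank teller AND feminist) = {p_teller_and_feminist:.2f}")  # 0.05

# Whatever numbers you choose, the conjunction can never be more probable
# than either of its conjuncts, which is why the typical 'Linda' answer
# violates the rules of probability.
assert p_teller_and_feminist <= p_bank_teller
```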

[00:05:34] So on the one hand, you have a battery of common fallacies; on the other hand, you have an off-the-shelf IQ test, which involves measures that include vocabulary, rearranging cards into a sequence that represents a coherent story, repeating digits forwards and backwards, and solving analogy problems. How well correlated are they? Now it turns out, given our common interest in statistics, that there is a statistical problem here. There was a claim in the literature—and I reproduce this in Rationality1—that rationality as measured by a battery of tests of fallacies and biases, and intelligence as measured by IQ tests, correlate positively but far less than perfectly. Therefore, there may be a separate dimension of rationality that is not orthogonal to intelligence, but it is not perfectly parallel to intelligence either. That’s a claim by Keith Stanovich, a cognitive psychologist. He calls it the Rationality Quotient,2 or RQ, which he concedes is correlated with IQ, but not perfectly. Now, the problem is that one could criticize that conclusion by saying that whenever you have two imperfect measures of a single underlying construct, the fact that they don’t correlate perfectly with each other doesn’t prove that there are two underlying constructs; it might just mean that there is imperfect measurement. So that’s an unresolved controversy in the cognitive psychology literature, namely, are rationality and intelligence dissociable in a population of humans?
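
The measurement caveat can be illustrated with a small simulation (a hedged sketch, not part of the interview): even when a single latent ability drives both tests, noise in the two instruments pulls the observed correlation well below 1.

```python
# Two imperfect measures of ONE underlying construct will correlate
# imperfectly, so an imperfect IQ-RQ correlation does not by itself prove
# that rationality is a separate dimension. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

g = rng.normal(size=n)                     # single latent ability
iq = g + rng.normal(scale=0.7, size=n)     # noisy "IQ test"
rq = g + rng.normal(scale=0.7, size=n)     # noisy "rationality battery"

r_observed = np.corrcoef(iq, rq)[0, 1]
print(f"Observed IQ-RQ correlation: {r_observed:.2f}")  # ~0.67, well below 1

# Classical attenuation: observed r = true r * sqrt(rel_x * rel_y).
# Here the true correlation is 1.0, and each test's reliability is
# var(g) / (var(g) + var(noise)) = 1 / 1.49, about 0.67.
```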


Xiao-Li Meng: [00:07:23] Speaking of these tests for intelligence—you mentioned the IQ test—for machines, we all know there’s a Turing test, right? But just about a week ago, Microsoft Research released an article where they tried to test whether an early version of GPT-4 has signs of intelligence. It seems their basic conclusion is that there are some signs of intelligence there. I want to ask what your take is on whether GPT at this moment shows signs of intelligence, and how you would conduct such a test if you wanted to find out whether these things are intelligent or not, given whatever we have at this moment.


Steven Pinker: [00:08:01] Well, the Turing test has sometimes been called Alan Turing’s worst idea. Because it really is, despite the fact that it went viral. Everyone knows what the Turing test is. It’s actually a pretty crummy test of anything. It’s a test of how easily you can fool people, and the answer is ‘easily.’


Xiao-Li Meng: [00:08:20] [Laughing].


Steven Pinker: [00:08:23] And in fact, this goes back to the 1970s, when my former colleague at MIT, Joseph Weizenbaum, devised the first chatbot, called ELIZA, which had maybe two dozen canned responses and tried to mimic a Rogerian therapist. So, if there’s any sentence with the word ‘mother’ in it, then it replies, ‘Tell me more about your mother.’ Or if you say, ‘Last night I dreamed that blah, blah, blah,’ it will then say, ‘Have you ever wished that blah, blah, blah?’ So, this was really primitive, and it kind of passed the Turing test back in 1974. Weizenbaum noticed that his own secretary was pouring her heart out to this incredibly dumb chatbot. What it shows is that fooling a person is really not a criterion of anything. And it’s funny that no other science would use anything like that as a criterion. ‘What is your theory of how plants work?’ ‘Well, if I can develop a silk flower that fools people into thinking it’s a real flower, that shows that silk has made it as flowers.’ I mean, that just doesn’t make any sense. So, forget the Turing test.
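
To make concrete how little machinery this kind of chatbot requires, here is a toy, ELIZA-flavored responder; the rules and wording are invented for illustration and are not Weizenbaum’s actual script.

```python
# A toy ELIZA-style responder: a handful of keyword rules and canned
# reflections. The rules here are invented for illustration; Weizenbaum's
# actual script was different, but the principle is the same.
import re

RULES = [
    (re.compile(r"\bmother\b", re.I), "Tell me more about your mother."),
    (re.compile(r"\bI dreamed (that )?(.+)", re.I),
     lambda m: f"Have you ever wished that {m.group(2)}?"),
    (re.compile(r"\bI am (.+)", re.I),
     lambda m: f"How long have you been {m.group(1)}?"),
]

def respond(utterance: str) -> str:
    for pattern, reply in RULES:
        m = pattern.search(utterance)
        if m:
            return reply(m) if callable(reply) else reply
    return "Please go on."  # default Rogerian nudge

print(respond("Last night I dreamed that I missed my exam"))
# -> "Have you ever wished that I missed my exam?"
# Note the unedited pronoun: the program reflects text, it understands nothing.
```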


Xiao-Li Meng: [00:09:29] Great.


Steven Pinker: [00:09:30] But your question really is, could you consider these large language models intelligent? Clearly, they are intelligent in some ways. I mean, they’re damn impressive in some ways. Does their intelligence work the same way that ours does? The answer is almost certainly no. But the fact that they can achieve goals not through a canned regurgitated response, but through a mind-bogglingly complex set of calculations that are appropriate to the goal—the goal in this case being continuing a conversation—shows that they have some kind of intelligence. But just as there are many ways in which you can get something to fly—you know, birds and planes do it in different ways—there can be different ways of implementing intelligence, and we know that the large language models can’t do it the way humans do, or vice versa.


Xiao-Li Meng: [00:10:24] That’s actually a really interesting point that leads to my next question, because as you said, the notion of artificial intelligence has been around for a while, but the most recent wave seems to be coming out of these large language models and natural language processing. This really reminds me that you used to do a lot of studies on the role language plays in the development of human intelligence. Is it a coincidence that we’re seeing this kind of next-generation AI coming out of natural language, or is it following a similar pattern to how language itself evolved and came to play such an important role in human intelligence? Do we see some kind of parallel evolution here?


Steven Pinker: [00:11:04] Yes, so the interesting thing about the large language models is that, ordinarily, intelligence and rationality can’t easily be done just in terms of language, because language is a social medium that evolved organically. It’s used in a conversational context. When you learn math, when you learn computer programming, you’ve got to learn the formal languages of mathematics, just because English wasn’t designed for all that. English was designed for ordinary humans chitchatting, using context, reading between the lines, connecting the dots. So any kind of intelligence ordinarily would have to be implemented in rules that are much more precise and manipulable. Now, the big surprise—and I’ve got to confess that it surprised me—is how much intelligence is sitting implicitly in databases of language if they’re big enough. In the case of GPT-3, and I think ChatGPT, it’s something like half a trillion words.


Xiao-Li Meng: [00:12:11] Right.


Steven Pinker: [00:12:11] Now it turns out you could not possibly do anything intelligent by manipulating language from one issue of The New York Times. But if you have the language of everything that’s been on the Internet since the year 1995, you can extract—I think it’s 50 billion parameters—from the correlations, and the correlations of the correlations, and the correlations of the correlations of the correlations, of words in the context of other words. The human mind can’t wrap itself around what those very-high-order statistical patterns are, and what the large language models tell us is that implicit in those statistics, there is a lot of knowledge and indeed potential intelligence. What these models don’t have is a model of the world. That is, they don’t see, they don’t feel, they don’t experience. They don’t even have an explicit database of the state capitals and the laws of physics and the history of the United States, or even how objects in our everyday experience roll and fall and tumble, and what other people tend to do. No one programmed any of that stuff in, but it implicitly seems to have an understanding of that by soaking up patterns that are implicit in those literally hundreds of billions of words of text. Now, all that text came from humans. These were people doing Reddit posts and Wikipedia entries and The New York Times articles and God knows what else. Our intelligence kind of dumped all this data there for the taking by these large language models, and it kind of went backwards and tried to, in a sense, infer what kind of world this is, what kind of creatures humans are such that they could dump this half a trillion words on the Internet. It is mind-boggling that it could do that.

[00:14:09] The actual technique is not that new. A primitive version was proposed in 1990 by Jeffrey Elman, who sadly died a few years ago and doesn’t really get credit for the very idea of a neural network model that tries to predict the next word in a sentence from the previous words using some hidden states, hidden variables. And I’ve got to say that I’m one of the ones who said that’s just not how language works. We don’t just predict the next word. We have a hierarchical model of phrases within phrases. We have a detailed semantic network of the world. We map from one to the other. And I still maintain that’s the way the human mind works. The new discovery is taking advantage of almost a happy accident: all these millions of people have been dumping their thoughts into the web, and it’s all archived. Now, you have a combination of that mind-bogglingly large data set of language with computational techniques and processors like GPUs that can crunch data like never before. And we have this surprise that fairly—I don’t want to say stupid, but not-so-understanding—algorithms can mine an awful lot when you’ve got a data set that large.
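
For readers curious about the idea Pinker credits to Elman, here is a deliberately tiny sketch of next-word prediction with a hidden state. The corpus, the network sizes, and the training shortcut (only the output weights are updated) are invented for illustration and bear no resemblance to the scale or training of a modern large language model.

```python
# A toy Elman-flavored network: predict the next word from the current word
# plus a hidden state carried over from earlier words. A real Elman network
# would also train Wxh and Whh by backpropagation through time; here only
# the output weights are trained, to keep the sketch short.
import numpy as np

corpus = "the dog chased the cat the cat chased the mouse".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V, H = len(vocab), 8

rng = np.random.default_rng(0)
Wxh = rng.normal(scale=0.1, size=(H, V))   # input -> hidden
Whh = rng.normal(scale=0.1, size=(H, H))   # hidden -> hidden (the "memory")
Why = rng.normal(scale=0.1, size=(V, H))   # hidden -> next-word scores

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

lr = 0.1
for _ in range(300):
    h = np.zeros(H)
    for t in range(len(corpus) - 1):
        x = np.zeros(V)
        x[idx[corpus[t]]] = 1.0
        h = np.tanh(Wxh @ x + Whh @ h)      # update the hidden state
        p = softmax(Why @ h)                # distribution over the next word
        dy = p.copy()
        dy[idx[corpus[t + 1]]] -= 1.0       # cross-entropy gradient at the output
        Why -= lr * np.outer(dy, h)         # crude update of output weights only

# Inspect the next-word distribution after seeing "the" at the start:
h = np.tanh(Wxh @ np.eye(V)[idx["the"]] + Whh @ np.zeros(H))
print({w: round(float(softmax(Why @ h)[idx[w]]), 2) for w in vocab})
```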


Liberty Vittert: [00:15:36] You’re almost saying we created the ability of ChatGPT by giving it all this data. Is it possible that something like ChatGPT could be an accelerating technology in the sense that it could sort of perpetuate itself into exponential growth and develop some sort of self-identity, become a being with self-awareness instead of just a machine, from all of this information that we’ve kind of dumped into it?


Steven Pinker: [00:16:06] I suspect not for a couple of reasons. One of them is its growth. I don’t know that it’s going to be exponential. I mean, it’s impressive but ‘exponential’ is really a lot. That is, if it’s twice as good tomorrow, is it going to be four times as good the day after tomorrow and eight times as good the day after that? I suspect not. Also, a lot of systems have some amount of self-knowledge. Your computer does a self-check, your car checks out all its systems. So simply having sensors or monitors of one’s internal state that one then represents is not that big a deal. It’s not the same as being aware in the sense of actually subjectively feeling something, and it doesn’t emerge naturally out of bigger and bigger data; it needs some kind of feedback loop where something that senses the state of the system can then itself become an input to that system. And the way large language models work now is not that way in the sense that they’re trained on a data set, pretty much the entirety of the web, then they kind of freeze that as of 2021, and these systems have anterograde amnesia for everything that happens after 2021. They don’t take in their own knowledge, but it’s kind of frozen in what they were trained on.


Xiao-Li Meng: [00:17:32] So, since the arrival of ChatGPT very recently, I think it’s now very hard to find anyone who has not either chatted with ChatGPT or chatted about ChatGPT. Of all the people I have talked to, essentially, I have found that there are people who are very impressed, there are people who are quite depressed, and there are also people who really want to suppress this whole thing because they think it’s very dangerous if it gets out of control. What do you think about this, considering whatever growth it will have? Do you think that, overall, it will be beneficial to humankind?


Steven Pinker: [00:18:08] It’s an excellent question. It’s a hard question to answer. Just about all of our technologies—other than weaponry, which is designed to inflict harm—have had costs and benefits, but we do tend to keep the ones that have benefits. Overall, our technologies tend to be forces for improvement. The potential of artificial intelligence could be spectacular if it were to develop robotics that could drive our cars without the carnage of car accidents; it could do boring, repetitive tasks like stocking shelves and making beds, freeing humans to do better things with their time; it could solve problems like energy and pandemics and antibiotics, and God knows what else. The potential could be spectacular. Part of the problem is that a lot of artificial intelligence has been deployed toward dubious goals like multiplying propaganda, disinformation, and phishing scams, and manipulating people into doing things against their interests. And there is reason for concern. Is the whole industry of AI going to just foist these powerful, dubious technologies on us, or is it going to concentrate its resources on the kind of AI that could really make life better?


Liberty Vittert: [00:19:37] When we talk about the dangers of ChatGPT or the dangers of some of this AI, there have been tons of reports of bias. So, it will write a poem praising Joe Biden but not Donald Trump, or it refuses to write about why fossil fuels could be a good thing. Do you see some real dangers in this, or do you think this type of technology will sort of right itself as a whole?


Steven Pinker: [00:20:03] Yeah, I think there are two dangers. One of them is that since they’re at the mercy of their training set, it’s the ultimate garbage in, garbage out. You train them on unrepresentative data, and they will reproduce whatever statistical patterns are in that data, and they could be biased. Now, in some cases it may be that the world is biased. It’s just a law of social science—people are very uncomfortable with this law, but it’s nonetheless true—that whenever you divide a population by any demographic variable, race or class or sex or ethnic group or religion, the means are never the same. Sometimes that’s uncomfortable. We don’t like to admit that if you divide people up, they don’t score the same on anything you care to measure. We humans are sensitive to that, and we sometimes euphemize or repress or even willingly choose to ignore certain statistical differences. Even a neural network model that’s trained on a representative data set is going to find whatever statistical patterns are in there. We call some of those statistical patterns biases, but they’re sometimes just the statistics of reality. That’s a second kind of bias in the sense that sometimes we don’t want to know what the statistics are. I’ll just give you an example: in the court system, a Bayesian justice system that took into account priors might come up with more accurate verdicts, but it would be an ethical horror. That is, if you had a defendant, you could ask, ‘Well, what’s the base rate for people of his age and sex and religion and race?’ and adjust how much evidence we require depending on all that background information. Now, if you’re a Bayesian statistician, you say, ‘Great! That’s the way you get the most accurate posteriors.’ If you’re a human with a moral sense, designing a justice system, you don’t want that. You want to say, ‘There are some kinds of information we don’t want to know.’ If your definition of intelligence is that you shovel in all the relevant data and let it soak up whatever patterns are in there, it could behave in ways that are not so much biased, but that we would judge to be unacceptable or immoral, because we have the kind of intelligence that can take into account categorical rules, like ‘your race shouldn’t matter.’
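
The point about priors can be made concrete with Bayes’ rule and entirely made-up numbers: identical evidence yields different posterior probabilities of guilt when different group base rates are used as priors, which is exactly the practice a morally designed justice system refuses to adopt.

```python
# Bayes' rule with invented numbers: the same evidence strength produces
# different posterior probabilities of guilt when different base rates
# are used as priors. The numbers are for illustration only.

def posterior_guilt(prior: float, likelihood_ratio: float) -> float:
    """P(guilty | evidence), given a prior and the evidence's likelihood ratio
    P(evidence | guilty) / P(evidence | innocent)."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

evidence_lr = 20.0  # hypothetical: the evidence is 20x likelier if guilty

for label, base_rate in [("group A", 0.01), ("group B", 0.05)]:
    print(f"{label}: prior {base_rate:.0%} -> "
          f"posterior {posterior_guilt(base_rate, evidence_lr):.0%}")
# group A: prior 1% -> posterior ~17%
# group B: prior 5% -> posterior ~51%
```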

[00:22:30] Now, there’s a third kind of bias, and that’s the one that’s kind of ham-fistedly been programmed into these systems, where you get these kinds of woke-ish apologies if you ask ChatGPT questions about sex or about race or about sexual orientation, where it will say, ‘Oh, no, you must never generalize.’ Indeed, there is reason to believe that some of that stuff has been tacked on; we don’t know if it’s stipulated in actual rules. More likely, it comes from the human feedback, the reinforcement learning from human feedback, where they try out ChatGPT on some humans who can then be horrified and say, ‘No, no, don’t give that answer,’ and it learns those patterns. If you’ve got a bunch of people with a political axe to grind who are giving that kind of feedback, then the system will replicate their biases, and it might indeed prevent certain kinds of satire but not other kinds of satire, or certain kinds of generalization but not others. That’s a kind of bias that’s almost deliberately programmed in, and people aren’t so upset about it because it’s the kind of bias they like. Unless you’re on the other side. Then you might not like that kind of bias.
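
As a rough, hypothetical sketch of the preference-learning idea behind reinforcement learning from human feedback: a reward model is fit to pairwise rater judgments, and whatever the raters systematically disliked gets penalized by the learned reward. The features and data below are invented, and the subsequent policy-optimization step is omitted entirely.

```python
# A highly simplified sketch of preference learning: a reward model is fit
# to pairwise human judgments ("A is better than B"), so the model inherits
# whatever preferences (or biases) the raters had. Features, data, and
# training are invented for illustration; the real pipeline also fine-tunes
# the language model against this reward signal.
import numpy as np

# Each candidate response is reduced to a tiny, hypothetical feature vector:
# [contains_generalization, is_polite, length_in_sentences].
pairs = [
    # (features of the preferred response, features of the rejected one)
    (np.array([0.0, 1.0, 2.0]), np.array([1.0, 0.0, 1.0])),
    (np.array([0.0, 1.0, 3.0]), np.array([1.0, 1.0, 2.0])),
    (np.array([0.0, 0.0, 1.0]), np.array([1.0, 1.0, 4.0])),
]

w = np.zeros(3)              # reward-model weights
lr = 0.5
for _ in range(200):         # Bradley-Terry / logistic preference loss
    for preferred, rejected in pairs:
        margin = w @ (preferred - rejected)
        grad = (1 - 1 / (1 + np.exp(-margin))) * (preferred - rejected)
        w += lr * grad

print("learned reward weights:", np.round(w, 2))
# The negative weight on the first feature means the reward model now
# penalizes whatever the raters disliked (here, "generalizations"), which
# is how rater biases get baked into the system's behavior.
```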


Xiao-Li Meng: [00:23:48] So you mentioned how these large language models seemingly were able to detect these correlations of correlations of correlations, deep patterns that humans may or may not be able to detect. I wonder if there’s another angle in which things have become a lot more efficient, because in some sense I was thinking about ChatGPT almost like this gigantic wisdom of a crowd, right? If I could get all the experts, I could tap into their brains by asking them questions. But if I really put together a large group of experts, I would have to deal with the issue of human emotions, because everybody has their own take and they may want to argue with each other. I wonder if part of the efficiency coming out of ChatGPT or these large language models is that they don’t have the emotional entanglement. That is probably both good and bad. The bad part is they don’t have that kind of check, the ability to say, ‘No, no, you cannot do this,’ because there is a judgment involved. So, I think my question has two parts. One is, should we actually try to actively inject some artificial emotion, whatever that means? The second is, without it, is part of the danger that they can go in directions we humans would not go, because we have our sympathy and empathy?
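
The wisdom-of-the-crowd intuition can be sketched in a few lines (illustrative numbers only): averaging many independent, noisy estimates lands much closer to the truth than a typical individual estimate does.

```python
# A small sketch of the "wisdom of the crowd" intuition: averaging many
# independent, noisy estimates gets closer to the truth than most
# individual estimates do. Numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
truth = 100.0
experts = truth + rng.normal(scale=15.0, size=1000)  # independent noisy guesses

crowd_average = experts.mean()
typical_individual_error = np.abs(experts - truth).mean()

print(f"crowd average error:      {abs(crowd_average - truth):.2f}")  # well under 1
print(f"typical individual error: {typical_individual_error:.2f}")    # around 12
# Averaging cancels independent errors; it does not, of course, supply the
# judgment or emotional checks that human deliberation provides.
```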


Steven Pinker: [00:25:07] I don’t know if the problem is so much the lack of emotion in the way that we talk about a person being too emotional. It’s perhaps a lack of well-specified goals that it is designed to attain. When we call something emotional, well, we’re all emotional. I mean, we ought to be emotional. We fall in love, we get angry, we’re afraid of certain things. When we say something like ‘Oh, stop being so emotional’ or ‘Let’s do this rationally, not emotionally,’ I think what we often mean is reasoning that satisfies some short-term goal at the expense of some long-term goal. So, if someone blows their stack and they start yelling and insulting someone, it feels really good in the moment and it might kind of intimidate someone or punish someone, but if you’re in a committee, that’s not what the committee is all about. That’s not the goal you want to pursue, one person dominating another. Or if you have an ill-advised fling, or you gorge on some delicious treat, or you procrastinate, those are all cases where we say, ‘don’t be emotional.’ And I think what they have in common is that we prioritize the short term over the long term, and you probably don’t want an artificial intelligence system to do that.

[00:26:21] But what you do want is to know that it’s pursuing the goal that it’s designed to pursue, and sometimes those goals can conflict. So in the case of the criminal justice system, there are multiple goals. One of them is you want to convict guilty people, but another one is you don’t want to convict innocent people, and a third one is you don’t want to have any kind of racial disadvantage. And sometimes they’re in conflict. And it’s not trivial to figure out how to adjudicate among competing goals. Sometimes you may not even know, or the designer may not have given enough thought to, what the goal is. In fact, it’s not easy to say what the point of ChatGPT is. That is, it’s to chat with someone, but what’s so great about that? To simulate composition—is that a goal that we really want? It would be nice if there were more clarity as to what the system is supposed to bring about. And then we can also ask: is it bringing about goals that we ought to want?


Liberty Vittert: [00:27:25] So we always wrap up with this magic wand question. I was reading some of your previous interviews, and in one of them you said, quote, ‘Many philosophers that I know think that the world would be a better place if people knew a bit of logic.’ So, if you could wave your magic wand, what would be the logic that you would teach people, that everyone would learn? What is that logic that’s going to make the world a bit of a better place?


Steven Pinker: [00:27:55] I would say the biggest one is to be aware of our own fallibility, our own limited knowledge, and the fact that from the inside it always feels like we’re right, that we’re both factually correct and morally justified, even when we’re not. And that it feels that way to other people as well, even when they’re wrong or we’re wrong. That’s why we have institutions like the court system, like academia with peer review, with freedom of speech, with empirical testing. That’s why we have deliberative democracy. That’s why we have checks and balances. For humanity to get good things, health and peace and safety and pleasure, we have to step outside ourselves. We have to not just take the view from inside our own skin, because we are all subject to biases and fallacies. We always think that we’re right. We always think that we’re good. But we have to be aware that that can’t be true of all of us all the time, and that’s why we have to participate in institutions. That’s why institutions are precious, at least the good ones.


Xiao-Li Meng: [00:29:08] Well, thank you so much, Steve, for this most intelligent and rational conversation. It’s truly a pleasure to listen to you. Thank you.


Steven Pinker: [00:29:17] I appreciate it. Many thanks.


Liberty Vittert: [00:29:20] Thank you for listening to this week’s episode of the Harvard Data Science Review Podcast. To stay updated with all things HDSR, you can visit our website at hdsr.mitpress.mit.edu or follow us on Twitter and Instagram @theHDSR. A special thanks to our producers, Rebecca McLeod and Tina Tobey Mack, and assistant producer Arianwyn Frank. If you liked this podcast, don’t forget to leave us a review on Spotify, Apple, or wherever you get your podcasts. This has been Harvard Data Science Review: everything data science and data science for everyone.


Disclosure Statement

Steven Pinker, Xiao-Li Meng, and Liberty Vittert have no financial or non-financial disclosures to share for this interview.


©2023 Steven Pinker, Xiao-Li Meng, and Liberty Vittert. This interview is licensed under a Creative Commons Attribution (CC BY 4.0) International license, except where otherwise indicated with respect to particular material included in the interview. 
