Ever since the publication in 1957 of Syntactic Structures, Noam Chomsky has been a towering eminence in linguistics and the philosophy of language, and since the 1960s, he has remained an astute and outspoken social critic. Compositionists familiar with Chomsky’s work only through his transformational grammar and its compositional application, sentence combining, may not be aware of how profoundly Chomsky has influenced modern thought on language. It would be fair to say that Chomsky’s scholarship over the last three decades has forever altered our notions of the integral relationship between language and the human mind.
Especially noteworthy about Chomsky’s positions as recorded in the interview below is that in this age of social construction, meaning relativity, and Derridean indeterminacy, Chomsky tenaciously contends that at the heart of most human cognitive operations is a fixed, structured, biological directiveness. In an age in which the preferred target of many intellectuals is Plato, Chomsky serenely declares that “the reasoning in the Platonic dialogues . . . is valid if not decisive,” and he holds up “Plato’s problem” as the key strategy for studying most phenomena in the human sciences. Dismissing poststructuralist thought as “uninteresting,” Chomsky notes that the question of indeterminacy is not new, that “people have come at the question of indeterminacy from many points of view,” and that it’s just part of the age-old philosophic debate over the analytic/synthetic distinction. Yes, to a certain extent “elements of fluidity and indeterminacy do enter,” he concedes, but also “there is a highly determinate, very definite structure of concepts and of meaning that is intrinsic to our nature and as we acquire language or other cognitive systems these things just kind of grow in our minds, the same way we grow arms and legs.”
In fact, Chomsky complains of a “pernicious epistemological dualism,” in that “questions of mind are just studied differently than questions of body.” Certainly, there is “an element of truth” to theories such as the social construction of knowledge, but we seem, he argues, to ignore the powerful evidence that “systems of knowledge in particular [are] substantially directed by our biological nature.” For example, if we want to study a physical phenomenon such as puberty, “we allow our conception of rational inquiry to guide us, and it guides us right to the study of innate structure”; if we want to study meaning, “people don’t follow the same line of inquiry” even though “the logic is the same.” Thus, Chomsky expresses frustration with the current trend to dismiss out of hand all explanations of cognitive or epistemological operations that rely on theories of innateness. “That’s a very pernicious dualism,” he insists, “an extremely dangerous version of traditional dualism.”
Chomsky also disputes Kuhn’s notion that scientific knowledge is the product of community consensus and periodically changes in “paradigm shifts.” To Chomsky, there has been only one true scientific revolution: “the Galilean revolution, the seventeenth-century revolution stretching over a period including Galileo.” Even the so-called cognitive revolution of the mid-1950s, of which generative grammar was a major part, was only a recapitulation of changes that first occurred in the seventeenth century, according to Chomsky. What’s more, he argues, in many ways this second cognitive revolution was a regression from advances made during the time of Descartes. Thus, Chomsky is uncomfortable with talk of paradigm shifts. About his own so-called Chomskyan revolution, he says, “It seems to me like just normal progress.”
Chomsky also comments on a range of other issues relevant to composition scholarship. While he supports the feminist movement, he claims that there is nothing inherent in language that works to reproduce patriarchal ideology; he agrees, though, that actual language use tends to maintain structures of authority and domination. He believes without question that “there’s a big degree of illiteracy and functional illiteracy” in the nation and that the media, through their insistence on “concision,” help to foster illiteracy, impose conventional thinking, and block “searching inquiry and critical analysis.” Chomsky applauds Paulo Freire’s liberatory learning pedagogy and believes that “composition courses are perfectly appropriate places” for helping students develop “systems of intellectual self-defense” and “the capacity for inquiry.”
Throughout the interview, Chomsky has much to say about teaching. He feels that teaching is “mostly common sense” and contends that “ninety-nine percent of good teaching is getting people interested.” Paraphrasing nuclear physicist Victor Weisskopf’s teaching philosophy, Chomsky says, “It doesn’t matter what you cover; it matters how much you develop the capacity to discover.” However, he does believe that a “sensible prescriptivism ought to be part of any education.” That is, all students should master “standard English” even though “much of it is a violation of natural law.” Although “a good deal of what’s taught in the standard language is just a history of artificialities,” students should learn it nonetheless because it’s part of our “rich cultural heritage.” In keeping with his past statements denying the relevance of linguistics to other disciplines, he doubts that linguistics has anything to contribute to teaching reading and writing.
Chomsky’s views on ideology, propaganda, and indoctrination are also of interest to compositionists. He claims that intellectuals are “ideological managers,” complicit in controlling “the organized flow of information” because intellectuals are by definition those who have “passed through various gates and filters” in order to become “cultural managers.” In effect, “the whole educational system involves a good deal of filtering towards submissiveness and obedience.” By definition, those who are subversive or independent minded are not called intellectuals but “wackos.” In fact, Chomsky is quite critical of the distinction established between intellectuals—those in the universities—and non-intellectuals. Arguing that often non-intellectuals have a richer cultural life, he speaks disparagingly of the principal activity that sets academics apart from others: “From an intellectual point of view, a lot of scholarship is just very low-level clerical work.”
In examining the media’s role in indoctrination, Chomsky says that “the media’s institutional structure gives them the same kind of purpose that the educational system has: to turn people into submissive, atomized individuals who don’t interfere with the structures of power and authority.” Similarly, democratic governments use propaganda and “the manufacture of consent” in place of violence and force to control the masses. “Indoctrination is to democracy,” he philosophizes, “what a bludgeon is to totalitarianism.” This atomization of individuals, this breakdown of independent thought, and this general depoliticizing of society together create the perfect environment, in Chomsky’s view, for a charismatic, fascist dictator to seize power. “I think that’s one of the reasons why I’m very much in favor of corruption…. A corrupt leader is going to rob people but not cause that much trouble…. Power hunger is much more dangerous than money hunger,” he argues.
Chomsky sees no contradiction between his somewhat radical political views and his conservative, essentialist views on language. In fact, he insists on separating his two (as he calls them) full-time professional careers. He bristles at the criticism that he does not apply his expertise as a linguist to the very same inequities that he denounces as a social critic (exploring how language helps maintain power hierarchies, for example). Such questions, he claims, have no intellectual depth and are of “marginal human significance.” Infinitely more significant is helping Salvadoran peasants or attending a demonstration in Washington.
Still, Chomsky’s political progressivism and philosophical foundationalism seem oddly incongruous at first glance. He confidently and steadfastly champions an eighteenth-century, rationalist view of the world, while railing against state capitalism and private ownership of the means of production. On second glance, however, Chomsky’s world view is perhaps not so schizophrenic after all. Just as his essentialist philosophy of innateness and biological directiveness derives from eighteenth-century notions, especially Humboldt’s concept of “infinite use of finite means” (from which grew Chomsky’s generative grammar), so too does his political ideology derive directly, as he puts it, from “classical liberalism—as developed, for example, by Humboldt.” In the face of Marxists, poststructuralists, and social constructionists, Noam Chomsky remains unshaken—a devoted eighteenth-century rationalist.
Q. You have published an overwhelming number of works. Do you think of yourself as a writer?
A. No, I’ve never particularly thought of myself as a writer. In fact, most of what I’ve published is written-up versions of lectures. For example, Syntactic Structures, the first book that actually appeared, was essentially lecture notes for an undergraduate course at MIT, revised slightly to turn them into publishable form. I would say probably eighty or ninety percent of the work I do on political issues is sort of working out notes from talks. Much of the material that ends up as professional books is based on class lectures or lectures elsewhere, so I tend to think out loud.
Q. So you see yourself first as a speaker, a lecturer.
A. The fact is that most of the writing I do is probably letters. I spend about twenty hours a week, I guess, just answering letters. Many of them are responses to the hundreds of letters I receive that are thoughtful and interesting and raise important questions (here’s today’s batch). Hundreds go out every week, and that requires thought; some of them are rather long. Those are actually written without being spoken. Sometimes I do sit down and write a book, too, but most of the time I don’t think of myself as a writer particularly.
Q. You have had a few words to say about your writing process. In fact, you commented once, “I’m able to work in twenty-minute spurts. I can turn my attention from one topic to another without start-up time. I almost never work from an outline or follow a plan. The books simply grow by accretion.” Would you tell us more about your writing process?
A. The reason for the twenty-minute spurts—which is a bit of an exaggeration; maybe hour spurts would be more accurate—is just the nature of my life, which happens to be very intense. I have two full-time professional careers, each of them quite demanding, plus lots of other things. I just mentioned one—lots and lots of correspondence—and other things as well, and that doesn’t leave much time. In fact, my time tends to be very chopped up. I discovered over the years that probably my only talent is this odd talent that I seem to have that other colleagues don’t, and that is that I’ve got sort of buffers in the brain that allow me to shift back and forth from one project to the other and store one.
Q. So you can’t, when writing a book, for example, concentrate for ten hours at a time.
A. No, I know that a lot of people don’t seem to be able to do that, and it’s certainly an advantage to be able to do it. I can pick up after a long stretch and be more or less where I left off. In fact, I’ve sometimes had to. I have friends like this. I had, in particular, one friend who just died a couple of years ago who was an Israeli logician and who’d been an old friend since I was twenty or so. We would meet every five or six years and usually pick up the conversation we had been having as if we had just had it five minutes ago and go on from there. As far as my books just sort of writing themselves, that’s pretty much what happens. I don’t recall ever having sat down and planned a book—except maybe for saying, “Well, I’m going to talk about X, Y, and Z, and I’ll have Chapter One on X, Chapter Two on Y, and Chapter Three on Z.” Then it’s just a matter of getting the first paragraph, and it just goes on from there.
Q. That’s quite a talent.
A. Well, it’s probably because I’ve thought about most of it before, or lectured on it before, or written a letter to someone about it, or done it twenty times in the past. Then it becomes mainly a problem of trying to fit it all in. I have discovered, if it’s of any interest to you, that I write somewhat differently now that I have a computer—quite a bit differently. I don’t know if it shows up any different, but I know I write differently. I was very resistant to the computer. I didn’t want to use it, and finally the head of the department just stuck it in my room. My teenage son who was—like every teenager, I guess—a super hacker carried me gently through the early stages, which I never would have had the patience to do. Once I was able to use the computer, I discovered that there were a lot of things that I could do that I’d never done before. For example, I’d never done much editing, simply because it was too much trouble; I didn’t want to retype everything. And I never did much in the way of inserting and rearranging and so on. Now I do a fair amount of that because it’s so easy. Whether that shows up differently for the reader, I don’t know. But I know I’m writing quite differently.
Q. As someone who is profoundly interested in the structure of language as well as the use and abuse of rhetoric in political contexts, you must have some thoughts about the nature of rhetoric. For you, what are the most important elements of rhetoric?
A. I don’t have any theory of rhetoric, but what I have in the back of my mind is that one should not try to persuade; rather, you should try to lay out the territory as best you can so that other people can use their own intellectual powers to work out for themselves what they think is right or wrong. For example, I try, particularly in political writing, to make it extremely clear in advance exactly where I stand. In my view, the idea of neutral objectivity is largely fraudulent. It’s not that I take the realistic view with regard to fact, but the fact is that everyone approaches complex and controversial questions—especially those of human significance—with an ax to grind, and I like that ax to be apparent right up front so that people can compensate for it. But to the extent that I can monitor my own rhetorical activities, which is probably not a lot, I try to refrain from efforts to bring people to reach my conclusions.
Q. Is that because you might lose credibility or lose the audience?
A. Not at all. In fact, you’d probably lose the audience by not doing it. It’s just kind of an authoritarian practice one should keep away from. The same is true for teaching. It seems to me that the best teacher would be the one who allows students to find their way through complex material as you lay out the terrain. Of course, you can’t avoid guiding because you’re doing it a particular way and not some other way. But it seems to me that a cautionary flag should go up if you’re doing it too much because the purpose is to enable students to be able to figure out things for themselves, not to know this thing or to understand that thing but to understand the next thing that’s going to come along; that means you’ve got to develop the skills to be able to critically analyze and inquire and be creative. This doesn’t come from persuasion or forcing things on people. There’s sort of a classical version of this—that teaching is not a matter of pouring water into a vessel but of helping a flower to grow in its own way—and I think that’s right. It seems to me that that’s the model we ought to approach as best possible. So I think the best rhetoric is the least rhetoric.
Q. In his critique of Western metaphysics, Jacques Derrida exposed the indeterminacy of language, showing how meaning is never fixed, always fluid, never certain. What are your thoughts on this issue?
A. I don’t know this literature very well, and to tell you the truth, the reason I don’t know it is that I don’t find it interesting. I try to read it now and then but just don’t find it very interesting. People have come at the question of indeterminacy from many points of view, and I think there’s an element of truth to it, but there’s also a respect in which it’s not true. These are questions of fact, not of ideology; therefore, there’s no grounds for dogmatism concerning them, and they’re not a matter of pronouncements but of discovery. To the extent that we understand things about language, the facts point rather clearly to a specific conclusion which is halfway like that, but only halfway. What we find is that there is a highly determinate, very definite structure of concepts and of meaning that is intrinsic to our nature and that as we acquire language or other cognitive systems these things just kind of grow in our minds, the same way we grow arms and legs. To that extent, meaning is determinate. However, there’s a sense in which it’s not fully determinate, and that is the way we use these conceptual and, in particular, these rich semantic structures in our interactions with one another and our interactions with the world. In that domain, there’s a high degree of interest-relativity, intrusion of value, relativity to purposes and intentions, modifiability often in a somewhat rather creative fashion, and so on. At that level it’s true that elements of fluidity and indeterminacy do enter; however, they have their own structure. It’s just that we don’t understand very much about it. So I think there’s an element of truth to that, but it can be carried much too far.
In the philosophical literature—those parts of it that I feel more comfortable with and where I think I understand what people are talking about—similar ideas arise in the study of what’s called “meaning holism.” Take Hilary Putnam as an example, someone who has extended views originally due to Quine towards a general theory of semantics which would express a viewpoint related to this—namely, that the meaning of a word is never determinate (it’s certainly not something in the mind), and if it’s not determinate then it depends on the place of the concept within the whole intellectual structure, and it can change, your beliefs change, the meanings change, and so forth; that is, the intentions change, the meaning may be modified, and so on. Well, I think that this thesis is half true. In one respect, there is a fixed structure of meaning and it’s an interesting one, a very intriguing one. In fact, contrary to what is believed by many people—for example, Richard Rorty—there are strong empirical grounds for believing that there is quite a sharp analytic/synthetic distinction that derives from intrinsic semantic structures and is just a reflection of the fact that there are probably biologically determined and quite rich and intriguing semantic structures that are basically fixed. But there’s a sense in which meaning holism is correct; that is, what we describe as meaning in common-sense discourse, and in philosophical discourse, is never fixed entirely by the structures that are present in the mind, and we’ve gotten that way because that’s the kind of creature we are. So in that sense there’s some truth to meaning holism.
Q. So you probably wouldn’t agree with Bakhtin. Are you familiar with his work?
A. No, I’m not.
Q. His ideas sound very similar to this concept of meaning holism.
A. Yes, but that’s the standard view. That’s the view of Derrida to the extent that I understand him, but also of a large sector of analytic philosophy and, again, Richard Rorty. Donald Davidson, for example, whom Rorty quotes, argued—actually, I should say “asserted”—that Quine’s demolition of the analytic/synthetic distinction, his demonstration that this distinction doesn’t hold, created the modern philosophy of language as a serious discipline. Well, the analytic/synthetic question is a technical one, but the point is the same. If there were determinate meanings, there would be an analytic/synthetic distinction. So the domain in which this issue is fought out in philosophical terrain is over the analytic/synthetic issue, but the real question is whether there are fixed, determinate meanings. Does the word house have a determinate meaning or can it vary arbitrarily depending on the way our belief systems vary? I think the answer is right in between. There’s a fixed and quite rich structure of understanding associated with the concept “house” and that’s going to be cross-linguistic and it’s going to arise independently of any evidence because it’s just part of our nature. But there’s also going to be a lot of variety in how we use that term in particular circumstances, or against the background of particular kinds of theoretical understanding, and so on.
Q. Some thinkers draw on Rorty’s work to posit that knowledge itself is a socially constructed artifact. That is, knowledge is not absolute; rather, it is the product of consensus within any given discourse community. This concept is related to Kuhn’s notion of how knowledge is formed within the scientific community. What are your thoughts about this theory?
A. There is an element of truth to it, obviously. There is no doubt that the pursuit of knowledge is often, not always, but is often—in fact, typically—a kind of communal activity. In particular, that’s true of organized knowledge, say research in the natural sciences, say what we do in this corridor; that’s obviously a social activity. For example, a graduate student will come in and inform me I was wrong about what I said in a lecture yesterday for this or that reason, and we’ll discuss it, and we’ll agree or disagree, and maybe another set of problems will come out. Well, that’s normal inquiry, and whatever results is some form of knowledge or understanding; obviously, that’s socially determined by the nature of these interactions. On the other hand, most domains we don’t understand much about—like how scientific knowledge develops, something we basically understand nothing about—but if we look more deeply at the domains where we do understand something, we discover that the development of cognitive systems, including systems of knowledge in particular, is substantially directed by our biological nature. In the case of knowledge of language, we have the clearest evidence about this. Part of my own personal interest in the study of language is that it’s a domain in which these questions can be studied much more clearly, much more easily than in many others. Also, it’s one intrinsic to human nature and human functions, so it’s not a marginal case. There, I think, we have very powerful evidence of the directive effect of biological nature on the form of the system of knowledge that arises.
In other domains like, for example, the internalization of our moral code, or our style of dress, we just know less. But I think the qualitative nature of the problem faced strongly suggests a very similar conclusion: a highly directive effect of biological nature. When you turn to scientific inquiry, again, so little is known that everything that one says is virtually pure speculation. But I think the qualitative nature of the process of acquiring scientific knowledge again suggests a highly directive effect of biological nature. The reasoning behind this is basically Plato’s, which I think is quite valid. That’s why it’s sometimes called “Plato’s problem.” The reasoning in the Platonic dialogues, which is valid if not decisive, is that the richness and specificity and commonality of the knowledge we attain is far beyond anything that can be accounted for by the experience available, which includes interpersonal interactions. And, besides being acts of God, that leaves only the possibility that it’s inner-determined. That’s the same logic that’s constantly used by every natural scientist studying organic systems. So, for example, when we study, metaphorically speaking, physical growth below the neck, everything but the mind, we just take this reasoning for granted. For example, let’s say I were to suggest to you that undergoing puberty is a matter of social interaction and people do it because they see other people do it, that it’s peer pressure. Well, you laugh, just as you’re laughing now. Why do you laugh? Everyone assumes that it’s biologically determined, that you’re somehow programmed to undergo puberty at a certain point. Is it that something is known about that biological program? Is that why you laugh? No, nothing’s known about it. In fact, we know a lot more about the acquisition of meaning and the fixed factors in that than we do about the factors that determine puberty. Is it that social factors are irrelevant to puberty? No, not at all.
Social interaction is certainly going to be relevant. Under certain conditions of social isolation, it might not even take place. Why do people laugh? That’s the question.
Q. What about knowledge in a particular field, say linguistics? You came along with Syntactic Structures and changed the way we think of linguistics. If your colleagues and followers had not accepted and then helped champion that cause, you would simply be a kook out in the wilderness with some crazy idea. But what happened is that a large part of your discourse community accepted the ideas and worked with them and perhaps refined them, and that became the “knowledge” of the time. Well, perhaps in the future there will be some revolution within the field that turns it completely in another direction; your discourse community will have constructed new “knowledge.”
A. That has happened several times in the last thirty years, but that’s a totally different question. In fields that have a rational nature, where the conditions of rational inquiry are observed and there’s a sort of a common understanding of what it means to move towards truth (or at least a better grasp of truth), and where there’s a sort of common and rational understanding of the nature of argument and evidence—and I think those things are essentially fixed—in such fields, there’s a course of development. It’s not perfect; all sorts of erratic things happen. Sure, changes take place and some things are accepted while others are not accepted, sometimes rightly, sometimes wrongly, and there are ways of correcting error. But I don’t understand what that has to do with the social determination of knowledge. That’s a matter of how, through social interaction, each person contributing tries to advance a common enterprise. Now this is somewhat idealized because there are all sorts of personal conflicts and somebody’s trying to undercut someone else, but let’s abstract away from that; let’s abstract away from the vile nature of human beings and talk about it as if we’re living up to the ideals that at least theoretically we hold. To that extent there’s a common enterprise, and understanding will grow as people participate in this common enterprise. And it will change, and sometimes change radically.
Q. Has your colleague down the hall, Thomas Kuhn, ever discussed the Chomskyan revolution in terms of a “paradigm shift”?
A. He hasn’t, but other people have; I don’t. My own view is that while there have been several significant changes (Tom and I kind of differ on this), there’s been basically one scientific revolution: the Galilean revolution, the seventeenth-century revolution stretching over a period including Galileo. That was a real revolution, a different way of looking at things in many respects. For example, there was a very sharp shift at that point from a kind of natural history perspective to a natural science perspective. A different attitude toward fact developed, a different attitude toward idealization, a different concept of explanation. There was a complete breakdown, especially with Newton, of the common sense notion of mechanical explanation which led in new directions. Put all these things together and I think that’s a radical shift in perspective. Now there are very few fields of human endeavor where that shift of perspective has taken place. In the study of language, I think that shift did take place to an extent in the 1950s. You could call that a “paradigm shift” if you want to use the term, but it seems to me to be adapting the methods of the natural sciences to another domain; in that respect, it’s not really a dramatic shift.
Furthermore, even if you look at the basic intellectual developments and changes in points of view associated with what’s called “the cognitive revolution” in the mid-1950s—of which the development of generative grammar was a part and, in fact, a major contributing part—I think they’re quite real; but in a number of respects, rather critical respects, they recapitulate and revise changes that took place during what I prefer to call “the first cognitive revolution,” namely in the seventeenth century. For example, a major shift in the 1950s was a shift of perspective away from concern for behavior and the products of behavior towards the inner processes that determine behavior and determine the processes of behavior. Now that’s a shift towards the natural sciences because the inner processes are real. They’re part of psychology, part of biology. So that’s a shift towards the natural sciences, away from behavior towards inner mechanisms and inner processes that underlie behavior. It’s also a shift towards explanation rather than description. Now that’s a big shift. But a shift like that took place in what we might call the “Cartesian revolution” in the cognitive sciences. Associated with this was a revival—it wasn’t a new interest—of interest in what are sometimes called computational models of the mind, that is, theories of rules and representations, roughly. Now that’s part of the same thing because the inner mechanisms and inner processes appear to be computational systems, mentally represented and, in some unknown manner, physically instantiated. But that again is highly reminiscent of something that took place in the seventeenth century—in particular, Descartes’ theory of vision, which was a crucial breakthrough and developed a kind of a representational, computational theory of mind. It was a major shift.
Another change that took place in the 1950s, part of the cognitive revolution, had to do with things like, say, the Turing test for general intelligence. But that’s just a watered-down version of a much richer and more interesting seventeenth-century notion: the Cartesian tests for the existence of other minds, which crucially used aspects of linguistic performance, the fact that normal human linguistic behavior has what I sometimes call—they didn’t call it this—a creative aspect, meaning it’s appropriate to situations but not caused by situations (which is a fundamental difference); it’s innovative, unbounded, and not determined by internal stimuli or external causes; it’s coherent, whatever that means (we recognize that but we can’t characterize it); it evokes thoughts in others that they may express themselves, and so on. There’s a collection of properties and one can turn those properties into an experimental program, as in fact was suggested in the seventeenth century, to determine whether another organism has a mind like yours. Now in that context there’s real scientific inquiry being carried out in which one tries to determine whether a machine, let’s say, is a person with a mind. That’s a real scientific question embedded in that rich framework of scientific inquiry dealing with real questions, noting crucial facts about human beings, which, in fact, are true facts. That all makes a lot of sense. In contrast, the twentieth-century version of this, sometimes called the Turing test, is almost totally pointless. It’s just an operational test to determine whether, say, a computer program manifests intelligence, and like most operational tests it doesn’t matter how it comes out because operational tests are of no interest or significance except in some theoretical context. The reason I mention that is to indicate that in this respect the second cognitive revolution was a regression, in my view, from the first cognitive revolution.
Another question has to do with the body/mind relation. In the seventeenth century, in the Cartesian system, the body/mind relation was absolutely central. Descartes and the Cartesians had a plausible, though we now know an incorrect, argument for the existence of mind. The argument basically was that they had a conception of body based on a kind of intuitive mechanics, a sort of contact mechanics—you know, things pushing and pulling each other. Our normal intuitive, common-sense notion of mechanics was what they meant by body. They argued correctly that that concept had certain limits, and they therefore postulated a second substance, a thinking substance, to deal with things that plainly go beyond those limits, like the creative aspects of language use. Well, then a body/mind problem arises. That’s a real problem, but it didn’t survive Newton because Newton blew the theory of mechanics out of the water. The concept of body disappeared, and, since then, there is no concept of body and no classical body/mind problem—at least there shouldn’t be, in my view. In the new version, what we really just have is different levels of understanding and they’re all natural and we try to relate them as much as we can. In the twentieth-century cognitive revolution, something like the body/mind problem reemerged but in a pernicious way, a way that’s again a regression from the earlier version. The earlier version was a metaphysical problem, hence a problem of reality, and a serious one. The modern version is a kind of an epistemological dualism; that is, questions of mind are just studied differently than questions of body. The example I just mentioned is one. In the case of studying puberty, we allow our conception of rational inquiry to guide us, and it guides us right to the study of innate structure. In the case of the study of, say, meaning, people don’t follow the same line of inquiry, though they should because the logic is the same. 
That’s one of numerous examples showing that the way we study the traditional phenomena of mind departs from the way we study other aspects of physical reality. That’s a very pernicious dualism, an extremely dangerous version of traditional dualism which ought to be abandoned. So that’s another respect in which I think there’s regression from the first cognitive revolution.
The point I’m trying to make is that there was a very substantial change in general psychology, including linguistics, in the mid-1950s and in some ways it was a regression. There are some ways in which it was real progress. The traditional view about language, which is correct, is that, as Humboldt put it, language makes “infinite use of finite means,” and that’s correct. But nobody knew what to make of that notion because they had no concept of infinite use of finite means. By the mid-twentieth century, we had a concept of what that means. It came out of mathematics, really. Out of parts of mathematics and logic there came a sharp understanding of the notion, infinite use of finite means, and it was therefore possible to apply that to the traditional questions. That led to a huge move forward in understanding; in fact, that’s generative grammar. It’s looking at a lot of the classical questions in the light of the modern understanding of what it means to make infinite use of finite means. That confluence did make possible a substantial change. If one wants to call this a revolution, okay; if not, okay; I don’t. It seems to me like just normal progress when new understanding arises and you can apply it to old problems.
Q. You’re talking about biological directiveness, and in your work over the last three decades you have emphasized that there is this strong element of innateness in language. What about writing, which is a learned phenomenon—something, unlike language, that not every healthy human has? Would you pursue this same line in talking about written language?
A. I’m sure if we look at written language we’re going to find the conditions of Plato’s problem arising once again. Namely, we just know too much. The basic problem that you always face when you look at human competence, or for that matter at any biological system, is that the state it has attained is so rich and specific that you cannot account for it on the basis of interactions, such as learning, for example. That’s something that’s found almost universally. The case of puberty that I gave you is only one example, but it’s true from the level of the cell on up. When you look at any form of human activity, whether it’s speech or moral judgment or ability to read, I think you’ll find exactly the same thing. When you understand the actual phenomenon, what you discover typically is that there’s some kind of triggering effect from the outside—often what we call “teaching” or “learning”—that sets in motion inner directive processes. That’s how you can gain such rich competence on the basis of such limited experience. It’s not unlike the fact that when a child eats, it grows. The food makes it grow, if you like, but it’s not the food that’s determining the way it grows; the way it grows is determined by its inner nature. It won’t do it without food; if you keep the food away, the child won’t grow. But when you give the child the food, it’s going to grow into what it’s going to be, a human and not a bird, and the reason for that is the inner nature. That’s basically Plato’s argument.
Q. Many feminists have argued that because language controls thought and because ours is a male-inscribed, male-dominated language, language works to reproduce patriarchal ideology and thus the oppression of women. Do you agree with these assumptions and the conclusion?
A. I understand the point, but I wouldn’t call it a property of language. There are many properties of language use which reflect structures of authority and domination in the society in which this language is used, and that’s true. However, I don’t think there’s anything in the language that requires that. You could use the same language without those aspects of use in it. For example, there are ways of using language which are deeply racist, but the very same language can be used without the need to be racist.
Q. But given how language is actually used…
A. Well, given language use, it’s undoubtedly correct, and it’s true of all sorts of systems of authority and domination, one being the gender issue.
Q. Here’s one brief example: some feminists have argued that the term motherhood is something like a semantic universal and that that oppresses women. Do you see any justification for that argument?
A. Well, you have to ask what you mean by “semantic universal.” First of all, there’s the question of whether it’s true, but let’s say for the sake of argument that every language known has a concept like “motherhood,” and let’s say that every one of those languages and every one of those concepts has something that oppresses women in it. Suppose, for the sake of argument, that this were discovered to be true. We still would not have finished because it may simply be that every culture you sample is a culture that oppresses women. That doesn’t yet show that it’s inherent in our nature that women be oppressed. That just shows that the cultures that exist oppress women. And therefore it’ll turn out that in every language that’s developed in those cultures there will be a concept which reflects this relation of authority and control. But that doesn’t tell you it’s a semantic universal. In fact, there’s ambiguity in the notion “semantic universal” which ought to be clarified. Some things are semantic universals in the sense that you find them in every language. Other things are semantic universals in the sense that they’re part of our nature and therefore must be in every language. That’s a fundamental difference. For example, it’s a fact that every human society we know—I suppose this is probably close to true if not totally true—places women in a subordinate role in some fashion. But it doesn’t follow from that that it’s part of our nature. That just shows that it’s part of the society. If that were true, it would be a “weak universal.” That is, it would be a descriptive universal but not a deep universal, something that’s necessarily true. Now, there are things that are necessarily true. For example, there are properties of our language which are just as much part of our nature as the fact that we have arms and not wings. But just sampling the language of the world is not enough to establish it.
Q. In a recent article in Mother Jones, one of your former students was quoted as saying, “Chomsky thinks he’s a feminist, but—at heart—he’s an old fashioned patriarch…. He just has never really understood what the feminist movement is about.” Do you support the goals and aspirations of the feminist movement?
A. I don’t think there’s such a thing as the aspirations and goals of the feminist movement, and I don’t think there’s such a thing as the feminist movement. There are many aspirations and goals of the feminist movement—or the feminist movements, I should say—which I think are timely and proper and important and have had an enormous effect in liberating consciousness and thought and making people aware of forms of oppression that they had internalized and not noticed. I think that’s all for the good. In fact, my own view, and I’ve said this many times, is that of all the movements that developed in what’s called the sixties—which really is not the sixties, because the feminist movement is basically later, but what is metaphorically called the sixties—the one that’s had the most profound influence and impact is probably the feminist movement, and I think it’s very important. As to the student’s comment, that could very well be correct, but I’m not the person to judge.
Q. For the last few years, the media and the political establishment have asserted that the U.S. is experiencing a literacy crisis. Do you agree?
A. Sure. It’s just a fact. I don’t think it’s even questioned. There’s a big degree of illiteracy and functional illiteracy. It’s remarkably high. What’s more, the interest in reading is declining, or it certainly looks as if it’s declining. People do seem to read less and to want to read less and be able to read less. I know of colleagues, for example, academic people whose world is reading, who won’t subscribe to some journals that they are sympathetic to and find important because the articles are too long. They want things to be short. That just boggles my mind. In fact, let me report to you a personal case. I once had an interview at a radio station in which the interviewer was interested in why I don’t appear on MacNeil/Lehrer, Nightline, and that sort of program. He began the interview by playing a short tape of an earlier interview he’d had with a producer of Nightline. The interviewer asked him this question: “It’s been claimed that the people on your program are all biased in one direction and that you cut out critical, dissident thought. How come, for example, you never have Chomsky on your program?” The producer first went into sort of a tantrum, saying I was from Neptune, and “wacko” and so on; but after he’d calmed down he said something which, in fact, has an element of truth to it: “Chomsky lacks concision.” Concision means you have to be able to say things between two commercials. Now that’s a structural property of our media—a very important structural property which imposes conformism in a very deep way, because if you have to meet the condition of concision, you can only either repeat conventional platitudes or else sound like you are from Neptune. That is, if you say anything that’s not conventional, it’s going to sound very strange. For example, if I get up on television and say, “The Soviet invasion of Afghanistan is a horror,” that meets the condition of concision.
I don’t have to back it up with any evidence; everyone believes it already so therefore it’s straightforward and now comes the commercial. Suppose I get up in the same two minutes and say, “The U.S. invasion of South Vietnam is a horror.” Well, people are very surprised. They never knew there was a U.S. invasion of South Vietnam, so how could it be a horror? They heard of something called the U.S. “defense” of South Vietnam, and maybe that it was wrong, but they never heard anybody talk about the U.S. “invasion” of South Vietnam. So, therefore, they have a right to ask what I’m talking about. When I try to sneak something like this into an article, copy editors will ask me what I mean. They’ll say, “I don’t remember any such event.” They have a right to ask what I mean. This structural requirement of concision that’s imposed by our media disallows the possibility of explanation; in fact, that’s its propaganda function. It means that you can repeat conventional platitudes, but you can’t say anything out of the ordinary without sounding as if you’re from Neptune, a wacko, because to explain what you meant—and people have a right to ask if it’s an unconventional thought—would take a little bit of time. Here in the United States, to my knowledge, it’s quite different from virtually every other society, maybe with the exception of Japan, which is more or less in our model. But at least in my experience, when you appear on radio and television in Europe and the Third World—first of all you can appear on radio and television if you have dissident opinions, which is virtually impossible here—you have enough time to explain what you mean. You don’t have to have three sentences between two commercials, and if it takes a few minutes to explain or, more often, an hour, you have that time. Here, our media are constructed so you don’t have time; you have to meet the condition of concision.
And whether anybody in the public relations industry thought this up or not, the fact is that it’s highly functional to impose thought control. Pretty much the same is true in writing, like when you’ve got to say something in seven-hundred words. That’s another way of imposing the condition of conventional thinking and of blocking searching inquiry and critical analysis. I think one effect of this is a kind of illiteracy.
Q. Speaking of critical analysis and literacy, Paulo Freire and others argue that writing, because it can lead to “critical consciousness,” is an avenue to social and political empowerment of the disenfranchised. Do you agree?
A. Absolutely. In fact, writing is an indispensable method for interpersonal communication in a complicated society. Not in a hunter-gatherer tribe of fifteen people; then you can all talk to one another. But in a world that’s more complicated than that, intellectual progress and cultural progress and moral progress for that matter require forms of interaction and communicative interchange that go well beyond that of speaking situations. So, sure, people who can participate in that have ways of enriching their own thought, of enlightening others, of entering into constructive discourse with others which they all gain by. That’s a form of empowerment. It’s not the case if a teacher tells the kid, “Write five-hundred words saying this.” That’s just a form of reducing; that’s a form of de-education, not education.
Q. There’s a movement within composition studies to make a kind of critical/ cultural studies based on a Freirean model the subject matter for the first year English course. Do you think that’s a good idea?
A. Doing things that will stimulate critical analysis, self-analysis, and analysis of culture and society is very crucial. In fact, it seems to me that part of the core of all education ought to be the development of systems of intellectual self-defense and also stimulation of the capacity for inquiry, which means also collective inquiry. And this is one of the domains in which it can be done. It is done, say, in the natural sciences, but localized in those problems. It ought to be done in a way so that people understand that this is a general need and a general capacity; English composition courses are perfectly appropriate places for that.
Q. In 1973 you had an extended discussion on Dutch television with Michel Foucault, one of the most important of the French poststructuralist philosophers. In a subsequent interview, you said that you and Foucault found some areas of agreement, but you commented that he was much more skeptical than you were about the possibility of developing a concept of human nature that is independent of social and historical conditions. How would you ground a concept of human nature beyond human capacity to acquire language?
A. I would study it the same way. I would apply the logic of Plato’s problem. Take any domain—the domain of moral judgment, let’s say. I don’t think we’re in a position to study it yet, but the way you would study it is clear. You’d take people and ask what is the nature of the system of moral judgment that they have. We certainly have such systems. We make moral judgments all the time, and we make them in coherent ways and with a high degree of consistency; we make them in new cases that we’ve not faced before. So we have some sort of a theory, or a system, or a structure that underlies probably an unbounded range of moral judgments. That’s a system that can be discovered; you can find out what it is. We can then ask questions about the extent to which different systems that arise in different places are different and the extent to which they’re the same. We can ask the harder, deeper question: “What was the nature of the external input, the external stimulation or evidence on the basis of which the system of moral judgment arose?” To the extent that you can answer that, you can determine what the inner nature was from which it began. The logic is exactly like the problem of why children undergo puberty. You first find out what happens to them at that age; you ask what factors, what external events took place; and then you’d say what must have been the internal directive capacity that led to this phenomenon given those external events. That’s a question of science, a hard question of science. In these domains it’s usually not hard because you usually find that the external events are so impoverished and so unstructured and so brief, in fact, that they couldn’t have had much of an effect. So qualitatively speaking, most of it is going to be internal. That’s a way of finding out our entire moral nature.
You can also study other things, like moral argument, for example. Take a real case; take, say, the debate about slavery. A lot of the debate about slavery took place, or as we reconstruct it could have taken place, on shared moral grounds. In fact, one can understand the slave owner’s arguments on our moral grounds, and one can even see that those arguments are not insignificant. Take one case just to illustrate. Suppose I’m a slave owner, and you’re opposed to slavery, and I give you the following argument for slavery: “Suppose you rent a car and I buy a car. Who’s going to take better care of it? Well, the answer is that I’m going to take better care of it because I have a capital investment in it. You’re not going to take care of it at all. If you hear a rattle, you’re just going to give it back to Hertz and let somebody else worry about it. If I hear a rattle, I’m going to take it to the garage because I don’t want to get in trouble later on. In general, I’m going to take better care of the car I own than you’re going to take of the car you rent. Suppose I own a person and you rent a person. Who’s going to take better care of that person? Well, parity of argument, I’m going to take better care of that person than you are. Consequently, it follows that slavery is much more moral than capitalism. Slavery is a system in which you own people and therefore you take care of them. Capitalism, which has a free labor market, is a system in which you rent people. If you own capital, you rent people and then you don’t care about them at all. You use them up, throw them away, get new people. So the free market in labor is totally immoral, whereas slavery is quite moral.” Now that’s a moral argument, and we can understand it. We may decide that it’s grotesque. In fact, we will decide that it’s grotesque, but we have to ask ourselves why.
It’s not that we lack a shared moral ground with the slave owner; we have a shared moral ground, and we would then want to argue that ownership of a person is such an infringement on the person’s fundamental human rights that the question of better or worse doesn’t even arise. That’s already a complex argument, but it’s an argument based on shared moral understanding. Now where’s that shared moral understanding coming from? I have a strong suspicion that if we understood the nature of the problem better we might discover that that shared moral understanding comes from our inner nature. Let’s return to the feminist question. The respect in which the feminists are exactly right, I think, is that when they bring forth and make you face the facts of domination, you see that such domination is wrong. Why do you see that it’s wrong? Well, because something about your understanding of human beings and their rights is being brought out and made public. You didn’t see it before but that’s because you’re now exploring your own moral nature and finding something there that you didn’t notice before. To the extent that there’s any progress in human history—and there’s some, after all—it seems to me that it’s partly a matter of exploring your own moral nature and discovering things that we didn’t recognize before. It wasn’t very far back when slavery was considered moral, in fact, even obligatory. Now it’s considered grotesque. I think there are social and historical reasons for that—like the rise of industrial capitalism, and so on—but that’s not the whole story. That may be something that stimulated something internal, but what it stimulated was a deeper understanding of our own moral nature. It seems to me that these are various ways in which one might hope to discover the innate basis of moral judgment.
But I think anywhere you look, if there’s any system that’s even complex enough to deserve being studied, you’re going to get roughly the same result and basically for Plato’s reasons.
Q. In Asian societies, especially Chinese society, there’s a strong patriarchal assumption. While in Singapore, one of us had this very debate on innate human moral authority, and they said, “No, the innate human moral authority is that men should be superior to women.” So there’s a strong cultural impasse that we seem to bring out. Do you have any insights on that? Is it that we’re more advanced than Asians or Chinese society?
A. Well, I think we are. For example, I admit that this is a value judgment and I can’t prove it, but I would suspect that there’s going to be an evolution (assuming that the human race doesn’t self-destruct, which it’s likely to do) from rigid patriarchal societies to more egalitarian societies and not the other way around. I would suspect an asymmetry in development because, as circumstances allow, people do become more capable of exploring their own moral nature. Now “circumstances allow” means that the conditions of freedom generally expand, either partly for economic reasons or partly for other cultural reasons. As there’s an expansion of the capacity to inquire into our own cultural practices instead of just accepting them rigidly, the assumptions about the need for domination or the justice of domination are challenged and typically overthrown—like peeling away layers of an onion. If that’s correct, then yes, for cultural reasons, the move away from patriarchy is a step upwards, not just a change. It’s a step toward understanding our true nature.
Q. You have suggested that “intellectuals are the most indoctrinated part of the population … the ones most susceptible to propaganda.” You have explained that the educated classes are “ideological managers,” complicit in “controlling all the organized flow of information.” How and why is this so? What can be done to change this situation?
A. Well, there’s something almost tautological about that; that is, the people we call intellectuals are those who have passed through various gates and filters and have made it into positions in which they can serve as cultural managers. There are plenty of other people just as smart, smarter, more independent, more thoughtful, who didn’t pass through those gates and we just don’t call them intellectuals. In fact, this is a process that starts in elementary school. Let’s be concrete about it. You and I went to good graduate schools and teach in fancy universities, and the reason we did this is because we’re obedient. That is, you and I, and typically people like us, got to the positions we’re in because from childhood we were willing to follow orders. If the teacher in third grade told us to do some stupid thing, we didn’t say, “Look, that’s ridiculous. I’m not going to do it.” We did it because we wanted to get on to fourth grade. We came from the kind of background where we’d say, “Look, do it, forget about it, so the teacher’s a fool, do it, you’ll get ahead, don’t worry about it.” That goes on all through school, and it goes on through your professional career. You’re told in graduate school, “Look, don’t work on that; it’s a wrong idea. Why not work on this? You’ll get ahead.” However it’s put, and there are subtle ways of putting it, you allow yourself to be shaped by the system of authority that exists out there and is trying to shape you. Well, some people do this. They’re submissive and obedient, and they accept it and make it through; they end up being people in the high places—economic managers, cultural managers, political managers. There are other people who were in your class and in my class who didn’t do it. 
When the teacher told them in the third grade to do x, they said, “That’s stupid, and I’m not going to do it.” Those are people who are more independent minded, for example, and there’s a name for them: they’re called “behavior problems.” You’ve got to deal with them somehow, so you send them to a shrink, or you put them in a special program, or maybe you just kick them out and they end up selling drugs or something. In fact, the whole educational system involves a good deal of filtering of this sort, and it’s a kind of filtering towards submissiveness and obedience.
This goes on through professional careers, as well. You’re a journalist, let’s say, and you want to write a story that’s going to expose people in high places, and somebody else is going to write a story that serves the needs of people in high places; you know which one is going to end up being the bureau chief. That’s the way it works. So in a way there’s something almost tautological about your question. Sure, the people who make it into positions in which they’re respected and recognized as intellectuals are the people who are not subversive of structures of power. They’re the people who in one way or another serve those structures, or at least are neutral with respect to them. The ones who would be more subversive aren’t called intellectuals; they’re called wackos, or crazies, or “wild men in the wings,” as McGeorge Bundy put it when he said, “There are people who understand that we have to be in Indochina and just differ on the tactics, and then there are the wild men in the wings who think there’s something wrong with carrying out aggression against another country.” (He said that in Foreign Affairs—a mainstream journal.) But that’s the idea. There are wild men in the wings who don’t accept authority, and they remain wild men in the wings and not intellectuals, not respected intellectuals. Of course, this isn’t one-hundred percent. These are tendencies, actually very strong tendencies, and they’re reinforced by other strong tendencies.
Another strong tendency has to do with the role of intellectuals. Why are you and I called intellectuals but some guy working in an automobile plant isn’t an intellectual? I don’t think it’s necessarily because we read more or go to better concerts or anything like that. Maybe he does; in fact, I’ve known such cases. I grew up in such an environment. I grew up in an environment where my aunts and uncles were New York Jewish working class, and this was still the 1930s when there was a rich working-class culture. Lots of them had barely gone to school. I had one uncle who never got past fourth grade and an aunt who never graduated from school. But that was the richest intellectual environment I’ve ever seen. And I mean high culture, not comic book culture: Freud, Steckel, the Budapest String Quartet, and debates about anything you can imagine. But those people were never called intellectuals. They were called “unemployed workers” or something like that. Now why are they not intellectuals whereas a lot of people in the universities who are basically doing clerical work (from an intellectual point of view, a lot of scholarship is just very low-level clerical work) are respected intellectuals? First of all, it’s a matter of subordination and power, and secondly it’s a matter of which role you choose for yourself. The ones we call intellectuals, especially the public intellectuals—you know, the ones who make a splash or who are called upon to be the experts—are people who have chosen for themselves the role of manager. In earlier societies they would have been priests; in our societies they form a kind of secular priesthood.
In fact, in the nineteenth and twentieth centuries, intellectuals have rather typically taken one or another of two very similar paths. One is basically the Marxist/Leninist path, and that’s very appealing for intellectuals because it provides them with the moral authority to control people. The essence of Marxism/Leninism is that there’s a vanguard role and that’s played by the radical intellectuals who whip the stupid masses forward into a future they’re too dumb to understand for themselves. That’s a very appealing idea for intellectuals. There’s even a method: you achieve this position on the backs of people who are carrying out a popular struggle. So there’s a popular struggle, you identify yourself as a leader, you take power, and then you lead the stupid masses forward. That basically captures the essence of Marxism/Leninism—a tremendous appeal to the intellectuals for obvious reasons, and that’s why that’s one major direction in which they’ve gone all over the world. There’s another direction which is not all that different: a recognition that there’s not going to be any popular revolution; there’s a given system of power that’s more or less going to stay, I’m going to serve it, I’m going to be the expert who helps the people with real power achieve their ends. That’s the Henry Kissinger phenomenon or the state capitalist intellectual. Well, that’s another role for the intellectuals. Actually, Kissinger put it rather nicely in one of his academic essays. He described an expert as “a person who knows how to articulate the consensus of his constituency.” He didn’t add the next point: “Your constituency is people with power.” But that’s tacit. Knowing how to articulate the consensus of unemployed workers or the homeless doesn’t make you an expert. The point is that an expert is a person who knows how to articulate the consensus of the people of power, who can serve the role of manager.
Those two conceptions of the intellectual are very similar. In fact I think it’s a striking fact that people find it very easy to shift from one to the other. That’s called “the god that failed phenomenon.” You see there isn’t going to be a popular revolution and you’re not going to make it as the vanguard driving the masses forward, so you undergo this conversion and you become a servant of “state capitalism.” Now, I won’t say that everybody who underwent that was immoral. Some people really saw things they hadn’t seen. But by now it’s become a farce. You can see it happening: people perfectly consciously recognizing, “Well, there isn’t going to be a revolution. If I want the power and prestige I’d better serve these guys. So I suddenly undergo this conversion, and I denounce my old comrades as unregenerate Stalinists.” It’s a farcical move which we should laugh at at this point. I think the ease of that transition in part reflects the fact that there isn’t very much difference. There’s a difference in the assessment of where power lies, but there’s a kind of commonality of the conception of the intellectual’s role. Now, my point is that the people we call intellectuals are people who have passed the filters, gone through the gates, picked up these roles for themselves, and decided to play them. Those are the people we call intellectuals. If you ask why intellectuals are submissive, the answer is they wouldn’t be intellectuals otherwise. Again, this is not one-hundred percent, but it’s a large part.
Q. You alluded to the media a minute ago. You have written repeatedly that the state and the media collaborate to support and sustain the interests and values of the establishment. Yet, we in the U.S. boast proudly of our “free press.” Are our media victims of ideological indoctrination, or are they willing conspirators in suppressing truth?
A. I wouldn’t exactly put it either way. They’re not victims and they’re not conspirators. Suppose, for example, you were to ask a similar question about, say, General Motors. General Motors tries to maximize profit and market share; are they victims of our system or are they conspirators in our system? Neither. They are components of the system which act in certain ways for well-understood institutional reasons. If they didn’t act that way they would not be in the game any longer. Let’s take the media. The media have a particular institutional role. We have a free press, meaning it’s not state controlled but corporate controlled; that’s what we call freedom. What we call freedom is corporate control. We have a free press because it’s corporate monopoly, or oligopoly, and that’s called freedom. We have a free political system because there’s one party run by business; there’s a business party with two factions, so that’s a free political system. The terms freedom and democracy, as used in our Orwellian political discourse, are based on the assumption that a particular form of domination—namely, by owners, by business elements—is freedom. If they run things, it’s free, and the playing field’s level. If they don’t run things, the playing field isn’t level and you’ve got to do something about it. So if popular organizations form or if labor unions are too important, you’ve got to level the playing field. If it’s El Salvador, you send out the death squads; if it’s at home you do something else, but you’ve got to level the playing field.
Coming back to the free press: yes, our press is free. It’s fundamentally a narrow corporate structure, deeply interconnected with big conglomerates. Like other corporations, it has a product which it sells to the market, and the market is advertisers, other businesses. The product, especially for the elite press, the press that sets the agenda for others that follow, is privileged audiences. That’s the way to sell things to advertisers. So you have an institutional structure of major corporations selling privileged elite audiences to other corporations; now it plays a certain institutional role: it presents the version of the world which reflects the interests and needs of the sellers and buyers. That’s not terribly surprising, and there are a lot of other factors that push it in the same direction. Well, that’s not a conspiracy, any more than G.M.’s making profit is a conspiracy. It’s not that they’re victims; they’re part of the system. In fact, if any segment of the media, say the New York Times, began to deviate from that role, they’d simply go out of business. Why should the stockholders or the advertisers want to allow them to continue if they’re not serving that role? Similarly, if some journalist from the New York Times decided to expose the truth, let’s say started writing accurate and honest articles about the way power is being exercised, the editors would be crazy to allow that journalist to continue. That journalist is undermining authority and domination and getting people to think for themselves, and that’s exactly a function you don’t want the media to pursue. It’s not that it’s a conspiracy; it’s just that the media’s institutional structure gives them the same kind of purpose that the educational system has: to turn people into submissive, atomized individuals who don’t interfere with the structures of power and authority but rather serve those structures. 
That’s the way the system is set up and if you started deviating from that, those with real power, the institutions with real power, would interfere to prevent that deviation. Now that’s the way institutions work, so it seems to me almost predictable that the media will serve the role of a kind of indoctrination.
Q. You have said that “propaganda is to democracy what violence is to the totalitarian state,” which, of course, relates to what you are saying here.
A. And, in fact, there’s a very intriguing line of thought in democratic theory that goes back certainly to the seventeenth-century English revolutions—sort of the first major modern democratic revolutions. There’s been a recognition, which becomes very explicit in the twentieth century, especially in the United States, that as the capacity to control people by force declines, you have to discover other means of control. Harold Lasswell, one of the founders of the modern field of communications in the political sciences, put it this way in the 1930s in an article on propaganda in the International Encyclopedia of Social Sciences: “We should not succumb to democratic dogmatism about men being the best judges of their own interests. They’re not. We’re the best judges.” In a military state or what we would now call a totalitarian state, you can control people by force; in a democratic state you can’t control them by force, so you’d better control them with propaganda—for their own good. Now this is a standard view; in fact, I suspect this is the dominant view among intellectuals.
Q. This, of course, relates to Walter Lippmann’s concept of “the manufacture of consent,” the idea that government distrusts the public’s ability to make wise decisions and so it reserves real power for a “smart” elite who will make the “right decisions” and then create the illusion of public consensus.
A. Yes, but you really have to think considerably about the framework of thinking that that came from. Lippmann designed this notion of “manufacture of consent” as progress in the art of democracy, and he believed it was a good thing—and that’s important. It’s a good thing because, as he put it, “We have to protect ourselves from the rage and trampling of the bewildered herd.” So there’s this mass of people out there who are the bewildered herd, and if we just let them go free—if we allow things like democracy, for example—there’s just going to be rage and trampling because they’re all totally incapable. The only people who are capable of running anything are we smart guys—what he called “the specialized class.” He didn’t add—something, again, which is tacitly understood—that we make it to the specialized class if we serve people with real power. So it’s not that we’re smarter; it’s that we’re more submissive. And we, the specialized class, the servants of power, have to save ourselves and our prestige and power from the rage and trampling of the bewildered herd. For that you need manufacture of consent because you can’t shoot people down in the streets; you can’t control them by force. In that respect, indoctrination is to democracy what a bludgeon is to totalitarianism.
Q. In fact, it’s even better, much more effective.
A. It’s certainly much more important. In a totalitarian state, let’s say the Soviet Union under Stalin’s direction (that’s about as close as you can come), it didn’t matter too much what people believed. They could more or less believe what they liked. What mattered was what they did, and what they did you control by force or by threat. In fact, rather commonly fascist and totalitarian states have been reasonably open. In Franco’s Spain, for example, a lot of people were reading more widely than they were here in many respects and debating much more, and it didn’t matter that much because you’ve got them under control: you have a bludgeon over their heads; there’s not much they can do. In the Soviet Union, for example, samizdat were very widely read. I read some studies of this which had astonishingly high figures of distribution of samizdat. The authorities could have stopped it, but they probably just didn’t care that much: “So people have crazy ideas. Who cares? They’re not going to do anything about it because we control them.” Now, in a more free and more democratic society, it becomes very dangerous if people start thinking because if they start thinking they might start doing, and you don’t have the police to control them. If they’re blacks in downtown Boston, it’s not a big problem: you do have the police to control them. But if they’re relatively privileged, middle-class white folk like us, then you don’t have the police to control them because they’re too powerful to allow that to happen. They share in the privilege of the wealthy and therefore you can’t control them by force so you’ve got to control what they think. Indoctrination is, therefore, a crucial element of preventing democracy in the form of democracy.
Q. Recently, you told Bill Moyers that you’d “like to see a society moving toward voluntary organization and eliminating as much as possible the structures of hierarchy and domination, and the basis for them in ownership and control.” How can this be achieved? The system that you’ve been describing is quite entrenched.
A. Different societies have different forms of domination. Patriarchy is one, and in principle we know how to overcome that—it’s not too easy to do, but we know in principle. But in our kinds of society, the major forms of domination, at least the core ones, are basically ownership. Private ownership of the means of production grants owners the ultimate authority over what’s produced, what’s distributed, what takes place in political life, what the range of cultural freedom is, and so on. They have decisive power because they control capital, and there’s no reason why that should be vested in private hands. In my view, if you take the ideals of the eighteenth century seriously, you become very anti-capitalist. If you take the ideals of classical liberalism seriously, I think it leads to opposition to corporate capitalism. Classical liberalism—as developed, for example, by Humboldt—or much of Enlightenment thought was opposed to the church and the state and the feudal system, but for a reason: because those were the striking examples of centralized power. What it was really opposed to was centralized power that’s not under popular control. Nineteenth-century corporations are another form of centralized power completely out of public control, and by the same reasoning we should be opposed to them. If you take classical liberal thought and apply it rationally to more recent conditions, you become a libertarian socialist and a kind of a left-wing anarchist. I don’t mean anarchist in the American sense where it means right-wing capitalist, but anarchist in the traditional sense, meaning a socialist who’s opposed to state power and in favor of voluntary association to the extent that social conditions permit and who regards the role of an honest person as one of constant struggle forever, as long as human history goes on, against any forms of authority and domination, maybe many that we don’t even see now and will only discover later.
Q. What society do you think comes closest to achieving anything like this kind of voluntary association? Do you think any society even comes close?
A. Well, sure, every society has aspects of it and they differ. Sometimes you find things in very poor, backward, undeveloped societies that you don’t find in advanced societies. In many ways the United States is like this. There are very positive things in the United States. In many respects, the United States is the freest country in the world. I don’t just mean in terms of limits on state coercion, though that’s true too, but also just in terms of individual relations. The United States comes closer to classlessness in terms of interpersonal relations than virtually any society. I’m always struck by the fact when traveling elsewhere, let’s say to England, that the forms of deference and authority that people assume automatically are generally unknown here. For example, here there’s no problem with a university professor and a garage mechanic talking together informally as complete equals. But that is not true in England. That’s a very positive thing about the United States. Intellectuals in the United States are always deploring the fact that intellectuals here aren’t taken seriously the way they’re taken seriously in Europe. That’s one of the good things about the United States. There’s absolutely no reason to take them seriously for the most part. I remember in the 1960s, sometimes I would sign an international statement against the war in Vietnam—signed by me here, Sartre and some other person in Europe, and so on. Well, in Paris there’d be big front-page headlines; here nobody paid any attention at all, which was the only healthy reaction. Okay, so three guys signed a statement; who cares? The statement signed by 120 intellectuals in the time of the Algerian War was a major event in Paris. If a similar thing happened here, it wouldn’t even make the newspapers—correctly.
All that reflects a kind of internalized democratic understanding and freedom that’s extremely important. One shouldn’t underestimate it. I think that it’s one of the reasons why we have the Pentagon system. Compare the United States, say, with Japan. How come we had to turn to the Pentagon system as a way to force the public to subsidize high-technology industry, whereas Japan didn’t? They just get the public to subsidize high-technology industry directly, through reduction of consumption, fiscal measures, and so on. That makes them a lot more efficient than we are. If you want to build the next generation of, say, computers, the Japanese just say, “Okay, we’re going to lower consumption levels, put this much into investment, and build computers.” If you want to do it in the United States, you say, “Well, we’re going to build some lunatic system to stop Soviet missiles, and for that you’re going to have to lower your consumption level and maybe, somehow, we’ll get computers out of that.” Obviously, the Japanese system is much more efficient. So why don’t we adopt the more efficient system? The reason is that we’re a freer society; we can’t do it here. In a society that’s more fascist than state capitalist, and I mean that culturally as well as in terms of economic institutions, you can just tell people what they’re going to do and they do it. Here you can’t do that. No politician in the United States can get up and say, “You guys are going to lower your standard of living next year so that IBM can make more profit, and that’s the way it’s going to work.” That’s not going to sell. Here you have to fool people into it by fear and so on. We need all kinds of complicated mechanisms of propaganda and coercion which in a well-run, more fascistic society are quite unnecessary. You just give orders. That’s one of the reasons fascism is so efficient.
Q. You’ve even expressed fear that the U.S. is ripe for a fascist leader. You write, “In a depoliticized society with few mechanisms for people to express their fears and needs and to participate constructively in managing the affairs of life, someone could come along who was interested not in personal gain, but power. That could be very dangerous.” Is this statement rhetorical, or cautionary, or do you have serious fears that the U.S. can fall victim to a charismatic, fascist dictator?
A. It’s real. I mentioned something very good about the United States, but there are also a number of things that are very bad. One is the breakdown of independent social organization and independent thought, the atomization of people. As we move towards a society which is optimal from the point of view of the business classes—namely, that each individual is an atom, lacking means to communicate with others so that he or she can’t develop independent thought or action and is just a consumer, not a producer—people become deeply alienated, and they may hate what’s going on but have no way to express that hatred constructively. And if a charismatic leader comes along, they may very well follow. I think the United States is very lucky that that hasn’t happened. I think that’s one of the reasons why I’m very much in favor of corruption. I think that’s one of the best things there is. You’ll notice that in my books I never criticize corruption. I think it’s a wonderful thing. I’d much rather have a corrupt leader than a power-hungry leader. A corrupt leader is going to rob people but not cause that much trouble. For example, as long as the fundamentalist preachers—like Jim Bakker, or whatever his name is—are interested in Cadillacs, sex, and that kind of thing, they’re not a big problem. But suppose one of them comes along who’s a Hitler and who doesn’t care much about sex and Cadillacs, who just wants power. Then we’re going to be in real trouble. The more corrupt these guys are, the better off we are. I think we all ought to applaud corruption. In fact, that’s true in authoritarian societies too. The more corrupt they are, the better off the people usually are because power hunger is much more dangerous than money hunger. But I think the United States is ripe for a fascist leader. 
It’s a very good thing that everyone who’s come along so far is impossible: Joe McCarthy, for example, was too much of a thug; Richard Nixon was too much of a crook; Ronald Reagan was too much of a clown; the fundamentalist preachers are ultimately too corrupt. In fact, we’ve escaped, but it’s by luck. If a Hitler comes along, I think we might be in serious trouble.
Q. Your political views have been called “radical,” while your notions of language have been termed “conservative.” Jay Parini writes, “Some colleagues take Chomsky to task for ignoring the social realities of language and, therefore, defining it too narrowly. Chomsky’s work, for example, isn’t concerned with showing how language is used in everyday situations to sustain inequities between men and women.” Is this a fair assessment? How do you reconcile these two seemingly contradictory perspectives?
A. There’s something to that, but let me tell you what my own choices and priorities are. Like any human being, I’m interested in a lot of things. There are things I find intellectually interesting and there are other things I find humanly significant, and those two sets have very little overlap. Maybe the world could be different, but the fact is that that’s the way the world actually is. The intellectually interesting, challenging, and exciting topics, in general, are close to disjoint from the humanly significant topics. If I have x hours a day, I, like any other person, am going to distribute them somehow. I’m not saying I spend every waking moment trying to help other people: I eat, take a walk, read a book, work on problems that excite me, and so on. I do these things just for myself because I like them. I also spend a part of my time, and in fact quite a large part, doing things that I think are humanly significant. Now, I’m going to make this much too mechanical to make a point, but suppose I say, “Okay, now it’s my hour for doing something humanly significant and I have two choices: one is to study the way in which language is used to facilitate authority, and the other is to do something to help Salvadoran peasants who are getting slaughtered.” Well, I’m going to do the second because that’s overwhelmingly more significant than the first, by huge orders of magnitude. That’s why I don’t spend time on things like the use of language to impose authority. Doubtless it’s true, but it’s a topic that’s not intellectually interesting; it has no intellectual depth to it at all, like most things in the social sciences. Also, it’s of marginal human significance as compared with other problems. Therefore, I don’t think it’s a reasonable distribution of my own priorities.
There are people who think differently, and I think they are making a very poor moral judgment. If people want to study, say, social use of language because they find it interesting, fine; that’s on a par with my reading a book. There’s no moral issue involved. Similarly, I find technical problems about language structure or Plato’s problem interesting, so I study them. On the other hand, if people claim they are doing that out of some moral imperative, they’re making a severe error because in terms of moral imperatives that’s a much lower order than others. People often argue, and I think this is a real fallacy, “Look, I’m a linguist; therefore, in my time as a linguist I have to be socially useful.” That doesn’t make sense at all. You’re a human being, and your time as a human being should be socially useful. It doesn’t mean that your choices about helping other people have to be within the context of your professional training as a linguist. Maybe that training just doesn’t help you to be useful to other people. In fact, it doesn’t.
I have a feeling there’s a lot of careerism in this. For example, if I spend all my time working as a linguist and some fraction of it is on things of marginal social utility, I can say, “Look how moral I am,” and at the same time be advancing my career. On the other hand, if I take that segment of my life and use it for going to last week’s demonstration in Washington about the Romero assassination, I’m not advancing my career at all, though I may be helping people more. You have to be careful not to fall into that trap. So if people want to work on these problems—and I think they’re perfectly valid problems—they simply have to ask themselves why they’re doing it. Are they doing it because that’s the way to help other human beings? If so, I think they’re making a poor judgment. If they’re doing it because that’s what they’re interested in, well fine, I’ve no objection. People have a right to do things they’re interested in.
Q. Your discussions of creativity were influential, even inspirational, to those who developed sentence combining as a way of teaching writing. We know one teacher who began each writing course by asking students to combine four or five short sentences into one. Of course, the number of possible solutions is large, and students were always impressed that nearly all of their sentences were different. Nonetheless, anyone who has taught writing at any level can attest that many students fall into predictable patterns of language use. Do you think creativity in language can be fostered so that more of a student’s innate potential is used?
A. I’m sure it can be fostered. Creative reading, for example, surely is a way of fostering it; getting people to wrestle with complex ideas and to find ways of expressing them ought to be at the heart of the writing program. Frankly, I doubt very much that linguistics has anything to contribute to this. Perhaps it can suggest some things, but I don’t suspect it can really be applied. My own feeling is that teaching is mostly common sense. I taught children when I was a college student. I worked my way through college in part by teaching Hebrew school. I’ve taught graduate students across the range, and just from my own experience or anything I’ve read, it seems to me that ninety-nine percent of good teaching is getting people interested in the task or problem and providing them with a rich enough environment in which they can begin to pursue what they find interesting in a constructive way. I don’t know of any methods for doing that other than being interested in it yourself, being interested in the people you are teaching, and learning from the experience yourself. In that kind of environment, something good happens, and I suppose that’s true with writing as much as auto mechanics. I often quote a famous statement from one of MIT’s great physicists, Victor Weisskopf, but it’s a standard comment. He was often asked by students, “What are we going to cover this semester?” His standard answer was supposed to have been, “It doesn’t matter what we cover; it matters what we discover.” That’s basically it: that’s good teaching. It doesn’t matter what you cover; it matters how much you develop the capacity to discover. You do that and you’re in good shape.
Q. In College English in 1967, you wrote that “a concern for the literary standard language—prescriptivism in its more sensible manifestations—is as legitimate as an interest in colloquial speech.” Do you still believe that a sensible prescriptivism is preferable to linguistic permissiveness? If so, how would you define a sensible prescriptivism?
A. I think sensible prescriptivism ought to be part of any education. I would certainly think that students ought to know the standard literary language with all its conventions, its absurdities, its artificial conventions, and so on because that’s a real cultural system, and an important cultural system. They should certainly know it and be inside it and be able to use it freely. I don’t think people should give them any illusions about what it is. It’s not better, or more sensible. Much of it is a violation of natural law. In fact, a good deal of what’s taught is taught because it’s wrong. You don’t have to teach people their native language because it grows in their minds, but if you want people to say, “He and I were here” and not “Him and me were here,” then you have to teach them because it’s probably wrong. The nature of English probably is the other way, “Him and me were here,” because the so-called nominative form is typically used only as the subject of a tensed sentence; grammarians who misunderstood this fact then assumed that it ought to be, “He and I were here,” but they’re wrong. It should be “Him and me were here,” by that rule. So they teach it because it’s not natural. Or if you want to teach the so-called proper use of shall and will—and I think it’s totally wild—you have to teach it because it doesn’t make any sense. On the other hand, if you want to teach people how to make passives you just confuse them because they already know, because they already follow these rules. So a good deal of what’s taught in the standard language is just a history of artificialities, and they have to be taught because they’re artificial. But that doesn’t mean that people shouldn’t know them. They should know them because they’re part of the cultural community in which they play a role and in which they are part of a repository of a very rich cultural heritage. So, of course, you’ve got to know them.
Q. The standard literary language, what’s called “standard English,” is an object of great controversy in some parts of the Third World now. For example, there’s a debate in India over whether people should still be taught the colonial language to give them greater access to technology or whether there should be just a few people who are very active translators into the local languages. What’s your sense of the desirability of the spread of world English? First of all, do you think that it is continuing to spread now that American economic hegemony has been broken? Also, is it desirable that it spread?
A. I’ve never seen a real study, but my strong impression is that it’s continuing to spread and that U.S. cultural hegemony is growing even while U.S. economic hegemony is declining. Take the relations between the United States and Europe. Europe is becoming relatively more powerful economically and will soon be absolutely more powerful. On the other hand, my strong impression is that it’s much more culturally colonized by the United States in terms of ways of thinking, the sources of news, and so on. This is not an unusual phenomenon. Look at the relations between England and the United States, say, around 1950. England was declining as a power sharply relative to the United States, but that was combined with a high degree of Anglophilia and often a rather childish imitation of British cultural styles and modes on the part of the intellectual classes here. These things aren’t necessarily parallel, but my strong impression is that the hegemony of U.S. English and U.S. culture in general is extending in everything from the sciences to pop music.
Now, what should they do in places like India? Well, that’s a hard problem. It’s like what should you do with Black English? I don’t think there are simple answers to that. There are good reasons to preserve and develop national languages and national cultures because they enrich human life for the participants and for others. On the other hand, the people who are in them may suffer. For example, if people in Wales learn Welsh, the way the world is they’re going to be worse off in many respects than if they had learned English. You might want the world to be some other way, but this is the way it is. The same kinds of questions arise in the case of Black English and in the case of teaching English as a second language in India. How you balance those values is tricky, and I don’t think there’s any general answer to it. I think there are particular answers in particular places. In the case of India, the answer being pursued is that people ought to learn English, and I think that’s probably reasonable.
Q. In 1979, you gave a series of lectures in Pisa which were later published and which many linguists think introduced the most important development of the 1980s: the principles and parameters approach. Yet, unlike your earlier work in the Aspects phase, it’s not known outside of linguistics, and it hasn’t had the same impact. Do you think people outside of linguistics should know about the principles and parameters approach?
A. I think it’s more important than the Aspects-type approach. In fact, if anything deserves to be called a revolution, that’s probably it. It leads to a conception of language which is, in fact, radically different from anything in the historical tradition. Early transformational grammar, early generative grammar, say in the 1950s and 1960s, had a kind of a traditional feel to it. In many ways, it was more acceptable to traditional grammarians than to structural linguists because in a lot of ways it had a traditional look. It was more like Jespersen than it was like Bloomfield, for example, and traditional grammarians recognized that. They may not have understood the details or liked the way it was being done, but they could kind of see the point. For example, there were particular rules for particular constructions, and just as a traditional grammar had a chapter on the passive or on the imperative and so on, the early generative grammars were like that in structure: there was a passive rule and a question rule and a chapter on what verb phrases look like, and so on. The post-1980s theories are radically different. There are no constructions; there are no rules. Things like traditional constructions, say relative clauses, are just taxonomic artifacts. They’re like “large mammal.” A large mammal is a real thing, but it has no meaning in the sciences. It’s just something that results from a lot of different things interacting. The same seems to be true of the passive: it’s not a real thing; it’s just a taxonomic phenomenon. So there’s no meaning to the question, “Is the Japanese passive the same as the English passive?” Furthermore, there don’t seem to be any rules—that is, language-specific rules. In fact, you can speculate without being thought absurd that there may be only one computational system and in that sense only one language.
The variety of languages may be a matter of a number of lexical options, where those lexical options probably leave out a large part of the substantive vocabulary, meaning nouns and verbs and so on. So it looks as if the variety of languages is very narrowly circumscribed and the apparent radical difference among languages derives from the fact that in quite complicated systems, if you make small changes here and there, the output may look very different at the end, even though they’re basically the same. That’s all work of the 1980s, and I think if it’s right it’s very rich in its implications. I don’t think it’s going to be so easily assimilated elsewhere because you have to understand it. In the work of the 1960s, you could have a rough feel for what it was like and misunderstand it but apply it nonetheless. And a lot of the apparent impact of this linguistics was kind of casual misunderstanding of things that look more or less familiar; this new work is quite different. You have to understand what it’s about and that means some work.
Q. What would you suggest people read—people who are out of the field who want to understand this new approach?
A. Well, there are some pretty good relatively introductory books. It depends on what level they want to understand it. I’ve tried myself. I have a book called Language and Problems of Knowledge which is a collection of lectures given in Managua to a public audience of non-linguists. This was just a general audience and they seemed to find it intelligible, and other people have told me they find it intelligible. At a somewhat more technical level, there’s a book by Howard Lasnik and a student of his, Juan Uriagereka, called A Course in GB Syntax: Lectures on Binding and Empty Categories, which is actually first-year graduate lectures from the University of Connecticut. Now those are very lucid and carry it much further into the technical intricacies. But for the general points, at least as I understand them, I’d recommend the first book.
Q. What readership did you target in your 1986 book, Knowledge of Language?
A. That’s a funny sort of book. One chapter is pretty technical linguistics; one chapter is about thought control; the rest is sort of philosophy of language. I had an original idea for that book, but it just turned out to be too encyclopedic to carry off; it’s sort of described in the Preface. It was going to be about two problems in the theory of knowledge: Plato’s problem, or how we know so much given so little evidence; and Orwell’s problem, or how we know so little given so much evidence. I still think that would be a nice book to write. It went too far.
Q. Well, you did sketch out Orwell’s problem in the last chapter. What’s your sense of the treatment of your work in popularizations such as Neil Smith’s The Twitter Machine?
A. That’s a very good book. I think he knows what he’s doing; he’s very sophisticated. I don’t agree with him on everything, but I think it’s an intelligent presentation not just on my work but on lots of things in the field, including lots of interesting work done on relevance theory and pragmatics and so on.
Q. Well, he does deal quite extensively with your work.
A. That’s a mistake people make: they call it “mine” because I sometimes write about it. Take the Pisa lectures. They weren’t “mine.” They were the result of years of very interesting work. There’s a reason why they were given in Pisa: a lot of the best work was being done by Italian and European linguists. So I happened to give some summer lectures there. These things don’t have individual names attached to them.
Q. Earlier in the interview you raised the issue of semantics and your interest in it, but you’ve also consistently reiterated over your career, most recently in The Generative Enterprise, that linguists’ chief concern should not be semantics. We were surprised to hear that you’re now teaching a course in semantics.
A. It’s not surprising. Part of this is terminological. In my view, most of what’s called semantics is syntax. I just call it syntax; other people call the same thing semantics. Syntactic Structures, in my view, is pure syntax, but the questions dealt with there are what other people call semantics. I was interested in the question, “Why does ‘John is easy to please’ have a different meaning from ‘John is eager to please’?” I wanted to find a theory of language structure that would explain that fact. Most people call that semantics; I call it syntax because I think it has to do with mental representations. Take a point we discussed earlier: the word house, the concept “house,” and the use of the word house in real situations to refer to things. There are two relations there, and I don’t think you can turn them into one as is commonly done. The common idea is that there’s one relation, the relation of reference, and I don’t believe that. I think there’s a relation that holds between the word house and a very rich concept that doesn’t only hold of house but of all sorts of other things. That relation most people would call semantics. I call it syntax because it has to do with mental representations and the structure of mental representations. Then there’s the relationship between that rich semantic representation and things in the world, like some place I’m going tonight after class. Now that relationship is what is real semantics, and about that there is almost nothing to say. That’s the part that’s subject to holism and interest relativity and values and so on; and you can sort of assemble Wittgensteinian particulars about it, but there doesn’t seem to be anything general to say. Where I depart from Wittgenstein is that I think there is something very general and definite to say about the relation between words and concepts.
I call that syntax because it has to do with mental representations, things inside the skin, rules and computations and representations and so on, going all the way into intrinsic semantic properties, analytic/synthetic distinctions, and most problems of the theory of meaning that can be dealt with.
Now, there are plenty of people who call their work semantics who in my view are not dealing with semantics at all. Take “possible worlds semantics.” In my view, that’s just straight syntax. It’s either right or wrong (and I think it’s right), but if it’s right, it’s right in the sense in which some other theory of phonology is right. It’s a form of syntax. Problems of semantics will arise when you begin to tell me how a possible world relates to things, and the people who work with this topic don’t deal with it. When you start dealing with the relation of mental constructions to the world, you discover that there’s very little to say other than Wittgensteinian-type questions about ways of life. At that level, I think he’s basically right; you can discuss ways of life. So this is largely illusion. I do think that syntax and semantics should deal with what I call syntax, mostly, because that’s where the richness in the field is.
Q. In your famous 1959 review of Skinner’s verbal behaviorist psychology you argued convincingly that terms such as reinforcement, which have well-defined meanings in experiments using rats, become meaningless when extended to the complexity of human behavior. Many of your terms have also been metaphorically extended. Can you think of any instances in which metaphorical extensions of a concept like “deep structure” might be justified, or should such extensions always be avoided?
A. I think you’ve got to be careful. In the case of “deep structure,” I simply stopped using the term because it was being so widely misunderstood. “Deep structure” was a technical term. It didn’t have any sense of “profundity,” but it was understood to mean “profound” or “far-reaching” or something like that. It might turn out that what I call “surface structure” is much more profound in its implications. Almost invariably in the secondary literature, “deep structure” has been confused with what I would call “universal grammar.” So “deep structure” is identified as kind of the innate structure, and that’s not correct. The term was so widely misunderstood that I decided (I think it was in Knowledge of Language) to drop the word and just make it an obvious technical term so nobody would be confused; nowadays I just refer to it as “D-structure.” I figure that’s not going to confuse anybody. It looks technical and it is technical.
It’s very rare that you ever get a free ride from some other field. People who think they’re talking about “free will” because they mention Heisenberg usually don’t know what’s going on. Or people who say, “Well, people aren’t computers. Remember Gödel.” That’s too easy. Life isn’t that easy. You’d better understand it before you start drawing conclusions from it. Sometimes people who do understand what they’re talking about can make plausible suggestions or even inferences or guesses from outside the field. That’s not impossible, but first you’ve got to understand what you’re talking about. These topics are not like political science. I mean they’re not just there on the surface; there’s some intellectual structure and some degree of intellectual depth. It’s not quantum physics either, so I think any person who’s interested can figure it out without too much trouble. But you’ve got to take the trouble. I’ve been appalled by what I’ve read on how “deep structure” is used.
Q. Some of your work both in linguistics and in political analysis has generated considerable controversy. Are you aware of any specific misunderstanding or criticism of your work that you’d like to take issue with at this time?
A. We could go on forever. On the linguistics side, there’s plenty of misunderstanding but I think it’s resolvable. I’m enough of a believer in the rational side of human beings to think that if you sit down and talk these questions through and you think them through you can reach a resolution. On the political side, I don’t think it’s resolvable because I think there’s a deep functional need not to understand. The problem is that if what I’m saying is correct, then it’s also subversive and, therefore, it’d better not be understood. Let me put it this way: if I found that I did have easy access to systems of power like journals and television, then I’d begin to be worried. I’d think I’m doing something wrong because I ought to be trying to subvert those systems of power, and if I am doing it and I’m doing it honestly, they shouldn’t want to have me around. In those areas, misunderstanding (if you want to call it that) is almost an indication that you may well be on the right track. It’s not proof that you’re on the right track, but it’s an indication you may be. If you’re understood and appreciated, it’s almost proof that you’re not on the right track.