Friday, October 31, 2003

Weizenbaum examines computers and society

"The computer is ... the beating heart of virtually every modern military system you can think of, with the exception of the foot soldier."

By Diana ben-Aaron

Professor Joseph Weizenbaum is well-known, both as a teacher of computer science and as an activist for scientific and educational responsibility. He designed the first computerized banking system before coming to MIT in the 1960s. He invented ELIZA, the first "psychiatric" program, and was moved by the reaction to it to write the best-selling Computer Power and Human Reason.

Q: What, if anything, do you think should be the role of the computer in education?

A: I'll tell you my reaction to that question without answering it directly. There's a Russian joke that goes something like this: Two people are standing in a very large breadline in Moscow, and they're talking about the fact that the harvest failed once more and that's why there's a shortage of bread, and one of them says to the other, "You know, it's all the fault of the Jews and the bicyclists." The other one says, "Why the bicyclists?" and the first one answers, "Why the Jews?"

You might have said "What is the role of computers and bicycles in education?" Then I would have said, "Why the bicycles?" and you "Why the computer?"

Yours is an often-asked question. In a sense, it is upside-down. You start with the instrument; the question makes the assumption that of course the computer is good for something in education, that it is the solution to some educational problem. Specifically, [your] question is, what is it good for?

But where does the underlying assumption come from? Why are we talking about computers? I understand [you asked because] I'm a computer scientist, not a bicycle mechanic. But there is something about the computer -- the computer has, almost since its beginning, been basically a solution looking for a problem.

People come to MIT and to other places, people from all sorts of establishments -- the medical establishment, the legal establishment, the education establishment -- and in effect they say, "You have there a very wonderful instrument which solves a lot of problems. Surely there must be problems in my establishment -- in this case, the educational establishment -- for which your wonderful instrument is a solution. Please tell me for what problems your wonderful instrument is a solution."

The questioning should start the other way -- it should perhaps start with the question of what education is supposed to accomplish in the first place. Then perhaps [one should] state some priorities -- it should accomplish this, it should do that, it should do the other thing. Then one might ask, in terms of what it's supposed to do, what are the priorities? What are the most urgent problems? And once one has identified the urgent problems, then one can perhaps say, "Here is a problem for which the computer seems to be well-suited." I think that's the way it has to begin.

Q: What are the problems of the educational establishment?

A: The first priority has to be, it seems to me, to lend to those to be educated a mastery of their own language so that they can express themselves clearly and with precision, in speech and in writing. That's the very first priority. The second priority is to give students an entree to and an identity within the culture of their society, which implies a study of history, literature, and all that.

And the third, very close to the second, is to prepare people for living in a society in which science is important, which means to teach them mathematics, or at least arithmetic, and the fundamental skills important to observing the world.

A school system which meets these main objectives might think about introducing something new. Meanwhile, researchers should certainly work on innovative education -- including computer-aided education. But we ought not to use entire generations of schoolchildren as experimental subjects.

In part, this response is based on my belief that what primary and secondary schools teach about computers now is either wrong or can be learned by a reasonably educated person in a few weeks.

Q: Where do you think the study of ethics fits in[to] all that?

A: Without being able to express themselves clearly, without having a mastery of their own language, I think it would be very difficult, to the point of impossibility, for people to think through ethical considerations. I think that mastery of the language has to come first in that respect as well. In the study of the history of the culture, the literature of the culture, the politics of the culture, and so on -- that's where I think ethics are exemplified.

A question that we should ask is, how well are the schools fulfilling these first priorities? Certainly the answer with respect to language is miserably, absolutely miserably.

MIT certainly gets the cream of the crop of the American school establishment, yet there was a headline in your paper just a few months ago which said that out of the 1,000-some freshmen who took the writing test, 800 flunked. How is it then for people who are going to junior colleges? How does it look for people who aren't going to college at all? How does it look for people who dropped out of school when they were 14 or 15? Clearly the American school establishment is failing very seriously.

It is terribly important to ask the reasons the schools are failing so miserably. I think that even if one could show that the introduction of the computer into schools actually effected an improvement, say for example in reading scores, even if one could show that, the question, "Why can't Johnny read?" must still be asked.

There is a very good reason that questions of that kind are uncomfortable. When we ask this question, we may discover that Johnny is hungry when he comes to school, or that Johnny comes from a milieu in which reading is irrelevant to concrete problems or survival on the street -- that is, there is no chance to read, it is a violent milieu, and so on.

You might discover that, and then you might ask the next question: "Why is it that Johnny comes to school hungry? Don't we have school breakfast programs and lunch programs?" The answer to that might be, yes, we used to, but we don't any more.

Why is there so much poverty in our world, in the United States, especially in the large cities? Why is it that classes are so large? Why is it that fully half the science and math teachers in the United States are underqualified and are operating on emergency certificates?

When you ask questions like that, you come upon some very important and very tragic facts about America. One of the things you would discover is that education has a very much lower priority in the United States than do a great many other things, most particularly the military.

It is much nicer, it is much more comfortable, to have some device, say the computer, with which to flood the schools, and then to sit back and say, "You see, we are doing something about it, we are helping," than to confront ugly social realities.

Q: What do you think should be done instead?

A: I think that further questions should be asked, always "why?" just in the way I've indicated. And then I think it becomes necessary to respond to what these questions uncover, to change the fundamental facts that account for the difficulties, as opposed to papering them over by introducing some technological fix.

Q: Do you think that the computer is creating a technical elite, reinforcing old power structures, or remaking American society?

A: I think the computer has from the beginning been a fundamentally conservative force. It has made possible the saving of institutions pretty much as they were, which otherwise might have had to be changed. For example, banking. Superficially, it looks as if banking has been revolutionized by the computer. But only very superficially. Consider that, say 20, 25 years ago, the banks were faced with the fact that the population was growing at a very rapid rate, many more checks would be written than before, and so on. Their response was to bring in the computer. By the way, I helped design the first computer banking system in the United States, for the Bank of America 25 years ago.

Now if it had not been for the computer, if the computer had not been invented, what would the banks have had to do? They might have had to decentralize, or they might have had to regionalize in some way. In other words, it might have been necessary to introduce a social invention, as opposed to the technical invention.

What the coming of the computer did, "just in time," was to make it unnecessary to create social inventions, to change the system in any way. So in that sense, the computer has acted as a fundamentally conservative force, a force which kept power or even solidified power where it already existed.

Q: Did you have these concerns when you were designing the banking system?

A: Not in the slightest. It was a very technical job, it was a very hard job, and there were a number of very, very difficult problems -- for example, to design a machine that would handle paper checks of various sizes, some of which might have been crumpled in a person's pocket and so on, and to handle those the way punch cards are handled in a punch card machine. There were many very hard technical problems. It was a whale of a lot of fun attacking those hard problems, and it never occurred to me at the time that I was cooperating in a technological venture which had certain social side effects which I might come to regret. That never occurred to me; I was totally wrapped up in my identity as a professional, and besides, it was just too much fun.

Q: When did it occur to you?

A: I think after spending say 10 years at MIT -- I came here in 1963. Much of that time, much of [the next] 10 years, was very turbulent politically ... Soon after I got here, President Kennedy was assassinated. There was the dream of the Great Society that President Johnson announced, and the civil rights movement -- it was very hard-fought, and I of course participated -- and the Vietnam War.

The knowledge of the behavior of German academics during the Hitler time weighed on me very heavily. I was born in Germany; I couldn't relax and sit by and watch the university in which I now participated behaving in the same way. I had to become engaged in social and political questions. Once that happened and I started to think and write about issues of this kind, some realities became increasingly clear to me.

Writing is very much like computer programming; when you sit down to write a program chances are you have a very good idea of what it is you want to do, you have a very good idea of what algorithm you're going to use. In a certain sense, you believe, or you act as if, you've already solved the problem and it's only a question of writing down the solution. So it is when you start to write in ordinary language. It's perfectly clear to many people, at MIT certainly, that in the act of programming you discover new ideas, and most particularly you discover that there are deep holes in your knowledge that you have to fill before you go on. That happens with writing too. So when I started to write about these things, sometimes just more or less for myself, or in letters to others, the realities I am talking about became clear to me.

Q: What about computers and the military?

A: The computer was of course born to the military, so to speak. In the United States, the first fully functioning computer was created in order to compute ballistic trajectories; in England, to help decipher military codes. In Germany, Konrad Zuse built his computer in order to deal with mathematical problems which arise in the design of military aircraft.

In all three instances, the computer was the child of the military to begin with. Certainly after the Second World War the baton, so to speak, was passed to the Americans; the leadership for developing the computer came into American hands, and from that point to this I think it is safe to say that by far most of the research and development of computers has been paid for with military money, directly or indirectly.

It is also safe to say, it is simply a matter of fact, that to date weapons which threaten to wipe out the human species altogether could not be made and could certainly not be delivered with any sort of precision were it not for the computers which guide these weapons.

The computer is very deeply involved with the military. Today it counts as the beating heart of virtually every modern military system you can think of, with the exception of the foot soldier.

In their book on the fifth generation, Ed Feigenbaum of Stanford University and Pamela McCorduck say that present "smart weapons" will seem like wind-up toys compared to the weapons we will have once we've entered the fifth generation of computers; that is, once we have properly introduced artificial intelligence, vision and so on, into weapons.

So from the very beginning, the computer was basically a military instrument, it's continued to be, and now with the so-called Strategic Defense Initiative, the computer promises to be firmly embedded in the military systems of the world. There is just no doubt about that.

Q: So to be a computer science professional very often means to be working in defense?

A: I would endorse that sentence, except that I would wish either that the last word be put in quotes, or that you change the sentence to read "...to be involved in the military."

And you know, "the military" certainly is very considerably less euphemistic than to say "defense." Now I understand that we're threatened by great forces, like Grenada, Cuba, and Nicaragua, for example, and we have to defend ourselves against them, but the terminology "the military" still hides the reality.

When we think today, for example, of the masses of computers in helicopters, and in all sorts of mobile things like tanks and airplanes, and we think of the many places on earth where these machines are being used every day, whether it is in Afghanistan or someplace in Africa, then the term "the military" also deserves to be replaced with something considerably harsher.

Instead of saying the computer is involved with the military, say the computer is involved with killing people. It is only when you come to that vocabulary, I think, that the euphemism begins to disappear, and I think it's very important that it disappear.

Q: How can people continue to do this, knowing that the things they build will be involved in killing people?

A: People have a series of rationalizations. People say for example that science and technology have their own logic, that they are in fact autonomous. This particular rationalization is profoundly false. It is not true that science marches on in defiance of human will, independent of human will, that just is not the case. But it is comfortable, as I said: it leads to the position that "if I don't do it, someone else will."

Of course if one takes that as an ethical principle then obviously it can serve as a license to do anything at all. "People will be murdered; if I don't do it, someone else will." "Women will be raped; if I don't do it, someone else will." That is just a license for violence.

Other people say, and I think this is a widely used rationalization, that fundamentally the tools we work on are "mere" tools; this means that whether they get used for good or evil depends on the person who ultimately buys them and so on.

There's nothing bad about working in computer vision, for example. Computer vision may very well some day be used to heal people who would otherwise die. Of course, it could also be used to guide missiles, cruise missiles for example, to their destination, and all that. You see, the technology itself is neutral and value-free and it just depends how one uses it. And besides -- consistent with that -- we can't know, we scientists cannot know how it is going to be used. So therefore we have no responsibility.

Well, that is false. It is true that a computer, for example, can be used for good or evil. It is true that a helicopter can be used as a gunship and it can also be used to rescue people from a mountain pass. And if the question arises of how a specific device is going to be used, in what I call an abstract ideal society, then one might very well say one cannot know.

But we live in a concrete society, [and] with concrete social and historical circumstances and political realities in this society, it is perfectly obvious that when something like a computer is invented, it is going to be adopted for military purposes. That follows from the concrete realities in which we live; it does not follow from pure logic. We're not living in an abstract society, we're living in the society in which we in fact live.

If you look at the enormous fruits of human genius that mankind has developed in the last 50 years -- atomic energy and rocketry and flying to the moon and coherent light, and it goes on and on and on -- it turns out that every one of these triumphs is used primarily in military terms. So it is not reasonable for a scientist or technologist to insist that he or she does not know -- or cannot know -- how it is going to be used.

Q: Do you think the younger generation of computer scientists coming out of MIT has these concerns?

A: I do not know. I just do not know. I should think that if concern were very widespread, if it were deep-rooted, then perhaps progress in computer development might be somewhat slower than it is. So I do not think that younger people are concerned about these things today. But I have very little way of measuring it. I hope I am wrong.

Of course, it's not only computers that come into play here. The support that the Institute generally gets from the military, which is to say the Department of Defense and to a certain extent the Department of Energy, makes it pretty clear that it is not only computer science which is involved.

By the way, let me say an additional rationalization for working on these things is that there will be wonderful fallout. We get the space program, and out of the space program we get missiles which can devastate the earth in a very few minutes, but we also get other things -- Teflon, for example, and all this computer stuff, and the miniaturization, the microminiaturization of components and so on, eventually gives us electronic watches. This is a derivative of "things can be used for good or evil."

But I think the following: that if one were to ask the medical community in the United States to do research on bacteriological warfare -- that is, to actually go to a laboratory somewhere in the United States and engage in that work -- that most medical people would refuse.

If they were told that out of this would come antitoxins and all sorts of other good and useful products, just as fallout, I think the medical community would in general say, "Well, if we're after antitoxins and other medicines, then let us work on that. To work on that by way of working on bacteriological warfare seems to us insane." I think the rest of the scientific and engineering community might adopt a similar stance.

Q: But they haven't already?

A: Certainly not ... Certainly the most frequent justification one hears for working, for example, on the Strategic Computing Initiative -- which is described by the military as fundamentally three weapons systems and nothing else, there is no mystery about it -- is that we will have wonderful consumer products. For example, one member of the laboratory I am in, the Laboratory for Computer Science, seriously and in print suggested that as a by-product of the Strategic Computing Initiative we might have television sets on which we can change the channels by voice command -- see, isn't that wonderful?

Q: What is your greatest fear for the future?

A: I have children, let me say, first of all. And of course at this university as at others one sees very many young people. My greatest concern is that these young people won't ever be permitted to grow up, ever to get as old as I am now. I think that is a very realistic fear.

Homepage: http://www-tech.mit.edu/

----
Copyright 1985 by The Tech. All rights reserved.
This story was published on Tuesday, April 9, 1985.
Volume 105, Number 16
The story was printed on page 2.
This article may be freely distributed electronically,
provided it is distributed in its entirety and includes
this notice, but may not be reprinted without the
express written permission of The Tech. Write to
archive@the-tech.mit.edu for additional details.
