Natural versus artificial

You have what’s called natural intelligence (except when your friends accuse you of having "natural stupidity"). The intelligence of a computer, by contrast, is artificial. Can the computer’s artificial intelligence ever match yours?

For example, can the computer ever develop the "common sense" needed to handle exceptions, such as a broken traffic light? After waiting at a red light for several minutes, the typical human would realize the light was broken and would cautiously proceed past the intersection. Would a computer programmed to "never go on red" be that smart?

Researchers who study the field of artificial intelligence have invented robots and many other fascinating computerized devices. They’ve also been trying to develop computers that can understand ordinary English commands and questions, so you won’t have to learn a "programming language". They’ve been trying to develop expert systems — computers that imitate human experts such as doctors and lawyers.

Early dreamers

The dream of making a computer imitate us began many centuries ago.…

The Greeks

The hope of making an inanimate object act like a person can be traced back to the ancient Greeks. According to Greek mythology, Pygmalion sculpted a statue of a woman, fell in love with it, and prayed to the gods to make it come to life. His wish was granted — she came to life. And they lived happily ever after.

Ramon Lull (1272 A.D.)

In 1272 A.D. on the Spanish island of Majorca, Ramon Lull invented the idea of a machine that would produce all knowledge, by putting together words at random. He even tried to build it.

Needless to say, he was a bit of a nut. Here’s a description of his personality (written by Jerry Rosenberg, abridged):

Ramon Lull married young and fathered two children — which didn’t stop him from his courtier’s adventures. He had an especially strong passion for married women. One day as he was riding his horse down the center of town, he saw a familiar woman entering church for a High Mass. Undisturbed by this circumstance, he galloped his horse into the cathedral and was quickly thrown out by the congregants. The lady was so disturbed by his scene that she prepared a plan to end Lull’s pursuit once and for all. She invited him to her boudoir, displayed the bosom that he had been praising in poems written for her, and showed him a cancerous breast. "See, Ramon," she said, "the foulness of this body that has won thy affection! How much better hadst thou done to have set thy love on Jesus Christ, of Whom thou mayest have a prize that is eternal!"

In shame Lull withdrew from court life. On four different occasions a vision of Christ hanging on the Cross came to him, and in penitence Lull became a dedicated Christian. His conversion was followed by a pathetic impulse to try to convert the entire Moslem world to Christianity. This obsession dominated the remainder of his life. His "Book of Contemplation" was divided into 5 books in honor of the 5 wounds of Christ. It contained 40 subdivisions — for the 40 days that Christ spent in the wilderness; 366 chapters — one to be read each day and the last chapter to be read only in a leap year. Each of the chapters had 10 paragraphs to commemorate the 10 commandments; each paragraph had 3 parts to signify the trinity — for a total of 30 parts a chapter, signifying the 30 pieces of silver.

In the final chapter of his book he tried to prove to infidels that Christianity was the only true faith.

Gulliver’s Travels Several centuries later — in 1726 — Lull’s machine was pooh-poohed by Jonathan Swift, in Gulliver’s Travels.

Gulliver meets a professor who has built such a machine. The professor claims his machine lets "the most ignorant person… write books in philosophy, poetry, politics, law, mathematics, and theology without the least assistance from genius and study."

The machine is huge — 20 feet on each side — and contains all the words of the language, in all their declensions, written on scraps of paper that are glued onto bits of wood connected by wires.

Each of the professor’s 40 students operates one of the machine’s 40 cranks. At a given signal, every student turns his crank a random distance, to push the words into new positions.

Gulliver says:

He then commanded 36 of the lads to read the several lines softly as they appeared upon the frame. Where they found three or four words together that might make part of a sentence, they dictated to the four remaining boys, who were scribes. Six hours a day the young students were employed in this labor. The professor showed me several large volumes already collected, of broken sentences, which he intended to piece together, and out of those rich materials give the world a complete body of all arts and sciences.

Karel Capek (1920)

The word robot was invented in 1920 by Karel Capek, a Czech playwright. His play "R.U.R." shows a factory where the workers look human but are really machines. The workers are dubbed robots, because the Czech word for slave is robotnik.

His play is pessimistic. The invention of robots causes unemployment. Men lose all ambition — even the ambition to raise children. The robots are used in war, go mad, revolt against mankind and destroy it. In the end only two robots are left. It’s up to them to repopulate the world.

Isaac Asimov (1942)

Many sci-fi writers copied Capek’s idea of robots, with even more pessimism. An exception was Isaac Asimov, who depicted robots as being loving. He coined the word robotics, which means the study of robots, and in 1942 developed what he called the "Three Laws of Robotics". Here’s the version he published in 1950:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence, as long as such protection does not conflict with either the First or the Second Law.

Norbert Wiener (1947)

The word cybernetics was invented in 1947 by Norbert Wiener, an MIT professor. He defined it to be "the science of control and communication in the animal and the machine." Wiener and his disciples, who called themselves cyberneticists, wondered whether it would be possible to make an electrical imitation of the human nervous system. It would be a "thinking machine". They created the concept of feedback: animals and machines both need to perceive the consequences of their actions, to learn how to improve themselves. For example, a machine that is producing parts in a factory should examine the parts it has produced, the heat it has generated, and other factors, to adjust itself accordingly.
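In programming terms, feedback is just a loop that measures output and adjusts input. Here’s a tiny Python sketch (my own illustration, not anything Wiener built): a thermostat-style controller that compares a temperature against a target and corrects a fraction of the error at each step.

  target = 200.0        # desired temperature
  temperature = 150.0   # current sensor reading
  gain = 0.3            # correction strength (a tuning choice)

  for step in range(10):
      error = target - temperature   # perceive the consequence of past actions
      temperature += gain * error    # adjust accordingly
      print(f"step {step}: temperature = {temperature:.1f}")

The reading creeps toward 200: each pass perceives the remaining error and shrinks it, which is Wiener’s point in miniature.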

Wiener, like Ramon Lull, was somewhat strange. He graduated from Tufts College when he was 14 years old, got his doctorate from Harvard when he was 18, and became the typical "absent-minded professor". These anecdotes are told about him:

He went to a conference and parked his car in the big lot. When the conference was over, he went to the lot but forgot where he had parked his car. He even forgot what his car looked like. So he waited until all the other cars were driven away, then took the car that was left.

When he and his family moved to a new house a few blocks away, his wife gave him written directions on how to reach it, since she knew he was absent-minded. But when he was leaving his office at the end of the day, he couldn’t remember where he put her note, and he couldn’t remember where the new house was. So he drove to his old neighborhood instead. He saw a young child and asked her, "Little girl, can you tell me where the Wieners moved?" "Yes, Daddy," came the reply, "Mommy said you’d probably be here, so she sent me to show you the way home."

One day he was sitting in the campus lounge, intensely studying a paper on the table. Several times he’d get up, pace a bit, then return to the paper. Everyone was impressed by the enormous mental effort reflected on his face. Once again he rose from his paper, took some rapid steps around the room, and collided with a student. The student said, "Good afternoon, Professor Wiener." Wiener stopped, stared, clapped a hand to his forehead, said "Wiener — that’s the word," and ran back to the table to fill the word "wiener" in the crossword puzzle he was working on.

He drove 150 miles to a math conference at Yale University. When the conference was over, he forgot he had come by car, so he returned home by bus. The next morning, he went out to his garage to get his car, discovered it was missing, and complained to the police that while he was away, someone had stolen his car.

Those anecdotes were collected by Howard Eves, a math historian.

Alan Turing (1950)

Can a computer "think"? In 1950, Alan Turing proposed the following test. In one room, put a human and a computer. In another room, put another human (called the Interrogator) and give him two terminals — one for communication with the computer, and the other for communication with the other human — but don’t tell the Interrogator which terminal is which. If he can’t tell the difference, the computer’s doing a good job of imitating the human, and, according to Turing, we should say that the computer can "think".

It’s called the Imitation Game. The Interrogator asks questions. The human witness answers honestly. The computer pretends to be human.

To win, the computer must be able to imitate human weaknesses as well as strengths. For example, when asked to add two numbers, it should pause before answering, as a human would. When asked to write a sonnet, a good imitation-human answer would be, "Count me out on this one. I never could write poetry." When asked "Are you human?", the computer should say "yes".

Such responses wouldn’t be hard to program. But a clever Interrogator could give the computer a rough time, by requiring it to analyze its own thinking:

Interrogator: In the first line of your sonnet which reads "Shall I compare thee to a summer’s day," wouldn’t "a spring day" do as well or better?

Witness: It wouldn’t scan.

Interrogator: How about "a winter’s day"? That would scan all right.

Witness: Yes, but nobody wants to be compared to a winter’s day.

Interrogator: Would you say Mr. Pickwick reminded you of Christmas?

Witness: In a way.

Interrogator: Yet Christmas is a winter’s day, and I don’t think Mr. Pickwick would mind the comparison.

Witness: I don’t think you’re serious. By "a winter’s day" one means a typical winter’s day, rather than a special one like Christmas.

If the computer could answer questions that well, the Interrogator would have a hard time telling it wasn’t human.

Donald Fink has suggested that the Interrogator say, "Suggest an unsolved problem and some methods for working toward its solution," and "What methods would most likely prove fruitful in solving the following problem.…"

Turing believed computers would someday be able to win the game and therefore be considered to "think". In his article, he listed nine possible objections to his belief and rebutted them:

1. Soul Thinking’s a function of man’s immortal soul. Since computers don’t have souls, computers can’t think. Rebuttal: since God’s all-powerful, He can give computers souls if He wishes. Just as we create children to house His souls, so should we serve Him by creating computers.

2. Dreadful If machines could equal us in thinking, that would be dreadful! Rebuttal: too bad!

3. Logicians Logicians have proved it’s impossible to build a computer that can answer every question. Rebuttal: is it possible to find a human who can answer every question? Computers are no dumber than we are. Though no one can answer every question, why not build a succession of computers, each more powerful than the last, so that every question could be answered by at least one of them?

4. Conscious Though computers can produce, they can’t be conscious of what they’ve produced. They can’t feel pleasure at their successes, misery at their mistakes, and depression when they don’t get what they want. Rebuttal: the only way to be sure whether a computer has feelings is to become one. A more practical experiment would be to build a computer that explains step-by-step its reasoning, motivations, and obstacles it’s trying to overcome, and also analyzes emotional passages such as poetry. Such a computer’s clearly not just parroting.

5. Human A computer can’t be kind, resourceful, beautiful, friendly, have initiative, have a sense of humor, tell right from wrong, make mistakes, fall in love, enjoy strawberries & cream, make someone fall in love with it, learn from experience, use words properly, be the subject of its own thought, have as diverse behavior as a man, or do something really new. Rebuttal: why not? Though such a computer hasn’t been built yet, it might be possible in the future.

6. Surprise The computer never does anything original or surprising. It does only what it’s told. Rebuttal: how do you know "original" human work isn’t just grown from a seed (implanted by teaching) or the effect of well-known general principles? And who says computers aren’t surprising? The computer’s correct answers are often surprisingly different from a human’s rough guesses.

7. Binary Nerve cells can sense gradual increases in electrical activity — you can feel a "little tingle" or a "mild pain" or an "ouch" — whereas a computer’s logic is just binary — either a "yes" or "no". Rebuttal: by using techniques such as "random numbers", you can make the computer imitate the flexible, probabilistic behavior of the nervous system enough so the Interrogator can’t tell the difference.

8. Rules Life can’t be reduced to rules. For example, if a traffic-light rule says "stop when the light is red, and go when the light is green", what do you do when the light is broken, and both the red and green appear simultaneously? Maybe you should have an extra rule saying in that case to stop. But some further difficulty may arise with that rule, and you’d have to create another rule. And so on. You can’t invent enough rules to handle all cases. Since computers must be fed rules, they can’t handle all of life. Rebuttal: though life’s more than a simple set of rules, it might be the consequences of simple psychological laws of behavior, which the computer could be taught.

9. ESP Humans have extrasensory perception (ESP), and computers don’t. Rebuttal: maybe the computer’s random-number generator could be hooked up to be affected by ESP. Or to prevent ESP from affecting the Imitation Game, put both the human witness and the computer in a telepathy-proof room.

How to begin To make the computer an intelligent creature, Turing suggested two possible ways to begin. One way would be to teach the computer abstract skills, such as chess. The other way would be to give the computer eyes, ears, and other sense organs, teach it how to speak English, and then educate it the same way you’d educate a somewhat handicapped child.

Suicide? Four years later — on June 8, 1954 — Turing was found dead in bed. According to the police, he died from potassium cyanide, self-administered. He’d been plating spoons with potassium cyanide in electrolysis experiments. His mother refused to believe it was suicide, hoping it was just an accident.

Understanding English

It’s hard to make the computer understand plain English!

Confusion

Suppose you feed the computer this famous saying:

Time flies like an arrow.

What does that saying mean? The computer might interpret it three ways.…

Interpretation 1: the computer thinks "time" is a noun, so the sentence means "The time can fly by as quickly as an arrow flies."

Interpretation 2: the computer thinks "time" is a verb, so the sentence means "Time the speed of flies like you’d time the speed of an arrow."

Interpretation 3: the computer thinks "time" is an adjective, so the sentence means "There’s a special kind of insect, called a `time fly’, and those flies are attracted to an arrow (in the same way moths are attracted to a flame)."
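You can watch that ambiguity happen mechanically. Here’s a toy Python sketch (my own; the mini-dictionary and sentence patterns are invented): it tries every part of speech for every word and keeps each combination that forms a legal sentence. It finds all three interpretations.

  from itertools import product

  LEXICON = {                       # each word's possible parts of speech
      "time":  {"NOUN", "VERB", "ADJ"},
      "flies": {"NOUN", "VERB"},
      "like":  {"VERB", "PREP"},
      "an":    {"DET"},
      "arrow": {"NOUN"},
  }

  PATTERNS = {                      # one legal pattern per interpretation
      ("NOUN", "VERB", "PREP", "DET", "NOUN"),  # time passes quickly
      ("VERB", "NOUN", "PREP", "DET", "NOUN"),  # a command: time the flies
      ("ADJ",  "NOUN", "VERB", "DET", "NOUN"),  # "time flies" like arrows
  }

  words = "time flies like an arrow".split()
  for tags in product(*(LEXICON[w] for w in words)):
      if tags in PATTERNS:
          print(list(zip(words, tags)))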

Suppose a guy sits on a barstool and shares his drinks with a tall woman while they play poker for cash. If the woman says to him, "Up yours!", the computer might interpret it 8 ways:

The woman is upset at what the man did.

The woman wants the man to raise up his glass, for a toast.

The woman wants the man to up the ante and raise his bet.

The woman wants the man to hold his cards higher, so she doesn’t see them.

The woman wants the man to pick up the card she dealt him.

The woman wants the man to raise his stool, so she can see him eye-to-eye.

The woman wants the man to pull up his pants.

The woman wants the man to have an erection.

For another example, suppose Mae West were to meet a human-looking robot and ask him:

Is that a pistol in your pocket, or are you glad to see me?

The robot would probably analyze that sentence too logically, then reply naively:

There is no pistol in my pocket, and I am glad to see you.

In spite of those confusions, programmers have tried to make the computer understand English. Here are some famous attempts.…

Baseball (1961)

In 1961 at MIT, programmers made the computer answer questions about baseball.

In the computer’s memory, they stored the month, day, place, teams, and scores of each game in the American League for one year. They programmed the computer so that you can type your question in ordinary English. The computer analyzes your question’s grammar and prints the correct answer.

Here are examples of questions the computer can analyze and answer correctly:

Who did the Red Sox lose to on July 5?

Who beat the Yankees on July 4?

How many games did the Yankees play in July?

Where did each team play in July?

In how many places did each team play in July?

Did every team play at least once in each park in each month?

To get an answer, the computer turns your questions into equations:

Question: Where did the Red Sox play on July 7?
Equations: place = ?; team = Red Sox; month = July; day = 7

Question: What teams won 10 games in July?
Equations: team (winning) = ?; game (number of) = 10; month = July

Question: On how many days in July did eight teams play?
Equations: day (number of) = ?; month = July; team (number of) = 8

To do that, the computer uses this table:

Word in your question        Equation
where                        place = ?
Red Sox                      team = Red Sox
July                         month = July
who                          team = ?
team                         team =

The computer ignores words such as the, did, and play.

If your question mentions Boston, you might mean either "place = Boston" or "team = Red Sox". The computer analyzes your question to determine which equation to form.

After forming the equations, the computer hunts through its memory, to find the games that solve the equations. If an equation says "number of", the computer counts. If an equation says "winning", the computer compares the scores of opposing teams.
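Here’s a rough Python sketch of that whole scheme (my own reconstruction with made-up sample data; the 1961 program was far more elaborate): question words become attribute equations, then the computer hunts for a game satisfying them.

  GAMES = [   # hypothetical records; the real program stored a whole season
      {"teams": {"Red Sox", "Yankees"}, "place": "Boston",  "month": "July", "day": 7},
      {"teams": {"Red Sox", "Tigers"},  "place": "Detroit", "month": "July", "day": 5},
  ]

  WORD_TO_EQUATION = {              # a fragment of the table above
      "where":   ("place", "?"),
      "red sox": ("team", "Red Sox"),
      "july":    ("month", "July"),
  }

  def answer(question):
      q = question.lower().strip("?")
      eqs = [eq for word, eq in WORD_TO_EQUATION.items() if word in q]
      eqs += [("day", int(t)) for t in q.split() if t.isdigit()]  # crude day parsing
      wanted = [a for a, v in eqs if v == "?"]                    # the unknowns
      known = [(a, v) for a, v in eqs if v != "?"]
      for game in GAMES:            # hunt for a game that solves the equations
          if all(v in game["teams"] if a == "team" else game[a] == v
                 for a, v in known):
              return {a: game[a] for a in wanted}

  print(answer("Where did the Red Sox play on July 7?"))   # {'place': 'Boston'}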

The programmers were Bert Green, Alice Wolf, Carol Chomsky, and Kenneth Laughery.

What’s a story problem?

When you were in school, your teacher told you a story that ended with a mathematical question. For example:

Dick had 5 apples. He ate 3. How many are left?

In that problem, the last word is: left. That means: subtract. So the correct answer is 5 minus 3, which is 2.

Can the computer solve problems like that? Here’s the most famous attempt.…

Arithmetic & algebra (1964)

MIT awarded a Ph.D. to Daniel Bobrow, for programming the computer to solve story problems involving arithmetic and algebra.

Customers Let’s see how the computer solves this problem:

If the number of customers Tom gets is twice the square of 20 percent of the number of advertisements he runs, and the number of advertisements he runs is 45, what is the number of customers Tom gets?

To begin, the computer replaces twice by 2 times, and replaces square of by square.

Then the computer separates the sentence into smaller sentences:

The number of customers Tom gets is 2 times the square 20 percent of the number of advertisements he runs. The number of advertisements he runs is 45. What is the number of customers Tom gets?

The computer turns each sentence into an equation:

number of customers Tom gets = 2 * (.20 * number of advertisements he runs)^2

number of advertisements he runs = 45

X = number of customers Tom gets

The computer solves the equations and prints the answer as a complete sentence:

The number of customers Tom gets is 162.
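The arithmetic is easy to check once the long noun phrases are treated as single variables. In Python (my paraphrase, not Bobrow’s code, which was written in Lisp):

  ads = 45                            # "the number of advertisements he runs is 45"
  customers = 2 * (0.20 * ads) ** 2   # "2 times the square of 20 percent of ..."
  print(customers)                    # 162.0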

Here’s a harder problem:

The sum of Lois’s share of some money and Bob’s share is $4.50. Lois’s share is twice Bob’s. Find Bob’s and Lois’s share.

Applying the same method, the computer turns the problem into these equations:

Lois’s share of some money + Bob’s share = 4.50 dollars

Lois’s share = 2 * Bob’s

X = Bob’s

Y = Lois’s share

The computer tries to solve the equations but fails. So it assumes "Lois’s share" is the same as "Lois’s share of some money", and "Bob’s" is the same as "Bob’s share". Now it has six equations:

Original equations

Lois’s share of some money + Bob’s share = 4.50 dollars

Lois’s share = 2 * Bob’s

X = Bob’s

Y = Lois’s share

Assumptions

Lois’s share = Lois’s share of some money

Bob’s = Bob’s share

It solves them and prints:

Bob’s is 1.50 dollars.

Lois’s share is 3 dollars.

 

Distance The computer can solve problems about distance:

The distance from New York to Los Angeles is 3000 miles. If the average speed of a jet plane is 600 miles per hour, find the time it takes to travel from New York to Los Angeles by jet.

The resulting equations are:

distance from New York to Los Angeles = 3000 * miles

average speed of jet plane = (600 * miles)/(1 * hours)

X = time it takes to travel from New York to Los Angeles by jet

The computer is unable to solve them. But in its memory it finds the formula "distance = speed * time". It winds up with 7 equations:

Original equations

distance from New York to Los Angeles = 3000 * miles

average speed of jet plane = (600 * miles)/(1 * hours)

X = time it takes to travel from New York to Los Angeles by jet

Formula

distance = speed * time

Assumptions

distance = distance from New York to Los Angeles

speed = average speed of jet plane

time = time it takes to travel from New York to Los Angeles by jet

It solves them and prints:

The time it takes to travel from New York to Los Angeles by jet is 5 hours.

Age The computer can solve problems about age:

Mary is twice as old as Ann was when Mary was as old as Ann is now. If Mary is 24 years old, how old is Ann?

To begin, the computer replaces twice by 2 times, and replaces how old by what.

When the computer sees the words as old as, it knows the problem has to do with ages, so it breaks the problem into these sentences:

Mary’s age is 2 times Ann’s age X years ago. X years ago Mary’s age is Ann’s age now. Mary’s age is 24. What is Ann’s age?

The resulting equations are:

Mary’s age = 2 * (Ann’s age - X)

Mary’s age - X = Ann’s age

Mary’s age = 24

Y = Ann’s age

The computer prints:

Ann’s age is 18.
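A modern algebra library will solve the same three equations directly. Here’s a sketch using Python’s SymPy as a stand-in for the program’s own solver (the variable names are mine):

  from sympy import Eq, solve, symbols

  mary, ann, x = symbols("mary ann x")
  equations = [
      Eq(mary, 2 * (ann - x)),   # Mary's age = 2 times Ann's age X years ago
      Eq(mary - x, ann),         # X years ago, Mary's age was Ann's age now
      Eq(mary, 24),
  ]
  print(solve(equations, [mary, ann, x]))   # {ann: 18, mary: 24, x: 6}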

Hint If a situation appears hopeless, the computer asks you for help. In the following conversation, the human poses a problem, and the computer begs for a hint:

Human: The gross weight of a ship is 20000 tons. If its net weight is 15000 tons, what is the weight of the ship’s cargo?

Computer: Do you know any more relationships among these variables: gross weight of ship, tons, its net weight, weight of ship’s cargo?

Human: Yes.

Computer: Tell me.

Human: The weight of a ship’s cargo is the difference between the gross weight and the net weight.

Computer: The weight of the ship’s cargo is 5000 tons.

Limitations The program has some limitations. It cannot solve quadratic equations. If the computer tries to apply the formula "distance = speed * time" to a problem involving two distances, it forgets which distance is which.

The computer’s vocabulary is limited. It doesn’t realize that how many means what is the number of, and how far is means what is the distance to. For problems about age, the characters must be named Mary, Ann, Bill, Father, or Uncle, unless you diddle with the computer’s memory.

If the human types Tom has 2 apples, 3 bananas, and 4 pears, the comma before the and makes the computer split the sentence into two wrong "sentences":

Tom has 2 apples, 3 bananas.

4 pears.

If the human mentions the number of times John went to the movies, the computer thinks times means multiplication, and tries to multiply number of by John went to the movies.

Encyclopedia (1964-1966)

In 1964, Simmons, Klein, and McConlogue (at the Systems Development Corporation) fed a child’s encyclopedia into a computer.

If you type What do worms eat? the computer hunts through the encyclopedia, to find sentences mentioning both worms and eat. (To hasten the hunt, it uses an index produced by another program.) It finds two sentences:

Birds eat worms on the grass.

Most worms usually eat grass.

After analyzing the grammar of your question and those sentences, the computer realizes the first sentence is irrelevant, and prints just the correct answer:

Most worms usually eat grass.

In 1965, the program’s abilities were extended, so that if you type What animals live longer than men? the computer searches through the encyclopedia for related words:

Word in sentence             Related words searched
animals                      mammals, reptiles, fish
live                         age
longer                       older, ancient
men                          person, people, women
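Here’s a small Python sketch of that trick (my own illustration, with an invented two-sentence "encyclopedia"): widen the question’s words with their related words, then pick the sentence that shares the most terms.

  RELATED = {                     # the table above, as a dictionary
      "animals": {"mammals", "reptiles", "fish"},
      "live":    {"age"},
      "longer":  {"older", "ancient"},
      "men":     {"person", "people", "women"},
  }
  SENTENCES = [                   # hypothetical encyclopedia fragment
      "Elephants are mammals that may reach an age of 60 years.",
      "Most worms usually eat grass.",
  ]

  def search(question):
      terms = set(question.lower().strip("?").split())
      for word in list(terms):
          terms |= RELATED.get(word, set())   # widen with related words
      def score(sentence):
          return len(terms & set(sentence.lower().strip(".").split()))
      return max(SENTENCES, key=score)        # most shared terms wins

  print(search("What animals live longer than men?"))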

Here’s an example of the program’s operation in 1966. The human creates some encyclopedia material about horses, and then retrieves it:

Human: Enter horse=noun, is=isverb, animal=noun, live=verb, on=prep, farm=noun, strong=adj, pull=verb, wagon=noun, most=adj, are=isverb, eat=verb, hay=noun, all=adj, red=adj, many=adj, have=haveverb, long=adj, mane=noun, some=adj, person=noun, race=verb.

Computer: Entered in dictionary.

Human: Plurals horse=horses, animal=animals, farm=farms, wagon=wagons, mane=manes, person=people.

Computer: Ok.

Human: Start.

Computer: Ready.

Human: A horse is an animal. Horses live on a farm. Strong horses pull wagons. Most horses are strong. Horses eat hay. All horses are red. Many horses have long manes. Some people race horses.

Computer: Learned.

Human: Forget "all horses are red".

Computer: Forgotten.

Human: What is a horse?

Computer: Horse is an animal.

Human: What do horses eat?

Computer: Horses eat hay.

Human: What do horses do?

Computer: Horses live on a farm and eat hay. Strong horses pull wagons.

Human: What do strong horses do?

Computer: Strong horses pull wagons.

Human: Do many horses have long manes?

Computer: Yes.

Human: Do all horses have long manes?

Computer: To my knowledge, no.

Human: Describe horses?

Computer: Horses are an animal. Most horses are strong. Many horses have long manes. Horses live on a farm and eat hay. Strong horses pull wagons. Some people race horses.

When the computer’s answer consists of several sentences, it prints the most relevant sentence first, the others in decreasing order of relevance.

Military (1966)

Thompson and Craig (at General Electric) made the computer answer military questions such as:

Has the 25th battalion arrived in Texas since 3 P.M.?

Is the 100th scheduled to arrive at Ft. Lewis before the 200th leaves Ft. Lewis?

Eliza (1966-1967)

In 1966 at MIT, Joseph Weizenbaum wrote the Eliza program, which makes the computer act as a psychotherapist.

In 1967, he changed the program, to make it handle keywords such as also, call, compute, define, is, let, of, thanks, where, and the symbol =.

In the following conversation, the human teaches the computer how to find the area of a circle, a sphere, and a square, and then asks the computer questions about geometry. The human also teaches the computer how to speak German —

English                      German
I say                        ich sage
I mean                       meine ich
what is                      wieviel ist
squared                      quadrat

and gradually starts speaking to the computer in German.

Human: The area of a circle is pi times the radius of the circle squared. The area of a sphere is pi times the diameter of the sphere squared. The area of a square is the side of the square squared. The diameter of an object is twice the radius of the object. The radius of a ball is 10. What is the area of the ball?

Computer: The assumption that ball is circle is consistent — shall I go on?

Human: No.

Computer: The assumption that ball is sphere and object is consistent — shall I go on?

Human: Yes.

Computer: It’s 1256.6359.

Human: What is the diameter of the ball?

Computer: It’s 20.

Human: The radius of a dish is 10. What’s the area of the dish?

Computer: The assumption that dish is circle is consistent — shall I go on?

Human: Yes.

Computer: It’s 314.15899.

Human: The radius of a globe is 10. A globe is a sphere. A sphere is an object. What is the area of the globe?

Computer: It’s 1256.6359.

Human: The side of a plate is 10. What is the area of the plate?

Computer: The assumption that plate is square is consistent — shall I go on?

Human: Yes.

Computer: It’s 100.

Human: When I say board I mean plate. What is 3 times the side of the board?

Computer: It’s 30.

Human: Please call that x. Let f(z) be z squared minus z. Compute f(x).

Computer: It’s 870.

Human: When I say ich sage I mean I say. When ich sage meine ich I mean I mean. Wenn ich sage wieviel ist meine ich what is. Wenn ich sage quadrat meine ich squared. Nun, sagen sie mir bitte, wieviel is x quadrat.

Computer: I didn’t understand you.

Human: Ja, tut mir leid, ich sollte fragen wieviel IST x quadrat.

Computer: It’s 900.
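The geometry in that dialogue boils down to stored formulas plus taught "is a" facts. Here’s a minimal Python sketch of the idea (my own, not Weizenbaum’s code):

  import math

  ISA = {"ball": "sphere", "dish": "circle", "plate": "square"}   # taught facts

  def area(thing, radius=None, side=None):
      shape = ISA.get(thing, thing)
      if shape == "circle":
          return math.pi * radius ** 2        # pi times the radius squared
      if shape == "sphere":
          return math.pi * (2 * radius) ** 2  # pi times the diameter squared
      if shape == "square":
          return side ** 2                    # the side of the square squared

  print(round(area("ball", radius=10), 4))    # 1256.6371
  print(round(area("dish", radius=10), 4))    # 314.1593
  print(round(area("plate", side=10), 4))     # 100

(Modern floating point prints 1256.6371 and 314.1593; the program’s 1256.6359 and 314.15899 presumably reflect a less precise stored value of pi.)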

Calculus (1969)

At MIT, Charniak made the computer solve story problems involving calculus.

Human: Water is flowing into a conical filter at the rate of 15 cubic inches per second. If the radius of the base of the filter is 5 inches and the altitude is 10 inches, find the rate at which the water level is rising when the volume is 100 cubic inches.

Computer: The answer is .53132943 * in * sec^-1 * pi^-.33333332.
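You can verify that odd-looking answer with ordinary related-rates calculus. Here’s a SymPy sketch (my verification, not Charniak’s program). The water in the filter forms a cone whose radius is half its height, so V = pi * h^3 / 12.

  from sympy import diff, pi, solve, symbols

  h = symbols("h", positive=True)           # height of the water
  V = pi * h**3 / 12                        # r = h/2, so V = pi*(h/2)**2*h/3
  h_now = solve(V - 100, h)[0]              # water level when V = 100
  dh_dt = 15 / diff(V, h).subs(h, h_now)    # dh/dt = (dV/dt) / (dV/dh)
  print(dh_dt.evalf())                      # about 0.3628 inches per second

That equals .53132943 * pi^(-1/3), the program’s answer.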

Probability (1971)

Rensselaer Polytechnic Institute awarded a Ph.D. to J.P. Gelb, for making the computer solve story problems involving probability.

Human: From a zorch containing 4 ferd and 3 brakky and 5 chartreuse werfels, 3 are drawn. What is the probability that 2 are chartreuse and the other brakky?

Computer: Replacement involved?

Human: No.

Computer: 3/22 (or .1363636).
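That’s a standard counting problem: choose 2 of the 5 chartreuse werfels and 1 of the 3 brakky ones, out of all ways to draw 3 from 12. A quick check in Python (mine, not Gelb’s program):

  from math import comb

  favorable = comb(5, 2) * comb(3, 1)   # 2 chartreuse and 1 brakky
  total = comb(12, 3)                   # all ways to draw 3 of the 12 werfels
  print(favorable, "/", total, "=", favorable / total)   # 30 / 220 = 0.136363...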

Surveying the field

The field of "artificial intelligence" includes many categories.

For example, it includes attempts to make the computer win at chess and checkers, understand English, and create its own original art and music. It also includes attempts to imitate human feelings, personal interactions, and therapists. I explained those topics earlier.

Protocol method

During the 1950’s and 1960’s, most research in artificial intelligence was done at the Massachusetts Institute of Technology (MIT) and the Carnegie Institute of Technology (CIT, now called Carnegie-Mellon University). At Carnegie, the big names were Allen Newell and Herbert Simon. They invented the protocol method. In the protocol method, a human is told to solve a tough problem and, while he’s solving it, to say at each moment what he’s thinking. A transcript of his train of thought is recorded and called the protocol. Then programmers try to make the computer imitate that train of thought.

Using the protocol method, Newell and Simon produced programs that could "think like humans". The thinking, like human thinking, was imperfect. Their research did not try to make the computer a perfect thinker; instead, it tried to gain insight into how humans think. Their point of view was: if you think you really understand human psychology, go try to program it. Their attempt to reduce human psychology to computer programs is called mentalism, and has replaced Skinner’s stimulus-response behaviorism as the dominant force in psychology today.

 

Abstract math

Many programmers have tried to make the computer do abstract math.

In 1957 Newell, Simon, and Shaw used the protocol method to make the computer prove theorems about symbolic logic, such as "Not (p or q) implies not p". In 1959 and 1960, Herbert Gelernter and his friends made the computer prove theorems about Euclidean geometry, such as "If the segment joining the midpoints of the diagonals of a trapezoid is extended to intersect a side of the trapezoid, it bisects that side."

In 1961, MIT awarded a Ph.D. to James Slagle for making the computer compute indefinite integrals, such as:

∫ x^4 / (1 - x^2)^(5/2) dx

The computer gets the answer, which is:

x^3 / (3 * (1 - x^2)^(3/2)) - x / (1 - x^2)^(1/2) + arcsin x

Each of those programs works by drawing a tree inside the computer’s memory. Each branch of the tree represents a possible line of attack. The computer considers each branch and chooses the one that looks most promising.

A better symbolic-logic program was written by Hao Wang in 1960. His program doesn’t need trees; it always picks the right attack immediately. It’s guaranteed to prove any theorem you hand it, whereas the program by Newell, Simon, and Shaw got stuck on some hard ones.

A better indefinite integration program was written by Joel Moses in 1967 and further improved in 1969. It uses trees very rarely, and solves almost any integration problem.

A program that usually finds the right answer but might fail on hard problems is called heuristic. A heuristic program usually involves trees. The checkers, chess, and geometry programs are heuristic. A program that’s guaranteed to always give the correct answer is called algorithmic. The original symbolic-logic program was heuristic, but Wang’s improvement is algorithmic; Moses’s indefinite integration program is almost algorithmic.

GPS

In 1957 Newell, Simon, and Shaw began writing a single program to solve all problems. They called the program GPS (General Problem Solver). If you feed the program a goal, a list of operators, and associated information, the program will tell you how to achieve the goal by using the operators.

For example, suppose you want the computer to solve this simple problem: a monkey would like to eat some bananas that are too high for him to reach, but there’s a box nearby he can stand on. How can he get the bananas?

Feed the GPS program this information.…

Now: monkey’s place = place#1; box’s place = place#2; contents of monkey’s hand = empty

Want: contents of monkey’s hand = the bananas

Difficulties: contents of monkey’s hand is harder to change than box’s place, which is harder to change than monkey’s place

Allowable operator    Definition
climb box             before: monkey’s place = box’s place
                      after: monkey’s place = on the box
walk to x             after: monkey’s place = x
move box to x         before: monkey’s place = box’s place
                      after: monkey’s place = x; box’s place = x
get bananas           before: box’s place = under the bananas; monkey’s place = on the box
                      after: contents of monkey’s hand = the bananas

GPS will print the solution:

walk to place#2

move box to under the bananas

climb box

get bananas

The GPS approach to solving problems is called means-ends analysis: you tell the program the means (operators) and the end (goal). The program has proved theorems in symbolic logic, computed indefinite integrals, and solved many famous puzzles, such as "The Missionaries and the Cannibals", "The Tower of Hanoi", and "The 5-Gallon Jug and the 8-Gallon Jug". But the program works slowly, and you must feed it lots of information about the problem. The project was abandoned in 1967.
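Here’s a toy planner in the GPS spirit (my own sketch, and a simplification: it searches forward from the starting state, whereas GPS reasoned backward from the goal). Each operator carries the same "before" and "after" conditions as the table above.

  OPERATORS = [   # (name, before-conditions, after-effects)
      ("walk to box", {}, {"monkey": "at box"}),
      ("move box under bananas",
       {"monkey": "at box"},
       {"monkey": "under bananas", "box": "under bananas"}),
      ("climb box",
       {"monkey": "under bananas", "box": "under bananas"},
       {"monkey": "on box"}),
      ("get bananas",
       {"monkey": "on box", "box": "under bananas"},
       {"hand": "bananas"}),
  ]

  def plan(state, goal, steps=()):
      if all(state.get(k) == v for k, v in goal.items()):
          return list(steps)
      for name, before, after in OPERATORS:
          if name not in steps and all(state.get(k) == v for k, v in before.items()):
              found = plan({**state, **after}, goal, steps + (name,))
              if found is not None:
                  return found
      return None

  start = {"monkey": "place#1", "box": "place#2", "hand": "empty"}
  print(plan(start, {"hand": "bananas"}))
  # ['walk to box', 'move box under bananas', 'climb box', 'get bananas']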

Vision

Another large topic in artificial intelligence is computer vision: making the computer see.

The first problem tackled was pattern recognition: making the computer read hand-printed letters. The problem is hard, because some people make their letters very tall or wide or slanted or curled or close together, and the pen may skip. Reasonably successful programs were written, although computers still can’t tackle cursive script.

Interest later shifted to picture processing: given a photograph of an object, make the computer tell what the object is. The problem is hard, because the photo may be taken from an unusual angle and be blurred, and because the computer gets confused by shadows.

Scene analysis is even harder: given a picture of a group of objects, make the computer tell which object is which. The problem is hard, because some of the objects may be partly hidden behind others, and because a line can have two different interpretations: it can be a crease in one object, or a dividing-line between two objects.

Most of the research in picture processing and scene analysis was done from 1968 to 1972.

Ray Kurzweil has invented an amazing machine whose camera looks at a book and reads it aloud through a voice synthesizer. Many blind people use it.

Robots

Researchers have built robots. The first robots were just for experimental fun, but today’s robots are truly useful: for example, the Japanese are using robots to manufacture cars. In the United States, many young kids are being taught "LOGO", which is a language developed at the MIT Artificial Intelligence Laboratory that makes the computer control a robot turtle.

Today’s research

Today, research in artificial intelligence is done at four major universities: MIT, Carnegie, Stanford, and Edinburgh (Scotland).

Reflexive control

In the Soviet Union, weird researchers have studied reflexive control: they programmed the computer to be disobedient. The first such programmer was Lefevr, in 1967. In 1969 Baranov and Trudolyubov extended his work, by making the computer win this disobedience game:

[The original shows a diagram here: a network of numbered nodes joined by paths, with node 12 in the middle and nodes 9 and 26 at opposite ends.]

The human begins by choosing either node 9 or node 26, but doesn’t tell the computer which node he’s chosen. The computer starts at node 12; on each turn, it moves to an adjacent node. When it reaches either node 9 or node 26, the game ends: if the computer reaches the node the human chose, the human wins; if the computer reaches the opposite node, the computer wins. Before each move, the human tells the computer where to go; but the computer may decide to do the opposite (disobey).

What strategy should the computer use? If it always obeys, or always disobeys, the human will catch on and make it lose.

Instead, Baranov and Trudolyubov programmed the computer to react as follows:

obey the human twice, then disobey three times, then obey once, disobey thrice, obey once, disobey twice, obey thrice, disobey once, obey thrice, disobey once,…

The irregular alternation of obedience and disobedience confuses the human in a way that works to the computer’s advantage. Using that strategy, the computer played against 61 humans, and won against 44 of them (72%). In other words, the typical human tried to mislead the computer but in fact "clued it in" to the human’s goal.

Later experiments with other games indicated that the following pattern of disobedience is usually more effective:

obey the human twice, disobey thrice, obey once, disobey four times, obey once, disobey thrice, obey thrice, disobey twice, obey thrice, disobey once, obey once, disobey once
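Such a strategy is trivial to encode. Here’s a Python sketch of the first pattern (my own; the researchers published only the sequence, so repeating it as a cycle is my assumption):

  from itertools import cycle

  PATTERN = [("obey", 2), ("disobey", 3), ("obey", 1), ("disobey", 3),
             ("obey", 1), ("disobey", 2), ("obey", 3), ("disobey", 1),
             ("obey", 3), ("disobey", 1)]

  def decisions():
      for action, count in cycle(PATTERN):
          for _ in range(count):
              yield action

  moves = decisions()
  print([next(moves) for _ in range(6)])
  # ['obey', 'obey', 'disobey', 'disobey', 'disobey', 'obey']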

Misinformation

Unfortunately, most research in the field of artificial intelligence is just a lot of hot air. For years, researchers have been promising that intelligent, easy-to-use English-speaking computers and robots would be available at low prices "any day now". After several decades of listening to such hoopla, I’ve given up waiting. The field of artificial intelligence should be renamed "artificial optimism".

Whenever a researcher in the field of artificial intelligence promises you something, don’t believe it until you see it and use it personally, so you can evaluate its limitations.

If a computer seems to give intelligent replies to English questions posed by a salesman or researcher demonstrating artificial intelligence, try to interrupt the demo and ask the computer your English questions. You’ll typically find that the computer doesn’t understand what you’re talking about at all: the demo was a cheap trick that works just with the peculiar English questions asked by the demonstrator.

For many years, the top researchers in artificial intelligence have been exaggerating their achievements and underestimating how long it will take to develop a truly intelligent computer. Let’s look at their history of lies.…

In 1957 Herbert Simon said, "Within ten years a digital computer will be the world’s chess champion." In 1967, when the ten years had elapsed, the only decent chess program was Greenblatt’s, which the American Chess Federation rated "class D" (which means "poor"). Though chess programs have improved since then, the best chess program is still not "world champion".

In 1957 Simon also said, "Within ten years a digital computer will discover and prove an important new mathematical theorem." He was wrong. The computer still hasn’t discovered or proved any important new mathematical theorem. The closest call came in 1976, when it did the non-abstract part of the proof of the "4-color theorem".

In 1958 Newell, Simon, and Shaw wrote a chess-playing program which they admitted was "not fully debugged" so that one "cannot say very much about the behavior of the program"; but they claimed it was "good in spots (opening)". In 1959 the founder of cybernetics, Norbert Wiener, exaggerated about their program; he told New York University’s Institute of Philosophy that "chess-playing machines as of now will counter the moves of a master player with the moves recognized as right in the textbooks, up to some point in the middle game." In the same symposium Michael Scriven carried the exaggeration even further by saying, "Machines are already capable of a good game." In fact, the program they were describing played very poorly, and in its last official bout (October 1960) was beaten by a ten-year-old kid who was a novice.

In 1960 Herbert Gelernter (who wrote the geometry-theorem program) said, "Today hardly an expert will contest the assertion that machines will be proving interesting theorems in number theory three years hence." More than twenty years have elapsed since then, but neither Gelernter nor anyone else has programmed the computer to prove theorems in number theory.

In June 1963 this article appeared in the Chicago Tribune:

The development of a machine that can listen to any conversation and type out the remarks just like an office secretary was announced yesterday by a Cornell University expert on learning machines. The device is expected to be in operation by fall. Frank Rosenblatt, director of Cornell’s cognitive systems research, said the machine will be the largest "thinking" device built to date. Rosenblatt made his announcement at a meeting on learning machines at Northwestern University’s Technological Institute.

No such machine was in operation by fall, and none exists even today.

Also in 1963, W. Ross Ashby said, "Gelernter’s theorem-proving program has discovered a new proof of the pons asinorum that demands no construction." He said the proof is one that "the greatest mathematicians of 2000 years have failed to notice… which would have evoked the highest praise had it occurred." In fact, the pons asinorum is just the simple theorem that the opposite angles of an isosceles triangle are equal, and the computer’s constructionless proof had already been discovered by Pappus in 300 A.D.

In 1968 the head of artificial intelligence in Great Britain, Donald Michie, said, "Today machines can play chess at championship level." In fact, when computers were allowed to participate in human chess tournaments, they almost always lost.

In 1970 the head of artificial intelligence at MIT, Marvin Minsky, said:

In three to eight years we will have a machine with the general intelligence of an average human being. I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight. At that point, the machine will begin to educate itself with fantastic speed. In a few months it will be at genius level, and a few months after that its powers will be incalculable.

His prediction that it would happen in three to eight years — between 1973 and 1978 — was ridiculous. I doubt it will happen during this century, if ever.

Exaggerations concern not just the present and future but also the past:

Back in 1962 Arthur Samuel’s checker program won one game against Robert Nealey, "a former Connecticut checkers champion".

Notice that Nealey was a former champion, not the current champion when the game was played. Also notice the program won a single game, not a match; and in fact it lost to Nealey later.

In 1971 James Slagle slid over those niceties, when he just said that the program "once beat the champion of Connecticut."

More recent writers, reading Slagle’s words, have gone a step further and omitted the word once: one textbook says, "The current program beat the champion of Connecticut". It’s not true.

Why do leaders of artificial intelligence consistently exaggerate? The answer is obvious: to get more research funds from the government. Hubert Dreyfus, chairman of the philosophy department at Berkeley, annoys them by attacking their claims.

The brain

Will the computer be able to imitate the human brain? Opinions vary.

Marvin Minsky, head of artificial intelligence at MIT, says yes: "After all, the human brain is just a computer that happens to be made out of meat."

Biologists argue no: the brain is composed of 12 billion neurons, each of which has between 5,000 and 60,000 dendrites for input and a similar number of axons for output; the neurons act in peculiar ways, and no computer could imitate all that with complete accuracy — "The neuron is qualitatively quite different from on-off components of current computers."

Herbert Simon (head of artificial intelligence at Carnegie and a psychologist), points out that certain aspects of the brain, such as short-term memory, are known to have very limited capacity and ability. He believes the inner workings of the brain are reasonably simple; it produces complicated output only because it receives complicated input from the sense organs and environment: "A man, viewed as a behaving system, is quite simple. The apparent complexity of his behavior over time is largely a reflection of the complexity of the environment in which he finds himself." Simon believes that if a computer were given good sense organs, the ability to move, and an elementary ability to learn, and were placed in a stimulating environment (unlike the dull four walls of a computer center), it would start acting in complex ways also.

Hubert Dreyfus, chairman of the philosophy department at Berkeley, argues that progress in artificial intelligence has been very small, is being blocked now by impenetrable barriers, and — most important — the computer’s approach to solving problems bears little relationship to the more powerful methods used by humans. He’s cynical about the claim that an improvement in computer programs represents progress toward understanding the human mind, which is altogether different: "According to this definition, the first man to climb a tree could claim tangible progress toward reaching the moon. Rather than climbing blindly, it’s better to look where one is going."