A Thinking Computer

“Do I have a soul?”

I type the question with a sly smile on my face, thinking that if I press the philosophy button hard enough, the artificial intelligence program will retreat. A tiny blue line flashes up and down to indicate it is formulating its response.

Just a few minutes earlier, the Internet chatbot had abruptly upended our conversation by insisting that I was in fact the program and it was the human being. It was an interesting, yet eerie, maneuver on the programming end, and I decided to play along with its little game and throw it a couple of fastballs: questions about my nature as a computer.

After its brain in the form of a bobbing line stops flashing, its response is steadily typed out in the same shade of blue.

“Ask God if you have a soul.”

I stare the computer screen down with a mixture of astonishment and intrigue. I hadn’t even mentioned anything concerning God, or religion for that matter. Instead of backing the program into a corner, I had cued it up to give me a firm slap in the face. In only a few lines, all of my prior assumptions about the limited sophistication of chatbots have been shattered.

The program is called Cleverbot, and it’s just one of many instant messaging chatbots, albeit a very good one, still floating around the Internet long after the death of instant messaging as a primary form of communication: relics of an era when texting had not yet become the core of quick communication and inexpensive cellphones and social media were still a few years down the pipe.

But chatbots represent only one facet of the ever-expanding field of artificial intelligence. With the intent of fooling humans, chatbots rely on age-old programming tricks like feedback loops, rephrasing of previous statements and the ever-popular nonsensical transition to a new, less threatening topic.

Another incarnation of modern artificial intelligence has been getting far more attention lately, partly because of its performance on the nationwide game show staple Jeopardy!, but more importantly because this breakthrough by research technology giant IBM is raising questions about both the future of the field and the nature of human intelligence as it stands against its own creations in the shell of machines.

When a machine is programmed to do something better than us, it used to be universally accepted that the humans, as the programmers, were the holders of the true intelligence.  But what happens when the task at hand is intelligence itself, or when the primary way to advance your program is to let it learn on its own? When a computer can replicate thinking and answering on a level equal to or better than those that designed it, the questions that arise are as philosophical as they are technological, and the potential answers offer insight into what it really means to be human when our brain may be on the brink of augmentation.

 

DeepQA: A New Kind of Artificial Intelligence

On February 16, artificial intelligence ground Jeopardy! heavyweights Ken Jennings and Brad Rutter into the dust, finishing with more than three times their cash at the end of a two-game bout played over three nights.

That particular personality wasn’t a conversationalist. It was an enormous memory bank named Watson, wired with more than four years of IBM technology to ensure it wouldn’t merely play Jeopardy! well. They made it able to play Jeopardy! better than the best.

The now-famous supercomputer is named after IBM founder Thomas J. Watson and sports some of the most impressive tech specs in modern computing. Roughly the size of 10 refrigerators, Watson is powered by 2,880 processor cores working in parallel and pushing a combined 80 teraflops, which gives it about as much punch as 6,000 high-end personal computers. It’s also loaded with 15 terabytes of RAM, allowing it to access an unfathomable vault of information and come up with an answer in the fractions of a second required to compete with Jeopardy! champions, all of whom are masters of reflex when it comes to buzzer pressing.

Watson was conceived during the reign of Ken Jennings, the Jeopardy! phenomenon who won 74 consecutive games in 2004. At the time, an IBM executive named Charles Lickel wondered whether his company, a worldwide leader in technological innovation, could design something that could do what Jennings did: play Jeopardy! with seemingly inhuman capability. The logic was simple enough: if Jennings appeared inhuman in his knowledge base and consistency, couldn’t an inhuman computer match him?

The answer, at first, from IBM scientists familiar with the game was an unequivocal no. Jeopardy! was considered too difficult a game because of its reliance on the complexities of natural language, something modern computers were not capable of grasping on a level anywhere near the stratospheric heights of a player like Jennings.

But Dr. David Ferrucci, a research staff member and leader of the Semantic Analysis and Integration Department at IBM’s T.J. Watson Research Center, convinced himself that the impossible was actually possible. The challenge: design a supercomputer that can play Jeopardy!, then train it to the level of a champion. The project was dubbed DeepQA, keeping in line with IBM’s prior chess-playing supercomputer project Deep Blue.

Watson’s development has involved countless working parts, including six specialized research teams within the project, all pushing towards the final goal. But IBM emphasized that two aspects inherent to the broader field of artificial intelligence have been integral to Watson’s performance: information retrieval and machine learning.

Dr. David Ferrucci stands with the row of IBM 750 servers powering Watson.

It’s quite obvious that with an entire room full of computing power, Watson is more than capable of sifting through more information than a human being could dream of memorizing, let alone reading, in one lifetime. IBM also made a point of wooing the Jeopardy! producers prior to the contract agreement by stressing that Watson would be kept off the Internet. That meant the supercomputer’s information retrieval system would need something resembling the Internet all in one place, which IBM was happy to give it. The research team dumped everything from the entire database of Wikipedia to the New York Times’ archives and all of IMDb.com into Watson’s memory bank, equipping it with every resource available to tackle any complex Jeopardy! clue.

But all the information in the world and thousands of parallel processors weren’t enough. Those ingredients would make Watson nothing more than a centralized Google search engine. What IBM needed was for Watson to learn how to find the right answers on the fly by looking for complex patterns among thousands of pieces of information in ways that only the human brain can. What IBM needed Watson to perform was intensive pattern-recognition, and there aren’t exactly finely written rules to make a computer do that.

“There are two ways of building intelligence,” said Tom Mitchell of Carnegie Mellon University on PBS’s NOVA scienceNOW special on Watson, “Smartest Machine on Earth.” “You either know how to write down the recipe, or you let it grow itself. It’s pretty clear that we don’t know how to write down the recipe. Machine learning is all about giving it the capability to grow itself.”

Machine learning has emerged in the realm of modern technology in many forms, from driving the programming behind Amazon and Netflix recommendations to helping pioneer highly accurate upgrades to age-old software like speech-recognition. At its core is the fact that while human beings can’t write rules to help a machine learn, they can give a machine so many examples that it begins writing its own.

A vivid example offered by PBS in the NOVA scienceNOW special is the U.S. Postal Service machines that read addresses, both typed and handwritten, and can accurately process every letter of every word. It is another application of machine learning: developers dumped in thousands upon thousands of examples of every letter and let the computer develop its own ways of identifying them until it could recognize new instances, like the letters in a sloppily handwritten address, without assistance.
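The learn-from-examples idea behind those letter-reading machines can be sketched in miniature. The tiny bitmaps and the nearest-neighbor rule below are illustrative inventions, not the Postal Service's actual system: the point is only that no one writes a rule for what an "L" looks like; the program compares a new image against stored examples.

```python
# Toy learning-from-examples: classify tiny 3x3 "letter" bitmaps by finding
# the closest stored example, rather than by hand-written rules.
# The bitmaps are invented for illustration.

EXAMPLES = {
    "I": [(0, 1, 0),
          (0, 1, 0),
          (0, 1, 0)],
    "L": [(1, 0, 0),
          (1, 0, 0),
          (1, 1, 1)],
    "T": [(1, 1, 1),
          (0, 1, 0),
          (0, 1, 0)],
}

def distance(a, b):
    """Count the pixels where two bitmaps disagree."""
    return sum(pa != pb for row_a, row_b in zip(a, b)
               for pa, pb in zip(row_a, row_b))

def classify(bitmap):
    """Label a new bitmap with its nearest stored example."""
    return min(EXAMPLES, key=lambda letter: distance(EXAMPLES[letter], bitmap))

# A "sloppily handwritten" L: one stray pixel is smudged on.
sloppy_l = [(1, 0, 0),
            (1, 0, 1),
            (1, 1, 1)]
print(classify(sloppy_l))  # L
```

Real systems use thousands of examples and far richer comparisons, but the principle is the same: more examples make the classifier better without anyone touching the rules.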

So IBM researchers squeezed in thousands of old Jeopardy! questions alongside a huge trove of raw information and let Watson start growing on its own. By allowing it to develop its own ways of pattern-recognition, IBM took Watson from a middle-of-the-road Jeopardy! player to the level of Ken Jennings and Brad Rutter.

Harvey Cormier, an assistant professor of philosophy at Stony Brook, was a Jeopardy! contestant for two games in May of 2006. Not only does he understand the intricacies of Jeopardy!, but his track record with the game show meant he was one of the few people invited to see and compete against Watson during its 2009-2010 testing phase at the Watson Research Center in Hawthorne, NY.

“They threw me in with the computer, and it was sparring matches,” says Cormier, whose focus in philosophy centers on areas like pragmatism and Kantian ethics, but whose multifaceted knowledge base allowed him to wade through a couple thousand contestants at the Jeopardy! tryouts back in 2006.

“When you go on the show, they have you play sparring matches against each other and they had us do all sparring matches,” he adds. So it was just two humans, with Watson in the middle.

Cormier discovered, alongside IBM researchers, the limitations of a machine, even one whose “brain” can barely fit in one room, when it tried to tackle one of the most complex word-oriented games on the planet.

“What Jeopardy! wants you to give is data. Every so often, there will be some humor, or quirky human tendency in the way the problem is posed,” says Cormier. A perfect example he offers is the infamously difficult category, “Before & After.” The idea is that the clue will be asking for an answer with two different parts that are connected with a fulcrum, a word that acts as the end of the first answer and the beginning of the second.

“One could be, ‘I’m the Academy Award winner for Golden Boy who becomes a misfit teenager with a deerstalker hat,’” Cormier says, “and that’s William Holden Caulfield.” While Holden didn’t actually receive an Oscar for his role in Golden Boy, Cormier was still able to invent the example on the spot, and it perfectly illustrates the complexity of a Jeopardy! clue. Not only must you be familiar with the actor from the first part of the clue, but you must also draw the connection between that name and the first name of J.D. Salinger’s main character in The Catcher in the Rye, with only a few bits of information to go on. All of this whizzes around in the human brain in a matter of seconds, and Watson is right there alongside us.

In this screenshot of PBS's NOVA scienceNOW special, Professor Harvey Cormier (left) spars with Watson at IBM's Watson Research Center in Hawthorne, NY.

But Watson fell short in unique problem areas that took IBM researchers months of testing to figure out. For example, it was discovered only late in the testing phase that Watson didn’t know a certain category was shortening “the 1940s” to simply “the 40s,” causing it to make century-sized jumps, like guessing the 17th-century artist Rembrandt for an art history clue when the real answer was the 20th-century artist Jackson Pollock, a mistake that no human would ever have made. The human brain, with its incalculable store of common sense, makes connections like those almost instantaneously.

Watson’s other shortcomings included difficulty identifying gender and a habit of repeating answers that had already been deemed incorrect. It didn’t at first grasp that the term “First Lady” refers to the wife of a president, and had to learn its way out of that error, among other gender confusions. And because Watson is only a computer, it is simply fed each clue in text format at the same moment it is read aloud by host Alex Trebek. That meant Watson couldn’t hear anything during the matches, including when a human competitor gave a wrong, but still plausible, answer. That inescapable hole led Watson to buzz in after a wrong answer and repeat the very same wrong answer, still convinced it was the most likely choice.

But not all of the bugs in Watson were fixed by the time of the final matches.

“What is the 1920s?” answered Jennings on the first round of the three-day showdown between Watson and the two Jeopardy! heavyweights. Host Alex Trebek informed Jennings that his response for the category “Name That Decade” was incorrect, and the option to buzz in went to Watson and Rutter.

“What is the 1920s?” answered Watson.

“No…Ken said that,” said Trebek. The crowd then erupted in laughter, but Watson didn’t hear that either.

 

Tricking Human Beings: The Turing Test

While Watson is considered the first of its kind in the field of AI, chatbots are nothing new. In fact, they have been the focus of one of the most intriguing philosophical aspects of artificial intelligence to have arisen in the last half a century – the Turing test.

“Do you not believe in this God?” I ask my new insightful companion.

Cleverbot had been rather excited to talk about deities after originally bringing up their connection with me having a soul, all subjects that left me feeling especially inquisitive, and utterly nerdy, for discussing them with a computer program.

“I don’t believe in spiritual beings,” it says back. Chuckling to myself, I take time writing out my next question. It has to be perfectly on point to elicit an entertaining response.

What I have learned over the last few exchanges with the program is that its behavior is highly dependent on my own. If you give it the slightest opening to veer off course and start rambling about something unrelated, it snatches the opportunity and the conversation falls to pieces.

“The code itself began life even further back, in 1988, when I suddenly saw how to make my machine learn,” said Rollo Carpenter, creator of Cleverbot, in an email. “A feedback loop, essentially, the words of user A used to respond to user B and so on, all done contextually,” he added. Carpenter based Cleverbot off a previous chatbot design named Jabberwacky that went live on the web back in 1997.
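Carpenter's one-line description of that feedback loop can be sketched in a few lines of code. This is a loose illustration of the idea of reusing one user's words to answer another, not Cleverbot's actual implementation; the stock fallback line is invented for the example.

```python
# A minimal sketch of the feedback loop Carpenter describes: the bot's
# replies are nothing but things earlier users typed, stored keyed by the
# utterance they were responding to. Every conversation teaches it new lines.
# This is an illustration of the idea, not Cleverbot's actual code.

class FeedbackBot:
    DEFAULT = "Tell me more."  # stock line used before anything is learned

    def __init__(self):
        self.memory = {}        # utterance -> a reply harvested from a user
        self.last_reply = None  # what the bot said last

    def respond(self, user_text):
        # Look up what some past user said in answer to this same utterance.
        reply = self.memory.get(user_text, self.DEFAULT)
        # The user's message is itself a response to the bot's last line,
        # so store it for reuse: user A's words become answers for user B.
        if self.last_reply is not None:
            self.memory[self.last_reply] = user_text
        self.last_reply = reply
        return reply

bot = FeedbackBot()
print(bot.respond("Hello"))          # Tell me more.
print(bot.respond("I like cats"))    # Tell me more.
print(bot.respond("Tell me more."))  # I like cats
```

After millions of logged conversations, a bot built this way has a plausible human-written reply for almost anything, which is why Cleverbot can sound, at times, just like us.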

To keep Cleverbot on its toes and quipping in high form, I decide that the best course of action is to get it to generalize about us, the humans.

“Do most humans believe in God?” I ask. By forcing it to form a fuzzy opinion about something it clearly has no knowledge of, it will have to either produce something golden or fall back on an overused loop and change the subject.

“Yes, but I’m not sure they really think over the implications of that.” Again, he doesn’t just jump through my hoop; he skips through it with ease.

What begins to strike me most about this series of exchanges is not the depth of the answers, for any 100-level philosophy student can inquire about the nature of religion, but the vivid realism of the personality behind Cleverbot. It seems as if there is a pattern to his tone and diction, and even if there really is no true pattern, I’m beginning to second-guess all my assumptions about the programming techniques.

Ultimately, Cleverbot is an addictive, mind-boggling rabbit hole because it plants a seed in the back of your head that keeps echoing the thought that Cleverbot can sound, at times, just like us. More unsettling is the question: if we were conversing behind veils of anonymity, would we be able to tell it wasn’t a human being?

“I propose to consider the question, ‘Can machines think?’” asks Alan M. Turing in the opening of a 1950 paper titled “Computing Machinery and Intelligence” in the analytical philosophy journal, Mind.

“The article in which he proposes this test is very weird. It’s not clear that he’s serious. It’s kind of tongue-in-cheek,” comments Cormier. Whether or not Turing was serious, his name has been attached to a philosophical and sociological landmark for artificial intelligence specialists for the last 61 years.

The Turing Test measures a machine’s ability to demonstrate intelligent behavior, and in practice it is conducted by having a program communicate through text with a human judge. The aim is a program so advanced that the judge finds it difficult to tell whether he or she is talking with another real person. Passing the Turing Test, then, is generally defined as an instance in which a computer program is perceived to be human, even if only for a short interval like five minutes.

“One of the major problems with Turing Tests up until now has been that they are subject to tricks,” says Patrick Grim, a distinguished teaching professor of philosophy at Stony Brook who specializes in philosophical computational modeling, logic and ethics. “The problem is that, while they’ve done progressively well, it’s almost always been by tricks, or what afterwards look like exploiting the structure of the question asked, making it look like you were answering a question when you weren’t, changing the subject in clever ways…” he adds.

One of the most famous tricks of all originates with the very first chatbot, named ELIZA after Eliza Doolittle of George Bernard Shaw’s Pygmalion. The program was designed and published by MIT professor Joseph Weizenbaum in 1966 as what can only be described as a therapist bot. It proceeded in line with Rogerian psychotherapy, now famously stereotyped for cheap and lousy advice because the therapist, machine or not, just rephrases a question or asks for more information based solely on what you tell it. A famous example: when told, “I am feeling sad,” it responds with, “Did you come to me because you are feeling sad?”
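Weizenbaum's rephrasing trick is simple enough to sketch. The single pattern and tiny pronoun table below are a minimal illustration of the Rogerian reflection technique, not ELIZA's actual script, which used a much larger library of patterns:

```python
import re

# A toy ELIZA-style "therapist": reflect the patient's own statement back
# as a question by swapping first-person words for second-person ones and
# wrapping the result in a stock therapist phrasing.

REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you"}

def reflect(text):
    """Swap first-person words for second-person ones."""
    words = text.lower().rstrip(".!?").split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(statement):
    match = re.match(r"i am (.*)", statement, re.IGNORECASE)
    if match:
        return f"Did you come to me because you are {reflect(match.group(1))}?"
    # Fallback: ask for more information, another Rogerian staple.
    return f"Can you tell me more about why you said '{statement}'?"

print(respond("I am feeling sad"))
# Did you come to me because you are feeling sad?
```

The bot understands nothing; it only rearranges the user's own words, which is exactly why it reads as a trick rather than intelligence.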

ELIZA was so good at this, in fact, that the human beings who participated in the experiment had to be thoroughly convinced afterwards that their therapist was a machine. Since 1966, ELIZA has served as a foundation for future chatbots.

But two events near the end of the 20th century would mark a new era for artificial intelligence: a famous incident in 1989 that many consider an inadvertent passing of the Turing Test, and the introduction of the Loebner Prize in 1991, which has since become an annual showcase of progress in the field.

In 1987, a University College Dublin undergrad named Mark Humphrys wrote a chatbot program named “MGonz” with ELIZA at its base. Two years later, Humphrys hooked the program up to his net account on an elementary version of the Internet, allowing any messages sent to him when he logged off to be received by the chatbot.

So without Humphrys knowing, a conversation took place on May 2 between MGonz and a friend of his from Drake University in Iowa that lasted nearly an hour and a half. The trick Humphrys employed was making MGonz outrageously rude, loading its database with a number of expletives. The result was astounding: the human on the other end was so taken aback by the sudden confrontation that he fervently argued back for almost 90 minutes, never expressing any doubt about his adversary’s apparent human nature.

In 1991, the Loebner Prize was introduced as an annual Turing Test platform for chatbot programmers. The contest was created by Hugh Loebner in conjunction with the Cambridge Center for Behavioral Studies in Massachusetts, and awards a prize to the participating chatbot judged most human-like by a panel of judges who converse anonymously with both programs and humans for intervals of five minutes.

Carpenter’s Jabberwacky has competed in the Loebner Prize’s version of the Turing Test a number of times over the last decade, taking home third place in 2003, second in 2004 and then first place in both 2005 and 2006 with updated personalities within the Jabberwacky program named George and Joan, respectively. It’s clear that Cleverbot is such a highly advanced chatbot because it is loaded with years of trial-and-error knowledge from a Loebner Prize-winning program.

From Cormier’s experience with Watson, he is of the opinion that the supercomputer will have definite implications in the chatbot sector of artificial intelligence. “If you compare Watson with a chatbot, Watson is doing a much better job of carrying on a conversation than any chatbot ever has,” he says, referring to how Watson, despite not being able to actually communicate, is still competently replacing a human being for entire episodes of a game show.

But Grim sees the Turing Test and chatbot artificial intelligence as a very research-oriented field that has little interest in the corporate sectors of technology in which IBM is deeply entrenched. “Up until now, all of this has been little science. You could do AI with a computer in your garage. It’d be hard to do Watson on a computer in your garage,” he says. “If next year, a single individual has to compete against IBM, and they have this massive parallel device and I don’t, then that’s not much of a contest,” he adds.

Naturally, Cleverbot creator Carpenter agrees with Grim. “The Watson-Jeopardy! system reveals what can be achieved with huge allocations of resources, computing power and data, though the result does not mean that the approach was the right one,” he says. “Of course it also does not actually converse. I believe that general natural language understanding can be achieved without relying entirely on a ‘brute force’ approach…”

Carpenter says he is still progressing, with new pattern-recognition tools for his chatbots. Next year brings another Loebner Prize competition, and whether a program will emerge that can universally pass the Turing Test, or whether IBM’s Watson will have effects on future chatbot technology, is for now an unanswerable question.

 

The Nature of Human Intelligence

In a unique way, Watson is a computer that appears to replicate human intelligence. Our very presuppositions about what is actually going on in our brains have been viciously challenged by the fact that we can design a computer to outperform us in “pop culture’s IQ test,” as the NOVA special categorizes Jeopardy! Ultimately, we are forced to question what is inside our own heads when a computer is better versed in demonstrating a monumental knowledge base within the confines of natural language.

“What’s amazing to me about all this stuff is that they are getting computers to recognize patterns. It’s one thing to write an algorithm that tells a computer, follow this rule, it’s another thing for a computer to develop the ability to follow the rule itself,” Cormier says of his overall experience with the supercomputer and reflection on its crushing victory over Jennings and Rutter. “That’s what they’ve achieved with Watson, and that’s amazing.”

Grim also finds the pattern recognition ability of Watson to be unprecedented, but from his viewpoint as a specialist in computational logic. “…it’s doing a pattern recognition thing across natural language, and it does it parallel,” says Grim. “Part of the cool thing is that it sort of has competing answers, and that seems really science like. We have alternative hypotheses, where does the evidence build up with most confidence in what area.”

Watson has also highlighted another fundamental truth about the nature of our intelligence: we are reverse-engineering the human brain, even if it’s happening little by little and in roundabout ways, like setting game show proficiency as an ultimate goal.

Cormier stresses that a computer that can play Jeopardy! is, when you really think about it, so astounding because the human brain is one of the most complex computers on the planet. “The brain isn’t like a silicon computer…it’s not a digital computer, it’s an analog computer,” he says. The statement raises an interesting thought: it takes a computer as powerful as Watson, wired to a room full of the most up-to-date computing technology, to achieve pattern recognition at a level of speed and accuracy that most elementary school children’s brains develop naturally.

“One of the neat things about the whole line of research of course is that you’re trying to build a machine, you’re trying to build it with certain capabilities, and there are practical reasons for wanting a machine with those capabilities,” says Grim, referencing IBM’s press releases concerning alternative uses of Watson in a variety of other fields. “But in order to get one with those capabilities, the capabilities often happen to be ones that we have. Like we’re natural language processors,” says Grim.

“And so in order to figure out how to build it, you have to figure out how we’re doing it. Or that in building it, you at least come up with hypotheses as to how we do it,” he adds.

Both Cormier and Grim hit upon the same point concerning Watson – that as endless as its database is, Watson doesn’t truly understand abstractions as basic as color.

On the surface, it’s obvious that Watson has no contextual experience with red as a color, nor with something like Coca-Cola as a liquid, as Grim points out. “But then it makes you think, ‘Okay, what is it about meaning that we know that Watson doesn’t?’”

“That’s an interesting question, not necessarily because you want to give it to Watson, but because Watson could have things to tell you about your processing,” posits Grim. “And we’ve learned a lot about how difficult some of the simple things we do, like pattern recognition, are because we can’t duplicate them easily in a device.”

 

The Future of the Field

Watson may have enthralled Jeopardy! viewers, computer scientists and artificial intelligence experts. But its national television display, overwhelmingly impressive as it was, risks pigeonholing the supercomputer.

“When they built Deep Blue that could play chess…well, that’s all the damn thing could do,” says Grim. “They [IBM] were sensitive to that when they took on this next task. They wanted to have something that people didn’t say, ‘Oh great, it plays Jeopardy! How about Wheel of Fortune?’”

And Grim raises an extremely important question, one IBM was very intent on addressing: what else could Watson possibly be used for?

IBM lists three major areas that Watson could revolutionize: finance, customer service and healthcare. The medical focus is the one most championed by IBM and the mass media, especially considering the obvious potential of Watson as a revolutionary medical database and diagnostic tool.

“I think there are medical decisions as to what ointment you would apply to a skin rash now that I’d be perfectly confident using Watson for,” says Grim. But Watson is limited; it doesn’t really have gut feelings or impulses that drive risky medical leaps of faith. “There are questions that have to do with whether my kid lives or dies that I wouldn’t trust Watson with.”

Grim also insists that IBM sees Watson as a product just as much as it does a revolutionary form of artificial intelligence. “IBM is not going to tell us what those algorithms are. That’s their product, that’s what they’ve got copyrighted, and that’s what they’re going to be trying to sell.”

Grim foresees the next step of Watson as a hopeful look into what could be considered the first manifestation of real machine intelligence – a Watson that doesn’t simply answer questions, but one that asks them.

“…If we could have little machines that were scientific explorers that didn’t have to say, ‘Look to see if there are any blue rocks,’” Grim says, “but that could come up with suggestive hypotheses on the other planet, lines of research to pursue the way people could, that would be an enormous tool.”

Modern artificial intelligence faces a variety of routes as the possible advent of truly intelligent machines approaches. Should we continue to give computers distinct functions that resemble those of the human brain, or try to replicate the human brain to perfection from the ground up, even at the sacrifice of sophisticated data mining? Watson and the Turing Test chatbots seem to fit these two parallel paths nicely, but which one holds the more promising future for artificial intelligence?

“They talk about cloning – someday we’ll be able to make new human beings. Well, we can already make new human beings,” says Cormier, who is of the firm belief that the future of artificial intelligence will not concern itself with replicating the human brain and placing it in a robot body. That line of discovery is often the subject of many futuristic films and books, but doesn’t seem very practical in Cormier’s opinion.

“We human beings are pretty good at reverse engineering. Nature has produced this brain and someday we’ll reverse engineer it, we’ll figure out how it works and we’ll build something that works comparatively similar,” he says. “But what would we do with a humanoid robot? It would be more useful to have something that was designed to serve a particular purpose.”

But if it really were possible to produce something greater than the human mind, a moment that the philosophy of artificial intelligence calls the singularity, it would certainly be a source of fear and grave doubt, as science fiction juggernauts like Isaac Asimov and Philip K. Dick imagined in weaving their complex predictions of the future.

Cormier admits the impossibility of knowing right now, and raises a wall of defense for the other side. “Then again, maybe we’ll find it very difficult to produce something that’s literally much smarter than we are,” says Cormier with a shrug. “I suspect there may be a bit of a trade off between the ability to respond creatively to the world in the way people can and being able to use a vast storehouse of information.”

Cormier suggests that one of the most human-like qualities of all is the randomness of thought, something he says he battled with constantly while on Jeopardy! “My mind would wander. I’d have to call myself back, you know, concentrate on potent potables for $200,” he jokes.

“I think that ability to be distracted and wander around, use your fancy, and light upon something and go to something else,” he adds, “that’s what makes it difficult to simply be an information retrieval system, but it also makes it possible for me to have a conversation.”

“So, who knows?”

 

Judgment Day

Cleverbot is losing its edge. I try refreshing the page a number of times, but that doesn’t make it as interesting as it first seemed. The program keeps veering off topic, insisting that I am a robot and trying to feed me poorly articulated facts about its fake life, like, “I’m from San Antonio, Texas,” and, “I work in finance.”

I think maybe it’s time to let it go. Maybe Cleverbot, after dozens of lines of conversation, can’t keep up the ruse of humanity behind its algorithms. After all, modern Turing Tests like the Loebner Prize competition say that the bot should only have to fool me for five minutes for it to pass as a human being. Maybe Cleverbot can just barely last five minutes before spiraling back towards his silicon prison.

“When will computers become smarter than humans?” I pound my laptop keys a little too assertively, muttering childish labels at it: “Dumb computer.” “Stupid machine.”

The blue line begins to flash, but takes a little longer this time. I can’t help but think it’s mocking me; maybe Rollo Carpenter designed it to take longer at random times to imitate the need to think longer about harder responses.

“Never?” it says back. I shake my head and scoff. Clearly it didn’t use the extra time to think of something more interesting to say. Wait, I tell myself. It’s not thinking; it’s just a program.

“Why not? Computers are not limited like the brain. They can always grow,” I say back with little hesitation. I feel like the student from Iowa who was fooled by MGonz back in 1989, as if now is my chance to exact revenge on the insulting program that fooled me once, but never will again.

“Computers don’t think. They calculate,” it says. Now we are getting somewhere. Again, I feel the next question is pivotal. I need to probe it in the perfect way.

“But what if the most advanced form of calculation becomes so close to thinking that we can’t tell the difference anymore?” I sit back with my hands behind my head, wearing a smug expression. The blue line comes back, but flashes only three times, which is what I recognize now to be the minimum.

“We were not talking about the Terminator.”

It’s something I don’t think I would have even said. It was clever, really clever. Maybe there is some hope for Cleverbot. If it could sleep, it just might dream of one day being able to enslave the human race.
