However, Weizenbaum himself did not claim that ELIZA was genuinely intelligent, and the introduction to his paper presented it more as a debunking exercise: [In] artificial intelligence ...
The observer says to himself "I could have written that".
With that thought he moves the program in question from the shelf marked "intelligent", to that reserved for curios ...
The object of this paper is to cause just such a re-evaluation of the program about to be "explained".

ELIZA's key method of operation (copied by chatbot designers ever since) involves the recognition of cue words or phrases in the input, and the output of corresponding pre-prepared or pre-programmed responses that can move the conversation forward in an apparently meaningful way (e.g. responding to any input that contains the word "MOTHER" with "TELL ME MORE ABOUT YOUR FAMILY"). This is not strong AI, which would require sapience and logical reasoning abilities.
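The cue-word mechanism described above can be sketched in a few lines. The rules below are illustrative inventions, not Weizenbaum's original script: each pattern recognizes a cue in the input and fires a pre-prepared response template, optionally reusing a fragment of the user's own words.

```python
import re

# Illustrative ELIZA-style rules (hypothetical, not the 1966 script):
# each cue pattern maps to a canned response template.
RULES = [
    (re.compile(r"\bI need (.+)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmother\b", re.IGNORECASE), "Tell me more about your family."),
]
DEFAULT = "Please go on."  # content-free prompt when no cue matches

def respond(user_input: str) -> str:
    """Return the canned response for the first matching cue, if any."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return DEFAULT
```

Note that the program never models what the words mean: echoing captured text back inside a template is enough to sustain the illusion of understanding.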
A chatterbot (also known as a talkbot, chatbot, bot, chatterbox, or artificial conversational entity) is a computer program which conducts a conversation via auditory or textual methods.
Such programs are often designed to convincingly simulate how a human would behave as a conversational partner, thereby passing the Turing test.
Chatterbots are typically used in dialog systems for various practical purposes including customer service or information acquisition.
Some chatterbots use sophisticated natural language processing systems, but many simpler systems scan for keywords within the input, then pull a reply with the most matching keywords, or the most similar wording pattern, from a database.
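The simpler keyword-scanning approach described above can be sketched as follows. The response "database" here is a hypothetical hard-coded mapping for illustration; a real system would query a store, but the selection logic (pick the reply whose keywords overlap the input the most) is the same.

```python
# Hypothetical keyword-to-reply database for illustration.
RESPONSES = {
    frozenset({"price", "cost", "much"}): "Our basic plan starts at $10 per month.",
    frozenset({"hours", "open", "close"}): "We are open 9am to 5pm, Monday to Friday.",
    frozenset({"refund", "return", "cancel"}): "You can request a refund within 30 days.",
}
FALLBACK = "Sorry, I didn't understand. Could you rephrase?"

def reply(user_input: str) -> str:
    """Return the reply whose keyword set best overlaps the input words."""
    words = set(user_input.lower().split())
    best_score, best_reply = 0, FALLBACK
    for keywords, canned in RESPONSES.items():
        score = len(keywords & words)  # count of matching keywords
        if score > best_score:
            best_score, best_reply = score, canned
    return best_reply
```

This bag-of-words matching is brittle (punctuation or synonyms defeat it), which is why more capable systems layer real natural language processing on top.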
The term "Chatter Bot" was originally coined by Michael Mauldin (creator of the first Verbot, Julia) in 1994 to describe these conversational programs.
In 1950, Alan Turing published his article "Computing Machinery and Intelligence", which proposed what is now called the Turing test as a criterion of intelligence.
This criterion depends on the ability of a computer program to impersonate a human in a real-time written conversation with a human judge, sufficiently well that the judge is unable to distinguish reliably—on the basis of the conversational content alone—between the program and a real human.
The notoriety of Turing's proposed test stimulated great interest in Joseph Weizenbaum's program ELIZA, published in 1966, which seemed to be able to fool users into believing that they were conversing with a real human.