Thread: Mud-AI/NLP
Old 08-09-2003, 11:01 PM   #14
Fharron
New Member
 
Join Date: Jun 2003
Posts: 26
Most chatterbots operate in a manner analogous to John Searle's famous Chinese Letterbox thought experiment, which is often discussed in debates surrounding AI and qualia (conscious experience: what it is actually like to be in a certain state).

Imagine that you are locked in a room. You do not understand Chinese. Through a letterbox in the door come cards printed with various Chinese characters. Resting on a table in the room are a book and a pile of cards bearing other Chinese characters. When a card arrives through the letterbox you look up its character in the book, which indicates the suitable card, from the pile on the table, to send back through the letterbox as a response.

From outside the room it appears that you are answering questions with a thorough understanding of Chinese. The cards coming into the room are questions written in Chinese, and those you are sending out are answers in Chinese. Even though you do not understand Chinese, to those outside the room it appears that you do. In fact you are simply manipulating what are, to you, meaningless characters.

Chatterbots tend to adopt the Chinese Letterbox approach, with various ad hoc functions to parse contextual differences. They never really understand what they are being asked or what the responses they regurgitate imply. Simply put, they generate the illusion of understanding, but only until a knowing subject, someone who truly does understand, identifies an errant response.

Steven: are elephants big

Chatter checks its cards, looking for one with the symbols for elephants and big. It finds one that matches both entries and sends it as output.

Chatter: yes, elephants are big

Steven: is a miniature model of an elephant big

Chatter checks its cards, looking for one with the symbols for elephants, miniature, model, and big. It finds one that matches the symbols for elephants and big, but none that also has the symbols for miniature and model. It therefore identifies this as the closest match and sends it as output.

Chatter: yes, elephants are big
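The failure above can be sketched in a few lines. This is a minimal illustration of the keyword-matching "Chinese Letterbox" chatterbot described here, not any real bot's code; the rule table and function names are invented.

```python
# A naive keyword-matching chatterbot: pick whichever canned response
# shares the most keywords with the question, with no understanding.

RULES = [
    # (required keywords, canned response)
    ({"elephants", "big"}, "yes, elephants are big"),
    ({"elephants", "grey"}, "yes, elephants are grey"),
]

def reply(question: str) -> str:
    words = set(question.lower().split())
    # Choose the rule whose keyword set overlaps the question the most.
    best = max(RULES, key=lambda rule: len(rule[0] & words))
    return best[1]

print(reply("are elephants big"))
# -> yes, elephants are big
print(reply("is a miniature model of an elephant big"))
# -> yes, elephants are big   (the errant "closest match")
```

The second question still lands on the elephant card because "big" is the best overlap available, which is exactly the errant response a knowing subject would spot.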

Perfecting the Chinese Letterbox system with ever more ad hoc contextual information is perhaps not the best route towards believable AI. I believe a more suitable method might be to fully exploit the object-orientated languages computer programs are written in, perhaps in conjunction with the databases already inherent within mud code (mobile, world, player, and object files).

Wouldn’t it be better to leave behind the smoke and mirrors, the hit-and-miss trickery of the stage-bound prestidigitator? Instead of trying to fabricate the appearance of understanding, why not focus development on a text-to-code translator?

For example the statement –

"Down at the docks there is a ship called the Mary Celeste. Anyone who
attempts to approach her has to deal with 5 tough guards. Inside the ship is a huge treasure including the eye of Osiris."

would be roughly translated as follows -

Object mary celeste is not in the mobile's current room
Object mary celeste is in room 100 (the docks)
Object mary celeste is a type of ship object
Room 100 also contains 5 guard objects
The 5 guard objects in room 100 have restrict entry spec procs
The object mary celeste has contents
The contents of mary celeste = eye of osiris
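The translation above could be held as structured data rather than text. Below is one rough sketch under invented names: the `Obj` dataclass, the room number 100, and the field names are all illustrative, not from any actual mud codebase.

```python
# Storing the ship statement as world-database facts instead of prose.
from dataclasses import dataclass, field

@dataclass
class Obj:
    name: str
    obj_type: str   # e.g. "ship", "mobile"
    room: int       # vnum of the room containing the object
    contents: list = field(default_factory=list)

DOCKS = 100  # hypothetical room number for the docks

mary_celeste = Obj("mary celeste", "ship", DOCKS,
                   contents=["eye of osiris"])
# Five guard mobiles in the same room, each of which could carry a
# restrict-entry spec proc.
guards = [Obj(f"guard {i}", "mobile", DOCKS) for i in range(1, 6)]
```

Every line of the rough translation then becomes a simple lookup: the ship's room, its type, and its contents are fields rather than strings to be pattern-matched.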

Which could in turn support the following type of conversation -

Player: I just visited the Mary Celeste.

Player HAS_SEEN the mary celeste object
The object mary celeste is IN_ROOM 100 and is OBJECT_TYPE ship

NPC: She is a fine ship, currently moored at the docks I believe.

Player: Do you know what cargo it is carrying?

The mary celeste object HAS_CARGO
CARGO_CONTENTS of the mary celeste object = eye of osiris

NPC: I believe the eye of osiris is part of its registered cargo.
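A conversation of that type amounts to querying the world database and wrapping the result in a sentence template. The sketch below assumes a hypothetical dictionary-based database and invented helper names; a real mud would query its own world files instead.

```python
# An NPC answering by database lookup rather than keyword matching.
WORLD = {
    "mary celeste": {"type": "ship", "room": 100,
                     "contents": ["eye of osiris"]},
}
ROOM_NAMES = {100: "the docks"}

def npc_describe(obj_name: str) -> str:
    obj = WORLD[obj_name]
    place = ROOM_NAMES[obj["room"]]
    return f"She is a fine {obj['type']}, currently moored at {place} I believe."

def npc_cargo(obj_name: str) -> str:
    cargo = WORLD[obj_name]["contents"]
    return f"I believe the {cargo[0]} is part of its registered cargo."

print(npc_describe("mary celeste"))
# -> She is a fine ship, currently moored at the docks I believe.
print(npc_cargo("mary celeste"))
# -> I believe the eye of osiris is part of its registered cargo.
```

Because the reply is built from the database, it stays correct when the world changes: move the ship to another room and the NPC's answer changes with it, with no new cards to write.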

Mobiles could have their access to the various databases limited by zone number, location, distance, class-specific knowledge, attributes such as Intelligence, and so on. This would narrow the information available to the mobile, eliminating a number of unrealistic responses.
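Such filtering might look like the sketch below. The fact records, the `zone` field, and the Intelligence threshold are invented for illustration; a real implementation would use whatever attributes the mud already tracks.

```python
# Limiting a mobile's knowledge by zone and an Intelligence threshold.
FACTS = [
    {"subject": "mary celeste", "zone": 1, "min_int": 5},
    {"subject": "eye of osiris", "zone": 1, "min_int": 14},  # obscure lore
    {"subject": "dragon lair",  "zone": 7, "min_int": 5},
]

def known_facts(mob_zone: int, mob_int: int) -> list:
    """Return only the facts this mobile could plausibly know."""
    return [f for f in FACTS
            if f["zone"] == mob_zone and mob_int >= f["min_int"]]

# A dim dock-worker in zone 1 knows the ship exists but not its treasure,
# and knows nothing at all about zone 7.
print([f["subject"] for f in known_facts(1, 8)])
# -> ['mary celeste']
```

A sage with Intelligence 18 in the same zone would pass the lore threshold as well, so the same database yields different, and more believable, conversations per mobile.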

The general conversation of mobiles would also be far easier if player input were first translated into a language the program recognizes, before a response was produced. Muds tend to utilize an object-orientated approach, and language itself is largely object-orientated.
 
Language can be categorised as the explanation of the spatial and temporal relationships between a ‘person’ and another ‘person’, a ‘person’ and an object or objects, or an object and other objects.

Mobile: David is here with me.
Mobile: David is holding the apple.
Mobile: The apple is on the table.

Language is also used, sometimes, to make references to the relative spatial and temporal properties of knowing subjects (a person) and objects.

Mobile: David is larger than the apple.

In addition to these basic elements, language also attempts, albeit with a lesser degree of logical accuracy, to explain the emotive and/or subjective relationships between a ‘person’ and other ‘persons’, or a ‘person’ and an object or objects. These explanations usually concentrate upon a single property of a subject or object, or a group of properties unified by a common variable.

Mobile: The apple is green.
Mobile: Green is my favourite colour.
Mobile: I like apples because they are green.
 
Granted, exceptions are sometimes made to this basic framework of language usage. However, for the majority of mundane conversations it tends to hold true. All of the aforementioned pieces of information could easily be translated into an object-orientated computer language.
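As a final sketch, the three kinds of statement above (spatial, comparative, and subjective) can each be expressed as relations between objects. The class and attribute names here are invented purely to illustrate the translation.

```python
# The three statement categories as object relations.
from dataclasses import dataclass, field

@dataclass
class Thing:
    name: str
    size: int          # relative size, for comparisons
    colour: str = ""

@dataclass
class Person(Thing):
    likes: list = field(default_factory=list)

apple = Thing("apple", size=1, colour="green")
david = Person("David", size=50, likes=["green"])

# Spatial: "David is holding the apple."
david_holding = [apple]

# Comparative: "David is larger than the apple."
david_is_larger = david.size > apple.size

# Subjective: "I like apples because they are green."
likes_apple = apple.colour in david.likes

print(david_is_larger, likes_apple)
# -> True True
```

Each mobile utterance then reduces to reading, comparing, or combining object properties, which is precisely the kind of operation mud code already performs.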