Old 07-02-2003, 06:24 PM   #41
JusticeJustinian
New Member
 
Join Date: Jun 2003
Posts: 7
We've already got an algorithm that's theoretically capable of passing the Turing test: neural networks. After all, they're based on our own neurons.

Will one actually be able to pass the Turing test? Not for quite some time.

The qualitative difference between sentience and computational power is the ability to learn.
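
To make "the ability to learn" concrete, here is a toy sketch of the learning step in a single perceptron (the simplest neural-network unit) nudging its weights until it gets the AND function right. The names are mine and purely illustrative; no real neural-net library is involved.

Code:
# Minimal sketch: a single perceptron learning the AND function.
def train_perceptron(samples, epochs=20, lr=0.1):
    w0, w1, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            out = 1 if (w0 * x0 + w1 * x1 + bias) > 0 else 0
            err = target - out           # "learning" = nudging weights by the error
            w0 += lr * err * x0
            w1 += lr * err * x1
            bias += lr * err
    return w0, w1, bias

and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, b = train_perceptron(and_samples)
for (x0, x1), _ in and_samples:
    print(x0, x1, '->', 1 if (w0 * x0 + w1 * x1 + b) > 0 else 0)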

-- Kwon J. Ekstrom
Old 07-08-2003, 12:20 AM   #42
Eagleon
Member
 
Join Date: Apr 2002
Location: Milwaukee, WI
Posts: 147
The thing I think we're missing in current AI is goals, and barriers to those goals: some sort of incentive in the bot's programming to do something, anything, other than respond when the other person types, combined with blocks that would require the AI to develop more than just a 1-2 approach to things. The ability to know whether something is a block or not is also essential. This would not only make it more efficient at attaining the goals assigned to it, it would also make it work through the side topics that humans have to in order to really learn something. Humans can not only learn that 1+2=3, but also the rest of addition, that 1+2=3 is the same process as 5+26=31, and that this is actually both significant and useful for doing other problems.

Assign the goal of language and knowledge, and the bot could eventually see that in order to learn how to speak, it has to pursue the topics the person it is talking to mentions. Assign it the goals of avoiding death and racking up kills in an FPS, and a bot could learn where the powerups are, when to use which guns, and maybe even some more advanced tactics.
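
For what it's worth, that goal-and-reward idea is essentially what reinforcement learning does. A minimal sketch, assuming a one-dimensional corridor with a "powerup" at the far end; all names and numbers are illustrative, not from any particular engine:

Code:
# Toy goal-driven learning: tabular Q-learning on a corridor of 6 cells,
# with a powerup at cell 5. The bot is only told rewards, never the map.
import random

N = 6                 # corridor cells 0..5; powerup at cell 5
ACTIONS = (-1, +1)    # step left, step right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

def step(state, action):
    nxt = max(0, min(N - 1, state + action))
    reward = 10.0 if nxt == N - 1 else -1.0   # goal reward vs. cost of wandering
    return nxt, reward

for episode in range(200):
    s = 0
    while s != N - 1:
        if random.random() < 0.1:             # occasional exploration
            a = random.choice(ACTIONS)
        else:                                 # otherwise act greedily
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r = step(s, a)
        best_next = max(Q[(nxt, act)] for act in ACTIONS)
        Q[(s, a)] += 0.5 * (r + 0.9 * best_next - Q[(s, a)])  # Q-learning update
        s = nxt

# After training, the learned policy heads straight for the powerup:
print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N - 1)])
# -> [1, 1, 1, 1, 1]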
Old 07-08-2003, 08:12 AM   #43
enigma@zebedee
Member
 
Join Date: Mar 2003
Posts: 70
Quote:
Originally Posted by

The Wolfenstein -> UT transition was entirely evolutionary--and quite simple at that. All of the science behind UT has been well understood since the advent of linear algebra. The only obstacle was the capability of the computer to pump out pixels quickly and still have enough cycles to spare for simplistic AI.

The transition from simplistic finite state AI to a program capable of passing the Turing test will require a revolutionary discovery. We do not yet even know how to begin to approach the problem. In other words, unlike realtime graphics rendering, believable AI is not simply a matter of having sufficient processor speed. There is a qualitative difference between sentience and raw computational capacity that we have yet to fully identify.
I was not discussing the graphics, which, as you say, are advanced but evolutionary. I was referring to the AI.

In Wolfenstein you had bad guys that basically walked towards you and shot. That was pretty much it.

In UT you have bots that will work together as a team, wait in ambush, find and use different weapons, etc, etc.
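
For concreteness, that jump is roughly the difference between a hard-wired "walk at the player and shoot" routine and a finite state machine with several behaviours. A minimal sketch, with all state names invented for illustration:

Code:
# Minimal sketch of finite-state bot AI. The bot's whole "intelligence"
# is the transition table inside think(). Purely illustrative.
class Bot:
    def __init__(self):
        self.state = "patrol"

    def think(self, sees_enemy, low_health, near_ambush_spot):
        if self.state == "patrol":
            if near_ambush_spot:
                self.state = "ambush"
            elif sees_enemy:
                self.state = "attack"
        elif self.state == "ambush":
            if sees_enemy:
                self.state = "attack"
        elif self.state == "attack":
            if low_health:
                self.state = "retreat"
            elif not sees_enemy:
                self.state = "patrol"
        elif self.state == "retreat":
            if not sees_enemy:
                self.state = "patrol"
        return self.state

bot = Bot()
print(bot.think(sees_enemy=False, low_health=False, near_ambush_spot=True))   # ambush
print(bot.think(sees_enemy=True,  low_health=False, near_ambush_spot=False))  # attack
print(bot.think(sees_enemy=True,  low_health=True,  near_ambush_spot=False))  # retreat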

Yes, it's a controlled environment, but so long as you don't try to talk to them, you cannot tell the difference between a bot player and a human player much of the time.

Obviously conversation is a harder goal to achieve - but people are making progress towards it.

I agree that true AI (machine sentience) is a good way off yet. On the other hand, convincing AI (passing the Turing test in set situations, for example a convincing peasant in a fantasy world) is not.
Old 07-08-2003, 08:35 AM   #44
Yazracor
New Member
 
Join Date: Apr 2002
Posts: 18
Quote:
Originally Posted by (JusticeJustinian @ July 02 2003,23:24)
We've already got an algorithm that's theoretically capable of passing the Turing test: neural networks. After all, they're based on our own neurons.

Will one actually be able to pass the Turing test? Not for quite some time.

The qualitative difference between sentience and computational power is the ability to learn.

-- Kwon J. Ekstrom
To my knowledge, what is currently termed "neural networks" has not been proven theoretically able to pass the Turing test, especially since a mathematical description of the Turing test does not exist (if I am wrong here, ignore the rest of my post, please).

Also, neural network models, and even more so the neural networks used in production systems, are definitely not based on our own neurons. True, some of the ideas behind them might stem from studying neurons, but they are so different in behaviour that "oversimplified" does not even come close. Even approximating the behaviour of a single neuron more than broadly is currently beyond the state of the art.
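
To underline how oversimplified: the entire "neuron" in a typical artificial network is one weighted sum pushed through a squashing function, as in the illustrative sketch below. Spike timing, dendritic geometry and chemistry are nowhere to be seen.

Code:
# The whole "neuron" of a typical artificial network: a weighted sum
# followed by a logistic squashing function. Purely illustrative.
import math

def artificial_neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))   # output between 0 and 1

print(artificial_neuron([0.5, 0.2], [1.5, -2.0], 0.1))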

That the only difference between sentience and computational power is the ability to learn is, I believe, very, very wrong. There already are systems that can infer from facts and learn from mistakes, but is an expert system or a fuzzy logic system sentient?

What neural networks, expert systems and all other synthetic systems currently lack, and what is an integral part of sentience, is reflection upon their own state, i.e. the ability to extract information about themselves. No neural network can tell you, e.g., what it "looks for" when it classifies images.
Old 07-08-2003, 06:15 PM   #45
shadowfyr
Senior Member
 
Join Date: Oct 2002
Posts: 310
Quote:
Originally Posted by
To my knowledge, what is currently termed "neural networks" has not been proven theoretically able to pass the Turing test, especially since a mathematical description of the Turing test does not exist (if I am wrong here, ignore the rest of my post, please).
You're not wrong. The main issues are a) how to connect different nets, b) how to make them do what they are supposed to, and c) providing the proper level of complexity.

The first issue is simply that they don't have the capacity to adapt the way a real brain can to sudden changes in inputs. You can build a network that can tell a flower from a car, but break apart some of the connections and try to attach a camera of the sort being used to help replace human vision, and the network will self-destruct and stop working. You have to start with the assumption that it will take that type of input, and even then, hook the wrong pins up on the new 'eye' and it gets confused, because things are not where it expects them. The problem is a hundred times worse when you try to take one such network and plug it into the inputs of another. I assume that instead of taking one net's output and feeding it to the other's input, they tried wiring them directly into each other, which may be a stupid mistake anyway.

The second issue is a major complication, because if you are not careful about what information you provide, the network can make associations that are flat-out wrong and completely miss what you were trying to teach it. One famous instance was a network the military tried to teach to spot tanks hidden in forests. Somehow, all the photos with tanks were taken on days that were overcast. The network learned to distinguish between overcast and sunny days and ignored the tanks completely. Oops!

The third issue is tied to some extent to the first. The original assumption when they started building these things was that the human brain was much more connected and homogeneous in function. Since then, both experimentation and studies of individuals with certain types of damage have shown that we don't have one single network, or even 50 specialized ones; even the specialized section dealing with the ability to see distance is broken up into subsections that distinguish real size, distance from the viewer, relationship to other objects, orientation, etc., and even subsections that coordinate these things depending on whether we are consciously describing an object or merely attempting to pick it up. Break the ability to consciously identify size using common illusions, and your hand still knows the correct size and placement of the object, even if you insist that the object looks bigger than it is.

For neural nets to provide more than simple processing will require lots of specialized networks, connected to other localized networks, connected to more general information-organizing networks, and then back the other way through more specific networks to produce the text, sound, etc. that it uses to communicate what is going on. Without a significant change in the type of equipment used to build such a thing, and some way to integrate all the disparate pieces of the final build without the whole mess going literally insane, it won't even produce believable AI. And if you did manage it, the result would be as unique and unreproducible as a real person, requiring that you build the next one entirely from the ground up all over again.



As for the discussion of chat-bot types and more focused, specific concepts: there was a project some time back, and probably still ongoing, called Cyc. At the time I read the article about it, they had managed to get it to a believable four-year-old level. It used a complex database that specifically included the capacity to make complex interconnections between ideas. When not busy learning, it searched through all the material it had been fed for the day, piecing together likely connections and assigning them probabilities of being true based on its past knowledge. The key here is that it never made the kind of assumption a chat bot does, namely that what it knows is infallible and always correct. This allowed it to be smart enough that it could likely learn that someone talking about the death of their uncle is not likely to see him again, but that the death of their friend to a huge troll the day before was temporary.
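
A toy sketch of that idea: facts carry confidences instead of being treated as infallible, and repeated confirmation raises the confidence. The representation and numbers below are mine, not Cyc's.

Code:
# Toy knowledge base: assertions carry confidence, never certainty.
class KnowledgeBase:
    def __init__(self):
        self.facts = {}   # (subject, relation, object) -> confidence

    def tell(self, subj, rel, obj, weight=0.3):
        key = (subj, rel, obj)
        old = self.facts.get(key, 0.0)
        # Each confirmation closes part of the remaining gap to certainty,
        # so a single statement never reaches high confidence on its own.
        self.facts[key] = old + (1.0 - old) * weight

    def ask(self, subj, rel, obj):
        return self.facts.get((subj, rel, obj), 0.0)

kb = KnowledgeBase()
for _ in range(3):
    kb.tell("uncle_death", "is", "permanent")          # heard three times
kb.tell("friend_death_by_troll", "is", "permanent")    # heard only once

print(kb.ask("uncle_death", "is", "permanent"))             # ~0.66
print(kb.ask("friend_death_by_troll", "is", "permanent"))   # 0.3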

Such a system would, however, require its own dedicated server to run (the complex database searching would kill the mud's server); you would need some method of determining what should become common knowledge for the entire AI and what should be specific to only the NPC that learned it (probably requiring an admin to review such things, since that review is also needed to help it confirm the connections it makes between new words and ideas and the things it already knows); and finally, even in a limited environment like a mud, it is unlikely to 'mature' significantly. It will also start out appearing quite stupid and have to get the feel of proper interaction over time. This last bit becomes an issue when the NPC is a shop owner and some player tells it 'give me some bloody lights', but it doesn't yet know that a light and a torch can be equated.
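
A sketch of that last problem, assuming a hypothetical shopkeeper NPC that maps unknown nouns through admin-confirmed equivalences (all names invented):

Code:
# The "give me some bloody lights" problem: unknown nouns resolve through
# a table of equivalences that an admin confirms over time. Illustrative.
stock = {"torch": 12, "sword": 3}
equivalents = {}          # learned word -> stocked word, admin-confirmed

def resolve(noun):
    if noun in stock:
        return noun
    return equivalents.get(noun)   # None until someone teaches the NPC

def handle_request(noun):
    item = resolve(noun)
    if item is None:
        return f"I don't know what a '{noun}' is."   # queue for admin review
    return f"Here is your {item}."

print(handle_request("light"))     # the NPC appears stupid at first
equivalents["light"] = "torch"     # admin confirms light == torch
print(handle_request("light"))     # now it copes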

In any case, such an AI could in time fool a lot of people, but it requires a great deal of time and effort to correct its mistakes and misassociations. It 'may' be possible to do that by giving repeated information more weight than something it hears only once, but that could lead it into an AI form of gullibility and syntactical chaos caused by anyone who figured out they were talking to an AI and decided to play a joke on it by feeding it false information. A problem that only grows worse if what it learns is explicit language, or if it gets told that some player prefers to be called something it isn't yet aware is a cuss word or rude.

Still, such an AI is very possible. The question is: does anyone want to take the time to teach it properly, and when, if ever, do you decide it has learned enough general knowledge to freeze its more complex learning system without having it fall flat on its face the first time it runs into something it can no longer integrate? After all, it is not practical to let its knowledge base expand infinitely, and even in a mud environment there is no certainty it will ever reach a point where its growth levels off enough that you won't need to worry about logging in and finding that it just added 500 new words and 1000 theories about how they interconnect. One would hope, however, that it would eventually level off. lol