This comes as a result of a conversation I was having in another thread here… Artificial Human Companions
We were chatting about artificial intelligence and computers, which got me thinking about trying to express the human condition in terms of software and programming logic. This is, of course, what the whole AI trip is about. To achieve true AI, mankind will have to look at himself, come to grips with what makes him fundamentally different and unique, and then replicate that electronically.
Before I go further, I should probably qualify myself…I don’t know much about computer programming or fuzzy logic or any of that, (I have a basic appreciation and a bit of Python…that’s it). But for the purposes of this conversation, I don’t really have to. It’s more of an abstract thought experiment. If, however, you feel that you want to show off your programming wizardry in a fashion that could de-rail the thread…please go to Slashdot. I’m sure they will be sympathetic.
In very broad terms, computer programs are made up of 1’s and 0’s…on/off…yes/no…up/down. If something is such and such, then the next thing is so and so; very deterministic. As far as the human condition is concerned, a lot of our actions are predictable, pre-determined knee-jerk reactions to everyday common events, (it saves time thinking about whether you REALLY want a Coke or a Pepsi…you just take what you got yesterday), or the result of some environmental influence, (your sister likes Pepsi…you hate your sister so you take a Coke…whatever). For such situations, binary (1’s and 0’s) is cool, no hassle…if this…then that. Assuming all variables are known and catered for. I’m sure that one day soon some geek will sit down and re-create the character of his dead poodle in binary…and he’ll probably make a fortune doing it.
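Just to make that deterministic picture concrete (and at the risk of a Slashdot moment), the knee-jerk Coke/Pepsi rule above really is just a couple of if/thens. A toy sketch, with made-up names, in Python:

```python
# Toy sketch of the "deterministic" drink choice described above.
# All names and the spite-beats-habit rule are my own illustration.

def pick_drink(yesterdays_drink, sister_likes_pepsi, you_hate_sister):
    # Environmental influence: spite overrides habit.
    if sister_likes_pepsi and you_hate_sister:
        return "Coke"
    # Otherwise the knee-jerk habit: take what you took yesterday.
    return yesterdays_drink
```

Every input is known, so the output is fully determined before you ever "choose" — which is exactly the point.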
But what of free will? How do we code free will into an AI entity…can we?…do we want to?
What happens when we stop and really think about getting a Coke or a Pepsi, how do we make a choice?
I suppose only those who are interested in free will, (or believe in the concept of it) will find this of interest.
One of the concepts that came up in the other discussion was “trinary” programming as opposed to binary. Trinary, (actually called ternary logic, for those who care), uses three values instead of two…one for “yes”, one for “no”, and a third for “unknown, irrelevant or both”. In other words…uncertainty.
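For what it’s worth, this idea exists formally as three-valued (Kleene) logic, where “unknown” propagates through AND/OR/NOT in a sensible way. A toy sketch, assuming my own names and encoding (not any standard library’s):

```python
# Toy sketch of three-valued (Kleene) logic: yes / no / unknown.
# Encoding and function names are made up for illustration.

YES, NO, UNKNOWN = "yes", "no", "unknown"

def t_and(a, b):
    # A single "no" settles it; otherwise uncertainty is contagious.
    if a == NO or b == NO:
        return NO
    if a == UNKNOWN or b == UNKNOWN:
        return UNKNOWN
    return YES

def t_or(a, b):
    # A single "yes" settles it; otherwise uncertainty is contagious.
    if a == YES or b == YES:
        return YES
    if a == UNKNOWN or b == UNKNOWN:
        return UNKNOWN
    return NO

def t_not(a):
    # The opposite of "unknown" is still "unknown".
    if a == UNKNOWN:
        return UNKNOWN
    return NO if a == YES else YES
```

The interesting part is that “unknown” isn’t just a missing answer — it has its own rules, and it survives computation unless something definite overrides it.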
The funny thing about uncertainty is its association with choice…you have to be uncertain about something in order to make a choice. I don’t mean choosing between a set of pre-determined outcomes…looking at a menu in a restaurant has an element of choice, but not true free will…if you really feel like lobster Thermidor and it ain’t on the menu…you’re screwed.
Coming back to creating true humanlike AI, one would probably have to look at employing a non-deterministic architecture, (like Ternary logic), and weave in this “uncertainty” to make any headway in replicating the vagaries of the human condition. This is all groovy, but the question that really bugs me is:
Why?
Why would we need the 0?…From where does this uncertainty arise?…What would precipitate the uncertainty that would necessitate a 0?
I find it fundamentally unsettling to think that the human condition can be expressed as the ability to say “How the hell should I know if I want chicken or beef...” and truly not know what you feel like.
So what do you all reckon…is binary all that’s needed to understand the machinations of the human mind…or do we require that funny little nothing in the middle to make us complete?
I should probably give credit to Sapiens Vir for getting me thinking about expressing humanity as 1’s and 0’s. Thanks…I think.