AI or Not AI?

The short answer is Not AI. In Chapter Six, today's installment of the online novel Social Tech High, at http://SocialTechNovel.SocialTechnology.ca/, I illustrate some user dialogue. At first glance this does look like artificial intelligence. As envisioned, it will indeed have something in common with the low-tech AI toys associated with the classical MIT school of AI, but it is not intended ever to be anything like an actual artificial intelligence. It is really just a natural language interface to a large database of questionnaire data. Each answer the user provides changes the user's profile slightly and raises the estimated values assigned to questions in that database. The question with the highest probability of providing useful information is asked next. This is the basis for the user dialogue, which continues as long as the user cares to interact with the system. As envisioned, a user may be presented with spontaneous suggestions, but will more commonly ask for them.
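The ask-the-most-informative-question loop above can be sketched in a few lines. This is only an illustrative sketch, not the actual system: the question records, the `est_value` field, and the rule that an answer raises the estimated value of related questions are all my assumptions for the example.

```python
def next_question(questions, profile):
    """Pick the unanswered question with the highest estimated value."""
    candidates = [q for q in questions if q["id"] not in profile["answers"]]
    return max(candidates, key=lambda q: q["est_value"], default=None)

def record_answer(questions, profile, question, answer):
    """Store the answer; per the text, answers raise the estimated
    values assigned to (here: same-topic) questions in the database."""
    profile["answers"][question["id"]] = answer
    for other in questions:
        if other["id"] != question["id"] and other["topic"] == question["topic"]:
            other["est_value"] *= 1.1  # assumed update rule, for illustration

# A toy database of three questions and one user profile.
questions = [
    {"id": 1, "topic": "music", "est_value": 0.8},
    {"id": 2, "topic": "music", "est_value": 0.6},
    {"id": 3, "topic": "books", "est_value": 0.7},
]
profile = {"answers": {}}

q = next_question(questions, profile)        # question 1 (value 0.8)
record_answer(questions, profile, q, "yes")
q = next_question(questions, profile)        # question 3 (value 0.7)
```

The dialogue simply repeats this pick-ask-update cycle for as long as the user keeps answering.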

Making this work requires a large database of questions and answers, as discussed in previous posts. Only some of those can come from existing social survey data. New questions will have to be added, and this will require a cooperative user base, receptive to new questions. The willingness of users to be "beta-testers" of new questions can itself be a new "meta-question", randomly added as an initial question for various users, perhaps those meeting some profile suggested by the question designer. Adding questions to the database is thus analogous to developing software. In a way, it is developing software.
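The meta-question mechanism, randomly offering a new question to users who match a designer-suggested profile, might look something like this. Every name here (`match_profile`, the user fields, the sampling fraction) is a hypothetical stand-in, not part of any existing system.

```python
import random

def match_profile(user, target):
    """A user matches if every field in the target profile agrees."""
    return all(user.get(key) == value for key, value in target.items())

def select_beta_testers(users, target_profile, fraction, seed=0):
    """Randomly pick a fraction of the matching users to receive the
    new meta-question; seeded here only to keep the sketch reproducible."""
    rng = random.Random(seed)
    eligible = [u for u in users if match_profile(u, target_profile)]
    return [u for u in eligible if rng.random() < fraction]

users = [
    {"name": "a", "region": "CA"},
    {"name": "b", "region": "CA"},
    {"name": "c", "region": "US"},
]
testers = select_beta_testers(users, {"region": "CA"}, fraction=0.5)
```

Each selected user would then see the beta-test question as one of their initial questions.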

Could the whole database and the software that drives it become a kind of AI? That is not impossible, but there would have to be some way to prevent the dialogue from degenerating into the kind of nasty feedback loop that occurred when the pseudo-psychiatrist ELIZA met the pseudo-paranoid PARRY. A memory for past questions and responses would need to be added. Internally, the programs would have to ask themselves, "Have I been asked this question recently?", "Is my best current answer the same as it was recently?", "Should I invoke one of the standard methods of diverting the conversation in another direction?", "Which would be the best one?", or other questions to prevent feedback-driven oscillations.
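Those internal self-checks amount to a short conversational memory plus a diversion rule. A minimal sketch, assuming a fixed-size memory window and a canned list of diversions (all names illustrative):

```python
from collections import deque

class DialogueMemory:
    """Remembers the last few (question, answer) exchanges."""

    def __init__(self, window=6):
        self.recent = deque(maxlen=window)

    def same_answer_as_before(self, question, answer):
        # "Have I been asked this recently, with the same best answer?"
        return (question, answer) in self.recent

    def record(self, question, answer):
        self.recent.append((question, answer))

def respond(memory, question, best_answer, diversions):
    """Answer normally, unless we are about to repeat ourselves,
    in which case divert the conversation instead."""
    if memory.same_answer_as_before(question, best_answer):
        return diversions[len(memory.recent) % len(diversions)]
    memory.record(question, best_answer)
    return best_answer

mem = DialogueMemory()
diversions = ["Let's talk about something else.", "Why do you ask?"]
r1 = respond(mem, "how are you?", "fine", diversions)  # answers "fine"
r2 = respond(mem, "how are you?", "fine", diversions)  # diverts instead
```

A repeated exchange triggers a diversion rather than the echo that produced the ELIZA-meets-PARRY oscillation.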

Would that indeed be a practical way of implementing an AI, even if one is not required for the envisioned social technology? Maybe. I welcome others' opinions, but again, I want to focus on the practical immediate problems: giving people tools to help them optimize their own social environments. — dpw
