I've put a new paper online: "The Singularity: A Philosophical Analysis". This is a written version of the talk I gave at the Singularity Summit last October (Powerpoint, video, blog post). The main focus is the intelligence explosion that some think will happen when machines become more intelligent than humans. First, I try to clarify and analyze the argument for an intelligence explosion. Second, I discuss strategies for negotiating the singularity to maximize the chances of a good outcome. Third, I discuss issues regarding uploading human minds into computers, focusing on issues about consciousness and personal identity (I think this is the first time I've written at any length about personal identity, a topic I've largely avoided in the past as it confuses me too much). I'll be giving a talk based on this paper at the Toward a Science of Consciousness conference in Tucson the week after next, and also in upcoming events at NYU and Oxford. I'm still an amateur on these topics and any feedback would be appreciated.
IMO, people want machine intelligence to help them attain their goals. Machines can't do that if they are isolated off in virtual worlds. Sure, there will be test harnesses - but it seems rather unlikely that we will keep these things under extensive restraint out of sheer paranoia - that would stop us from taking advantage of them.
Posted by: Tim Tyler | April 03, 2010 at 12:03 AM
I've included this post in the latest Philosophers' Carnival. I hope that's ok!
Posted by: Tuomas | April 06, 2010 at 08:41 AM
Yes, unfortunately it may be unlikely that the people who develop AI will try to maximize the chances of a good outcome.
Posted by: Jesper Östman | April 08, 2010 at 08:58 PM
If Artificial Intelligence is a measure of knowledge and responsibility, then there is nothing to worry about as far as the singularity is concerned. Intelligence is conjectural and speculative, so in this respect moral circuits will have to be implemented to allow group survival and to override the selfish interests of individuals (human or not) in the group. Intelligence is about competition and “outsmarting the others,” but moral intelligence will constrain it globally for the welfare of the group.
Posted by: Doru | April 17, 2010 at 12:52 AM
As I started reading this paper, I asked my children, "Do you think computers will ever be more intelligent than humans?" My 9-year-old son replied, "No, because they don't have imagination." Any thoughts on articles that address this question? Thanks.
Posted by: Amy | June 10, 2010 at 03:33 AM
True, but your first sentence says it ... "their goals" ...
This implies some management and governance that limits "the machine's own goals". For me the philosophical crux is "the good": what makes good goals, and what makes a good decision-making meta-process for defining such goals. After aeons of human evolution, I'm not sure we want machines to work this out for themselves from scratch - survival of the fittest machine (?) - without the benefit of our own learning.
And that's not just paranoia.
Posted by: Ian Glendinning | June 11, 2010 at 06:13 PM
@Amy
Yes.
See my: "Are Theories of Imagery Theories of Imagination? An Active Perception Approach to Conscious Mental Content," Cognitive Science, 23 (1999), 207-245: http://www.imagery-imagination.com/im-im/im-im.htm
This covers (amongst other things) the relevance of active-perception robotics to the understanding of imagination. (Your son is right: mere computers do not have imagination, but the right sorts of robots just might.)
There is other related material on my site too (although not a whole lot more about robotics, I admit). In particular, although the page at http://www.imagery-imagination.com/newsupa.htm is incomplete and fragmentary, it does go some way toward bringing the 1999 theory up to date, and linking it to more recent neuroscience evidence, as does http://plato.stanford.edu/entries/mental-imagery/#BeyPicPro (from the Stanford Encyclopedia of Philosophy).
Also: P. J. Blain (2006), A Computer Model of Creativity Based on Perceptual Activity Theory. Unpublished doctoral dissertation, Griffith University, Queensland, Australia: http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20070823.171325/index.html
This is an AI model of creative imagination (within a simulated "world") based upon my theory.
Posted by: Nigel Thomas | August 02, 2010 at 01:16 PM