
April 02, 2010


Tim Tyler

IMO, people want machine intelligence to help them attain their goals. Machines can't do that if they are isolated in virtual worlds. Sure, there will be test harnesses, but it seems rather unlikely that we will keep these things under extensive restraint out of sheer paranoia; that would stop us from taking advantage of them.


I've included this post in the latest Philosophers' Carnival. I hope that's ok!

Jesper Östman

Yes, unfortunately it may be unlikely that the people who develop AI will try to maximize the chances of a good outcome.


If artificial intelligence is a measure of knowledge and responsibility, then there is nothing to worry about as far as the singularity is concerned. Intelligence is conjectural and speculative, so moral circuits will have to be implemented to allow group survival and to override the selfish interests of individuals (human or not) in the group. Intelligence is about competition and "outsmarting the others," but moral intelligence will constrain it globally for the welfare of the group.


As I started reading this paper, I asked my children, "Do you think computers will ever be more intelligent than humans?" My 9-year-old son replied, "No, because they don't have imagination." Any thoughts on articles that address this question? Thanks.

Ian Glendinning

True, but your first sentence says it ... "their goals" ...

This implies some management and governance that limits "the machine's own goals". For me the philosophical crux is "the good": what makes good goals, and what makes a good decision-making meta-process for defining such goals. After aeons of human evolution, I'm not sure we want machines to work this out for themselves from scratch, like survival of the fittest machine (?), without the benefit of our own learning.

And that's not just paranoia.

Nigel Thomas



See my "Are Theories of Imagery Theories of Imagination? An Active Perception Approach to Conscious Mental Content," Cognitive Science, 23 (1999), 207-245.
This covers (amongst other things) the relevance of active-perception robotics to the understanding of imagination. (Your son is right: mere computers do not have imagination, but the right sorts of robots just might.)

There is other related material on my site too (although not a whole lot more about robotics, I admit). In particular, although the page in question is incomplete and fragmentary, it does go some way toward bringing the 1999 theory up to date and linking it to more recent neuroscience evidence, as does a related entry in the Stanford Encyclopedia of Philosophy.

Also: P.J. Blain (2006), A Computer Model of Creativity Based on Perceptual Activity Theory, unpublished doctoral dissertation, Griffith University, Queensland, Australia.
This is an AI model of creative imagination (within a simulated "world") based upon my theory.
