I've put a new paper online: "The Singularity: A Philosophical Analysis". This is a written version of the talk I gave at the Singularity Summit last October (PowerPoint, video, blog post). The main focus is the intelligence explosion that some think will happen when machines become more intelligent than humans. First, I try to clarify and analyze the argument for an intelligence explosion. Second, I discuss strategies for negotiating the singularity to maximize the chances of a good outcome. Third, I discuss issues regarding uploading human minds into computers, focusing on questions about consciousness and personal identity (I think this is the first time I've written at any length about personal identity, a topic I've largely avoided in the past because it confuses me too much).

I'll be giving a talk based on this paper at the Toward a Science of Consciousness conference in Tucson the week after next, and also at upcoming events at NYU and Oxford. I'm still an amateur on these topics, so any feedback would be appreciated.