27 October, 2009

David Chalmers, speaking at the Singularity Summit in NYC, 2009.

The Intelligence Explosion 

David Chalmers 

(summarized from notes, with detail added by Alex J. Avriette, Research Director, Spun Flight Research)

This paper was presented by David Chalmers on October 3, 2009, at the Singularity Summit. I summarize his work here, with credit to him, and I have added small pieces of clarification along the way to help the layperson.


Let us start with a few simple precepts, or a vocabulary if you like:
  1. AI is human-level intelligence (or a bit greater, but not yet AI+)
  2. AI+ is greater than human intelligence
  3. AI++ is far greater. This is what we would call not merely "smarter" than human intelligence or AI+, but rather "superhuman intelligence." I can think of a few science fiction examples, but let's avoid those for now.
If we start with the premise that at some point there will be AI, we have to accept that AI+ will emerge. This is simply because a system, whether it is software, hardware, or ideology (name your approach), performs better the more it is honed, and iterative increases in its ability will lead us to an inevitable AI+.


What is scary (or hopeful, depending) about this is that AI+ is to AI++ as we are to AI and possibly AI+. Once AI+ exists, AI++ is a given.
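To make that proportionality argument concrete, here is a minimal toy model of my own (not something Chalmers presented): assume each generation of system can design a successor some fixed fraction more capable than itself, and watch how quickly the chain climbs from human-level to something far beyond it. The multiplier and the "superhuman" threshold are arbitrary assumptions chosen only to show the shape of the argument.

```python
# Toy model of the AI -> AI+ -> AI++ chain. Illustrative only: the
# numbers are arbitrary assumptions, not claims about real systems.

HUMAN_LEVEL = 1.0          # capability of H (and of the first AI)
DESIGN_GAIN = 1.5          # each system designs a successor 50% more capable
SUPERHUMAN_FACTOR = 100.0  # call anything 100x human-level "AI++"

def intelligence_explosion(generations=50):
    capability = HUMAN_LEVEL
    for gen in range(1, generations + 1):
        capability *= DESIGN_GAIN            # the system designs its successor
        if capability >= SUPERHUMAN_FACTOR:
            return gen, capability           # AI++ reached
    return None, capability

gen, cap = intelligence_explosion()
print(f"AI++ (>= {SUPERHUMAN_FACTOR:.0f}x human) reached at generation {gen}, "
      f"capability ~{cap:.0f}x human")
```

The specific numbers are irrelevant; the point is that any fixed proportional gain, applied recursively, crosses any finite threshold in a handful of steps. Chalmers' argument only needs each generation to be able to build something somewhat better than itself.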


David paused, though, to go back to the first premise, which needs closer analysis. You have to establish that AI will exist, period. Once you have a "proof" of this, the result, leading to AI++, is unavoidable. But how soon? Chalmers was very clear in saying that "2035 is optimistic" for AI. Instead, he feels that "within centuries" is a more reasonable timeframe. But, as the nature of singularities shows us, after AI emerges, AI+ will emerge soon after, where "soon after" may be as little as years or less, and AI++ will subsequently be on the scene almost immediately after AI is sentient, self-aware, and extant.


While the certainty of AI (and thus AI++) is given, he also says that there are obvious ways to curtail, or at least slow, its development. Among the causes for retarding (in the literal sense of the word) the growth of artificial intelligence are disasters, such as drastic climate change or war, and active prevention, as we have seen in the United States with stem cell research. The (perhaps good?) news regarding AI on supercomputing assets in the United States is that the US has the biggest and meanest computers, continues to hold that edge, and has no problem with WBE (whole brain emulation), cognitive learning programs, heuristic learning devices, or even Gödel-type approaches, wherein we try to "know everything" by building on Kurt Gödel's work. The efficacy of this is unknown. There may be a "Gödel-complete" answer out there, but it's substantially more likely that AI++ will find it, rather than we mere mortals.


He continues and stresses that human biological reproduction is not an expandable process, at least not in a way relevant to progress with AI. By the same token, because we don't understand the way the brain works, WBE is also not likely to increase the pace from !AI (no AI) to AI or beyond.


After these subtexts, he returned to the meat of the talk. So, he asked, will there be AI? The answer, Chalmers feels, is simple: evolution got here, to intelligence, to our brains as complicated and perhaps even quantum devices (per Kurzweil), through what is an elegant but essentially "dumb" process.


His hope is that the product of evolution, us, can see the merits of multiple approaches and create new life without having to go through the millions upon millions of years of reproduction that evolution required to produce something as smart as us. If we take ourselves as H, how long will it take before an H+ is generated by evolution? Furthermore, unless we have a very firm understanding of the way H works, H+ isn't likely to be able to create H++ without the groundwork being laid. This is not the case with AI, AI+, and so on. We can refactor, repurpose, reinitialize, and learn from our mistakes as we move from !AI to AI to AI+. Evolution is a product of circumstance; scientific discovery and progress are the product of the scientific method, which allows us to avoid previous "mistakes" or "branches of an evolutionary tree that shouldn't have happened" (I think he's talking to the ID people there).


Crucially, he sees this as taking far, far less time than evolution took to get us to H. He continues with another proof:
Any system S can create S+. If that is the case (and it seems that way), then S++ is inevitable. Doing it through evolution is slow and stupid, but _still likely to work_.
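The same schema can be written out as a bare induction. The notation below (the S_n sequence, the ordering, the "absent defeaters" clause) is my paraphrase of the proof as I heard it, not Chalmers' exact formulation:

```latex
% My paraphrase of the inductive schema, not Chalmers' exact wording.
\begin{align*}
\text{P1:}\quad & \text{There will be } S_0 \text{ (AI, roughly at the level of } H\text{).}\\
\text{P2:}\quad & \text{For every } n,\ S_n \text{ can create } S_{n+1} \text{ with } S_{n+1} > S_n \text{ (absent defeaters).}\\
\text{C:}\quad  & \text{By induction, the chain } S_0 < S_1 < S_2 < \dots \text{ exists, and eventually some } S_n \gg H \text{: AI++.}
\end{align*}
```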
That, of course, covers his feelings on the probability of AI coming into existence, of H and H+ or AI+, and the inevitability of change for the positive -- or, to use a less loaded word than "positive," to the advantage of the organism, be it H, AI, or "system S." But, he argues, what will AI+ or AI++ think of H or H+? He says that we have no real way of making sure that AI+ doesn't decide to turn us all into "coppertops" (thanks, Carrie-Anne Moss) or food (that one from Charlton Heston), or to just irradiate us all and be done with the scourge of humanity. He proposes two strictures for AI research, especially research from which AI+ is capable of emerging or bound to emerge.
  1. Constraints on the environment, laid out before the experiments begin.
  2. Ongoing control and changes in the environment, or, put simply, "tampering," so that we don't allow an AI+ or AI++ organism to come and eradicate us. (A toy sketch of what these two strictures might look like follows this list.)
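Purely as an illustration of my own (not anything Chalmers specified), the two strictures map onto a familiar engineering pattern: constraints fixed before the run begins, plus a monitor that can intervene or halt at every step. Every name below is a hypothetical stand-in for whatever a real contained experiment would involve.

```python
# Toy "contained experiment" loop illustrating the two strictures:
# (1) constraints fixed before the run, (2) ongoing monitoring with the
# power to halt. All names are hypothetical; this is a sketch only.

class SimulatedEnvironment:
    def __init__(self, constraints):
        self.constraints = constraints            # stricture 1: fixed up front
        self.state = {"outbound_channels": 0, "resource_use": 0.0}

    def step(self, action):
        # Apply the agent's action to the toy world state.
        self.state["resource_use"] += action.get("resources", 0.0)
        self.state["outbound_channels"] += action.get("new_channels", 0)
        return dict(self.state)

    def violations(self):
        return [name for name, ok in self.constraints.items()
                if not ok(self.state)]

class ToyAgent:
    def act(self):
        return {"resources": 0.1, "new_channels": 0}
    def observe(self, observation):
        pass                                      # the agent only sees the sim

def run_contained_experiment(env, agent, max_steps=1000):
    for step_no in range(1, max_steps + 1):
        observation = env.step(agent.act())
        broken = env.violations()
        if broken:                                # stricture 2: ongoing control
            return f"halted at step {step_no}: violated {broken}"
        agent.observe(observation)
    return "completed within constraints"

constraints = {
    "no_outbound_channels": lambda s: s["outbound_channels"] == 0,
    "bounded_resource_use": lambda s: s["resource_use"] < 50.0,
}
print(run_contained_experiment(SimulatedEnvironment(constraints), ToyAgent()))
```

Of course, the leakproof worry he raises next applies exactly here: the more realistic the observations we feed the contained system, the more information it has about what lies outside the walls.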
In a nutshell, his argument is to create AI+ in a simulated environment (think Holodecks without Wesley Crusher, Vic Fontaine, Nazis, and things like that) so we don't have to worry about it being batshit insane. While this is important for AI+, we must not proceed toward AI++ without these sorts of constraints. But he, like many people presently thinking these thoughts, invokes a sort of "Heisenberg Argument" (I swear, it's used so frequently that it's starting to sound like the Chewbacca defense):
A fully leakproof singularity is either pointless or impossible. A non-pointless singularity can be observed, but as we observe the simulation, it affects us.
If their simulation gives them information about us, they (the AI citizenry for lack of a better term) are far more likely to leak out because they will perceive that their environment is incomplete.


What is really fascinating about that statement is that while it seems plausible and almost inescapable, it relies on the presumption of curiosity. And we don't understand curiosity: we don't understand how to make software curious, and it hasn't been demonstrated. Is it somewhere between AI and AI++ that it will become curious? What if it isn't? What if it's happy where it is? We just don't have these answers right now.


In conclusion, he paints a basically grim picture (especially in light of the other speakers). A post-singularity world almost certainly results in mind-uploading and human self-enhancement. This sounds benign on paper, but it amounts to the end of H in favor of H+, or H++/AI++ (with H), or something like that. But, no question, it's the end of humanity as nominal H. sapiens. The consequence of not playing the AI++ game is that we become dinosaurs. You will not have access to the most crucial information, sensory experience, or even society that anyone who has made the trip from H to H++ will have. And what's the life expectancy of H-style dinosaurs, be they on Earth, in space, or elsewhere?


So, we destroy who we are, as we attempt to reach the pinnacle of synthetic thought. Is that a good thing? He leaves this question to the audience.


Now, AI itself, let alone AI+ [or H+], is out of our grasp right now. And we have no idea how a computational system can be conscious. However, this painfully ignores one crucial fact, pointed out by Chalmers: we don't have any idea how a human brain can be conscious either. So far, all we have is guesses. (Other talks that I'll be summarizing or discussing here deal with human consciousness at a physical level.)


This was mostly the end of his talk. He suggested looking on Google for the following three links:
What did I think of the talk? I was furiously writing notes down the entire time. The man is brilliant. He sticks to his field (there were others there to discuss ethics and what to do with "AI in a jar," e.g., simulation), and by so doing was able to give the audience a very, very thorough understanding of his work. I found it absolutely stellar. He and a couple of others (perhaps three) are the reason Spun will attend SS10, which is hopefully to be held in CONUS rather than, say, Fiji. It's a business expense for Spun, sure, but we have to have enough revenue to deduct the expenses. :)

The man's a genius. I hope I have done him justice here.
