[All videos for the summit are available. Click a link, then click “Watch Full Video”.] See also my review of Day 1 of the Singularity Summit.
On the second day of the Singularity Summit, presenters took their gloves off, rolled up their sleeves, and pushed the discussion to infinity – and beyond. Many of the talks on day two were philosophical, even though, as Vernor Vinge said, we are talking about “situations that are seriously strange.” That’s what you get when you look ahead to superhuman intelligence, much less question the nature of humanity, mind, intelligence, and reality.
The day started with a Skype interview, the looming face of Daniel Kahneman smiling down at us in the auditorium. Kahneman is a psychologist, winner of the Nobel prize in economics, author of Thinking, Fast and Slow, and an expert in decision-making. Since the conference, in one sense, is about how to get safely to artificial intelligence, nanotech, and transformed humans, good advice on making decisions would be helpful! In Thinking, Fast and Slow he presents a model of the mind that describes two general “systems” – system 1 (which is automatic, does pattern recognition, and is fast) and system 2 (which does deliberate thinking, makes intellectual decisions, and is slow). Much of the interview, however, dealt with another of his areas of expertise: cognitive bias. This refers to our tendency to believe something for wacky and irrational reasons – for example, the “status quo bias,” which leads most of us to prefer the current situation over an unknown future.
It’s nice to see that those who anticipate the end of the world as we know it are at least open to changing their minds!
In light of my characterization of the entire singularity movement the other day as the New Optimism, I was tickled when Kahneman replied to the question “What is our most important bias?” by saying “optimism.” He delivered this with his typical jolly laugh, and went on to explain that even though optimism may not always be justified, it does have effects – it influences other people, who then go on to further the thing that you were optimistic about; what is called a self-fulfilling prophecy.
At the end he related an anecdote from his time working with the Israeli air force. He told a military trainer that kindness works better than yelling. The trainer didn’t believe him, telling Kahneman that in his experience, praising a pilot for a good run is pointless: the pilot will do worse later. But yelling, now that works: the pilot usually does better the next time. Classic cognitive bias, says Kahneman: the trainer believes the pattern shows the value of martial meanness, but what it really demonstrates is regression to the (statistical) mean – for any skill, we each have an average ability that we return to after performing unusually well or unusually poorly.
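You can see the trainer’s illusion in a few lines of simulation. This is my own sketch, not anything Kahneman presented: each “run” is just a fixed underlying skill plus random luck, with no praise or yelling anywhere in the model – yet runs following an extreme outcome still drift back toward the average.

```python
import random

random.seed(42)

def landing(skill=0.0, noise=1.0):
    """One practice run: fixed underlying skill plus random luck."""
    return skill + random.gauss(0, noise)

# Simulate many pilots and look at the run AFTER an unusually
# good or unusually bad one. No feedback of any kind is modeled.
after_good, after_bad = [], []
for _ in range(100_000):
    first, second = landing(), landing()
    if first > 1.5:        # an unusually good run (no praise given)
        after_good.append(second)
    elif first < -1.5:     # an unusually bad run (no yelling either)
        after_bad.append(second)

# Both averages come out near the overall mean of 0: the "praised"
# pilots look worse next time, the "yelled-at" pilots look better.
print(sum(after_good) / len(after_good))
print(sum(after_bad) / len(after_bad))
```

The trainer attributes the change to his feedback, but the simulation gets the same pattern with no feedback at all – pure regression to the mean.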
Then he was asked “What is the most important bias in relation to expectations of the singularity?” To which he replied “the biggest bias is believing in scenarios, in the stories we tell ourselves. The singularity scenario looks like an inevitability, but inevitable things don’t always happen.”
We then did a neat turn from jets to copycats. Melanie Mitchell walked out on stage and, in her enigmatic, engaging fashion, gave us a pocket history of her work with Douglas Hofstadter on metaphors, understanding, and – again – cognition. The twin pillars of the current singularity universe, you might say, are computer science and neuroscience; in the current thinking, these two are the royal road to the next step in evolution. Mitchell is a computer scientist who hails from Portland, and she grew up with an SDS Sigma 5 computer in her den, with the phrase “I pray in Fortran” taped to it. With a childhood like that, what could go wrong? She has co-authored several articles and books with Hofstadter and has argued strongly that analogy is the basis for insight, empathy, and ultimately cognition itself.
She talked about IBM’s demonstration of Watson, arguing that it was not making analogies, and hence not really demonstrating insight. In contrast, she took us through a dynamic rendering of her Copycat program, which takes an example and tries to make an analogy from it. At the end of her talk I wanted to suggest that if analogy is “the vehicle by which concepts mean,” then perhaps we should be reading poetry rather than programming languages – but perhaps that would just be sour grapes.
In passing, Mitchell floated the idea that understanding (the interior thing that still seems to elude Watson and every other mechanical entity on the planet) may be a continuum or a stepwise function, not an either/or phenomenon. Curiouser and curiouser, says the conscious cat. I agree, yes, of course; this is borne out by the increasing recognition of consciousness in more and more entities, such as children and animals. (This is itself reflected in the social changes that Steven Pinker described, such as the trend to grant rights to these groups.)
By the middle of the day, the conference began to rise into the noosphere, starting with Robin Hanson’s economic scenarios of what society would look like if whole brain emulation became possible. (Whole brain emulation means taking a “snapshot,” through whatever means, of a brain, right down to the synapses, and then creating a copy. Kurzweil envisions this being done by tiny nanobots that leave the brain intact, but today it would be slice and dice.)
Hanson’s thought experiment, he hastened to add, was going to be limited to whole-brain emulation (rather than parts of brains, or mixing and matching of mental skills); supply/demand economics; and the ability to create minds and bodies that can and will do work at the level that a person does. Even so, it took us into mind-stretching realms such as accelerated subjective time; shrinking bodies to match the accelerated awareness; or splitting and merging oneself just to get through the day.
Now, I love a good SF tale as much as the next person (who was next to me), but using supply/demand economics to draw conclusions about a millimeter-sized boss was a bit extreme. Let’s face it: work, value, money, relationships, and much else after a superintelligence/brain emulation/AI/singularity transition are complete unknowns at this point. We don’t have to go far beyond 3D printing, assistive/companion robots, decision-making algorithms, and personal genomics before all hell breaks loose – or what is called “disruptive” in the soothing euphemism of today’s futurism.
Jaan Tallinn, the Estonian programmer and a developer of Kazaa and Skype, then took us on an entertaining cartoon journey through the metaphysics of superintelligence, the multiverse, everything. Along with his alter ego we zoomed in and out of screens, learning why we should care about reality even if we’re just fictions inside a universe-sized computer. In this context, his statement “We seem to have been born in an era when the entire universe may transform in a single lifetime” didn’t seem so incredible. That’s an idea to drop in the pub some Friday night, just to see what happens…
Sitting in the Masonic Auditorium atop scenic Nob Hill, among a spectrum of several hundred scientists, programmers, world-shapers, and thinkers who take this seriously (and who are going to go out and work on this for the next three decades) gives you pause. Or occasion to cheer, depending on whether you count yourself a free-thinker or a sabot-thrower. For me, the saving grace of the Singularity Summit, despite the vertiginous amazement of the ideas, is that this group is the very opposite of true believers. After every talk, the speakers fielded serious questions about their fundamental premises and conclusions. The scientific method, with its time-honored spirit of questioning, evidence, and proof, is taken as the standard for progress.
Over lunch we were treated to a preview of the upcoming movie, The Singularity. We barely got into the first few interviews when the image disappeared and the director, Doug Wolens, came running down the aisle. “The aspect ratio is all wrong,” he told us, “but they’re going to fix it.” When he pleaded for artistic perfection and suggested we stop the movie, we threw virtual tomatoes at him. “Keep going!” we yelled.
With directorial aplomb, he answered “Anybody have a question while we’re waiting?” We learned that the movie was shot between four and eight years ago, a useful marker in accelerating times, and that his previous film was “The Butterfly,” about Julia Butterfly Hill, who took “occupy” to an outstanding level by living in a 1,500-year-old redwood tree for 738 days to protest logging.
This environmental echo made some of the comments all the more poignant, such as when Bill Joy talked about the “deep kinship” that all humans share – which we may lose if we begin to radically alter our nature. It may “overwhelm our nature,” he said. Another skeptic made the obvious and sensible journalistic statement that “Extraordinary claims require extraordinary evidence.” And therein lies the problem: for those who look at (and believe) exponential curves, the evidence is in, and irrefutable. For those who step outside (or go to the office), the world looks very much like it is running on its same old track. Kurzweil’s argument is that this latter pessimism is the problem: our hard-wired “linear” thought process – system 2 taking thirty steps to go thirty paces – rather than the intuitive exponential leap that brought us a billion-fold improvement in computers in the last thirty years.
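The arithmetic behind that contrast is worth making explicit. A minimal sketch, on the simplifying assumption of one doubling per year (roughly what a billion-fold gain in thirty years implies), shows how far the two modes of thinking diverge:

```python
# Linear intuition adds a fixed step each year; exponential growth
# doubles each year. Thirty doublings is 2**30, about a billion --
# the figure Kurzweil cites for computing over the last thirty years.
linear = 1
exponential = 1
for year in range(30):
    linear += 1          # thirty steps to go thirty paces
    exponential *= 2     # one doubling per step (a simplifying assumption)

print(linear)        # 31
print(exponential)   # 1073741824, i.e. 2**30
```

Thirty linear steps get you to 31; thirty doublings get you past a billion. That gap is the whole argument between the skeptics and the exponentialists.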
The highlight of the day, for me, came towards the end, with the presentation by Vernor Vinge. He ambled onto the stage with the easy grace of a writer who has thought about all of this for decades, created worlds in the imagination, populated them with telepathic wolves and Powers, even designed an intergalactic Internet. Best of all, he laid out his ideas in the air, without the use of graphics or props. He almost seemed to be shrugging modestly in his role as the coiner and prophet of the singularity.
Although there is a range of ways that superhuman intelligence might come about, he said, all of them involve technology and computation. (This is perhaps the single overriding premise or shared belief of the singularitarian movement, and what distinguishes it from traditional spiritual movements.)
The first way that we might reach superhuman intelligence, according to Vinge, is classical artificial intelligence. There is a lot of debate in the field of AI, on everything from methods to results to basic concepts, but as time goes forward we will encounter situations that are “seriously strange.” As he put it, “No human talked about these situations before 1970. The rise of superhuman intelligence is not only not predictable, but it is also unintelligible.” In other words, we have no real way to talk about something that exceeds us; trying to describe an entity that is beyond the human is equivalent to theology.
In fact, Vinge coined the term “applied theology” in his novel A Fire Upon the Deep for the practical study of supra-human (post-human or post-singularity) beings, those who are “superhuman but not supernatural.” Applied theology is what garden-variety sapients are doing when talking about the “Powers.” Or is it just plain theology? As Stuart Armstrong pointed out earlier, in discussing predictions about artificial intelligence, it’s all a guessing game – without a single example, there are no experts, roadmaps, or guidebooks.
The second way that intelligence may leap ahead is through amplification. While he didn’t go into this at length, there are many avenues to this: drugs, genetic enhancement, implants. In a sense, intelligence has been culturally or socially enhanced from the time of writing.
Vinge worries most about his third prospect, which he calls “digital Gaia.” This has also been described as the Internet of Things. In other words, the Net wakes up, through the interconnection of processors, sensors, and the ability to act in the world; stretches its muscles, looks down from orbiting satellites, tries turning a city on or off, and says “Hello World.” If this happens, he said, “Reality itself would wake up. It would be a fundamental change in the nature of reality. Imagine a world that had all the firm stability that we associate with financial markets.”
His last scenario is the most intriguing, if only because it is already happening. He characterized this as the “group mind”: a combination of the Internet, databases, and hundreds of millions of Turing-level intelligences – what you would call humans. Crowd-sourcing, in other words. “When it comes to cognition, biology doesn’t have legs,” he said. “We have 7 billion installed people who can pass the Turing Test. This is an intellectual institution that trumps all intellectual institutions of the past.” (Here’s a video of him expounding on these topics at Singularity University last July.)
Leave it to a novelist to lay out the most striking future of all: the present. This group mind is already here, with protein folding and SETI searches – what is known as “citizen science.” Then Vinge dropped a tiny idea, a quiet meme tossed into the swirling crowdmind in the auditorium: What if crowdsourcing were applied to the problems in creating artificial intelligence? Then we would have the collective intuition, pattern-recognition, and compassion of a million real live human intelligences – a good way to avoid Terminator/Matrix wrong turns. Why not take the goals of software development and spread them out among the only thinking software we have – ourselves?
Using the human mind and imagination. Now that’s seriously strange.