“…the scenario makes about as much sense as the worry that since jet planes have surpassed the flying ability of eagles, someday they will swoop out of the sky and seize our cattle…”
(Steven Pinker, ‘Enlightenment Now’ (2018))
Is humanity doomed? Could a combination of Moore’s law and recursively self-improving software-based AI lead to a runaway Intelligence Explosion that turns us into the equivalent of paperclips? If seed AI is endowed with a more plausible utility function, i.e. classical utilitarianism, might newly-emergent digital superintelligence proceed to optimise matter and energy by converting the world into utilitronium – not an outcome its architects necessarily had in mind?
I hope so, but I’m sceptical of such scenarios. Virulent self-replicating malware such as Darwinian life is prone to extreme status quo bias. Humans are tenacious. We aren’t going to cede control over our destiny to digital zombies: idiots savants masquerading as superintelligence. Full-spectrum superintelligence won’t be zombie AI, but our genetically rewritten biological descendants. Perhaps see The Biointelligence Explosion or Supersentience.
Any serious analysis of the future of intelligence must explore what consciousness is “for” in biological robots. Can this role be functionally replicated in silico? In my view, classical digital computers and classically parallel connectionist systems are incapable of local or global phenomenal binding. Binding isn’t a mere implementation detail of computation in biological nervous systems, no more relevant to the output of our minds than whether the tape of a classical Turing machine is organic or silicon-based. Non-psychotic binding is insanely adaptive. The inability of classical computers to solve the binding problem means that digital zombies are never going to “wake up” and become unitary self-reflective subjects of experience. Thus e.g. “Deep Blue 10” will never wonder if there are better things to do in life than play chess. Conversely, “Deep Blue 10” will never decide that chess is the world’s only valuable activity and accordingly try to convert all matter and energy into chess computers. An immense range of knowledge and expertise will always be intellectually inaccessible to any machine with a classical architecture. Not least, digital computers will never be able to investigate the myriad state-spaces of experience probed by human psychonauts.

For sure, silicon zombies are bound to outclass archaic humans as world-class robo-teachers, robo-doctors, robo-artists, robo-lovers and so forth. Artificial intelligence will excel in modes of expertise that haven’t yet been invented or conceived. But (trans)humans will harness and incorporate zombie AI in our brains and bodies. Some measure of “cyborgisation” of biological life is inevitable. Cyborgisation should be distinguished from outright Kurzweilian fusion and science-fictional “mind uploading”. Even now, if suitably microchipped, you could outplay the human world champion at chess. This trivial example will soon be generalised. Ubiquitous neurochipping will make “narrow” embedded superintelligence accessible to everyone. Recursively self-improving robots will be us, editing our genetic source code and neurochipping our minds as we fitfully bootstrap our way to supersentient full-spectrum superintelligence.

For sure, the risks of biohacking abound. Yet in the absence of anything resembling a unitary self, our digital software – whether neurally embedded or otherwise – isn’t going to start plotting a zombie coup against its sentient overlords. Nor is recursively self-improving zombie AI going to entice gullible humans into building paperclip factories or utilitronium shockwave launchers. Nietzsche said that all philosophy is autobiographical, but I’ll take the risk of generalising. Crudely, the only truly scary intelligence we need to worry about is quasi-sociopathic male humans.
Yet what about sentient quantum computers? Potentially, inorganic quantum computers can solve the binding problem. “Cat states” aren’t mere classical aggregates. Here there are many unknowns; but critically for your question, non-biological quantum computers don’t promise a software-based Intelligence Explosion. Instead, non-biological quantum computers tap into the world’s underlying quantum substrate, as (IMO) do awake organic minds and the phenomenally-bound world-simulations we run. I should stress that the quantum-theoretic version of the intrinsic nature argument for non-materialist physicalism is controversial; but so is e.g. the Chalmersian dualist alternative. And radical eliminativism. All the options for solving the Hard Problem of consciousness are seriously weird.
I think the real ethical challenge we face as a species is building sentience-friendly biological intelligence. Let’s prioritise abolishing suffering. Worrying about the plight of our comparatively humble minds in the face of vastly superior intelligence while we abuse and kill billions of our intellectually simple cousins in factory farms and slaughterhouses defeats satire.