Remember the scene in the 2004 sci-fi blockbuster ‘I, Robot’, when Will Smith’s character finds a dead scientist and discovers that one of his own trusted robots is responsible for the crime? There’s a quote in that scene that’s the perfect segue into the rest of this blog: “Well, then I guess we’re gonna miss the good ol’ days… When people were killed by other people”. I know, pretty dreadful, right? The thing is, this is a scenario that could very soon be our actual reality, and very much within our own lifetime. Things are moving really quick. The rise of artificial intelligence (AI) machines is right around the corner. Are you, are we, is grandma Florence, ready for this revolution?
Earlier this month, Elon Musk, one of the greatest minds and futurists of our time, said that the rapid rise of artificial intelligence is “the scariest problem for me”, and that AI is the “biggest risk we face as a civilisation”. This is huge, guys… for many reasons.
If you don’t know who Elon Musk is (first off, smh), he’s the real-life equivalent of Tony Stark, or the closest thing we have to one. A billionaire mad genius behind Tesla, who we should thank our lucky stars isn’t an evil scientist because… SpaceX; am I right? He’s also dating actress Amber Heard, which is beside the point, but important to note.
Of course Musk isn’t the only, and far from the first, person to theorize that humans will be enslaved by an army of robo-Trumps. Generations of writers and moviemakers have toyed with the idea; from The Matrix and Ex Machina to practically anything written by Isaac Asimov, the vision of a dystopian AI society where our last memory is of being anally probed by a digital sentient being has long been familiar, if not comforting, to some. But Elon Musk is the guy who dreamed up and is working on the Hyperloop, an electromagnetic bullet train in a tube that may one day whoosh travelers between L.A. and San Francisco at 700 miles per hour, so his words take on something of the prophetic for me, you feel?
Musk believes, “AI is a fundamental risk to the existence of civilisation in a way that car accidents, aeroplane crashes, faulty drugs, or bad food, were not. They were harmful to a set of individuals but they were not harmful to society as a whole.”
He says AI “could start a war by doing fake news and spoofing email accounts and fake press releases, and just by manipulating information. Or, indeed — as some companies already claim they can do — by getting people to say anything that the machine wants.”
Now, Musk is not talking about the kind of artificial intelligence that companies like Google, Uber, and Microsoft currently use, but what is known as artificial general intelligence — some conscious, super-intelligent entity, like the sort you see in sci-fi movies, e.g. Skynet. Musk (and many prominent AI researchers) believes that work on the former will eventually lead to the latter.
So let me help put that into perspective: one of the key players in the advancement of humanity thinks that our biggest risk as humans is the evolution of AI. Not global warming, not Millennials, not ISIS or Vanilla ISIS, nor our Cheeto-in-chief.
Then why is it that when we (and yes, you, right now) read about this threat, we laugh it off as dismissible? Like, “yeah OK, I wish a robot would”. How come the severity of this threat doesn’t hit home? Well, for one, I really can’t imagine my Roomba shanking me on my way home from work and raising my family for me.
To elaborate further, at the recent National Governors Association meeting (which I’ve linked the video to below), Musk mentions, “I have exposure to the very cutting edge AI, and I think people should be really concerned about it. I keep sounding the alarm bell, but until people see robots go around killing people, they don’t know how to react, because it seems so ethereal.”
“AI is a rare case where we need to be proactive in regulation instead of reactive because if we’re reactive in AI regulation it’s too late,” Musk said in a meeting of US governors. “Normally the way regulations are set up is a whole bunch of bad things happen, there’s a public outcry, and then after many years the regulatory agencies are set up to regulate that industry.”
Musk brings up an extremely valid point! How often do people wait to handle things until they’re at their absolute worst?! It’s one of our biggest failures as a society; we wait for things to completely fall apart before tending to them. For one, I’m not about to wait around until robots that look like Tyga go down the street killing everyone and everything I know. Now is the time to be proactive.
He added that what he sees as the current model of regulation, in which governments step in only after “a whole bunch of bad things happen,” is inadequate for AI because the technology represents “a fundamental risk to the existence of civilization.” He mentions, “once there is awareness, people will be extremely afraid, as they should be.”
Elon Musk delivers a very candid take on the state of the government, and on where we are as a society in the face of an AI apocalypse. But of course, this is just one man’s opinion, and Musk has always carried this aura of AI doom, once comparing work on AI to “summoning the demon”.
Battle of the Titans
Last week, tech giga-nerds Elon Musk and Mark “a billion dollars is cool” Zuckerberg entered into a public squabble about the future of artificial intelligence. Fortunately, not all tech bigwigs share Musk’s views when it comes to the future of AI; Zuckerberg is more of a glass-half-full kind of guy. Zucks was recently asked, “In a recent interview Elon Musk said his largest fear for the future was AI. What are your thoughts on AI and how could it affect the world?” In an uncharacteristically candid response, Zuckerberg said: “I have pretty strong opinions on this. I am optimistic. And I think people who are naysayers and try to drum up these doomsday scenarios – I just, I don’t understand it. It’s really negative and in some ways I actually think it is pretty irresponsible.” Zuckerberg believes that AI will have much less dystopian applications, and will be responsible for saving lives through disease diagnosis and by powering driverless cars. “One of the top causes of death for people is car accidents, still, and if you can eliminate that with AI, that is going to be just a dramatic improvement,” he said.
A day later, Musk had a comeback on Twitter: “I’ve talked to Mark about this. His understanding of the subject is limited.” (CUE THE AIR HORNS). He also added, “for sure the companies doing AI – most of them, not mine – will squawk and say this is really going to stop innovation,” as if he were expecting the backlash.
And this isn’t even their first dust-up. Back in September 2016, after a SpaceX launch failure destroyed Facebook’s internet-beaming satellite, Zuckerberg wrote on his Facebook page: “As I’m here in Africa, I’m deeply disappointed to hear that SpaceX’s launch failure destroyed our satellite that would have provided connectivity to so many entrepreneurs and everyone else across the continent.” First depriving African entrepreneurs of internet access, and now dissing Zuck’s intellect, Musk has drawn the battle lines. Your move, Mark. Silicon Valley ain’t big enough for the both of ya. Or maybe it is and I’m just really enjoying the heat and think this is the funniest argument since Kanye and Wiz.
So, where are we now? Facebook uses AI for targeted advertising, photo tagging, and curated news feeds. Microsoft and Apple use AI to power their digital assistants, Cortana and Siri. Google’s search engine has depended on AI from the beginning. All of these small advances are part of the chase to eventually create flexible, self-teaching AI that mirrors human learning. The field of AI is rapidly developing, but still far from the powerful, self-evolving software that haunts Musk.
Like Zuckerberg, other AI experts have also criticised Musk’s position, saying he is warning of a highly unlikely scenario grounded more in science fiction films than in reality. Still, Musk’s concerns have been shared by other scientists, including THE Stephen Hawking.
“Success in creating AI would be the biggest event in human history,” wrote Stephen Hawking in an op-ed, which appeared in The Independent in 2014. “Unfortunately, it might also be the last, unless we learn how to avoid the risks. In the near term, world militaries are considering autonomous-weapon systems that can choose and eliminate targets.” Professor Hawking added in a 2014 interview with the BBC that humans, “limited by slow biological evolution, couldn’t compete and would be superseded” by AI. Let that sink in.
Bill Gates has also echoed the concerns raised by Musk and Hawking. Gates said, “I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though, the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”
Despite the immense benefits that artificial intelligence could bring to humanity, the threats raised by Musk, Gates, and Hawking are real and worthy of our immediate attention. As AI technology moves steadily toward widespread implementation, it is becoming clear that robots are going to end up in situations that we as a society are not ready for, and don’t even know to expect. The ethical dilemma of handing moral responsibilities to robots calls for rigorous, fail-safe safety and preventative measures; the threats are too significant to risk. If we don’t act now and instead wait for the government to step in after something goes wrong, the results could be damning.
If you can’t beat ‘em…
Musk’s solution: if you can’t beat ‘em, join ‘em! That’s right, we must join forces with our future computer overlords, and through the singularity finally become all that we can be. If this sounds like sci-fi mumbo jumbo to you, it’s only because you’re out of the loop (read a book, bro). But this idea isn’t as far out as it seems.
Ray Kurzweil, Director of Engineering at Google, Queens, NY native, and another futurist ultra-genius you probably couldn’t hold a conversation with, has been talking about merging our brains with computers for, like, the longest time. To him, it’s the next logical step in the advancement of technology, and boy is he looking forward to it. Kurzweil has practically become the poster boy for the singularity, mainly for his optimism around how humans can wield the power of AI. However, Kurzweil does agree that the prospect of artificial intelligence is a frightening one, acknowledging the concerns of his peers. He said, “I tend to be optimistic, but that doesn’t mean we should be lulled into a lack of concern. I think this concern will die down as we see more and more positive benefits of artificial intelligence and gain more confidence that we can control it.”
So, connecting our brains to computers sounds lovely, but how do we do this? Kurzweil says, “In the 2030s we’re going to connect directly from the neocortex to the cloud; when I need a few thousand computers, I can access that wirelessly.” Yup, some sort of merger of biological intelligence and machine intelligence. This Vulcan mind-meld could literally hardwire your brain to communicate directly with computers. “We’re already cyborgs,” Musk said. “Your phone and your computer are extensions of you, but the interface is through finger movements or speech, which are very slow.” With a neural lace inside your skull, you would flash data from your brain, wirelessly, to your digital devices or to virtually unlimited computing power in the cloud. “For a meaningful partial-brain interface, I think we’re roughly four or five years away.” In the decades to come, an Internet-connected brain plug-in would allow people to communicate without opening their mouths and learn something in the time it takes to download a book.
Where do you see yourself five years from now?