In Davos right now, the world’s best and brightest economic minds are gathered for their annual bout of elite networking.
You know you’re not invited because a ticket costs $35,000, and that’s before the cost of membership, which is also required, and even more expensive.
But we do get news reports from the proceedings, and the most interesting one today concerns the World Economic Forum’s recent report claiming the biggest risk in 2017 is people losing their jobs to robots. The word out of Davos is that we have nothing to fear: 80% of companies adopting artificial intelligence have pledged to retain and retrain existing staff.
If you don’t believe them, you might find some comfort in a story about Donald Trump that’s been kicking around for a couple of years and is, well, intriguing. It pops up every now and then across various media outlets and goes something like this:
The next Leader of the Free World has never used a computer.
It’s great fun (Matt Novak has tenaciously taken up the baton at Gizmodo), and not at all as far-fetched as you might be thinking right now.
We know Trump tweets, badly. But it is actually surprisingly difficult to find evidence of him looking comfortable behind a MacBook. He was once pictured reading a Huffington Post article, but it was a copy someone had printed out for him.
Russian President Vladimir Putin is equally unlikely to be found cosying up to technology. He especially hates how insecure the internet is, and sent his congratulations to the US President-Elect via telegram.
Putin’s mistrust is more rooted in the destabilising effect of technology; it’s a different story when it comes to Russia’s military advancement. But one thing is clear – the two most powerful men in the world are, at some level, technophobes. Right at a time in history when it’s not unusual to take your dog for a walk somewhere in Massachusetts and come across this:
It’s comical to watch, but also a little unsettling.
Or a little thrilling, if you’re on the other side of the movement, the technophiles. Just like the share market has its bears and bulls, robotic history has waxed and waned in favour of either the technophobes or technophiles.
The technophobes usually have the upper hand when proper disruption comes into play and the number of people displaced from their jobs runs into the millions.
In the late 1500s, Queen Elizabeth I refused a patent for a knitting machine because of the poverty it could cause. The Luddites of the early 1800s simply went on the rampage and destroyed automated looms.
But right now, and for much of the past century, the technophiles have been winning. The vast majority of press about current advancements is positive. Self-driving cars will enhance our productivity and safety. Robots will take all our jobs, but only the ones we don’t want to be doing anyway.
Drones are now delivering pizzas, and Domino’s shares are in high demand. Supply creates demand, demand creates more jobs elsewhere, and humans and robots seem to be getting along just fine.
Maybe we can handle this.
Or maybe, when the technophobes are suddenly saying “told you so” again, it won’t simply be a case of humans finding better ways to spend their thinking time while the bots do all the knitting and copying endless manuscripts.
Maybe it will be the time when change happens so quickly, and is so profound, that the technophiles will be wondering why we didn’t listen to Elon Musk back then. Or Bill Gates. Or Steve Wozniak and the 1000 other science and technology leaders so worried about the rise of AI that they put their names to an open letter warning about it.
The UN is listening, because the argument against AI goes far deeper than robots putting us all out of work. Just before Christmas, at the UN Convention on Certain Conventional Weapons in Geneva, the 123 participating nations voted to look at the possibility of banning autonomous robots that can select targets without human control.
Yes, there are countries that want robots to not only fight the wars, but also have the power to choose who dies and when.
Out-think
In movies, the robots rarely win. The popular Hollywood perception of robots and why humans will always come out on top is based on logic – a computer is constrained by it, whereas humans have the ability to choose actions which do not compute, so to speak.
For example, the T-1000 is unstoppable, and is built to relentlessly pursue a target through any obstacles in its way. But it is also unaware of the danger it places itself in by battling the T-800 – Arnold Schwarzenegger – at the edge of a pool of molten metal.
The T-1000, like all Terminators sent on solo missions, has its machine learning capacity set to “read only” because its maker, Skynet, doesn’t want its machines to suffer from thinking too much. The T-800 had its AI turned back on by John and Sarah Connor, and can learn from watching humans.
So the message is: humans, with all their life experience, will always find a way to beat a machine which can only act within the parameters set by its programmer. Fortunately, like overly trusting pets, the bots are still prone to fundamental, comical oversights:
Haha.
Don’t get too comfortable, though. We’ll return to that particular GIF shortly.
In March last year, Google’s AI bot AlphaGo beat world champion Lee Sedol 4-1 in a five-game series at Go.
Now, computers have been beating human world champions at chess since 1997, mainly because as the game wears on, there’s an increasingly limited number of rational moves you can make. At grandmaster level, you lose a game of chess by losing your nerve and making a mistake.
Right now, in Pittsburgh, the humans are making a comeback after six days of battling poker bot Libratus in an epic No-Limit Heads-Up Texas Hold ’em showdown.
But Go is a game with more board positions than there are atoms in the universe, and an enormous number of viable moves at any given moment. You don’t win by playing Go; you win by out-playing your opponent.
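If you want to sanity-check that claim, the arithmetic is short enough to run yourself (the legal-position count is John Tromp’s; the atom figure is the usual rough estimate for the observable universe):

```python
# Upper bound: each of Go's 19x19 = 361 points is empty, black or white,
# so there are at most 3**361 arrangements. The number of *legal* positions
# is about 2.08e170 (Tromp's count). Either figure dwarfs the ~1e80 atoms
# estimated to be in the observable universe.
upper_bound = 3 ** 361
print(f"3^361 ~ {float(upper_bound):.2e}")  # ~1.74e172
print("legal Go positions ~ 2.08e170")
print("atoms in the observable universe ~ 1e80")
```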
There were moments where AlphaGo didn’t just outmanoeuvre Sedol. Onlookers say they could almost see it learning on the spot.
In this wonderful breakdown of the series, David Ormerod wrote that watching AlphaGo’s Game 3 win made him feel “physically unwell”. Sedol himself appeared at post-match pressers visibly depressed and bewildered.
DeepMind, the company behind AlphaGo, bought by Google in 2014, has a motto:
“Solve intelligence, use it to make the world a better place.”
This year, we learned that DeepMind isn’t waiting for your opinion on how far it should go in training machines to think. A few weeks ago, online Go players were getting annihilated by a mystery player calling itself “Master”:
After it was clear no one could defeat Master, DeepMind came clean – it had let AlphaGo off the leash and onto the internet to test a few updates.
So DeepMind – Google, that is – is not, it appears, strictly constraining its AI’s interactions with the real world.
And remember, this is the concern of 1000 of our greatest scientists and technologists. One of them – Musk – has a stated fear of AI which is much more grounded in reality than the Skynet self-awareness scenario. He told Fortune:
“If you were a hedge fund or a private equity fund thinking ‘what I want my AI to do is maximise the value of my portfolio’ then the AI could decide well the best way to do that is to short consumer stocks, go long defence stocks, and start a war.”
He won’t name it, but Musk says there’s only one company he fears right now when it comes to pushing the AI envelope too far. When Recode’s Walt Mossberg pressed him to name it as Apple, Musk took “a lengthy glance at the floor” and repeated his answer.
“There’s only one.”
Musk himself had a stake in DeepMind before Google bought it, just to keep an eye on it.
Arms and the machine
Let’s, for fun, assume some form of AI started some form of a war. Could it fight it?
A strategic board game is one thing. Six months after AlphaGo toppled Lee Sedol, Carnegie Mellon revealed it had built an AI which could beat humans at an entirely different type of game – Doom.
Video games have had computer characters which hunt down the real players since, well, Pac-Man. But like AlphaGo, Carnegie’s Doom bot learnt on the run by watching the human players. It got rewards for moving and killing, reprimands for getting hit and dying. And it evolved into a champion killing machine.
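Carnegie Mellon hasn’t published its bot as a neat snippet, but the reward structure described above is easy to sketch. Here is a minimal, hypothetical version in Python – the field names and weights are invented for illustration, in the spirit of the VizDoom research platform such bots are trained on:

```python
# A minimal, hypothetical sketch of the reward shaping described above
# (invented field names and weights, not Carnegie Mellon's actual code).
def shaped_reward(prev, curr):
    """Score one step of play from two consecutive game-state snapshots."""
    reward = 0.0
    reward += 1.0 * (curr["kills"] - prev["kills"])           # reward kills
    reward += 0.01 * (curr["distance"] - prev["distance"])    # reward moving
    reward -= 0.05 * max(0, prev["health"] - curr["health"])  # penalise hits
    if not curr["alive"]:
        reward -= 1.0                                         # penalise dying
    return reward

# One step in which the agent scores a kill, advances 2 units, takes 10 damage:
print(shaped_reward(
    {"kills": 2, "health": 90, "distance": 40.0, "alive": True},
    {"kills": 3, "health": 80, "distance": 42.0, "alive": True},
))  # 1.0 + 0.02 - 0.5 = 0.52
```

An agent trained to maximise that running score, with no other instruction, gradually works out that moving, shooting and staying alive are what pay.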
This is where we start to talk about Asimov’s laws, particularly that one about how a robot “may not injure a human being or, through inaction, allow a human being to come to harm”.
Clearly, this isn’t enshrined anywhere. Engineer Alexander Reben proved it in the simplest fashion, designing a robot that can hurt you, just because he felt we all needed the shock:
Nobody should ever take it for granted that humans will never program a robot to hurt humans.
Outshoot
This is the Super aEgis II:
It doesn’t look particularly creepy, granted. We’ve all seen big guns. And the Super aEgis II is six years old, so hardly new.
It’s a gun turret that can lock onto a target 3km away. It can warn the target – verbally, via a loudhailer – that, if it doesn’t go back to where it came from, it will open fire. And it won’t miss.
Manufactured by DoDAAM, the Super aEgis II can be found on the South Korean side of the DMZ. At about $45 million apiece, the weapon has also been rolled out in more than 30 similar locations worldwide.
But for it to fire, a human must enter a password and press the button. And that’s where things start to get a bit ropey. DoDAAM says all its customers have asked for the human confirmation element.
However, it told the BBC last year it is already developing “smart devices that are able to make their own decisions”. It’s safe to assume that’s because some buyers have expressed an interest in them.
Germany already has a similar product called MANTIS, and Israel has its Iron Dome system. Both are autonomous systems and don’t require human input to fire, but they’re both designed purely for defensive scenarios.
Perhaps, though, it’s a bit naive to focus on industrial military design when even hobbyists in the US are knocking things like this together in their garages:
Clearly, getting machines to do the shooting is the easy part. This, however, is causing a few jitters:
It’s Taranis, the UK-developed drone which, unlike Predator and Reaper drones, can “deter” enemy aircraft. And like DoDAAM, Taranis’ maker, BAE, is “working on the basis that capability for autonomous strikes might be needed in future”.
Algorithms deciding who gets to live or die? We’ll find out sometime between now and 2030, when Taranis officially joins the RAF.
Boston Dynamics
So that’s the brains sorted; now for the bodies. At the moment, in terms of creepiness, there’s the kind favoured by Asian robotics labs – lifelike humanoids such as Jia Jia:
“Yes, my lord.”
But that’s pants. Any makeup artist can stretch a skin suit over a remote-controlled Dexter and wow senior management at a trade conference. Although Sophia’s stated desire at 2016’s SXSW to destroy all humans is worth an insert right now:
But here’s what happens when you focus your energy on getting a robot to think on its feet – literally:
That’s Boston Dynamics’ ATLAS robot. Boston Dynamics gets its own section for several very good reasons. Originally spun out of the Massachusetts Institute of Technology in 1992, BD specialised in military robot technology, drawing huge chunks of funding from Pentagon defence research agency DARPA for its work.
Its first rock star was BigDog, designed as a kind of modern day army pack mule for all those long marches. That makes sense.
Why Boston Dynamics saw the need, in 2012, to make a robot that can run faster than Usain Bolt is anybody’s guess:
But it started a development race of sorts and two years later, MIT’s untethered version of the robot cheetah was making running jumps:
Now we have robots that can do parkour, swim without any kind of thrust mechanism, and watch and interact with the people around them.
This is Mimus, the brainchild of researcher Madeline “The Robot Whisperer” Gannon. Take a minute to see just how much potential is packed into this 5-minute rundown of how a robot can be taught to watch and copy humans:
Boston Dynamics was bought by Google in 2013, and there are indications Google is now kind of wishing it hadn’t. In March last year, Bloomberg claimed that Google was trying to spin Boston Dynamics out to companies such as Toyota and Amazon.
The economic line is that the new corporate structure at Alphabet didn’t have much tolerance for technology that burned millions rather than earning them.
Former Android chief Andy Rubin had organised the deal and a new division, with 300 robotics staff, was created and called Replicant. But Rubin left in October 2014 and sources told Bloomberg that Boston Dynamics executives were unwilling to work with Google engineers who wanted products ready for market.
Replicant was folded into Google X, and Boston Dynamics was left to its own devices.
Until early last year, when it released that video of ATLAS walking through the snow, taking hits from humans, and packing shelves. Boston Dynamics suddenly found itself on the outer at Google for reasons other than a poor bottom line.
“There’s excitement from the tech press, but we’re also starting to see some negative threads about it being terrifying, ready to take humans’ jobs,” Courtney Hohne, a director of communications at Google, wrote.
Boston Dynamics is still with Google, but the only robot it has (publicly) demonstrated since ATLAS’ outing is the one you saw above, slipping on a banana peel. As a household companion which cleans and fetches things for its master, SpotMini is a fairly radical departure from military pack mules, sprinting cheetahs and abuse-resistant mechanoids.
But that doesn’t mean Boston’s technology isn’t still evolving. Here’s what SpotMini did next:
Obviously, we’re teaching robots how to do such things so they can help humans. But we’re also teaching them how to get along without human help.
You don’t need to read the stories about the Russian robot which “escaped” from its laboratory twice last year, mainly because an open door turned into a great PR coup for its makers.
You don’t need to watch how the Pentagon is now very, very interested in the potential of hundreds of miniature drones once they learn how to swarm together:
All you need to know about how robots will get around without slipping on banana peels or falling in vats of molten metal – once they’ve escaped their power leads – is in the dashboard of every Tesla. It’s in the AI Elon Musk’s team has developed to keep humans inside them safe.
You might call that irony.
Lights out
There’s a way of looking at automation that characterises it as building a better world by replacing our own selves.
DeepMind’s creators are teaching its AI system how to lip read.
Thanks to Oxford University, it can now lip read words from 118,000 lines of BBC programs which aired between January 2010 and December 2015. When it was tested on live broadcasts between March and September 2016, the AI “blew away professional (human) lip readers”.
The company also signed a deal with the UK’s public healthcare system, the NHS, which gives it access to patient records, real-time information on patients’ conditions, blood tests and other pathology data. It doesn’t – yet – involve any links with its AI system.
DeepMind thinks AlphaGo’s intuition can be applied to virtually any task – even StarCraft II. It has suggested managing the UK’s power grids, as an example, after helping Google cut its huge electricity bill.
Think about that – a robot in charge of our power grids; an algorithm in control of our most essential infrastructure.
You’ve seen that play out in several apocalyptic sci-fi films. When a computer takes emotion out of the equation and considers the cold, calculated course of action, you end up with Ultron trying to kill all the Avengers.
Yes, this conversation takes a much deeper, more sinister turn for those who believe in the concepts of self-awareness and the Singularity. That’s a whole other rabbit hole to go down, so you can forget that bit about Ultron for now.
Mainly because the war – of sorts – has already begun.
The AI race is the hottest on the planet right now, and Facebook in particular is piling millions into making sure the brightest minds in the world are at Facebook AI Research and not at DeepMind’s London HQ.
The Economist recently reported that DeepMind, founded by Demis Hassabis, Mustafa Suleyman and Shane Legg in 2010, could see its team of computer scientists and neuroscientists expanded from 400 to 1000 people in the next year.
Not only is it wrangling with Facebook for talent, it’s also pulling hires out of Oxford and Cambridge universities.
Elon Musk also has a stake in another AI company, Vicarious, along with Mark Zuckerberg and actor Ashton Kutcher. According to The Guardian, its ultimate aim is to build a “computer that thinks like a person”. Basically, Vicarious is building a neural network which can control vision, movement, and language. A rudimentary brain, in other words.
That’s Google, Facebook and Elon Musk all now racing to see who can build the best artificial intelligence system on the planet. We didn’t even get to Amazon or IBM’s Watson.
Now, then; then, now
Here’s something to think about. In the US last year, the nation’s third-biggest killer – behind heart disease and cancer – was human error.
According to research published in BMJ last year, medical error accounts for 9.5% of all deaths in the US annually – roughly 250,000 of the 2.6 million or so Americans who die each year. That’s a quarter of a million people who go to hospital for treatment and may be dying unnecessarily.
Watson Healthcare executive director Thomas Balkizas told Access AI that 80 per cent of all decisions made by physicians are “based on an educated guess”.
It doesn’t have to be this way, Balkizas says. Watson – the computer that won Jeopardy – can, within a few seconds of accessing a patient’s every medical record, offer a suitable treatment that is “100 per cent evidence-based”.
Watson’s “training” has only really just begun in selected cancer treatment hospitals around the world, but Balkizas says Watson has recommended changes to “around 46 per cent” of first-line treatments.
It’s one of many, many ways AI can, and will, improve the human condition.
Here’s another, from ASX-listed company Fastbrick Robotics:
“Hadrian” can lay as many bricks in an hour as two human brickies can in a day. And given it doesn’t need smokos and knock-off, that’s 48 days’ worth of human labour knocked over every 24 hours.
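The arithmetic behind that figure is simple enough to check:

```python
# Quick check on the claim above: one Hadrian-hour equals two brickie-days
# of output, and the machine can run around the clock.
brickie_days_per_hadrian_hour = 2
hours_worked_per_day = 24  # no smokos, no knock-off
print(brickie_days_per_hadrian_hour * hours_worked_per_day)  # 48 brickie-days
```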
But as Trump, the technophobes and the Luddites rush for their pitchforks and flaming torches, the technophiles have plenty of ammo in their corner too.
Consider all the applications kickstarted by building a house 50 times faster as the construction industry goes into overdrive. Housing may even become affordable again.
More importantly, disaster and war relief is changed forever, for the better. One of Australia’s largest ethical investors, Hunter Hall, sees potential for good in the technology – it’s pitched in $8 million.
In fact, the IBM Watson AI XPrize has $US5 million to hand out to any teams that use AI to solve some of the world’s biggest problems, including poverty, health and sustainable energy.
“There’s only one”
IBM’s chief innovation officer Bernie Meyerson says the real enemy to progress is the reams of data modern technology is producing, and the human brain’s inability to process it effectively.
AI – or “cognitive technology”, as IBM prefers to call it – brings order to chaos. Why rely on a diagnosis based on six medical research papers a month, when you can scan half a million in 15 seconds?
As the US mortality figures prove, it’s risky not to.
Musk’s own AI experiment, known as OpenAI, is committed to building “safe AI”. And to be fair, at the time Google paid £400 million for DeepMind, it also set up an “AI ethics board” to ensure the technology is rolled out carefully.
That’s all well and good – along with the promises of the people at Davos that they won’t let AI take our jobs.
But what happens when the technology starts rolling out the technology? When the AI starts to decide for itself how best a business – and our lives – should be run?
That brings us to the other big AI breakthrough in 2016. The one that didn’t make the headlines.
I think, therefore I am
You’ve probably used Google’s translation service in the past and been mildly impressed or mildly amused by its efforts. It has shortcomings, which is not surprising given phrases in one language can’t simply be substituted word-for-word into phrases in another language.
As 2016 came to a close, Google’s translation service had realised this, so it quietly sat in the corner and came up with a new way of learning – all on its own.
This was the real breakthrough in machine learning in 2016. When even Google’s engineers were in equal parts impressed and mystified by how well Google’s Multilingual Neural Machine Translation System was suddenly performing, they “took a peek” into its system.
They were astonished to find the GNMT had bypassed the traditional method of memorising phrase-to-phrase translations. Sometime in September, it seemed to have made its own decision to invent an “interlingua” – a universal “language” that acts as a waypoint between any two other languages.
Here’s what Gil Fewster at freeCodeCamp had to say about that:
“Let it sink in. A neural computing system designed to translate content from one human language into another developed its own internal language to make the task more efficient. Without being told to do so. In a matter of weeks.”
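The mechanism that makes this possible, as described in Google’s own paper on the multilingual system, is surprisingly simple: one shared model is trained on several language pairs at once, with the desired output language signalled by a tag token prepended to the source sentence. Here’s a toy illustration – the sentences and tags are invented for the example, not Google’s actual data:

```python
# A toy sketch of the multilingual training setup behind zero-shot
# translation (invented examples, not Google's code). A single shared model
# sees every pair, with the target language given by a tag token.
train_pairs = [
    ("<2es> hello",   "hola"),    # English -> Spanish
    ("<2en> hola",    "hello"),   # Spanish -> English
    ("<2es> bonjour", "hola"),    # French  -> Spanish
]
for source, target in train_pairs:
    print(f"{source}  =>  {target}")

# Because every sentence, whatever its language, passes through the same
# shared encoder, sentences that mean the same thing drift toward similar
# internal representations - the "interlingua". That shared space is what
# lets the trained model attempt "<2en> bonjour" (French -> English), a
# pair that never appears in its training data.
```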
One of the biggest brains on the planet, Stephen Hawking, has been warning about this for years. In 2015, when asked how AI could become smarter than its creator and pose a threat to the human race, he wrote:
It’s clearly possible for a something to acquire higher intelligence than its ancestors: we evolved to be smarter than our ape-like ancestors, and Einstein was smarter than his parents. The line you ask about is where an AI becomes better than humans at AI design, so that it can recursively improve itself without human help.
If this happens, we may face an intelligence explosion that ultimately results in machines whose intelligence exceeds ours by more than ours exceeds that of snails.
Why am I here?
Humans have spent thousands of years proving they don’t need AI to wipe each other out, with no sign of slowing down. So really, what have we got to lose by pushing technology to its limit?
But even if humans maintain control of the tech they’re building, that’s still no reason to think all we’re progressing toward is some kind of Wall-E-style utopia where the heavy lifting is done for us. Humans need a purpose in life, and it’s a fair bet that VR adventures and upgrading the odd body part aren’t going to keep us entertained for long.
In China, Foxconn – the world’s 10th largest employer – has already replaced 60,000 workers with robots. In December, analysts at global investment manager Bernstein just came out and said it straight – the “age of industrialisation is coming to an end”, and robots are now set to destroy manufacturing jobs globally.
By the end of March, Watson’s AI will see 34 finance workers replaced at Japanese firm Fukoku Mutual Life Insurance.
When people think of AI and self-awareness, they prefer to think of the failed experiments, like Ultron and the T-1000. They rarely think of one-armed bricklaying machines and Amazon’s Kiva warehouse helpers:
But it’s those bots, the ones which chip away at the current purposes of people’s lives, and the increasingly sophisticated AI systems which will soon be driving them, that are the creepiest of the lot.
Well, with one exception: