Machine learning and automation have advanced considerably in recent years.
Once the preserve of the factory floor, they can today carry out a growing number of knowledge-based or rules-based tasks. This could have legal implications for organisations that deploy machine learning and automation technologies.
Automation has been affecting the global economy since the industrial revolution. Over the years, we've seen manufacturing production lines become increasingly dominated by machines that improve productivity, efficiency and quality.
Factories have had robots since 1961, when General Motors first deployed the Unimate, a one-armed machine used for repetitive tasks like welding. Today, robots dominate production facilities and are capable of complex, multi-step operations. They have increased productivity enormously. In car assembly, for example, the Nissan plant in Sunderland can make as many cars with 2,000 employees today as it took over 250,000 people to produce in the 1970s. Amazon.com’s Kiva robots, meanwhile, have increased warehouse productivity by over 400 per cent.
Robots are not only for large multinationals. The Baxter, from Rethink Robotics, is a two-armed automaton with an animated face that can perform a variety of tasks. Its price point is aimed directly at small and medium-sized businesses.
Robots are now moving beyond the factory floor and taking on increasingly complex and delicate tasks: modern robots are already employed as lettuce pickers and hospital porters.
As the focus in robotics moves increasingly to human interaction, robots are developing more human characteristics. Honda and Boston Dynamics, for example, are building robots that can walk, climb stairs and safely pick up fresh eggs. Soon, we’ll see these concept machines deployed as robotic pack mules, bartenders and even caregivers for the elderly.
However, the exponential increase in computing power, software and network connectivity is now spreading beyond the merely mechanical into white collar and knowledge work. Adoption of artificial intelligence (AI), in particular, is on the increase in a wide variety of fields and this is likely to have profound economic impacts in the years ahead.
From dumb to smart systems
While robotics has made great strides in augmenting human muscle, the capability to supplement the human mind will have even more profound effects. The effect of Moore’s Law (the observation that the number of transistors per square inch on integrated circuits - and hence the processing power of devices of a given size - doubles approximately every two years), combined with advances in software programming, means AI is improving rapidly.
It’s an uncomfortable fact that AI systems are now experts at what were once considered highly skilled tasks such as landing aircraft and trading shares. And, as computing power and network connectivity increase, these systems will continue their march toward domination of other rules-based activities - such as analysing legislation and judgments to form a view on the meaning of legal terms such as "beyond all reasonable doubt".
The difference between chess and go
When IBM’s Deep Blue supercomputer defeated the then world champion Garry Kasparov in a six-game chess match in 1997, many people saw the machine’s triumph as the beginning of the end for human intelligence. In fact, the result was largely a triumph for the brute force of a massive supercomputer rather than for artificial intelligence. Deep Blue could not display the true intuition required by the ancient game of Go, which is exponentially more complex than chess and was long considered the uncrackable gold standard for AI. For the time being, human intelligence still reigned supreme.
Over the past decade, AI researchers have been turning their attention to an approach known as machine learning (ML). Put simply, ML systems are fed with massive quantities of data or examples in the field under consideration and are continually taught what is right and wrong. Every single data point is a learning opportunity that incrementally improves the system’s performance.
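The incremental learning loop described above can be sketched in a few lines. The code below is an illustrative toy, not any vendor's system: a simple perceptron that updates its weights whenever it misclassifies a labelled example, so that every data point is a learning opportunity.

```python
# Toy sketch of incremental ("online") learning: a perceptron that
# improves a little each time it is shown a labelled example.

def train_online(examples, lr=0.1, epochs=20):
    """Update the weights one example at a time; every mistake
    is fed back into the system and nudges the model."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:  # label is +1 or -1
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else -1
            if pred != label:  # a mistake - a learning opportunity
                w[0] += lr * label * x1
                w[1] += lr * label * x2
                b += lr * label
    return w, b

def predict(model, point):
    (w, b), (x1, x2) = model, point
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else -1

# Toy data: points above the line y = x are labelled +1, below it -1.
data = [((0.0, 1.0), 1), ((1.0, 2.0), 1), ((1.0, 0.0), -1), ((2.0, 1.0), -1)]
model = train_online(data)
```

Real ML systems use far richer models and millions of examples, but the principle is the same: performance improves incrementally with each labelled data point.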
Examples of ML deployments include Facebook’s system for distinguishing images and Amazon Echo’s ability to understand and respond to voice commands.
In March 2016, Google DeepMind’s AlphaGo program used ML to defeat Lee Sedol, one of the world’s strongest Go players.
Not content with that milestone, researchers are now working on even more profound applications for ML.
ML or AI?
Early and accurate diagnosis of cancer is vital in modern healthcare. It’s also problematic. The first generation of AI approaches to this challenge used complex algorithms based on heuristics derived from medical textbooks and interviews with experienced diagnosticians. They provided some assistance to doctors, but suffered from a tendency to identify too many false positives. This resulted in unnecessary interventions for relatively harmless conditions.
The ML approach to identifying skin cancers, on the other hand, has taken images of tens of thousands of patients and taught the system which ones developed into cancer and which ones didn’t. From here, the system learned which pointers indicate the presence of problematic lesions.
When tested against a panel of senior medical professionals, the ML system consistently outperformed its human counterparts, both in identifying cancers and in reducing false positives.
Work is now being done at IBM’s Watson Health and Google’s DeepMind to link ML systems like this to natural language processing capabilities. This will enable the systems to read patient records, relevant textbooks and journals and access databases and the latest peer-reviewed research. No human could ever have this much data at their fingertips.
Another key advantage of ML approaches is the learning cycle. Any mistake is immediately fed back into the system, which is then asked, “What did you miss here?” This doesn’t always happen among humans.
Furthermore, every new person entering the medical field starts from scratch in terms of knowledge. ML systems, by contrast, continually improve with every case they handle. And, thanks to internet connectivity, every installation on a given network improves together.
Lastly, ML systems do not tire, become stressed or emotional or suffer cognitive overload. Academic studies have consistently shown that the performance of diagnosticians (as well as judges for that matter) varies during the day.
Is Skynet here?
What is true for the diagnosis of skin cancer today will become true in other areas tomorrow.
Leading AI and ML researchers claim there’s almost no area of knowledge work that they cannot enhance. They argue that any classification problem with large data sets has the potential to be solved by ML. Indeed, any problem that can be turned into a software and data problem can be addressed by ML – just witness the astonishing strides made in recent years in computational genomics and self-driving cars.
Also, consider the possibilities when you connect a smartphone to an ML system. The phone takes a picture, which is then uploaded to the cloud. The ML system receives the image and, using its centrally housed algorithm, analyses it. Depending on the system’s purpose, it may then diagnose a regional viral outbreak or, less dramatically, identify where preventative maintenance work is needed on infrastructure.
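As a rough sketch of that phone-to-cloud flow, the steps might look like the following. Every function name and the trivial keyword "analysis" are invented stand-ins: a real deployment would call an actual camera, cloud storage service and trained model.

```python
# Hypothetical sketch of the smartphone-to-cloud ML pipeline.
import hashlib

def take_picture(scene):
    """Stand-in for the phone's camera: returns raw image bytes."""
    return scene.encode("utf-8")

def upload_to_cloud(image_bytes):
    """Stand-in for the upload step: returns a storage reference."""
    return hashlib.sha256(image_bytes).hexdigest()

def analyse(image_bytes):
    """Stand-in for the centrally housed algorithm; a real system
    would run a trained ML model over the image here."""
    text = image_bytes.decode("utf-8")
    if "rash" in text:
        return "possible regional viral outbreak"
    if "crack" in text:
        return "preventative maintenance needed"
    return "no action required"

def pipeline(scene):
    image = take_picture(scene)          # captured on the phone
    reference = upload_to_cloud(image)   # the image leaves the device
    return analyse(image)                # analysed by the central algorithm
```

The point of the sketch is the division of labour: the phone only captures and uploads, while the intelligence sits centrally in the cloud.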
Having said all this, the all-powerful Skynet system at the heart of the Terminator movies is still a long way off. ML systems can become experts in narrow fields, but they can’t cope with work in an adjacent field. The algorithm for diagnosing skin cancer is of no use in diagnosing brain cancer, let alone trading shares. In short, these systems don’t have general intelligence or what we would think of as human cognitive ability. For the foreseeable future, we’re safe from Skynet.
The machine is not perfect
While the achievements of AI and ML are astonishing, the systems are far from flawless.
Firstly, even powerful algorithms can make mistakes, albeit at a lower rate than humans. A learning system left to mimic the world can adopt bad behaviours just as quickly as good ones, which is why there are now some nascent initiatives to develop "ethical codes" for AI.
Secondly, ML systems don’t have much in the way of explanatory power. The paradox is that the more powerful the technology becomes, the more opaque it can become. Even the engineers who create these systems aren’t always able to ascertain the criteria on which the systems base their decisions.
This raises the question of legal responsibility when algorithms are in control. Who will be liable if a self-driving car causes a fatal accident? The passengers can’t be held accountable for something they didn’t control, the manufacturer has no control over roads or driving conditions, and the car - intelligent though it may be - can’t be expected to explain its actions.
This may be why some motor manufacturers are thought to be using data from current driving activity to learn how humans respond to road conditions, on the assumption that this data will guide the programming of future autonomous vehicles.
The same can be said of an application that has some control over a patient’s treatment process or a risk assessment algorithm that decides whether convicts stay in prison or get released.
A related problem is that while ML systems can solve a case, they can’t build a case. This may be a critical flaw as in many situations the understanding of the underlying drivers is critical to creating new solutions or problem-solving approaches.
In short, where behaviour remains emotional or irrational, ML will struggle for some time.
Machine learning and artificial intelligence are taking robotics to a new level. As well as repetitive manual tasks, these technologies can now power knowledge-based functions such as diagnosing cancer and stockbroking. In the near future, their influence is set to become even more profound as their capabilities and uptake increase. The technology, however, is not flawless, and its shortcomings could have significant implications for in-house lawyers at organisations that deploy ML or AI.