In The Economist of 4 September 2021, an article was published under the title ‘Is a self-driving car smarter than a seven-month-old?’. If you guessed, from the way the question is phrased, that the answer was ‘no’, you would have been entirely correct. Ultimately, the article (intentionally or not) conveys the implicit message that these self-driving things are dangerously immature and should not be allowed on our roads.
The author’s point is that human children generally learn the concept of ‘object permanence’ around the age of seven months. They realise that an object that disappears from sight is nevertheless still there. A toy that is covered by a blanket isn’t gone, it simply isn’t visible for the time being. A self-driving car, on the other hand, doesn’t ‘know’ or ‘realise’ that a cyclist who disappears behind a parked van is actually still there and will re-appear momentarily from behind the van. Hence the author’s perceived need to improve the ‘intelligence’ of self-driving cars.
There are a number of issues with this approach and with the (implicit) message that the article conveys. For completeness’ sake, it is not a given that self-driving cars do not apply the concept of object permanence or object tracking. There are many different systems out there, the inner workings of which are not always entirely known to anyone other than the development team. But even assuming that self-driving cars indeed do not know the concept of object permanence, the question and the comparison are still wrong.
The end goal of self-driving cars – or of other AI applications such as the software that beats Chess or Go champions, speech-recognition software, medical diagnosis software, etc. – is not to create a real-world C-3PO or a robot that can carry on a witty conversation with Will Smith. A self-driving car is a specialised system, built to perform a specific task: go from A to B, while complying with the traffic rules and without crashing into someone or something. Obviously, self-driving cars today are not perfect (yet), but the question is not how they compare to young children. Computers are able to process millions of instructions per second, but even computers do not have unlimited processing power. A three-year-old will be able to tell you whether the animals in the field next to the road are cows or sheep, whether the headlights of an oncoming car are round or square, and whether the moon is out at night. Your self-driving car will not be able to do any of these things – because there is no point in a self-driving car being able to do so, and it would simply be wasting its processing power if it were.

With object permanence, the question is not whether a seven-month-old has acquired this concept; the question is whether implementing this concept results in a better system. Is the cost in processing power offset by the improvement in the efficiency of the system? It is by no means certain that this is the case. The technical paper that inspired the article in The Economist[i] reports a 5% increase in tracking performance. Tracking performance is only one of the factors in the overall performance of a self-driving car, and maybe, if the processing power required to implement object permanence were applied to other factors (forward scanning, speed of reaction, etc.), the overall performance of the self-driving car would improve more than it does by implementing object permanence. After all, object permanence is not always relevant. The cyclist who disappears behind the parked van may stop there, because that’s where he lives, or he may turn into a bike lane, and thus never re-appear from behind the van. In such cases, tracking his possible trajectory is just a waste of resources.
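For readers who want a concrete sense of what ‘implementing object permanence’ means in a tracking system, the sketch below is a purely hypothetical illustration (it is not taken from the paper cited above): a toy tracker that remembers objects it can no longer see and extrapolates their position for a limited number of frames before giving up. The extra memory and extrapolation are exactly the bookkeeping that costs processing power.

```python
# Hypothetical illustration only: a toy tracker with 'object permanence',
# i.e. it keeps believing in objects that have disappeared from view
# (e.g. a cyclist behind a parked van) for a limited number of frames.

from dataclasses import dataclass

@dataclass
class Track:
    x: float          # last known position along the road (metres)
    vx: float         # estimated speed (metres per frame)
    missed: int = 0   # consecutive frames without a detection

class PermanenceTracker:
    def __init__(self, max_missed=30):
        self.max_missed = max_missed   # how long to keep 'believing' in an unseen object
        self.tracks = {}               # object id -> Track

    def update(self, detections):
        """detections: dict mapping object id -> observed position this frame."""
        for obj_id, x in detections.items():
            if obj_id in self.tracks:
                t = self.tracks[obj_id]
                t.vx = x - t.x          # crude velocity estimate
                t.x, t.missed = x, 0
            else:
                self.tracks[obj_id] = Track(x=x, vx=0.0)

        # Object permanence: objects that vanished are kept alive and moved
        # along their last known velocity, at the cost of extra computation.
        for obj_id, t in list(self.tracks.items()):
            if obj_id not in detections:
                t.missed += 1
                t.x += t.vx             # extrapolate the hidden position
                if t.missed > self.max_missed:
                    del self.tracks[obj_id]   # give up on the object
```

The `max_missed` cut-off is the cost-benefit question in miniature: every frame a hidden cyclist is kept alive costs memory and computation, and he may never re-appear at all.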
The correct question, therefore, is not how we can make the self-driving car more “intelligent”, but how we can make this task-specific system more efficient – in every possible meaning of the word. Actually, the road infrastructure today is limited because it has to work with the limitations of human drivers. Traffic lights, for example, are visual signals, which only work within line of sight. Even the most proficient and attentive human driver doesn’t know what the traffic light around the corner is showing. Traffic lights might be obscured by fog, overhanging branches, etc. In a digital world, traffic lights can broadcast their information – data signals are not stopped by fog and can go around corners. Your self-driving car of the future will know whether the traffic lights located two intersections down your path are red or green. Self-driving cars can broadcast their ‘intentions’, i.e. the route they will be taking in the next few minutes, to the traffic around them (ships today already do so, to a limited extent, with AIS); local repeaters can greatly increase the accuracy of GPS positioning; and so on. Again, with all of these factors, the question is not how to make self-driving cars more “like humans” or more “intelligent”; the question is how we can improve the efficiency of the system. The same applies to any autonomous vehicle, be it on land, on water or in the air.
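To make the contrast with purely visual signals concrete, the sketch below shows what a ‘broadcasting’ traffic light could look like in code. It is only illustrative: the message fields, the port number and the use of a simple UDP broadcast are assumptions made for the sketch, not a description of any existing vehicle-to-infrastructure standard.

```python
# Hypothetical illustration only: a traffic light that broadcasts its state
# as a small data message, so that vehicles beyond line-of-sight can read it.
# Field names, the port and the UDP transport are assumptions for this sketch.

import json
import socket
import time

BROADCAST_ADDR = ("255.255.255.255", 47001)   # assumed port, for illustration

def broadcast_signal_state(intersection_id, phase, seconds_to_change):
    """Send the current light phase as data rather than as a visual signal."""
    message = {
        "intersection": intersection_id,
        "phase": phase,                        # e.g. "red" or "green"
        "seconds_to_change": seconds_to_change,
        "timestamp": time.time(),
    }
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(json.dumps(message).encode("utf-8"), BROADCAST_ADDR)

# A car two intersections away can receive this message regardless of fog or
# obstructions, whereas a camera needs a clear line of sight to the light.
if __name__ == "__main__":
    broadcast_signal_state("junction-42", "red", seconds_to_change=12.5)
```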
All of these are, of course, primarily design and technical issues, but they should matter to lawyers too. Autonomous vehicles will not be perfect and accidents will happen. Even the aviation industry, which by now has decades of experience with automation and computerised systems, still has accidents. The Boeing 737 MAX scandal was a very sad reminder of that. Self-driving cars and autonomous vessels will not have to relearn all of the lessons that aviation has learned over the years, but nevertheless, it is unrealistic to expect that everything will work as expected right from the start.
When accidents happen, lawyers immediately start thinking about negligence, fault and liability – as they should, of course. But if their minds have already been skewed by news stories about making autonomous vehicles more like seven-month-olds, chances are that their appreciation of negligence will not be correct. It goes without saying that designers and developers can indeed go wrong, but their performance should be measured against the correct yardstick, not against a misleading one.
Dr Frank Stevens is an Associate Professor at Erasmus School of Law, Rotterdam.
[i] Jakob Suchan, Mehul Bhatt & Srikrishna Varadarajan, ‘Commonsense visual sensemaking for autonomous driving – On generalised neurosymbolic online abduction integrating vision and semantics’ (2021) 299 Artificial Intelligence 103522