How AI might kill you today, but not through the cold metal hands of a killer robot 

Image by Gerd Altman, Pixabay

Article by Dr Robert O’Toole (a philosopher with a master’s in AI, who has worked on AI in financial services and now works at the interface between tech, education, and design research).

Intelligence = decision making by an organism in an environment, based on perceptual inputs, knowledge, and cognitive processes, to achieve the organism’s goals.

Artificial intelligence = smart decision making offloaded to a machine, to reduce human workload or to achieve otherwise impossible tasks.

Emergent artificial intelligence = smart decision making that follows patterns that were not designed-in by human intelligence, but which emerge by chance, often as a result of AI optimising its methods through many trial and error cases.

Artificial intelligent life = artificial intelligence that behaves as an organism, developing its own goals (including self-preservation), and acting to achieve them.
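To make the first two definitions concrete, here is a minimal sketch of the perceive, decide, act cycle they describe. It is a toy thermostat-style agent invented purely for illustration; every function and variable name is mine, not a standard.

```python
# A toy illustration (not a real system) of the agent model behind these
# definitions: perceive the environment, decide using knowledge and a goal,
# then act to change the environment.

def perceive(environment):
    """Gather perceptual inputs from the environment."""
    return {"temperature": environment["temperature"]}

def decide(percepts, knowledge, goal):
    """Choose an action from percepts, stored knowledge, and the goal."""
    if percepts["temperature"] < goal["target_temperature"]:
        return "heat"
    return "idle"

def act(action, environment):
    """Carry out the chosen action, changing the environment."""
    if action == "heat":
        environment["temperature"] += 1

# One decision cycle for a thermostat-like "organism".
environment = {"temperature": 18}
knowledge = {}  # prior knowledge would live here in a richer agent
goal = {"target_temperature": 21}

percepts = perceive(environment)
action = decide(percepts, knowledge, goal)
act(action, environment)
print(action, environment)  # heat {'temperature': 19}
```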

Morality Tales

Remember: the goal of the tech industry isn’t to make machines as intelligent as smart humans; that’s just too hard. The goal is to make humans as dumb as the machines and, in so doing, to create new billionaires – something it seems to be rather good at.

On the 29th of October 2018, AI caused the deaths of 189 people in the Java Sea. The following year, on the 10th of March, the same AI killed another 157 people in Ethiopia. Of course, we cannot blame the algorithm, as if it were a person (with either murderous intent or criminal negligence). The blame was attributed, in the end, to Boeing’s deadly corporate culture, which led designers and engineers to break one of the airline industry’s golden rules: never implement an automated decision-making system in which the pilots do not understand what decisions are being made, why they are being made, what the consequences are, and how to override the artificial intelligence.

Boeing’s revolutionary new Manoeuvring Characteristics Augmentation System (MCAS) broke the golden rule, with deadly consequences. The plane behaved in unexpected ways in response to a failure in one of its sensors. The pilots didn’t understand what was going on, and responded with control inputs that made things worse.

During the design and certification of the Boeing 737-8 (MAX), assumptions were made about flight-crew response to malfunctions that, even though consistent with current industry guidelines, turned out to be incorrect.

NTSC final report on the crash of Lion Air Flight 610

The unexpected behaviour of the control system may have emerged out of its complexity (maybe hinting at emergent AI), but it also seems that the pilots did not fully understand the system. As emergent AI becomes more common, this will get worse – so we need to be ultra-cautious about where we allow it to happen.

Most alarmingly, this mistake had been made before. As autopilot systems became more sophisticated, accidents in which pilots and machine intelligence fought against each other became a recurring issue – the pilot would misunderstand the behaviour of the plane and the decisions made by the autopilot, and literally fight against it, putting in ever more drastic control inputs, with the autopilot responding in ways that the pilot couldn’t understand. This mismatch between the pilot’s mental model of how the plane should fly and the autopilot’s adaptation to the situation could cause an out-of-control series of rapid changes. The psychologist of design, Donald Norman, wrote about these ever-worsening mismatches in his 1988 book The Design of Everyday Things.

So, the rule of good autopilot design is: the pilots need to understand and predict it, know when to switch it off and how to regain control, and be able to operate without it.
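To make that rule concrete, here is a deliberately simplified sketch in Python, invented for illustration only; it is not Boeing’s MCAS logic and not real avionics code. The function name, sensor names, and thresholds are all assumptions. It shows the shape of an automation loop that satisfies the rule: it cross-checks its two sensors, explains every decision it makes, and always yields to the pilot.

```python
# A deliberately simplified, hypothetical sketch (not real avionics logic) of an
# automation loop that follows the golden rule: cross-check inputs, explain each
# intervention, and always yield control to the pilot.

def trim_nose_down(sensors, pilot_override, disagree_limit=5.0):
    """Return (trim command, explanation) so the crew can see what is decided and why."""
    left, right = sensors["aoa_left"], sensors["aoa_right"]

    if pilot_override:
        return 0.0, "Pilot has disengaged automation; no input applied."

    if abs(left - right) > disagree_limit:
        # Sensors disagree: do nothing automatic, tell the crew, hand back control.
        return 0.0, f"AoA sensors disagree ({left} vs {right}); automation inhibited."

    if max(left, right) > 15.0:
        return 2.5, "High angle of attack on both sensors; applying nose-down trim."

    return 0.0, "Angle of attack normal; no intervention."

command, explanation = trim_nose_down(
    {"aoa_left": 24.0, "aoa_right": 5.0}, pilot_override=False
)
print(command, "-", explanation)
# prints: 0.0 - AoA sensors disagree (24.0 vs 5.0); automation inhibited.
```

The point is not the numbers, which are made up, but the shape: no silent intervention, no action on contradictory data, and a human override that always wins.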

If you want to learn more about the role of automation and pilots in air accidents, and how the airline industry learns from its mistakes, I recommend watching videos by Mentour on YouTube.

As the tech industry’s hubris grows (driven by the need to create more billionaires), more accidents will happen. And this will spread to other industries.

In fact, it’s already there – so AI might not kill you in an air crash, but there are other ways it could get you. My Toyota car, for example, has a high level of automation. And sometimes it does things I don’t expect, like braking mid-corner when a car is parked up ahead. I’ve switched off some of the automations to make it safer. But other features are genuinely helpful and predictable. I’m happy that the computer optimises the use of petrol and the delivery of power in the Toyota Hybrid Drive system. The more intelligent that gets, the better, so long as it provides a predictable and controllable driving experience. I am just as happy about the many other AI systems, already operational in the world, that are addressing the big issues we face (especially power generation, distribution, and pollution).

Motorcycle design, another of my interests, has adopted automations like this more cautiously – it is a more cautious industry, as judges are more keen to award damages against motorcycle manufacturers for accidents caused by faults. But it is also an industry in which users want to be in control, connected to the road and the engine. That’s what they enjoy. I’m a motorcyclist who rides a bike with almost no electronics. Fittingly, the people who invented the principles for designing technology-augmented intelligence were themselves motorcyclists.

In 1996 Mark Weiser and John Seely Brown (two of the most influential technology innovators) wrote of “The Coming Age of Calm Technology” – arguing that technology should be used to augment human capability carefully, so as to reduce cognitive load enough for us to focus on the things that matter. When appropriate, and in a “calm” manner, it should signal for our attention to be diverted to issues that need attending to. In 2014, reflecting on the impact their paper had, Seely Brown explained how they came up with the idea while riding their new BMW motorcycles, which were amongst the first to use anti-lock brakes (ABS) and traction control – examples of calmly assistive tech that needs to do its job unobtrusively and predictably. When braking in a corner (the most dangerous task in motorcycling), sudden unexpected machine intelligence input is dangerous. Since then, motorcycle electronics have slowly and carefully advanced with ever more sophisticated machine intelligence – not to replace the rider (BMW have demonstrated autonomous robot motorcycles, to little interest), but to let riders focus more on riding, becoming better riders, and being fully in control at all times.
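As a rough illustration of that “calm” principle, here is a toy sketch with made-up thresholds; it is not a real traction-control algorithm, and the function name and numbers are assumptions. Such a system passes the rider’s input straight through until wheel slip crosses a limit, and even then it trims the input gently rather than seizing control.

```python
# A toy sketch of the "calm technology" idea (made-up numbers, not a real
# traction-control algorithm): leave the rider's input untouched until wheel
# slip exceeds a threshold, then trim it gently instead of taking over.

def traction_control(throttle, wheel_slip, slip_limit=0.15):
    """Return the throttle actually applied; intervene only when slip is excessive."""
    if wheel_slip <= slip_limit:
        return throttle  # stay out of the way: calm, unobtrusive, predictable
    excess = min(wheel_slip - slip_limit, 0.2)
    return throttle * (1.0 - excess / 0.2 * 0.3)  # reduce drive by at most 30%

print(traction_control(throttle=0.8, wheel_slip=0.05))  # 0.8  (no intervention)
print(traction_control(throttle=0.8, wheel_slip=0.25))  # 0.68 (gentle reduction; rider stays in control)
```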

Considering these two cases, autopilot and motorcycle electronics, what can we learn to apply to other fields? Fundamentally, AI must not be about the tech, or about the chance to make new billionaires. Designers incorporating AI into systems must always focus on people first: their needs, desires, and capabilities – good old human-centred design. Ask these questions:

  • Is it dangerous in the situations we expect it to be used in? 
  • Are there situations it might encounter that we don’t understand, and which could make it dangerous? 
  • Does it remove the possibility of humans regaining control when needed? 
  • Does it prevent humans from developing essential skills? 
  • Does it take the joy out of life? 

If the answer is yes to any of those, don’t do it!

To apply this, start by considering ChatGPT and how easy it is to become deskilled. I’ve lost the ability to write with a pen, as a result of only writing with a keyboard. Would I want to lose the ability to research information and compose meaningful, well-crafted texts? Would I want anyone to grow up without those abilities? How did I get to be a decent writer? By practice, a lot of practice. And do I enjoy it? Yes, after all that hard work, it is one of the pleasures of being human. Replacing it would break several of the rules listed above.

But could ChatGPT actually be dangerous? Clearly yes, when used to write texts that have safety implications, or which could incite violence. We know that it is creating new “knowledge”, which has been shown to be unreliable. And then it is ingesting and “believing” its own lies – a primitive form of emergent AI. This is not a good thing at all. It is edging towards being out of control, deployed in a complex real world, and having effects we cannot easily switch off. Would I use ChatGPT to augment my capabilities? It fails all of the key tests set out above. So that’s a definite no then. Should we allow it to be used by, for example, students? If we could ban it, which we can’t, I would err towards doing that. Should we design assessments to make its use less likely? Definitely, but we should be doing that anyway, so as to guide students towards developing good academic craft. Should we make sure that every student, every person, in the world, understands what’s going on? Yes. Now. On a massive scale. But let’s not use ChatGPT to generate a PowerPoint to teach them.

Would I use AI? Absolutely. Like all of us, I’m already using it all the time; we just don’t realise it. However, I try to keep ahead of it. For example, Microsoft has recently added a much more sophisticated (so they claim) layer of AI automation across its extensive platform of tools. It looks as if it can be used smartly to save time and solve integration problems. But it can also be used to generate yet more bad PowerPoint presentations. We can’t ignore it. Microsoft is a powerful force. We will have to learn to control it. For some people, it might be too late – jumping into this complex world without a deep-enough understanding of the system will happen, and will have negative consequences. We need to help people prepare now.

Why aren’t we doing that? Our relationship with technology, and with the tech industry, is a blocker. The people behind the tech don’t really want everyone to understand it. Always remember that the goal of the tech industry isn’t to make machines as intelligent as smart humans; that’s just too hard. The goal is to make humans as dumb as the machines and, in so doing, to create new billionaires – something it seems to be rather good at.
