{"id":939,"date":"2023-11-28T10:48:20","date_gmt":"2023-11-28T10:48:20","guid":{"rendered":"https:\/\/www.inspireslearning.com\/dahl\/?p=939"},"modified":"2023-12-12T16:56:31","modified_gmt":"2023-12-12T16:56:31","slug":"how-ai-might-kill-you-today-but-not-at-the-hands-of-killer-robots","status":"publish","type":"post","link":"https:\/\/www.inspireslearning.com\/dahl\/ai\/how-ai-might-kill-you-today-but-not-at-the-hands-of-killer-robots\/","title":{"rendered":"How AI might kill you today, but not through the cold metal hands of a killer robot\u00a0"},"content":{"rendered":"\n<p>Article by Dr Robert O&#8217;Toole (a philosopher with a masters in AI, who has worked on AI in financial services, and now works at the interface between tech, education, and design research).<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">Intelligence = decision making, based on perceptual inputs, knowledge, and cognitive processes, by an organism in an environment, to achieve the organism\u2019s goals.\n\nArtificial intelligence = smart decision making offloaded to a machine, to reduce human workload or to achieve otherwise impossible tasks.\n\nEmergent artificial intelligence = smart decision making that follows patterns that were not designed-in by human intelligence, but which emerge by chance, often as a result of AI optimising its methods through many trial and error cases.\n\nArtificial intelligent life = artificial intelligence that behaves as an organism, developing its own goals (including self-preservation), and acting to achieve them.<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">Morality Tales<\/h2>\n\n\n\n<p>Remember: the goal of the tech industry isn&#8217;t to make the machines as intelligent as smart humans, that&#8217;s just too hard, the goal is to make humans as dumb as the machines, and in so doing, to create new billionaires &#8211; something it seems to be rather good at.<\/p>\n\n\n\n<p>On the 29<sup>th<\/sup>&nbsp;of October 2018 AI caused the deaths of 189 people 
in the Java Sea. The following year, on the 10<sup>th<\/sup>&nbsp;of March, the same AI killed another 157 people in Ethiopia. Of course, we cannot blame the algorithm, as if it were a person (with either murderous intent or criminal negligence). The blame was attributed, in the end, to Boeing\u2019s deadly corporate culture, which led to designers and engineers breaking one of the airline industry\u2019s golden rules: never implement an automated decision making system in which the pilots do not understand what decisions are being made, why they are being made, what the consequences are, and how to override the artificial intelligence.&nbsp;<\/p>\n\n\n\n<p>Boeing\u2019s revolutionary new Manoeuvring Characteristics Augmentation System broke the golden rule, with deadly consequences. The plane behaved in unexpected ways in response to a failure in one of its sensors. The pilots didn\u2019t understand what was going on, so they responded with control inputs that made things worse. <\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>During the design and certification of the Boeing 737-8 (MAX), assumptions were made about flight-crew response to malfunctions that, even though consistent with current industry guidelines, turned out to be incorrect.<\/p>\n<cite>NTSC final report on the crash of Lion Air Flight 610<\/cite><\/blockquote>\n\n\n\n<p>The unexpected behaviour of the control system may have emerged out of its complexity (maybe hinting at emergent AI), but it also seems that the pilots did not fully understand the system. As emergent AI becomes more common, this will get worse \u2013 so we need to be ultra-cautious about where we allow it to happen.<\/p>\n\n\n\n<p>Most alarmingly, this mistake had been made before. 
As autopilot systems became more sophisticated, accidents in which pilots and machine intelligence fought against each other became a recurring issue \u2013 the pilot would misunderstand the behaviour of the plane and the decisions made by the autopilot, and literally fight against it, putting in ever more drastic control inputs, with the autopilot responding in ways the pilot couldn\u2019t understand. This mismatch between the pilot\u2019s mental model of how the plane should fly and the autopilot\u2019s adaptation to the situation could cause an out-of-control series of rapid changes. The psychologist of design, Donald Norman, wrote about these ever-worsening mismatches in his 1988 book&nbsp;<em>The Design of Everyday Things.<\/em><\/p>\n\n\n\n<p>So, the rule of good autopilot design is: the pilots need to understand and predict it, know when to switch it off and how to regain control, and be able to operate without it.<\/p>\n\n\n\n<p class=\"has-small-font-size\">If you want to learn more about the role of automation and pilots in air accidents, and how the airline industry learns from its mistakes, I recommend watching videos by <a href=\"https:\/\/www.youtube.com\/@MentourPilot\" target=\"_blank\" rel=\"noopener\" title=\"\">Mentour on YouTube<\/a>.<\/p>\n\n\n\n<p>As the tech industry\u2019s hubris grows (driven by the need to create more billionaires), more accidents will happen. And this will spread to other industries.&nbsp;<\/p>\n\n\n\n<p>In fact, it\u2019s already there \u2013 so AI might not kill you in an air crash, but there are other ways it could get you. My Toyota car, for example, has a high level of automation. And sometimes it does things I don\u2019t expect, like braking on a corner if a car is parked in front. I\u2019ve switched off some of the automations&nbsp;<em>to make it safer<\/em>. But other features are genuinely helpful and predictable. 
I\u2019m happy that the computer optimises the use of petrol and the delivery of power in the Toyota Hybrid Drive system. The more intelligent that gets, the better, so long as it provides a predictable and controllable driving experience. I am just as happy about the many other AI systems, already operational in the world, that are addressing the big issues we face (especially power generation, distribution, and pollution).<\/p>\n\n\n\n<p>Motorcycle design, another of my interests, has adopted automations like this more cautiously \u2013 it is a more cautious industry, as judges are more willing to award damages against motorcycle manufacturers for accidents caused by faults. But it is also an industry in which users want to be in control, connected to the road and the engine. That\u2019s what they enjoy. I\u2019m a motorcyclist who rides a bike with almost no electronics. And that\u2019s why the people who invented these principles for the design of technology-augmented intelligence were motorcyclists.<\/p>\n\n\n\n<p>In 1996, Mark Weiser and John Seely Brown (two of the most influential technology innovators) wrote of \u201c<a href=\"https:\/\/people.eng.unimelb.edu.au\/vkostakos\/courses\/ubicomp10S\/papers\/visions\/weiser-96.pdf\">The Coming Age of Calm Technology<\/a>\u201d \u2013 arguing that technology should be used to augment human capability carefully, so as to reduce cognitive load enough for us to focus on the things that matter. When appropriate, and in a \u201ccalm\u201d manner, it should signal for our attention to be diverted to issues that need attending to. In 2014,&nbsp;<a href=\"https:\/\/www.johnseelybrown.com\/calmtech.pdf\">reflecting on the impact their paper had<\/a>, Seely Brown explained how they came up with the idea from riding their new BMW motorcycles, which were amongst the first to use ABS braking and traction control \u2013 examples of calmly assistive tech that needs to do its job unobtrusively and&nbsp;<em>predictably<\/em>. 
When braking in a corner (the most dangerous task in motorcycling), sudden unexpected machine intelligence input is dangerous. Since then, motorcycle electronics have slowly and carefully advanced with ever more sophisticated machine intelligence \u2013 not to replace the rider (BMW have demonstrated autonomous robot motorcycles, to little interest), but to allow riders to focus more on riding, become better riders, and be fully in control at all times.<\/p>\n\n\n\n<p>Considering these two cases, autopilot and motorcycle electronics, what can we learn to apply to other fields? Fundamentally,&nbsp;AI must not be about the tech, or the chance to make new billionaires. Designers incorporating AI into systems must always&nbsp;focus on people first \u2013 their needs, desires, and capabilities \u2013 good old human-centred design. Ask these questions:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Is it dangerous in the situations we expect it to be used in?&nbsp;<\/li>\n\n\n\n<li>Are there situations it might encounter that we don\u2019t understand, and which could make it dangerous?&nbsp;<\/li>\n\n\n\n<li>Does it remove the possibility of humans regaining control when needed?&nbsp;<\/li>\n\n\n\n<li>Does it prevent humans from developing essential skills?&nbsp;<\/li>\n\n\n\n<li>Does it take the joy out of life?&nbsp;<\/li>\n<\/ul>\n\n\n\n<p>If the answer is yes to any of those, don\u2019t do it!<\/p>\n\n\n\n<p>To apply this, start by considering ChatGPT, and how easy it is to become deskilled. I\u2019ve lost the ability to write with a pen, as a result of only writing with a keyboard. Would I want to lose the ability to research information and compose meaningful, well-crafted texts? Would I want anyone to grow up without those abilities? How did I get to be a decent writer? By practice, a lot of practice. And do I enjoy it? Yes, after all that hard work, it is one of the pleasures of being human. 
So replacing it breaks several of the rules listed above.<\/p>\n\n\n\n<p>But could ChatGPT actually be dangerous? Clearly yes, when used to write texts that have safety implications, or which could incite violence. We know that it is creating new \u201cknowledge\u201d, which has been shown to be unreliable. And then it is ingesting and \u201cbelieving\u201d its own lies \u2013 a primitive form of emergent AI. This is not a good thing at all. It is edging towards being out of control, deployed in a complex real world, and having effects we cannot easily switch off. Would I use ChatGPT to augment my capabilities? It fails all of the key tests set out above. So that\u2019s a definite no then. Should we allow it to be used by, for example, students? If we could ban it, which we can\u2019t, I would err towards doing that. Should we design assessments to make its use less likely? Definitely, but we should be doing that anyway so as to guide students towards developing good academic crafts. Should we make sure that every student, every person in the world, understands what\u2019s going on? Yes. Now. On a massive scale. But let\u2019s not use ChatGPT to generate a PowerPoint to teach them.<\/p>\n\n\n\n<p>Would I use AI? Absolutely \u2013 like all of us, I\u2019m already using it all the time. We just don\u2019t realise it. However, I try to keep ahead of it. For example, Microsoft have recently added a much more sophisticated (so they claim) layer of AI automation across their extensive platform of tools. It looks as if it can be used smartly to save time and solve integration problems. But it can also be used to generate yet more bad PowerPoint presentations. We can\u2019t ignore it. Microsoft is a powerful force. We will have to learn to control it. For some people, it might be too late \u2013 jumping into this complex world without a deep enough understanding of the system will happen, and will have negative consequences. 
We need to help people prepare&nbsp;<em>now.<\/em><\/p>\n\n\n\n<p>Why aren\u2019t we doing that? Our relationship with technology, and the tech industry, is a blocker. The people behind the tech don\u2019t really want everyone to understand it. Always remember that the goal of the tech industry isn&#8217;t to make the machines as intelligent as smart humans, that&#8217;s just too hard, the goal is to make humans as dumb as the machines, and in so doing, to create new billionaires &#8211; something it seems to be rather good at.<\/p>\n","protected":false},"excerpt":{"rendered":"<div class=\"mh-excerpt\"><p>Remember: the goal of the tech industry isn&#8217;t to make the machines as intelligent as smart humans, that&#8217;s just too hard, the goal is to make humans as dumb as the machines, and in so doing, to create new billionaires &#8211; something it seems to be rather good at.<\/p>\n<\/div>","protected":false},"author":2,"featured_media":941,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[29,30],"tags":[],"class_list":["post-939","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai","category-opinions"],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/www.inspireslearning.com\/dahl\/wp-json\/wp\/v2\/posts\/939","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.inspireslearning.com\/dahl\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.inspireslearning.com\/dahl\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.inspireslearning.com\/dahl\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.inspireslearning.com\/dahl\/wp-json\/wp\/v2\/comments?post=939"}],"version-history":[{"count":10,"href":"https:\/\/www.inspireslearning.com\/dahl\/wp-json\/wp\/v2\/posts\/939\/revisions"}],"predecessor-version":[{"id":957,"href":"https:\/\/www.inspireslea
rning.com\/dahl\/wp-json\/wp\/v2\/posts\/939\/revisions\/957"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.inspireslearning.com\/dahl\/wp-json\/wp\/v2\/media\/941"}],"wp:attachment":[{"href":"https:\/\/www.inspireslearning.com\/dahl\/wp-json\/wp\/v2\/media?parent=939"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.inspireslearning.com\/dahl\/wp-json\/wp\/v2\/categories?post=939"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.inspireslearning.com\/dahl\/wp-json\/wp\/v2\/tags?post=939"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}