AI and the Responsibility Gap
- Andrew Liu
- Aug 7
- 3 min read
AI policy and regulation is going to look very different from the regulation of any other tool humans have created, because AI itself is unlike any tool we've made before. In the past, human-made tools could always, in principle, be understood: a tool could not make itself, so a human had to build it and came to understand it in the process. A swordsmith understood his sword, just as a computer scientist understood her program, etc. Autonomous AI breaks this pattern because it is not a tool that needs to be built by a human. Instead, autonomous AI can be thought of as a tool that creates itself. Of course, a human has to set up the training initially. But once it gets going, autonomous AI trains itself to get better, with no human understanding what is happening inside the "black box."
As early as 2004, philosophers like Andreas Matthias recognized the problem this could cause. The issue lies in how we normally regulate tools. Whenever we regulate a tool, like a car, responsibility gets balanced between parties. Part of the responsibility falls on the user of the tool; after all, many car crashes are user error. But because the creator of the tool has, in the past, been able to understand the tool they are making for people's use, some of the responsibility has also fallen on the creator. If a car is unsafe, the manufacturer has the ability to know and understand why it is unsafe, courtesy of testing and other measures. Hence, when a car malfunctions and it is not the user's fault, it is the manufacturer's. The trouble AI introduces is that its creators may not have the ability to understand what they are creating. Since the AI trains itself, and learns to adapt to situations its creators may never have imagined, it is nearly impossible for them to test for and predict what the AI will do in every situation.
Now, imagine an AI robot is performing a surgery, and because every patient's body is different, an unexpected trigger occurs, sending the robot off course and harming the patient. In this situation, the hospital, as the user, doesn't bear much responsibility, since it was never told the AI could do something like this. At the same time, we cannot place responsibility on the coders, because they could never have known this would happen: the AI was training itself on new situations, after all. A gap thus opens in our traditional frameworks for responsibility: clearly something bad has happened, yet our need to assign responsibility is frustrated.

Around this core issue, a whole literature has appeared. Some argue that the responsibility gap is a big deal, and that it must limit the extent to which AI can automate society. For example, we might concede that AI surgeons can never work without a human surgeon on hand to monitor them (and to serve as the responsibility fallback if something goes wrong). Others argue that the AI responsibility gap is just a reframing of existing problems in assigning responsibility, such as the problem of many hands. Whatever the solution, AI will surely introduce ethical problems that will be key to understanding how to regulate the technology and introduce it into society.
