
Blog

Often, it is important to judge the morality of past policy decisions. We say which ones were morally good, which were morally bad, and use that insight to improve our decisions in the present. This type of historical policy analysis requires historical context: by judging the effects of decisions against the morals of the time, rather than applying our modern morals, the analysis can hopefully be more fair. But is this enough? Is it possible that any backwards-looking judgement of morality could be inherently unfair, even if we adjust for changing morals?


It could be, because of moral luck. Moral luck concerns how chance events affect the way we make backwards-looking moral judgements. For example, take two drunk drivers. They both drink the same amount of alcohol and both commit the same offence of driving home drunk. However, by pure chance, one driver gets home safely, while the other hits and kills a pedestrian. In this situation, it is customary to say that the driver who killed someone has done something morally worse than the driver who didn't. Yet, focusing only on the decisions made, the two drivers made exactly the same decisions. It was only circumstantial luck that a pedestrian was crossing the road for one of the drivers and not for the other. If they made the same decisions, shouldn't the moral blame for their actions be the same for both drivers?


This matters for backwards-looking historical policy analysis because of the randomness of policy effects. With millions of people and countless moving parts, it is almost impossible to fully predict how society will react to a given policy lever over the course of a decade, let alone over longer periods of time. Moreover, when judging policy makers, there is no second driver: there is no control group to show where luck came into play. As a result, when we make moral judgements about the effects of past policy decisions, we need to take careful account of moral luck. When policy makers act on the information they have, that information is seldom enough to know what a policy's actual effects will be. If someone is taking shots in the dark, would you blame them for missing? Probably not, and that is why moral judgements of historical policy decisions always need to be taken with a grain of salt.


dice :)

 

AI policy and regulation is going to look very different from the regulation of any other tool created by humans, because AI is unlike any other tool we've ever created. In the past, human-made tools could always, in principle, be understood. Tools could not make themselves, so a human had to build the tool and, in the process, understand it. A swordsmith understood his sword, just as a computer scientist understood her program, etc. Autonomous AI differs because it is not a tool that needs to be built by a human. Instead, autonomous AI can be thought of as a tool that creates itself. Of course, the training initially has to be set up by a human. But once it gets going, an autonomous AI system trains itself to get better, with no human understanding what is happening inside the "black box."


As early as 2004, philosophers like Andreas Matthias recognized a problem this could cause. The issue lies in how we normally regulate tools. Whenever we regulate a tool, like a car, there is a balance of responsibility at work. Part of the responsibility falls on the user of the tool; after all, many car crashes are user error. But because the creator of a tool has, in the past, been able to understand the tool they were creating for people's use, part of the responsibility has also fallen on the creator. If a car is unsafe, the manufacturer has the ability to know and understand why it is unsafe, courtesy of testing and other measures. Hence, when a car malfunctions in a way that is not due to user error, it is the fault of the manufacturer. The issue AI introduces is that its creators may not have the ability to understand what they are creating. Since the AI trains itself, and learns to adapt to situations the creators may never have thought of, it is almost impossible for the creators to test and know what the AI will do in every situation.


Now, imagine an AI robot is performing a surgery, and because every patient's body is different, an unexpected trigger causes the robot to go off course and harm the patient. In this situation, the hospital, as the user, doesn't bear much responsibility, since it was never told the AI could do something like this. At the same time, we cannot place responsibility on the programmers, because they also could never have known this would happen: the AI was training itself on new situations, after all. A gap therefore appears in our traditional frameworks for responsibility: clearly something bad has happened, yet the need to assign responsibility is frustrated. A whole literature has grown up around this core issue. Some argue that the responsibility gap is a big deal, and that it must limit the extent to which AI can automate society; for example, we might concede that we can never have AI surgeons operating without a human surgeon on hand to monitor them (and to act as the responsibility fallback if something goes wrong). Others argue that the AI responsibility gap is just a reframing of existing puzzles in assigning responsibility, such as the problem of many hands. Whatever the resolution, AI will surely introduce ethical problems that will be key to understanding how to regulate the technology and introduce it into society.



Dr. Matthias

 

Today, climate change is becoming less like a dystopian story and more like an impending reality. Across the world, summers are getting hotter. Heatwaves are getting longer and more frequent. With less snow in my north Jersey town, the local ski resort opens for 4-5 weeks a year instead of 4-5 months. So, although we haven't reached the blistering +4°C temperatures of the Cretaceous Period, our 1.2°C post-industrialization increase is still a major reason for alarm. Rightly, scientists and ordinary people are calling on policy makers to enact climate policies to stop the progression of global warming and climate change as a whole.


Considering the possible damage that climate change can cause, it seems almost trivial to ask for a justification of such policies. The classic justification is: "by stopping climate change, we are helping the people of the future who would otherwise have to live under its effects." At first glance, this makes sense. In fact, most arguments against climate change policy choose to attack the existence of climate change itself, all but conceding that if climate change were real, the harm to future people would be real as well.


Unfortunately, there is an issue with the logic that stopping climate change will help the future people living under it. In moral theory, this issue is called the Non-Identity Problem. The Non-Identity Problem appears when we dig into what it means to help someone. An action helps a person when that person is better off because the action occurred than they would have been had it not occurred. Importantly, if the person who exists with the action is not the same person who would exist without it, we cannot say the action helped them. This is significant because enacting climate change policy will affect the existence of future people. Climate policies like certain taxes and work opportunities affect people's lives, and these small effects add up, eventually changing when people decide to have children and how many children they have. As a result, the future population with climate policies will not have the same identity as the future population without them. After all, even changing the time of conception by a single day changes the identity of a future person. When a policy has society-wide effects multiplied over many decades, the future population will indeed be different from the hypothetical population without the policy. This matters because it means we cannot say that climate policies help the future population that would otherwise be living under climate change: the group living with climate change would not consist of the same individuals as the group living without it.
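
To make the comparison behind this argument explicit, here is a rough sketch, in my own informal notation rather than Parfit's, of the person-affecting principle the classic justification relies on:

\[
\text{Helps}(A, p) \iff \big(p \text{ exists with } A\big) \wedge \big(p \text{ exists without } A\big) \wedge W_p(A) > W_p(\neg A)
\]

Here \(W_p(\cdot)\) stands for \(p\)'s wellbeing in the given outcome. Because climate policy changes who is conceived, the "exists in both outcomes" condition fails for most future people, so the principle cannot declare that the policy helps them.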


This non-identity problem, first introduced by Derek Parfit in the 1980s, has been attacked and tinkered with by philosophers for decades. Yet, despite numerous attacks, it continues to undermine the intuitive assumption that we must end climate change in order to help the people of the future. For many, the solution is not to stop fighting for climate policies, but to change the moral motivation behind them. For example, perhaps a stronger justification is that we need climate policies to preserve the institutions we have today. You could say that institutions like countries are likely to exist both in a world with climate policy and in a world without it, thus dodging the non-identity problem. However, in the most extreme circumstances, a climate policy could lead to the destruction of a nation, returning us to the problem.

climate change !
Derek Parfit

 