What happens when regulation goes wrong? What happens when well-intentioned policymakers fail to foresee the negative impact their decisions may have? History is littered with examples of legislatures and policymakers that have identified a perceived problem and implemented a targeted response, only to see it do more harm than good.
Last week, statistics were released showing that the Queensland government’s lockout laws have had a negligible impact on the number of alcohol-related presentations at hospitals. Sydney, which has similar lockout laws, saw alcohol-related assaults fall by 43 incidents per month, but over the same period the laws have coincided with a 40% drop in live music ticket revenue. In the United States, the implementation of mandatory seat-belt laws saw a decrease in the risk of fatality resulting from car accidents, but an increase in the overall number of accidents. Three-strike mandatory sentencing laws create an incentive for previous offenders to evade a third arrest, and studies have shown an increase in police fatalities as a result.
It is no coincidence that the above examples tackle three difficult, entrenched problems: recidivism, traffic fatalities and binge drinking. The risk of unintended consequences and (often) unforeseeable damage increases with the complexity of the problem policymakers seek to solve.
And this is not a problem with an easy fix, nor is it one that is going away anytime soon.
There are a multitude of technologies and innovations that will require regulation in the near future: some that pose ethical dilemmas, such as driverless cars, and others that pose logistical and legal problems, such as Uber. How do we regulate these industries and these complex problems, and whom do we hold accountable when, inevitably, we face the unintended consequences of this regulation?
Gary Lea has posed some interesting thoughts on the ethical and regulatory minefield that driverless cars present, and highlights that regulation in these areas will be a balancing act between maximising public welfare and promoting innovation. To whom do we give the responsibility of making ethical decisions when artificial intelligence is involved? And how do we even begin to comprehend the regulatory standards that would govern a future where your car may be manufactured and programmed to kill you in order to save the occupants of another vehicle?
Given the increased likelihood of future problems and the difficulty of regulating these areas, we need a rethink. We need to develop structures, relationships and, most importantly, dialogues that go beyond voters endorsing legislation that purports to solve problems. We need to involve ourselves in the deliberative stages of policy-making and think through the full range of consequences from a wide variety of perspectives. Without a broader base of involvement, and without more holistic contemplation of regulatory and legislative impact, we may see opportunities turn into mistakes and develop cures worse than the disease.