Sunday, 9 December 2018

The Robots Are Coming

I had a couple of welcome days off work this week, so I found the time to write something. What shall I write about? Well, today I read an article over on CommonSpace about the way in which societies could impose ethical standards on AI through the use of Citizen Assemblies. It was an interesting article, but something in it made the gears of my brain start turning. If you're interested in this topic then bugger off for a bit and read the article, then a short Twitter thread with the author, and then come back for a long-winded discussion.

I would guess everyone was nodding along with the author of the CommonSpace article. It all sounds great, doesn't it? We all want an ethical society, right? The question, then, is how best to go about achieving that goal. Will it be best achieved with a Citizen AI Assembly? I'm afraid it won't. I actually find the idea slightly terrifying.

Before we go any further we need to nail down our terminology. Specifically, what is AI? I might as well pontificate on the length of a piece of string, but in what follows I'm going to say that AI is any technology that can perform a specified task better than humans. Even a traffic light blinking red/amber/green is a primitive type of AI. After all, we could employ a human with a stopwatch and a resilient finger at each traffic light. Humans, however, fall asleep and need to go to the toilet all the time, so we're much better off with a simple timing circuit.
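
To make that concrete, here is a minimal sketch of that timing circuit in Python. The phase order is the standard UK sequence but the durations are numbers I've invented for illustration.

```python
import time

# A toy "timing circuit": cycle through the UK light sequence.
# Durations are invented for illustration, not real-world values.
PHASES = [("red", 30), ("red-amber", 2), ("green", 25), ("amber", 3)]

def run_traffic_light(cycles=1):
    """Do the job of the human with the stopwatch and the resilient finger."""
    for _ in range(cycles):
        for colour, seconds in PHASES:
            print(f"Showing {colour} for {seconds}s")
            time.sleep(seconds)

if __name__ == "__main__":
    run_traffic_light()
```

No toilet breaks required.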

How far can we get without mentioning Brexit? It turns out that 3 paragraphs is the limit. After all, everything is Brexit-flavoured these days. If you make it to the end you'll find out why I think a Citizen AI Assembly is a rather Brexity idea. There is also a weak joke about the current Labour leadership. Do watch out for that and simply enjoy.

See You In Court


Let's look at an example and see how it might work with or without a Citizen AI Assembly. Let's imagine an AI whose developers boast it can match fingerprints better than criminologists with years of experience in fingerprint matching. This is the kind of thing that might already exist, but if it doesn't then it soon will, because this is meat and two veg for deep neural networks working in combination with a process called supervised learning. Should this be used? Should AI matches be admissible in court? If expert human opinion is in conflict with the AI decision on a match, what happens then? What are the ethics of this?
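
For the curious, here is a minimal sketch of what that deep-network-plus-supervised-learning setup might look like in PyTorch. Everything here is an assumption for the sake of illustration: the architecture, the name FingerPairNet, and the dummy data. A real product would be vastly more sophisticated.

```python
import torch
import torch.nn as nn

class FingerPairNet(nn.Module):
    """Score a pair of fingerprint images: 1.0 = same finger, 0.0 = different."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(            # shared encoder for each print
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
        )
        self.head = nn.LazyLinear(1)             # match score from both embeddings

    def forward(self, print_a, print_b):
        pair = torch.cat([self.encoder(print_a), self.encoder(print_b)], dim=1)
        return torch.sigmoid(self.head(pair))

# Supervised learning: nudge the weights towards human-verified labels.
model = FingerPairNet()
optimiser = torch.optim.Adam(model.parameters())
loss_fn = nn.BCELoss()

prints_a = torch.randn(8, 1, 64, 64)             # dummy fingerprint images
prints_b = torch.randn(8, 1, 64, 64)
labels = torch.randint(0, 2, (8, 1)).float()     # 1 = same finger (expert-verified)

optimiser.zero_grad()
loss = loss_fn(model(prints_a, prints_b), labels)
loss.backward()
optimiser.step()
```

The "supervised" part is the labels: the network is only ever as good as the human-verified examples it learns from.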

Let me begin by saying that nobody is arguing that this technology shouldn't exist. For example, it could be used to sift through thousands of fingerprints more quickly than existing solutions, and certainly more quickly than a human. In some ways this is very similar to the classification technology discussed in the CommonSpace article. There, the controversy centred on the misclassification of a photo of a black family having a picnic. I don't think anyone is calling for this technology to be retracted or banned. Even if someone held that view, how could technology be undone? The questions really centre on the use cases of the technology. Will we use it to gather government statistics, to grant a bank loan, as court evidence, to control access to buildings, to formulate law and policy, etc.? It's the use case that matters, not the technology itself.

Let's go a step further and imagine I am the developer of FingerMaticMagicMate and I'm trying to sell it to the Scottish court system. I'm going to make all sorts of claims about its efficacy. I'm going to boast that in trials it was found to be 25% better than experts. What are the ethics of using this technology in court? Just as with DNA evidence, it first has to be proved that my claims are true. Moreover, it needs to be proved that the claims remain true when deployed in the field. More than that, it needs to be proved that the software will never suffer from the kind of maintenance bugs that intermittently derail large projects long after deployment. Will that be enough? Not really, because there also needs to be a trail of responsibility for when the process goes badly wrong, which it inevitably will. Who will be held responsible for mistakes? Will it be the developers of the software or the users of the software? What sanctions will be in place? How do we formulate good practice? How do we interpret statistical errors and assign thresholds to the statistics of a match? These are all questions for domain experts rather than a non-expert Citizen AI Assembly. Can we expect the Citizen AI Assembly to become experts in legal philosophy and practice as well as software quality and validation? That was a rhetorical question.
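
Just to give a flavour of the statistical questions, here is a minimal sketch of how one might check whether a trial actually supports a claimed accuracy. The trial numbers and the expert baseline below are invented for illustration.

```python
import math

def wilson_interval(successes, trials, z=1.96):
    """95% Wilson score confidence interval for a success rate."""
    p = successes / trials
    denom = 1 + z**2 / trials
    centre = (p + z**2 / (2 * trials)) / denom
    half = z * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2)) / denom
    return centre - half, centre + half

# Invented trial: FingerMaticMagicMate got 930 of 1000 matches right,
# against a claimed expert baseline of 74%.
low, high = wilson_interval(successes=930, trials=1000)
expert_rate = 0.74

print(f"AI accuracy 95% CI: [{low:.3f}, {high:.3f}]")
print("Trial supports the claim" if low > expert_rate else "Trial does not support the claim")
```

And even that only answers the easiest of the questions above; it says nothing about field conditions, maintenance bugs, or who carries the can.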

There are more complicated issues surrounding FingerMaticMagicMate. Let's imagine it was initially used merely to sift through fingerprints on record. Even with this limited deployment there are still real problems in the event that FingerMaticMagicMate makes a category error, even if it makes fewer mistakes than existing technology. We can easily imagine innocent people being called in to police stations for questioning, as well as career criminals not being brought to justice. The question here is: what error rate is acceptable? Should the system favour false positives or false negatives? How do we determine that the technology delivers on its strict requirements? What about the costs? FingerMaticMagicMate is uniquely scalable in that the error rate can be reduced, but at increased operational cost. Who decides the cost/benefit ratio? Are these questions for domain experts or a non-expert Citizen AI Assembly? That was also a rhetorical question.
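
The false positive/false negative trade-off can be made concrete in a few lines. A minimal sketch, with invented scores and costs: the same system gives different answers depending on how we price each kind of mistake.

```python
def expected_cost(threshold, scores, labels, cost_fp, cost_fn):
    """Total cost of mistakes at a given match-score threshold."""
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return cost_fp * fp + cost_fn * fn

def best_threshold(scores, labels, cost_fp, cost_fn):
    """Pick the candidate threshold that minimises expected cost."""
    candidates = sorted(set(scores)) + [1.0]   # 1.0 = never declare a match
    return min(candidates,
               key=lambda t: expected_cost(t, scores, labels, cost_fp, cost_fn))

scores = [0.1, 0.3, 0.5, 0.6, 0.8, 0.9]   # invented match scores
labels = [0, 1, 1, 0, 1, 0]               # invented ground truth: 1 = true match

print(best_threshold(scores, labels, cost_fp=1, cost_fn=1))   # 0.3: balanced costs
print(best_threshold(scores, labels, cost_fp=10, cost_fn=1))  # 1.0: innocents matter more
```

The code is trivial; choosing cost_fp and cost_fn is the hard part, and that choice is exactly what the questions above are about.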

Up In The Air


Every time you step onto an aeroplane, software starts to fly the plane, controls the flow of fuel, and checks the on-board sensors to detect issues as they arise. If this goes wrong, people can die. What could be more of an ethical issue than that? How can we have confidence that the software on an aeroplane successfully achieves its stated goal of not killing anyone? The answer is regulatory standards.

Standards are really complicated because they are specific to each domain. Aerospace software has an unbelievably rigid set of standards: it must never dynamically allocate memory; it must run bit-for-bit identically on multiple hardware architectures (RISC and MIPS, I think, but I'm open to correction) simultaneously; and each change needs to be reviewed by committee and undergo a minimum number of hours in live test without incident. A change to a single line of code can cost tens of millions of dollars. Companies developing and maintaining this software need to abide by internationally agreed development standards. If they don't, but claim they have, then they will end up broke and in prison.
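
The point of running the software on dissimilar hardware is redundancy: if one channel misbehaves, the others outvote it. Here is a minimal sketch of that voting idea. The values are invented, and of course the real thing lives in certified avionics code rather than a Python script.

```python
from collections import Counter

def vote(channel_outputs):
    """Take the majority answer across redundant flight-computer channels."""
    value, count = Counter(channel_outputs).most_common(1)[0]
    if count <= len(channel_outputs) // 2:
        raise RuntimeError("No majority: channels disagree, enter safe mode")
    return value

print(vote([17, 17, 17]))  # all channels agree -> 17
print(vote([17, 17, 99]))  # one faulty channel is outvoted -> 17
```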

The thing about domain standards is that each domain has its own particular worries. Each domain also has its own view on cost/benefit and best practice.  I doubt if medical software cares particularly about memory allocations but it certainly will care about the openness of the test regime.  That test regime will pull in complex questions concerning medical ethics.  Are we going to bring complex questions of medical ethics to the attention of the Citizen AI Assembly, too?

We've probably all read about self-driving cars. They are also being developed to standards laid down by regulators. These standards are incredibly complex and range from memory access patterns in on-board sensors to validation models to test drives covering millions of miles without incident. To be honest, I'm not keen on pulling in non-experts to any of this because I very much want to stay alive.


Politics


We can't forget politics. I predict we will soon see smart traffic-control systems that respond in real time to video feeds streamed from traffic lights, so that the entire system can adapt to changing conditions on the fly. No more button pressing, no more timing circuits, no more hanging around on an empty street waiting for the green man or waiting at a red in your car at 4am. These systems are going to employ all the latest terrifying AI buzzwords: unsupervised learning, adversarial networks, data collection, cameras, server farms, and corporations. This is a heady brew. We will need to start thinking about the impact of this: will we be "better off" with the new system than with the old one? We also need to think about how we assess the quality of the system. How, though, do we measure quality?
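
As a toy version of the idea, here is a minimal sketch of adaptive signal timing: green time handed out in proportion to the queue each camera feed reports. The queue counts, timing bounds, and names are all invented for illustration.

```python
MIN_GREEN, MAX_GREEN, CYCLE = 5, 60, 90  # seconds, invented bounds

def allocate_green(queues):
    """Give each approach green time proportional to its queue, within bounds."""
    total = sum(queues.values()) or 1     # avoid dividing by zero on empty roads
    return {approach: max(MIN_GREEN, min(MAX_GREEN, round(CYCLE * q / total)))
            for approach, q in queues.items()}

# At 4am the lone car on the north approach gets its green almost immediately.
print(allocate_green({"north": 1, "south": 0, "east": 0, "west": 0}))
print(allocate_green({"north": 12, "south": 9, "east": 3, "west": 6}))
```

That's the harmless version; the production version comes with the cameras, the server farms, and the corporations.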

The quality of the system is how well it improves traffic management. Is that a satisfactory answer? Well, no, it isn't, because everything is a trade-off against everything else. We can minimise accidents, but that might lead to slower traffic, a build-up of pollution, and less time to spend money in shops. We can relax the safety restriction to improve the general health of our city centres, but how far can we go? This sounds like a political decision. The Green Party might prioritise pollution over commerce; the Tory Party might prioritise commerce and cars over pedestrian waiting times; Labour might worry about whether the project was developed by a Venezuelan socialist collective. The definition of "best" and "better" is inherently political. We already have a system for making political decisions on relative priorities: it is called the General Election (and council elections and elections to the Scottish Parliament). Introducing a Citizen AI Assembly is an unnecessary distraction. There is nothing inherent in the politics of AI technology that doesn't also affect the legalisation of cannabis or budgets for cancer treatments or the admissibility of speed camera evidence in UK courts.
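
To labour the point with code: here is a minimal sketch in which the same two invented traffic plans, scored under different invented priority weightings, produce different winners.

```python
# Each metric is a monthly count of something we want less of; lower is better.
plans = {
    "safety-first": {"accidents": 1, "pollution": 9, "delay": 15},
    "free-flow":    {"accidents": 6, "pollution": 3, "delay": 4},
}

def score(plan, weights):
    """Weighted sum of bad outcomes under one set of political priorities."""
    return sum(weights[metric] * value for metric, value in plan.items())

safety_lobby   = {"accidents": 10, "pollution": 1, "delay": 1}
commerce_lobby = {"accidents": 1,  "pollution": 1, "delay": 5}

for name, weights in [("safety lobby", safety_lobby),
                      ("commerce lobby", commerce_lobby)]:
    best = min(plans, key=lambda p: score(plans[p], weights))
    print(f"{name} picks: {best}")  # safety-first, then free-flow
```

Same plans, same arithmetic, different weights, different "best". The weights are the politics.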

If the implementation of "best" is inherently regulatory, then the specification for "best" is inherently political. Which role would the Citizen AI Assembly play?

Democracy

 

I want to briefly think about the accountability of a Citizen AI Assembly with a series of questions.

  • Who will appoint the assembly? 
  • Will they be political appointments or appointed by industry or by popular vote? 
  • Will they be fixed-term or life-time appointments?  
  • Will there be a necessary minimum qualification?  How would we assess that? 
  • How will we limit corporate lobbying when we also need their expertise?  
  • How do we stop the party system influencing the assembly?  Do we even want to do that? 
  • What if the assembly rejects or ignores internationally agreed standards?  
  • How do we stop the assembly becoming domain experts over time and merely reflecting the views of industry experts? 
  • What powers do we give the assembly? 
  • Can the assembly overrule political and budgetary decisions? 
  • How do we hold the assembly to account? 

Conclusions 

 

One of the outcomes of Brexit is that I've become so much more aware of the way in which expertise is formally deployed to shape the modern world through domain regulators and expert committees. The daily business of government is rarely performed by an elected official. Instead, government is often a function of administrative appointments, expert committees, Privy Councillors, and the Charity Commission, to name just a few in the UK. Brexit is a cry to end this arrangement and bring power back to the people, but without thinking about the reasons why the world ended up being as complicated and expert-driven as it is. Domain regulation and the feedback system of political accountability may be imperfect, but it has done a great job of keeping us all alive.

The downside of the opaque world of expertise is that the electorate often don't really understand why the world is as it is. Faith in our institutions is at an all-time low. Brexit has opened up a clear divide between the centrists and a bizarre political alliance of left and right. If you're on the far left or the far right then you likely have an issue with the established order of experts and committees and unelected bureaucrats (why would we elect a bureaucrat?). There is a belief that they are working to a hidden agenda, and their work is misrepresented just as routinely in The Canary as it is in Breitbart. For an unfashionable centrist like me, this is worrying. I'm prepared to have faith in our institutions because they have genuinely done a successful job of keeping us all alive and healthy. I'm certainly more prepared to have faith in our existing institutions than to hand over the keys to a rabble of inexpert opinion.

If you have the impression that I'm not keen on the idea of a Citizen AI Assembly, you'd be right. If our institutions lack transparency then make them more transparent. If they are poor at communicating then hire a PR firm to help them out. If "the people" don't understand the software maturity model then print a government leaflet laying out the principles. Let's do any of those things but please, please don't appoint a Citizen AI Assembly because, as with Brexit, we should always be careful what we wish for.

Over and out,

Terry

PS We could, of course, eventually replace the Citizen AI Assembly with an AI. It would be driven by a metric that optimises for the adoption of expert opinion. Who watches the watchers, eh?

PPS I didn't mention sex robots once.  Howzat for discipline?

PPPS Imagine we let a Citizen Assembly loose on anti-terrorist security or medical ethics, or put them in charge of buying cancer drugs that have yet to prove their efficacy. Just imagine that for a second. And relax.

5 comments:

  1. Good to hear from you, and thought provoking stuff. I'll have to read it again before venturing a comment. This is a huge and vital issue which will affect everyone, and needs to be better understood, especially by me. Thanks for this.

    Reply: Thanks!

      Hope you enjoyed the weak joke about the current Labour leadership.

  2. Terry
    I know this has nothing to do with the blog. However, I have been trying to contact you, but to no avail. Obviously you are a busy man.

    As you are the only physics person I have any passing acquaintance with, I was wondering if you could comment on the following physics issue for me please.
    https://www.academia.edu/37214381/Proposition_Of_The_Fundamental_Formula_Of_The_Constant_G

    You will realise the significance if the author is correct.

    Reply: Sorry, must have missed that. Not been checking on here for some time.

      Not come across this before but the briefest skim of the article suggests that it is hokum written by a crank.

