
Algorithmic transparency for the smart city

As artificial intelligence and big data analytics increasingly replace human decision making, questions about algorithmic ethics become more pressing. Many are concerned that an algorithmic society is too opaque to be held accountable for its behavior. An individual can be denied parole or credit, fired, or not hired for reasons she will never know and that cannot be articulated. In the public sector, the opacity of algorithmic decision making is particularly problematic, both because governmental decisions may be especially weighty and because democratically elected governments bear special duties of accountability.

We set out to test the limits of transparency around governmental deployment of big data analytics, contributing to the literature on algorithmic accountability with a thorough study of the opacity of governmental predictive algorithms. Using open records processes, we focused our investigation on local and state government deployment of predictive algorithms. It is here, in local government, that algorithmically determined decisions have their most direct impact. And it is here that stretched agencies are most likely to outsource data analytics to private vendors, which may make design and policy choices unseen by client agencies, the public, or both. To test how impenetrable the resulting “black box” algorithms are, we filed forty-two open records requests in twenty-three states, seeking essential information about six predictive algorithm programs. We selected the most widely used and well-reviewed programs, including those developed by for-profit companies, nonprofits, and academic/private sector partnerships. Our specific goal was to assess whether open records processes would enable citizens to discover what policy judgments these algorithms embody and to evaluate their utility and fairness.

To do this work, we identified what meaningful “algorithmic transparency” entails. We found that in almost every case it was not provided. Overbroad assertions of trade secrecy were a problem. But contrary to conventional wisdom, trade secrets, properly understood, are not the biggest obstacle: releasing the trade-secret-protected code used to execute predictive models is not usually necessary for meaningful transparency. We conclude that publicly deployed algorithms will be sufficiently transparent only if (1) governments generate appropriate records about their objectives for algorithmic processes and about subsequent implementation and validation; (2) government contractors reveal to the public agency sufficient information about how they developed the algorithm; and (3) public agencies and courts treat trade secrecy claims as the limited exception to public disclosure that the law requires. We present what we believe are eight principal types of information that records concerning publicly implemented algorithms should contain.
