Artificial intelligence, once the stuff of movies like the 1984 classic The Terminator, is increasingly becoming a reality, with coffee-serving robot baristas and autonomous vehicles roaming streets in the U.S. (not to mention South Korea). New York City, like other cities that have used AI in policing, has also put computer algorithms to work in its courts. After serious backlash from community members, the city has proposed a bill to study how those algorithms are used, citing the potential for bias in computer-generated outcomes.

The bill, which Mayor Bill de Blasio is expected to sign before the end of the year, would regulate how the city's courts use algorithms. It proposes forming a task force to examine how algorithms can produce biased outcomes, how citizens can request explanations of algorithmic decisions they object to, and whether the city could feasibly be required to release the algorithms' source code to the public.

Calls to regulate the use of algorithms in New York City's courts are rooted in public doubt about the fairness and equity of some computer-generated outcomes. Many also believe the city must be transparent about its policies on judicial decision-making, which is why the bill explicitly targets transparency. "Having it open for anyone is the best approach," Rashida Richardson, a lawyer with the New York Civil Liberties Union, told Motherboard.

This comes amid growing conversation about bias in courts across the U.S., where the arrest of Black Americans for minor offenses often leads to disproportionate outcomes. According to the ACLU, marijuana arrests account for 52 percent of all drug arrests in the U.S., and those arrests reveal a stark racial disparity: Black Americans are 3.73 times more likely than White Americans to be arrested for marijuana possession. Introducing algorithms into the justice system is supposed to reduce this kind of bias in U.S. courts.

[Image courtesy of the ACLU]

There are staunch supporters of algorithm use in city courts, however. Some believe human bias and prejudice are more pervasive than any bias in decisions made by algorithms. According to supporters, algorithms bring consistency and evenhandedness to the judicial process, and can help counteract a judge's implicit bias or a jury's explicit prejudice. Cases in which algorithms have undermined an equitable trial, they argue, come down to poorly designed algorithms rather than the approach itself.

In a 2015 experiment in Virginia, randomly selected agencies used an algorithm that rated defendants' likelihood of skipping trial and of being arrested again after release. The results pointed to a twofold increase in pretrial releases with no increase in pretrial crime. Such results, however, remain isolated examples given the broader political controversy surrounding judicial reliance on algorithms.

On the other side, in 2016 the nonprofit investigative newsroom ProPublica published a report examining computer-generated risk assessments in Broward County, Florida. When the software scored inmates on their likelihood of committing future crimes, it falsely flagged Black defendants as future criminals at nearly twice the rate of White defendants, labeling them high risk even when they did not go on to reoffend.
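To make the kind of disparity ProPublica measured concrete, here is a minimal Python sketch, using made-up data and names of my own choosing rather than ProPublica's actual code, data, or the risk tool itself. It computes a false positive rate per group: among people who did not reoffend, the share the model still flagged as high risk.

```python
# Illustrative records only (invented): (group, flagged_high_risk, reoffended).
records = [
    ("A", True,  False), ("A", True,  True),  ("A", True,  False), ("A", False, False),
    ("B", True,  True),  ("B", False, False), ("B", True,  False), ("B", False, True),
]

def false_positive_rate(rows):
    """Among people who did NOT reoffend, the fraction the model still flagged."""
    negatives = [r for r in rows if not r[2]]   # did not reoffend
    flagged = [r for r in negatives if r[1]]    # but were rated high risk
    return len(flagged) / len(negatives) if negatives else 0.0

for group in ("A", "B"):
    rows = [r for r in records if r[0] == group]
    print(f"group {group}: false positive rate = {false_positive_rate(rows):.2f}")
```

A persistent gap in false positive rates between groups means the model's mistakes fall more heavily on one group even when overall accuracy looks similar, which is the pattern ProPublica reported.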

Opposition to unregulated algorithm use also centers on so-called "predictive policing," which police departments around the U.S. use to decide, based on historical crime statistics, which neighborhoods officers should patrol. Using algorithms to direct police action often results in the over-policing of certain neighborhoods. The NYPD refuses to release the source code for its policing program, arguing that criminals could exploit it. Joshua Norkin, a lawyer with the Legal Aid Society, told Vice that the police force's use of algorithms is inherently racist, which in his view explains why certain neighborhoods are over-policed.
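As a rough sketch of the feedback loop critics describe (a hypothetical toy model, not the NYPD's unreleased system; every number here is invented), consider two neighborhoods with identical underlying crime rates, where patrols are simply sent wherever more incidents are already on record:

```python
# Toy predictive-policing feedback loop (hypothetical numbers throughout).
# Both neighborhoods have the SAME underlying crime rate; only the initial
# recorded counts differ slightly.
recorded = [11, 10]                 # historical incidents per neighborhood
INCIDENTS_SEEN_PER_PATROL_DAY = 5   # patrolling an area surfaces more reports

for day in range(30):
    # The model sends the day's patrols wherever more incidents are on record.
    target = 0 if recorded[0] >= recorded[1] else 1
    recorded[target] += INCIDENTS_SEEN_PER_PATROL_DAY

print(recorded)  # [161, 10]: a one-incident head start comes to dominate
```

Because only recorded crime feeds back into the model, the initial skew reinforces itself. Real systems are more complicated, but this is the dynamic behind the over-policing concern.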

One of the main obstacles to transparency in algorithm use is the interest of the private companies involved. Tech:NYC, a trade group that lobbies city government on behalf of many of New York's tech companies, believes that mandated transparency might have a "chilling effect," discouraging other tech companies from working with the city in similar situations. That stance, however, doesn't bode well for citizen inclusion in legislative and judicial processes, since it suggests corporate interests outweigh civilian concerns.
