KEY POINTS

  • Researchers have created a new algorithm that can "predict criminality"
  • The algorithm predicts criminal behavior based on a person's face
  • More than 2,200 experts have signed a letter calling for the research to remain unpublished

Experts are concerned that a new algorithm designed to “predict criminality” using nothing but a person's face could result in discriminatory accusations rooted in racial bias.

The Coalition for Critical Technology has released an open letter demanding that new research conducted by Harrisburg University researchers remain unpublished, BBC reported. The open letter, signed by more than 2,200 experts and academics as of this writing, also demands that publishers “refrain from publishing similar studies in the future.”

Why the letter?

The research the letter refers to is titled “A Deep Neural Network Model to Predict Criminality Using Image Processing.” It was written by Harrisburg researchers Jonathan W. Korn, a Ph.D. student and NYPD veteran; Professor Nathaniel J.S. Ashby; and Professor Roozbeh Sadeghian.

In short, the research centers on a new algorithm designed to predict whether someone is likely to become a criminal or take part in a criminal act based on the person's face. The researchers claim that this AI, and similar machines in the future, will be a “significant advantage” to law enforcement agencies, helping them “prevent crime from occurring.”

In a press release, the researchers claim that “with 80 percent accuracy and with no racial bias, the software can predict if someone is a criminal based solely on a picture of their face.” The experts who signed the letter, however, do not believe that claim.

“Such claims are based on unsound scientific premises, research, and methods, which numerous studies spanning our respective disciplines have debunked over the years,” the letter read.

The experts provide a lengthy discussion as to why such an algorithm should never be published. Here are but two of their reasons:

  • Machine learning programs “are not neutral”

The academics said such algorithms and machine learning programs are subject to the “incentives and perspectives” of those who develop them, as well as the data on which these programs rely.

  • No system can ever “predict criminality”

Systems designed to predict criminality from criminal justice data should never be relied upon, the experts said. Historical evidence and studies show that people of color are treated more harshly than white people, which produces “distortions” in criminal justice data. Any model trained on that data inherits those distortions, making its predictions unreliable, as the sketch below illustrates.
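
To make the “distortions” point concrete, here is a minimal, hypothetical sketch in Python. It is not taken from the Harrisburg research or the letter; the groups, rates, and labels are invented for illustration. Two groups have an identical rate of the underlying behaviour, but that behaviour is recorded as a “criminal” label far more often for one group. A classifier trained on those labels learns the recording bias rather than the behaviour.

    # Hypothetical illustration (not the Harrisburg model): a classifier trained on
    # records that over-sample one group "predicts criminality" at different rates
    # for groups whose underlying behaviour is identical.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 20_000

    # Two groups, equal size; the true rate of the behaviour is identical (10%).
    group = rng.integers(0, 2, size=n)          # 0 or 1, a proxy attribute
    true_behaviour = rng.random(n) < 0.10

    # Biased labels: the behaviour is recorded 90% of the time for group 1
    # but only 30% of the time for group 0 -- the "distortion" in the data.
    record_rate = np.where(group == 1, 0.9, 0.3)
    label = true_behaviour & (rng.random(n) < record_rate)

    # A model trained on these labels, using the proxy attribute as its feature.
    X = group.reshape(-1, 1).astype(float)
    model = LogisticRegression().fit(X, label)
    pred = model.predict_proba(X)[:, 1]

    print("true behaviour rate, group 0:", true_behaviour[group == 0].mean())
    print("true behaviour rate, group 1:", true_behaviour[group == 1].mean())
    print("predicted 'criminality', group 0:", pred[group == 0].mean())
    print("predicted 'criminality', group 1:", pred[group == 1].mean())

Running this sketch, the predicted “criminality” rate for group 1 comes out roughly three times higher than for group 0 even though the true behaviour rates are the same: the model has learned the recording bias, not the behaviour, which is the kind of distortion the letter describes.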

George Floyd's killing at the hands of US police has fuelled a global uproar over racism and police brutality (AFP / Kenzo TRIBOUILLARD)