In recent years, DNA technology has helped overturn more than a thousand wrongful convictions rooted in faulty identifications, botched evidence and willful miscarriages of justice. Criminal justice advocates warn, however, that emerging AI technologies such as facial recognition and predictive analytics, when flawed, mishandled or inherently biased, threaten to blunt this progress.
By Tracey L. Regan
“What happens when the entry point to a conviction is not misconduct per se, but instead flawed and biased technology?” posed Rebecca Brown, director of policy for the Innocence Project, at the conference “Women Designing the Future: Artificial Intelligence/Real Human Lives,” hosted this spring by the Murray Center for Women in Technology at NJIT.
These technologies are disproportionately deployed against people of color, she added, and their source code is largely withheld from defense teams, leaving them unable to “confront its fallibility and unfairness.”
Brown was part of an all-female panel of prominent data scientists, social justice advocates and policy researchers who discussed AI’s potential to codify and exacerbate systemic discrimination, as well as strategies to mitigate those harms.
“We’re at an inflection point. AI is a technological innovation that increasingly affects everyone’s lives. In many sectors, it’s been doing so silently for a while now,” said Nancy Steffen-Fluhr, an associate professor of humanities and the director of the Murray Center. “At this year’s conference, we wanted to put technologists and social activists into conversation about how to leverage the power of AI for social good. We weren’t disappointed. The speakers connected profoundly with each other — and with the audience.”
In addition to flawed facial and audio identifications, policing tools such as location-based predictive policing and AI surveillance technologies “cast a wide net that entraps innocent people … who should never have come to the attention of law enforcement,” said Sarah Chu, the senior advisor on forensic science policy for the Innocence Project. These people remain in police databases, vulnerable to further false identification.
Algorithms that purport to assess character traits are also notoriously flawed, the panelists said.
Tools used to assign a person’s risk score in decisions about bail and parole are often trained on historical data sets that are biased against people of color, noted Renee Cummings, an AI ethicist and professor of practice in the School of Data Science at the University of Virginia, where she serves as the university’s first Data Activist in Residence. She cited the example of a white man convicted of armed robberies who was assigned a much lower risk score than a Black woman whose record consisted of misdemeanors committed in her youth.
“Understand the power of algorithms,” she urged. “Audit them and bring due diligence in how they’re employed.”
In another arena, AI hiring tools used to evaluate resumes and predict traits such as steadiness and leadership capability have responded negatively to the modifier “women’s,” as in “women’s college,” for example.
“Do these hiring tools work? They don’t,” said Julia Stoyanovich, the Institute Associate Professor of Computer Science and Engineering and director of the Center for Responsible AI at New York University. She recommends a system akin to “nutritional labels” for datasets: documentation of each dataset’s purpose and approved deployments, as well as how it was collected and by whom.
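As a rough sketch of the idea (a hypothetical illustration, with field names assumed for this example rather than drawn from Stoyanovich’s actual specification), such a label might travel with a dataset as structured metadata:

```python
# Hypothetical "nutritional label" for a dataset. The fields below are
# illustrative assumptions, not Stoyanovich's published specification.
from dataclasses import dataclass, field

@dataclass
class DatasetLabel:
    name: str
    purpose: str                     # what the dataset was built to support
    approved_deployments: list[str]  # contexts where use is considered appropriate
    collection_method: str           # how the records were gathered
    collected_by: str                # who gathered them
    known_limitations: list[str] = field(default_factory=list)

# Example label for a fictional dataset:
label = DatasetLabel(
    name="county_arrest_records_2010_2020",
    purpose="Historical research on policing patterns",
    approved_deployments=["academic study"],
    collection_method="Compiled from county booking logs",
    collected_by="Example Research Group (fictional)",
    known_limitations=["Over-represents heavily policed neighborhoods"],
)
print(label.known_limitations)
```

The point of such a label is that anyone proposing a new use, say pretrial risk scoring, could check it against the dataset’s declared purpose and limitations before deploying the data.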
Chu noted that technology is often adopted before it has been validated. She encourages communities to set up oversight boards composed of practitioners, researchers and community stakeholders to decide whether a technology should be deployed at all and, if so, to guide its implementation and use.
AI technology can, however, substantially enhance human capabilities if it is purged of flawed data and historic human biases and assigned its proper role, several panelists noted. It can accelerate and improve diagnoses; predict and respond to climate-related disasters; help crews navigate complex assets, such as ships, through uncertain environments; and even screen masses of resumes against objective criteria when selecting job candidates.
The key is to understand which tasks are better performed by humans and which by machines, and to optimize their “symbiosis,” said Senjuti Basu Roy, the Panasonic Chair in Sustainability and an associate professor of computer science at NJIT.
Humans, she noted, are critical, strategic and creative thinkers with empathy and imagination, while AI is better at automating processes, finding patterns in large data sets, reducing human error and remaining available around the clock. “There is a reason why humans and AI are synergistic: they both have unique expertise.”
But as employers determine which tasks to delegate to each, she added, they must first understand human capabilities and needs in a changing workplace.