Artificial Intelligence and the Public Good
Informing the University’s efforts to harness human-centered artificial intelligence
What interdisciplinary, University-wide structures and systems will unlock collaboration around artificial intelligence (AI) to serve the public good?
More than 80 faculty, staff, students and community members grappled with that question April 11 at the University of Denver’s inaugural AI Summit, hosted by Project X-ITE. Graduate School of Social Work Assistant Professors Anamika Barman-Adhikari and Anthony Fulginiti helped to plan the event along with computer science, engineering and law faculty.
“AI is the future and will be pervasive in our lives. We need to leverage existing capacities and build new resources so we’re not left behind,” says Barman-Adhikari. “DU is committed to the public good, and we want to use AI to assist the local community.”
Participants concluded that doing so will require campus-wide buy-in, collaborative environments, the removal of infrastructure stumbling blocks, more industry partnerships, and new funding sources and incentive structures.
The summit — billed as an “un-conference” to encourage all attendees to roll up their sleeves and share ideas across disciplines — was the first component of a larger, ongoing initiative to define the contributions the University can make in the areas of AI, automation, big data, and the future of work.
That includes social work. Barman-Adhikari and Fulginiti are among a growing cadre of social work scholars applying AI in their research. Barman-Adhikari has been piloting the use of AI to address deviancy training in substance use interventions for youth experiencing homelessness. In data simulations, an AI-enhanced intervention decreased deviancy training by 60%. A randomized controlled trial of the enhanced intervention will begin this summer.
Fulginiti has been exploring the use of AI to improve the effectiveness of gatekeeper training to prevent suicide among college students. The traditional gatekeeper approach involves training people such as resident assistants or faculty who reach a large segment of the student body. There are challenges to that approach, however. For instance, students are more likely to discuss suicidal thoughts with their friends than with an authority figure. Fulginiti's strategy uses social network methodology to map student friendships, along with an AI algorithm to guide gatekeeper recruitment. Preliminary simulation results show that the AI-enhanced approach can lead to more students being connected to a friend who has received training. In the coming year, Fulginiti aims to pilot the AI-enhanced training approach with college students and youth experiencing homelessness to determine whether it increases help-seeking.
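The core idea behind network-guided gatekeeper recruitment can be illustrated with a toy example. The sketch below uses a greedy "maximum coverage" heuristic: repeatedly train the student whose friendships cover the most not-yet-reached peers. The friendship data, the function name, and the greedy heuristic itself are all illustrative assumptions for exposition, not Fulginiti's actual algorithm or data.

```python
# Illustrative sketch only: greedily choose gatekeeper trainees on a
# friendship network so that as many students as possible end up with
# at least one trained friend. Hypothetical data and method.

def pick_gatekeepers(friends, k):
    """Select k students to train, maximizing (greedily) the number of
    students who have at least one trained friend."""
    covered = set()   # students who already have a trained friend
    chosen = []
    for _ in range(k):
        # Pick the untrained student whose friends add the most new coverage.
        best = max(
            (s for s in friends if s not in chosen),
            key=lambda s: len(friends[s] - covered),
        )
        chosen.append(best)
        covered |= friends[best]
    return chosen, covered

# Toy friendship network, stored as adjacency sets (undirected ties).
friends = {
    "ana": {"ben", "cam"},
    "ben": {"ana", "cam", "dee"},
    "cam": {"ana", "ben"},
    "dee": {"ben", "eli"},
    "eli": {"dee"},
}

chosen, covered = pick_gatekeepers(friends, k=2)
print(chosen, sorted(covered))
```

In this toy network, training just two well-placed students ("ben" and "dee") leaves every student connected to a trained friend, whereas training by role (e.g., whoever happens to be a resident assistant) ignores network position entirely.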
Fulginiti and Barman-Adhikari agree that AI holds great potential for social work. “There are many different ways of knowing and using that knowledge to address important issues of social inequities, social injustice, disparities,” Fulginiti says. “We can’t use any one tool to address these things; AI is one potential tool.”
Beyond social work research, Barman-Adhikari and Fulginiti say, social workers can make a significant contribution to the greater body of AI work underway worldwide.
“Who defines public good, and how do we address bias in AI-related work?” Barman-Adhikari asks. Social workers are experts in both areas, she says.
“It’s important to have people coming to AI with a different lens. If you bring a social justice lens to building a computer algorithm, you’re more likely to consider the potential bias going into it,” Fulginiti explains. “By including social work expertise, we can try to create AI tools that solve the problem we hope to solve and ensure that by using it, we’re not inadvertently creating bias or more injustice.”