
Grappling With Ethical Artificial Intelligence in Health Care
Center for Practical Bioethics Ponders Cutting Edge of Tech

Above image credit: The Center for Practical Bioethics (CPB) in Kansas City is working to create ethical standards for how A.I. is used in health care. (Illustration | Pixabay.com)

Without much media coverage, Congress late in December passed the National Artificial Intelligence Initiative Act. It’s designed to crank up A.I. work at various federal agencies and to create a White House A.I. office and an advisory committee to oversee those efforts.

All of which brings us back to work being done by the Center for Practical Bioethics (CPB) in Kansas City to create ethical standards for how A.I. is used in health care. I wrote about that in late 2019, but adoption of this new law means it’s time to see how that local work is going and how, if at all, the new law might affect it. (Some of the early work of the CPB team can be found in this report on the center’s website.)

The two researchers I reported on earlier have been working hard during the COVID-19 pandemic and are glad to see more federal interest in A.I., which can be described as the process by which machines make decisions that normally require human expertise.

In health care, that can mean everything from determining who gets vaccinated against the coronavirus first to deciding which patients are most suitable for a heart transplant.

Matthew Pjecha of the Center for Practical Bioethics. (Contributed | Matthew Pjecha)

One CPB team member, Matthew Pjecha, says the new federal law shows “that we’re seeing growing awareness that there’s a real set of ethical considerations that have to be made when you use artificial intelligence. For our project, I hope it means we’ll see higher receptivity to issues we’ve been trying to raise.”

And his research partner, Lindsey Jarrett, says that “when Matthew and I got started having these conversations in 2018, people were interested, but now they want to know how to get started. They’re really on board with moving to the next level. But we’re still like building the plane while flying it.”

Jarrett is glad that under the new Biden administration “there will be a shift to respecting science,” but she cautions that “none of the people who believed in (former President Donald) Trump’s ideas are going away, either.” 

So, she and Pjecha emphasize, a broad range of people must be represented to help make sure that when A.I. is used in health care, it’s applied fairly and equitably and doesn’t perpetuate a health care system with built-in biases against already marginalized people.

To ensure such an outcome, they’re working to create a broad-based advisory board that would outlast their project, remaining sustainable and able to monitor whether A.I. health care solutions are being used in ethically defensible ways.

Pjecha puts it this way: “It’s important that the government have a healthy relationship with science, and while it is true that good ethics start with good facts, the selection of facts has a political dimension. Simply having a healthy relationship with science is not enough. You also have to have clarity of vision about what is accessible and what isn’t. I don’t think you can wait for science itself to tell us what that is. That’s a decision we have to make as a society.”

The project on which Pjecha, Jarrett and others are working — called “Building an A.I. Framework for Trust: Creating Paths for Justice in Healthcare” — has received a $178,000 grant from the Sunderland Foundation of Kansas City.

As Pjecha notes, the context in which new A.I. technologies are emerging includes the heightened focus on systemic racism, police brutality and other societal issues. So, he says, it’s important that A.I. “not deepen the gulf affecting the marginalized or deepen their oppression. A.I. is ultimately a tool for helping to address some of that.” To that end, the new advisory group will include not only “experts and local business leaders but also advocates for marginalized populations.”

Lindsey Jarrett of the Center for Practical Bioethics. (Contributed | Lindsey Jarrett)

As Jarrett says: “We’ve been looking for people who can speak to the different phases of health care product development. And we want those people together because people learn from each other.” But she cautions that “the (ethical) standards are not going to come tomorrow. We have to look across the whole continuum of the development process.”

As they have pursued this work during the pandemic, they’ve sometimes been frustrated by how decisions have been made about such matters as contact tracing, the effort to find out where people who caught the virus got it and to whom they spread it. Contact tracing is an old art, they say, but people rushed to adopt high-tech ways of doing it, got bogged down in implementation and, as a result, lost track of methods that already worked.

Such technology, they say, can be useful, but it must be carefully thought out with wise ethical boundaries in place before it’s deployed.

When they began this research, Pjecha says: “I expected it would be kind of a tough sell. But, generally speaking, the context we found ourselves in — between the pandemic and the police brutality protests — means that something has turned. And people are starting to see that things are all connected together and that the software we’re running in hospitals might be part of a larger ecosystem. It’s been encouraging that people are willing to go there with us. It’s become impossible to ignore that all these things are connected in some way.”

In the end, A.I. is simply a tool. Like all tools, it can be used wisely or misused in destructive ways. The hope is that this Center for Practical Bioethics project can help A.I. developers and users to make sure it’s a generative tool, not one more way to add to systemic discrimination against needy people.

Bill Tammeus, a former award-winning columnist for The Kansas City Star, writes the “Faith Matters” blog for The Star’s website and columns for The Presbyterian Outlook and formerly for The National Catholic Reporter. His new book, Love, Loss and Endurance: A 9/11 Story of Resilience and Hope in an Age of Anxiety, was published in January.

