AI for Healthcare - A Patient’s Tale

(AI has expanded to the healthcare sector. Photo by Possessed Photography on Unsplash.)

It started off like any other paperwork in the American healthcare system.

I’ve always appreciated online check-ins. As an introvert, the fewer of my fellow humans I have to interact with in person, the happier I am.

Most of the time.

It’s ironic, therefore, that the online check-in experience I had this week was so problematic. Or maybe it isn’t, given that AI was involved.

The Situation

I was filling out new patient paperwork for my first appointment with a healthcare provider. The platform and forms used by this provider are used by others in this area, so I am familiar with them. There I am, plugging along, when a form I’ve never seen before pops up. I glance at the title and do a double-take.

“Third-party AI”? “Consent”?

What was AI doing sticking its plagiarizing nose into my healthcare services?

Starting at the top, I read the entire form carefully. It was requesting my consent for a third-party AI tool to have access to audio recordings (?!) of my visits as a patient. The AI would generate documentation for the provider based on the recording, presumably notes or a chart of some kind. This would save the provider time, enabling them to see more patients.

Right. Where do I start?

The Problems

Audio recordings. What audio recordings was this form talking about? To my knowledge, my conversations as a patient with this provider and any staff were never going to be recorded. Documented so that the information pertinent to my condition would be on file, yes. Recorded in their entirety, no.

AI. AI presents numerous security risks. So does every other technology and system, true; if a system is invented by a human, it can — and eventually will — be hacked, as evidenced by the data breach notices that have become so common. (I just got another last week.) But which of those other systems have the same environmental and ethical issues as AI, on the same scale? If there are any, we ought to be rethinking their usage too.

Documentation. I get it. Our healthcare professionals are overwhelmed. The system is broken. But why am I as a patient being asked to put my privacy and the security of my personal information at risk instead of fixing the actual problems with our system? AI is a technological tool, but technology is not going to save us because the problem is not technological. It’s systemic.

AI is at best a problematic band-aid for a stressful symptom, a patch for the bug that is the American healthcare system. We need to be devoting our time, attention, and resources to fixing — or replacing — the system, not getting distracted by the newest, shiniest toys tech companies are attempting to foist upon us to keep profits high and shareholders happy.

The Resolution

At the end of the form, I was assured that consent was not required to receive treatment, presumably to avoid a HIPAA violation. I could decline and I would not be turned away or receive lesser-quality services. So I did.

Decline. Submit. Next form.

But even as I finished the rest of the forms — and wasn’t it interesting that this consent form was one of the last, forcing me to read and fill it out only after multiple other mind-numbing pieces of paperwork — this one stuck in my mind. If I hadn’t been paying attention, who — and what — would I have given permission to access my personal information? How long would it have remained in a file instead of being used to train other AI models? When would I have received a notice that it too had been breached?

Since I declined, we’ll never know.


Originally published in Tech and Me, Loving It or Hating It on Medium on September 3, 2024