
The lost humanity in AI medicine

As we rush into the arms of AI medicine, will compelling research on the importance of the patient-doctor relationship be ignored?

In the Black Mirror episode “Be Right Back,” Martha’s boyfriend Ash dies in a car crash. During the funeral, a friend privately reveals a way Martha can cope with her grief—a service that mines Ash’s social media to create a simulated version of him that she can speak to by phone. When she finds out she’s pregnant, Martha signs up. She finds the approximation of his voice, and his being, soothing enough to get her past her acute grief and loneliness. 

But when the program ups the ante and offers an embodied version of Ash, Martha is ready for more and takes it. “Ash” arrives in a box, the contents of which morph into a lifelike robot when water is added. While this new version of her lover (yes, they have sex) is kind and knows some details about their life together, he is too perfect, too passive—and ultimately of course, he is not human. Unable to accept him, but also unable to kill him, Martha banishes “Ash” to the attic. No matter how advanced the machine, the lost humanity was always painfully clear.

Black Mirror, the title an allusion to our many screens in their “off” states, envisions what technology and especially artificial intelligence (AI) will mean for human beings and for human connection in the future. As a physician who trained in medicine in the 1990s before the tech boom really took off, I was prompted by this episode to look into how experts envision AI will continue to impact the relationship between clinicians and patients, one we know is crucial for healing.

Articles by physicians, scientists, and financial experts foresee the expansion of AI in medicine into documentation and administrative tasks, wearable technology, interactive platforms, and even diagnostic support for clinicians. These advances are presented as exciting enhancements to “patient-centered” care, alongside the targeted advances provided by genetic testing and research. Virtual visits and “smart hospitals” that place iPads, computers, and robots at the forefront of patient care are likewise said to be valuable contributors to a kind of care that puts the patient at its center. For all but the sickest patients, medical care is expected to take place in the home; hospitals will become intensive care units.

These shifts in how care and information are delivered are touted as instilling a sense of agency in patients, allowing them to access and manage their own health information, and even interventions, from their own beds and sofas. Technology certainly expands access to care for people who live in remote areas or who traditionally struggle to get to the doctor. The thought is that patients, armed with the knowledge and advice gained from remote monitoring and feedback, won’t need to rely so much on the guidance of their doctors. And doctors will have both administrative and decisional support from AI, leaving more time to focus on the neediest patients.

What isn’t discussed much is what this future ought to create, or hold onto, for the patient-clinician relationship. 

While some authors note the increase in mental illness and suffering seen over the past several decades, a period of technological boom in medicine as elsewhere, they mystifyingly do not address the potential connection between the two, nor do they propose counterbalances for patients who may feel stigmatized, isolated, and hopeless.

In the book Compassionomics, researchers at Stanford showed that patients with diabetes who had better relationships and communication with their doctors had better blood sugar control, which leads to less end-organ damage, such as heart and kidney disease. We also know that patients don’t open up to providers they don’t trust, leading to less information sharing, worse compliance with care plans, and poorer outcomes. Studies show that, given the history of medical abuse faced by certain populations, people of color are more comfortable with race-concordant clinicians and do better in their care.

These are well-established facts. So how can we expect a clinical encounter with a computer to lead patients to openly share their symptoms and stories, and to result in the kind of connected care that we know produces better health outcomes? It’s well known that bias seeps into AI. And will our home-based caregivers be robots too?


The atrocities of WWII, which saw torture and experimentation at the hands of medical professionals, led to the adoption of international standards of medical ethics. A few decades later, the medical humanities were created to counter medicine’s focus on a biomedical model, which itself risks a slide toward the dehumanization of patients. Including art and literature in the study of medicine reminds trainees and practitioners of their fundamental task: to care for human beings and to join them on the shared journey in temporary bodies with expansive minds. Engaging with art and writing helps us transcend pain and suffering and allows us to connect at a level we all share. Closely studying the narrative elements of creative works also trains us to notice and interpret what a patient brings to a medical encounter, such as the metaphors they use for their illness paths and the unspoken stories that emerge in their hunched bodies.

Rita Charon, the founder of narrative medicine, a workshop practice that includes reflecting on art and literature, has written of “the authentic and muscular connections between doctor and patient, between nurse and social worker,” adding that “Narrative medicine focuses on our capacity to join one another as we suffer illness, bear the burdens of our clinical powerlessness, or simply, together, bravely contemplate our mortal limits on earth.”

The more we know about the human response to medical intervention, the more important we know the human connection to be.

The purpose of the medical humanities movement was to keep medicine grounded in what we share, in what makes us human. Where is this need reflected in our vision for the future of medicine with AI? While it’s critical to work toward preventing disease by giving patients the information and technology needed to make choices that affect their chances of contracting a serious illness, it’s quite another thing to design a healthcare apparatus wholly focused on making people as independent of need as possible. Computerized, AI-driven healthcare forgets the importance of human touch, empathy, and even the simple need for company — especially when one becomes a citizen of the onerous “kingdom of illness.”

Perhaps a future wholly defined by efficiency and patient autonomy, envisioned in the form of wearable monitors, interactive platforms, and virtual visits, will end up going the way of “Ash”—relegated to a dusty storage room—because authentic human connection has been lost. No matter how advanced the machine, that lost humanity is painfully clear.

