Catholic Medical Association


Editorial. First published online August 6, 2025.

Human Dignity vs. Modern Medicine

Barbara Golder, MD, JD

Volume 92, Issue 3. https://doi.org/10.1177/00243639251356843

One of the hallmarks of the Catholic faith is respect for the human person—any human person—because all are made in the image and likeness of God. This conviction underpins so much of our approach to bioethics: it is why we reject abortion, why we reject mutilation, and why we reject euthanasia. We have done a good job in bringing concern for the human person to the forefront of discussion about these topics.

As important as those issues are (and they are tremendously important in shaping us as a society), other forces threaten the human condition almost as much, and in subtler and perhaps more dangerous ways, because they touch everyone personally in ways that the temptations to abortion or euthanasia do not. Pope Leo XIV has already entered the fray in one critical area: artificial intelligence (AI). Catholic physicians have a vested interest in this discussion, and in how it will shape our world, for reasons both personal and professional.

AI is going to change the practice of medicine. If a computer can more accurately diagnose and treat (statistically speaking) the average patient, what is the need for the trained physician? Already, AI has begun to transform the practice of radiology; computer-assisted diagnosis is both available and, if the studies are to be believed, far superior to mere human diagnosis both in reading studies and in predicting outcomes on a population basis. It will be hard to argue against the possibility of cost savings and improved outcomes in a society that measures success by the bottom line and predetermined metrics. If a machine can do a job better than a person, why not let it? Machines are cheaper; they do not require vacations, are unlikely to sue, and are fungible; any of the same make and model will serve. In a world where the value of the human person is increasingly lost, will we pay attention to the great warning flags already beginning to stir in the AI breeze?

One of the most disturbing studies I have seen relates to the effect of AI on human intelligence. The Massachusetts Institute of Technology recently published the results of a study of the effects of ChatGPT on the brain, and the findings are alarming.1 Researchers found that chronic ChatGPT users accumulated what they term “cognitive debt,” characterized by a loss of memory function and by neural changes identified on scans. Their study suggests that long-term use of AI may have serious, permanent effects on the mind's ability to engage, process, and retain information. Perhaps most worrisome, though not entirely surprising, is that heavy users of ChatGPT performed worse on assigned tasks without its aid than they did with it. It would appear that the use of ChatGPT can actually create a disability where none existed before. How severe and how permanent that disability is remains to be seen.

There were two silver linings in the study. The first is that people with higher skill levels and good cognitive baselines going into the study actually benefited from the use of ChatGPT, both in output and neural connections. The second was described by social commentator Matthew Loop this way:

Teachers did not know which essays used AI but they could feel something was wrong. Soulless. Empty with regard to content. Close to perfect language while failing to give personal insights. The human brain can detect cognitive debt even when it can’t name it.2

All this underscores what we already know: technology is a tool, powerful and dangerous, and capable of reshaping how humans live, work, and connect, and it affects everyone in our modern, first-world society. Although we may have already recognized this, we have not done especially well in avoiding its impact. The hyper-reliance on cell phones, for example, has reshaped the habits and abilities of an entire generation: too many who were raised with cell phones and texting are not comfortable talking to a real, live person and conduct all the business of life by text. Who has not had the experience of seeing a group of people sitting around a communal table, in silence, texting back and forth to those sitting opposite them?

Lest you think this has not affected medicine: for younger doctors, texting, via phone or portal, is largely the avenue of communication in many practices. While it has some very real advantages (transmission of concerns while reducing interruptions), it also changes the baseline of interaction in such a way that the sacredness of the person takes second place to the tyranny of the communication. Sometimes we need to be interrupted.

An example from personal experience: A year or two ago, my brother was staying with us to be treated for cancer. This required him to see a cardiologist here in town, in large part because his own cardiologist in another city could (quite literally) not be reached by phone, and the personal assistant who responded to inquiries was ill-equipped to handle the request for information. I dropped him off at the office and parked the car. When I returned to the office to join him, he had already been taken back. I told the receptionist I needed to be there to assist. Rather than get up out of her chair and open the door to the back hall to tell someone I was here, she texted. No one ever came to get me; when I asked, they had not even received the message. Passing on information in the quickest way was more important than seeing to the needs of the persons actually present, which were immediate if not emergent.

Missing from medicine driven by AI is the human element, that indescribable but essential injection of soul into the management of patient care. Have we as physicians perhaps been too complacent in accepting the ways in which technology denies the dignity of the very patient who seeks medical care? Does this predispose us to an inability to see the same dangers of AI and to oppose their indiscriminate implementation?

Perhaps the simplest and most glaring example is how (and sometimes whether) a physician can be contacted, as implied in the above example. In the interests of efficiency, we have accepted automated services that usually begin with “If this is an emergency, call 911” in lieu of real human contact. Messages, if taken at all, are left with a machine or, worse yet, with the lowest-paid and least-trained person on staff; an answer usually comes by text, email, or portal, and is generally not guaranteed within 24 hours. The disinterested nature of such avenues of communication risks missing important information that only human contact can provide, whether it be the discovery of a mass the patient was too frightened to mention or a critical symptom that separates the emergent from the routine. In short, it takes a person to respond to the person whose interests, at least in theory, are central to the enterprise underway.

Every physician in practice has a story of techno-horrors, and most of us also recognize the ways in which technology has enhanced medical practice. Telemedicine, for example, has aided both access to and efficiency of medical care in rural areas and for homebound patients; one of its greater merits is that it connects persons in real time, even if the encounter is virtual and across space. Communication of essential data both to medical colleagues and to techno-savvy patients is a real boon. There are others, of course, but what I think sets them apart is that they are in service of the patient and physician in a relationship, rather than opposed to or interfering with it. The human connection is at the center of the process, not an irritation, nor one factor equal to or less than the others.

This is where Catholic medicine and Catholic physicians, and Catholic thinkers, have an opportunity to change the discussion about technology in medicine before it becomes so deeply and universally embedded that it cannot be reversed and permanent damage is done. What are the moral obligations of a Catholic physician, practice, or hospital when it comes to the use of modern technology, especially AI?

In this regard, I look to some of my own Anabaptist ancestors for a model. I remember once reading the explanation of an Amish bishop as to what new technology his community would accept and what it would not. For him, at least, it was not simply a matter of sharp lines and traditional practices, nor was it a matter of increased efficiency or productivity. Rather, as he put it: “We look at how this will affect the community, our values, our way of life.”

That is precisely the way Catholic physicians, if we are of a mind to, can educate the world on not only the risks but the rewards of grappling with technology that outstrips our puny ability to anticipate or respond to it. It cannot be a simple matter of drawing bright lines: we forbid this, we accept that. Our grappling and our presence must reflect the many competing forces, values, goods, and evils that modern medicine, and particularly the use of AI in it, comes fully loaded with. We must equip Catholic decision-makers, whoever and wherever they are, with the skill to recognize the nuances of these new situations, the knowledge to understand them, the compassion to put them in their proper place, and sufficient humility to know that the temptation to set aside the governing principle of Catholic medicine—that whatever we do must preserve, advance, and honor the dignity of every human being we encounter—is ever present, often difficult to perceive, and even harder to resist.

The last Pope named Leo confronted the changes wrought by the Industrial Revolution; Rerum novarum was the result. While it cannot be said that Catholics have managed to implement in every way the principles contained therein (even in their own institutions), without such a clear, present, and persistent voice instructing us on our responsibilities to each other in the midst of such frightening changes, the world would be much worse off. And, over time, we have seen that understanding these principles leads to positive changes even in the secular world. Perhaps not in the way we might wish or have imagined, but real, and important to those on whom such technological sea changes to society wreak the most havoc.

Our new Pope has wasted no time in engaging this as a central and driving issue for Catholics,3 and some of the best thinkers of the day are already sounding the alarm (see The Last Word in this issue for an example). That is important, but it is not enough. Articulating principles is essential, but so is a deep understanding of the complicated, practical world in which they must be applied. Just as it is not possible to understand bioethics without a good understanding of the biology, anthropology, and human ecology of the particular situation, it will not be possible to articulate a way forward without an understanding of the same forces in techno-ethics, and the needs and dignity of the human person must be an essential part of that. Along with that, we will need courageous witnesses, people and institutions willing to stand, in spirit and in practice, against the tide of the world.

We will need to see and support those who say, after due consideration, “No thank you, we decline…this particular use of technology distorts human relationships, damages individuals, and ignores the dignity due every human being.” It is not going to be easy, it will certainly be costly, and it is, in the long run, imperative.

Footnotes

1. https://www.media.mit.edu/publications/your-brain-on-chatgpt/ See also below for a quick, lay summary of findings.

2. https://www.facebook.com/share/p/1J3dyrQW5w/?mibextid=wwXIfr

3. https://www.wordonfire.org/articles/pope-leo-xiv-and-the-new-social-question-of-ai/