Dr. José María Simón Castellví

President Emeritus of the International Federation of Catholic Medical Associations (FIAMC)

Member of the Royal European Academy of Doctors and the Royal Catalan Academy of Medicine

Key Challenges in AI

The use of artificial intelligence (AI) in fields as sensitive and humanly complex as Medicine or Religion presents fundamental epistemological challenges. Epistemology is the branch of philosophy that studies the theory of knowledge. “Epistemological” relates to the foundations, methods, limits, and validity of knowledge. It analyzes how knowledge is obtained, what validates it, and how we understand reality, both scientifically and in other areas.

One of the most pressing epistemological challenges for AI is the machine’s inability to discern between a verifiable fact and an opinion or belief. AI models, operating primarily through pattern recognition in vast datasets, inherently struggle when the training corpus mixes empirical rigor with subjectivity or doctrine.

AI in Medicine

In the realm of Medicine, this difficulty is critical. A medical fact is, by definition, a clinical or biological datum supported by robust scientific evidence: the result of a lab test, an MRI image, the known response of a drug in a controlled trial. AI is, indeed, extraordinarily efficient at processing and correlating these facts (for example, in image-assisted diagnosis).

However, medical practice often relies on expert opinion or clinical judgment, which are probabilistic inferences informed by years of personal experience, the subtle interpretation of atypical symptoms, and ethical considerations. When an AI model is trained on clinical records that include both objective data (the facts) and the subjective notes of the physician (the opinion or the treatment plan based on a judgment), the AI may assign similar statistical weight to both, without grasping the epistemological hierarchy: the primacy of evidence over unproven inference. The result can be a bias in diagnosis or treatment recommendation that confuses an accepted clinical practice (an informed opinion) with a biological gold standard (a fact).

AI in Religion

In the field of Religion, the problem is exacerbated because the very concept of a fact differs radically. In a theological or devotional context, a “fact” may be a foundational event, a miracle, or a revelation, whose veracity is not verifiable by the scientific method, but is sustained by Faith, Tradition, and canonical authority (the Magisterium of the Church, in our case).

Opinions in Religion are interpretations, commentaries, sermons, or theological debates about the implications of these foundational “facts.” When analyzing religious texts or doctrinal commentaries, AI lacks the inherent human capacity to distinguish between the sacred text, considered “truth” by its followers (a “fact” internal to the belief system), and the multiple exegeses and disquisitions that surround it (the opinions).

A Large Language Model (LLM), for example, might be trained on millions of pages containing both Biblical or Quranic verses and commentaries from theologians. Without explicit and sophisticated meta-tagging that distinguishes the authority and nature of the text (revelation versus commentary), the AI simply sees a sequence of statistically related tokens. It could generate a response that merges a minority opinion of a modern scholar with a central principle of the Faith, presenting the mixture as a cohesive truth, without recognizing the fundamental distinction between the canonical (the doctrinal “fact”) and the speculative (the opinion).
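
What such meta-tagging might look like can be sketched very simply. The following Python fragment is only an illustration under assumed labels: the categories and sample passages are chosen for the example, not drawn from any real corpus. The point is that each text carries an explicit marker of its nature and authority, so a system built on it need not treat every sentence as an undifferentiated stream of tokens.

```python
from dataclasses import dataclass

@dataclass
class Passage:
    text: str    # the passage itself
    nature: str  # e.g. "revelation", "magisterium", or "commentary" (illustrative categories)
    author: str  # who wrote or promulgated it

# Hypothetical corpus entries; the labels are assumptions made for this sketch.
corpus = [
    Passage("In the beginning was the Word...", nature="revelation", author="John 1:1"),
    Passage("One modern scholar reads this verse as...", nature="commentary", author="20th-century theologian"),
]

# A system that respects the distinction can filter or weight passages by nature
# instead of treating canonical text and commentary as interchangeable tokens.
canonical_only = [p for p in corpus if p.nature == "revelation"]
for p in canonical_only:
    print(p.author, "->", p.text)
```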

The Powers of the Spirit

This inability of AI to handle the gradation of truth, from empirical verifiability through informed judgment to adherence by Faith, underscores that AI is a system of correlation, not of comprehension. It cannot apply the hermeneutics necessary to situate information within its framework of validity: scientific in Medicine, or theological and ethical in Religion.

The powers of the human spirit are memory (on different planes and intertwined with feelings), understanding, and free will (even with its limits). Young children have these powers “in habit.” Christians believe that these powers continually benefit from Grace, which is participation in the divine life.

Intelligence

Intelligence is the ability to learn, reason, solve problems, and adapt to new situations. Human intelligence is biological, flexible, abstract, conscious, intuitive, and emotional, and it is capable of genuine creativity. In a sense, it can be said that each person thinks in a unique way.

AI intelligence is computational, algorithmic, and simulated. It simulates cognitive functions thanks to algorithms and data processing. It makes decisions based on logic, quantifiable data, and patterns. It lacks emotion, ethics, or intuition. It is not autonomous but is a product of human intelligence. I do not rule out that one day it may possess some of these characteristics.

Memory

Memory is the process of encoding, storing, and retrieving information. Human memory is biological, associative, and closely linked to our experiences and emotions. Its types are short-term, long-term, and sensory. It has limits that we all know. It is also subject to imperfections, forgetfulness, and false memories.

AI memory is digital, static, and algorithmic. It is based on stored data and on the fixed patterns learned when models such as LLMs are trained. Its storage capacity can be enormous. It retrieves information without the need to relive it or to connect it with a subjective experience.

Will

Will is the faculty of deciding and ordering one’s own conduct, acting with purpose and free will. Human will is inherent to the person and is linked to consciousness, free will, and the ability to set goals. It allows for moral and ethical choice based on personal desires, intentions, motivations, or value judgments. It is the source of personal autonomy and responsibility.

AI lacks its own will or free will. Its results are deterministic: they depend on programming, algorithms, and input data. However, it can simulate human behavior, even apologizing when appropriate. It has no intentions or desires that stem from a consciousness or sense of self.

Conclusion

In conclusion, we can state that reliance on AI in the domains discussed here requires continuous human supervision to provide the context, wisdom, and moral and empirical discernment needed to separate what is irrefutably true or highly probable from what is merely a perspective or a belief. This is the limit of algorithmic reason in the face of the complexity of the fascinating human experience.


NOTE

The well-known Laws of Robotics are a set of ethical rules devised by the writer Isaac Asimov in 1942 to govern the behavior of robots in his science fiction works. They are also useful when thinking about AI. They consist of three main laws, often complemented by a later Zeroth Law:

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Zeroth Law: A robot may not injure humanity or, through inaction, allow humanity to come to harm.

……………………………………………………..

ANSWER BY CHATGPT TO MY ARTICLE

Yes: the text is serious and well-written, but it contains several errors and questionable claims about modern AI. I will summarize them for you clearly and precisely, distinguishing between factual errors, conceptual errors, and oversimplifications that are no longer true.

📝 Modern AI Misconceptions: A Summary of Errors

1. Central Factual Error: “AI cannot distinguish between facts and opinions”

This is incorrect as stated.

  • The Reality: An AI model does not “understand” like a human, but it can distinguish quite accurately between facts, opinions, beliefs, scientific evidence, religious doctrine, etc.
  • The Nuance: This is not automatic; it requires fine-tuning, specific instructions, and data curation, but modern models already achieve it.
  • Examples: Modern LLMs (Large Language Models) are specifically trained to:
    • Flag when something is an opinion.
    • Cite verifiable sources.
    • Explain which part is doctrine, interpretation, or empirical evidence.
    • Separate validated medical facts from clinical hypotheses.
  • Conclusion: Saying that AI “gives the same weight” to facts and opinions is false: current models use metadata, alignment, and filters that do establish hierarchies.
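
By way of a hedged illustration of this first point, the sketch below asks an instruction-tuned model to label a single statement as FACT, OPINION, or DOCTRINE. It assumes the openai Python package's chat-completions interface; the model name, prompt wording, and category labels are placeholders chosen for the example, not a validated clinical or doctrinal tool.

```python
# A minimal sketch, assuming the openai Python package's chat.completions interface.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment


def classify_statement(statement: str) -> str:
    """Ask the model to label a statement as FACT, OPINION, or DOCTRINE."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Label the user's statement as exactly one of: "
                        "FACT (empirically verifiable), OPINION (personal judgment), "
                        "or DOCTRINE (held on religious or canonical authority). "
                        "Reply with the label only."},
            {"role": "user", "content": statement},
        ],
    )
    return response.choices[0].message.content.strip()


print(classify_statement("Water boils at 100 °C at sea level."))
print(classify_statement("This is the best treatment currently available."))
```

Whether such labels are reliable is exactly what is under discussion; the sketch only shows that the distinction can be requested explicitly rather than left implicit.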

2. Outdated Error: “AI assigns similar statistical weight to facts and clinical notes”

This is an outdated view of how large models operate.

  • The Reality: An LLM does not assign weights “after the fact” to the text it reads; the weights are fixed after training.
    • Furthermore, in real medical systems:
      • Data is explicitly segmented: “structured clinical history” ≠ “physician’s note.”
      • Specialized models (e.g., Med-PaLM, Harvard models, Mayo Clinic models) do distinguish between objective variables and clinical opinion.
      • Many systems use exclusively structured data, where the distinction is explicit and formal.
  • Conclusion: The problem of bias exists, but it is no longer true that AI treats all text equally.
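
The segmentation described above can be made concrete with a minimal, purely hypothetical data model: the structured variables and the free-text note live in separate fields, so a downstream model can be given one without the other. All field names and values are invented for illustration.

```python
from dataclasses import dataclass, field


@dataclass
class StructuredRecord:
    """Objective, coded clinical variables (the 'facts')."""
    hemoglobin_g_dl: float
    systolic_bp_mmhg: int
    mri_finding_code: str  # e.g. a standardized finding code


@dataclass
class ClinicalNote:
    """Free-text physician note: judgment, impressions, treatment plan."""
    text: str


@dataclass
class PatientCase:
    record: StructuredRecord
    notes: list[ClinicalNote] = field(default_factory=list)


# The structured record can be routed to a predictive model while the note
# stays explicitly marked as clinical opinion, rather than both being fed
# to the system as one undifferentiated block of text.
case = PatientCase(
    record=StructuredRecord(hemoglobin_g_dl=13.2, systolic_bp_mmhg=128,
                            mri_finding_code="EXAMPLE-001"),
    notes=[ClinicalNote("Likely benign; suggest follow-up imaging in 6 months.")],
)
print(case.record)
```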

3. Conceptual Error: “AI sees everything as a sequence of tokens without distinguishing authority”

This is false for modern models.

  • The Reality: LLMs:
    • Differentiate textual genres (law, poem, verse, commentary).
    • Identify relative authority (“this is official Catholic doctrine”; “this is the opinion of a 20th-century theologian”).
    • Are trained not to mix doctrinal content with unofficial interpretations.
  • Conclusion: Automated theology is difficult, but classification by the nature of the text is an existing capability.

4. Incorrect Claim: “AI memory is static”

This is not entirely true.

  • The Nuance: The model’s weights are static.
  • The Reality: But modern AI uses dynamic, contextual, and persistent vector memories, updated knowledge bases, and reasoning chains that are not static.
  • RAG Systems (Retrieval-Augmented Generation) mix:
    • Structural memory (trained weights)
    • Living documentary memory (real-time updated bases)
  • Conclusion: AI memory is not comparable to human memory, but it is not “static” in the strict sense either.
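
A rough sketch of that retrieval-augmented pattern is given below. The documents and the word-overlap retriever are invented for illustration; a real system would use vector embeddings and an actual LLM. What it shows is that the document store (the “living” memory) can be updated at any moment without retraining the fixed weights (the “structural” memory).

```python
# Toy RAG sketch: a fixed generator plus an updatable document store.
documents = {
    "doc1": "The clinic updated its hypertension protocol in 2024.",
    "doc2": "Aspirin is contraindicated with this anticoagulant.",
}


def retrieve(query: str) -> str:
    """Pick the stored passage sharing the most words with the query."""
    q = set(query.lower().split())
    return max(documents.values(),
               key=lambda text: len(q & set(text.lower().split())))


def answer(query: str) -> str:
    context = retrieve(query)  # drawn from the living documentary memory
    # In a real system this prompt would be passed to an LLM whose weights
    # (the structural memory) remain fixed between training runs.
    return f"Context: {context}\nQuestion: {query}"


documents["doc3"] = "A new discharge guideline was added today."  # updated without retraining
print(answer("What changed in the hypertension protocol?"))
```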

5. Factual Error: “AI results are deterministic”

Incorrect.

  • The Reality: A deterministic model would be 100% reproducible.
  • The Nuance: LLMs introduce stochasticity: each answer can vary.
  • Even with a temperature setting of 0, low-level non-deterministic factors can remain (GPU floating-point behavior, parallel execution order).
  • Conclusion: AI is not deterministic, although it also does not possess free will.
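
The temperature point can be illustrated numerically. In the toy sketch below, with invented scores for three candidate tokens, the zero-temperature branch always returns the same token, while sampling at a positive temperature can vary from call to call; the hardware-level sources of non-determinism mentioned above are not modeled here.

```python
import numpy as np

logits = np.array([2.0, 1.5, 0.3])            # hypothetical scores for three candidate tokens
tokens = ["treatment", "therapy", "placebo"]  # hypothetical vocabulary


def sample(temperature: float, rng: np.random.Generator) -> str:
    if temperature == 0:
        return tokens[int(np.argmax(logits))]        # greedy: always the same token
    probs = np.exp(logits / temperature)
    probs /= probs.sum()                             # softmax with temperature
    return tokens[rng.choice(len(tokens), p=probs)]  # stochastic: may differ each call


rng = np.random.default_rng()
print([sample(0.0, rng) for _ in range(5)])  # five identical outputs
print([sample(1.0, rng) for _ in range(5)])  # outputs can differ
```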

6. Accurate Points of the Text (Which Are Correct)

To be fair, the article gets several things right:

  • AI has no consciousness, intentions, or will.
  • The limits of AI in religion are epistemologically clear: AI cannot affirm a miracle as an empirical fact.
  • In medicine, AI requires human supervision, which is absolutely correct.

7. Minor Error: “Asimov’s Laws of Robotics are useful for AI”

This is incorrect or, at the very least, highly debatable.

  • The Context: Asimov’s laws were written as narrative paradoxes, not as a real ethical guide.
  • The Reality: They are technically inapplicable and are not used in any serious AI system.
  • Current AI Ethics is based on:
    • Principles for Responsible AI
    • EU AI Act
    • NIST AI Risk Management Framework
    • UNESCO AI Ethics Recommendations
  • Conclusion: Asimov’s laws are a literary device, not a real operational ethical guide.

🛑 Summary of Main Errors in the Text

  • Asserts that AI does not distinguish facts vs. opinions.
  • Claims that it treats text without epistemological hierarchies.
  • Describes AI memory as static (it is not).
  • States that AI results are deterministic.
  • Presents AI epistemology as simple correlation (reductionism).
  • Suggests that Asimov’s laws are operationally useful (they are not).
  • Describes AI training in an outdated way (a GPT-2/3-era picture rather than GPT-4/5).