Vatican Issues Note on Artificial Intelligence

The Vatican has issued a Note on artificial intelligence (AI) which highlights the challenges and the opportunities in the development of AI in various fields, such as education, relationships, work, health, the economy, and warfare.

Vatican News is reporting on the note, entitled “Antiqua et Nova: Note on the relationship between artificial intelligence and human intelligence,” which is a reflection on the subject from the Dicastery for the Doctrine of the Faith and the Dicastery for Culture and Education.

The document, which contains 117 paragraphs, deals with both the risks and the promise of AI, whose development it encourages as “part of the collaboration of man and woman with God” [2], while warning about how this technology can be abused.

First and foremost, it seeks to distinguish AI from human intelligence. Paragraph 35 cites an observation by Pope Francis that “the very use of the word ‘intelligence’” in connection with AI “can prove misleading” and risks overlooking what is most precious in the human person. In light of this, AI should not be seen as an artificial form of human intelligence but as a product of it.

It warns that, “Like any product of human ingenuity, AI can also be directed toward positive or negative ends.” AI could introduce important innovations, “but it also risks aggravating situations of discrimination, poverty, ‘digital divide,’ and social inequalities.”

The Note specifically addresses the use of AI in various fields, some of which include:

WARFARE
The Note devotes a section to the use of AI in lethal weapons systems that are capable of “identifying and striking targets without human intervention,” which it calls a “cause for grave ethical concern.” These concerns are so great that Pope Francis called for a ban on their use because they pose “an ‘existential risk’ by having the potential to act in ways that could threaten the survival of entire regions or even of humanity itself.”

The use of AI in war has potential advantages: improved weapons targeting systems could decrease unnecessary collateral damage and civilian deaths, and robotic combatants created to replace human soldiers could spare countless lives.

However, as this article in the Army University Press explains, AI-equipped machines make decisions on their own, “so it is difficult to determine whether a flawed decision is due to flaws in the program or in the autonomous deliberations of the AI-equipped (so-called smart) machines.” Such a flawed decision could cost countless lives, which is why arguments about the “irreplaceability of human conscience and moral judgment” are important to address.

HUMAN RELATIONS
The Note points out the main danger of the use of AI in human relationships, saying that it can lead to “harmful isolation.”

For example, as this article in Psychology Today reports, platforms generating AI girlfriends are experiencing massive growth in popularity, with millions of users, mostly young single men, turning to these virtual companions to combat loneliness. Almost 20% of men who use traditional dating apps say they have had AI-generated romances at some point, and nearly half of those who use these platforms say they interact with their virtual partner on a daily basis.

The problem is that “AI girlfriends can perpetuate loneliness because they dissuade users from entering into real-life relationships, alienate them from others, and, in some cases, induce intense feelings of abandonment.”

Some scholars believe these virtual companions can threaten a person’s ability to connect and collaborate in other areas of life as well.

HEALTHCARE
The Note states: “Responsibility for the well-being of patients and the decisions that touch upon their lives are at the heart of the healthcare profession. This accountability requires medical professionals to exercise all their skill and intelligence in making well-reasoned and ethically grounded choices regarding those entrusted to their care, always respecting the inviolable dignity of the patients and the need for informed consent. As a result, decisions regarding patient treatment and the weight of responsibility they entail must always remain with the human person and should never be delegated to AI.”

One of the most concerning issues with AI in healthcare is algorithmic bias, which can arise from biases in data collection methods and even from human biases introduced during algorithm development. AI systems are also vulnerable to cyberattacks that put patient privacy at risk, and misdiagnoses and mistaken decisions by AI can produce harmful patient outcomes.

In addition, as the Note explains, “if AI were to replace the doctor-patient relationship, it would risk worsening the loneliness that often accompanies illness.”

EDUCATION
“In this context, AI presents both opportunities and challenges,” the Note advises. “If used in a prudent manner, within the context of an existing teacher-student relationship and ordered to the authentic goals of education, AI can become a valuable educational resource by enhancing access to education, offering tailored support, and providing immediate feedback to students. These benefits could enhance the learning experience, especially in cases where individualized attention is needed, or educational resources are otherwise scarce.”

However, one of the problems with AI in education is that many programs “merely provide answers instead of prompting students to arrive at answers themselves or write text for themselves,” which can lead to a failure to develop critical thinking skills.

FAKE NEWS AND DEEPFAKES
“While AI has a latent potential to generate false information, an even more troubling problem lies in the deliberate misuse of AI for manipulation,” the Note warns. “This can occur when individuals or organizations intentionally generate and spread false content with the aim to deceive or cause harm, such as ‘deepfake’ images, videos, and audio—referring to a false depiction of a person, edited or generated by an AI algorithm. The danger of deepfakes is particularly evident when they are used to target or harm others. While the images or videos themselves may be artificial, the damage they cause is real, leaving ‘deep scars in the hearts of those who suffer it’ and ‘real wounds in their human dignity.’”

Deepfakes, fabricated videos or photographs that superimpose a new face or new audio onto a person to make it appear that they said or did something they did not, were prevalent during the recent presidential election. Examples include a video of former President Joe Biden urging people to stay home during the primaries and fabricated images of Taylor Swift endorsing President Trump.

Other deepfakes can cause serious problems, such as the fake image of an explosion near the Pentagon that circulated on social media in 2023 and briefly rattled the stock market. They can also be used to harass or blackmail people and to steal sensitive information in increasingly elaborate and convincing scams.

The proper use of AI continues to be one of the greatest challenges of our time.

The Note concludes: “The vast expanse of the world’s knowledge is now accessible in ways that would have filled past generations with awe. However, to ensure that advancements in knowledge do not become humanly or spiritually barren, one must go beyond the mere accumulation of data and strive to achieve true wisdom.”

© All Rights Reserved, Living His Life Abundantly®/Women of Grace®  http://www.womenofgrace.com
