Artificial intelligence and peace

Message from Pope Francis for the celebration of the 57th World Day of Peace (January 1, 2024)

At the beginning of the new year, a time of grace that the Lord gives to each of us, I would like to address the People of God, the nations, Heads of State and Government, representatives of the different religions and of civil society, and all the men and women of our time, to express my best wishes for peace.

1. The progress of science and technology as a path to peace
Holy Scripture attests that God gave men his Spirit so that they might have “skill, talent, and experience in the execution of all kinds of work” (Ex 35:31). Intelligence is an expression of the dignity the Creator has given us by making us in his image and likeness (cf. Gen 1:26), capable of responding to his love through freedom and knowledge. Science and technology manifest in a particular way this fundamentally relational quality of human intelligence; both are extraordinary products of its creative potential.

In the Pastoral Constitution Gaudium et Spes, the Second Vatican Council insisted on this truth, declaring that “man has always striven with his work and his ingenuity to perfect his life.” [1] When human beings, “with the help of technical resources,” strive so that the earth “becomes a worthy dwelling place for the entire human family,” [2] they act according to God's plan and cooperate with his will to complete creation and spread peace among peoples. Likewise, the progress of science and technology, insofar as it contributes to a better ordering of human society and to greater freedom and fraternal communion, leads to the perfection of man and the transformation of the world.

We rightly rejoice in and are grateful for the extraordinary achievements of science and technology, thanks to which it has been possible to remedy countless evils that afflicted human life and caused great suffering. At the same time, technical-scientific progress, by making it possible to exercise a hitherto unseen degree of control over reality, is placing a vast range of possibilities in human hands, some of which pose a risk to human survival and a danger to our common home. [3]

The notable progress of new information technologies, especially in the digital sphere, thus presents exciting opportunities and serious risks, with grave implications for the pursuit of justice and harmony among peoples. Some urgent questions therefore need to be asked. What will be the medium- and long-term consequences of new digital technologies? And what impact will they have on the lives of individuals and of society, on international stability and on peace?

2. The future of artificial intelligence between promises and risks
The progress of computing and the development of digital technologies in recent decades have already begun to produce profound transformations in global society and its dynamics. New digital instruments are changing the face of communications, public administration, education, consumption, personal interactions and countless other aspects of daily life.

Furthermore, technologies employing a multitude of algorithms can extract, from the digital traces left on the Internet, data that allow the mental and relational habits of persons to be controlled for commercial or political purposes, often without their knowledge, limiting their conscious exercise of freedom of choice. Indeed, in a space like the web, characterized by information overload, the flow of data can be structured according to selection criteria that are not always perceived by the user.

We must remember that scientific research and technological innovations are not divorced from reality or “neutral,” [4] but are subject to cultural influences. As fully human activities, the directions they take reflect choices conditioned by the personal, social and cultural values of each age. The same is true of the results they produce. These, precisely as fruits of specifically human ways of approaching the surrounding world, always have an ethical dimension, closely linked to the decisions of those who design the experimentation and direct production toward particular objectives.

This also applies to the forms of artificial intelligence, for which, to date, there is no single definition in the world of science and technology. The term itself, which has already entered common parlance, encompasses a variety of sciences, theories and techniques aimed at making machines reproduce or imitate, in their operation, the cognitive abilities of human beings. Speaking in the plural of “forms of intelligence” can help to highlight, above all, the unbridgeable gap between these systems and the human person, however surprising and powerful they may be. They are, ultimately, “fragmentary,” in the sense that they can only imitate or reproduce certain functions of human intelligence. The use of the plural also makes clear that these devices, very different from one another, must always be regarded as “socio-technical systems.” Indeed, their impact, whatever the underlying technology, depends not only on their design, but also on the aims and interests of those who own and develop them, as well as on the situations in which they are used.

Artificial intelligence, therefore, must be understood as a galaxy of different realities and we cannot assume a priori that its development will make a beneficial contribution to the future of humanity and peace between peoples. Such a positive outcome will only be possible if we are able to act responsibly and respect fundamental human values such as “inclusion, transparency, security, fairness, privacy and responsibility”. [5]

Nor is it enough to presume, on the part of those who design algorithms and digital technologies, a commitment to act ethically and responsibly. Bodies must be strengthened or, where necessary, established to examine the emerging ethical issues and to protect the rights of those who use forms of artificial intelligence or are affected by them. [6]

The immense expansion of technology must therefore be accompanied by adequate formation in responsibility for its development. Freedom and peaceful coexistence are threatened when human beings yield to the temptation of selfishness, self-interest, the desire for profit and the thirst for power. We therefore have a duty to broaden our vision and to direct technical-scientific research toward the pursuit of peace and the common good, at the service of the integral development of individuals and communities. [7]

The intrinsic dignity of each person and the fraternity that binds us as members of a single human family must be at the basis of the development of new technologies and serve as indisputable criteria for evaluating them before their use, so that digital progress can be carried out with respect for justice and contribute to the cause of peace. Technological developments that do not improve the quality of life of all humanity, but on the contrary aggravate inequalities and conflicts, can never be considered true progress. [8]

Artificial intelligence will become increasingly important. The challenges it poses are not only technical, but also anthropological, educational, social and political. It promises, for example, savings in effort, more efficient production, more agile transportation and more dynamic markets, as well as a revolution in the processes of data collection, organization and verification. It is necessary to be aware of the rapid transformations that are occurring and manage them so that fundamental human rights can be safeguarded, respecting the institutions and laws that promote integral human development. Artificial intelligence should be in service of greater human potential and our highest aspirations, not in competition with them.

3. The technology of the future: machines that learn on their own
In its multiple forms, artificial intelligence, based on machine learning techniques, although still in a pioneering phase, is already introducing notable changes in the fabric of societies, exerting a profound influence on cultures, social behaviors and peacebuilding.

Developments such as machine learning or deep learning raise questions that transcend the fields of technology and engineering, and touch on an understanding closely connected with the meaning of human life, the basic processes of knowledge, and the capacity of the mind to attain truth.

The ability of some devices to produce syntactically and semantically coherent texts, for example, is no guarantee of reliability. They are said to “hallucinate,” that is, to generate statements that at first glance appear plausible but are in fact unfounded or betray biases. This becomes a serious problem when artificial intelligence is deployed in disinformation campaigns that spread false news and lead to a growing distrust of the media. Confidentiality, data ownership and intellectual property are other areas in which these technologies pose grave risks, to which are added further negative consequences of their misuse, such as discrimination, interference in electoral processes, the rise of a society that monitors and controls people, digital exclusion, and the intensification of an individualism increasingly disconnected from community. All these factors risk fueling conflict and hindering peace.

4. The sense of the limit in the technocratic paradigm
Our world is too vast, varied and complex ever to be fully known and catalogued. The human mind will never exhaust its richness, even with the help of the most advanced algorithms. These, in fact, do not offer guaranteed predictions of the future, but only statistical approximations. Not everything can be predicted, not everything can be calculated; in the end, “reality is superior to the idea” [9] and, however prodigious our computing power may be, there will always be an inaccessible residue that eludes any attempt at quantification.

Furthermore, the large amount of data analyzed by artificial intelligence is not in itself a guarantee of impartiality. When algorithms extrapolate information, they always run the risk of distorting it, reproducing the injustices and prejudices of the environments in which they originate. The faster and more complex they become, the more difficult it is to understand why they have generated a certain result.

Intelligent machines can perform the tasks assigned to them with increasing efficiency, but the purpose and meaning of their operations will continue to be determined or enabled by human beings who have their own universe of values. The risk is that the criteria underlying certain decisions become less transparent, that decision-making responsibility is hidden, and that producers may evade the obligation to act for the good of the community. In a certain sense, this is favored by the technocratic system, which allies the economy with technology and privileges the criterion of efficiency, tending to ignore everything that is not linked to its immediate interests. [10]

This should prompt us to reflect on the “sense of limit,” an aspect often overlooked in the current technocratic and efficiency-driven mentality, yet decisive for personal and social development. The human being, mortal by definition, in seeking to surpass every limit through technology, risks, in the obsession with wanting to control everything, losing control of himself; and, in the search for absolute freedom, falling into the spiral of a technological dictatorship. Recognizing and accepting one's own limits as a creature is for man an indispensable condition for attaining, or better, welcoming fullness as a gift. In the ideological context of a technocratic paradigm, animated by a Promethean presumption of self-sufficiency, inequalities could grow disproportionately, and knowledge and wealth accumulate in the hands of a few, with grave risks for democratic societies and peaceful coexistence. [11]

5. Hot topics for ethics
In the future, the reliability of a loan applicant, the suitability of an individual for a job, the likelihood of recidivism of a convicted person, or the right to receive political asylum or social assistance could be determined by artificial intelligence systems. The lack of diversified levels of mediation that these systems introduce makes them particularly exposed to forms of bias and discrimination. Systemic errors can easily multiply, producing not only injustices in individual cases but, through a domino effect, genuine forms of social inequality.

Furthermore, forms of artificial intelligence often appear capable of influencing individuals' decisions through predetermined options associated with stimuli and persuasion, or through systems that regulate personal choices by organizing information. These forms of manipulation or social control require careful attention and oversight, and imply clear legal responsibility on the part of their producers, their users, and government authorities.

Reliance on automatic processes that classify individuals, for example through the widespread use of surveillance or the adoption of social credit systems, could likewise have profound repercussions on the social fabric, establishing improper rankings among citizens. And these artificial classification processes could even lead to power conflicts, affecting not only virtual subjects but real people. Fundamental respect for human dignity demands that we refuse to identify the uniqueness of the person with a set of data. We must not allow algorithms to determine how we understand human rights, to set aside the essential values of compassion, mercy and forgiveness, or to eliminate the possibility that an individual may change and leave the past behind.

In this context, we cannot fail to consider the impact of new technologies on the workplace. Jobs that were once the exclusive province of human labor are rapidly being absorbed by industrial applications of artificial intelligence. Here too there is a substantial risk of disproportionate benefit for a few at the price of impoverishment for many. Respect for the dignity of workers and the importance of employment for the economic well-being of individuals, families and societies, together with job security and fair wages, should be a high priority for the international community as these forms of technology penetrate more deeply into the workplace.

6. Shall we beat swords into plowshares?
These days, looking at the world around us, we cannot escape the serious ethical questions raised by the armaments sector. The ability to conduct military operations through remote control systems has lessened the perception of the devastation they cause and of the responsibility for their use, contributing to an even colder and more detached approach to the immense tragedy of war. Research into emerging technologies in the field of so-called “lethal autonomous weapons systems,” including the use of artificial intelligence in warfare, is a major source of ethical concern. Autonomous weapons systems can never be morally responsible subjects. The uniquely human capacity for moral judgment and ethical decision-making is more than a complex set of algorithms, and that capacity cannot be reduced to the programming of a machine which, however “intelligent,” remains a machine. For this reason, it is imperative to ensure adequate, meaningful and consistent human oversight of weapons systems.

Nor can we ignore the possibility that sophisticated weapons could end up in the wrong hands, facilitating, for example, terrorist attacks or actions aimed at destabilizing legitimate governmental institutions. In short, the last thing the world needs is for new technologies to contribute to the unjust development of the arms market and trade, fomenting the madness of war. Were that to happen, not only intelligence but the very heart of man would risk becoming ever more “artificial.” The most advanced technical applications should not be used to facilitate the violent resolution of conflicts, but to pave the way to peace.

From a more positive perspective, if artificial intelligence were used to promote integral human development, it could introduce important innovations in agriculture, education and culture, improve the standard of living of entire nations and peoples, and foster the growth of human fraternity and social friendship. Ultimately, the way we use it to include the least of our brothers and sisters, the weakest and most needy, is the measure that reveals our humanity.

A human perspective and the desire for a better future for our world point to the need for an interdisciplinary dialogue aimed at the ethical development of algorithms (“algorithmics”), in which values guide the paths of new technologies. [12] Ethical concerns should be taken into account from the very beginning of research, and continue through the phases of experimentation, design, distribution and commercialization. This is the approach of ethics of design, in which educational institutions and those responsible for decision-making have an essential role to play.

7. Challenges for education
The development of technology that respects and serves human dignity has clear implications for educational institutions and the world of culture. By multiplying the possibilities of communication, digital technologies have allowed us to meet in new ways. Yet there remains a need for sustained reflection on the kinds of relationships toward which they are steering us. Young people are growing up in cultural environments pervaded by technology, and this cannot but challenge our methods of teaching and formation.

Education in the use of forms of artificial intelligence should primarily focus on promoting critical thinking. It is necessary for users of all ages, but especially young people, to develop a capacity for discernment in the use of data and content obtained on the web or produced by artificial intelligence systems. Schools, universities and scientific societies are called to help students and professionals to make the social and ethical aspects of the development and use of technology their own.

Training in the use of new communication tools should take into account not only misinformation and fake news, but also the disturbing resurgence of “ancestral fears that [...] have been able to hide and grow stronger behind new technologies.” [13] Regrettably, we once again find ourselves having to combat “the temptation to create a culture of walls, to build walls to prevent encounters with other cultures, with other peoples” [14] and with it the development of peaceful and fraternal coexistence.

8. Challenges for the development of international law
The global reach of artificial intelligence makes it evident that, along with the responsibility of sovereign states to internally discipline its use, international organizations can play a decisive role in achieving multilateral agreements and coordinating their application and action. [15] To this end, I urge the community of nations to work together to adopt a binding international treaty that regulates the development and use of artificial intelligence in its many forms. Naturally, the aim of regulation should not only be to prevent bad practice, but also to encourage best practice, stimulating new and creative approaches and facilitating personal and collective initiatives. [16]

In short, in the search for regulatory models that can provide ethical guidance to those who develop digital technologies, it is essential to identify the human values that should underpin societies' commitment to formulate, adopt and apply the necessary legislative frameworks. The work of drafting ethical guidelines for the production of forms of artificial intelligence cannot ignore deeper questions related to the meaning of human existence, the protection of fundamental human rights, and the pursuit of justice and peace. This process of ethical and legal discernment can prove a valuable opportunity for shared reflection on the role technology should play in our personal and communal lives, and on how its use might contribute to the creation of a more just and humane world. For this reason, discussions on the regulation of artificial intelligence should take into account the voice of all stakeholders, including the poor, the marginalized and others who often go unheard in global decision-making processes.

* * * * *

It is my hope that this reflection will encourage efforts to ensure that progress in the development of forms of artificial intelligence ultimately serves the cause of human fraternity and peace. This is not the responsibility of a few, but of the entire human family. Peace, in fact, is the fruit of relationships that recognize and welcome the other in their inalienable dignity, and of cooperation and commitment in seeking the integral development of all persons and all peoples.

My prayer at the start of the new year is that the rapid development of forms of artificial intelligence will not increase the many inequalities and injustices already present in the world, but will help put an end to wars and conflicts and alleviate the many forms of suffering that afflict the human family. May Christians, believers of other religions, and men and women of good will collaborate in harmony to seize the opportunities and confront the challenges posed by the digital revolution, and thus hand on to future generations a world of greater solidarity, justice and peace.

Vatican, December 8, 2023

[1] No. 33.
[2] Ibid., no. 57.
[3] Cf. Letter enc. Laudato si' (May 24, 2015), 104.
[4] Cf. ibid., 114.
[5] Speech to the participants in the “Minerva Dialogues” meeting (March 27, 2023).
[6] Cf. ibid.
[7] Cf. Message to the Executive President of the “World Economic Forum” in Davos-Klosters (January 12, 2018).
[8] Cf. Letter enc. Laudato si', 194; Speech to participants in a Seminar on “The common good in the digital age” (September 27, 2019).
[9] Exhort. ap. Evangelii gaudium (November 24, 2013), 233.
[10] Cf. Letter enc. Laudato si', 54.
[11] Cf. Address to the participants in the Plenary of the Pontifical Academy for Life (February 28, 2020).
[12] Cf. ibid.
[13] Letter enc. Fratelli tutti (October 3, 2020), 27.
[14] Cf. ibid.
[15] Cf. ibid., 170-175.
[16] Cf. Letter enc. Laudato si' , 177.