When science fiction author Isaac Asimov wrote his short story Runaround in 1941, he devised the Three Laws of Robotics, laws that have since been quoted and repeated by fans and adopted or revised in other forms of speculative fiction.
The Three Laws are: (1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law; (3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
In other words, in Asimov’s world, a robot is essentially barred from doing anything that might harm a human being, allaying fears that these machines could ever be weaponized.
Other pop culture depictions, however, are not as benign as Asimov’s. In fact, Artificial Intelligence in film runs the gamut from the benign (Bicentennial Man, whose robot protagonist develops feelings and becomes self-aware) to the downright hostile (Skynet of the Terminator franchise).
But is a Skynet scenario really possible? Are machines really capable of interfacing with each other, plotting a literal hostile takeover, and wiping humans off the face of the earth?
These are among the questions being raised at the upcoming confab on Artificial Intelligence organized by the Colegio San Agustin-Bacolod, along with Lifebank Foundation, De La Salle – College of St. Benilde, and the Strategic Research and Development Center, Inc.
The International Conference on Artificial Intelligence carries the theme Discerning Societal Impacts of Artificial Intelligence, and aims to explore the various ways that widespread use of A.I. could affect people. Live sessions of the conference will run from 12 to 16 April 2021, while asynchronous sessions will follow from 19 to 23 April 2021. Registration is P700 for local participants and $25 for international participants.
WHAT IS ARTIFICIAL INTELLIGENCE?
“Artificial Intelligence is the capacity of the [machine] to mimic the human brain,” Project Manager Paolo Hilado tells DNX.
There are several instances in other countries, he says, where computers are slowly replacing humans in repetitive tasks.
“Are we prepared for that?” he asks.
These are the same questions that the likes of astrophysicist Stephen Hawking and Tesla Motors founder Elon Musk have asked, focusing especially on the ethical use of A.I., which, they warned, could be weaponized.
Fr. Tito Soquiño echoes the sentiment: there is a possibility that A.I. could be weaponized, and it could also displace humans in the labor and industry sectors.
“When that happens, there will be loss of jobs, society will be affected economically, and there is going to be a breakdown of society, leading to poverty,” Soquiño explains, adding, “That’s where the Church is concerned. It becomes an ethical, even moral issue.”
Even the Vatican – through the Pontifical Academy for Life – has gotten involved, he says, and big companies like IBM have also joined in voicing their concerns, pushing for the ethical use of A.I.
The April conference aims to spread more awareness about these issues through a series of talks.
Speakers include global activist and 2003 Right Livelihood Awardee Nicanor Perlas; Dr. Tiago Dela Silva Lopes of the National Center for Child Health and Development; Dean Barron of the University of California – San Diego; Web Application Scientist Boyd Collins; and book author Sr. Ilia Delio, OSF.