By Dr. Paula Hidalgo-Sanchis
The short answer is that I couldn’t stop myself from doing it. The long answer is in this blog…
I have worked in the field of AI for the last eight years. I was responsible for managing the development, testing, and deployment of AI prototypes to advance the work of the United Nations and its partners to tackle issues such as poverty, climate change, and conflict around the world.
Every day I advocated for the ethical design and use of AI technology whether I was in the office or on the world stage.
A few months back, my son Alan asked me (as he does every other night) “Mommy, tell me a story.” As I told him the story of a child pirate who navigated the Caribbean Sea, something clicked inside me – why not share my thoughts and beliefs about AI and ethics by telling a story about it? The day after, I woke up before my alarm clock and wrote the first paragraphs of Teaching Machines How to Cry, in part to answer some of the questions below.
Where are we heading?
There has been a lot of discussion lately about the dangers of the ‘uncontrolled’ development of AI technology, and how to regulate it. While some policies and legal frameworks are already in place, how to implement them remains an open question.
But beyond the development and implementation of policies and legal frameworks, there is a bigger picture to consider: where are we heading with the light-speed development of AI technologies? I wrote Teaching Machines How to Cry to present this question to the reader.
AI technology is shaping our individual and societal behavior. Many of the daily choices we make, such as what song to listen to, what movie to watch, or what wine to drink, are based on an algorithmic proposal. The news feeds we receive on any of the digital devices we look at a thousand times per day show us a world that algorithms have curated for us. And now we can write stories without even imagining them.
Additionally, AI is accelerating the use of technology to ‘augment’ humans. This is a term used by transhumanists that means enhancing humans. From neural implants that apply deep brain stimulation (DBS) to treat movement disorders like Parkinson’s to the use of robotic prostheses, the incorporation of what I call ‘intelligent machine components’ into the human body is advancing. Soon, brain-computer interfaces might be used to monitor and act on the first symptoms of depression, and wearable AI devices might alert us to the risk of a heart attack.
At the same time, AI technology has been trained on human cognitive and motor skills and can recognize and simulate human emotions. Designed to look and act like humans, AI patient simulators are used to train medical staff. Humanoid robots work in customer service. We are shaping AI technologies to become more ‘human-like’ to increase their usability and acceptability.
Who decides?
We already live in symbiosis with AI technologies, and we are becoming more alike. But who makes the decisions on how much alike we become and how? Who decides what human characteristics machines are to learn? Who decides what machine components to embed in us?
Will the future be led by scientists who understand ‘aesthetic appreciation’ as a doorway to connect to the Divine, or scientists who are agnostic? Researchers who think that fate has an impact on human life, or those who think that only probability does? Experts who believe that the human mind is what defines us, or those who also consider spirituality? Leaders who dismiss psychic abilities as trickery, or those who regard them as valuable skills?
Depending on who decides, the definition of who we are, and of who we become, changes.
My invitation
My first novel, Teaching Machines How to Cry, is an invitation to you to ponder the questions: Where are we heading with the development of AI technologies? And who decides what happens next? It is also an invitation to enjoy a book about the wonders of machine learning, AI prototypes, humor, love, dreams, and inexplicable events.