You may have already seen the films "Terminator" or "I, Robot". Both are famous and depict the extermination of the human race by robots, and stories like these have made people afraid to trust Artificial Intelligence. That is why enthusiasts, developers, and companies recognize the importance of a code of ethics when building these systems.
Over the past few years, many researchers and activists have exposed the countless errors, biases, and misuses of the technology. One example is COMPAS, a model used in North American criminal cases to predict which defendants are most likely to reoffend, so that judges could base decisions on it. The system was shown to carry racial bias in its classifications: the color of a person's skin ended up being a factor in whether someone was considered a high-risk criminal.
Some of the results produced by the COMPAS software
Another curious case was that of two Stanford University researchers who trained an AI algorithm to guess people's sexual orientation from photographs. Such a tool could end up exposing people and leaving them open to homophobic harassment, and the validity of the data used to create the technology was never questioned.
Or imagine if police officers used something similar to investigate crimes: biased algorithms could make people from certain ethnic groups increasingly vulnerable to being blamed for crimes they did not commit.
Efforts to address these growing challenges often focus on the importance of the "Ethics + AI" pairing.
So how do we, as developers and creators, plan to build intelligent systems that achieve their goals and impact society in a healthy way?
How do machines learn?
There are three types of machine learning: supervised, unsupervised, and reinforcement. Since this post is mostly about chatbots, we will focus on supervised and unsupervised bots.
Supervised
We developers control what the bot says, authoring its responses instead of letting users teach it; in other words, we have greater power over what our bot will "learn" (a minimal sketch follows the pros and cons below).
Advantages: You know exactly how it will respond, and the bot cannot go wrong unless you train it with bad data.
Disadvantages: It is labor-intensive, and creating a convincing bot takes a lot of time.
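To make this concrete, here is a minimal sketch of a supervised bot in Python. The intents and responses here are hypothetical, and real systems use far richer intent matching, but the key property is the same: the bot can only ever say what the developer wrote.

```python
# A minimal sketch of a supervised bot: every response is authored by the
# developer, so users cannot change what the bot "knows".
RESPONSES = {
    "hello": "Hi! I'm a bot. How can I help you?",
    "hours": "We are open from 9am to 6pm, Monday to Friday.",
    "bye": "Goodbye! Thanks for chatting.",
}

def reply(user_message: str) -> str:
    # Extremely naive intent matching: look for a known keyword.
    text = user_message.lower()
    for intent, answer in RESPONSES.items():
        if intent in text:
            return answer
    return "Sorry, I don't know how to answer that yet."

print(reply("Hello there!"))  # -> "Hi! I'm a bot. How can I help you?"
```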
Unsupervised
The bot is taught by its users rather than by the developer (a naive sketch of this loop follows the pros and cons below).
Advantages: Users do the work of training and teaching the bot, and you don't have to worry about spending time updating it.
Disadvantages: Your bot will develop an inconsistent personality, and you may not be aware of what it is being taught. At worst, it turns into annoying, racist, sexist, homophobic software.
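To see why, here is a deliberately naive sketch of that loop (illustrative Python, not any real framework): whatever a user types goes straight into the knowledge base and can come back out in a later reply.

```python
# A naive sketch of an unsupervised bot: it stores whatever users say and
# parrots it back later, with no filtering at all.
import random

knowledge_base: list[str] = []

def chat(user_message: str) -> str:
    knowledge_base.append(user_message)  # the bot "learns" unfiltered input
    # Reply with something another user once said.
    return random.choice(knowledge_base)

chat("Hi, how are you?")
print(chat("Tell me something."))  # may echo anything any user ever typed
```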
Tay and unsupervised learning
Tay was a chatbot created by Microsoft in March 2016 that interacted with and learned from users' tweets on the social network. Without proper treatment of its learning base, it became homophobic, racist, and worse in less than 24 hours.
This made many developers and companies question 100% unsupervised learning. Semi-supervised learning emerged as an alternative, in which the data the bot will learn from is treated beforehand. In Tay's case, for example, a step could have been introduced to identify swear words and homophobic or racist terms and remove them from the knowledge base.
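That treatment step might look something like the sketch below. The BLACKLIST set here is a tiny hypothetical stand-in for a real curated word list:

```python
# A sketch of the semi-supervised treatment step: before a message enters the
# knowledge base, it is checked against a blacklist of offensive terms.
BLACKLIST = {"slur1", "slur2", "swearword"}  # hypothetical placeholder list

knowledge_base: list[str] = []

def learn(user_message: str) -> bool:
    # Normalize words by stripping punctuation and lowercasing.
    words = {w.strip(".,!?").lower() for w in user_message.split()}
    if words & BLACKLIST:
        return False  # reject: offensive content never reaches the base
    knowledge_base.append(user_message)
    return True
```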
Prevention
Many companies do not think about preventing abuse by chatbot users, for example: pranks, swearing, death threats, or even racism.
After the incident with Tay, Microsoft quickly took it offline and created a new chatbot, Zo, whose knowledge base is treated so that it avoids blacklisted terms.
Some open-source blacklist libraries are maintained and used by virtual conversational agents to block certain dialogues.
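In Python, one such library is better-profanity. A small usage sketch, assuming the package is installed (pip install better-profanity):

```python
# Using an open-source blacklist library instead of maintaining our own list.
from better_profanity import profanity

profanity.load_censor_words()  # load the library's default English word list

message = "some user message here"
if profanity.contains_profanity(message):
    message = profanity.censor(message)  # replaces flagged words with ****
```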
Transparency
Be upfront about the fact that you use bots as part of your product or service.
Users are more likely to trust a company that is transparent about its use of the technology. A bot is more likely to be trustworthy if users understand that the bot is working to meet their needs and is clear about its limitations.
Since designers can equip their bots with "personality" and natural language capabilities, it is important to convey to users that they are interacting with a bot, not another person. There are several design options, and this can be done in a way that does not detract from the user experience.
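One simple design option is to disclose the bot's nature in its very first message. A hypothetical sketch:

```python
# A sketch of a greeting that discloses up front that the user is talking to
# a bot, before any "personality" comes into play.
def greeting(user_name: str) -> str:
    return (
        f"Hi {user_name}! I'm a virtual assistant (a bot, not a person). "
        "I can answer common questions, and I'll hand you over to a human "
        "colleague whenever I reach my limits."
    )

print(greeting("Ana"))
```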
Reliability
The performance of AI-based systems can vary between development and deployment, and again as the bot is rolled out to new users and new contexts, so it is important to continually monitor reliability.
How do we do this? Be transparent about the bot's reliability by presenting system- or context-specific performance summaries. And always ask users for feedback on the interactions they have had; this helps us understand where our bot is going wrong and adjust it.
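A minimal sketch of what collecting that feedback could look like; the log file and field names are hypothetical stand-ins for a real database or analytics pipeline:

```python
# A sketch of recording per-interaction feedback so the team can later find
# where the bot goes wrong.
import datetime
import json

def collect_feedback(conversation_id: str, helpful: bool, comment: str = "") -> None:
    record = {
        "conversation": conversation_id,
        "helpful": helpful,
        "comment": comment,
        "when": datetime.datetime.utcnow().isoformat(),
    }
    # In a real system this would go to a database or analytics pipeline.
    with open("feedback.log", "a") as f:
        f.write(json.dumps(record) + "\n")

collect_feedback("conv-123", helpful=False, comment="Didn't understand me.")
```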
Privacy
Inform the user about what data will be collected and how it will be used. Don't forget to obtain user consent, and don't collect more personal data than necessary!
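A sketch of that principle in code: store nothing without consent, and drop any field beyond what the feature actually needs. The field names here are hypothetical.

```python
# Only store what the user consented to, and never more than necessary.
ALLOWED_FIELDS = {"name", "email"}  # the minimum this bot actually needs

def store_user_data(user_gave_consent: bool, data: dict) -> dict | None:
    if not user_gave_consent:
        return None  # no consent, nothing is stored
    # Keep only the allowed fields, discarding any extra personal data.
    return {k: v for k, v in data.items() if k in ALLOWED_FIELDS}

print(store_user_data(True, {"name": "Ana", "email": "a@x.com", "cpf": "..."}))
# -> {'name': 'Ana', 'email': 'a@x.com'}  (the extra field is dropped)
```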
And then a question arises: “We have an AI that is being used to prevent suicides, but to what extent can it interfere with human decisions?”
Equality
The possibility of AI-based systems perpetuating existing social biases or introducing new ones is one of the main issues the AI community has identified with the technology's rapid deployment.
Development teams must be committed to ensuring that their bots treat everyone fairly, and diversity within the team helps achieve this. By employing a diverse team focused on the design, development, and testing of the technology, the bot will have a better chance of behaving fairly.
Pay attention to the database being used to train the AI or chatbot, and check that it is not biased.
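One quick sanity check is to compare how often each group appears in the training data and how often each group receives a sensitive label. The column names and example rows below are hypothetical; large gaps between groups are a signal to investigate further.

```python
# A sketch of a simple per-group label-rate check on training data.
from collections import Counter

rows = [
    {"group": "A", "label": "high_risk"},
    {"group": "A", "label": "low_risk"},
    {"group": "B", "label": "high_risk"},
    {"group": "B", "label": "high_risk"},
]

totals = Counter(r["group"] for r in rows)
high_risk = Counter(r["group"] for r in rows if r["label"] == "high_risk")

for group in sorted(totals):
    rate = high_risk[group] / totals[group]
    print(f"group {group}: {rate:.0%} labeled high risk")
```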
“Development teams must be committed to ensuring their bots treat all people fairly.”
(Microsoft, 2018, Bot Guidelines)