Prof. Dr. Tom Lenaerts (contributor to Omina Technologies’ Justifai)
VUB
Talk at Belgium AI Week, March 18th
Organized by AI4Belgium
Most AI today is weak, pro-self AI: it aims to maximize a predefined objective in a rational manner.
To solve problems that are less rational, we need another kind of AI, one designed to be social.
Pro-social AI is AI that helps us become more pro-social; it could assist us in solving grand societal challenges.
Pro-social AI is contextual: the system has to align with our social norms, which differ from country to country.
Pro-Social AI is more irrational.
To become more pro-social, humans have to overcome the fear of being exploited.
When humans can customize how they delegate decisions to AI agents, this promotes pro-social and compensating behavior.
However, we need to increase humans' trust in the AI agents.
When humans have to strive for a collective good, they often fail to reach the group goal.
Five different kinds of behavior are observed:
Compensating: act more pro-socially to compensate for the group's pro-self behavior in the past.
Reciprocating: act more selfishly if the group also acted selfishly in the past.
Individualistic: act very selfishly, never taking one for the team (contribute 0 in the experiment).
Pro-social (altruist): always act in the most pro-social way (contribute 4 in the experiment).
Cooperative: always act reasonably pro-socially (contribute 2 in the experiment).
The VUB lab designed AI agents based on these five behaviors that overcome the fear of exploitation (social fear).
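The five behaviors above can be sketched as strategies in a simple collective-risk game. The sketch below is a hypothetical illustration, not the VUB lab's actual model: the contribution levels (0, 2, 4), the fair-share threshold, and the group target are assumptions chosen to match the contribution amounts mentioned in the talk.

```python
FAIR_SHARE = 2  # hypothetical per-round fair contribution

def altruist(history):
    return 4  # always maximally pro-social

def cooperative(history):
    return 2  # always contributes a fair share

def individualist(history):
    return 0  # never takes one for the team

def compensator(history):
    # Contributes more when the group under-contributed in the past.
    if not history:
        return 2
    avg = sum(history) / len(history)
    return 4 if avg < FAIR_SHARE else 2

def reciprocator(history):
    # Mirrors the group: acts selfishly if the group was selfish.
    if not history:
        return 2
    avg = sum(history) / len(history)
    return 0 if avg < FAIR_SHARE else 2

def play(strategies, rounds=10, target=None):
    """Run the game; return total contributions and whether the group goal is met."""
    n = len(strategies)
    if target is None:
        target = FAIR_SHARE * n * rounds  # group goal: everyone's fair share
    group_history = []  # per-round average contribution of the group
    total = 0
    for _ in range(rounds):
        bids = [s(group_history) for s in strategies]
        total += sum(bids)
        group_history.append(sum(bids) / n)
    return total, total >= target

total, reached = play([altruist, cooperative, individualist, compensator, reciprocator])
```

In this toy setup, the altruist's extra contributions offset the individualist's free-riding, so a mixed group can still reach the target, which is the compensating effect the talk describes.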