Belgian AI Week Key Takeaways


AI for Social Good: Pro-Social AI

  • Posted by: Martin Canter

From Pro-Self to Pro-Social AI

Prof. Dr. Tom Lenaerts (VUB; contributor to Omina Technologies’ Justifai)

Talk at Belgian AI Week, March 18th, organized by AI4Belgium

Pro-Self AI

Most AI today is weak, pro-self AI: it aims to maximize a predefined objective in a rational manner.

To solve problems that are less rational, we need another kind of AI: one that is designed to be social.


Pro-Social AI

Pro-social AI is AI that can help us become more pro-social; it could assist us in solving grand societal challenges.

Pro-social AI is contextual: the AI system has to align with our social norms, and social norms differ from country to country.

Pro-social AI is more “irrational” in this sense: pro-social choices do not simply maximize an individual’s predefined objective.

To become more pro-social, humans have to overcome the fear of being exploited.


Overall Conclusion

If humans can customize their delegation to AI agents, this promotes pro-social and compensating behavior.

However, we still need to increase humans’ trust in the AI agents.


Experiments Leading Up to the Conclusion

When humans have to strive for a social good, they tend not to reach the group goal.

Five different kinds of behavior are observed:

  • Compensating: act more pro-socially to compensate for the group’s pro-self behavior in the past.
  • Reciprocating: act more selfishly if the group also acted selfishly in the past.
  • Individualistic: act very selfishly and never take one for the team (give 0 in the experiment).
  • Altruistic (pro-social): always act in the most pro-social way (give 4 in the experiment).
  • Cooperative: always act reasonably pro-socially (give 2 in the experiment).

Based on these five behaviors, the VUB lab designed AI agents that overcome the fear of exploitation (social fear), as sketched below.
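To make these profiles concrete, here is a minimal Python sketch that encodes each one as a strategy function in a toy threshold public-goods (“collective-risk”) game. Only the contribution levels 0, 2, and 4 come from the talk; the group size, number of rounds, target, and the way strategies read the previous round are illustrative assumptions, not the published experimental setup.

```python
# Per-round contribution options mentioned in the talk: 0, 2, or 4.
# Each strategy maps the other members' average contribution in the
# previous round to this round's contribution (thresholds are assumed).

def individualistic(prev_avg):
    return 0  # never take one for the team

def altruistic(prev_avg):
    return 4  # always the most pro-social option

def cooperative(prev_avg):
    return 2  # always reasonably pro-social

def compensating(prev_avg):
    # give more when the group under-contributed in the past
    return 4 if prev_avg < 2 else 2

def reciprocating(prev_avg):
    # mirror the group: act selfishly if the group was selfish
    return 0 if prev_avg < 2 else 2

def play_game(strategies, rounds=10, target=60):
    """Play one threshold public-goods game; the group succeeds if the
    total contribution reaches `target` after `rounds` rounds."""
    prev = [2] * len(strategies)  # assume a neutral first impression
    total = 0
    for _ in range(rounds):
        moves = []
        for i, strategy in enumerate(strategies):
            others = [c for j, c in enumerate(prev) if j != i]
            moves.append(strategy(sum(others) / len(others)))
        prev = moves
        total += sum(moves)
    return total >= target, total

group = [compensating, reciprocating, individualistic,
         altruistic, cooperative, cooperative]
success, total = play_game(group)
print(f"total contributed: {total}, target reached: {success}")
```

Delegating to an AI agent then simply means committing to one of these functions for the whole game instead of deciding round by round.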

  • The chance of group success is higher when decisions are delegated to an AI agent (without human intervention) or when the delegation is customized (the human defines the agent’s strategy upfront).
  • Delegation to an AI agent increases pro-social behavior. Delegation removes the fear of betrayal because you commit to a strategy at the start and can no longer react to the others’ past behavior (see the sketch after this list).
  • Customization increases compensation: acting more pro-socially to make up for the group’s pro-self behavior in the past. Humans thus seem to believe that compensating for the pro-self behavior of others is a good way to deal with the problem.
  • Trust in AI: there are still trust issues. In the nudge experiment, humans thought the agents contributed less (even though they contributed about the same amount) and blamed the agents when the group failed.
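The delegation effect above can be illustrated with the same toy game: a responsive reciprocator turns selfish as soon as the group looks selfish, while a delegated agent committed to the compensating strategy keeps contributing through a bad round. The specific dynamics below are illustrative assumptions; only the qualitative point, that up-front commitment prevents the fear-driven collapse, comes from the talk.

```python
def reciprocating(prev_avg):
    # responsive player: turns selfish once the group looks selfish
    return 0 if prev_avg < 2 else 2

def compensating(prev_avg):
    # delegated agent: committed to giving more after under-contribution
    return 4 if prev_avg < 2 else 2

def free_rider(prev_avg):
    return 0  # one individualist in each group

def reaches_target(group, rounds=10, target=60):
    prev = [2] * len(group)  # neutral first impression
    total = 0
    for _ in range(rounds):
        moves = []
        for i, strategy in enumerate(group):
            others = [c for j, c in enumerate(prev) if j != i]
            moves.append(strategy(sum(others) / len(others)))
        prev = moves
        total += sum(moves)
    return total >= target

# One free-rider per group; the rest either respond or delegate.
humans = [reciprocating] * 5 + [free_rider]
agents = [compensating] * 5 + [free_rider]
print("responsive humans reach the target:  ", reaches_target(humans))  # False
print("delegated compensators reach target: ", reaches_target(agents))  # True
```

After one round with the free-rider, the responsive group’s contributions collapse to zero, while the committed compensators cannot be scared off and the group still reaches the target: the fear-of-betrayal mechanism in miniature.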