As the European Commission launched its regulatory framework proposal for artificial intelligence on April 21st, the conference on Artificial Intelligence and the Law (KU Leuven, May 5th, 2021) could not have been timed any better.
The conference was largely organized around the chapters of the book Artificial Intelligence and the Law, which was published in February 2021.
As Omina Technologies specializes in ethical and trustworthy AI, we present our key insights from the perspective of the ALTAI principles. We also discuss the speakers' views on how AI regulation affects innovation.
Note: unfortunately, we missed the presentations on Consumer Protection and AI and on Intellectual Property Rights and AI. Our analysis does not include these two talks.
-Regulation is needed for high-risk AI applications. According to the essentiality principle, we need to reflect on which challenges of AI can safely and in good conscience be left to ethics, and which need to be addressed by enforceable rules grounded in law (Peggy Valcke).
-Regulation does not stifle innovation. In the early days of the internet, we also heard that legislation would stifle innovation (Peggy Valcke).
-It is important to have legal certainty on how legal concepts and regulation are to be interpreted. In the absence of such clarity, regulation could hinder innovation (Stephanie Rossello).
-Sandboxes could provide a solution to the pacing problem (regulation typically lags behind technological innovation) and the Collingridge dilemma (Ellen Wauters). Applied to AI, the Collingridge dilemma means that the impact of AI cannot easily be predicted until AI is extensively developed and widely used; yet postponing regulation until those impacts are known makes control difficult, as AI will by then have become entrenched.
-Some speakers wonder whether an entirely new artificial intelligence regulation is needed, or whether existing regulation should be adjusted to cover artificial intelligence. Jeffrey Amankwah suggests modifying the Insurance Act and clarifying how the GDPR applies to AI. Jan De Bruyne, Elias Van Gool and Thomas Gils suggest clarifying current tort law and assessing whether minimal changes to it suffice to govern AI, or whether an entirely new liability framework is indeed needed, as in the European regulatory framework proposal on artificial intelligence. In other words: do we need artificial intelligence law, or should existing law cover AI?
-Should a worker have the right to refuse to work with a robot (Simon Taes)?
-If AI is defined as a legal entity signing a contract on behalf of a human, should these actions be attributed in full to the AI agent, or should there be human oversight for some contractual actions? (Alexander Appelmans, Maarten Herbosch, Benjamin Verheye)
-Safety could be at stake when robots/cobots are hacked and injure the worker (Simon Taes).
-Health risks: robotised workplaces might cause techno-stress (Simon Taes). A worker might feel stress because the robot never gets tired. The right to disconnect only addresses the technological-invasion part of techno-stress.
-If AI is defined as a product, a lack of product safety can be seen as a product defect and is subject to product liability. However, how should the expected safety of an AI system (e.g. an autopilot in a car) be defined? What is the reference point? We want to avoid a situation in which any AI system that is not the safest is liable by default (Jan De Bruyne, Elias Van Gool, Thomas Gils). We must also avoid the Nirvana fallacy, in which certain AI applications would never be launched because they are not perfect.
-Privacy is one of the five common themes in the 84 ethical guidelines analyzed in Nature Machine Intelligence (2019) (Peggy Valcke).
-A worker's privacy is challenged when humans work only with robots, as this deprives them of the right to establish relationships with other humans (Simon Taes).
-Data protection and smart home assistants (Carl Vander Maelen): smart home assistants pose challenges to art. 5 of the GDPR, as not all household members have given explicit consent to the recording and processing of personal data that is likely to be part of the captured conversations. In Carl's opinion, a DPIA is mandatory.
-Articles 46/1 to 46/3 of the Insurance Act restrict the use of health data from devices that collect lifestyle and health data for life insurance and health contracts (Jeffrey Amankwah).
-Article 11 of the regulatory framework proposal on artificial intelligence states data quality requirements (Stephanie Rossello).
-Data has to be of good quality and complete for AI in insurance (Jeffrey Amankwah). This is especially relevant when using third-party data. For usage-based car insurance, a mobile app is better than a black-box device for capturing the reliable and valid data needed to set the premium accurately (a minimal data-quality check is sketched after these bullets).
-Data bias should be avoided. Matthias Van Der Haegen points out the importance of avoiding bias as part of safeguarding procedural fairness.
-Data access: for Alexa, data is removed if a voice has not been recognized for three years. We agree with Carl Vander Maelen that this three-year period is far too long.
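To make the data-quality point concrete, here is a minimal sketch of a completeness and validity check, assuming a hypothetical insurance dataset with made-up column names (pandas is used purely for illustration; the speakers did not prescribe any tooling):

```python
# Hypothetical data-quality check for a usage-based insurance dataset.
# Column names and values are invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "policy_id": [1, 2, 3, 4],
    "mileage_km": [12000, None, 8000, 15000],
    "driver_age": [34, 51, None, 28],
})

# Completeness: fraction of missing values per column.
print(df.isna().mean())

# Validity: flag out-of-range values before they feed the pricing model.
invalid_age = df[(df["driver_age"] < 18) | (df["driver_age"] > 100)]
print(invalid_age)
```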
Multiple speakers indicated a need for AI that is:
-Explainable: explainable AI aims to clarify why a certain prediction was made. According to Rembrandt Devillé, we need explainable AI: deep learning provides good predictions, but it does not tell us why a prediction was made.
-Interpretable: interpretable AI aims to clarify the logic/inner workings of a model: how does the model derive its prediction from the input? According to Jeffrey Amankwah, insurance companies should strive to increase the interpretability of their AI, in particular for insurances that have a big impact on people's lives (e.g. health insurance). The distinction with explainability is illustrated in the sketch after this list.
-Transparent: companies should be transparent about whether decisions are based fully or partially on AI, and about which variables weighed into a decision. Insurance companies have to communicate their segmentation criteria (art. 43-45 Insurance Act) (Jeffrey Amankwah). Producers of smart home assistants should be more transparent about which triggers activate the assistant, as it is often activated unintentionally (Carl Vander Maelen). Matthias Van Der Haegen wonders whether certification might contribute more to perceived procedural fairness for end users than transparency does.
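As a minimal sketch of the interpretability/explainability distinction, assuming a toy dataset and made-up feature names: an interpretable model (here a logistic regression) exposes its global logic directly, while an explanation answers why one specific prediction was made; for a linear model, per-feature contributions can serve that role, whereas a deep learning model would need post-hoc explanation tools.

```python
# Interpretability vs. explainability on a toy, made-up dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["age", "mileage", "claims_history"]  # hypothetical names
X = rng.normal(size=(500, 3))
y = (X @ np.array([1.5, -2.0, 0.5]) + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Interpretability: the model's inner workings are directly readable.
for name, coef in zip(features, model.coef_[0]):
    print(f"global weight of {name}: {coef:+.2f}")

# Explainability: why was THIS prediction made? For a linear model,
# each feature's contribution to the decision score is coef * value.
x = X[0]
print("prediction for first case:", model.predict(x.reshape(1, -1))[0])
for name, contrib in zip(features, model.coef_[0] * x):
    print(f"contribution of {name}: {contrib:+.2f}")
```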
-You can avoid discrimination by analyzing your data well and testing whether you capture all the variance relevant to the task at hand (Rembrandt Devillé and Catherine Middag).
-Discrimination in insurance: segmentation is allowed on objective criteria that can be justified for the conduct of your business (art. 43-45 Insurance Act) (Jeffrey Amankwah). In Belgium, there is no specific anti-discrimination law for insurance and AI; instead, the General Anti-Discrimination Act, the Gender Equality Act and the Anti-Racism Act apply to AI in insurance.
-Fairness/justice is one of the five common themes found across the 84 ethical guidelines analyzed in Nature Machine Intelligence (2019) (Peggy Valcke).
-Procedural fairness is about whether the process leading to an outcome is fair; outcome fairness is about whether the outcome itself is fair. In court, procedural fairness is taken into account alongside outcome fairness (Matthias Van Der Haegen). Computer science mostly focuses on predicting the right outcome, no matter how, and thus on outcome fairness (a minimal outcome-fairness check is sketched after these bullets).
-Fairness in insurance: Jeffrey Amankwah wonders how fairness can be taken into account given the nature of the private insurance industry.
-We need to avoid intrusive apps and be careful with military applications (Rembrandt Devillé and Catherine Middag).
-We should stimulate privacy-preserving AI (Rembrandt Devillé and Catherine Middag).
-All stakeholders should be involved to stimulate fair AI (Rembrandt Devillé and Catherine Middag).
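Unlike procedural fairness, outcome fairness can be quantified directly on model decisions. Here is a minimal sketch of a demographic-parity check on made-up decisions; demographic parity is just one of several possible metrics, and the speakers did not endorse a specific metric or tooling:

```python
# Demographic-parity check on hypothetical, randomly generated decisions.
import numpy as np

rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=1000)  # protected attribute (0 or 1)
# Simulated model decisions with a built-in disparity between the groups.
approved = rng.random(1000) < np.where(group == 0, 0.60, 0.45)

rate_0 = approved[group == 0].mean()
rate_1 = approved[group == 1].mean()
print(f"approval rate, group 0: {rate_0:.2f}")
print(f"approval rate, group 1: {rate_1:.2f}")
print(f"demographic parity difference: {abs(rate_0 - rate_1):.2f}")
```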
-The accountability of AI is related to tort law and contract law.
-Tort law (Jan De Bruyne, Elias Van Gool, Thomas Gils) applies when AI causes harm or loss to someone. In general, fault-based liability and product liability are relevant for managing the liability of AI. For stand-alone AI software, fault-based liability is probably most relevant, as it is unclear whether stand-alone AI software is a product or a service. For AI that is embedded in a medium or device, product liability is more relevant.
-Contract law: what if an AI system signs a contract on behalf of a human? Contractual liabilities depend on whether AI is defined as a tool or as a legal entity.
When AI is seen as a tool (e.g. AI trading on behalf of a human, as in algorithmic trading), the validity of consent is questionable and the Belgian error regime cannot be applied straightforwardly.
When AI is defined as a legal entity, two questions arise. First, should its actions be attributed in full to the AI system, or should there be human oversight for some actions? Second, is the AI agent a direct or an indirect representative, and what is its mandate? Is the mandate imputable? There might be value in granting an AI system a peculium, similar to the peculium in Roman law.