Artificial Intelligence: ethical-legal principles

How ethical-legal guidelines on Artificial Intelligence differ from country to country

We have already discussed Artificial Intelligence systems, which face a promising future: the potential applications of these technologies know virtually no limits, spanning the strategic-commercial, medical-healthcare, economic-financial and legal sectors.

The possible obstacles to the spread of machine learning systems are ethical and legal rather than technological in nature.

"The main problem to be faced when approaching these issues is that, although created by man, these systems often evolve in such a complex way that not even man is able to fully understand them. Some very sophisticated models are really difficult to interpret, because it is almost impossible to scrutinise their internal behaviour and left their decisions, which, in many cases, should be accepted almost in a closed box" says lawyer Vittorio Colomba. 

While waiting for these shortcomings to be remedied, the ethical principles recognised as the foundation of the field (transparency, justice, non-maleficence, responsibility and privacy) have been interpreted by each country in its own way and translated into ethical-legal guidelines for the responsible use of AI that differ profoundly around the world, depending on the cultural context.

These positions sit along an axis whose extremes are regulation and innovation.

United States

The USA is shaped by the Silicon Valley model: "move fast, break things, apologise later". The government has very little regulatory impact on the development of technology and the drive for innovation.

China

"The Chinese approach is influenced by Confucian values and socialist ideology, so the goal of social harmony is at the centre of every innovation, to be achieved also through elements of moral control and surveillance by the government" explains Colomba.

Europe

The European Union's approach is based on regulation, respect for fundamental rights and democracy. "The development of AI systems must therefore comply with four ethical principles considered essential: prevention of harm, respect for human autonomy, fairness and explicability".

Some see this approach as a constraint on the development of AI; others, however, believe it will result in a competitive advantage: consumers will be more attracted to systems able to offer greater guarantees and protections.

In 2019 the European Commission, through its High-Level Expert Group on AI (AI HLEG), produced guidelines for the development of AI systems. Among the seven requirements set out, some deserve particular mention:

1. Privacy

The need to protect data throughout the technology's life cycle is recognised and addressed through a systematic evaluation mechanism that quantifies the impact of each model on data protection.

"The learning phase of machine learning models, as a rule, is based on the use of de-identified data, which therefore does not allow the direct identification of the individuals to whom they refer. However, it is possible that by associating that data with other information it may prove possible to re-identify the individuals concerned, with potential risks to their individual rights and freedoms".

2. Data governance

Adequate data governance is essential, including the implementation of protocols and procedures for accessing data and ensuring its quality and integrity.

This is even more true for AI systems that process "special" categories of data, such as health data, a challenge to be addressed through strict application of the "privacy by design" principle set out in the GDPR.
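
To make the idea of access and quality protocols more concrete, the following minimal Python sketch (the schema, field names and checksum scheme are assumptions chosen for illustration, not requirements of the GDPR or the guidelines) validates a record against an expected schema and verifies an integrity checksum before ingestion.

    import hashlib
    import json

    # Assumed schema: field name -> expected Python type (illustrative only).
    EXPECTED_SCHEMA = {"patient_id": str, "age": int, "blood_pressure": float}

    def checksum(record: dict) -> str:
        """Integrity fingerprint of a record (stable JSON serialisation + SHA-256)."""
        payload = json.dumps(record, sort_keys=True).encode("utf-8")
        return hashlib.sha256(payload).hexdigest()

    def validate(record: dict, expected_checksum: str) -> bool:
        """Quality check (schema) plus integrity check (checksum) before ingestion."""
        schema_ok = set(record) == set(EXPECTED_SCHEMA) and all(
            isinstance(record[k], t) for k, t in EXPECTED_SCHEMA.items()
        )
        return schema_ok and checksum(record) == expected_checksum

    record = {"patient_id": "anon-001", "age": 47, "blood_pressure": 128.0}
    fingerprint = checksum(record)        # stored when the record is first registered
    print(validate(record, fingerprint))  # True: record is well-formed and unchanged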

3. Transparency and interpretability

At all times, it should be possible to obtain a complete view of how an AI system reaches its conclusions. As mentioned above, however, most AI systems rely on very complex machine learning models, which often makes a detailed reconstruction of the decision-making process impracticable. For this reason, technological efforts are also focusing in this direction.
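
One direction such efforts take is to probe a trained model from the outside rather than reconstruct its internals. The sketch below is a generic example of that idea, not a method prescribed by the guidelines: it uses scikit-learn's permutation importance to estimate how strongly each input feature drives the predictions of an otherwise opaque model (the dataset and model are placeholders).

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Train a relatively opaque ensemble model on a toy dataset.
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

    # Estimate each feature's influence by shuffling it and measuring the drop in accuracy.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    for name, importance in zip(load_iris().feature_names, result.importances_mean):
        print(f"{name}: {importance:.3f}")

Such post-hoc explanations do not open the black box itself, but they give users and regulators a way to check which inputs a decision actually depends on.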
