We have put together eight practical steps that can help you build a culture of trusted artificial intelligence. These are:

Start at the top:

Company management is generally well aware of typical compliance and business-ethics risks, but it is still not well informed about how artificial intelligence is used in the company. Management therefore needs to be educated on the ethical guidelines for building trustworthy artificial intelligence issued by the European Commission earlier this year. In this way, it will be able to form a clear position on ethics and artificial intelligence and ensure that the company's use of artificial intelligence complies with relevant laws and regulations.
Perform a risk assessment:

Artificial intelligence is still an emerging technology, which means that its definition in regulations and standards remains unclear and its risks are difficult to determine. A risk assessment framework is needed to help identify the most significant risks and plan the key mitigation measures. On pages 31-39 of its guidelines, the High-Level Expert Group on Artificial Intelligence has compiled a list of questions to help companies assess the significant risks associated with artificial intelligence.
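As an illustration only, such a framework boils down to a questionnaire that flags areas requiring mitigation. The questions and area names in the sketch below are hypothetical placeholders, not the expert group's actual assessment list:

```python
# Minimal sketch of a questionnaire-based AI risk assessment.
# The checklist areas and questions are hypothetical examples,
# not the High-Level Expert Group's actual question list.

CHECKLIST = {
    "human_oversight": "Can a human override the system's decisions?",
    "bias_testing": "Has the training data been tested for bias?",
    "explainability": "Can individual decisions be explained to users?",
    "incident_process": "Is there a process for reporting AI incidents?",
}

def assess(answers: dict) -> list:
    """Return the checklist areas answered 'no' (or unanswered),
    i.e. the areas for which mitigation measures should be planned."""
    return [area for area in CHECKLIST if not answers.get(area, False)]

# Example: one 'no' answer and one missing answer are both flagged.
flagged = assess({"human_oversight": True, "bias_testing": False,
                  "explainability": True})
print(flagged)  # ['bias_testing', 'incident_process']
```

In practice each flagged area would feed into the risk register and mitigation plan rather than a simple list.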

Define roles and responsibilities:

To provide trusted artificial intelligence in the company, compliance managers should work with their colleagues in the IT department to define roles and responsibilities.

Establish a starting point:

Processes that provide trusted artificial intelligence should be integrated into the company's management system. Business policies and processes need to be adapted to reflect society's expectation that companies prevent the negative effects of artificial intelligence on human rights and address potential problems. A reliable artificial intelligence compliance and ethics program will have to include both an ethical and a technical part: the first relating to protection against discrimination, the second to ensuring consistent algorithms.

Promote awareness of trusted artificial intelligence:

All company stakeholders need to be made aware of trusted artificial intelligence so that they know the risks associated with it and the measures to reduce them. Workshops on ethics and values will be key to training in this area. The free online course "Elements of AI", developed by the University of Helsinki together with Reaktor, can also help.

Monitor and control:

Constantly monitor and control the entire trusted artificial intelligence program, as only in this way can existing systems improve. The High-Level Expert Group on Artificial Intelligence also calls on all stakeholders to put its assessment list for trusted artificial intelligence into practice and provide feedback on its feasibility, relevance, possible additions or shortcomings, on the basis of which the Commission will propose a revised version in early 2020.

Also include suppliers:

Suppliers are also involved in the development of artificial intelligence, so supplier auditing programs will need to be expanded to address potential adverse human rights impacts arising during that development.

Develop a culture of sharing opinions:

Allow all company stakeholders to raise concerns, whether through a dedicated grievance mechanism or through other communication channels, whenever circumstances are identified in which artificial intelligence could adversely affect human rights.

Sandra Marković

SOURCE: Lehocky, H. (2019). Ethics and AI: 8 steps to build trust in intelligent technology. Available via https://www.ericsson.com/en/blog/2019/10/8-principles-of-ethics-and-AI