Inspire. Engage. Connect.

25 - 27 November, 2019 | Twickenham Stadium, London, United Kingdom

The Ethical Guidelines Shaping the Future of Artificial Intelligence

By: Intelligent Automation Week

Earlier this year, the High-Level Expert Group on AI unveiled seven guidelines intended to aid the development of 'trustworthy' Artificial Intelligence. This came not long after the publication of an initial draft, on which the group held an open consultation to gather more practical advice. The expert group explains that all Artificial Intelligence tools developed in the future should be lawful, ethical and robust. At a time when many people fear the potential power of AI, these guidelines have been put in place to ensure the technology develops safely and dependably, aiming to put concerns surrounding Artificial Intelligence to bed and instead highlight its considerable benefits.

The AI HLEG's seven guidelines include:

1. Human Agency and Oversight

Artificial Intelligence and similar systems should be used as tools that support human capabilities, helping people make better-informed decisions. To achieve this, oversight measures need to be put in place by the owner of the system. According to the European Commission, this can be achieved through human-in-the-loop, human-on-the-loop and human-in-command approaches.

2. Technical Robustness and Safety

Artificial Intelligence systems of the future need to be robust as well as secure. Many people share concerns about the capabilities of AI, and to mitigate and prevent any harm, the systems that are developed need to be safe. This includes ensuring that there is a fallback plan in case of attacks or faults, as well as making systems 'accurate, reliable and reproducible.'

3. Privacy and Data Governance 

With the GDPR having come into force in recent years, all Artificial Intelligence tools should fully respect privacy and data protection laws. In addition, sufficient data governance mechanisms should be put in place to ensure accountability for the quality and integrity of the data. No data should be accessed illegitimately, and access to data within the system should comply with all data regulations.

4. Transparency

All data, systems and business models centred around Artificial Intelligence should be entirely transparent. The European Commission and the AI HLEG recommend that traceability mechanisms be put in place to help achieve this. For many, Artificial Intelligence can be a complex system to understand; therefore, all decisions should be explained to stakeholders in a transparent and easy-to-understand manner. Stakeholders also need to be made aware whenever they are interacting with an Artificial Intelligence system, and should be kept informed.

5. Diversity, Non-discrimination and Fairness

According to the AI HLEG, unfair bias can have multiple negative implications in Artificial Intelligence systems, ranging from the marginalisation of minority groups to the heightening of discrimination. Therefore, to build a trustworthy and reliable form of Artificial Intelligence, the technology should be accessible to all and should involve all investors and stakeholders throughout the whole development process.

6. Environmental and Societal Well-being

Given the capabilities of Artificial Intelligence, it should benefit not only us but also future generations. This means that all systems developed should be sustainable as well as environmentally friendly. Beyond the environment itself, developers should consider society as a whole and the impact the technology may have on it. This should be among the foremost considerations when creating AI systems.

7. Accountability

Systems need to be put in place to establish responsibility and accountability for Artificial Intelligence and for what may happen as a result of its development. According to the AI HLEG, auditing will play a key role in delivering this, especially when it comes to the development of critical systems. In addition, an adequate and accessible redress mechanism needs to be in place.

With these guidelines in place, many of the concerns surrounding Artificial Intelligence may be mitigated. The European Commission and the AI HLEG are paving the way for the development of AI tools that are reliable and trustworthy while still reaping the benefits of Artificial Intelligence capabilities. With the AI industry picking up more speed every day, it will be interesting to see where these guidelines take us and what systems will become available in the future. What do you think about the guidelines put in place by the High-Level Expert Group on AI? Let us know below.

Interested in learning more about the regulations behind Artificial Intelligence and other Intelligent Automation technologies? Intelligent Automation Week 2019 will provide numerous insights into government red tape and regulations, discussing real-life case studies of how organisations have worked and developed successfully alongside regulations. Discover more about Intelligent Automation Week 2019 here.


Source: The European Commission