Establishing trustworthiness is vital in our human-machine world

As long as human civilization has existed, people have needed to trust people in personal and business situations. Trust is at the heart of everything.

By Antoinette Price

Over the last century, automation has advanced in many industries. More recently, people have had to work with non-human entities, which increasingly use artificial intelligence (AI) technologies.

Patients undergoing robotic surgery want to be sure the machinery is trustworthy

In manufacturing plants, programmed robotic arms and humans work side by side. Transport relies on more and more automated systems: self-driving vehicles deploy advanced driver assistance systems, while modern airline autopilot and safety systems include manoeuvring characteristics augmentation systems. Both rely on algorithms that process data gathered from the many sensors around the vehicle or aircraft in order to ensure safe, efficient journeys. In healthcare, professionals from many disciplines use the analysis of big data mined by machine learning algorithms to help diagnose diseases.

“A key barrier to the adoption of artificial intelligence is concern about the trustworthiness of the system. Led by SC 42/WG 3, the projects that the committee is pursuing in this area not only try to identify and put a framework around these emerging issues, but also provide technical approaches to mitigating the concerns and link them to non-technical requirements such as ethical and societal challenges. This revolutionary approach that SC 42 is taking by looking at the full AI ecosystem will enable wide-scale adoption of AI and deliver on its promise as a ubiquitous technology enabling the digital transformation”, said Wael Diab, who leads the standardization work on AI through Subcommittee 42 of the IEC and ISO joint technical committee (ISO/IEC JTC 1) for information technology.

In these and many more situations, humans put their trust in machines, which is why it is imperative that nothing goes wrong. As new products and services evolve and incorporate AI technologies, their broad adoption will only succeed if people feel these technologies can be trusted. This means that if there is an issue, it will be possible to understand what happened, how it happened and how to avoid it in future.

e-tech caught up with Dr David Filip, Convenor of SC 42 Working Group 3, to learn more about the work on trustworthiness of AI.

What is trustworthiness and why is it so crucial?

In our standards work we have identified certain characteristics of trustworthiness, such as accountability, bias, controllability, explainability, privacy, robustness, resilience, safety and security. But for me, before all these aspects can be considered, it always comes back to transparency or transparent verifiability of AI systems’ behaviour and outcomes. Is an outcome of an AI system transparently verifiable or is the system a so-called black box, in other words, is it trustworthy or opaque? Is there someone who can assess it for vulnerabilities or unintended consequences? In order for it to be trustworthy, we need to be able to understand the algorithm’s internal workings.

Since machine learning is functionally defined and based on huge amounts of training data, the machines will only be as good as the data they have been fed. So in order to achieve trustworthiness, humans will still need to be part of the process, to vet the underlying AI algorithms and to make sure that the associated training data do not introduce unfair or otherwise unwanted bias.
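As a rough illustration of what vetting a training set might look like in practice (this is not part of any SC 42 deliverable, and the field names and threshold are made up), the short Python sketch below compares the rate of positive labels across groups in a hypothetical dataset, a simple proxy for one kind of unwanted bias that a human reviewer could then investigate.

```python
from collections import defaultdict

# Hypothetical training records: each row pairs a group attribute
# (e.g. an applicant's region) with the label the model will learn from.
training_rows = [
    {"group": "region_a", "label": 1},
    {"group": "region_a", "label": 1},
    {"group": "region_a", "label": 0},
    {"group": "region_b", "label": 0},
    {"group": "region_b", "label": 0},
    {"group": "region_b", "label": 1},
]

def positive_rate_by_group(rows):
    """Return the share of positive labels observed for each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for row in rows:
        totals[row["group"]] += 1
        positives[row["group"]] += row["label"]
    return {g: positives[g] / totals[g] for g in totals}

rates = positive_rate_by_group(training_rows)
print("Positive-label rate per group:", rates)

# A large gap between groups does not prove unfairness, but it is a
# signal that a human reviewer should look at how the data was collected.
gap = max(rates.values()) - min(rates.values())
if gap > 0.2:  # illustrative threshold, chosen arbitrarily for this sketch
    print(f"Label-rate gap of {gap:.2f} between groups - review the data source.")
```

A check like this catches only the most visible imbalances; it is the human review it triggers, not the number itself, that contributes to trustworthiness.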

How can standards help achieve transparency?

Standards are behind all the systems that make our civilization work as we know it. There are many examples, such as railways, a legacy system more than 100 years old, or HTML5, which defines the properties and behaviours of webpage content and without which you could not see all the things you see in your browser.

We are at an important time for writing the horizontal standards for innovative AI technologies that in five or ten years will be taken for granted by all of us. The pace at which standards come to be taken for granted has clearly increased. This is why we need to get it right now and make sure we consider as many angles as possible, including ethical and societal concerns.

Although standards are voluntary, they are used by policy makers and regulators. For example, many countries are working towards achieving the UN Sustainable Development Goals. Standards will need to ensure many aspects of trustworthy AI use, including privacy and security as well as the functional safety of the devices, systems and infrastructures in which AI technologies are embedded. We have a broad range of stakeholders, including academia, consumer protection bodies, industry and regulators, who are defining standards that will help regulators do the work they have been mandated to do. However, you cannot enforce something that does not have the right handles defined at the technical level.

Which standards are you working on to address these issues?

We are working on a number of deliverables at the moment, including:

  • Overview of trustworthiness in AI (ISO/IEC DTR 24028) provides a high-level overview of the SC 42 programme of work in the area of trustworthiness as it relates to the AI domain, including trustworthiness for AI systems and applications.
  • Robustness of neural networks (ISO/IEC WD TR 24029-1) offers background on existing methods for assessing the robustness properties of neural networks.
  • Bias in AI systems and AI aided decision-making (ISO/IEC WD TR 24027) describes measurement techniques and methods for assessing bias, with the aim of addressing and mitigating inadvertent bias-related vulnerabilities. It covers various phases of the AI system life cycle, for example data collection, training, continual learning, design, testing, evaluation, use and the retirement of a system.
  • Overview of ethical and societal concerns (ISO/IEC WD TR 24368) relative to AI systems and applications will look at principles, processes and methods in this area. This newly approved project, resulting from the third plenary in Dublin in April, is intended for technologists, regulators, interest groups and society at large. The effort will help link these non-technical requirements and challenges to the trustworthiness technology projects that address them.
  • We are also developing a key risk management standard for AI (ISO/IEC WD 23894), which builds on the ubiquitous and generic ISO 31000. We are working closely with ISO/TC 262, Risk management.

Finally, we anticipate starting work on Part 2 of the Robustness of neural networks series. This would eventually become an international standard and will consider formal methods for assessing the robustness of neural networks. It would be of great use to insurers of heavy machinery, such as ships or construction machines, which contain neural networks. It will help industry demonstrate that systems containing machine learning technology still work in a functional, predictable and explainable way, and that the robustness characteristics insurers must consider can be formally proven.
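To make the idea of formally assessing robustness more concrete, here is a minimal NumPy sketch of interval bound propagation, one well-known family of such formal methods (the forthcoming Part 2 document will define its own scope; the network weights and perturbation budget below are invented purely for illustration). It pushes a small box of perturbed inputs through a tiny ReLU network and checks whether the predicted class provably cannot change anywhere inside that box.

```python
import numpy as np

def interval_linear(lower, upper, W, b):
    """Propagate an input box [lower, upper] through y = W @ x + b."""
    centre = (lower + upper) / 2.0
    radius = (upper - lower) / 2.0
    out_centre = W @ centre + b
    out_radius = np.abs(W) @ radius
    return out_centre - out_radius, out_centre + out_radius

def interval_relu(lower, upper):
    """ReLU is monotone, so it maps the bounds elementwise."""
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

# A tiny two-layer network with made-up weights (illustrative only).
W1 = np.array([[1.0, -0.5], [0.3, 0.8]])
b1 = np.array([0.1, -0.2])
W2 = np.array([[0.7, -1.0], [-0.4, 0.9]])   # two output classes
b2 = np.array([0.0, 0.1])

x = np.array([0.6, 0.2])     # nominal input
eps = 0.05                   # assumed perturbation budget

# Bound every activation for all inputs within the eps-box around x.
l, u = x - eps, x + eps
l, u = interval_linear(l, u, W1, b1)
l, u = interval_relu(l, u)
l, u = interval_linear(l, u, W2, b2)

# Prediction on the unperturbed input.
predicted = int(np.argmax(W2 @ np.maximum(W1 @ x + b1, 0.0) + b2))
others = [c for c in range(len(l)) if c != predicted]

# The prediction is certified if the worst-case score of the predicted
# class still beats the best-case score of every other class.
certified = all(l[predicted] > u[c] for c in others)
print(f"Predicted class {predicted}, robust within eps={eps}: {certified}")
```

When the check succeeds, no input within the chosen perturbation budget can flip the classification, which is the kind of guarantee an insurer could rely on; when it fails, the bounds are simply inconclusive and tighter methods are needed.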

Find out more about the work of SC 42

Gallery
Dr David Filip, Convenor of the JTC 1/SC 42 Working Group 3 on Trustworthiness in AI
Humans who work with robots need to know they can be trusted and function safely and reliably
Patients undergoing robotic surgery want to be sure the machinery is trustworthy