The role of ethics in AI development, implementation and governance

Aurelie Jacquet chairs the Standards Australia committee (IT-043) that is representing Australia as ISO develops international standards on AI (ISO/IEC JTC1/SC42). She spoke to Jonathan Foda, Head of Science at Silverpond, to discuss why putting an ethical framework around AI systems matters, and how to do it successfully.

Jonathan: You have a unique perspective on AI. What is your background and how did you get involved with AI?

Aurelie: My background is as a lawyer, mostly practising commercial and international law. From 2007 to 2018, I looked after algorithmic trading and high-frequency traders. During that time, the GFC and other financial crashes occurred, and I could see a number of issues surrounding algorithmic trading.

Regulation for algorithmic traders only increased after the crashes. This regulatory backlash might have been avoided if the traders had had the opportunity to develop and promote recognised best practice and governance from the beginning. From that perspective, having an established best practice is extremely useful.

That period was the height of automated decision-making in markets, and I could see how the issues algorithmic traders faced in a mostly digitised and well-regulated environment could apply, and be magnified, when organisations start using algorithms to automate more complex decisions that impact our everyday lives.

This is why my work and the work of my committee are focused on developing internationally recognised standards for AI that represent best practice and help organisations implement AI responsibly.

Jonathan: I think that is an interesting perspective because in some sense, the challenge with algorithmic traders is one of scale, isn’t it? If we go back 100 years, people were trading at a much smaller scale, so the problems were more localised. There were probably checks and balances at a local level, but as traders started participating in global markets, I imagine things could get out of hand much more quickly.

Aurelie: I focus on the international piece because developing best practice that is recognised internationally helps you trade not just in your own market, but all around the world. Similarly with AI, if we have a best practice that is recognised in Australia and overseas, it really helps Australian organisations grow locally and internationally.

Jonathan: I imagine the concept of having a global impact is a broad trend in technology. I presume that experience in one area, such as the finance industry, would lend itself to other areas as they become more automated. Finance is probably a leading indicator because stock trading is already digital, as opposed to other industries that are still digitising their processes.

Aurelie: From a financial markets perspective, it is a very regulated, very controlled environment. The markets have strong regulation. What we saw after the crashes was a change in those regulations and international alignment.

With AI, what you see now is a fragmented approach around the world. There are multinational initiatives (the World Economic Forum, the UN, UNESCO, the OECD, GPAI), each developing its own framework. Then you have national and also industry-led initiatives (e.g. the Partnership on AI), so it is difficult for companies to make sense of it all: which framework to pick, which ethics to examine, and so on. From an industry perspective, making these decisions can be difficult.

Jonathan: Do you think your role and the role of Standards Australia (and similar organisations in other countries), is to help sift through all these different perspectives? Is one person saying: “We have done the analysis of these different approaches and we should prescribe this one for organisations to use?”

Aurelie: The great advantage of international standards organisations is that they have been around since the 1940s, and they have developed an unparalleled ability to achieve consensus on complex technical matters and deliver standards that are recognised internationally. In the case of the AI standards, experts from industry, government and academia are working together to define best practice for AI.

The proof is that on the international standards committee on AI (ISO/IEC JTC1/SC42), 47 countries (30 participating and 17 observing), including Australia, are involved in the development of 22 international standards for AI.

So the international standards on AI are really here to build a best practice that is interoperable and can be recognised across jurisdictions.

Jonathan: Do you think we will ever actually arrive at a single universal standard or is the international standard going to be regionalised so everyone has their own version?

Aurelie: My standards committee IT-043 mirrors the work of the international standards committee ISO/IEC JTC1/SC42. We participate actively in shaping the international standards, and once an international standard is published, we also have the ability to adopt it in Australia directly or with modifications.

Jonathan: I guess that would give the industry a baseline, then as companies participate in different regions, they could adjust as needed to suit them.

Aurelie: The published international standards provide the agreed baseline, and yes, local standards organisations can decide to adopt them directly or with modifications.

Aurelie Jacquet is an expert in the governance and responsible use of emerging technologies. She promotes the design and development of safe, legal and ethical AI products.
After gaining two Masters of Law degrees (France and Australia), Aurelie spent 18 years practising commercial and international law. She now works on leading Australian, European and global initiatives for the implementation of responsible AI.
She is a member of the NSW Government AI Advisory Committee; chairs the Standards Australia committee IT-043, representing Australia’s interests on the international standards committee on AI (ISO/IEC JTC1/SC42); and sits on the Organisational Governance of AI Working Group for the Institute of Electrical and Electronics Engineers (IEEE).
Aurelie is part of the editorial board of Springer’s new international journal on AI & Ethics, and she also founded the Ethics for Automated Decision-making industry group.