AstraZeneca has made public some of the guidelines it applies for responsible and ethical AI in its operations.
The British-Swedish pharmaceutical giant builds its AI governance on four foundations: an inventory that logs the AI technologies in use, a definition of what counts as an AI application, a governance framework with controls, and overarching AI standards and policies.
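To make the inventory foundation concrete, here is a minimal sketch of how such a log might be structured. This is an illustration only, not AstraZeneca’s actual tooling; every name, field, and value below is hypothetical.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskLevel(Enum):
    """Coarse risk tiers echoing the 'high' and 'very high' labels from the talk."""
    LOW = "low"
    HIGH = "high"
    VERY_HIGH = "very high"


@dataclass
class AIApplication:
    """A single inventory entry: what the system is, who owns it,
    how risky it is judged to be, and which controls apply to it."""
    name: str
    business_area: str
    owner: str
    risk_level: RiskLevel
    controls: list[str] = field(default_factory=list)


# A hypothetical entry logged against the four foundations.
inventory = [
    AIApplication(
        name="adverse-event-triage",
        business_area="Pharmacovigilance",
        owner="lead.scientist@example.com",
        risk_level=RiskLevel.HIGH,
        controls=["bias review", "human-in-the-loop sign-off"],
    )
]
```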
Asking questions on ethical AI
Wale Alimi, AstraZeneca’s Artificial Intelligence Governance Lead, said at the AI Summit 2022 during London Tech Week that the firm is still investigating and asking questions as it operates in a global market. According to him, there is no one-size-fits-all answer to these issues; building an ethical AI sphere will require international regulations to work in concert.
“We’re a global organization, so which regulations do we need to be cognizant of? And how do we ensure that what we’re rolling out would not conflict with those regulations? We’ve got a huge business in China, and China has rolled out its regulations. So, we need to check and confirm that things align with those. We expect European regulations to come through, so are they conflicting? And how do we manage that?” said Alimi.
Alimi’s talk came at a moment when data security, AI regulation, and the impact of related technologies, including autonomous vehicles and real-time facial recognition, are at the forefront in several nations and jurisdictions.
The EU AI Act, for instance, is an attempt to regulate the future of artificial intelligence. Concerned by the absence of comprehensive AI regulation, the European Union intends the Act as a landmark step that will shape how artificial intelligence develops, particularly in the context of personal data protection.
Meanwhile, Poppy Gustafsson, CEO of the AI cybersecurity company Darktrace, has suggested forming a “tech NATO” to guard cyber borders and combat growing cybersecurity threats. Proposals like these are important steps toward building a common foundation for ethical AI.
The UK government has announced that it intends to depart from certain European data protection standards. However, the country’s AI strategy also includes ethical AI development and implementation.
Meanwhile, in the United States, there is considerable debate over whether to follow the EU’s example and safeguard consumers against Big Tech’s data breaches, and whether to regulate technologies such as AI and real-time facial recognition at the federal level.
Is it more important to protect individuals and consumers, as Europe believes? Or is it better to encourage private-sector innovation, intervening with modest regulation only once harms appear? The US held the latter position for a long time. But after Cambridge Analytica and similar scandals, can businesses truly be trusted to act responsibly, and to contribute to ethical AI?
At both the state and federal levels, the United States is moving toward a more European approach to regulating Big Tech. The United Kingdom, meanwhile, has signaled its intention to tackle the ethical AI question on its own terms.
Each nation has different regulations
These cultural nuances and political squabbles pose immense difficulties for any multinational, forcing it to choose between a global solution that attempts to satisfy everyone and a local one. A firm can stand for something, maintaining higher standards than some countries require, or adopt a piecemeal, localized approach to governance and norms.
Alimi said AstraZeneca’s answer has been to create a global forum for discussion, allowing the organization to work towards the right answers together: “We have an active consultancy office where people come from across the globe to ask questions around ‘What should I be doing or not?’ The federated structure of the organization means deciding how much we can centralize, and how much we can standardize the things we are rolling out. Do we put in place guardrails and leave each business area to determine what they can put in place? Well, that’s the approach that we’ve taken, with some level of oversight from us as a data office.”
Third-party solutions pose challenges of their own. “When we procure AI solutions, or when we procure IT systems that have got some AI capability in there, are we expecting them to live up to our principles? And if so, how can we demonstrate that they’re doing so? And when we collaborate as an R&D organization – we do a lot of scientific collaboration – how are we ensuring that our third parties are living up to our principles? These are some of those challenges we have to go through. I wouldn’t say we are there yet. We are still dealing with them as time goes on,” said Alimi.
What have Alimi and his team learnt from dealing with these problems both locally and across the world?
“We went down the route of implementing an AI global standard, but then on the back of that, deciding what policies and operating procedures to put in place locally. What we have done in AstraZeneca with all our high- and very-high-risk projects, is we expect at the point of deployment that the lead scientist or lead project manager will certify that they have lived up to our ethical AI certification,” explained Alimi.
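To illustrate the shape of such a gate, here is a minimal sketch of a deployment check that refuses high- and very-high-risk projects lacking a signed certification. It is an assumption-laden illustration, not AstraZeneca’s actual process; all names and risk labels are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Project:
    name: str
    risk_level: str               # e.g. "low", "high", "very high" (hypothetical labels)
    certified_by: Optional[str]   # who signed off on the ethical AI standard, if anyone


def release(project: Project) -> None:
    """Refuse to deploy high- or very-high-risk projects that lack a
    signed ethical AI certification."""
    if project.risk_level in ("high", "very high") and project.certified_by is None:
        raise PermissionError(
            f"{project.name}: ethical AI certification required before deployment"
        )
    print(f"{project.name} cleared for deployment")


# A certified high-risk project passes the gate; the same project with
# certified_by=None would raise PermissionError instead of deploying.
release(Project("trial-outcome-model", "high", certified_by="lead scientist"))
```

The point of a gate like this is that certification becomes a blocking step at deployment time rather than an afterthought, which matches the process Alimi describes.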
Synthesis
Although it might be seen as a little gloomy, the talk was an informative and fascinating presentation, honest about the real-world difficulty of governing a fast-emerging technology that may, at some point, make decisions about human lives. Alimi deserves praise for being upfront about the local and international complexities that businesses, especially multinationals, confront. Ethical AI looks like a long and difficult road rather than the quick stop marketers promise.
Sector professionals continue to debate the artificial intelligence sphere; before creating an ethical AI framework, it is important to understand how artificial intelligence compares with human intelligence.