Electronic Specifier Insights

NXP launches AI Ethics initiative

Episode Summary

NXP Semiconductors has publicly launched its AI Ethics initiative, underscoring the company's commitment to the ethical development of AI components and systems at the 'edge' of computer networks, where people work and live.

Episode Transcription

Electronic Specifier: Hello and welcome to the latest podcast from Electronic Specifier Insights. Today we'll be talking to Dr. Sven Bull, who is head of government affairs for NXP Germany and chairman of the global government affairs board of NXP Semiconductors. Sven, we'll be discussing the recent announcement that NXP Semiconductors has publicly launched its AI Ethics initiative, which underlines the company's commitment to the ethical development of AI components and systems. We will also be discussing the findings of NXP's new white paper, entitled 'The Morals of Algorithms', in which the company details its comprehensive framework for AI principles. So Sven, welcome, and to kick us off, perhaps you could provide a bit of background on NXP's experience in AI and machine learning?

 

We have a long history in security products and in secure communication. For example, here at the site in Hamburg we still develop, test, and partly manufacture the chips for the passports of more than 100 countries, and of course we have a long track record in smart cards for secure applications. So that's where we come from, and over the past decade we have been moving more and more into automotive, and, as we expanded our business into banking, more secure and trusted applications came into the portfolio. So trust is a matter of concern to many of our clients. And now, with the increasing number of applications, particularly on edge devices, that contain AI or machine learning elements, this matter of trust has also arrived at our customers. We have been approached by OEMs and other customers about trust in artificial intelligence, more recently with concrete questions on how to make sure that the systems we provide actually follow a moral code and comply with the values that we have, or that our customers have. The initiative was triggered directly by our CEO, who is himself a very keen follower of these matters of trust and ethical principles in the fields of business where we are active. He founded a task force to look into how we are connected to this and how we, as a semiconductor company, could contribute to developing a framework that we would actually commit to, and one that we would also ask our customers to commit to. And that is how the whole idea started.

 

I see, sure. So is the demand for ethical AI really being driven at the customer end of things?

 

There are two aspects to it. Yes, of course: particularly in automotive applications, we see a number of reputation-critical issues going through the media, where AI that failed to comply with ethical values harms not only the reputation of businesses but also, of course, the interests of users and customers. No business actually wants that, so it is a concern of the businesses, yes. But that is not the whole picture, because we also see an increasing interest in the matter from regulators and policymakers. For example, the GDPR regulation in Europe, when it comes to healthcare and medical applications, actually integrated some requirements for ethical behaviour when AI systems are deployed. So there is also a policy and regulation component to it.

 

I see. Okay. And within the white paper, it mentions instances where AI has gone wrong. Are we likely to see increased instances of that as the proliferation of AI increases?

 

Well, I think the more AI systems are deployed, and the more public awareness of questions around the ethical behaviour of AI increases, the more reports we will see come to light. I don't know whether that reflects an actual increase in cases or whether it just reflects the increased awareness, but it's true that we definitely see an increase in recorded cases of this kind, yes.

 

Okay. And the release itself mentioned that NXP identifies five AI principles. Could you perhaps tell us a little bit more about those?

 

Yes. It's a very complex matter, but a few of those principles, I think, need no further explanation, like the principle of non-maleficence: whenever we introduce technology, we want it to serve its users and not to dominate or control them. That is what this principle means. Algorithmic bias is mentioned in this principle, and that, indeed, is a very complicated challenge. Let me give you an example. If you deploy a facial recognition system, and this system or model is trained with training data that you buy off the shelf, data that reflects some parts of the population but not all of it, you will have bias against those minorities not represented in the training data. Caucasian or Asian facial types, for example, are very well represented in the available training data, but if you look at South America, the regions around the poles, or some African minorities, you will not see those facial types represented, and that will of course introduce bias when an AI system trained with this data is deployed. Our response to that is that we should not sit back and say it cannot be changed; there are actually two approaches to solving the matter. One way is to extend the training data, which in the field will probably not always be possible. The second is that you could integrate an element of uncertainty, and once that threshold is triggered, the system would require a human to come into the loop.
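To make that second approach concrete, here is a minimal, hypothetical sketch of such an uncertainty threshold in Python. The names, the threshold value, and the scoring interface are illustrative assumptions rather than anything from NXP's framework; the point is only that low-confidence outputs are escalated to a human rather than acted on automatically.

```python
# Hypothetical sketch of the human-in-the-loop approach described above:
# if the model's confidence falls below a threshold, the decision is
# escalated to a human reviewer instead of being taken automatically.
# Names and the threshold value are illustrative, not NXP's implementation.

from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.90  # below this, a human must decide

@dataclass
class Decision:
    label: str
    confidence: float
    decided_by: str  # "model" or "human"

def classify_with_escalation(scores: dict[str, float]) -> Decision:
    """Pick the top-scoring class, but escalate uncertain cases to a human."""
    label, confidence = max(scores.items(), key=lambda kv: kv[1])
    if confidence < CONFIDENCE_THRESHOLD:
        # In a real system this would queue the sample for review;
        # here we just mark it as requiring human judgement.
        return Decision(label, confidence, decided_by="human")
    return Decision(label, confidence, decided_by="model")

# Example: a face-recognition model that is unsure about an
# under-represented face type hands the case to a person.
print(classify_with_escalation({"match": 0.62, "no_match": 0.38}))  # human
print(classify_with_escalation({"match": 0.97, "no_match": 0.03}))  # model
```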

 

I see. And the white paper itself mentions that the definition of exactly what is ethical varies across different cultures and different locations around the world. So how does NXP go about solving that variability issue?

 

Well, we are a global semiconductor company, so with our products we need to keep a global group of users in mind. However, the research on that subject is not yet conclusive. What we see in the trolley problem, for example, is that there are some variations across cultures, but the latest research has shown that the classes of solutions are actually quite uniform throughout the cultural variation. What that tells us is that, more or less, as human beings we share a common ground on what is ethical and what is not; it is only in the very detailed implications of these assessments that there might be disagreement across cultures. So basically, there is no doubt about what is right and what is wrong. But then, again, you have different realms of legislation that you need to take into account, and all these constraints come into play when developing a system. We, as a hardware and software provider for artificial intelligence, must make sure that the system itself is capable of fulfilling these requirements. That is a very basic start for us, if that answers your question.

 

Indeed, indeed, yes. If you're enjoying this podcast, take the time to check out the Cliff Notes podcast. They have guests who are long-standing figures in the manufacturing industry, helping companies approach digital transformation, with practical tips from top engineering leaders and the latest tech that companies are using. The latest episode focuses on the eight wastes in a lean business, and you can find them by searching 'Cliff Notes podcast' in any podcast directory or by going to cliffnotespodcast.com. Now, the white paper also mentioned that there are new policies and laws being introduced around the governance of AI. Will that make the development of AI and machine learning solutions more challenging for companies like NXP?

 

That depends, because those rules and regulations differ across the global markets, and we have R&D in artificial intelligence both in Europe and in the United States, so of course we need to take these regulations into account when using data and when developing systems. But it is true that, when these regulations are made, we must make sure that the use of data for the benefit of human beings, for example in medical applications, is allowed, and that this data becomes available to companies researching in those particular fields. On the other hand, while we need to make sure that these systems are developed in compliance with ethical values and regulation, we also must make sure that the data is not so restricted that it cannot be used for the benefit of the users. So it's a two-edged sword, and it's very hard to give an answer that covers the entire field; you would need to look at individual, specific cases.

 

I see. Sure. And you also mentioned that companies are developing their own ethical AI principles, companies like Huawei, IBM, Google, and so on. Are these companies developing these principles in collaboration with one another, or are they very varied and disparate? And if they are very varied, is that going to create further challenges down the line?

 

There are certain platforms where the development of ethical codes is being shared and collaboratively driven by companies; the Charter of Trust, which is also mentioned in the paper, is one of them. There are also certain government-initiated forums where these things are being discussed, and there is a huge initiative, mainly of companies in the United States. But all in all, I think all the companies developing ethical codes or codes of conduct are orienting themselves around those universal human values that I mentioned at the beginning. Companies in the US often run surveys among their employees to identify the most pressing issues, as well as surveys of their clients, customers, and users, and usually these surveys come to very similar results. So what we see in the industry landscape is that all those ethical codes are not very different from each other. However, one must say that the way they are implemented is, I think, what differentiates them. I can't speak for the other companies, but we as NXP have chosen to actually commit ourselves to these principles, and I think that's a big difference. When you look into the semiconductor landscape, or into the industry here, you will find very few companies that actually issue a moral or ethical framework, and those that do usually intend them as guidance for policymakers or as a recommendation to customers; what you rarely see is companies actually committing to these principles themselves. That's just my personal opinion, but from the frameworks that we looked at, this is true for the majority of them.

 

I see. Okay. And the white paper also mentioned that the EU High-Level Expert Group has published an assessment list to aid companies in their AI policies and processes. Could you perhaps tell us a little bit more about that assessment list and what sort of parameters are on it?

 

The High-Level Expert Group issued a draft of an ethical framework, I think a year or two ago, and that went into a commenting period for the main industry bodies across the European member states. At the end of this process, all the input was consolidated, and the assessment list we see now is basically a consolidation of that survey, assessment, and commenting period. The basic parameters the assessment follows are not very different from our own ethical principles. It turns around human agency and oversight, so that the human factor remains in control of an artificially intelligent system. It focuses very much on technical robustness and safety, which matches our understanding that you need a very solid technical foundation: if you want your systems to follow moral values, you really need to ensure the integrity of those systems first. It has, of course, privacy and data governance as one big focus point. Transparency is important, as are diversity, non-discrimination, and fairness, so the prevention of bias. There is a component about environmental and societal well-being, and I think the last one is accountability, so that if you deploy a system, you must make sure there is actually an entity accountable for the actions of that system. Those, in a nutshell, are the principles of the assessment list of the Ethics Guidelines for Trustworthy AI from the High-Level Expert Group.

 

Okay. And the white paper speaks a lot about the need for safety and privacy, and the importance of AI systems being resilient to malicious attacks. Is that something that's going to be a growing issue moving forward, as attack vectors increase and methods of attack become more numerous and more sophisticated?

 

Definitely, because AI is, in that field too, a two-sided coin. You can use it to protect systems, by deploying it for malware detection or anomaly detection, but of course you can also deploy AI to attack systems, for example in brute-force attacks, and you can use it for remote attacks. There is at least one scientific paper, from a Dutch university, showing that an AI-backed attack is much more effective and much faster than one carried out without machine learning as a support tool. So we will definitely see an increase in AI and machine learning in attacks, and that is why resilience against these sorts of attacks must be integrated into the cybersecurity measures when these systems are being designed.
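The defensive side mentioned here, anomaly detection, can be illustrated with a minimal, hypothetical sketch. The traffic figures, threshold, and feature choice below are illustrative assumptions, not a description of any NXP product; a deployed detector would use more robust statistics and richer features.

```python
# Hypothetical sketch of statistical anomaly detection for security
# monitoring: flag any sample that sits unusually far from the mean
# of recent traffic. Data and threshold are illustrative only.

import statistics

def find_anomalies(samples: list[float], z_threshold: float = 2.0) -> list[int]:
    """Return indices of samples more than z_threshold standard deviations from the mean."""
    mean = statistics.fmean(samples)
    stdev = statistics.stdev(samples)
    if stdev == 0:
        return []
    # Note: a single large outlier inflates the standard deviation, which is
    # why production systems prefer robust statistics (e.g. median/MAD).
    return [i for i, x in enumerate(samples) if abs(x - mean) / stdev > z_threshold]

# Example: requests per minute; the spike at the end could indicate a
# brute-force attempt and would be flagged for inspection.
traffic = [101.0, 98.0, 103.0, 97.0, 99.0, 102.0, 100.0, 640.0]
print(find_anomalies(traffic))  # -> [7]
```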

 

Okay. And looking ahead to the future, off the back of the research that you've done for the white paper, how do you see the ethics of AI evolving as we move forward?

 

That's a good question; if I knew, I could bet on it. But what we have done here at NXP is to start creating awareness of these matters among the design teams, the engineers, and all employees. So one thing we will be able to achieve moving forward is that the nature of the problem is well understood and countermeasures can be developed. For example, we have a very successful track record with our Product Security Incident Response Team, a task force of specialists addressing vulnerabilities in our own products and working with our customers to mitigate them. Building on tools like this, and on expertise in this field, I think we now have, with this framework, the ability to tackle ethical issues around artificial intelligence much faster, and in much closer collaboration with our customers, than previously. And that, I think, is the big win here. If we look at the industry landscape, the topic does indeed get much more attention than in the past, so I'm positive that ethical values will be reflected in the design of artificial intelligence systems more and more moving forward. I think that's a good prognosis to make.

 

Okay. And in terms of NXP's AI Ethics initiative and the white paper, in terms of timing, what have been the driving factors behind why NXP is doing this now? And how long has it been in the development process?

 

Have you collaborated with any other partners or customers in the research, or anything like that?

 

We definitely do joint research with partners and with scientific institutions. As for the process, that's why I told you about our long track record in implementing trust in connectivity and in chip solutions; I think this is a natural consequence of the way we look at the market and the business, and it was just a matter of time until we implemented it. It took us about a year, I think, to involve the entire engineering teams and all the other stakeholders that have touchpoints with artificial intelligence, including our cryptography teams and our security evaluation and assessment teams, to come up with a framework. Because in the end, it's very easy to claim a few more requirements, but it's much more difficult to implement a system that the company is able to follow up on and that we are able to implement in our own processes. That's where we are right now. I would not say it's a completed initiative, but we are now starting to anchor these values into our own code of conduct, into our compliance regulations, and within the individual teams dealing with product issues, vulnerabilities, and technology development in the field.

 

Well, thank you, Sven. That's all we have time for at the moment, but thank you very much for the insights; I'm sure it will generate a lot of interest amongst our listeners. So if anyone does have a question around the ethics of AI, then please get in touch, or visit the podcast section of the Electronic Specifier website, where you will also be able to download the NXP white paper. But for now, a big thank you to Sven, and goodbye to everyone. Thank you.

Electronic Specifier