Wed, 01/13/2021 - 12:25

COMPANIES INTERVIEWS: HUAWEI

Huawei

Investigating how companies are approaching ethics in Europe, in this article we hear the experience of Xin Chen, Director of European Standards and Industry Development, and the company’s lead in AI ethics, governance and policy at Huawei Technologies Ltd.

Huawei is a Chinese global provider of information and communications technology (ICT) infrastructure and smart devices, with European offices including one in London.

In this article we present a short preview of the interview; the complete version can be found here: Complete Version

Tell us about your company.

Huawei has developed an AI portfolio which includes products such as Ascend (chipset), Atlas (server/module), MindSpore (developer framework), and the Application Enablement layer (e.g., CV/NLP). We are investing heavily in AI research, which focuses on developing the capabilities for more efficient, secure, and automated machine learning solutions.

Our recently announced Ascend family of AI chips will power a full range of AI scenarios for customers and partners. These chips are part of a portfolio that includes an automated development toolkit, a unified training framework, and a set of powerful application enablement tools. The goal is to give companies and developers the power, tools, and platforms they need to develop AI applications for almost any situation.

 


Working on AI raises ethical issues. What are the ethical challenges in your job, and what efforts do you and your team make to address them?

A large number of civil society organizations, companies and academics have highlighted a range of risks AI presents, such as fairness, transparency, accountability, and bias. These are all important and as such we have invested considerable efforts in tackling them. For example:

  • From a technical point of view, our MindSpore team has been developing modules that allow users to understand how certain machine learning outputs were reached, helping achieve more transparency. They are also exploring tools to interrogate algorithms and test various mathematical definitions of fairness.
  • From a governance point of view, we are working with think tanks and consultants to develop frameworks to allocate responsibilities and assurances-giving mechanisms to different actors in the AI supply chain. We are also exploring ideas such as the third-party auditing of AI systems.

Do you keep informed about European regulations, best practices, or international standards (e.g., GDPR or Trustworthy AI)?

Absolutely. We are actively engaged in the EU’s efforts to promote trustworthy AI, whether through standard setting, regulations, or best practice guidance. For example, in our response to the AI white paper we focused on the need for high-risk applications to be regulated under a clear legal framework, and proposed ideas for what the definition of AI should be. In this regard, we believe the definition of AI should come down to its application, with risk assessments focusing on the intended use of the application and the type of impact resulting from the AI function. If there are detailed assessment lists and procedures in place for companies to make their own self-assessments, then this will reduce the cost of initial risk assessment – which must match sector-specific requirements.

Is there any particular example of the positive impact of one of your AI products that you would like to share?

StorySign, a free app developed with the European Union of the Deaf, uses AI to read selected children's books and translate them into sign language. We wanted to create an authentic reading experience and make it possible for families with deaf children to enjoy an enriched story time. Given that there are approximately 32 million deaf children globally and many struggle to learn to read, often due to a lack of resources bridging sign language and reading, technology can help open the world of books to many of them and their families.

Do you reflect on and/or measure the environmental and social consequences of your work? If so, why is it important, in your opinion?

We try to take into account the social and environmental consequences of our work in different ways. For example, we are particularly committed to developing tools, technologies and applications that help further important environmental goals.

  • Case 1: Protecting biodiversity: Huawei’s AI is being used by Rainforest Connection (RFCx), an NGO that combats illegal deforestation and poaching. RFCx builds sensors from upcycled old cell phones, places them strategically across protected areas, and uses AI to analyse sounds and identify loggers and poachers. Park rangers receive real-time alerts from the sensors, helping them react more quickly and efficiently to threats – targeting their limited resources to prevent environmental harm.
  • Case 2: Huawei has a free mobile app called StorySign that aims to help deaf children read by translating text from selected books into sign language with AI.
  • Huawei is also committed to promoting green ICT solutions: Huawei’s continued investment in R&D aims to help industries conserve energy and reduce emissions via the use of new technologies and to build an environmentally friendly, low-carbon society that saves resources (e.g., PowerStar energy management technology). In the case of a wave soldering unit, for example, we can contribute to consuming 25.6% less energy, saving about 31,000 kWh of electricity each year.
  • AI solutions for a greener Europe: AI can facilitate evidence-based decisions and expand capacities to understand and tackle environmental challenges. Broader use of AI could reduce worldwide greenhouse gas (GHG) emissions by 4% in 2030, an amount equivalent to 2.4 Gt CO2e. AI capability is a crucial component of the Farm to Fork Strategy, as AI can reduce costs for farmers, improve soil management, and cut the use of pesticides and fresh water as well as GHG emissions.
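As a rough, illustrative sanity check of the figures above (assuming each percentage and its absolute saving refer to the same baseline, which the source does not state explicitly), the implied baselines can be back-calculated:

```python
# Illustrative back-of-the-envelope checks on the cited figures.
# Assumption: each percentage and absolute saving share one baseline.

# Wave soldering unit: 25.6% less energy, about 31,000 kWh saved per year,
# implies a baseline annual consumption of roughly 31,000 / 0.256 kWh.
soldering_baseline_kwh = 31_000 / 0.256
print(f"Implied wave-soldering baseline: {soldering_baseline_kwh:,.0f} kWh/year")
# → Implied wave-soldering baseline: 121,094 kWh/year

# Broader AI use: a 4% reduction equal to 2.4 Gt CO2e implies a 2030
# worldwide GHG baseline of 2.4 / 0.04 = 60 Gt CO2e.
ghg_baseline_gt = 2.4 / 0.04
print(f"Implied 2030 GHG baseline: {ghg_baseline_gt:.0f} Gt CO2e")
# → Implied 2030 GHG baseline: 60 Gt CO2e
```

The two cited numbers are therefore internally consistent with a global GHG baseline of about 60 Gt CO2e in 2030.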

There are also indirect ways our products help further environmental goals: achieving the European Green Deal goes hand in hand with increased digitalisation which essentially leads to greater efficiency in the application of sustainable solutions. Digital technologies contribute to the greening of the economy mainly through reducing transaction costs, increasing real-time usage of data, shedding light on interdependencies and creating efficiencies: digitalisation allows everyone to do more with less. Digital technologies have the overall potential to enable a 20% global reduction in CO2 emissions by 2030 and could prevent 10 times more CO2 emissions than they actually produce.

Do you have any other personal reflections or experiences that you want to share with us?

I would like to emphasize something I haven’t seen highlighted sufficiently, which is the importance of responsible AI across the supply chain. No initiative or industry association seems to be looking at that: while there are many AI ethics/governance initiatives, few involve all the different actors that form part of the AI supply chain. Digital Europe has a diverse group of members, but their aims are geared more towards lobbying than towards developing trustworthy AI specifically. I think for there to be effective governance mechanisms and assurances to support trustworthy AI, you need to really look at the different companies that form part of the supply chain, and better understand their respective roles.

 

Source: Xin Chen