Robots bring Asia into the AI research ethics debate


As the robotics industry grows, Asian players are looking to formalise ethical guidelines for research. Yojana Sharma reports.


A robot inspects at a 500-kilovolt converter station at Shapingba district in Chongqing, China.
Zhou Yi/CHINA NEWS SERVICE/VCG via Getty Images

Universities in China and elsewhere in Asia are belatedly joining global alliances to promote ethical practices in artificial intelligence (AI), an area that university research centres had previously been studying in a fragmented way.

Countries such as South Korea, Japan, China and Singapore are making huge investments in AI research and development, including the interface between AI and robotics, and in some areas are rapidly narrowing the gap with the United States. But crucially, there are still no international guidelines and standards in place for the ethical research, design and use of AI and automated systems.

China’s universities in particular are turning out a large number of researchers specialising in AI. Whereas in the past they would head for Silicon Valley in the US, many are now opting to stay in the country to work for home-grown technology giants such as Alibaba, Tencent and Baidu – companies which gather and use huge amounts of consumer data with few legal limits.

In July Chinese leader Xi Jinping unveiled a national plan to build AI into a US$152.5 billion industry by 2030 and said the country was aiming for global dominance.

“China’s pace of AI research and adoption is astoundingly fast; it is perhaps the market that adopts AI technology the quickest, so there is a lot of advanced research being done,” Pascale Fung, a professor in the department of electronic and computer engineering at Hong Kong University of Science and Technology, or HKUST, told University World News.

“Our prime concern is to look at the ethical adoption of AI in terms of setting up standards. Do we also need regulations; if so, what? This conversation has not happened in this region yet.

“There is no transparency about dataflow. And there is no certification of AI safety,” she says.

Major US technology companies Google, Facebook, Amazon, IBM and Microsoft last year set up an industry-led non-profit consortium ‘Partnership on AI to Benefit People and Society’ to come up with ethical standards for researchers in AI in cooperation with academics and specialists in policy and ethics.

HKUST announced earlier this month it had become the first Asian university partner in the alliance. The previous lack of Asian participation, academic or otherwise, is surprising considering the fast pace of AI developments in the region.

The global focus on AI ethics is “only starting and it is an international effort but with very little participation from Asian countries”, says Fung. “My role is to bring the top adopters of AI technology, namely the East Asian countries, to the table and to co-lead this effort.”

Researchers have also become concerned about regional efforts, such as in the European Union, to regulate AI systems, particularly those driving robots, in order to establish liability. The European Parliament, for example, has put forward ideas to recognise robots as legal entities, such as in the case of driverless cars.

It was announced last week that a robot developed by a leading Chinese AI company, iFlytek, passed the written test of China’s national medical licensing examination. Although iFlytek said its robot is not intended to replace doctors but to assist them, it has brought the issue of AI ethics to the fore in a country with a massive shortage of doctors, particularly in rural areas.

“We advocate that AI should not be the one making life-and-death decisions. AI can advise the medical doctors who in turn are the ones certified to practise medicine,” Fung says. “But so far these ideas have not yet been adopted internationally.”

“There need to be good practice guidelines and standards of how we use AI, for example in healthcare. Right now there are absolutely no guidelines. We are just playing it by ear,” says Fung. “If we don’t start working on this now, I am afraid there will be a huge accident and then the regulations will come and that will be a bit too late.”

The World Economic Forum’s Global Risks Report 2017, which surveyed 745 leaders in business, government, academia, non-governmental and international organisations, including members of the Institute of Risk Management, named AI and robotics as "the emerging technology with the greatest potential for negative consequences over the coming decade".

Fung believes Asian involvement in setting ethical guidelines is essential if globally acceptable guidelines are to be adopted within the region. “There are standards associations around the world and they are international, but there has been very little participation so far by East Asian countries, including China,” she notes.

The main work on global standards and ethical best practice for automated and intelligent systems is being carried out by the Institute of Electrical and Electronics Engineers or IEEE.

“Our ambition is to make it possible for the technical and scientific community to take into account at least the values of society and right now this is not done,” says Konstantinos Karachalios, managing director at IEEE Standards Association.

The race to be first in developing AI systems “is the big temptation of our time, just do it before others do it”, Karachalios adds. The assumption is that what is being researched and developed is good and the prevailing view is “if there is a problem with the final project it is not our problem, it is the damn people who use it”, Karachalios told University World News. “This is wrong.”

The first version of the IEEE’s global standards released last year incorporated the views of more than 150 experts in AI, law, ethics and policy. But it was seen as based largely on Western principles. This is being rectified with a new version to be released next month based on feedback, including from non-Western countries, particularly in Asia.

Cultural sensitivity is key for universal adoption of ethical standards into the design of systems. Karachalios says the need is for ethical standards to be incorporated, "but we don’t say which values to embed”.

Sara Mattingly Jordan, an assistant professor in the Centre for Public Administration and Policy at Virginia Tech in the US, who is collating the inputs and responses to the IEEE standards document, says AI ethics “is still very much an intellectual’s topic”, involving mainly university academics.

Within the AI industry, “right now we are relying on people’s professional judgment and professional expertise at an individual level of ethics. That’s what’s controlling the system right now and it’s pretty fragile.”

“The hazard of people working in teeny tiny disaggregated teams with global reach in a vacuum is a serious potential hazard,” she says. “If each individual nation or each individual university tries to publish its own code of ethical data standards, how is anybody going to operate as a vendor in that system? It’ll create substantial problems.”

But companies, including law firms, are now beginning to join the debate and the need to include the major Asian AI powerhouses – South Korea, Japan, Singapore and China – is also recognised. “It would be great if we can get China on board; nobody disputes that they are a major player,” she says. “But that doesn’t mean that we are demanding that China change its perspective.”

Experts say the Chinese government would balk at any legalistic rules or guidelines that question the supremacy of the state to control such technologies, as well as anything that smacks of individual privacy rights that might supersede the right of government over its citizens.

“There is a big interest from their [the Chinese] side to engage with the ethical aspects and not the political,” says the IEEE’s Karachalios. “The political dimension is involved because in the end it is about freedom and freedom also has an ethical dimension. This may not be something that is interesting for them and we must respect it.

“We must still find a way to engage with each other and have a fruitful dialogue,” he says, and points out that “our standards are not laws, they are recommendations from peer to peer”.

If producers of AI systems “can show that they can produce a product that is trustworthy and respects privacy then maybe people will preferentially choose it, even if they make it more expensive because they use more time and energy looking at these [ethical] aspects,” Karachalios says.

With its global AI ambitions, China definitely wants to be part of the process, says HKUST’s Fung. “On standards and regulations you can bet the Chinese don’t want to be left out.”

Originally published in University World News, and used here with permission.
