China’s AI use spooks expert

February 6, 2019 | Expert Insights

Yoshua Bengio, a Canadian computer scientist who helped pioneer the techniques underpinning much of the current excitement around artificial intelligence, is worried about China’s use of AI for surveillance and political control.

Background 

Artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. Computer science defines AI research as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.
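To make the "intelligent agent" definition concrete, here is a minimal Python sketch, with all names (Environment, agent_policy) invented purely for illustration: an agent repeatedly perceives its environment and picks the action that best advances its goal.

```python
# Illustrative sketch only: an "intelligent agent" in the perceive-act sense
# described above. The environment and policy are toy inventions, not any
# real library's API.

class Environment:
    """A toy world: the agent tries to drive its position to 10."""
    def __init__(self):
        self.position = 0

    def percept(self):
        # What the agent can observe about the environment.
        return self.position

    def apply(self, action):
        # The action (-1 or +1) changes the environment's state.
        self.position += action

def agent_policy(percept, goal=10):
    """Choose the action that maximizes progress toward the goal."""
    return 1 if percept < goal else -1

env = Environment()
for step in range(20):
    observation = env.percept()         # perceive the environment
    action = agent_policy(observation)  # choose a goal-maximizing action
    env.apply(action)                   # act on the environment

print(env.position)  # the agent has steered the state to its goal: 10
```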

Artificial intelligence was founded as an academic discipline in 1956, and in the years since has experienced several waves of optimism, each followed by disappointment and the loss of funding (known as an "AI winter"), then by new approaches, success and renewed funding. In the twenty-first century, AI techniques have experienced a resurgence, following concurrent advances in computing power, the availability of large amounts of data, and theoretical understanding; AI techniques have become an essential part of the technology industry, helping to solve many challenging problems in computer science, software engineering and operations research.

On 8 July 2017, the Chinese State Council announced plans to turn China into the world leader in artificial intelligence by 2030, seeking to grow the industry to a value of 1 trillion yuan. The State Council published a three-step road map to that effect, outlining how it expects AI to be developed and deployed across a wide range of industries and sectors, from the military to city planning. According to the road map, China plans to match the technological abilities of the current AI world leaders by 2020, make major breakthroughs by 2025 and become the world leader by 2030.

Analysis 

Yoshua Bengio, the co-founder of Montreal-based AI software company Element AI, said he was concerned about the technology he helped create being used for controlling people’s behaviour and influencing their minds. "This is the 1984 Big Brother scenario," he said in an interview. "I think it’s becoming more and more scary."

Bengio, a professor at the University of Montreal, is considered one of the three "godfathers" of deep learning, along with Yann LeCun and Geoff Hinton. It’s a technology that uses neural networks - a kind of software loosely based on aspects of the human brain - to make predictions based on data. It’s responsible for recent advances in facial recognition, natural language processing, translation, and recommendation algorithms.
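As a rough illustration of what such a network does (not any of these researchers' actual models), the short Python sketch below passes a toy input through two layers of placeholder weights. In a real deep learning system the weights would be learned from large amounts of data rather than drawn at random.

```python
# A minimal sketch of the idea behind deep learning: a small neural network
# turning input data into a prediction. The random weights here are
# placeholders; a real system would learn them from training examples.

import numpy as np

rng = np.random.default_rng(0)

# Two layers of weights: "deep" in miniature (input -> hidden -> output).
W1 = rng.normal(size=(4, 8))   # 4 input features -> 8 hidden units
W2 = rng.normal(size=(8, 1))   # 8 hidden units  -> 1 output score

def predict(x):
    """Forward pass: linear transform, nonlinearity, linear transform."""
    hidden = np.maximum(0, x @ W1)   # ReLU activation, loosely brain-inspired
    return hidden @ W2               # raw prediction score

x = np.array([0.2, -1.0, 0.5, 3.0])  # a toy 4-feature input
print(predict(x))
```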

Deep learning requires a large amount of data to provide examples from which to learn, and China, with its vast population and system of state record-keeping, has a lot of it. The Chinese government has begun using closed-circuit video cameras and facial recognition to monitor what its citizens do in public, from jaywalking to engaging in political dissent. It has also created a National Credit Information Sharing Platform, which is being used to blacklist rail and air passengers for "anti-social" behaviour, and it is considering expanding the system to other uses.

"The use of your face to track you should be highly regulated," Bengio said. Bengio is not alone in his concern over China’s use-cases for AI. Billionaire George Soros recently used a speech at the World Economic Forum on Jan. 24, to highlight the risks the country’s use of AI poses to civil liberties and minority rights.

Unlike some peers, Bengio, who heads the Montreal Institute for Learning Algorithms (Mila), has resisted the temptation to work for a large, advertising-driven technology company. He said responsible development of AI may require some large technology companies to change the way they operate.

The amount of data large tech companies control is also a concern. Bengio said the creation of data trusts - non-profit entities or legal frameworks under which people own their data and allow it to be used only for certain purposes - might be one solution. If a trust held enough data, it could negotiate better terms with the big tech companies that need it, he said Thursday during a talk at Amnesty International U.K.'s office in London.
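A hypothetical Python sketch of how a data trust of this kind might enforce purpose-limited access follows; every class, method and name here is invented for illustration and does not describe any existing data trust.

```python
# Invented illustration of the "data trust" idea described above: people
# deposit data with explicit purpose limits, and the trust releases it only
# for purposes its owners approved.

class DataTrust:
    def __init__(self):
        self._records = []  # list of (owner, data, allowed_purposes)

    def deposit(self, owner, data, allowed_purposes):
        """An individual contributes data with explicit purpose limits."""
        self._records.append((owner, data, set(allowed_purposes)))

    def request(self, requester, purpose):
        """Release only data whose owners approved this purpose.
        A real trust would also authenticate the requester and negotiate
        terms; that is omitted in this sketch."""
        return [data for _, data, purposes in self._records
                if purpose in purposes]

trust = DataTrust()
trust.deposit("alice", {"age": 34}, allowed_purposes={"medical_research"})
trust.deposit("bob", {"age": 29}, allowed_purposes={"advertising"})

# Only Alice's record is released for medical research.
print(trust.request("pharma_co", purpose="medical_research"))
```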

However, Bengio said there were many ways deep learning software could be used for good. In a recent talk, he unveiled a project he’s working on that uses AI to create augmented reality images depicting what people’s individual homes or neighbourhoods might look like as the result of natural disasters spawned by climate change.

Assessment 

Our assessment is that Bengio voices serious concerns about the potential misuse of AI and facial recognition technology in the coming decades. We believe that governments and private companies should follow an ethical code while developing or deploying AI. We also feel that humanity may need to rethink how it adopts cutting-edge technologies.