In a low-rise building overlooking a busy intersection in Beijing, Ji Rong Wen, a middle-aged scientist with thin-rimmed glasses and a mop of black hair, excitedly describes a project that could advance one of the hottest areas of artificial intelligence.
Wen leads a team at the Beijing Academy of Artificial Intelligence (BAAI), a government-sponsored research lab that’s testing a powerful new language algorithm—something similar to GPT-3, a program revealed in June by researchers at OpenAI that digests large amounts of text and can generate remarkably coherent, free-flowing language. “This is a big project,” Wen says with a big grin. “It takes a lot of computing infrastructure and money.”
Wen, a professor at Renmin University in Beijing recruited to work part-time at BAAI, hopes to create an algorithm that is even cleverer than GPT-3. He plans to combine machine learning with databases of facts, and to feed the algorithm images and video as well as text, in hope of creating a richer understanding of the physical world—that the words cat and fur don’t just often appear in the same sentence, but are associated with one another visually. Other top AI labs, including OpenAI, are doing similar work.
One thing that drew Wen to BAAI is its impressive computational resources. “The BAAI has received stellar support from the government and has strong data and computing power,” he says.
His language model is one of many BAAI projects aimed at fundamental advances in AI, reflecting a new era for Chinese technology. Despite considerable hype and hand-wringing over China’s technological ascent, the country has so far primarily excelled at taking innovations from elsewhere and deploying them in new ways. This is particularly evident in AI, an area Chinese leaders consider crucial to their aspirations of becoming a true superpower.
Some breakthroughs at BAAI could benefit the government directly. Wen says his language system could serve as an intelligent assistant to help citizens perform civic tasks online like obtaining a visa, a driver’s license, or a business permit. Instead of spending days filling out paperwork and waiting in line, as is the norm, a clever helper could guide citizens through the red tape. Zhanliang Liu, project lead for the effort and previously an engineer at Baidu, China’s top web search company, says his team has built a prototype for Beijing’s Department of Motor Vehicles. “It is a really tough challenge,” he says.
The government might, of course, benefit in other ways. More sophisticated AI language systems could prove useful for scanning social media for questionable comments or for scouring phone call transcripts. The Chinese state has embraced AI as a tool of governance, including for censorship and surveillance, particularly of Muslims in the western region of Xinjiang. There’s no evidence of BAAI’s work feeding into policing or intelligence, but it is being released openly for anyone to commercialize or apply.
At the same time, officials are wary about the potential for AI to erode the power of the state. Several projects at the institute aim to set guardrails for commercial use of AI, to head off ethical challenges and curb the power of big tech companies.
“The Chinese government’s trying to get on top of this, to make sure that they’re properly in control, and I think that’s actually not proving altogether straightforward,” says Nigel Inkster, author of The Great Decoupling, a recent book about the fracturing relationship between China and America.
The government made its AI ambitions clear in a sweeping plan released in 2017. It set AI researchers the goal of making “fundamental breakthroughs by 2025” and called for the country to be “the world’s primary innovation center by 2030.”
BAAI opened a year later, in Zhongguancun, a neighborhood of Beijing designed to replicate US innovation hubs such as Boston and Silicon Valley. It is home to a few big tech companies modeled on Western successes, like the PC maker Lenovo and the search engine Sogou, as well as countless cheap electronics stores.
In recent years, the electronics stores have begun disappearing, and dozens of startups have sprung up, many focused on finding lucrative uses for AI—in manufacturing, robotics, logistics, education, finance, and other fields.
BAAI will move into a new building not far from the current offices later this year. The location is both symbolic and practical, within walking distance of China’s two most prestigious universities, Tsinghua and Peking, as well as the Zhongguancun Integrated Circuit Park, opened by the government last year to attract home-grown microchip businesses.
The pandemic has made it impossible to visit China in person. I had met some academics working at BAAI before, and talked to others there over Zoom. An administrative assistant gave me a guided tour over WeChat video. Through the tiny screen, I saw engineers and support staff seated in an open-plan office between lush-looking potted plants. Plaques on the wall of the reception area identify the academy’s departments, including Intelligent Information Processing and Face Structured Analysis. A large sign lays out the principles that guide the center: Academic thinking. Basic theory. Top talents. Enterprise innovation. Development policy.
One group at BAAI is exploring the mathematical principles underpinning machine-learning algorithms, an endeavor that may help improve upon them. Another group is focused on drawing insights from neuroscience to build better AI programs. The most celebrated machine-learning approach today—deep learning—is loosely inspired by the way neurons and synapses in the human brain learn from input. A better understanding of the biological processes behind animal and human cognition could lead to a new generation of smarter machines. A third group at the academy is focused on designing and developing microchips to run AI applications more efficiently.
Many BAAI-affiliated researchers are doing cutting-edge work. One works on ways to make deep learning algorithms more efficient and compact. Another studies “neuromorphic” computer chips that could fundamentally change the way computers work by mirroring biological processes.
China boasts some top academic AI talent, but it still has fewer leading experts than the US, Canada, or some European countries. A Paulson Institute study of AI research papers, released in June, found that China and the US produce about the same number of AI researchers each year, but that the vast majority of them end up working in the US.
The issue has become more urgent of late, after the Trump administration imposed sanctions that capitalize on China’s inability to manufacture the most advanced microchips. The US has most prominently targeted Huawei, which it accuses of funneling data to the government, including for espionage, cutting off its supplies of the chips needed to make high-end smartphones. In 2019, the US broadened its sanctions to ban American firms from doing business with several Chinese AI companies, accusing them of supplying technology for state surveillance. President Biden may take a different approach from Trump’s, but he is unlikely to ignore China’s technological challenge.
Tiejun Huang, a codirector of BAAI, speaks carefully, after a long pause to collect and translate his thoughts. He says the center is modeled on Western institutions that bring together different disciplines to advance AI. Despite difficult US-China relations, he says, it is crucial for the academy to build ties with such institutions. It has sent researchers to visit MILA in Canada and the Turing Institute in the UK, two of the world’s top centers of AI expertise. AI scientists from US institutions including Princeton and UC Berkeley serve on the academy’s advisory committee.
The Chinese government is not alone in investing in AI. The US Defense Advanced Research Projects Agency backs research with potential military uses. Yet many in the West are wary of how the Chinese state could use technology to further its interests and values—for example, tying digital technologies to the Belt and Road Initiative, which builds economic and infrastructure links to neighboring countries. With clear ties to the Chinese government, it isn’t hard to see a broader agenda in BAAI’s work.
Research at BAAI could also serve as a tool of soft power, through technical standards, for example. Some Western students of China see the government’s efforts to define standards as a way to favor domestic companies and to shape perceptions and norms around a technology. Chinese firms have been active in setting technical standards for advanced 5G mobile networks. A research group at BAAI is focused on technical standards for AI, and in July it released proposed notation for machine-learning articles.
Some Western researchers argue that much of what China is doing is not exceptional. Danit Gal, a researcher at Cambridge University’s Leverhulme Centre for the Future of Intelligence who specializes in AI ethics and was previously a technology adviser to the UN, was studying at Peking University when the academy opened and has attended several meetings there. She says it is unfair to focus on the controversies when the academy is doing earnest research. “What China is doing, you know the surveillance part, is not unique to China,” she says. “I’m from Israel, and Israeli surveillance and borders are powered by Microsoft.” (Microsoft invested in AnyVision, an Israeli company providing facial-recognition software used at West Bank checkpoints, but it said in March 2020 that it would divest its stake.)
Huang and others at BAAI say international researchers should engage with the institute as a way to indirectly influence the Chinese government. “The BAAI is a platform to put together people with different answers, different backgrounds, different views, and from different countries so they can talk to each other and know each other,” Huang says.
Glenn Tiffert, who focuses on China at the Hoover Institution, says engagement makes sense, but it is important to appreciate the broader context. “I am absolutely not in favor of decoupling,” he says. “They may be honorable people, people of good faith,” he says of the staff and researchers at the academy. “But it’s important to remember there is a commissar behind the curtain.”
In the summer of 2019, before the pandemic, I visited a researcher at the Institute of Automation in Beijing who is now a key member of BAAI. The Institute of Automation is also located in Zhongguancun. Its entrance bears testament to the Chinese Communist Party’s longstanding interest in technological innovation: Black-and-white photographs show Mao Zedong meeting with scientists there, alongside color ones showing Xi Jinping, China’s current leader, doing the same.
Yi Zeng, a fresh-faced researcher at the Automation Institute, is also director of BAAI’s Research Center for AI Ethics and Safety. His group produced a code of ethics covering uses of AI on behalf of the Beijing city government. The code, which is voluntary for now, says that AI should not discriminate, that it should not be used in ways that pose safety risks, and that end users should be able to opt out if AI systems misbehave.
Zeng showed me a chart of 47 AI ethics codes drawn up by companies and governments in different countries. He said that his group wants to talk to researchers from around the world about issues such as AI bias and privacy protection, but he sidestepped questions about government surveillance.
Some students of China believe the Chinese Communist Party is in fact wrestling with the ethical implications of AI algorithms—at least those used by private industry—just as much as Western governments are.
In November, government regulators blocked Ant Group, a financial tech spin-off of Alibaba, from completing its planned IPO in Hong Kong and Shanghai. The government also said it would investigate Alibaba for possible antitrust abuses. Inkster, the author of The Great Decoupling, says the government is “making strenuous efforts to remind the private sector in China they exist at the government’s pleasure.”
The Chinese government is preparing a major new privacy law that will limit what data companies can collect and use—but also reinforces the state’s access to data for law enforcement and surveillance. Some work underway at BAAI reflects this new era. In response to the pandemic, a team at BAAI developed a Bluetooth Covid contact-tracing app that can alert people to possible exposure without collecting identifying information. A BAAI spokeswoman says it has been tested at several offices around Zhongguancun.
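BAAI has not published the design of its app, but decentralized exposure-notification schemes of this kind generally work the same way: each phone broadcasts short-lived random identifiers over Bluetooth, records the identifiers it overhears, and later checks those recordings locally against identifiers published by confirmed cases. A minimal sketch of that idea (all names here are illustrative, not BAAI’s actual code):

```python
import os

def new_ephemeral_id() -> bytes:
    # A fresh random identifier; it encodes no personal information
    # and would rotate every few minutes in a real deployment.
    return os.urandom(16)

class Device:
    def __init__(self):
        self.broadcast_log = []  # identifiers this device announced
        self.heard_log = []      # identifiers overheard from nearby devices

    def broadcast(self) -> bytes:
        eid = new_ephemeral_id()
        self.broadcast_log.append(eid)
        return eid

    def hear(self, eid: bytes) -> None:
        self.heard_log.append(eid)

    def check_exposure(self, infected_ids: list) -> bool:
        # Matching happens on the device itself: no central server
        # ever learns who was near whom.
        return any(eid in self.heard_log for eid in infected_ids)

# Alice and Bob pass each other; Alice later tests positive and
# uploads only her random identifiers, not her identity.
alice, bob = Device(), Device()
bob.hear(alice.broadcast())
print(bob.check_exposure(alice.broadcast_log))  # True
```

Because only random identifiers leave the phone, the server that distributes an infected user’s identifiers learns nothing about who was exposed, which is how such an app can issue alerts without collecting identifying information.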
Noam Yuchtman, a professor at the London School of Economics, has published work that uses evidence from China to suggest that AI benefits uniquely from state intervention, because algorithms are so hungry for data and computer power that governments have access to. But he adds that such a fast-moving and unpredictable technology may also pose problems for governments. “Innovation by its very nature is sort of uncertain, and perhaps nowhere more so than in AI,” he says.