China said to fall short of matching US advances in AI owing to ‘many challenges in theory and technologies’

The presentation said there is “a serious lack of self-sufficiency” in that area of Chinese AI development because most domestic LLMs are built on Llama. Facebook parent Meta made its open-source Llama 2 AI model free for research and commercial use in July last year.
Chinese Premier Li Qiang, sixth from left, is seen at the Beijing Academy of Artificial Intelligence during his inspection tour of the non-profit AI research and development group’s headquarters in the nation’s capital on March 13, 2024. Photo: Xinhua
That particular LLM shortcoming lends weight to growing anxiety that the mainland faces a widening gap with the US in AI innovation, a concern highlighted in a sideline discussion at the recent “two sessions” meetings in Beijing.

While state agencies are now working in parallel with private Chinese tech firms to develop a range of AI innovations, they still face problems with the computing infrastructure needed to train LLMs.

“Dozens of locally developed chips are different in terms of families and ecosystems”, making the 100-billion-parameter training for Chinese LLMs “very unstable”, the presentation said. US tech sanctions on China have restricted the mainland’s access to advanced semiconductors, made with American technology, for local AI development projects.
An LLM’s capability partly hinges on its number of parameters, a measure of a model’s sophistication. ChatGPT creator OpenAI’s LLM, for example, has 175 billion parameters, while most open-source Chinese LLMs on the market have between 6 billion and 13 billion.
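To illustrate what those parameter counts mean in practice, here is a minimal Python sketch (not from the presentation) that estimates a decoder-only transformer’s size using the common back-of-the-envelope formula of roughly 12 × layers × hidden-size² weights plus the token embeddings. The layer counts, hidden sizes and vocabulary sizes below are assumed example configurations, not the published specifications of any model named in this article.

```python
# Illustrative sketch: rough parameter-count estimate for a decoder-only
# transformer. The formula (12 * n_layers * d_model^2 for attention and
# feed-forward weights, plus vocab_size * d_model for token embeddings)
# is a common approximation; all configs below are assumed examples, not
# the specifications of any model mentioned in the article.

def estimate_params(n_layers: int, d_model: int, vocab_size: int) -> int:
    transformer_blocks = 12 * n_layers * d_model ** 2  # attention + MLP weights
    embeddings = vocab_size * d_model                  # token embedding matrix
    return transformer_blocks + embeddings

# A ~7-billion-parameter-class config versus a ~175-billion-parameter-class config
small = estimate_params(n_layers=32, d_model=4096, vocab_size=32000)
large = estimate_params(n_layers=96, d_model=12288, vocab_size=50000)
print(f"small model: ~{small / 1e9:.1f}B parameters")  # ~6.6B
print(f"large model: ~{large / 1e9:.1f}B parameters")  # ~174.6B
```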

The number of government-approved LLMs and related AI applications on the mainland currently totals more than 40. But at present, there are more than 200 China-developed LLMs on the market.

Another major issue raised in the BAAI presentation concerns control of AI-generated content.

It said the unique challenge for Chinese-developed LLMs is generating “quality content that is in line with facts” while also taking into account ideology and various emotions.

AI chatbots, including ChatGPT and Google’s Gemini, are prone to generating inaccurate output, referred to as hallucinations.

Although CCTV did not identify the BAAI presentation’s author, the slides that were broadcast show the logo of start-up Beijing Zhipu Huazhang Technology Co (Zhipu AI). A representative of Zhipu AI on Thursday confirmed that the company was present during the Chinese premier’s inspection tour at BAAI the day before.

Zhipu AI, which is part of the collaborative ecosystem that BAAI has been cultivating, said it has already built an LLM at the 100-billion-parameter scale.

The firm had raised a total of 2.5 billion yuan (US$347 million) in new funding as of October last year, when it achieved unicorn status with a valuation of more than US$1 billion. Investors included Tencent Holdings, Ant Group, Meituan, Xiaomi and Alibaba Group Holding, owner of the South China Morning Post.