Recently, the development of big data, artificial intelligence (AI), and information technologies in China has attracted increasing attention from Western countries and faces a trend of joint containment. How should we handle this situation and break through? How should we view the development of the related technologies? On December 23rd, Chen Yueguo, professor at the School of Information at Renmin University of China, was invited by the Chongyang Institute for Financial Studies at Renmin University (RDCY) to deliver a lecture on the development of big data and artificial intelligence in China.
Professor Chen first pointed out that IBM, Oracle, and EMC are three pillar companies in the database field that hold a large market share among banks and financial institutions. These three companies charge China's banking and financial industry high fees. However, this is not only a matter of money; it also concerns the security of core national information. It is therefore inevitable that we need to develop substitute products independently.
Professor Chen noted that another issue is chip manufacturing. As the ZTE case showed, China is largely dependent on imports in the chip-manufacturing field, especially for crucial hardware such as CPUs and GPUs.
Professor Chen emphasized that in the era of big data, the more open-source software we have, the more we benefit. The problem, however, is that the research and development of big-data system software, such as Hadoop and Spark, is dominated by the United States, while we have not done much. Many companies in China have recognized this problem; for instance, Alibaba and Huawei have invested heavily in developing open-source and independent software, and have made some progress.
Professor Chen mentioned that AI has made rapid progress in China in recent years. However, little has been done on system software. TensorFlow, for example, a deep-learning system framework, was not developed by us; it was developed by a large foreign company, Google, with a core research team. The hidden concern behind the development of artificial intelligence in China is the insufficiency of basic independent research.
Professor Chen analyzed the reasons why big data and AI have developed so fast in China. He believes one of the key reasons is data resources. Big data and AI are certainly important to large Internet companies, but universities lack data resources, so big data and AI are less accessible to them. Universities in the United States face the same situation: big data is not easily accessible because of privacy protection, policies, and regulations, so they are relatively conservative in making technological breakthroughs. Some media outlets have even compared big data to the new oil to underline its importance.
Professor Chen reminded us that we should have a clear understanding of the strengths and weaknesses of China's big data and AI. He highlighted that China has developed very fast in big data and AI over the past five or six years. At present, China is also very attractive to talent from all over the world, and high-end talent is returning. The reason we have advantages in big data and AI is that we have rich data resources, numerous users, and abundant application scenarios, which are China's core competitive advantages for the future development of big data and AI. The weaknesses are mainly in system-level fields, such as basic system software. In addition, intellectual property protection needs to be further strengthened, because the better the protection of property rights, the better the rewards of innovation can be secured.
At the end of the lecture, Professor Chen concluded that only through hard work and perseverance can we better cope with Western technological containment.