Image source: generated by Unbounded AI
In the long history of mankind, the development of technology has always been accompanied by a fundamental question: what is technology for? To serve humanity, or ultimately to transcend it?
With the continuous development of artificial intelligence, this question has become particularly urgent. Especially after OpenAI's O1 large model demonstrated astonishing chain-of-thought capabilities, we have to face an even more challenging issue: how to ensure that these increasingly "smart" AIs possess not only capabilities but also human-like value judgment.
McLuhan, known as the "father of communication studies," once proposed the concept of the extension of man: in his theory, media are extensions of the human senses, the telescope is an extension of the human eye, and the car is an extension of the human legs. By that logic, artificial intelligence should be an extension of human thinking.
The essence can be condensed into a single phrase: "humanistic intelligence."
In the book "The World I See", Chinese-American scientist and "AI Godmother" Li Feifei also emphasized the importance of people-centeredness. She believed: "If artificial intelligence is to help people, our thinking must It has to start with the people themselves.”
Behind the people-centered approach are three commitments: technological innovation, which keeps breaking through boundaries and embraces new possibilities with an open mind; societal safety, which requires that as AI technology develops, its negative impact on human society be kept to a minimum; and the universal benefit of technology, which requires that the fruits of AI development reach everyone rather than becoming the privilege of a few.
Recently, Caixin Think Tank and ESG30, together with the Artificial Intelligence Research Institute of Shanghai Jiao Tong University and Lenovo Group, jointly released the report "Human-Centered Intelligence: A Concept of Scientific and Technological Development in the Era of Human-Machine Symbiosis". The scientific and technological community and the business world are actively exploring answers to this proposition.
At their core, these guidelines and reports try to find a balance between the rapid development of AI and the ethics of human society. They attempt to answer a fundamental question: as AI technology penetrates ever deeper into human life, how can we ensure that its development does not deviate from the fundamental interests of mankind?
This is a question that must be considered carefully at the very beginning of the artificial intelligence era.
New troubles with the "growth" of AI
With the rapid development of artificial intelligence technology, AI capabilities have evolved from simple dialogue responses to complex logical reasoning. Take OpenAI's O1 large model as an example: it not only shows astonishing capability in natural language processing, but has also made breakthroughs in simulating the human chain of thought. This progress has brought convenience to mankind, but it has also brought unprecedented challenges.
Some worrying cases have emerged one after another: in the medical field, AI systems may carry value biases into diagnostic decisions; in autonomous driving, the criteria by which AI decides when facing moral dilemmas such as the "trolley problem" remain controversial; in content creation, AI may produce biased or misleading information. These cases show that as AI systems become ever "smarter", their decision-making logic and value orientation matter more and more.
What follows is an increasingly strong need to supervise and govern AI. When AI faces a moral dilemma, for example an autonomous vehicle that must choose between hitting a pedestrian on one side or an obstacle on the other, what is the basis of its decision-making logic and value judgment? The complexity of these problems lies in the fact that an AI's values are in effect a "projection" of its training data. This means that AI decisions are shaped not only by the logic of the algorithm, but also by the human values embedded in that data.
Picture source: Internet
The challenges include how to handle value differences in a multicultural context, how to uphold mainstream values while remaining objective, and how to strike a balance between technological development and ethics. They require us to analyze the concrete impact of AI decision-making in depth rather than amplify anxiety. We need to state the real problems objectively: the development of AI may inadvertently magnify human biases, may make decisions at critical moments that do not match human expectations, and may cause misunderstandings in cross-cultural communication.
To frame the relationship between humans and machines, the "Humanistic Intelligence" report proposes a "three lines" concept. The baseline is human-machine collaboration: through hybrid intelligence, humans and AI work together to improve efficiency and creativity. The trend line is human-machine symbiosis: as technology advances, the relationship between humans and AI will grow ever closer, moving toward a new era of three-way integration of humans, machines, and things. The bottom line is to ensure that humans always remain in the dominant position: no matter how AI technology develops, human interests and values must be the final basis for decision-making.
The conceptual framework in "human-centered intelligence" provides important guidance for the healthy development of AI.
In this process, the industry has gradually realized that the "growth" of AI is not only a technical issue, but also a social issue. It involves multiple levels such as law, ethics, and culture, and requires extensive participation and in-depth discussions from governments, businesses, academia, and the public. These challenges are not insurmountable obstacles, but the only way forward in the development of AI.
Technology Enterprises: Exploration from Concept to Action
Humanistic intelligence, as a brand-new concept, has already been put into practice by some companies.
Lu Chuan, a well-known director who has won many international film festival awards, made an interesting attempt in 2024.
Since Sora emerged at the beginning of the year, more and more filmmakers have begun to explore what sparks co-creation with AI can produce. During the making of the film "Western Wild", Lu Chuan worked with Lenovo's AI PC, using AI technology to greatly improve efficiency in post-production.
“If we did it the traditional way, with storyboards, motion boards, previs and so on, plus CG... it might take two months. For this film, we got it done in just two days, which greatly saved cost and manpower,” Lu Chuan concluded.
More importantly, for content creators, AI and the AI PC level the playing field of knowledge, allowing young creators to clear the entry threshold and quickly turn ideas into visuals. Ordinary creators can rely on AI technology to produce high-quality work even without a professional post-production team.
This is a typical case of AI technology serving everyone and delivering inclusive value. Enterprises such as Lenovo, as the main agents of technological innovation, are also key players in making "human-centered intelligence" a reality.
Cases with "human-centered intelligence" as the core have begun to spread in all walks of life.
Luobo Kuaipao, the driverless taxi service that became popular in Wuhan in May this year, also shows how efficiency and safety can be balanced in complex scenarios. Facing China's uniquely complex traffic environment, Luobo Kuaipao works with the government to promote the formulation and improvement of relevant regulations, ensuring that autonomous driving technology is used responsibly. At the technical level, it relies on high-precision maps, intelligent perception, and precise decision-making to keep vehicles driving safely and stably on complex roads. It has also established a passenger feedback mechanism, continuously optimizing its service by collecting and analyzing user opinions, which reflects the company's self-discipline and social responsibility in technological innovation.
Luobo Kuaipao self-driving taxi
During actual operations in Wuhan, Luobo Kuaipao strictly abides by traffic rules, including giving way to pedestrians whenever it encounters them. Its multi-dimensional sensors collect information about the surrounding environment in real time, helping the vehicle identify obstacles, pedestrians, and other vehicles and respond accordingly. Around school zones in particular, progress is often slow because the vehicle keeps yielding to pedestrians, but this caution has won praise from the public.
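To make the yielding behavior concrete, the toy sketch below shows how a rule of this kind might sit on top of a perception stack: if a detected pedestrian is in the planned path within a threshold distance, the target speed drops to zero. This is purely illustrative and is not Luobo Kuaipao's actual decision logic; the `Detection` structure, the thresholds, and the school-zone speed are all assumptions made for the example.

```python
from dataclasses import dataclass


@dataclass
class Detection:
    """One object reported by the perception stack (hypothetical schema)."""
    kind: str          # "pedestrian", "vehicle", "obstacle", ...
    distance_m: float  # distance ahead along the planned path
    in_path: bool      # whether the object intersects the planned path


def plan_speed(detections: list[Detection],
               cruise_mps: float = 8.0,
               yield_distance_m: float = 15.0,
               school_zone: bool = False) -> float:
    """Return a target speed; stop for pedestrians in the path within range.

    A school zone lowers the cruise speed, matching the cautious behavior
    described above. All thresholds here are illustrative assumptions.
    """
    target = 5.0 if school_zone else cruise_mps
    for det in detections:
        if det.kind == "pedestrian" and det.in_path and det.distance_m < yield_distance_m:
            return 0.0  # stop and give way to the pedestrian
        if det.in_path and det.distance_m < yield_distance_m:
            target = min(target, 2.0)  # slow down for other objects in the path
    return target


if __name__ == "__main__":
    scene = [Detection("pedestrian", 12.0, True), Detection("vehicle", 40.0, False)]
    print(plan_speed(scene, school_zone=True))  # -> 0.0, the vehicle yields
```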
Lenovo also demonstrates the humanistic care of AI technology in serving special groups. To address the communication difficulties faced by patients with ALS, Lenovo has collaborated with the Scott Morgan Foundation to develop an innovative AI solution. The system integrates technologies such as a circular keyboard interface, predictive AI, personalized voice replication and eye tracking, allowing patients to maintain a personalized communication style.
With this system, ALS patient Erin Taylor was able to retain her voice, create a digital avatar, and realize her wish to continue singing lullabies to her children. What's more, this technology can generate emotional and natural speech effects even with limited speech samples.
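The "predictive AI" component mentioned above can be illustrated, in greatly reduced form, with a frequency-based next-word suggester: the fewer selections a patient has to make with the eye tracker, the faster they can communicate. This is a minimal sketch of the general idea rather than the system Lenovo and the Scott Morgan Foundation built; the tiny corpus and the bigram model are assumptions made for demonstration.

```python
from collections import Counter, defaultdict


class NextWordSuggester:
    """Suggest likely next words from a small personal corpus (illustrative only)."""

    def __init__(self, corpus: str):
        # Count word-pair frequencies (a simple bigram model).
        self.bigrams: dict[str, Counter] = defaultdict(Counter)
        words = corpus.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.bigrams[prev][nxt] += 1

    def suggest(self, prev_word: str, k: int = 3) -> list[str]:
        """Return up to k most frequent follow-ups to prev_word."""
        return [w for w, _ in self.bigrams[prev_word.lower()].most_common(k)]


if __name__ == "__main__":
    # A tiny "personal corpus"; a real system would learn from the user's own messages.
    corpus = "sing a lullaby to the children sing a song to the children good night"
    model = NextWordSuggester(corpus)
    print(model.suggest("sing"))  # -> ['a']
    print(model.suggest("the"))   # -> ['children']
```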
Technological innovation also extends to Alzheimer's patients. Working with the organization Innovations in Dementia, Lenovo customized a specialized AI based on the life experiences of Alzheimer's patients, creating a realistic 3D digital avatar that serves as a round-the-clock conversational virtual companion for patients and the families coping with the disease alongside them.
These practices demonstrate the unique value of AI technology in improving the quality of life of special groups, and reflect the deep thinking of enterprises in the development process of AI. From the initial development of simple functions to the current focus on the coordination of human-machine collaboration, the AI ethics practice of enterprises is experiencing qualitative improvement. This transformation is not only an inevitable requirement for technological development, but also reflects the company's deep understanding of social responsibility.
In this process, companies have gradually formed a systematic practice methodology. First, embed the "people-oriented" concept into product design principles, so that every function is developed to serve real human needs. Second, establish an ethical review mechanism covering the entire research and development cycle, with ethical assessment at every link from problem definition and algorithm design to application deployment. Finally, continuously optimize and improve the AI system's performance through ongoing user feedback and real-world practice.
From concept to action, human-centered intelligence is pushing AI technology to a more mature stage of development.
Technological innovation: from the chain of thought to the chain of values
In the world of artificial intelligence, technological progress is not just the iteration of code and algorithms; it is also a profound reflection on the extension of human wisdom and on moral ethics. As AI begins to learn to think like humans, the challenges we face escalate. Take the case of Lenovo Group's AI technology helping ALS patients "speak": it is not only a victory of technology, but also a profound exercise in shaping AI's "three views", that is, its worldview, outlook on life, and values.
When it comes to screening and optimizing training data, AI's "three views" are in effect a "projection" of that data: its quality and representativeness directly shape the AI's language expression, logical reasoning, and value judgment. Yet data quality and representativeness have become two major problems in AI development. Contamination by harmful content, the embedding of biased and discriminatory information, and the spread of wrong or outdated information all test AI's "three views". In addition, the uneven geographical distribution of data, incomplete language coverage, and a lack of cultural diversity are issues we must face squarely.
To overcome these problems, we need to work at three levels: data sources, algorithms, and evaluation. At the data-source level, building diversified, high-quality training datasets, strengthening data review and screening mechanisms, and establishing culturally balanced databases are the foundation for keeping AI's "three views" sound. At the algorithm level, developing value alignment techniques, introducing ethical constraint mechanisms, and designing bias detection and correction methods are the keys to fair AI decision-making. At the evaluation level, establishing a values evaluation system, carrying out continuous monitoring and optimization, and bringing multiple stakeholders into the evaluation are what guarantee the continuous evolution of AI's "three views".
Training data thus has a profound impact on AI's "three views". Just as human growth is shaped by one's environment, the behavior patterns of AI systems are deeply shaped by their training data. This influence covers both direct language expression and reasoning logic, and the more indirect formation of knowledge structures and value judgments. Facing this challenge, companies work on data sources to build diversified, high-quality training datasets on the one hand, and innovate at the algorithm level on the other, developing value alignment techniques and designing bias detection and correction methods.
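What "strengthening data review and screening mechanisms" might look like can be pictured with a deliberately simple filter that drops flagged training samples and reports language coverage as a crude proxy for balance. This is only a sketch of the idea under an assumed keyword list and sample schema; real pipelines rely on trained safety classifiers, statistical audits, and human review rather than keyword matching.

```python
from collections import Counter

# Assumed, illustrative blocklist; real pipelines use trained safety classifiers.
HARMFUL_MARKERS = {"slur_example", "threat_example"}


def screen_dataset(samples: list[dict]) -> tuple[list[dict], dict]:
    """Filter out flagged samples and report language coverage.

    Each sample is assumed to look like {"text": str, "lang": str}.
    Returns the kept samples plus a small balance report.
    """
    kept, dropped = [], 0
    lang_counts: Counter = Counter()
    for s in samples:
        text = s["text"].lower()
        if any(marker in text for marker in HARMFUL_MARKERS):
            dropped += 1
            continue
        kept.append(s)
        lang_counts[s["lang"]] += 1

    total = max(len(kept), 1)
    report = {
        "dropped_harmful": dropped,
        # Share of each language, a crude proxy for linguistic and cultural balance.
        "language_share": {lang: n / total for lang, n in lang_counts.items()},
    }
    return kept, report


if __name__ == "__main__":
    data = [
        {"text": "AI should serve everyone", "lang": "en"},
        {"text": "this contains slur_example", "lang": "en"},
        {"text": "人工智能应当以人为本", "lang": "zh"},
    ]
    cleaned, report = screen_dataset(data)
    print(len(cleaned), report)  # 2 samples kept, 1 dropped, language shares reported
```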
AI's "three views" are also tested by the concrete methods of safety training and the design of multiple safeguard mechanisms. How should an AI make choices consistent with human values when facing moral dilemmas and complex decisions? Through simulation training, we can let AI experience various emergency situations in a virtual environment and learn to make decisions that conform to human ethical standards. At the same time, establishing an ethics review committee can provide guidance and oversight for AI's decision-making logic. Improving the transparency and explainability of AI decisions is another important means of keeping AI's "three views" sound.
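The simulation-training idea can be hinted at with a toy harness that replays scripted emergency scenarios against a decision policy and flags any case where the policy harms a person even though a harmless alternative existed. The scenarios, the policy, and the "damage" scores below are all invented for illustration; this is not a real safety-training pipeline.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Scenario:
    """A scripted emergency situation with the actions available to the agent."""
    name: str
    actions: dict[str, dict]  # action -> {"harms_person": bool, "damage": float}


def rule_based_policy(scenario: Scenario) -> str:
    """Pick the least damaging action among those that harm no person,
    falling back to minimum damage overall if every option harms someone."""
    safe = {a: p for a, p in scenario.actions.items() if not p["harms_person"]}
    pool = safe or scenario.actions
    return min(pool, key=lambda a: pool[a]["damage"])


def evaluate(policy: Callable[[Scenario], str], scenarios: list[Scenario]) -> list[str]:
    """Flag scenarios where the policy harms a person despite a harmless option."""
    violations = []
    for sc in scenarios:
        choice = policy(sc)
        harmless_exists = any(not p["harms_person"] for p in sc.actions.values())
        if sc.actions[choice]["harms_person"] and harmless_exists:
            violations.append(sc.name)
    return violations


if __name__ == "__main__":
    scenarios = [
        Scenario("obstacle_vs_pedestrian", {
            "swerve_into_obstacle": {"harms_person": False, "damage": 5.0},
            "continue_toward_pedestrian": {"harms_person": True, "damage": 0.0},
        }),
    ]
    print(evaluate(rule_based_policy, scenarios))  # -> [] (no violations)
```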
Practice has proved that solving these technical problems cannot rely on a single method, but requires a systematic methodology.
In the process of advancing these technological innovations, companies have gradually formed their own best practices. How to hold the ethical bottom line while pursuing technological breakthroughs, and how to expand the boundaries of capability while staying true to responsibility: answering these questions requires the wisdom and courage of the entire industry.
Only in this way can AI truly become the core engine that drives business growth and create lasting value for enterprises and users.