In the age of AI, Professor Huey-Jen Jenny Su, head of the UN Sustainable Development Solutions Network (SDSN) in Taiwan and former president of National Cheng Kung University, spoke with 《The Icons》 to remind us to reflect on, and even critique, new technologies, and to view these constantly changing trends through the lens of sustainability.
“The pervasiveness of AI often hinders us from taking a moment to pause and reflect, especially decision-makers across different fields, who need to consider broader structural issues beyond themselves. Today, I will present several scenarios from an educational perspective, which we can collectively reflect on.”
“Good Teachers” and “Bad Teachers”
Here, “teachers” can refer to either humans or AI.
It is evident that interactions with ChatGPT and similar AI programs can yield vastly different answers or feedback, depending on whether the question is “good” or “bad.” Therefore, Professor Su raises a critical issue: “How do you guide AI?”
People are constantly training AI's logic, for better or worse. This is similar to education: everyone encounters good and bad teachers as they grow, and the outcome becomes increasingly positive with good teachers and increasingly negative with bad ones.
“What is currently beyond our control is how many ‘good’ and ‘bad’ teachers are training AI, and likewise what kind of content AI is steering humans, and even future generations of children, towards amid various unpredictable factors.” This is akin to many busy parents today who hand their children a tablet for an entire afternoon, content that the child stays put in one place, while in reality having no idea what the child is browsing. In today’s AI landscape, this potential is even more limitless.
For example, consider the United Nations Sustainable Development Goals (SDGs). ChatGPT’s content may carry various biases, or even discrimination, because of the limitations and constraints of its data. “People of all ages may unknowingly absorb and internalize inappropriate values when interacting with different social groups or dealing with cross-cultural issues involving gender, race, culture, religion, and other sensitive topics, because of the influence of AI.”
The Issue of Inequality is Becoming Increasingly Severe
First, the gap between individuals will continue to widen: those who can understand and use technologies like ChatGPT will have an advantage over those who cannot. This could exacerbate social inequality, since some people will simply be unable to enjoy the benefits of ChatGPT.
Moreover, if these technologies and resources are only available to certain social groups, this will lead to even greater inequality.
From a broader perspective, we can also see a competition between different language systems.
“AI language training is currently dominated by English in the Western world and Chinese in the Chinese-speaking world. Yet of the numerous languages spoken worldwide, only 67 are primarily used in ChatGPT. This poses a significant challenge for smaller languages in integrating with new technologies like ChatGPT.”
Professor Su emphasizes the potential impact of language inequality on the development of new technologies, as well as the exacerbation of existing inequality. We must be mindful of the implications of technological advancements and work to ensure that access to them is fair and equitable for all individuals, regardless of language or social status.
Two Skills Determine Whether You Can Coexist Sustainably with AI
Professor Su emphasizes that, in the era of walking alongside AI, two critical skills are especially important:
“Since the Renaissance, two competencies have been key to human development: logical reasoning and expression. In the face of AI’s impact, these abilities remain critical. When it comes to training AI, mastery of these two skills determines whether we can use it effectively; moreover, it also affects how AI will shape the world in the future.”
However, there is a risk or variable in this process, namely whether people have sufficient cross-disciplinary knowledge. Because AI is limited by existing information or data, it may provide incorrect or inappropriate content to humans. If humans lack sufficient knowledge, it is difficult to grasp whether the information provided by AI is correct, which may lead to misconceptions and errors in understanding the world.
Finally, from a sustainability perspective, Professor Su offers an important reminder: “Ask whether a solution fits the time and place of the target population, and whether it solves the practical problems of each locality.”
For instance, remote villages may suffer from persistent diarrhea, yet a medication prescribed by a hospital in a developed area may be unrealistic there, because local conditions make such drugs hard to obtain. And even when laboratories provide ample data proving a drug’s effectiveness, the manufacturer may be reluctant to develop it for profit reasons. “Therefore, it is essential to consider alternative solutions, such as educating the villagers about dietary habits or finding other means to address the issue.”
“Great scientific developments are rendered meaningless if, in practice, they fail to bring balanced benefits to society, or if they create new problems of their own. When we discuss AI, we are also discussing sustainability. Our next challenge is to improve the quality of education. Although this falls under the fourth of the Sustainable Development Goals (SDG 4), the quality of education is, in fact, tied to every aspect of life. What is at stake is therefore not just the quality of education but the quality of life.”
《The Icons》 will delve further into case studies of remote villages in the next interview with Professor Su, exploring more diverse aspects of sustainability and the world.