學術研究 • ACADEMIC RESEARCH | 澳大新語 • 2025 UMAGAZINE 32

In the age of artificial intelligence, we argue that Maslow's hierarchy of needs provides a useful framework for this purpose. Guided by Maslow's theory, AI systems should not only address basic human needs but also help individuals reach their full potential through human-AI interaction, thereby facilitating the 'self-actualisation' described in the theory. In addition, promoting AI wellbeing involves creating positive emotional experiences during interactions with AI systems and products.

To measure AI wellbeing, we have developed the 'AI-Interaction Positivity Scale', which assesses how happy, excited, delighted, and satisfied individuals feel when they interact with an AI system or product. The scale was recently introduced in our paper 'Introduction of the AI-Interaction Positivity Scale and its relations to satisfaction with life and trust in ChatGPT', published in the journal Computers in Human Behavior. The findings from this study show that higher AI wellbeing scores correlate with greater trust in ChatGPT, a widely used LLM with over one billion users. Although our data are cross-sectional, they suggest that higher AI wellbeing may enhance trust in AI products. This can be beneficial when the AI system is benevolent.

However, the emotional benefits of interacting with AI also come with potential risks: overreliance on AI systems may lead to cognitive outsourcing in situations where independent thinking is essential. These risks are explored in another paper, 'The darker side of positive AI attitudes: Investigating associations with (problematic) social media use'. This study found that individuals with more positive attitudes towards AI were also more likely to report problematic social media use, especially among male participants, a plausible link given how heavily social media platforms rely on AI. Viewed this way, the study suggests that overreliance could foster addictive behaviours towards AI.
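As a purely illustrative aside, short self-report scales of this kind are commonly scored by averaging the item ratings. The sketch below assumes hypothetical item names and a 1-7 response format; it is not the published scoring protocol of the AI-Interaction Positivity Scale.

```python
# Hypothetical sketch: averaging item ratings on a short positivity scale.
# Item names and the 1-7 response range are assumptions for illustration only.

def score_positivity(ratings: dict) -> float:
    """Return the mean of the four item ratings (1 = not at all, 7 = very much)."""
    items = ("happy", "excited", "delighted", "satisfied")
    for item in items:
        if not 1 <= ratings[item] <= 7:
            raise ValueError(f"rating for {item!r} must be between 1 and 7")
    return sum(ratings[item] for item in items) / len(items)

# Example respondent who reacts quite positively to an AI product
respondent = {"happy": 6, "excited": 5, "delighted": 6, "satisfied": 7}
print(score_positivity(respondent))  # 6.0
```

Higher mean scores would indicate more positive emotional experiences during human-AI interaction, which is the construct the scale is designed to capture.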
While I recommend avoiding the term 'addiction' in this context, both to avoid overpathologising everyday behaviour and because 'social media addiction' is not an officially recognised diagnosis, we must remain vigilant about these potential risks and maintain our ability to think independently while enjoying the convenience AI offers.

How are AI Wellbeing and Attitudes Towards AI Related?

Readers may wonder how AI wellbeing and attitudes towards AI are linked. The two constructs are indeed related: higher AI wellbeing correlates moderately to strongly with attitudes towards AI. This relationship is not surprising, as attitudes towards AI reflect a more cognitive approach to understanding human reactions to AI, whereas the 'AI-Interaction Positivity Scale' primarily captures emotional responses. Despite these differences, understanding people's attitudes towards AI remains valuable for developing AI systems and products that increase AI wellbeing.

The Opportunities and Challenges that AI Offers

AI offers numerous opportunities, but both citizens and governments need to adopt a thoughtful and responsible approach to its diverse applications. Understanding public perceptions of AI and its potential to foster wellbeing requires weighing many factors.

Another paper from our research group, 'Considering the IMPACT framework to understand the AI-well-being complex from an interdisciplinary perspective', published in Telematics and Informatics Reports, presents a new theoretical framework called 'IMPACT'. The framework outlines key considerations for successfully navigating the AI revolution: the 'Interplay of Modality' of an AI system, human factors (the 'Person-variable'), the 'Area' in which AI is applied, cultural and regulatory aspects ('Country/Culture'), and the level of 'Transparency' of AI systems. I believe that the interdisciplinary work being done at the Institute of Collaborative Innovation at the University of Macau holds promise for addressing the complex challenges brought about by the rise of AI.

Academic Research is a contribution column. The views expressed are solely those of the author(s).

Christian Montag is Distinguished Professor of Cognitive and Brain Sciences and Associate Director of the Institute of Collaborative Innovation at the University of Macau. He works at the intersection of psychology, neuroscience, and computer science.