EXCLUSIVE INTERVIEW • 澳大新語 UMAGAZINE 31 • 2025

Tackling the Fundamental Challenges in BCI Technology

Prof Spapé has long been driven by a profound question: how can we translate subjective human consciousness into a form that machines can understand? Current BCI technology works by detecting the electrical signals generated by brain activity and converting them into commands that computers can interpret, enabling the brain to control external devices. Despite these advancements, however, the technology often feels unnatural in real-world use.

Prof Spapé uses a simple example to explain this challenge. When we pick up a teacup, our brain naturally and effortlessly generates a smooth sequence of motor commands, much like breathing. Existing BCI systems, however, require users to break this action into a series of deliberate mental steps. To lift a teacup, a person must first imagine raising their arm, then moving it forward, and finally gripping the handle with precise finger movements. This rigid, step-by-step process makes interactions feel awkward and counterintuitive.

Prof Spapé asks: ‘How can we make these commands as natural as breathing, without requiring conscious effort?’ At CCBS, he is leading his team to answer this question. They are dedicated to overcoming this bottleneck by creating BCIs that enable seamless and intuitive interaction between the brain and external devices.

Through sustained effort, Prof Spapé’s team has developed a bidirectional BCI system powered by neuroadaptive modelling. This technology analyses brain activity in real time to infer the user’s ongoing motivations and emotions, then uses this information to perform actions or support higher-level interactions. For example, by integrating the learning and training capabilities of generative AI, the system can interpret how we respond intuitively to different stimuli, allowing for far more advanced brain control than simple movements.
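The closed-loop idea described above can be illustrated with a minimal sketch. This is not the team’s actual system: the state labels, the thresholding ‘decoder’, and the gain-adaptation rule are all simplified placeholders standing in for a trained classifier and real EEG acquisition, shown only to make the bidirectional loop concrete.

```python
import random

# Hypothetical labels for decoded motivational states (illustrative only).
STATES = ["approach", "avoid", "neutral"]

def read_eeg_window():
    """Stand-in for a real EEG acquisition step.

    Returns a small feature vector; here we simulate band-power
    features with random numbers instead of reading from hardware.
    """
    return [random.random() for _ in range(4)]

def decode_state(features):
    """Toy decoder: a real neuroadaptive BCI would use a trained
    classifier here; we simply threshold the mean feature value."""
    m = sum(features) / len(features)
    if m > 0.6:
        return "approach"
    if m < 0.4:
        return "avoid"
    return "neutral"

def adapt_device(state, gain):
    """The bidirectional part: the device changes its behaviour based
    on the decoded state, and that change in turn shapes the brain
    activity fed back into the loop."""
    if state == "approach":
        gain = min(gain * 1.1, 2.0)   # user is engaged: act more boldly
    elif state == "avoid":
        gain = max(gain * 0.9, 0.5)   # user resists: back off
    return gain                        # neutral: leave behaviour unchanged

def neuroadaptive_loop(n_windows=20):
    """Run a short closed loop: decode each EEG window, adapt the device."""
    gain = 1.0
    for _ in range(n_windows):
        state = decode_state(read_eeg_window())
        gain = adapt_device(state, gain)
    return gain
```

The key design point is that no explicit command is ever issued: the device’s behaviour drifts continuously in response to decoded motivation, which is the sense in which such interaction can feel ‘as natural as breathing’.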
A notable application of this technology is the ‘AI Tutor’ system, which Prof Spapé’s team is currently developing. This system combines neuroadaptive modelling with multimodal emotional AI, and features a DeepSeek-based artificial agent that acts as a tutor.

The AI Tutor assesses students’ emotional states by analysing their facial micro-expressions, speech prosody, and real-time electroencephalography (EEG) data, and uses this information to adapt its teaching strategies to individual learners. Specifically, it identifies emotional indicators that are essential for motivation and learning, including frustration intensity, attentional engagement, and moments of sudden insight. If the system detects a student’s frustration with the learning materials or a decline in interest, for example, it signals the need to adjust the teaching strategy, such as by simplifying the material or repeating key concepts. This technology has the potential to significantly enhance learning effectiveness, particularly in distance learning and self-study sessions.

‘It’s like the difference between a choreographed routine and an improvised dance,’ explains Prof Spapé. ‘Current BCIs force the brain to follow pre-set commands, whereas we’re developing technology that enables genuine neural dialogue, in which both systems adapt to each other in real time.’

Capturing the Brain’s ‘First-Person Experience’

A central focus of Prof Spapé’s research involves advancing BCI systems beyond simple movement decoding to capturing the motivational dimension of actions: what a person truly desires and how it feels to perform an action. This work builds on a fundamental insight from cognitive science:

[Photo] Prof Michiel Spapé’s co-authored book and academic monograph