
The project is over

Now that the project has officially ended, this is a good opportunity to reflect on our key research advancements, as evidenced by our publications.

Temporal consistency estimates based on neural networks for gaze and visual data

This is one of our key achievements, where a baseline convolutional neural network proved effective in learning to predict the attention score as defined in previous work. We have published our results in Brainsourcing for Temporal Visual Attention Estimation, in the prestigious Biomedical Engineering Letters journal [1].
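
As an illustration only (not the published architecture), a baseline of this kind can be as simple as a small 1D convolutional network that regresses a scalar attention score from a multichannel signal window; all layer sizes and variable names below are our own assumptions.

    import torch
    import torch.nn as nn

    class AttentionScoreCNN(nn.Module):
        """Illustrative baseline: 1D CNN regressing a scalar attention score
        from a window of multichannel gaze/EEG features (hypothetical setup)."""
        def __init__(self, n_channels=8):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(n_channels, 16, kernel_size=7, padding=3),
                nn.ReLU(),
                nn.MaxPool1d(4),
                nn.Conv1d(16, 32, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),  # collapse the time axis
            )
            self.head = nn.Linear(32, 1)  # scalar attention score

        def forward(self, x):  # x: (batch, channels, time)
            return self.head(self.features(x).squeeze(-1))

    # Example: scores for a batch of 4 windows, 8 channels, 256 samples each.
    model = AttentionScoreCNN()
    scores = model(torch.randn(4, 8, 256))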

Late and early fusion of multimodal representations of imagery data

This achievement was reported in the research paper The Elements of Visual Art Recommendation: Learning Latent Semantic Representations of Paintings [2], presented at the ACM Conference on Human Factors in Computing Systems (CHI), the top conference in Human-Computer Interaction, and in the follow-up paper Together Yet Apart: Multimodal Representation Learning for Personalised Visual Art Recommendation [3], presented at the ACM Conference on User Modeling, Adaptation and Personalization (UMAP), the oldest international conference for researchers and practitioners working on user-adaptive computer systems.
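
To illustrate the difference between the two fusion strategies (a minimal sketch with made-up feature dimensions, not the models from [2] or [3]): early fusion combines modality features before the model, whereas late fusion combines per-modality scores at the end.

    import numpy as np

    def early_fusion(img_emb, txt_emb):
        """Early fusion: concatenate modality embeddings before any joint model."""
        return np.concatenate([img_emb, txt_emb], axis=-1)

    def late_fusion(img_score, txt_score, w=0.5):
        """Late fusion: combine per-modality recommendation scores at the end."""
        return w * img_score + (1.0 - w) * txt_score

    # Hypothetical painting with a 512-d image embedding and a 300-d text embedding.
    fused_features = early_fusion(np.random.rand(512), np.random.rand(300))  # shape (812,)
    fused_score = late_fusion(img_score=0.8, txt_score=0.6)                  # 0.7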

Multi-device multi-computer offline signal synchronization software

This achievement was reported in the research paper Gustav: Cross-device Cross-computer Synchronization of Sensory Signals [4], presented at the ACM Symposium on User Interface Software and Technology (UIST), the premier forum for software innovations in Human-Computer Interaction. We have recently published an improved system that we have called Thalamus [5] in the ACM UMAP’25 conference, to be presented in June in New York, USA.

Crowdsourcing via brain-computer interfacing

This has been addressed in two different settings: on the one hand, for fNIRS signals elicited by static (image) stimuli; and on the other hand, for EEG signals elicited by dynamic (video) stimuli. In both settings, the effectiveness of crowdsourcing mechanisms has been demonstrated. The first work, on images and fNIRS, has already been published in the top journal in Affective Computing: Crowdsourcing Affective Annotations via fNIRS-BCI [6].

Information retrieval based on emotions captured via brain-computer interfacing

This achievement was reported in Feeling Positive? Predicting Emotional Image Similarity from Brain Signals [7], presented at ACM Multimedia (MM), one of the best conferences in Computer Science (ranked CORE A*, among the top 7% of all CS conferences).

Generative model control and conditioning via brain input

Generative models controlled by brain input are now a reality, as we reported in Cross-Subject EEG Feedback for Implicit Image Generation [8], Perceptual Visual Similarity from EEG: Prediction and Image Generation [9] and Brain-Computer Interface for Generating Personally Attractive Images [10].

Semantic similarity estimation via brain-computer interfacing

This achievement was reported in Affective Relevance: Inferring Emotional Responses via fNIRS Neuroimaging [11], presented at the ACM conference on Research and Development in Information Retrieval (SIGIR), the premier forum for researchers and practitioners interested in data mining and information retrieval research. In addition, the basic research underlying this achievement was reported in The P3 indexes the distance between perceived and target image [12], published in the Psychophysiology journal.

Consistency estimates of brain signals

Consistency is closely related to the crowdsourcing concept (see above) but can also be considered from additional perspectives, such as temporal consistency, which we have also worked on. This is a highly challenging task due to the limited data available and the large inter- and intra-subject variability of brain signals. We have published our results in Affective annotation of videos from EEG-based crowdsourcing [13], in the Pattern Analysis and Applications journal (in press).

References

  1. Y. Moreno-Alcayde, T. Ruotsalo, L. A. Leiva, V. J. Traver. Brainsourcing for temporal visual attention estimation. Biomed. Eng. Lett. 15, 2025. https://doi.org/10.1007/s13534-024-00449-1
  2. B. A. Yilma, L. A. Leiva. The Elements of Visual Art Recommendation: Learning Latent Semantic Representations of Paintings. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI), 2023. https://doi.org/10.1145/3544548.3581477
  3. B. A. Yilma, L. A. Leiva. Together Yet Apart: Multimodal Representation Learning for Personalised Visual Art Recommendation. Proceedings of the 31st ACM Conference on User Modeling, Adaptation and Personalization (UMAP), 2023. https://doi.org/10.1145/3565472.3592964
  4. K. Latifzadeh, L. A. Leiva. Gustav: Cross-device Cross-computer Synchronization of Sensory Signals. Adjunct Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology (UIST Adjunct), 2022. https://doi.org/10.1145/3526114.3558723
  5. K. Latifzadeh, L. A. Leiva. Thalamus: A User Simulation Toolkit for Prototyping Multimodal Sensing Studies. Adjunct Proceedings of the 33rd ACM Conference on User Modeling, Adaptation and Personalization (UMAP), 2025. In press.
  6. T. Ruotsalo, K. MÀkelÀ, M. Spapé. Crowdsourcing Affective Annotations via fNIRS-BCI. IEEE Trans. Affect. Comput. 15(1), 2024. https://doi.org/10.1109/TAFFC.2023.3273916
  7. T. Ruotsalo, K. MÀkelÀ, M. M. Spapé, L. A. Leiva. Feeling Positive? Predicting Emotional Image Similarity from Brain Signals. Proceedings of the 31st ACM International Conference on Multimedia (MM), 2023. https://doi.org/10.1145/3581783.3613442
  8. C. de la Torre-Ortiz, M. M. Spapé, N. Ravaja, T. Ruotsalo. Cross-Subject EEG Feedback for Implicit Image Generation. IEEE Trans. Cybern. 54(10), 2024. https://doi.org/10.1109/TCYB.2024.3406159
  9. C. de la Torre-Ortiz, T. Ruotsalo. Perceptual Visual Similarity from EEG: Prediction and Image Generation. Proceedings of the 31st ACM International Conference on Multimedia (MM), 2024. https://doi.org/10.1145/3664647.3685508
  10. M. Spapé, K. M. Davis, L. Kangassalo, N. Ravaja, Z. SovijÀrvi-Spapé, T. Ruotsalo. Brain-Computer Interface for Generating Personally Attractive Images. IEEE Trans. Affect. Comput. 14(1), 2023. https://doi.org/10.1109/TAFFC.2021.3059043
  11. T. Ruotsalo, K. MÀkelÀ, M. M. Spapé, L. A. Leiva. Affective Relevance: Inferring Emotional Responses via fNIRS Neuroimaging. Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR), 2023. https://doi.org/10.1145/3539618.3591946
  12. C. de la Torre-Ortiz, M. Spapé, T. Ruotsalo. The P3 indexes the distance between perceived and target image. Psychophysiol. 60(5), 2023. https://doi.org/10.1111/psyp.14225
  13. Y. Moreno-Alcayde, T. Ruotsalo, L. A. Leiva, V. J. Traver. Affective annotation of videos from EEG-based crowdsourcing. Pattern Analysis and Applications, 2025. In Press.


BANANA visit in Torun (PL)

Prof. Aleksandra Kawala-Sterniuk from the BANANA team (Polish partner) visited ToruƄ on 14-16 June 2023.

She met the Polish PI of BITSCOPE (Brain Integrated Tagging for Socially Curated Online Personalised Experiences), Prof. Veslava Osinska from the Nicolaus Copernicus University in ToruƄ (Poland), and her team.

BITSCOPE introduces a vision for enhancing social relationships through brain-computer interfaces (BCIs) in virtual environments. Instead of relying on explicit feedback like “likes,” our improved BCI technology captures attention, memorability, and curiosity by passively collecting neural data signatures. This data, refined through machine learning, can be used by recommender systems to create better online experiences. Our work focuses on developing a passive hybrid BCI (phBCI) that combines electroencephalography, eye tracking, galvanic skin response, heart rate, and movement to estimate the user’s mental state without interrupting their immersion. This approach improves signal quality, denoising capabilities, and adaptability to home environments. We leverage deep learning, geometrical approaches, and large datasets to address user state classification, including attention, curiosity, and memorability. These advancements are achieved through co-designed user-centered experiments.

BITSCOPE partners:

  • Dublin City University – Ireland (Coordinator)
  • Universitat PolitĂšcnica de ValĂšncia – Spain
  • Centre de Recherche Inria Bordeaux – Sud-Ouest – France
  • Nicolaus Copernicus University – Poland

She and Dariusz Mikolajewski (Co-PI, OUT) also had the pleasure of meeting Prof. Dean J. Krusienski from VCU, leader of the ASPEN Lab.

Dean J. Krusienski, a Senior Member of IEEE, obtained his B.S., M.S., and Ph.D. degrees in electrical engineering from The Pennsylvania State University, University Park, PA, USA. He conducted his Postdoctoral Research at the Brain-Computer Interface Laboratory, Wadsworth Center of the New York State Department of Health. Currently, he holds the position of Professor and Graduate Program Director of biomedical engineering at Virginia Commonwealth University (VCU), Richmond, VA, USA. Additionally, he directs the Advanced Signal Processing in Engineering and Neuroscience (ASPEN) Laboratory at VCU. His research interests encompass biomedical signal processing, machine learning, brain-computer interfaces, and neural engineering.

Aleksandra and Dariusz also spoke to Prof. Wlodzislaw Duch.

Wlodzislaw Duch is a distinguished figure in neuroinformatics and artificial intelligence. He heads the Neurocognitive Laboratory at the Center of Modern Interdisciplinary Technologies and leads the Neuroinformatics and Artificial Intelligence group at the University Centre of Excellence Dynamics, Mathematical Analysis, and Artificial Intelligence. With a Ph.D. in theoretical physics/quantum chemistry and a D.Sc. in applied math, Duch has made significant contributions to the field. He has held prestigious positions, including the President of the European Neural Networks Society executive committee and fellowships in renowned international associations. Duch has an extensive publication record, authored books, and served on the editorial boards of numerous journals. Additionally, he has held visiting professor positions at esteemed institutions worldwide.

We are looking forward to further cooperation! 🙂


AI- and ML-based methods in BCI systems


Brain-computer interfaces (BCIs) enable direct and bidirectional communication between the human brain and computers. The analysis and interpretation of brain signals, which provide valuable information about mental state and brain activity, pose challenges due to their non-stationarity and vulnerability to various interferences. Consequently, research in the BCI field emphasizes the integration of artificial intelligence (AI), particularly in five key areas: calibration, noise reduction, communication, mental state estimation, and motor imagery. The utilization of AI algorithms and machine learning has shown great promise in these applications, primarily because of their capacity to predict and learn from past experiences. As a result, implementing these technologies within medical contexts can provide more accurate insights into the mental state of individuals, mitigate the effects of severe illnesses, and enhance the quality of life for disabled patients.

New paper regarding Brain-Computer Interfaces published:

Barnova, K., Mikolasova, M., Kahankova, R. V., Jaros, R., Kawala-Sterniuk, A., Snasel, V., … & Martinek, R. (2023). Implementation of artificial intelligence and machine learning-based methods in brain-computer interaction. Computers in Biology and Medicine, 107135.

https://www.sciencedirect.com/science/article/pii/S0010482523006005


Inferring Emotional Responses via fNIRS Neuroimaging

Information retrieval (IR) relies on a general notion of relevance, which is used as the principal foundation for ranking and evaluation methods. However, IR does not account for a more nuanced affective experience. In a recently published paper, we consider the emotional response decoded directly from the human brain as an alternative dimension of relevance.

We report an experiment covering seven different scenarios in which we measure and predict how users emotionally respond to visual image contents by using functional near-infrared spectroscopy (fNIRS) neuroimaging on two commonly used affective dimensions: valence (negativity and positivity) and arousal (boredom and excitement). Our results show that affective states can be successfully decoded using fNIRS and utilized to complement the present notion of relevance in IR studies.
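
As a rough sketch of how such decoding can be evaluated (assuming pre-extracted per-trial fNIRS features and binary valence labels; the classifier and feature choices below are illustrative, not the paper's pipeline):

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Placeholder data: one feature vector per trial (e.g. mean HbO/HbR per channel)
    # and a binary valence label (0 = negative, 1 = positive).
    X = np.random.randn(200, 32)
    y = np.random.randint(0, 2, size=200)

    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    accuracies = cross_val_score(clf, X, y, cv=5)  # cross-validated decoding accuracy
    print(accuracies.mean())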

This work will be presented at SIGIR’23 in Taipei, Taiwan. SIGIR is the flagship conference on Information Retrieval.

REFERENCE

Tuukka Ruotsalo, Kalle MĂ€kelĂ€, Michiel SpapĂ© and Luis A. Leiva. Affective Relevance: Inferring Emotional Responses via fNIRS Neuroimaging. Proc. SIGIR’23. https://doi.org/10.1145/3539618.3591946


Prof. Jacek Gwizdka in Luxembourg

This week we hosted Prof. Jacek Gwizdka from The University of Texas at Austin, USA, where he directs the Information eXperience Lab (IX Lab).

Prof. Gwizdka is one of the pioneers of Neuro-Information Science. He studies human-information interaction and retrieval and applies cognitive psychology and neuro-physiological methods to understand information search and improve search experience.

He gave an interesting seminar talk titled “Neuro-physiological evidence as a basis for understanding human-information interaction”. He presented an overview of several projects he has worked on that demonstrate the use of eye tracking and EEG for inferring information relevance.

We had very productive meetings! We hope to materialize our research ideas in future academic papers.


How do we recall known faces?

Visual recognition requires inferring the similarity between a perceived object and a mental target. However, a measure of similarity is difficult to determine when it comes to complex stimuli such as faces. Indeed, people may notice that someone “looks like” a familiar face, but find it hard to describe which features such a comparison is based on. Previous work shows that the number of similar visual elements between a face pictogram and a memorized target correlates with the P300 amplitude in the visual evoked potential. Here, we redefine similarity as the distance inferred from a latent space learned using a state-of-the-art generative adversarial neural network (GAN). A rapid serial visual presentation experiment was conducted with oddball images generated at varying distances from the target to determine how P300 amplitude related to GAN-derived distances.

The results showed that distance-to-target was monotonically related to the P300, showing perceptual identification was associated with smooth, drifting image similarity. Furthermore, regression modeling indicated that while the P3a and P3b sub-components had distinct responses in location, time, and amplitude, they were similarly related to target distance. The work demonstrates that the P300 indexes the distance between perceived and target image in smooth, natural, and complex visual stimuli and shows that GANs present a novel modeling methodology for studying the relationships between stimuli, perception, and recognition.
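
The analysis idea can be sketched as follows (placeholder data and an off-the-shelf rank correlation, not the regression models used in the paper): compute the distance between each oddball image and the target in the GAN latent space, then test for a monotonic relation with single-trial P300 amplitude.

    import numpy as np
    from scipy.stats import spearmanr

    # Placeholder data: GAN latent vectors of the target and of 100 oddball images,
    # plus the single-trial P300 amplitudes measured for those oddballs.
    target_z = np.random.randn(512)
    oddball_z = np.random.randn(100, 512)
    p300_amplitude = np.random.randn(100)

    # Distance to the target in the GAN latent space.
    distances = np.linalg.norm(oddball_z - target_z, axis=1)

    # Monotonic association between latent distance and P300 amplitude.
    rho, p_value = spearmanr(distances, p300_amplitude)
    print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")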

REFERENCE

de la Torre‐Ortiz, C.; SpapĂ©, M.; Ruotsalo, T. (2023). The P3 indexes the distance between perceived and target image. Psychophysiology, e14225. https://doi.org/10.1111/psyp.14225


We hosted Prof. Aleksandra Kawala in Luxembourg

This week the Luxembourg team hosted our Polish partner, Prof. Aleksandra Kawala-Sterniuk from Opole University of Technology. We made good progress on the project and discussed lots of ideas to be developed in the upcoming months.

Looking forward to visiting Poland!


CHIST-ERA Seminar 2023

4.-5. April 2023 – Bratislava, Slovakia

All researchers funded by CHIST-ERA were brought together in Bratislava, including BANANA project team members!

The event was fantastic and we could share our experiences with the other projects funded under the CHIST-ERA IV call “Advanced Brain-Computer Interfaces for Novel Interactions (BCI)”:

GENESIS, BITSCOPE, ReHaB and BANANA!

We presented a poster and a talk, and I (Aleksandra Kawala-Sterniuk) also co-chaired, together with Hakim Si-Mohammed, a session regarding our call!


Contradicted by the Brain

We investigated inferring individual preferences, and their contradiction with group preferences, through direct measurement of the brain. We report an experiment in which brain activity, collected from 31 participants in response to viewing images, is associated with their self-reported preferences. First, we show that brain responses present a graded response to preferences, and that brain responses alone can be used to train classifiers that reliably estimate preferences. Second, we show that brain responses reveal additional preference information that correlates with group preference, even when participants self-reported having no such preference.

Our analysis of brain responses carries significant implications for researchers in general, as it suggests an individual’s explicit preferences are not always aligned with the preferences inferred from their brain responses. These findings call into question the reliability of explicit and behavioral signals. They also imply that additional, multimodal sources of information may be necessary to infer reliable preference information.

REFERENCE

K. M. Davis, M. Spapé and T. Ruotsalo. Contradicted by the Brain: Predicting Individual and Group Preferences via Brain-Computer Interfacing. IEEE Transactions on Affective Computing, https://doi.org/10.1109/TAFFC.2022.3225885


Gustav software

Temporal synchronization of behavioral and physiological signals collected through different devices (and sometimes through different computers) is a longstanding challenge in HCI, neuroscience, psychology, and related areas. Previous research has proposed to synchronize sensory signals using (1) dedicated hardware; (2) dedicated software; or (3) alignment algorithms. All these approaches are either vendor-locked, non-generalizable, or difficult to adopt in practice.

We propose a simple but highly efficient alternative: instrument the stimulus presentation software by injecting supervisory event-related timestamps, followed by a post-processing step over the recorded log files. Armed with this information, we introduce Gustav, our approach to orchestrate the recording of sensory signals across devices and computers. Gustav ensures that all signals coincide exactly with the duration of each experiment condition, with millisecond precision.

Gustav injects a supervisory timing signal that helps orchestrate the experiment conditions across devices and computers, from simple (a) to complex (c) setups.
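
The general idea can be sketched as follows (a minimal illustration of supervisory timestamp logging and post-hoc cropping, not Gustav's actual API):

    import json
    import time

    def log_event(log_path, label):
        """Append a supervisory event marker (label + wall-clock timestamp) to a log file."""
        with open(log_path, "a") as f:
            f.write(json.dumps({"label": label, "t": time.time()}) + "\n")

    def crop_to_condition(samples, timestamps, t_start, t_end):
        """Keep only the signal samples recorded within the marked condition window."""
        return [s for s, t in zip(samples, timestamps) if t_start <= t <= t_end]

    # In the stimulus presentation software:
    #   log_event("events.log", "condition_A_start")
    #   ... present stimuli ...
    #   log_event("events.log", "condition_A_end")
    # In post-processing, the two markers delimit the window used to crop each
    # device's recording, so all signals coincide with the condition duration.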

Gustav is publicly available as open source software: https://gitlab.uni.lu/coin/gustav/

Reference

Kayhan Latifzadeh, Luis A. Leiva. Gustav: Cross-device Cross-computer Synchronization of Sensory Signals. In Adjunct Proc. UIST, 2022. https://dl.acm.org/doi/10.1145/3526114.3558723