This app provides a BCI speller based on code-modulated visual evoked potentials (c-VEP). The use of c-VEPs as control signals is a recent but promising alternative for achieving reliable, high-speed BCIs for communication and control. The paradigm typically reaches accuracies above 90% with a very short calibration of only 30 seconds. More information on the paradigm and signal processing can be found in: Martínez-Cagigal, Víctor, et al. "Brain–computer interfaces based on code-modulated visual evoked potentials (c-VEP): a literature review." Journal of Neural Engineering (2021).
C-VEPs are exogenous signals generated naturally by the brain in response to stimuli. For that reason, c-VEP-based BCIs do not require user training, only a short calibration. During the calibration stage, the user is asked to pay attention to a flickering command encoded with the original m-sequence. We recommend recording at least 100 full cycles (i.e., complete stimulations of the m-sequence) to train the model; that is, two runs of 5 trials each, where each trial comprises 10 cycles. It is important to avoid blinking while trials are being displayed; users can blink freely during the inter-trial time window.
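The m-sequences used for encoding are typically generated with a linear-feedback shift register (LFSR). Below is a minimal sketch of the idea; the function name, the feedback taps and the seed are illustrative choices for a length-63 maximal sequence, not MEDUSA's actual API:

```python
def m_sequence(taps, order, seed=None):
    """Binary m-sequence of length 2**order - 1 via a Fibonacci LFSR.

    taps  : register positions XORed to produce the feedback bit. [6, 5]
            is one primitive configuration for 63-bit sequences
            (illustrative choice, not necessarily the app's default).
    order : register length (6 gives the classic 63-bit m-sequence).
    """
    state = list(seed) if seed is not None else [1] * order  # any non-zero seed
    seq = []
    for _ in range(2 ** order - 1):
        seq.append(state[-1])             # output the last register bit
        fb = 0
        for t in taps:
            fb ^= state[t - 1]            # XOR the tapped bits
        state = [fb] + state[:-1]         # shift, inserting the feedback bit
    return seq
```

A maximal-length sequence is balanced (32 ones, 31 zeros for order 6) and every 6-bit window over one period is distinct, which is what makes the circular shifts of a single sequence usable as distinct command codes.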
If your monitor can refresh at 120 Hz, we recommend setting "Target FPS (Hz)" to match the monitor refresh rate. Suppose you are using a 63-bit m-sequence. At a 60 Hz presentation rate, each cycle lasts 1.05 s (i.e., 63/60). At 120 Hz, that duration is halved to 0.525 s (i.e., 63/120).
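The cycle duration is simply the sequence length divided by the presentation rate. A quick check (the helper name is ours, not part of the app):

```python
def cycle_duration(sequence_length, target_fps):
    """Seconds per full stimulation cycle: bits shown / bits per second."""
    return sequence_length / target_fps

# 63-bit m-sequence: 1.05 s at 60 Hz, 0.525 s at 120 Hz
```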
If you are using a 120 Hz presentation rate, we recommend using more than a single filter. For instance, a filter bank composed of 3 IIR band-pass filters with (low, high) cutoffs of (1, 60), (12, 60) and (30, 60) Hz usually gives good results.
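One way to build such a filter bank is sketched below with SciPy. The function name, the Butterworth design and the filter order are our assumptions for illustration, not the app's actual implementation:

```python
import numpy as np
from scipy import signal

def apply_filter_bank(eeg, fs, bands=((1, 60), (12, 60), (30, 60)), order=7):
    """Band-pass an EEG array (samples x channels) with each filter of the bank.

    Returns an array of shape (n_bands, n_samples, n_channels). The cutoffs
    mirror the (low, high) Hz pairs suggested above; the Butterworth order
    is an illustrative choice.
    """
    out = []
    for low, high in bands:
        sos = signal.butter(order, (low, high), btype="bandpass", fs=fs,
                            output="sos")
        out.append(signal.sosfiltfilt(sos, eeg, axis=0))  # zero-phase filtering
    return np.stack(out)
```

Each band-passed copy of the signal is then processed independently, and the per-band decoding scores are combined, so the model can exploit the higher-frequency harmonics elicited at 120 Hz.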
If you want to know more about the paradigm, the signal processing pipeline or the state-of-the-art methods used in c-VEP-based BCIs, we recommend reading the following paper: Martínez-Cagigal, Víctor, et al. "Brain–computer interfaces based on code-modulated visual evoked potentials (c-VEP): a literature review." Journal of Neural Engineering (2021).
Flexible c-VEP Speller: supports multiple sequences, stimulus designs, and decoding algorithms.
Adaptation to v2025 (RHEA)
Introduced parameters for adjusting color opacity and setting an image as the background.
Minor fix to work with configurations built on other computers
Improved exception handling. Users can now also choose whether artifact rejection should be applied during calibration.
Improved the method to assign lags to commands.
Minor fix
Adaptation to v2024 (KRONOS): - Changed from PyQt5 to PySide6 - The app can now save all recorded signals (not just the EEG) - The app detects multiple monitors and warns the user if their refresh rates are not the same
Improved EEG stream detection for streams with an invalid lsl_type.
Updated encoded visualization
Fixed a bug caused by a call to a now-obsolete PyQt5 function
Updated TCPClient in Unity
Fixed a bug where an additional trial was displayed in training.
Initial "cvep_speller" app for MEDUSA Platform v2022.0. This app implements a c-VEP-based BCI speller that uses the circular-shifting paradigm. Currently, only binary m-sequences are supported. Signal processing was implemented following the common "reference method" for circular shifting (i.e., CCA + correlation analysis).
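The decoding step of the circular-shifting paradigm can be sketched as follows. This is a simplification of the reference method: it assumes the EEG has already been reduced to a single time course (e.g., by a CCA spatial filter trained during calibration), and the function name and example lags are illustrative:

```python
import numpy as np

def decode_circular_shift(x, template, lags):
    """Return the index of the command whose lagged template best matches x.

    x        : 1D signal for one stimulation cycle (already spatially filtered)
    template : 1D template response to the zero-lag m-sequence
    lags     : per-command circular shifts, in samples
    """
    scores = [np.corrcoef(x, np.roll(template, lag))[0, 1] for lag in lags]
    return int(np.argmax(scores))
```

Because every command flickers with a circular shift of the same m-sequence, a single template suffices: the selected command is the one whose shifted copy correlates best with the observed cycle.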
Definitely give this one a try if you're looking for practical BCI communication! It's not hard to get 100% accuracy with less than 1 minute of calibration =)
