This repository provides a set of Python examples to help you get started with the Emotiv Cortex API. Each script demonstrates a specific workflow, making it easier to understand and integrate Cortex API features into your own projects.
- Python 2.7+ or Python 3.4+
- Install dependencies:
pip install websocket-client
pip install python-dispatch
Before running the examples, please ensure you have completed the following steps:
- Download and Install EMOTIV Launcher: Download it from the EMOTIV website, then log in with your Emotiv ID and accept the latest Terms of Use, Privacy Policy, and EULA in the Launcher.
- Accept Policies: If prompted, accept any additional policies in the EMOTIV Launcher.
- Obtain an EMOTIV Headset or Create a Virtual Device:
- Purchase a headset from the EMOTIV online store, or
- Use a virtual headset in the EMOTIV Launcher (see the Launcher documentation for setup instructions).
- Create your Cortex App: When you register your Cortex app, you receive a Client ID and Client Secret, which uniquely identify your application. For instructions, visit: https://emotiv.gitbook.io/cortex-api#create-a-cortex-app
Central wrapper class for the Cortex API. Handles:
- Opening and managing the websocket connection
- Building JSON-RPC requests
- Handling responses, errors, and emitting events to corresponding classes
- Parsing and dispatching data to workflow scripts
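The request-building side of such a wrapper can be sketched as follows. The method and parameter names (`authorize`, `clientId`, `clientSecret`) follow the public Cortex API documentation; the helper function and placeholder credentials are illustrative, not the repository's actual implementation.

```python
import itertools
import json

# Each JSON-RPC 2.0 request carries an auto-incrementing id so that
# responses can be matched back to the call that produced them.
_ids = itertools.count(1)

def build_request(method, params=None):
    """Serialize a JSON-RPC 2.0 request for the Cortex websocket."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": method,
        "params": params or {},
    })

# Example: the 'authorize' call that exchanges app credentials for a
# Cortex token (the credential values below are placeholders).
request = build_request("authorize", {
    "clientId": "<your-client-id>",
    "clientSecret": "<your-client-secret>",
})
print(request)
```

The wrapper sends the serialized string over the websocket and matches the `id` field of each response to the pending request.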
Demonstrates how to:
- Subscribe to data streams (EEG, motion, performance metrics, etc.)
- Print or process incoming data

See: Data Subscription
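A minimal sketch of this workflow, assuming the documented `subscribe` method and stream names (`eeg`, `mot`, `met`): subscribed data arrives as JSON messages whose top-level key names the stream, so a small dispatcher can route each message to a per-stream handler. The sample payload below is made up for illustration.

```python
import json

def build_subscribe(cortex_token, session_id, streams):
    """JSON-RPC request subscribing a session to the given data streams."""
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "subscribe",
        "params": {
            "cortexToken": cortex_token,
            "session": session_id,
            "streams": streams,  # e.g. ["eeg", "mot", "met"]
        },
    }

def dispatch_stream(message, handlers):
    """Route one incoming data message to the handler for its stream."""
    data = json.loads(message)
    for stream, handler in handlers.items():
        if stream in data:
            handler(data[stream], data.get("time"))

# Usage: collect EEG samples as they arrive (payload values invented).
received = []
dispatch_stream(
    '{"eeg": [4.0, 5.0], "sid": "abc", "time": 1.23}',
    {"eeg": lambda samples, t: received.append(samples)},
)
print(received)
```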
Demonstrates how to:
- Create a new record
- Stop a record
- Export recorded data to CSV or EDF
- Query records and request to download record data

See: Records
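The record lifecycle above maps onto three documented Cortex calls: `createRecord`, `stopRecord`, and `exportRecord`. The sketch below builds the request bodies only; parameter names follow the Cortex docs, while the stream selection and folder path are illustrative assumptions.

```python
def create_record(token, session, title):
    """Start a new record in the given session."""
    return {"jsonrpc": "2.0", "id": 1, "method": "createRecord",
            "params": {"cortexToken": token, "session": session,
                       "title": title}}

def stop_record(token, session):
    """Stop the record currently running in the session."""
    return {"jsonrpc": "2.0", "id": 2, "method": "stopRecord",
            "params": {"cortexToken": token, "session": session}}

def export_record(token, record_ids, folder, fmt="CSV"):
    """Export finished records to disk; format is "CSV" or "EDF"."""
    return {"jsonrpc": "2.0", "id": 3, "method": "exportRecord",
            "params": {"cortexToken": token, "recordIds": record_ids,
                       "folder": folder, "format": fmt,
                       # streamTypes picks which data to include (example set)
                       "streamTypes": ["EEG", "MOTION"]}}
```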
Demonstrates how to:
- Inject markers into a record during data collection
- Export records with marker information

See: Markers
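Marker injection uses the documented `injectMarker` method; the field names below follow the Cortex docs, while the label, value, and port strings are placeholders.

```python
import time

def inject_marker(token, session, label, value, port="python-app"):
    """Request adding a marker to the running record at the current time.

    "time" is milliseconds since the epoch, "value" is a string or
    integer, and "port" names the marker source.
    """
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "injectMarker",
        "params": {
            "cortexToken": token,
            "session": session,
            "label": label,
            "value": value,
            "port": port,
            "time": int(time.time() * 1000),
        },
    }

# Usage: mark the onset of a (hypothetical) stimulus event.
req = inject_marker("<token>", "<session-id>", "stimulus", 41)
print(req["params"]["label"])
```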
Demonstrates how to:
- Load or create a training profile
- Train mental command actions (e.g., neutral, push, pull)

See: BCI
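Training is driven by the documented `training` method: per the Cortex docs, `status` steps through values such as `"start"` and then `"accept"` or `"reject"` for each round. The helper below only builds the request; the token and session values are placeholders.

```python
def training_request(token, session, action, status,
                     detection="mentalCommand"):
    """Build one step of a training round for the given detection."""
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "training",
        "params": {
            "cortexToken": token,
            "detection": detection,
            "session": session,
            "action": action,   # e.g. "neutral", "push", "pull"
            "status": status,   # e.g. "start", "accept", "reject"
        },
    }
```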
Demonstrates how to:
- Load or create a training profile
- Train facial expression actions (e.g., neutral, surprise, smile)

See: BCI
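Both training workflows begin by loading or creating a profile via the documented `setupProfile` method (facial expression training then reuses the `training` call with `detection` set to `"facialExpression"`). A sketch of the profile call, with placeholder headset and profile names:

```python
def setup_profile(token, headset_id, profile_name, status):
    """Load, create, or save a training profile.

    Per the Cortex docs, status is one of "create", "load",
    "unload", "save", "rename", or "delete".
    """
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "setupProfile",
        "params": {
            "cortexToken": token,
            "headset": headset_id,
            "profile": profile_name,
            "status": status,
        },
    }

# Usage: load an existing (hypothetical) profile onto a headset.
req = setup_profile("<token>", "<headset-id>", "demo-profile", "load")
print(req["params"]["status"])
```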
Demonstrates how to:
- Load a trained profile
- Subscribe to the 'com' stream for live mental command data
- (Optionally) Subscribe to the 'fac' stream for live facial expression data
- Get and set sensitivity for mental command actions in live mode

See: Advanced BCI
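Sensitivity is handled by the documented `mentalCommandActionSensitivity` method; per the Cortex docs, values are integers from 1 to 10, one per trained action. The helper below is a sketch that switches between the get and set forms of the call.

```python
def action_sensitivity(token, profile, values=None):
    """Get or set mental command sensitivity for a profile.

    With no values, builds a "get" request; with a list of
    integers (1-10), builds a "set" request.
    """
    params = {
        "cortexToken": token,
        "profile": profile,
        "status": "set" if values else "get",
    }
    if values:
        params["values"] = values
    return {"jsonrpc": "2.0", "id": 1,
            "method": "mentalCommandActionSensitivity",
            "params": params}
```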
- Each script is self-contained and demonstrates a specific workflow.
- Adjust the code as needed for your own applications.
- For more details, refer to the official Cortex API documentation.