The Neural Engineering Data Consortium (NEDC) began annotating large amounts of EEG data in 2012 [1] [2]. The most current release of the Temple University EEG Corpus, TUEG v2.0.1, consists of 26,846 sessions from 14,987 patients. TUEG has become one of the most significant open-source resources available in the community, with over 10,000 researchers subscribed to the corpus. However, the TUEG data [3] consists of pruned EEGs [4] [5]. In clinical settings, technicians condense long-term studies to highlight any potential abnormalities, reducing the burden of reading long-term EEGs and allowing a neurologist to diagnose the patient more accurately and efficiently. This results in data that has been split into a series of shorter files, destroying the continuous nature of the recording. Gaps between files have been discarded (historically to save disk space and reduce review time), which prevents reconstruction of the original continuous signal. This makes it difficult to use the data to develop seizure prediction algorithms, accurately measure false alarm rates, or assess robustness to real-world artifacts such as patient and electrode movement.
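To illustrate the problem of discarded inter-file gaps, the sketch below computes how much signal is missing between consecutive pruned files in a session. The file start times and durations here are hypothetical, not drawn from TUEG; in practice such metadata would be read from each file's EDF header (for example via a library such as pyedflib).

```python
from datetime import datetime, timedelta

# Hypothetical metadata for three pruned files from one session:
# (recording start time, duration in seconds). Real values would
# come from the EDF headers of the pruned files.
files = [
    (datetime(2012, 5, 1, 9, 0, 0), 600),   # ends 09:10:00
    (datetime(2012, 5, 1, 9, 30, 0), 300),  # ends 09:35:00
    (datetime(2012, 5, 1, 10, 0, 0), 900),  # ends 10:15:00
]

def find_gaps(files):
    """Return the discarded interval (in seconds) between each
    pair of consecutive pruned files."""
    gaps = []
    for (start, dur), (next_start, _) in zip(files, files[1:]):
        end = start + timedelta(seconds=dur)
        gaps.append((next_start - end).total_seconds())
    return gaps

print(find_gaps(files))  # → [1200.0, 1500.0]
```

Here 45 minutes of the original recording survive pruning while 45 minutes (two gaps of 20 and 25 minutes) are lost, and nothing in the pruned files themselves recovers what happened in those intervals.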