Restful sleep is essential for health, yet many children with Attention Deficit Hyperactivity Disorder (ADHD) experience disturbances such as delayed sleep onset, shorter total sleep time, frequent awakenings, and daytime fatigue. Accurate detection of these issues is important for clinical care, but existing tools have limitations: polysomnography is costly and complex, while wrist-worn devices often miss subtle movement or physiological changes. This study introduces a deep learning approach using data from RestEaze, a leg-worn multimodal wearable that records photoplethysmography (PPG), accelerometer and gyroscope motion, and temperature signals. Overnight recordings were collected from 14 children referred for ADHD evaluation. A Support Vector Machine (SVM) using handcrafted features was implemented as a traditional baseline. Two hybrid convolutional-recurrent (CNN-BiLSTM) models were then developed, applying early and late fusion of the raw multimodal inputs to classify sleep and wake states in short windows. The late-fusion model achieved an area under the ROC curve (AUC) of 90.94% under five-fold cross-validation. From the predicted sleep/wake sequences, standard summary metrics were derived: total sleep time, wake after sleep onset, sleep onset latency, and number of awakenings. A temporal label-smoothing step further improved the consistency of the predictions. These findings demonstrate the feasibility of leg-based multimodal sensing and deep learning for noninvasive sleep monitoring in pediatric neurodevelopmental populations.
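To make the post-processing pipeline concrete, the sketch below shows one plausible form of the two steps named above: temporal label smoothing of per-window sleep/wake predictions, followed by derivation of the summary metrics (total sleep time, sleep onset latency, wake after sleep onset, awakenings). The window length (30 s), the sliding majority-vote smoother, and the metric definitions are illustrative assumptions; the paper's exact choices may differ.

```python
def smooth_labels(labels, k=3):
    """Temporal label smoothing via a sliding majority vote over k windows.

    This is a hypothetical smoother, not necessarily the paper's method:
    each window's label is replaced by the majority label of its
    k-window neighborhood, removing isolated misclassifications.
    labels: list of 0 (wake) / 1 (sleep) per window.
    """
    half = k // 2
    out = []
    for i in range(len(labels)):
        seg = labels[max(0, i - half): i + half + 1]
        out.append(1 if 2 * sum(seg) > len(seg) else 0)
    return out


def sleep_metrics(labels, epoch_min=0.5):
    """Derive summary sleep metrics from per-window labels (assumed 30 s windows).

    Definitions used here (illustrative):
      SOL  - minutes of wake before the first sleep window
      TST  - minutes scored as sleep between first and last sleep window
      WASO - minutes scored as wake within that same sleep period
      awakenings - sleep-to-wake transitions within the sleep period
    """
    if 1 not in labels:
        return {"TST": 0.0, "SOL": len(labels) * epoch_min,
                "WASO": 0.0, "awakenings": 0}
    onset = labels.index(1)                          # first sleep window
    last = len(labels) - 1 - labels[::-1].index(1)   # last sleep window
    period = labels[onset: last + 1]
    tst = sum(period) * epoch_min
    waso = (len(period) - sum(period)) * epoch_min
    awakenings = sum(1 for a, b in zip(period, period[1:]) if a == 1 and b == 0)
    return {"TST": tst, "SOL": onset * epoch_min,
            "WASO": waso, "awakenings": awakenings}
```

For example, smoothing `[0, 0, 1, 1, 0, 1, 1, 1]` with `k=3` fills the single spurious wake window, after which the metric derivation reports one fewer awakening; this is exactly the kind of consistency gain the label-smoothing step targets.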