This deposit contains the best pretrained and finetuned checkpoints per encoder and dataset for 36 SSL + encoder combinations across 6 HAR datasets, accompanying the paper:

Da Luz, G. P. C. P.; Soto, D. H. P.; Napoli, O. O.; Rocha, A.; Boccato, L.; Borin, E. "Benchmarking Encoders and Self-Supervised Learning for Smartphone-Based Human Activity Recognition", IEEE Access, vol. 14, pp. 37451–37475, 2026. DOI: 10.1109/ACCESS.2026.3669412

🔗 Code & reproduction scripts: github.com/H-IAAC/benchmarking-encoders-ssl-har

## 📦 What's Inside

Each of the 36 combinations provides two checkpoints:

- `*_pretrained.ckpt`: SSL-pretrained backbone (no labels used)
- `*_finetuned.ckpt`: best finetuned model (full fine-tuning, best of 3 seeds)

All models use the DAGHAR standardized view (6 IMU channels, window size 60). Activity classes: sit, stand, walk, stair up, stair down, run (where present).

## 🗂️ File Naming

All checkpoints are in the root of the deposit. File names follow the pattern:

```
{ssl}_{encoder}_{dataset}_pretrained.ckpt
{ssl}_{encoder}_{dataset}_finetuned.ckpt
```

| Component | Options |
| --- | --- |
| Datasets | `kh` (KuHar), `ms` (MotionSense), `rw-thigh` (RealWorld Thigh), `rw-waist` (RealWorld Waist), `uci` (UCI-HAR), `wisdm` (WISDM) |
| Encoders | `ts2vec`, `cnnpff`, `resnetse5`, `rnn`, `imutransformer`, `tstcc` |
| SSL Methods | `lfr`, `tfc`, `diet` |

## 🚀 Quick Start

📓 Interactive notebook (recommended): use `ssl_har_model_zoo.ipynb` to select a model, download its checkpoints automatically, and then evaluate or fine-tune in a few cells.

Minimal usage (example: LFR + TS2Vec on MotionSense):

```shell
pip install minerva

# Download the checkpoint from this Zenodo deposit
wget https://zenodo.org/records/19301058/files/lfr_ts2vec_ms_pretrained.ckpt
```

```python
import torch

from minerva.models.adapters import MaxPoolingTransposingSqueezingAdapter
from minerva.models.loaders import FromPretrained
from minerva.models.nets.base import SimpleSupervisedModel
from minerva.models.nets.mlp import MLP
from minerva.models.nets.tnc import TSEncoder

# 1. Build the backbone architecture
backbone_arch = TSEncoder(
    input_dims=6,
    output_dims=320,
    hidden_dims=64,
    depth=10,
    permute=True,
)

# 2. Load the weights from the checkpoint
backbone = FromPretrained(
    model=backbone_arch,
    ckpt_path="lfr_ts2vec_ms_pretrained.ckpt",
    filter_keys=["backbone"],
    keys_to_rename={"backbone.": ""},
)

# 3. Assemble the full model
model = SimpleSupervisedModel(
    backbone=backbone,
    fc=MLP([320, 128, 6]),
    loss_fn=torch.nn.CrossEntropyLoss(),
    adapter=MaxPoolingTransposingSqueezingAdapter(kernel_size=60),
    flatten=False,
)
```

Head input sizes, adapter requirements, and data loading for all 36 combinations are documented in the notebook and in the GitHub repository.

## 📊 Best Results per Dataset

### KuHar (6 classes)

| Encoder | SSL | Acc (%) |
| --- | --- | --- |
| TS2Vec | TF-C | 90.3 |
| CNN-PFF | TF-C | 81.2 |
| ResNet-SE-5 | DIET | 80.6 |
| IMU Transformer | LFR | 77.1 |
| RNN | TF-C | 74.3 |
| TS-TCC | TF-C | 73.6 |

### MotionSense (6 classes)

| Encoder | SSL | Acc (%) |
| --- | --- | --- |
| TS2Vec | LFR | 97.5 |
| CNN-PFF | TF-C | 95.0 |
| ResNet-SE-5 | LFR | 93.8 |
| TS-TCC | LFR | 92.8 |
| RNN | TF-C | 92.0 |
| IMU Transformer | TF-C | 89.7 |

### RealWorld Thigh (6 classes)

| Encoder | SSL | Acc (%) |
| --- | --- | --- |
| ResNet-SE-5 | TF-C | 82.8 |
| CNN-PFF | TF-C | 81.5 |
| TS2Vec | TF-C | 81.2 |
| RNN | TF-C | 81.0 |
| TS-TCC | LFR | 74.9 |
| IMU Transformer | LFR | 72.2 |

### RealWorld Waist (6 classes)

| Encoder | SSL | Acc (%) |
| --- | --- | --- |
| TS2Vec | TF-C | 82.5 |
| RNN | TF-C | 82.1 |
| CNN-PFF | TF-C | 81.8 |
| IMU Transformer | LFR | 80.1 |
| ResNet-SE-5 | TF-C | 79.2 |
| TS-TCC | TF-C | 76.5 |

### UCI-HAR (5 classes, no Run)

| Encoder | SSL | Acc (%) |
| --- | --- | --- |
| CNN-PFF | TF-C | 96.2 |
| TS2Vec | TF-C | 96.1 |
| ResNet-SE-5 | DIET | 95.9 |
| RNN | TF-C | 94.9 |
| TS-TCC | TF-C | 94.3 |
| IMU Transformer | TF-C | 92.3 |

### WISDM (4 classes: sit, stand, walk, run)

| Encoder | SSL | Acc (%) |
| --- | --- | --- |
| CNN-PFF | TF-C | 91.2 |
| TS2Vec | TF-C | 90.7 |
| RNN | TF-C | 89.2 |
| IMU Transformer | TF-C | 89.2 |
| TS-TCC | TF-C | 88.9 |
| ResNet-SE-5 | TF-C | 86.9 |

## 📖 Citation

```bibtex
@article{daLuz2026benchmarking,
  author  = {Da Luz, Gustavo P. C. P. and Soto, Darlinne H. P. and Napoli, Otávio O. and Rocha, Anderson and Boccato, Levy and Borin, Edson},
  journal = {IEEE Access},
  title   = {Benchmarking Encoders and Self-Supervised Learning for Smartphone-Based Human Activity Recognition},
  year    = {2026},
  volume  = {14},
  pages   = {37451--37475},
  doi     = {10.1109/ACCESS.2026.3669412}
}
```
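Since every checkpoint name encodes its SSL method, encoder, and dataset, the `{ssl}_{encoder}_{dataset}_{stage}.ckpt` pattern can be handled programmatically. A minimal sketch, assuming only the component codes listed in the File Naming section; the helpers `checkpoint_name` and `parse_checkpoint_name` are our own illustrations, not part of the deposit or the `minerva` library:

```python
# Hypothetical helpers for composing and parsing checkpoint file names
# following the {ssl}_{encoder}_{dataset}_{stage}.ckpt pattern.

SSL_METHODS = {"lfr", "tfc", "diet"}
ENCODERS = {"ts2vec", "cnnpff", "resnetse5", "rnn", "imutransformer", "tstcc"}
DATASETS = {"kh", "ms", "rw-thigh", "rw-waist", "uci", "wisdm"}
STAGES = {"pretrained", "finetuned"}


def checkpoint_name(ssl: str, encoder: str, dataset: str,
                    stage: str = "pretrained") -> str:
    """Build a checkpoint file name, validating each component code."""
    assert ssl in SSL_METHODS and encoder in ENCODERS
    assert dataset in DATASETS and stage in STAGES
    return f"{ssl}_{encoder}_{dataset}_{stage}.ckpt"


def parse_checkpoint_name(name: str) -> dict:
    """Split a checkpoint file name back into its four components.

    Dataset codes may contain hyphens (rw-thigh, rw-waist) but never
    underscores, so splitting on "_" is unambiguous.
    """
    stem = name.removesuffix(".ckpt")
    ssl, encoder, dataset, stage = stem.split("_")
    return {"ssl": ssl, "encoder": encoder, "dataset": dataset, "stage": stage}


print(checkpoint_name("lfr", "ts2vec", "ms"))
# lfr_ts2vec_ms_pretrained.ckpt
```

The same builder can be used to construct the Zenodo download URL for any of the 36 combinations by appending the name to the deposit's `files/` path.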
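All models expect the DAGHAR standardized view: fixed windows of 60 time steps over 6 IMU channels. As an illustration only (the actual preprocessing lives in the reproduction scripts in the GitHub repository, and `segment_windows` is our own helper), a NumPy sketch of cutting a raw `(time, channels)` stream into such windows:

```python
import numpy as np


def segment_windows(stream: np.ndarray, window: int = 60,
                    stride: int = 60) -> np.ndarray:
    """Cut a (time, channels) IMU stream into fixed-length windows.

    With stride == window the windows are non-overlapping.
    Returns an array of shape (num_windows, window, channels).
    """
    n = (stream.shape[0] - window) // stride + 1
    return np.stack([stream[i * stride: i * stride + window] for i in range(n)])


# A fake IMU stream: 200 samples over 6 channels.
stream = np.random.randn(200, 6)
windows = segment_windows(stream)
print(windows.shape)  # (3, 60, 6)
```

Each `(60, 6)` window is one model input; whether the model consumes it as `(batch, time, channels)` or transposed depends on the encoder (e.g. `TSEncoder` above takes `permute=True`), which is documented per combination in the notebook.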