# VABAM

🎉 **VABAM v1.0.0 — Official Release**

Official code release accompanying the paper: *Shape and Amplitude Decoupling in Pulsatile Physiological Signal Synthesis and Its Evaluation*

Junetae Kim, Kyoungsuk Park, Lei Chen, Kyunglim Kim

## Overview

VABAM (Variational Autoencoder for Amplitude-based Biosignal Augmentation within Morphological Shape) is a generative framework for pulsatile physiological signals (ABP, ECG) that decouples waveform shape from amplitude dynamics via cascaded filtering. VABAM structurally partitions the generative process into dedicated pathways for each component: learnable cutoff frequencies define spectral retention at each filtering stage under a uniform prior constraint, while waveform shape is encoded through Gaussian-prior latent variables. This design enables targeted amplitude modulation while preserving waveform shape. The repository also includes CMI-based evaluation metrics for principled, information-theoretic assessment of structural preservation and amplitude controllability.

## Demo

A lightweight demo is available for quick testing of signal generation and visualization; no training is required.

- Demo dataset: `Data/Demo`, a small synthetic dataset generated with NeuroKit2
- Demo notebook: `DemoVisualizationSig.ipynb`

> **Note:** Demo files contain names such as MIMIC or VitalDB for pipeline compatibility only; they are not real records from those sources.

## Data

### Data Sources

The original study used data from:

- MIMIC-III Waveform Database: https://physionet.org/content/mimic3wdb/1.0/
- VitalDB: https://vitaldb.net/

Both datasets are subject to their respective Data Use Agreements. Original data are not redistributed in this repository.
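To make the Overview's cascaded-filtering idea concrete, below is a minimal, illustrative sketch of successive low-pass stages with progressively narrower cutoffs applied to a toy pulsatile signal. This is not VABAM's implementation: the cutoffs here are fixed constants and the filter is an ordinary Butterworth low-pass (via SciPy), whereas in VABAM the cutoffs are learnable parameters inside the model.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def cascaded_lowpass(signal, cutoffs_hz, fs):
    """Apply successive low-pass stages; return each stage's output.

    Illustrative only: fixed cutoffs, 2nd-order Butterworth,
    zero-phase filtering so waveform timing is preserved.
    """
    stages = []
    x = signal
    for fc in cutoffs_hz:
        b, a = butter(N=2, Wn=fc, btype="low", fs=fs)
        x = filtfilt(b, a, x)
        stages.append(x)
    return stages

fs = 125  # a typical ABP sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)
# Toy pulsatile signal: ~1.2 Hz fundamental plus a 6 Hz harmonic
sig = np.sin(2 * np.pi * 1.2 * t) + 0.5 * np.sin(2 * np.pi * 6 * t)
# Each stage retains a narrower spectral band of the input
outs = cascaded_lowpass(sig, cutoffs_hz=[10.0, 3.0], fs=fs)
```

The second stage (3 Hz cutoff) removes the 6 Hz harmonic while keeping the fundamental, which is the sense in which a cutoff frequency "defines spectral retention" at each stage.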
### Synthetic Dataset

A larger synthetic dataset can be generated locally by running the following notebooks, in order, from the `./Data` folder:

1. `Demodataset.ipynb`: generates synthetic signals (dataset size is configurable)
2. `DemoProcessing_mu_law_encode_sampling.ipynb`: processes signals for WaveNet-based benchmarks (skip this step if you are not running those benchmarks)

Output files are saved to `./Data/ProcessedData`.

> **Note:** Four reference files (`Mimic3SigMax`, `Mimic3SigMin`, `VitalDBSigMax`, `VitalDBSigMin`) are pre-provided but contain arbitrary placeholder values only and do not represent any real data.

## Environment

| Dependency | Version |
|---|---|
| Python | 3.8.16 / 3.9.18 |
| numpy | 1.19.5 / 1.26.0 |
| pandas | 1.1.4 / 2.1.1 |
| tensorflow | 2.4.0 / 2.10.0 |
| GPU | RTX 3090 Ti / 4080 / 4090 |

> **Note:** On NVIDIA 50-series GPUs (e.g., RTX 5090), CUDA/cuDNN compatibility issues may occur with the GRU implementation. Switch to CPU execution if needed.

## Citation

```bibtex
@software{kim2025vabam_code,
  author = {Kim, Junetae and Park, Kyoungsuk and Chen, Lei and Kim, Kyunglim},
  title  = {Variational Autoencoder for Amplitude-based Biosignal Augmentation within Morphological Shape},
  doi    = {10.5281/zenodo.19351272},
  url    = {https://doi.org/10.5281/zenodo.19351272},
  year   = {2025}
}
```

## License

MIT License; see `LICENSE` for details.
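## Appendix: Mu-law Companding (Background)

As background for the WaveNet preprocessing step in the Synthetic Dataset section, the sketch below shows standard mu-law companding: a signal in [-1, 1] is compressed logarithmically and quantized to 256 integer codes. This is a generic illustration; the actual preprocessing (and any parameter choices) lives in `DemoProcessing_mu_law_encode_sampling.ipynb` and may differ.

```python
import numpy as np

def mu_law_encode(x, mu=255):
    """Map a signal in [-1, 1] to integer codes in [0, mu]."""
    x = np.clip(x, -1.0, 1.0)
    # Logarithmic compression, then uniform quantization
    compressed = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    return ((compressed + 1) / 2 * mu + 0.5).astype(np.int32)

def mu_law_decode(codes, mu=255):
    """Approximately invert the companding back to [-1, 1]."""
    y = 2 * (codes.astype(np.float64) / mu) - 1
    return np.sign(y) * ((1 + mu) ** np.abs(y) - 1) / mu

x = np.linspace(-1, 1, 5)
codes = mu_law_encode(x)          # integer codes in [0, 255]
recovered = mu_law_decode(codes)  # close to x up to quantization error
```

Mu-law quantization is what lets WaveNet-style models treat next-sample prediction as a 256-way classification problem rather than a regression.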