The Data Science Project Behind Spleeft

In this article, I share — from a technical and reflective perspective — the learning and development process behind the latest optimization of the Spleeft algorithm. I’ll explain how the original system, developed four years ago, works and how, over the past month, I’ve fine-tuned its parameters to improve both accuracy and processing speed.


DOWNLOAD THE SPLEEFT APP NOW FOR iOS, ANDROID, AND APPLE WATCH!

How does the algorithm work?

The Spleeft algorithm measures vertical velocity by integrating acceleration data. The concentric phase is detected using minimum velocity thresholds to mark the start and end of a repetition, and a minimum peak velocity of 0.3 m/s is required to validate the movement. This phase is the most clearly defined and predictable, which allows for more precise detection rules.
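As a rough illustration of this detection logic, here is a minimal Python sketch. The 0.3 m/s peak threshold comes from the text; the onset threshold value, function name, and structure are my own assumptions, not Spleeft's actual code:

```python
import numpy as np

def detect_concentric_reps(velocity, fs, v_onset=0.05, v_peak_min=0.3):
    """Detect concentric phases in a vertical-velocity signal (m/s).

    A rep candidate spans the samples where velocity stays above
    `v_onset`; it is validated only if its peak reaches `v_peak_min`
    (0.3 m/s, as in the article). `fs` is the sampling rate in Hz.
    """
    above = velocity > v_onset
    edges = np.diff(above.astype(int))
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if ends.size and starts.size and ends[0] <= starts[0]:
        ends = ends[1:]  # discard a phase already in progress at t = 0
    reps = []
    for s, e in zip(starts, ends):
        if velocity[s:e].max() >= v_peak_min:
            reps.append((s / fs, e / fs))  # (start, end) in seconds
    return reps
```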

To detect stationary phases — which are essential for correcting the accumulated drift from integration — I use data from inertial sensors: accelerometer, gyroscope, and gravity. A phase is considered stationary when the values from these sensors remain below certain thresholds for a specific time window.
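A minimal sketch of that idea follows, with hypothetical threshold values and window length; the actual sensor combinations, thresholds, and windows are exactly what the optimization described later explores:

```python
import numpy as np

def is_stationary(user_accel, gyro, fs, window_s=0.3,
                  accel_thr=0.05, gyro_thr=0.1):
    """Return True if the last `window_s` seconds look stationary.

    `user_accel` (m/s^2, gravity removed) and `gyro` (rad/s) are
    (N, 3) arrays; the device is considered stationary when the
    magnitude of both signals stays below its threshold for the
    whole window. Threshold values here are illustrative only.
    """
    n = int(window_s * fs)
    a_mag = np.linalg.norm(user_accel[-n:], axis=1)
    g_mag = np.linalg.norm(gyro[-n:], axis=1)
    return bool(np.all(a_mag < accel_thr) and np.all(g_mag < gyro_thr))
```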

This methodological approach coincides with the one later used by Achermann et al. in their validation study of the Apple Watch for measuring barbell velocity.


Background and technological context

In scientific literature, accelerometers have traditionally been considered less accurate for velocity measurement compared to other technologies. Comparative studies have shown that devices like Beast Sensor or Push Band perform with lower accuracy than other solutions.

However, accelerometers are by far the most accessible sensors, as they are integrated into widely used consumer devices like smartwatches and wearables. Although their reliability for metrics such as heart rate has also been questioned, their potential to democratize velocity-based training (VBT) is undeniable.

It’s worth noting that most comparative studies evaluate full systems (hardware + software), without distinguishing between these two components. In my view, it is not fair to attribute poor accuracy solely to the hardware. An accelerometer can provide precise results if paired with well-optimized software designed specifically for the type of movement being analyzed. Therefore, before ruling out a technology, we must fully explore the algorithmic possibilities it offers.

There is already sufficient evidence supporting the use of IMUs (inertial measurement units) to estimate lifting velocity. Companies like Enode or Output have developed proprietary algorithms that achieve high accuracy. In the case of the Apple Watch, multiple studies have validated its hardware for this application, even though they used different algorithms than mine. This reinforces the idea that an appropriate algorithm can overcome the limitations often attributed to the hardware alone.


Limitations of IMUs for measuring velocity

These limitations arise in a chain. The first challenge is identifying the best way to process inertial sensor data — accelerometer, gyroscope, and magnetometer — to accurately estimate the vertical velocity of a bar.

One of the main problems is the integration of acceleration. When acceleration is integrated with the trapezoidal method (widely used for its simplicity), a systematic error known as drift accumulates over time. To compensate, it is necessary to identify moments of known zero velocity — stationary phases — which allow the drift to be corrected.
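To make the drift problem and its correction concrete, here is a simplified sketch of trapezoidal integration with a zero-velocity reset, assuming a precomputed stationary mask. It mirrors the idea, not Spleeft's exact implementation:

```python
import numpy as np

def integrate_with_zupt(accel_v, dt, stationary):
    """Integrate vertical acceleration (m/s^2) into velocity (m/s).

    Uses the trapezoidal rule between samples; wherever `stationary`
    is True, velocity is reset to zero (the zero-velocity update),
    discarding whatever drift has accumulated since the last reset.
    """
    v = np.zeros_like(accel_v)
    for i in range(1, len(accel_v)):
        if stationary[i]:
            v[i] = 0.0  # known zero-velocity moment: cancel the drift
        else:
            v[i] = v[i - 1] + 0.5 * (accel_v[i - 1] + accel_v[i]) * dt
    return v
```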


Signal processing for estimating vertical velocity

Inertial sensors provide data in three spatial axes. However, it’s not enough to directly use the vertical component from the accelerometer, as this sensor alone cannot precisely determine the direction of the measured force. To solve this, I use sensor fusion algorithms like Kalman, Mahony, and Madgwick, which provide orientation quaternions to correct the device’s frame of reference.
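As a sketch of this frame-of-reference correction, assuming the fusion filter already provides orientation quaternions (SciPy's `Rotation` is used here for illustration; it is not necessarily what runs on the device):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def vertical_accel_world(accel_body, quats):
    """Rotate body-frame acceleration samples into the world frame
    and return the vertical component.

    `accel_body` is (N, 3) in m/s^2 with gravity already removed
    (e.g. user acceleration); `quats` is (N, 4) in SciPy's
    (x, y, z, w) order, as produced by a Kalman/Mahony/Madgwick
    filter. The world z-axis is taken as vertical.
    """
    world = Rotation.from_quat(quats).apply(accel_body)
    return world[:, 2]
```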

I processed data from the Apple Watch and iPhone using different orientation algorithms and compared them against a motion capture system (STT Systems). I selected the combination that offered the best balance between accuracy and computational efficiency on mobile devices.

Once orientation is corrected, acceleration is integrated to obtain velocity. To further minimize drift, I tested the use of a low-pass filter, such as the Butterworth filter, which is widely used in biomechanics.
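A typical implementation of that filtering step looks like the sketch below, with an illustrative cutoff and order rather than the values Spleeft converged on:

```python
from scipy.signal import butter, filtfilt

def lowpass(signal, fs, cutoff_hz=10.0, order=4):
    """Zero-phase Butterworth low-pass filter, a common choice in
    biomechanics. `fs` is the sampling rate in Hz; cutoff and order
    here are placeholders, since choosing them was part of the
    optimization described below.
    """
    b, a = butter(order, cutoff_hz, btype="low", fs=fs)
    return filtfilt(b, a, signal)
```

Note that `filtfilt` needs the full signal, so it suits offline analysis; a real-time implementation on the watch would need a causal filter instead.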


Zero Velocity Update (ZUPT)

During testing, I found that the main issue was not the orientation correction or the integration itself, but rather the detection of stationary phases needed to apply ZUPT. These phases must be detected quickly and reliably, as their accuracy directly impacts the system’s precision.

There is no single method to achieve this. Therefore, I developed an empirical approach based on the systematic analysis of multiple parameter combinations: which sensors to use, which threshold values to apply, and which time windows to consider.

Figure: velocity vs. time from Spleeft raw data, both with drift and with the zero-velocity update applied.

Algorithm optimization using data science

With all variables defined, I collected a large dataset of gym repetitions in the lab, storing raw sensor data. Simultaneously, I used a gold-standard system for comparison. The selected exercises were: rebound squat, paused deadlift (both concentric and eccentric), and lat pulldown. All sets were performed with high effort to include various fatigue levels and minimal rest between reps.

This setup allowed me to optimize the algorithm for both accuracy and speed, even under conditions with little or no stationary phase between repetitions — the most challenging scenario for integration-based methods.

I created a Python script to systematically explore 13,486 parameter combinations (a minimal sketch of the search loop follows the list):

  • Whether or not to apply a low-pass filter, and at which cutoff.
  • ZUPT parameters: thresholds, time windows, and sensor combinations.
  • Minimum velocity thresholds for concentric detection.
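Here is a minimal sketch of such a search loop. The grid values are hypothetical placeholders, and `evaluate` stands in for the full pipeline plus the comparison against the gold standard:

```python
import itertools

# Hypothetical grid; the real script explored 13,486 combinations.
GRID = {
    "cutoff_hz":  [None, 5.0, 10.0, 15.0],  # None = no low-pass filter
    "zupt_accel": [0.03, 0.05, 0.08],       # stationary accel threshold, m/s^2
    "zupt_win":   [0.1, 0.2, 0.3],          # stationary window, s
    "v_onset":    [0.02, 0.05, 0.10],       # concentric onset threshold, m/s
}

def grid_search(evaluate):
    """`evaluate(params)` must run the pipeline on the whole dataset
    and return the mean absolute error vs the gold standard."""
    results = []
    for values in itertools.product(*GRID.values()):
        params = dict(zip(GRID.keys(), values))
        results.append((evaluate(params), params))
    return sorted(results, key=lambda r: r[0])  # lowest error first

# Usage: best_error, best_params = grid_search(my_evaluate)[0]
```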

After running the full dataset (which took several hours), I analyzed the top-performing combinations separately, based on exercise type, velocity range, and stop type. My goal was not only to find the best-performing configuration, but also to better understand the system’s behavior for future improvements.
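A sketch of how such a breakdown might look with pandas, using invented numbers purely to show the grouping:

```python
import pandas as pd

# Invented results: one row per (repetition, condition) with the
# absolute error of a candidate configuration.
df = pd.DataFrame({
    "exercise":  ["squat", "squat", "deadlift", "lat_pulldown"],
    "v_range":   ["< 0.5 m/s", ">= 0.5 m/s", "< 0.5 m/s", ">= 0.5 m/s"],
    "stop_type": ["rebound", "rebound", "paused", "continuous"],
    "abs_error": [0.021, 0.017, 0.024, 0.019],
})

# Mean error per condition shows where a configuration struggles.
summary = (df.groupby(["exercise", "v_range", "stop_type"])["abs_error"]
             .mean()
             .sort_values())
print(summary)
```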

Figure: outputs from the Python script that helped me identify the best parameters for the algorithm.

Field-testing

During practical validation, I found that the main bottleneck was the algorithm’s delay in detecting the stationary phase. User feedback was clear: the system took too long to deliver feedback.

Upon reviewing many velocity-time graphs, I noticed that in the first repetition, there was almost no drift — even without applying ZUPT. This is because drift grows over time. I asked myself: is the accumulated drift large enough to meaningfully affect the average velocity estimate?

Figure: velocity vs. time data without significant drift between reps.

To answer this, I tested squats with a substantial eccentric phase before the concentric one. I waited for a first stationary phase to calibrate the system, then performed consecutive reps without pausing. The result was surprising: the average error was just 0.018 m/s, and even lower (0.016 m/s) in the latest validation. The difference compared to the corrected value was minimal, but the response time improved dramatically, from 800 ms to 200 ms. This allowed feedback to be delivered immediately after the concentric phase, rather than seconds later.

Still, the ZUPT algorithm remains active. When a new stationary phase is detected, the signal is recalibrated, and any accumulated error is corrected. If more than one repetition occurs between stationary phases, the corrected values overwrite the initial feedback to ensure the user always receives the most accurate result.
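Conceptually, that correction pass might look like the following sketch, which assumes drift grows roughly linearly between stationary phases; the data structures and the linear-drift assumption are mine, not Spleeft's actual code:

```python
def correct_pending_reps(pending_reps, v_residual, t_since_zupt):
    """At a new stationary phase the integrated velocity should be
    zero, so any residual `v_residual` (m/s) is the drift accumulated
    over `t_since_zupt` seconds. Assuming the drift grew linearly,
    each buffered rep's provisional mean velocity is adjusted by the
    drift estimated at its midpoint, overwriting the immediate
    feedback with a corrected value.
    """
    drift_rate = v_residual / t_since_zupt  # m/s per second
    for rep in pending_reps:
        midpoint = 0.5 * (rep["t_start"] + rep["t_end"])
        rep["mean_velocity"] -= drift_rate * midpoint
        rep["provisional"] = False  # mark the corrected value as final
    return pending_reps
```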


Conclusion

This data-driven project allowed me to significantly optimize the Spleeft algorithm. I’ve successfully reduced the response time for average velocity estimation without compromising the reliability confirmed in earlier validations. The key was combining systematic data collection with thorough parameter exploration — understanding that the quality of the final result depends not only on the hardware, but perhaps even more on the software behind it.

Ivan de Lucas Rogero

MSc in Physical Performance and CEO of SpleeftApp

Dedicated to improving athletic performance and cycling training, combining science and technology to achieve results.
