Count the Notes: Histogram-Based Supervision for Automatic Music Transcription

Tel Aviv University · International Audio Laboratories Erlangen, Germany

Abstract

Automatic Music Transcription (AMT) converts audio recordings into symbolic musical representations. Training deep neural networks (DNNs) for AMT typically requires strongly aligned training pairs with precise frame-level annotations. Since creating such datasets is costly and impractical for many musical contexts, weakly aligned approaches using segment-level annotations have gained traction. However, existing methods often rely on Dynamic Time Warping (DTW) or soft alignment loss functions, both of which still require local semantic correspondences, making them error-prone and computationally expensive. In this article, we introduce CountEM, a novel AMT framework that eliminates the need for explicit local alignment by leveraging note event histograms as supervision, enabling lighter computations and greater flexibility. Using an Expectation-Maximization (EM) approach, CountEM iteratively refines predictions based solely on note occurrence counts, significantly reducing annotation efforts while maintaining high transcription accuracy. Experiments on piano, guitar, and multi-instrument datasets demonstrate that CountEM matches or surpasses existing weakly supervised methods, improving AMT's robustness, scalability, and efficiency.
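To make the counting-based supervision concrete, the sketch below shows one simplified EM-style round under several assumptions: the model's onset probabilities are available as a frames-by-pitches array, the segment-level histogram gives a target number of onsets per pitch, and the E-step simply keeps the top-scoring candidates per pitch as pseudo-labels. The function name `pseudo_labels_from_histogram` and its signature are illustrative only, not the paper's actual implementation.

```python
# Illustrative sketch (not the authors' exact implementation) of one
# EM-style round of histogram-count supervision.
import numpy as np

def pseudo_labels_from_histogram(onset_probs, note_counts):
    """onset_probs: (num_frames, num_pitches) onset probabilities for one segment.
    note_counts:  (num_pitches,) target number of note onsets per pitch (the histogram).
    Returns a binary (num_frames, num_pitches) pseudo-label matrix."""
    labels = np.zeros_like(onset_probs, dtype=np.int8)
    for pitch in range(onset_probs.shape[1]):
        k = int(note_counts[pitch])
        if k == 0:
            continue
        # E-step: keep the k frames with the highest onset probability for this pitch.
        top_frames = np.argsort(onset_probs[:, pitch])[-k:]
        labels[top_frames, pitch] = 1
    return labels

# M-step (not shown): retrain the transcription network on these pseudo-labels
# with a standard frame-wise loss, then re-predict and repeat until the
# predicted counts agree with the target histogram.
```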

Qualitative Examples

Each video begins with the original audio and transitions into the model's transcription. The transcriptions are generated by models trained using our proposed counting-based alignment approach. Under each video, we also provide a link to the Original Performance for reference and comparison.

1. Vivaldi – Four Seasons

Original Performance

2. Rejouissance (Clarinet, Sax, Tuba)

Original Performance

3. Bach – Violin Partita No. 2

Original Performance

4. Beethoven – Sonata No. 25 (Piano)

Original Performance

5. Hagrid's Friendly Bird (Flute)

Original Performance

6. Pachelbel – Canon in D (String Quartet)

Original Performance

7. MAESTRO Example (Track02)

Original Performance

8. Mozart – Violin Sonata No. 32

Original Performance

9. ABBA – Mamma Mia

Original Piece

MAESTRO Histogram Supervision Results

Note-level transcription results for training with histogram-based supervision on the MAESTRO dataset. We report Precision (P), Recall (R), and F-score (F) on the test and train sets across different histogram window sizes (or the full track, F/T); a sketch of how such windowed histograms are computed follows the table. "Rep. iter." rows use repeated EM iterations, while "1-iter." rows use a single iteration. For reference, we also include a baseline trained on synthetic data only (Sy) and a fully supervised model (Sup).

Model                        Test                 Train
                             P     R     F        P     R     F
Pre-trained Model
  Sy                         88.3  81.6  84.6     87.8  81.2  84.1
Histogram Supervision
  Rep. iter.    F/T          92.4  90.4  91.3     91.8  90.5  91.1
                180s         93.2  91.7  92.4     92.9  91.9  92.4
                120s         93.1  92.2  92.6     92.8  92.4  92.6
                60s          95.7  92.2  93.9     95.6  92.5  94.0
                30s          95.5  92.8  94.1     95.3  93.1  94.2
  1-iter.       F/T          92.4  87.1  89.6     91.9  87.3  89.5
                60s          93.9  88.4  91.0     93.6  88.5  90.9
Supervised
  Sup                        98.7  93.1  95.8     98.8  93.4  96.0
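As a reference for how the supervision targets above can be obtained, the sketch below builds per-window note-count histograms from a note list, assuming notes are given as (onset time in seconds, MIDI pitch) pairs; the helper `note_histograms` is a hypothetical name used only for illustration. The window sizes in the table (30 s, 60 s, ..., or the full track) correspond to the `window_size` argument.

```python
# A minimal sketch of windowed note-count histograms (assumed input format:
# (onset_sec, midi_pitch) pairs); not the authors' exact preprocessing code.
import numpy as np

def note_histograms(notes, track_duration, window_size=None, num_pitches=128):
    """notes: iterable of (onset_sec, midi_pitch) pairs.
    window_size: histogram window in seconds; None means one histogram for the full track.
    Returns an integer array of shape (num_windows, num_pitches) with onset counts."""
    if window_size is None:
        window_size = track_duration
    num_windows = max(1, int(np.ceil(track_duration / window_size)))
    hist = np.zeros((num_windows, num_pitches), dtype=np.int32)
    for onset, pitch in notes:
        window = min(int(onset // window_size), num_windows - 1)
        hist[window, int(pitch)] += 1
    return hist

# Example: 30-second windows for a 4-minute track.
# hist = note_histograms(notes, track_duration=240.0, window_size=30.0)
```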

Cross-Dataset Evaluation

Models were trained on MusicNet and evaluated on MAESTRO, GuitarSet, and URMP. For URMP, we also report a histogram-based F-score (Histog.), which does not enforce the 50 ms onset threshold; a sketch of both metrics follows the table.

Model            MAESTRO              GuitarSet            URMP                 URMP (Histog.)
                 P     R     F        P     R     F        P     R     F        P     R     F
Pre-trained Model
  Sy             88.3  81.6  84.6     57.9  80.7  66.2     76.2  65.4  70.1     91.8  79.8  84.9
Histogram Supervision, MusicNet Piano (ours)
  30s            93.0  88.2  90.4     77.8  82.5  79.4     70.1  79.6  74.5     80.1  90.8  85.0
  F/T            92.1  85.8  88.7     81.2  80.1  79.8     77.3  75.1  76.1     89.7  87.1  88.3
Histogram Supervision, MusicNet Full (ours)
  32ms           77.1  12.1  16.7     85.5   5.0   8.6     56.9   1.5   2.8    100.0  19.0  36.0
  100ms          94.7  33.9  43.9     91.3  31.9  40.6     90.2   6.0  11.2    100.0   6.6  12.1
  500ms          92.4  80.5  85.8     90.5  69.2  75.8     82.9  70.6  76.1     97.7  83.2  89.8
  30s            94.5  86.0  89.9     88.5  75.4  80.3     82.2  79.9  80.9     93.0  90.4  91.6
  60s            93.1  86.1  89.3     86.7  78.5  81.5     81.9  79.7  80.7     92.6  90.3  91.3
  F/T            92.4  85.0  88.4     82.8  82.4  82.0     81.6  78.2  79.7     92.3  88.8  90.3
DTW + Refinement
  M&B AlPl       92.6  87.2  89.7     86.6  80.4  82.9     81.7  77.6  79.6     95.6  91.0  93.2
  M&B Al         96.4  83.4  89.2     89.0  76.9  81.5     84.0  75.2  79.3     96.6  86.8  91.3
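The two metric variants reported for URMP can be sketched as follows, assuming standard mir_eval note-matching conventions (whether the table was produced with exactly this code is an assumption): the regular note F-score matches notes with a 50 ms onset tolerance and ignores offsets, while the histogram F-score discards timing altogether and only compares per-pitch note counts. The helper names `note_f_score` and `histogram_f_score` are illustrative.

```python
# Sketch of the onset-tolerance F-score and the timing-free histogram F-score.
import numpy as np
import mir_eval

def note_f_score(ref_intervals, ref_pitches_hz, est_intervals, est_pitches_hz):
    # Note-level matching with a 50 ms onset tolerance; offsets are ignored.
    precision, recall, f, _ = mir_eval.transcription.precision_recall_f1_overlap(
        ref_intervals, ref_pitches_hz, est_intervals, est_pitches_hz,
        onset_tolerance=0.05, offset_ratio=None)
    return precision, recall, f

def histogram_f_score(ref_pitches_midi, est_pitches_midi, num_pitches=128):
    # Timing-free variant: compare per-pitch note-count histograms only.
    ref_hist = np.bincount(np.asarray(ref_pitches_midi, dtype=int), minlength=num_pitches)
    est_hist = np.bincount(np.asarray(est_pitches_midi, dtype=int), minlength=num_pitches)
    matched = np.minimum(ref_hist, est_hist).sum()
    precision = matched / max(est_hist.sum(), 1)
    recall = matched / max(ref_hist.sum(), 1)
    f = 2 * precision * recall / max(precision + recall, 1e-9)
    return precision, recall, f
```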