Submit
Evaluation Tracks
To evaluate the relevance of representations for downstream MIR tasks, we design three evaluation tracks: the unconstrained track, the semi-constrained track, and the constrained track.
Unconstrained Track
In the unconstrained track, researchers are invited to submit their systems with any hyperparameter and architecture configuration, including the option to fine-tune pre-trained models. This track encourages flexibility and exploration, enabling researchers to investigate a wide range of approaches.
Semi-Constrained Track
In contrast, the semi-constrained track requires submissions to use frozen pre-trained backbones.
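To make the frozen-backbone requirement concrete, here is a minimal sketch in PyTorch; the backbone below is a placeholder stand-in, not a MARBLE model, and the input shape is illustrative only.

```python
import torch
import torch.nn as nn

# Placeholder backbone standing in for any pre-trained model (assumption, not MARBLE code).
backbone = nn.Sequential(nn.Conv1d(1, 64, kernel_size=3), nn.ReLU())

backbone.eval()                      # disable dropout / batch-norm updates
for p in backbone.parameters():
    p.requires_grad_(False)          # no gradients ever flow into the backbone

with torch.no_grad():
    waveform = torch.randn(1, 1, 16000)   # dummy mono audio clip
    features = backbone(waveform)         # frozen representations for the downstream head
```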
Constrained Track
Finally, the constrained track employs a standardised setting with a limited hyper-parameter search space, where frozen models are used as feature extractors for training a one-layer, 512-unit MLP (or a 3-layer, 512-unit LSTM for source separation) on each task. In addition, we set a computational wall-clock budget for MARBLE: systems must finish each task within one week on our machine, which is equipped with a single consumer GPU (RTX 3090).
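The sketch below illustrates what such probe heads could look like in PyTorch, assuming generic `feature_dim` and output dimensions; it is not the official MARBLE probe code.

```python
import torch
import torch.nn as nn

class MLPProbe(nn.Module):
    """One hidden layer of 512 units on top of frozen features."""
    def __init__(self, feature_dim: int, num_classes: int, dropout: float = 0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, 512),
            nn.ReLU(),
            nn.Dropout(dropout),
            nn.Linear(512, num_classes),
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.net(features)          # (batch, num_classes)

class LSTMProbe(nn.Module):
    """3-layer, 512-unit LSTM head, used for source separation only."""
    def __init__(self, feature_dim: int, output_dim: int, dropout: float = 0.2):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, 512, num_layers=3,
                            batch_first=True, dropout=dropout)
        self.head = nn.Linear(512, output_dim)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(features)       # (batch, time, 512)
        return self.head(out)              # (batch, time, output_dim)
```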
By offering these three evaluation tracks, we aim to provide researchers with a comprehensive platform to assess the performance and relevance of representations in MIR tasks, encouraging innovative approaches and fostering advancements in the field.
The hyper-parameter search space of the constrained evaluation track is given as follows; a configuration sketch appears after the list:
- Layer:
[every single layer, weighted sum]
- Model:
[one-layer 512-unit MLP, 3-layer 512-unit LSTM (source separation only)]
- Batch size:
[64]
- Learning rate:
[5e-5, 1e-4, 5e-4, 1e-3, 5e-3, 1e-2]
- Dropout probability:
[0.2]
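As a rough illustration, the search space above could be expressed as a plain dictionary and enumerated with a grid search; the key names and the per-layer identifiers below are illustrative assumptions, not official MARBLE configuration keys.

```python
from itertools import product

# Illustrative encoding of the constrained-track search space.
SEARCH_SPACE = {
    "layer": ["layer_0", "layer_1", "weighted_sum"],   # stand-ins for "every single layer" + weighted sum
    "model": ["mlp_1x512"],                            # "lstm_3x512" for source separation only
    "batch_size": [64],
    "learning_rate": [5e-5, 1e-4, 5e-4, 1e-3, 5e-3, 1e-2],
    "dropout": [0.2],
}

# Enumerate every configuration in the grid.
for layer, model, batch_size, lr, dropout in product(*SEARCH_SPACE.values()):
    config = dict(layer=layer, model=model, batch_size=batch_size,
                  learning_rate=lr, dropout=dropout)
    print(config)
```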
Submission Protocol
To be announced soon.