The audio module is producing good results in the task of tagging the different events that occur in a song. One example of these results is shown below:

A song is generally composed of different segments, often named intro, verse, chorus, and so on. Developing an algorithm that segments a song precisely is very challenging; even human segmentation is sometimes controversial because of the difficulty of defining what a segment is. Nevertheless, using spectral balance features, we are developing a reliable segmentation algorithm.
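As an illustration of what such a feature could look like, the sketch below computes a simple frame-wise spectral balance: the fraction of spectral energy falling in a few frequency bands. The band edges, frame length, and hop size are assumptions chosen for illustration; the exact feature definition used by the module is not specified here.

```python
# Minimal sketch of frame-wise "spectral balance" features, assuming they are
# the relative energy per frequency band (band edges below are illustrative).
import numpy as np

def spectral_balance(signal, sr, frame_len=2048, hop=512,
                     band_edges=(0, 200, 800, 3200, 8000)):
    """Return one row per frame: the fraction of spectral energy in each band."""
    window = np.hanning(frame_len)
    n_frames = max(0, 1 + (len(signal) - frame_len) // hop)
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
    features = np.empty((n_frames, len(band_edges) - 1))
    for i in range(n_frames):
        frame = signal[i * hop:i * hop + frame_len] * window
        power = np.abs(np.fft.rfft(frame)) ** 2
        total = power.sum() + 1e-12              # avoid division by zero on silence
        for b in range(len(band_edges) - 1):
            mask = (freqs >= band_edges[b]) & (freqs < band_edges[b + 1])
            features[i, b] = power[mask].sum() / total
    return features
```

A chorus with a dense mix would typically shift energy towards the upper bands, while a sparse intro concentrates it in the lower ones, which is what makes a feature of this kind usable for segmentation.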

Different segments of a song contain different combinations of instruments, generating different spectral fingerprints. Using the signal processing structure shown in the figure, we can differentiate these segments and label them.
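The exact processing chain appears only in the figure, so the sketch below uses a hypothetical stand-in: frames with similar spectral balance are grouped with k-means clustering, and runs of identical labels become candidate segments. The number of segments and the clustering method are assumptions for illustration, not the module's actual implementation.

```python
# Hypothetical segment labelling: cluster frame-wise spectral-balance vectors,
# then merge consecutive frames that share a cluster label into segments.
from sklearn.cluster import KMeans

def label_segments(features, n_segments=4, seed=0):
    """Return (first_frame, last_frame_exclusive, label) for each segment."""
    labels = KMeans(n_clusters=n_segments, n_init=10,
                    random_state=seed).fit_predict(features)
    segments, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            segments.append((start, i, int(labels[start])))
            start = i
    return segments
```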

To represent these segments easily, the next figure shows a pseudocolour associated with each segment. Colour and amplitude can be used in real time for video synchronisation, or off-line for segmentation.
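One way to attach such a pseudocolour to each detected segment is to index a fixed palette by the segment label, as sketched below. The palette and the mapping are purely illustrative assumptions and do not correspond to the colours used in the figure.

```python
# Illustrative pseudocolour assignment: each segment label indexes a fixed
# palette (hypothetical colours), ready to drive a video layer or a plot.
PALETTE = ["#e41a1c", "#377eb8", "#4daf4a", "#984ea3", "#ff7f00"]

def colour_for_segments(segments):
    """Return (first_frame, last_frame_exclusive, hex_colour) per segment."""
    return [(start, end, PALETTE[label % len(PALETTE)])
            for start, end, label in segments]
```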