Abstract
Musical pattern identification is crucial to many classification and retrieval applications in computational musicology. Feature learning is the foundational task, since features form the basis for Pattern Recognition (PR), and selecting an appropriate approach is vital to the accuracy of retrieval algorithms. This paper presents a comprehensive review of approaches to PR and similarity modelling. It systematically analyses methods for melodic feature identification and comparatively evaluates the work reported in the literature in terms of software tools used, melodic pattern representations, and matching techniques. The study discusses the benefits and limitations of the various approaches, along with the open challenges in melodic PR. The results reveal a wide variety of approaches across music genres and applications. The analysis further shows that statistical and symbolic approaches have been used predominantly, while deep learning approaches have gained popularity in recent years.
