A Slot-Aware Semantic Signaling Framework for Multi-intent Classification

How to Cite

Shah, Juhi, and Priyank Thakkar. 2026. “A Slot-Aware Semantic Signaling Framework for Multi-Intent Classification”. Journal of Trends in Computer Science and Smart Technology 8 (1): 176-94. https://doi.org/10.36548/jtcsst.2026.1.009.

Keywords

— Slot-Aware Semantic Signaling
— Multi-Intent Detection
— Slot Filling
— Joint Learning
Published: 25-03-2026

Abstract

Multi-Intent Detection (MID) and Slot Filling (SF) are fundamental tasks in Natural Language Understanding (NLU) for goal-oriented dialogue systems. Although joint models improve performance by learning interactions between intents and slots, existing approaches may fail to capture the intricate relationships among intents in multi-intent utterances, particularly when slot-level semantic information is under-utilized during intent detection (ID). To this end, this paper proposes Slot-Aware Semantic Signaling for Multi-Intent Classification (S3MIC), a joint framework that leverages slot-level semantic information to improve intent prediction in multi-intent dialogue systems. Experiments on the benchmark datasets MixATIS and MixSNIPS show consistent improvements over existing baseline models in Slot F1, Intent Accuracy, and Semantic Frame Accuracy (SeFr Acc). Specifically, the proposed framework achieves 52.5% SeFr Acc on MixATIS and 86.28% SeFr Acc on MixSNIPS, validating the efficacy of slot-aware semantic signaling for joint MID and SF in goal-oriented dialogue systems.
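The core idea in the abstract — routing slot-level semantic information into the intent head of a joint model — can be sketched as follows. This is a minimal illustrative sketch, not the S3MIC architecture itself: all dimensions, weight matrices, and the mean-pooling choice are assumptions made for the example, and the encoder states are mocked with random values standing in for contextual embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper):
# T tokens, hidden size H, slot label set, intent label set.
T, H, num_slots, num_intents = 6, 8, 5, 4

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Per-token encoder states (e.g. from BERT), mocked here.
token_states = rng.normal(size=(T, H))

# Slot-filling head: a slot label distribution for every token.
W_slot = rng.normal(size=(H, num_slots))
slot_probs = softmax(token_states @ W_slot, axis=-1)   # (T, num_slots)

# Slot-aware signal: embed the predicted slot distributions with
# learnable slot-label embeddings, then pool over tokens so the
# intent head "sees" which slot semantics appear in the utterance.
E_slot = rng.normal(size=(num_slots, H))               # slot label embeddings
slot_signal = (slot_probs @ E_slot).mean(axis=0)       # (H,)

# Intent head consumes the utterance summary AND the slot signal.
utterance = token_states.mean(axis=0)                  # (H,)
W_intent = rng.normal(size=(2 * H, num_intents))
intent_logits = np.concatenate([utterance, slot_signal]) @ W_intent

# Multi-intent detection: independent sigmoids, threshold at 0.5,
# so an utterance can activate zero, one, or several intents.
intent_probs = sigmoid(intent_logits)                  # (num_intents,)
predicted_intents = [i for i, p in enumerate(intent_probs) if p > 0.5]
```

The key design point the sketch mirrors is that intent classification is multi-label (independent sigmoids rather than a single softmax), and that the intent logits are conditioned on a summary of the slot predictions rather than on the token states alone.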

  46. Qin, Libo, Fuxuan Wei, Tianbao Xie, Xiao Xu, Wanxiang Che, and Ting Liu. ”GL-GIN: Fast and Accurate Non-Autoregressive Model for Joint Multiple Intent Detection and Slot Filling.” In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 2021, 178-188.