Special Session 7 - Multi-modal Information Processing

  • Submission Link: https://www.easychair.org/conferences/?conf=prai2024
    (Select Track Special Session 7: Multi-modal Information Processing)

    With the continuous advancement of artificial intelligence (AI) technology, multi-modal AI has become a prominent research topic. Multi-modal AI involves the fusion of and interaction among multiple perceptual modalities, such as speech, images, video, and text, providing rich information and new possibilities for more intelligent and human-centered human-computer interaction and intelligent systems. Multi-modal information processing aims to leverage the combined strengths of visual, auditory, textual, and other sensory data to extract richer and more comprehensive information. This Special Session is dedicated to exploring the challenges and opportunities in this area.

    Important Dates:
    Submission deadline: 2024.06.15
    Notification of acceptance: 2024.07.05

    Topics of interest include, but are not limited to:
    - Multi-modal data fusion and integration methods
    - Multi-modal feature extraction and representation learning
    - Multi-modal emotion recognition and affective computing
    - Multi-modal intelligent interaction and interface design
    - Multi-modal intelligent systems and applications
    - Deep learning and neural network methods for multi-modal data
    - Pattern recognition and classification of multi-modal data

    Organizers:

    1. Professor Jianqing Zhu
    School of Engineering, Huaqiao University, China
    Email: jqzhu@hqu.edu.cn

    2. Professor Chih-Hsien Hsia
    Department of Computer Science and Information Engineering, National Ilan University, Taiwan
    Email: hsiach@niu.edu.tw

    3. Associate Professor Jing Chen
    School of Information Science and Engineering, Huaqiao University, China
    Email: chenjing8005@hqu.edu.cn

    4. Assistant Professor Yifan Shi
    School of Engineering, Huaqiao University, China
    Email: shiyifan@hqu.edu.cn