Track 1. Robot Perception and Intelligent Interaction

Topics | Submit Online: https://www.easychair.org/conferences/?conf=prai2026 (Please choose Track 1)

Track 1 - Invited Speakers

Keyang Cheng, Jiangsu University, China
Keyang Cheng is Executive Dean, Professor, and Doctoral Supervisor of the Cyberspace Security Research Institute of Jiangsu University. He is a member of the Multimedia Technical Committee and the Internet of Things Technical Committee of the China Computer Federation, the Pattern Recognition and Machine Intelligence Technical Committee of the Chinese Association of Automation, and the Pattern Recognition Technical Committee of the Chinese Association for Artificial Intelligence, and is a high-level talent of Jiangsu Province's "333 Project". His research focuses on artificial intelligence and pattern recognition. He has led 10 projects, including grants from the National Natural Science Foundation of China, the National Engineering Laboratory Fund of China, and the Jiangsu Provincial Natural Science Foundation. He has published over 50 academic papers in top journals and conferences such as TNNLS, TII, TCSVT, MM, ICDE, and ICME, applied for more than 30 patents and software copyrights, published 3 books, and received the Jiangsu Provincial Science and Technology Award twice.
Title: New Progress of Interpretability Theory in Deep Learning
Abstract: With the rapid development of deep neural network theory and technology, the opacity of deep neural network models and the inexplicability of their results seriously hinder their application in high-risk fields. Issues such as the "black box" nature of models and unreliable decision-making paths were already being explored in the early stages of deep learning research, and the field of deep learning interpretability has since produced numerous results. This talk will first discuss measures and evaluation metrics for interpretability in deep learning. Second, it will summarize current theoretical progress, both in constructing inherently interpretable deep learning models and in explaining existing ones. It will then share our team's ideas and methods for interpretability research. Finally, with a view to current high-risk application fields, it will outline directions and challenges for future research.