AI Testing & Verification

Neural networks can arguably be viewed as a different paradigm of programming, in which logical reasoning is replaced with big data and optimization. Unlike traditional programs, however, neural networks are subject to new kinds of bugs, e.g., adversarial samples and discriminatory instances. In this line of work, we aim to develop systematic theories, methods and tools to improve the quality of AI systems.
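To make the notion of an adversarial sample concrete, here is a minimal sketch of the fast gradient sign method (FGSM) applied to a tiny hand-rolled logistic model; the weights, input, and perturbation budget are illustrative assumptions, not taken from any of the papers below.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, x):
    # Probability of class 1 under a one-layer logistic model.
    return sigmoid(w @ x)

def fgsm(w, x, y, eps):
    # Gradient of the logistic loss -log p(y|x) with respect to the
    # input x; perturb x by eps in the direction of the gradient sign.
    grad = (predict(w, x) - y) * w
    return x + eps * np.sign(grad)

w = np.array([2.0, -1.0, 0.5])   # illustrative model weights
x = np.array([1.0, 1.0, 1.0])    # correctly classified as class 1
x_adv = fgsm(w, x, y=1.0, eps=0.6)

print(predict(w, x) > 0.5)       # True: original input is class 1
print(predict(w, x_adv) > 0.5)   # False: the small perturbation flips it
```

The perturbed input differs from the original by at most 0.6 in each coordinate, yet the model's decision flips; detecting or provably excluding such inputs is one goal of neural network testing and verification.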

Key Publications

Improving Neural Network Verification through Spurious Region Guided Refinement.
Pengfei Yang, Renjue Li, Jianlin Li, Cheng-Chao Huang, Jingyi Wang, Jun Sun, Bai Xue, Lijun Zhang.
International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2021, 27 March - 1 April, 2021.

White-box fairness testing through adversarial sampling.
Peixin Zhang, Jingyi Wang, Jun Sun, Guoliang Dong, Xinyu Wang, Xingen Wang, Jin Song Dong, and Ting Dai.
ICSE '20: 42nd International Conference on Software Engineering, Seoul, South Korea, 27 June - 19 July, 2020.

Global PAC Bounds for Learning Discrete Time Markov Chains.
Hugo Bazille, Blaise Genest, Cyrille Jégourel, and Jun Sun.
Computer Aided Verification - 32nd International Conference, CAV 2020, Los Angeles, CA, USA, July 21-24, 2020, Proceedings, Part II.

Adversarial sample detection for deep neural network through model mutation testing.
Jingyi Wang, Guoliang Dong, Jun Sun, Xinyu Wang, and Peixin Zhang.
Proceedings of the 41st International Conference on Software Engineering, ICSE 2019, Montreal, QC, Canada, May 25-31, 2019.

People

SUN Jun
Associate Professor
PHAM Hong Long
Research Fellow
ZHANG Yueling
Research Fellow
SUN Bing
PhD Student
ZHANG Mengdi
PhD Student