Sitemap
A list of all the posts and pages found on the site. For you robots out there, there is an XML version available for digesting as well.
Pages
Posts
Future Blog Post
This post will show up by default. To disable the publishing of future-dated posts, edit _config.yml and set future: false.
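As a minimal sketch, assuming a standard Jekyll site layout, the relevant setting lives in the site's _config.yml:

```yaml
# _config.yml — Jekyll site configuration (assumed standard layout)
# When false, posts with a date in the future are excluded from the build.
future: false
```

Running jekyll build after this change will skip any post whose front-matter date is later than the build time.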
Blog Post number 4
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Blog Post number 3
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Blog Post number 2
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Blog Post number 1
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Publications
Towards lightweight and efficient distributed intrusion detection framework
Published in the proceedings of the 2021 IEEE Global Communications Conference (GLOBECOM), 2021
Use Google Scholar for full citation
Recommended citation: Shuai Yuan, Hongwei Li, Rui Zhang, Meng Hao, Yiran Li, Rongxing Lu, "Towards lightweight and efficient distributed intrusion detection framework." In Proceedings of the 2021 IEEE Global Communications Conference (GLOBECOM), 2021.
Secure feature selection for vertical federated learning in ehealth systems
Published in the proceedings of the 2022 IEEE International Conference on Communications (ICC), 2022
Use Google Scholar for full citation
Recommended citation: Rui Zhang, Hongwei Li, Meng Hao, Hanxiao Chen, Yuan Zhang, "Secure feature selection for vertical federated learning in ehealth systems." In Proceedings of the 2022 IEEE International Conference on Communications (ICC), 2022.
Adversarial robustness poisoning: Increasing adversarial vulnerability of the model via data poisoning
Published in the proceedings of the 2024 IEEE Global Communications Conference (GLOBECOM 2024), 2024
Use Google Scholar for full citation
Recommended citation: Wenbo Jiang, Hongwei Li, Yuxin Lu, Wenshu Fan, Rui Zhang, "Adversarial robustness poisoning: Increasing adversarial vulnerability of the model via data poisoning." In Proceedings of the 2024 IEEE Global Communications Conference (GLOBECOM 2024), 2024.
Backdoor Attacks against Image-to-Image Networks
Published in arXiv preprint arXiv:2407.10445, 2024
Use Google Scholar for full citation
Recommended citation: Wenbo Jiang, Hongwei Li, Jiaming He, Rui Zhang, Guowen Xu, Tianwei Zhang, Rongxing Lu, "Backdoor Attacks against Image-to-Image Networks." arXiv preprint arXiv:2407.10445, 2024.
Combinational Backdoor Attack against Customized Text-to-Image Models
Published in arXiv preprint arXiv:2411.12389, 2024
Use Google Scholar for full citation
Recommended citation: Wenbo Jiang, Jiaming He, Hongwei Li, Guowen Xu, Rui Zhang, Hanxiao Chen, Meng Hao, Haomiao Yang, "Combinational Backdoor Attack against Customized Text-to-Image Models." arXiv preprint arXiv:2411.12389, 2024.
Instruction Backdoor Attacks Against Customized LLMs
Published in the proceedings of the 33rd USENIX Security Symposium (USENIX Security 24), 2024
Use Google Scholar for full citation
Recommended citation: Rui Zhang, Hongwei Li, Rui Wen, Wenbo Jiang, Yuan Zhang, Michael Backes, Yun Shen, Yang Zhang, "Instruction Backdoor Attacks Against Customized LLMs." In Proceedings of the 33rd USENIX Security Symposium (USENIX Security 24), 2024.
One Prompt to Verify Your Models: Black-Box Text-to-Image Models Verification via Non-Transferable Adversarial Attacks
Published in arXiv preprint arXiv:2410.22725, 2024
Use Google Scholar for full citation
Recommended citation: Ji Guo, Wenbo Jiang, Rui Zhang, Guoming Lu, Hongwei Li, "One Prompt to Verify Your Models: Black-Box Text-to-Image Models Verification via Non-Transferable Adversarial Attacks." arXiv preprint arXiv:2410.22725, 2024.
Vertical federated learning across heterogeneous regions for industry 4.0
Published in IEEE Transactions on Industrial Informatics, 2024
Use Google Scholar for full citation
Recommended citation: Rui Zhang, Hongwei Li, Luoding Tian, Meng Hao, Yuan Zhang, "Vertical federated learning across heterogeneous regions for industry 4.0." IEEE Transactions on Industrial Informatics, 2024.
Backdoor attacks against hybrid classical-quantum neural networks
Published in Neural Networks, 2025
Use Google Scholar for full citation
Recommended citation: Ji Guo, Wenbo Jiang, Rui Zhang, Wenshu Fan, Jiachen Li, Guoming Lu, Hongwei Li, "Backdoor attacks against hybrid classical-quantum neural networks." Neural Networks, 2025.
BadLingual: A Novel Lingual-Backdoor Attack against Large Language Models
Published in arXiv preprint arXiv:2505.03501, 2025
Use Google Scholar for full citation
Recommended citation: Zihan Wang, Hongwei Li, Rui Zhang, Wenbo Jiang, Kangjie Chen, Tianwei Zhang, Qingchuan Zhao, Guowen Xu, "BadLingual: A Novel Lingual-Backdoor Attack against Large Language Models." arXiv preprint arXiv:2505.03501, 2025.
ConfGuard: A Simple and Effective Backdoor Detection for Large Language Models
Published in arXiv preprint arXiv:2508.01365, 2025
Use Google Scholar for full citation
Recommended citation: Zihan Wang, Rui Zhang, Hongwei Li, Wenshu Fan, Wenbo Jiang, Qingchuan Zhao, Guowen Xu, "ConfGuard: A Simple and Effective Backdoor Detection for Large Language Models." arXiv preprint arXiv:2508.01365, 2025.
Evaluating Robustness of Large Audio Language Models to Audio Injection: An Empirical Study
Published in arXiv preprint arXiv:2505.19598, 2025
Use Google Scholar for full citation
Recommended citation: Guanyu Hou, Jiaming He, Yinhang Zhou, Ji Guo, Yitong Qiao, Rui Zhang, Wenbo Jiang, "Evaluating Robustness of Large Audio Language Models to Audio Injection: An Empirical Study." arXiv preprint arXiv:2505.19598, 2025.
Hidden Tail: Adversarial Image Causing Stealthy Resource Consumption in Vision-Language Models
Published in arXiv preprint arXiv:2508.18805, 2025
Use Google Scholar for full citation
Recommended citation: Rui Zhang, Zihan Wang, Tianli Yang, Hongwei Li, Wenbo Jiang, Qingchuan Zhao, Yang Liu, Guowen Xu, "Hidden Tail: Adversarial Image Causing Stealthy Resource Consumption in Vision-Language Models." arXiv preprint arXiv:2508.18805, 2025.
MPMA: Preference Manipulation Attack Against Model Context Protocol
Published in arXiv preprint arXiv:2505.11154, 2025
Use Google Scholar for full citation
Recommended citation: Zihan Wang, Hongwei Li, Rui Zhang, Yu Liu, Wenbo Jiang, Wenshu Fan, Qingchuan Zhao, Guowen Xu, "MPMA: Preference Manipulation Attack Against Model Context Protocol." arXiv preprint arXiv:2505.11154, 2025.
The Ripple Effect: On Unforeseen Complications of Backdoor Attacks
Published in the proceedings of the Forty-second International Conference on Machine Learning, 2025
Use Google Scholar for full citation
Recommended citation: Rui Zhang, Yun Shen, Hongwei Li, Wenbo Jiang, Hanxiao Chen, Yuan Zhang, Guowen Xu, Yang Zhang, "The Ripple Effect: On Unforeseen Complications of Backdoor Attacks." In Proceedings of the Forty-second International Conference on Machine Learning, 2025.
Watch Out for Your Guidance on Generation! Exploring Conditional Backdoor Attacks against Large Language Models
Published in the Proceedings of the AAAI Conference on Artificial Intelligence, 2025
Use Google Scholar for full citation
Recommended citation: Jiaming He, Wenbo Jiang, Guanyu Hou, Wenshu Fan, Rui Zhang, Hongwei Li, "Watch Out for Your Guidance on Generation! Exploring Conditional Backdoor Attacks against Large Language Models." In Proceedings of the AAAI Conference on Artificial Intelligence, 2025.
