About Me

Yihua Cheng is a Research Fellow at the University of Birmingham, working on research topics including computer vision, human-computer interaction, and autonomous driving. His research focuses on understanding human behaviour and human-object interaction, generating realistic digital humans, and applying these advances across various domains.

Yihua Cheng has published papers in top-tier venues, including TPAMI, TIP, Nature Communications, CVPR, ICCV, ECCV, AAAI, and ACM MM. He serves as a reviewer for leading journals such as TPAMI, TIP, and Nature Human Behaviour, and was recognized as an Outstanding Reviewer for ICCV 2023. He is an organizer of the Gaze Workshop at CVPR and the Autonomous Driving Workshop at WACV. He leads a Ramsay Funding project and collaborates with industry partners in autonomous driving.

Yihua Cheng obtained his Ph.D. in Computer Science from Beihang University in December 2022, supervised by Prof. Feng Lu. He received his B.S. in Computer Science from Beijing University of Posts and Telecommunications in 2017.

Please feel free to contact me regarding any academic or business collaboration.

News

[2025-01]: 🔥 “Single-view Image to Novel-view Generation for Hand-Object Interactions” is accepted to AAAI 2025.
[2025-01]: 🔥 “Meta-learning enables complex cluster-specific few-shot binding affinity prediction for protein-protein interactions” is accepted to JCML.
[2024-10]: 🔥 Call for papers, deadline November 22! We are organizing the Human-Autonomous Vehicle Interaction (HAVI) Workshop at WACV 2025.
[2024-10]: 🔥 I was invited by Prof. Yoichi Sato to give a talk titled “Eye Tracking and Generation: Challenges and Future” at the University of Tokyo.
[2024-09]: “Integration of molecular coarse-grained model into geometric representation learning framework for protein-protein complex property prediction” is accepted to Nature Communications.
[2024-07]: “TextGaze: Gaze-Controllable Face Generation with Natural Language” is accepted to ACM MM24.
[2024-07]: “NL2Contact: Natural Language Guided 3D Hand-Object Contact Modeling with Diffusion Model” is accepted to ECCV24 (Oral Presentation).
[2024-04]: “Appearance-Based Gaze Estimation with Deep Learning: A Review and Benchmark” is accepted to TPAMI.
[2024-03]: “What Do You See in Vehicle? Comprehensive Vision Solution for In-Vehicle Gaze Estimation” is accepted to CVPR 2024. Please find the project here.
[2024-03]: Call for papers, deadline March 15! We are organizing the GAZE Workshop at CVPR 2024.
[2023-10]: One paper is accepted to WACV 2024.
[2023-08]: One paper is accepted to BMVC 2023 (Oral Presentation).
[2023-07]: “DVGaze: Dual-View Gaze Estimation” is accepted to ICCV 2023.
[2023-02]: I joined the University of Birmingham as a Postdoc.