Yunan Li (李宇楠)

- Associate Professor
- Doctoral Supervisor
- Master's Supervisor
- Main position: Huashan Elite Associate Professor
- Gender: Male
- Alma mater: Xidian University
- Education: Doctoral graduate
- Degree: Doctoral degree
- Employment status: On duty
- Affiliation: School of Computer Science and Technology
- Date of employment: 2020-01-13
- Discipline: Computer Application Technology
- Office: Room A1-726, Cybersecurity Building, South Campus
- [1] Yunan Li, Huizhou Chen. Image Hazing and Dehazing: From the Viewpoint of Two-Way Image Translation With a Weakly Supervised Framework [J]. IEEE Transactions on Multimedia, 2022.
- [2] Review of dynamic gesture recognition [J]. Virtual Reality & Intelligent Hardware, 3(3): 183-206.
- [3] Benjia Zhou, Yunan Li. Regional Attention with Architecture-Rebuilt 3D Network for RGB-D Gesture Recognition [C]. AAAI, 2021.
- [4] Jun Wan, Chi Lin, Longyin Wen, Yunan Li, Qiguang Miao. ChaLearn Looking at People: IsoGD and ConGD Large-Scale RGB-D Gesture Recognition [J]. IEEE Transactions on Cybernetics, 2020.
- [5] Yunan Li, Qiguang Miao, Sergio Escalera, Huijuan Fang, Huizhou Chen, Xiangda Qi, Guodong Guo. CR-Net: A Deep Classification-Regression Network for Multimodal Apparent Personality Analysis [J]. International Journal of Computer Vision, 2020, 128(1): 2763-2780.
- [6] Yunan Li, Wanli Ouyang. LAP-Net: Level-Aware Progressive Network for Image Dehazing [C]. ICCV, 2019: 3276-3285.
- [7] Yunan Li, Kuan Tian. Large-scale gesture recognition with a fusion of RGB-D data based on optical flow and the C3D model [J]. Pattern Recognition Letters, 2019, 119(1): 187-194.
- [8] Yunan Li, Xiangda Qi. A spatiotemporal attention-based ResC3D model for large-scale gesture recognition [J]. Machine Vision and Applications, 2019, 30(5): 875-888.
- [9] Qi Yuan, Chi Lin, Yunan Li. Global and Local Spatial-Attention Network for Isolated Gesture Recognition [C]. Chinese Conference on Biometric Recognition, 2019: 84-93.
- [10] Yunan Li, Kuan Tian. Large-scale Gesture Recognition with a Fusion of RGB-D Data Based on Saliency Theory and C3D model [J]. IEEE Transactions on Circuits and Systems for Video Technology, 2018, 28(10): 2956-2964.