Tao Chen 陈涛
Professor (Tenured)
Director of the Embedded Deep Learning and Visual Analysis Lab
Office: Room C5029, Interdisciplinary Building No. 2, 2005 Songhu Road, Shanghai 200438, China.
Tel: 021-31242503
E-mail: eetchen@fudan.edu.cn
Dr. Tao Chen, the PI of the Embedded Vision Lab, received his Ph.D. from Nanyang Technological University, Singapore, in 2012. At the beginning of 2018, he was selected for the National High-Level Overseas Talent Plan and subsequently joined Fudan University as a tenure-track Professor. Before joining Fudan, he worked at top research institutions including the Singapore Intelligent Robotics Laboratory, the Singapore Institute for Infocomm Research, and the Huawei Singapore Research Center. Since 2019, he has led a research team at Fudan focusing on lightweight deep vision model design, multimodal vision analysis, and edge device-aware vision applications. To date, Dr. Tao Chen has undertaken multiple projects funded by government agencies such as NSFC and by corporations such as Huawei and Tencent. He has published nearly 100 academic papers in reputable journals and conferences including IEEE T-PAMI, IJCV, T-IP, CVPR, and NeurIPS, and has been granted over 10 PCT patents.
News
2024.11 We have seven new papers accepted recently, including one paper accepted by T-CSVT: Efficient Architecture Search via Bi-level Data Pruning; three papers accepted by T-MM: Lightweight Model Pre-training via Language Guided Knowledge Distillation, WI3D: Weakly Incremental 3D Detection via Vision Foundation Models, and ShapeGPT: 3D Shape Generation with A Unified Multi-modal Language Model; two papers accepted by Neurocomputing: Instruct Pix-to-3D: Instructional 3D Object Generation from a Single Image, and Revisiting 3D Visual Grounding with Context-aware Feature Aggregation; and one paper accepted by RAL: Unbounded-GS: Extending 3D Gaussian Splatting with Hybrid Representation for Unbounded Large-Scale Scene Reconstruction.
2024.09 We are pleased to announce the acceptance of seven new papers this September, covering a range of cutting-edge research areas. These include Neural Coordinate Fields for Generative 3D Foundation Models (MeshXL), Causal Sequence Modelling for 3D Object Detection (3DET-Mamba), Training-Free Adaptive Diffusion, High-Performance Model Merging (EMR-Merging), 3D-Aware Image Composing with Language Instructions, Bridging the Pruning Gap through Soft-to-Hard Distillation (S2HPruner), and Fourier Neural Processes for Arbitrary-Resolution Data Assimilation (FNP).
2024.07 We have four papers accepted by the Proceedings of the European Conference on Computer Vision (ECCV) in 2024, focusing on 3D Assistant with Multi-modal Instructions, Test Time 3D Detection Adaptation, Network Sparsification via Stimulative Training, and Motion Controllers via Multimodal Prompts.
2024.04 We have one paper on 3D Dense Captioning, accepted by IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI), one paper on Compressed Neural Models, accepted by International Joint Conference on Artificial Intelligence (IJCAI 2024), and one paper on Hyperspectral Image Classifications, accepted by IEEE Transactions on Geoscience and Remote Sensing (T-GRS).
2024.02 We have three papers accepted by Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) in 2024, focusing on advancements in Large Language 3D Assistants, Vision-Language Transformer acceleration through pruning, and Vision Transformer Compression. Additionally, our work on Domain Generalized Point Cloud Classification has been accepted by the IEEE Transactions on Circuits and Systems for Video Technology (T-CSVT).
2024.01 We have one paper on “ReSimAD: Zero-Shot 3D Domain Transfer for Autonomous Driving with Source Reconstruction and Target Simulation”, accepted by Proc. of the International Conference on Learning Representations (ICLR), 2024 and one paper on “See Through the Real World Haze Scenes: Navigating the Synthetic-to-Real Gap in Challenging Image Dehazing”, accepted by Proc. of the International Conference on Robotics and Automation (ICRA), 2024.
2023.12 We have two papers, with one on multi-modal implicit large-scale scene neural representation and another on boosting residual networks with group knowledge, accepted by Proc. of the AAAI Conference on Artificial Intelligence (AAAI), 2024, and one paper on 3D point cloud data-scarce learning accepted by IEEE Transactions on Circuits and Systems for Video Technology (T-CSVT).
2023.09 We have four papers on Autonomous Driving, 3D Shape Generation, Efficient Vision Transformers, and 3D Motion Generation, accepted by Proc. of the Conference on Neural Information Processing Systems (NeurIPS), 2023, and one paper on Knowledge Distillation accepted by IEEE Transactions on Circuits and Systems for Video Technology (T-CSVT).
2023.08 We have one paper on “Rethinking Cross-Domain Pedestrian Detection: A Background-Focused Distribution Alignment Framework for One-Stage Detectors”, accepted by IEEE Transactions on Image Processing (TIP).
2023.07 We have two papers, with one on a large-scale outdoor multi-modal dataset and another on depth estimation, accepted by Proc. of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023. We have one paper on person Re-ID accepted by the ACM Conference on Multimedia (ACM MM), 2023.
2023.05 We have one paper on “Boost Transformer-based Language Models with GPU-Friendly Sparsity and Quantization”, accepted by Annual Meeting of the Association for Computational Linguistics (ACL), 2023.
2023.04 We have one paper on “Adversarial Amendment is the Only Force Capable of Transforming an Enemy into a Friend”, accepted by International Joint Conference on Artificial Intelligence (IJCAI), 2023.
2023.03 We have three papers, with one on Multitask CNNs Pruning accepted by IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI), one on Hyperspectral Image Classification accepted by IEEE Transactions on Geoscience and Remote Sensing (T-GRS), and one on Unsupervised Domain Adaptation accepted by IEEE Transactions on Circuits and Systems for Video Technology (T-CSVT).
2023.02 We have five papers on 3D Object Detection, 3D Dense Captioning, 3D Motion Generation, and Efficient Vision Transformers, accepted by Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023.
2023.01 We have one paper on “DCNet: Large-scale Point Cloud Semantic Segmentation with Discriminative and Efficient Feature Aggregation”, accepted by IEEE Transactions on Circuits and Systems for Video Technology (T-CSVT), 2023.
2022.12 We have one paper on “A Closer Look at Few-Shot 3D Point Cloud Classification”, accepted by International Journal of Computer Vision (IJCV), 2022.
2022.11 We have one paper on “Exploring Kernel-Based Texture Transfer for Pose-Guided Person Image Generation”, accepted by IEEE T-MM 2022.
2022.09 We have two papers, with one on multi-view scene reconstruction and another on strengthening residual networks, accepted by Proc. of the Conference on Neural Information Processing Systems (NeurIPS), 2022.
2022.07 We have one paper on “Efficient Joint-Dimensional Search with Solution Space Regularization for Real-Time Semantic Segmentation”, accepted by International Journal of Computer Vision (IJCV), 2022.
2022.06 We have two papers, with one on image classification and another on Efficient Image Classifier Search accepted by ACM MM, 2022.
2022.03 We have one paper on “Beta-decay regularization for differentiable architecture search”, accepted by Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR, Oral), 2022.
2022.01 We have one paper on “Sample-centric Feature Generation for Semi-supervised Few-shot Learning”, accepted by IEEE Transactions on Image Processing (TIP), 2022.
2021.12 We have one paper on “What Makes for Effective Few-shot Point Cloud Classification”, accepted by IEEE Winter Conference on Applications of Computer Vision (WACV), 2022.
2021.11 We have two papers, with one on point cloud instance segmentation accepted by IEEE T-PAMI 2021 and another on hyperspectral image classification accepted by IEEE T-GRS 2021.
2021.10 We have one paper on “Curriculum-style Local-to-global Adaptation for Cross-domain Remote Sensing Image Segmentation”, accepted by IEEE T-GRS 2021.
2021.09 We have one paper on “Joint Distribution Alignment via Adversarial Learning for Domain Adaptive Object Detection”, accepted by IEEE T-MM 2021.
2021.07 We have one paper on “Object-aware Long-short-range Spatial Alignment for Few-Shot Fine-Grained Image Classification”, accepted by ACM MM 2021.
2021.05 We have two papers, with one on GAN-based hair synthesis and editing and another on 3D point cloud segmentation, accepted by IEEE ICIP 2021.
2021.03 We have one paper on “Densely Semantic Enhancement for Domain Adaptive Region-free Detectors”, accepted for publication by IEEE T-CSVT, 2021.
2021.02 We have one paper on “EADNET: Efficient Asymmetric Dilated Network For Semantic Segmentation”, which achieves the fastest state-of-the-art semantic segmentation, accepted by IEEE ICASSP, 2021.
2020.11 We have one paper on “Coarse-to-Fine Gaze Redirection with Numerical and Pictorial Guidance”, accepted by IEEE Winter Conference on Applications of Computer Vision (WACV), 2021.
2020.10 We have one paper on “M3Lung-Sys: A Deep Learning System for Multi-Class Lung Pneumonia Screening from CT Imaging”, accepted by the IEEE Journal of Biomedical and Health Informatics (JBHI).
2020.07 We have one paper on “Dynamic Pedestrian Intrusion Detection”, accepted by the ACM Conference on Multimedia (ACM MM) 2020.
2020.06 We have one paper on “Robust Scene Text Spotting”, accepted by the Journal of Visual Communication and Image Representation (JVCIR).
2020.02 We have one paper on “Cascade EF-GAN for Facial Expression Editing”, accepted by IEEE CVPR 2020 (Oral), CCF A.
2020.01 We have one paper on “Fine-grained Facial Expression Analysis” accepted by Neurocomputing.
2019.07 We have one paper on “Compositional GAN for Facial Expression Recognition”, accepted by ACM MM, CCF A.
2019.03 We have one paper on “Semi-supervised Hierarchical CNN Learning”, published in IEEE T-IP, CCF A.
2019.02 Dr. Chen left Singapore and joined Fudan as a Professor.