Jaehong Yoon

jaehong.yoon [at] kaist [dot] ac [dot] kr
GitHub    Twitter    Google Scholar


02. 2023     I'm currently on the job market for Postdoc or Research Scientist positions.
01. 2023     One paper was accepted to ICLR 2023.
05. 2022     Two papers were accepted to ICML 2022.
01. 2022     Two papers were accepted to ICLR 2022, including one oral presentation (oral acceptance rate = 54/3391 ≈ 1.6%).


I’m a Ph.D. candidate in the Machine Learning and Artificial Intelligence (MLAI) Lab at KAIST, advised by Prof. Sung Ju Hwang. I received my B.S. and M.S. degrees from UNIST in 2016 and 2018, respectively. I interned at Microsoft Research (mentor: Dr. Yue Cao) and was a short-term visiting student at the Weizmann Institute of Science (host: Prof. Yonina Eldar).

My research interests include the following topics:
  • Efficient Deep Learning: Continual Learning, Federated Learning, and Neural Network Compression
  • Egocentric Vision: Video Understanding, and Multimodal Learning with video, audio, and language information
  • Learning with Real-world Data: Un-(Self-)/Semi-supervised Learning and Input Selective Training

My research mainly focuses on developing lifelong-evolving and efficient deep learning algorithms for deploying sustainable on-device artificial general intelligence systems. In particular, I've been tackling practical, real-world challenges in application domains such as online/streaming learning, egocentric videos, and audio-video-text multimodal problems.

    New Preprints

    [P6] Text-Guided Token Selection for Text-to-Image Synthesis with Token-based Diffusion Models

    Jaewoong Lee*, Sangwon Jang*, Jaehyeong Jo, Jaehong Yoon, Yunji Kim, Jin-Hwa Kim, Jung-Woo Ha, Sung Ju Hwang

    Preprint, 2023
    Paper BibTeX

    [P5] Continual Learners are Incremental Model Generalizers

    Jaehong Yoon, Sung Ju Hwang, and Yue Cao

    Preprint, 2023
    Paper BibTeX

    [P4] Efficient Video Representation Learning via Masked Video Modeling with Motion-centric Token Selection

    Sunil Hwang*, Jaehong Yoon*, Youngwan Lee, and Sung Ju Hwang

    arXiv:2211.10636, 2022
    Paper Code

    [P3] Personalized Subgraph Federated Learning

    Jinheon Baek*, Wonyong Jeong*, Jiongdao Jin, Jaehong Yoon, and Sung Ju Hwang

    arXiv:2206.10206, 2022


    Publications

    (*: equal contribution)

    [C10] On the Soft-Subnetwork for Few-shot Class Incremental Learning

    Haeyong Kang, Jaehong Yoon, Sultan R. H. Madjid, Sung Ju Hwang, and Chang D. Yoo

    ICLR 2023
    Paper Code

    [W1] BiTAT: Neural Network Binarization with Task-dependent Aggregated Transformation

    Geon Park*, Jaehong Yoon*, Haiyang Zhang, Xing Zhang, Sung Ju Hwang, and Yonina Eldar

    ECCV 2022 Workshop on Computational Aspects of Deep Learning (CADL)

    [C9] Bitwidth Heterogeneous Federated Learning with Progressive Weight Dequantization

    Jaehong Yoon*, Geon Park*, Wonyong Jeong, and Sung Ju Hwang

    ICML 2022
    Paper Code

    [C8] Forget-free Continual Learning with Winning Subnetworks

    Haeyong Kang*, Rusty J. L. Mina*, Sultan R. H. Madjid, Jaehong Yoon, Mark Hasegawa-Johnson, Sung Ju Hwang, and Chang D. Yoo

    ICML 2022
    Paper Code

    [C7] Representational Continuity for Unsupervised Continual Learning

    Divyam Madaan, Jaehong Yoon, Yuanchun Li, Yunxin Liu, and Sung Ju Hwang

    ICLR 2022   Oral Presentation (Acceptance Rate = 1.6%)
    Paper Code

    [C6] Online Coreset Selection for Rehearsal-based Continual Learning

    Jaehong Yoon, Divyam Madaan, Eunho Yang, and Sung Ju Hwang

    ICLR 2022
    Paper Code

    [C5] Federated Continual Learning with Weighted Inter-client Transfer

    Jaehong Yoon*, Wonyong Jeong*, Giwoong Lee, Eunho Yang, and Sung Ju Hwang

    ICML 2020 Workshop on Lifelong Machine Learning
    ICML 2021
    Paper Code

    [C4] Federated Semi-supervised Learning with Inter-Client Consistency & Disjoint Learning

    Wonyong Jeong, Jaehong Yoon, Eunho Yang, and Sung Ju Hwang

    ICML 2020 Workshop on Federated Learning (Long Presentation) (Best Student Paper Award)
    ICLR 2021
    Paper Code

    [C3] Scalable and Order-robust Continual Learning with Additive Parameter Decomposition

    Jaehong Yoon, Saehoon Kim, Eunho Yang, and Sung Ju Hwang

    ICLR 2020
    Paper Code

    [C2] Lifelong Learning with Dynamically Expandable Networks

    Jaehong Yoon, Eunho Yang, Jeongtae Lee, and Sung Ju Hwang

    ICLR 2018
    Paper Code

    [C1] Combined Group and Exclusive Sparsity for Deep Neural Networks

    Jaehong Yoon and Sung Ju Hwang

    ICML 2017
    Paper Code

    [P2] Rapid Structural Pruning of Neural Networks with Set-based Task-Adaptive Meta-Pruning

    Minyoung Song, Jaehong Yoon, Eunho Yang, and Sung Ju Hwang

    arXiv:2006.12130, 2020

    [P1] Adaptive Network Sparsification with Dependent Variational Beta-Bernoulli Dropout

    Juho Lee, Saehoon Kim, Jaehong Yoon, Hae Beom Lee, Eunho Yang, and Sung Ju Hwang

    arXiv:1805.10896, 2018