Jaehong Yoon

jaehong.yoon [at] kaist [dot] ac [dot] kr
CV    Github    Twitter    Google Scholar   

Last updated: September 19, 2022.


Aug 2022     One paper was accepted to the ECCV 2022 Workshop on Computational Aspects of Deep Learning (CADL).
May 2022     Two papers were accepted to ICML 2022.
Jan 2022     Two papers were accepted to ICLR 2022, including one oral presentation (acceptance rate = 54/3391 = 1.6%).


I am a Ph.D. candidate in the Machine Learning and Artificial Intelligence (MLAI) Lab at KAIST, advised by Prof. Sung Ju Hwang. I received my B.S. and M.S. degrees from UNIST in 2016 and 2018, respectively.
My expected graduation date is February 2023.

My research interests include the following topics:
  • Online Continual Learning: Lifelong Learning and Video Streaming Learning
  • On-device Learning: Federated Learning and Neural Network Compression
  • Egocentric Vision: Video Representation Learning and Audio-video Multimodal Learning
  • Learning with Real-world Data: Un-/Semi-supervised Learning and Coreset Selection

My research mainly focuses on developing lifelong-evolving and meta-cognitive algorithms for deploying on-device artificial general intelligence systems. In particular, I have been tackling practical, real-world challenges across various application domains, such as online/streaming learning, egocentric videos, and audio-video multimodal problems.


    (*: equal contribution)

    [W1] BiTAT: Neural Network Binarization with Task-dependent Aggregated Transformation

    Geon Park*, Jaehong Yoon*, Haiyang Zhang, Xing Zhang, Sung Ju Hwang, and Yonina Eldar

    ECCV 2022 Workshop on Computational Aspects of Deep Learning (CADL) (To appear)

    [C9] Bitwidth Heterogeneous Federated Learning with Progressive Weight Dequantization

    Jaehong Yoon*, Geon Park*, Wonyong Jeong, and Sung Ju Hwang

    ICML 2022
    Paper Code

    [C8] Forget-free Continual Learning with Winning Subnetworks

    Haeyong Kang*, Rusty J. L. Mina*, Sultan R. H. Madjid, Jaehong Yoon, Mark Hasegawa-Johnson, Sung Ju Hwang, and Chang D. Yoo

    ICML 2022
    Paper Code

    [C7] Representational Continuity for Unsupervised Continual Learning

    Divyam Madaan, Jaehong Yoon, Yuanchun Li, Yunxin Liu, and Sung Ju Hwang

    ICLR 2022 (Oral Presentation, acceptance rate = 1.6%)
    Paper Code

    [C6] Online Coreset Selection for Rehearsal-based Continual Learning

    Jaehong Yoon, Divyam Madaan, Eunho Yang, and Sung Ju Hwang

    ICLR 2022
    Paper Code

    [C5] Federated Continual Learning with Weighted Inter-client Transfer

    Jaehong Yoon*, Wonyong Jeong*, Giwoong Lee, Eunho Yang, and Sung Ju Hwang

    ICML 2020 Workshop on Lifelong Machine Learning
    ICML 2021
    Paper Code

    [C4] Federated Semi-supervised Learning with Inter-Client Consistency & Disjoint Learning

    Wonyong Jeong, Jaehong Yoon, Eunho Yang, and Sung Ju Hwang

    ICML 2020 Workshop on Federated Learning (Long Presentation) (Best Student Paper Award)
    ICLR 2021
    Paper Code

    [C3] Scalable and Order-robust Continual Learning with Additive Parameter Decomposition

    Jaehong Yoon, Saehoon Kim, Eunho Yang, and Sung Ju Hwang

    ICLR 2020
    Paper Code

    [C2] Lifelong Learning with Dynamically Expandable Networks

    Jaehong Yoon, Eunho Yang, Jeongtae Lee, and Sung Ju Hwang

    ICLR 2018
    Paper Code

    [C1] Combined Group and Exclusive Sparsity for Deep Neural Networks

    Jaehong Yoon and Sung Ju Hwang

    ICML 2017
    Paper Code


    [P4] On the Soft-Subnetwork for Few-shot Class Incremental Learning

    Haeyong Kang, Jaehong Yoon, Sultan R. H. Madjid, Sung Ju Hwang, and Chang D. Yoo

    arXiv:2209.07529, 2022

    [P3] Personalized Subgraph Federated Learning

    Jinheon Baek*, Wonyong Jeong*, Jiongdao Jin, Jaehong Yoon, and Sung Ju Hwang

    arXiv:2206.10206, 2022

    [P2] Rapid Structural Pruning of Neural Networks with Set-based Task-Adaptive Meta-Pruning

    Minyoung Song, Jaehong Yoon, Eunho Yang, and Sung Ju Hwang

    arXiv:2006.12130, 2020

    [P1] Adaptive Network Sparsification with Dependent Variational Beta-Bernoulli Dropout

    Juho Lee, Saehoon Kim, Jaehong Yoon, Hae Beom Lee, Eunho Yang, and Sung Ju Hwang

    arXiv:1805.10896, 2018


    Oct 2022 - Nov 2022     Weizmann Institute of Science, Visiting Researcher (Host: Prof. Yonina Eldar)
    Nov 2021 - Apr 2022     Microsoft Research, Research Intern (Mentor: Dr. Yue Cao)
    Mar 2018 - May 2018     AITRICS, Research Intern