Jaehong Yoon

jaehong.yoon [at] kaist [dot] ac [dot] kr
CV    Github    Twitter    Google Scholar   

Last updated: May 15, 2022.


May 2022     (NEW!) Two papers accepted to ICML 2022.
Jan 2022     Two papers accepted to ICLR 2022, including one oral presentation (oral acceptance rate: 54/3391 ≈ 1.6%).


I’m a Ph.D. student in the Machine Learning and Artificial Intelligence (MLAI) Lab at KAIST, under the supervision of Prof. Sung Ju Hwang. I received my B.S. and M.S. degrees from UNIST in 2016 and 2018, respectively.
My expected graduation date is February 2023.

My research interests include the following topics:
  • Lifelong Machine Learning: Online learning, Continual learning, and Streaming learning
  • Collective Machine Intelligence: On-device learning and Federated learning
  • Learning with Incomplete Data: Un-/self-supervised learning and Coreset selection
  • Low-resource Learning: Network compression and Quantization

  • I'm eager to build lifelong and meta-cognitive learning algorithms for non-stationary problems and environments, working toward real-world artificial general intelligence. Recently, I've been focusing on connecting my research experience to related areas, such as open-world problems, online/streaming learning, reinforcement learning, multimodal learning, and language models.


Publications

    (*: equal contribution)

    [C9] Forgetting-free Continual Learning with Winning Subnetworks

    Haeyong Kang, Rusty J. L. Mina, Sultan R. H. Madjid, Jaehong Yoon, Chang D. Yoo, Sung Ju Hwang, and Mark Hasegawa-Johnson

    ICML 2022 (To appear)
    Paper Code BibTeX

    [C8] Bitwidth Heterogeneous Federated Learning with Progressive Weight Dequantization

    Jaehong Yoon*, Geon Park*, Wonyong Jeong, and Sung Ju Hwang

    ICML 2022 (To appear)
    Paper Code

    [C7] Representational Continuity for Unsupervised Continual Learning

    Divyam Madaan, Jaehong Yoon, Yuanchun Li, Yunxin Liu, and Sung Ju Hwang

    ICLR 2022 (Oral Presentation, acceptance rate = 1.6%)
    Paper Code

    [C6] Online Coreset Selection for Rehearsal-based Continual Learning

    Jaehong Yoon, Divyam Madaan, Eunho Yang, and Sung Ju Hwang

    ICLR 2022
    Paper Code

    [C5] Federated Continual Learning with Weighted Inter-client Transfer

    Jaehong Yoon*, Wonyong Jeong*, Giwoong Lee, Eunho Yang, and Sung Ju Hwang

    ICML 2020 Workshop on Lifelong Machine Learning
    ICML 2021
    Paper Code

    [C4] Federated Semi-supervised Learning with Inter-Client Consistency & Disjoint Learning

    Wonyong Jeong, Jaehong Yoon, Eunho Yang, and Sung Ju Hwang

    ICML 2020 Workshop on Federated Learning (Long Presentation) (Best Student Paper Award)
    ICLR 2021
    Paper Code

    [C3] Scalable and Order-robust Continual Learning with Additive Parameter Decomposition

    Jaehong Yoon, Saehoon Kim, Eunho Yang, and Sung Ju Hwang

    ICLR 2020
    Paper Code

    [C2] Lifelong Learning with Dynamically Expandable Networks

    Jaehong Yoon, Eunho Yang, Jeongtae Lee, and Sung Ju Hwang

    ICLR 2018
    Paper Code

    [C1] Combined Group and Exclusive Sparsity for Deep Neural Networks

    Jaehong Yoon and Sung Ju Hwang

    ICML 2017
    Paper Code


Preprints

    [P4] Personalized Subgraph Federated Learning

    Jinheon Baek*, Wonyong Jeong*, Jiongdao Jin, Jaehong Yoon, and Sung Ju Hwang

    Preprint, 2022
    Paper BibTeX

    [P3] BiTAT: Neural Network Binarization with Task-dependent Aggregated Transformation

    Geon Park*, Jaehong Yoon*, Haiyang Zhang, Xing Zhang, Sung Ju Hwang, and Yonina Eldar

    Preprint, 2022
    Paper BibTeX

    [P2] Rapid Structural Pruning of Neural Networks with Set-based Task-Adaptive Meta-Pruning

    Minyoung Song, Jaehong Yoon, Eunho Yang, and Sung Ju Hwang

    arXiv:2006.12130, 2020

    [P1] Adaptive Network Sparsification with Dependent Variational Beta-Bernoulli Dropout

    Juho Lee, Saehoon Kim, Jaehong Yoon, Hae Beom Lee, Eunho Yang, and Sung Ju Hwang

    arXiv:1805.10896, 2018


Experience

    Nov 2021 - Apr 2022     Microsoft Research, Research Intern (Advisor: Yue Cao)
    Mar 2018 - May 2018     AITRICS, Research Intern