Profile

Jaehong Yoon

jaehong.yoon [at] kaist [dot] ac [dot] kr
GitHub,    Google Scholar,    CV

Last updated: 10. 07. 2021.

Introduction

I am a Ph.D. student in the Machine Learning and Artificial Intelligence (MLAI) Lab at KAIST, advised by Prof. Sung Ju Hwang. I received my B.S. (2016) and M.S. (2018) degrees from UNIST, where I was also supervised by Prof. Sung Ju Hwang.

My research mainly focuses on developing novel models and algorithms to tackle practical challenges in deploying on-device artificial intelligence systems across various real-world application domains.

I currently focus on the following topics:
  • Continual learning, transfer learning, and domain adaptation
  • Network pruning and quantization
  • Federated learning
  • Unsupervised/self-supervised representation learning
  • Learning with biased and noisy inputs

Publications and Preprints

Rethinking the Representational Continuity: Towards Unsupervised Continual Learning
Divyam Madaan, Jaehong Yoon, Yuanchun Li, Yunxin Liu, and Sung Ju Hwang
In submission, 2021

Online Coreset Selection for Rehearsal-based Continual Learning
Jaehong Yoon, Divyam Madaan, Eunho Yang, and Sung Ju Hwang
arXiv:2106.01085, 2021 [Paper]

Federated Continual Learning with Weighted Inter-client Transfer
Jaehong Yoon*, Wonyong Jeong*, Giwoong Lee, Eunho Yang, and Sung Ju Hwang (*: equal contribution)
ICML 2020 Workshop on Lifelong Machine Learning
ICML 2021 [Paper]

Federated Semi-supervised Learning with Inter-Client Consistency & Disjoint Learning
Wonyong Jeong, Jaehong Yoon, Eunho Yang, and Sung Ju Hwang
ICML 2020 Workshop on Federated Learning (Long Presentation, Best Student Paper Award)
ICLR 2021 [Paper]

Rapid Structural Pruning of Neural Networks with Set-based Task-Adaptive Meta-Pruning
Minyoung Song, Jaehong Yoon, Eunho Yang, and Sung Ju Hwang
arXiv:2006.12130, 2020 [Paper]

Scalable and Order-robust Continual Learning with Additive Parameter Decomposition
Jaehong Yoon, Saehoon Kim, Eunho Yang, and Sung Ju Hwang
ICLR 2020 [Paper]

Adaptive Network Sparsification with Dependent Variational Beta-Bernoulli Dropout
Juho Lee, Saehoon Kim, Jaehong Yoon, Hae Beom Lee, Eunho Yang, and Sung Ju Hwang
arXiv:1805.10896, 2018 [Paper]

Lifelong Learning with Dynamically Expandable Networks
Jaehong Yoon, Eunho Yang, Jeongtae Lee, and Sung Ju Hwang
ICLR 2018 [Paper]

Combined Group and Exclusive Sparsity for Deep Neural Networks
Jaehong Yoon and Sung Ju Hwang
ICML 2017 [Paper]

Patents (US Only)

Method and Apparatus with Neural Network and Training
Jaehong Yoon, Saehoon Kim, Eunho Yang, and Sung Ju Hwang
US 20210256374 A1, Aug 2021

Electronic Apparatus and Method for Re-learning Trained Model
Jaehong Yoon, Eunho Yang, Jeongtae Lee, and Sung Ju Hwang
US 20180357539 A1, Dec 2018

Experience

Nov 2021 – Present
Microsoft Research
Research Intern