Seungyub Han

I am a PhD student at the Communications and Machine Learning Laboratory, part of the Department of Electrical and Computer Engineering at Seoul National University, where I work on Reinforcement Learning, Robotics, and Deep Learning. My PhD advisor is Jungwoo Lee.

I have a BS in EE from Seoul National University.

E-mail: seungyubhan@snu.ac.kr

GitHub  /  Google Scholar  /  LinkedIn


Publications

I'm interested in reinforcement learning, robot learning, optimization, and representation learning.


Pitfall of Optimism: Distributional Reinforcement Learning by Randomizing Risk Criterion


Taehyun Cho, Seungyub Han, Heesoo Lee, Kyungjae Lee, Jungwoo Lee
NeurIPS, 2023
paper / arxiv /

We introduce a perturbed distributional Bellman optimality operator obtained by distorting the risk measure, and we prove the convergence and optimality of the proposed method under a weaker contraction property.


SPQR: Controlling Q-ensemble Independence with Spiked Random Model for Reinforcement Learning


Dohyeok Lee, Seungyub Han, Taehyun Cho, Jungwoo Lee
NeurIPS, 2023
paper /

By introducing a novel regularization loss for Q-ensemble independence based on random matrix theory, we propose spiked Wishart Q-ensemble independence regularization (SPQR) for reinforcement learning.


On the Convergence of Continual Learning with Adaptive Methods


Seungyub Han, Yeongmo Kim, Taehyun Cho, Jungwoo Lee
UAI, 2023
paper /

In this paper, we provide a convergence analysis of memory-based continual learning with stochastic gradient descent, along with empirical evidence that training on current tasks causes cumulative degradation of performance on previous tasks.


Learning to Learn Unlearned Feature for Brain Tumor Segmentation


Seungyub Han, Yeongmo Kim, Seokhyeon Ha, Jungwoo Lee, Seunghong Choi
Medical Imaging meets NeurIPS (NeurIPS 2018 Workshop), 2018
paper / arxiv /

One of the difficulties in medical image segmentation is the lack of datasets with proper annotations. To alleviate this problem, we propose active meta-tune, which achieves balanced parameters for both the glioma and brain metastasis domains within a few steps.




Preprints


Generative Adversarial Trainer: Defense to Adversarial Perturbations with GAN


Hyeungill Lee, Seungyub Han, Jungwoo Lee
arXiv, 2017
arxiv /

We propose a novel technique that makes neural networks robust to adversarial examples using a generative adversarial network. We alternately train the classifier and generator networks: the generator produces an adversarial perturbation that can easily fool the classifier by using the gradient of each image.




Other Activities


Design and source code from Jon Barron's website