Dongyeop Lee


I am a third-year Master’s student (PhD student from spring 2026) in the Graduate School of Artificial Intelligence at POSTECH, under the supervision of Professor Namhoon Lee.

I study large-scale optimization for (a) distributed training, (b) neural network sparsification, and (c) deep learning in general. My research focuses on uncovering principles for training extreme-scale machine learning systems on minimal infrastructure, so that existing resources can be used as they are.

You can contact me at dongyeop.lee2@postech.ac.kr.

news

Nov 27, 2025 I am honored to have been selected as a winner of the Qualcomm Innovation Fellowship Korea 2025!
Nov 17, 2025 Our new paper on achieving extreme LLM sparsity won the Best Paper Award at JKAIA 2025! (related news)
May 9, 2025 One paper has been accepted to UAI 2025 🇧🇷: “Critical Influence of Overparameterization on Sharpness-aware Minimization”.
May 5, 2025 Two papers have been accepted to ICML 2025 🇨🇦: SAFE and Sassha.
Sep 2, 2024 Excited to be working as a student researcher at Google for the next 12+ weeks!

selected publications

2025

  1. The Unseen Frontier: Pushing the Limits of LLM Sparsity with Surrogate-Free ADMM
    Kwanhee Lee, Hyeondo Jang, Dongyeop Lee, Dan Alistarh, and Namhoon Lee
    arXiv preprint (Best Paper Award 🏆 @ JKAIA 2025), Oct 2025
  2. SAFE: Finding Sparse and Flat Minima to Improve Pruning
    Dongyeop Lee, Kwanhee Lee, Jinseok Chung, and Namhoon Lee
    ICML 2025 (spotlight), Jul 2025
  3. SASSHA: Sharpness-aware Adaptive Second-order Optimization with Stable Hessian Approximation
    Dahun Shin*, Dongyeop Lee*, Jinseok Chung, and Namhoon Lee
    ICML 2025 (CKAIA 2024), Jul 2025
  4. Critical Influence of Overparameterization on Sharpness-aware Minimization
    Sungbin Shin*, Dongyeop Lee*, Maksym Andriushchenko, and Namhoon Lee
    UAI 2025 (ICML 2023 HiLD Workshop, Best Paper Award 🏆 @ JKAIA 2023), Jul 2025