Natural Language Processing Lab

@ UNIST

Advancing next-generation language AI with human-level knowledge learning and reasoning capabilities. We are recruiting graduate students and researchers.

NLP Lab group photo

Recent News
  • Prof. Na will serve as an Area Chair for NeurIPS 2026
  • Three papers accepted to ACL 2026 (all Findings)
  • One paper accepted to Expert Systems with Applications
  • One paper accepted to CVPR 2026 (Findings)
  • One paper accepted to TACL
  • One paper accepted to ICLR 2026
  • One paper accepted to Expert Systems with Applications
  • One paper accepted to EMNLP 2025 (Findings)

Overview: Our Mission and Current Themes


The NLP Lab at UNIST aims to develop a System 2-based language AI that simulates how humans acquire and expand knowledge, ultimately striving to build an Artificial General Language Intelligence (AGLI) equipped with progressive knowledge learning and manipulation capabilities. Accordingly, our research goes beyond System 1 abilities focused on short-term factual recall and seeks to endow large language models (LLMs) with System 2-level cognitive skills such as long-term learning, conceptual understanding, and creative knowledge composition.

In particular, noting that current LLMs, while remarkable, remain inefficient at knowledge injection and manipulation and fall qualitatively short of human-level capabilities, our current interests include:

  • Editing and leveraging knowledge in unstructured text
  • Efficient reasoning based on knowledge learning
  • Integrating external memory with parameter-efficient LLMs
  • Progressive knowledge expansion via Mixture-of-Experts (MoE)

In the mid-term, the lab also aims to develop parametric equivalents of in-context knowledge editing. In the long-term, we seek mechanisms for long-term conceptual learning, ultimately enabling LLM agents to master knowledge at a human level. Overall, we are dedicated to establishing foundational technologies that will drive next-generation language intelligence.

Introduction

  • Introduction to the Natural Language Processing Lab [pdf]
  • Introduction to the Natural Language Processing Lab (Korean) [pdf]

Announcement

We are currently recruiting graduate students and researchers. If you are interested in joining our lab, please send an email describing your research interests and experience (including your CV)!

  • Admission guides for international students (graduate school) have been announced. Please refer to the board on the school homepage.

Recent Publications
  1. ACL 2026 Findings
    PRIME: Ultra-Low-Rank Principal-Residual Model Merging
    Seung-Ho Lee, Kyungsu Lee, Bazarvaani Zuchi, Jeongmin Ahn, Insuk Seo, Donghyeon Jeon, Inho Kang, Seung-Hoon Na
  2. ACL 2026 Findings
    PURE: Post-hoc Unlocking and REfinement for Discrete Diffusion Decoding
    Yangryeol Park, Kunhui Lee, Hanback Choi, Cheoneum Park, Donghyeon Jeon, Inho Kang, Seung-Hoon Na
  3. ACL 2026 Findings
    EAIR: Entity-aware Inference-Time Knowledge Routing for Multi-Hop Knowledge Editing
    Jungyu Lee, Kunhui Lee, Gyun Lee, Seung-Hoon Na
  4. CVPR 2026 Findings
    AlphaMerging: Orthogonal Subspace Projection of Task Vectors to Reduce Task Interference for Multi-Task Model Merging
    Bazarvaani Zuchi, Seung-Ho Lee, Jeongmin Ahn, Donghyeon Jeon, Inho Kang, Seung-Hoon Na
  5. Expert Systems with Applications 2026
    GateLM: Jointly Injecting Knowledge Graphs and Texts for Reasoning-Enhanced Language Models on Commonsense Question Answering
    Jinwoo Min, Kun-Hui Lee, Roseline Nyange, Seung-Hoon Na
  6. TACL 2026 (accepted)
    OrthoEdit: Principled and Stable Knowledge Editing via Orthogonal Subspace Projection
    Shanbao Qiao, Xuebing Liu, Akshat Gupta, Seung-Hoon Na
  7. ICLR 2026
    MergePRAG: Orthogonal Merging of Passage-experts for Multi-hop Parametric RAG
    Xuebing Liu, Shanbao Qiao, Roseline Nyange, Dongwook Min, Hyun Kim, Seung-Hoon Na
  8. Expert Systems with Applications 2025
    MECA: Modular Editing via Customized Expert Networks and Adaptors in Large Language Models
    Roseline Nyange, Shanbao Qiao, Seung-Hoon Na
  9. EMNLP 2025 Findings
    GenPoE: Generative Passage-level Mixture of Experts for Knowledge Enhancement of LLMs
    Xuebing Liu, Shanbao Qiao, Seung-Hoon Na
  10. NTCIR-18 2025
    Optimizing Causality-Based Radiology Reporting with Retrieval-Augmented and Structured Reasoning Approaches for the NTCIR-18 HIDDEN-RAD Task
    Ju-Min Cho, Ho-Jin Yi, Myung-Kyu Kim, Seung-Hoon Na [PDF]

Research Topics
  • NLP (Natural Language Processing)
  • LLM (Large Language Model)
  • RAG (Retrieval-Augmented Generation)
  • MoE (Mixture-of-Experts)
  • Knowledge Editing
  • Diffusion-based Language Models
  • Model Merging
  • Agentic AI

Courses

UNIST (Ulsan National Institute of Science and Technology)

2026

  • Natural Language Processing - CSE40201 - 2026 1st semester

2025

  • Principles of Deep Learning - AI50201 - 2025 2nd semester
  • Natural Language Processing - CSE40201 - 2025 1st semester