Variational Uncertainty Decomposition for In-Context Learning

2025/09/01 (Mon)
05:00〜06:00

Organizer: RIKEN AIP Public

This talk will be held in a hybrid format, both in person at the AIP Open Space of RIKEN AIP (Nihonbashi office) and online via Zoom. AIP Open Space: *only available to AIP researchers.

DATE, TIME & LOCATION
Monday, September 1st, 14:00 - 15:00, RIKEN AIP Nihonbashi Office, Open Space

TITLE
Variational Uncertainty Decomposition for In-Context Learning

ABSTRACT
As large language models (LLMs) gain popularity in conducting prediction tasks in-context, understanding the sources of uncertainty in in-context learning becomes essential to ensuring reliability. The recent hypothesis of in-context learning performing predictive Bayesian inference opens the avenue for Bayesian uncertainty estimation, particularly for decomposing uncertainty into epistemic uncertainty due to lack of in-context data and aleatoric uncertainty inherent in the in-context prediction task. However, the decomposition idea remains under-explored due to the intractability of the latent parameter posterior from the underlying Bayesian model. In this work, we introduce a variational uncertainty decomposition framework for in-context learning without explicitly sampling from the latent parameter posterior, by optimising auxiliary inputs as probes to obtain an upper bound to the aleatoric uncertainty of an LLM's in-context learning procedure. Through experiments on synthetic and real-world tasks, we show quantitatively and qualitatively that the decomposed uncertainties obtained from our method exhibit desirable properties of epistemic and aleatoric uncertainty.
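For background, the classical entropy-based Bayesian decomposition that the abstract builds on (total predictive uncertainty = aleatoric + epistemic) can be sketched with explicit Monte Carlo posterior samples. Note this is only the standard textbook decomposition for illustration, not the paper's variational method, which specifically avoids sampling from the latent parameter posterior:

```python
import numpy as np

def entropy(p, axis=-1):
    # Shannon entropy of categorical distributions along `axis`.
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p), axis=axis)

def decompose_uncertainty(probs):
    """Classical Bayesian uncertainty decomposition (illustrative).

    probs: array of shape (S, C) holding predictive distributions
    p(y | x, theta_s) for S posterior samples theta_s over C classes.

    Returns (total, aleatoric, epistemic), where
      total     = H[ E_s p(y | x, theta_s) ]   (predictive entropy)
      aleatoric = E_s H[ p(y | x, theta_s) ]   (expected entropy)
      epistemic = total - aleatoric            (mutual information)
    """
    total = entropy(probs.mean(axis=0))
    aleatoric = entropy(probs, axis=-1).mean()
    return total, aleatoric, total - aleatoric

# Posterior samples that agree: uncertainty is purely aleatoric.
agree = np.array([[0.5, 0.5],
                  [0.5, 0.5]])
# Posterior samples that disagree: epistemic uncertainty is large.
disagree = np.array([[0.9, 0.1],
                     [0.1, 0.9]])
```

In the `agree` case the epistemic term vanishes (every posterior sample makes the same prediction), while in the `disagree` case the same averaged prediction carries a large epistemic component. The talk's framework targets exactly this decomposition for in-context learning, but obtains an upper bound on the aleatoric term by optimising auxiliary probe inputs instead of sampling `theta_s`.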

BIO
Yingzhen Li is an Associate Professor in Machine Learning at the Department of Computing, Imperial College London, UK. Before that she was a senior researcher at Microsoft Research Cambridge, and previously she has interned at Disney Research. She received her PhD in engineering from the University of Cambridge, UK. Yingzhen is passionate about building reliable machine learning systems, and her approach combines both Bayesian statistics and deep learning. She has worked extensively on approximate inference methods with applications to Bayesian deep learning and deep generative models, and her work has been applied in industrial systems and implemented in deep learning frameworks (e.g. Tensorflow Probability and Pyro). She regularly gives tutorials and lectures on probabilistic ML and generative models at machine learning research summer schools, including invited tutorials on Approximate Inference at NeurIPS 2020 and UAI 2025. She was a co-organiser of the Advances in Approximate Bayesian Inference (AABI) symposium in 2020-2023, as well as many NeurIPS/ICML/ICLR workshops on topics related to probabilistic ML. She is a Program Chair for AISTATS 2024 and she serves as a General Chair for AISTATS 2025 and 2026. Her work on Bayesian ML has also been recognised in AAAI 2023 New Faculty Highlights.
