[91st TrustML Young Scientist Seminar] Talk by Yu-Hu Yan (Nanjing University) "Universal Online Learning with Gradient Variations"

2025/04/18 (Fri)
10:00〜11:00 (JST)

Organizer: RIKEN AIP Public

Date and Time: April 18, 2025, 10:00 - 11:00 (JST)
Venue: Online

Title: Universal Online Learning with Gradient Variations

Speaker: Yu-Hu Yan (Nanjing University)

Abstract: In this talk, I will introduce our recent work on enhancing online convex optimization approaches with two levels of adaptivity. At the higher level, our methods are agnostic to the curvature of the online functions; at the lower level, they adapt to the difficulty of the online learning problem, enabling problem-dependent guarantees. Specifically, I will introduce our two recent works on this topic, published at NeurIPS 2023 and 2024, which differ both in their results and in the technical ideas used. Our findings not only provide robust worst-case guarantees but also lead to small-loss bounds in the analysis. Furthermore, the applicability of our results extends to adversarial and stochastic convex optimization, as well as two-player zero-sum games, demonstrating both the significance of our research and the effectiveness of the proposed methods.
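As background for the gradient-variation adaptivity mentioned in the abstract, the following is a minimal Python sketch, not the speaker's algorithm: plain online gradient descent whose step size shrinks with the cumulative gradient variation (the sum of ||g_t - g_{t-1}||^2), one standard route to problem-dependent guarantees in online convex optimization. The ball-shaped feasible set, the specific step-size rule, and the toy quadratic losses are illustrative assumptions only.

import numpy as np

def project_to_ball(x, radius):
    # Euclidean projection onto the ball of the given radius (the assumed feasible set).
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def adaptive_ogd(grad_fn, dim, horizon, radius=1.0):
    # Online gradient descent with a step size that adapts to observed gradient variation.
    # grad_fn(t, x) returns the gradient of the t-th online loss at the point x.
    x = np.zeros(dim)
    prev_grad = np.zeros(dim)
    variation = 0.0  # running sum of ||g_t - g_{t-1}||^2
    iterates = []
    for t in range(horizon):
        g = grad_fn(t, x)
        variation += float(np.linalg.norm(g - prev_grad) ** 2)
        eta = radius / np.sqrt(1.0 + variation)  # step size shrinks as variation grows
        x = project_to_ball(x - eta * g, radius)
        prev_grad = g
        iterates.append(x.copy())
    return iterates

# Toy usage: quadratic losses f_t(x) = ||x - c_t||^2 with slowly drifting optima,
# a regime where small gradient variation corresponds to small regret.
rng = np.random.default_rng(0)
centers = np.cumsum(0.01 * rng.standard_normal((100, 5)), axis=0)
iterates = adaptive_ogd(lambda t, x: 2.0 * (x - centers[t]), dim=5, horizon=100)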

Bio: Yu-Hu Yan (https://www.lamda.nju.edu.cn/yanyh/) is a Ph.D. student in the LAMDA Group at the School of Artificial Intelligence, Nanjing University, under the supervision of Prof. Zhi-Hua Zhou and Assistant Prof. Peng Zhao. He earned his bachelor’s degree from Nanjing University in 2020. His research interests span online learning, optimization, online game/control, and large language model (LLM) alignment. His work has been published in top conferences and journals, including JMLR, NeurIPS, ICML, and AAAI.
