Talk by Prof. Brad Rava (the University of Sydney): "Ask for More Than Bayes Optimal: A Theory of Indecisions for Classification"

2025/06/13 (Fri)
02:00-03:00

Organizer: RIKEN AIP Public

Venue: Hybrid (Zoom / RIKEN AIP Nihonbashi Office)
(RIKEN AIP Nihonbashi office is only for AIP members)

Speaker: Brad Rava (the University of Sydney)

Title: Ask for More Than Bayes Optimal: A Theory of Indecisions for Classification

Abstract: Selective classification is a powerful tool for automated decision-making in high-risk scenarios, allowing classifiers to make only highly confident decisions while abstaining when uncertainty is too high. Given a target classification accuracy, our goal is to minimize the number of indecisions, which are observations that we do not automate. For problems that are hard, the target accuracy may not be achievable without using indecisions. In contrast, by using indecisions, we are able to control the misclassification rate to any user-specified level, even below the Bayes optimal error rate, while minimizing the frequency of identifying an indecision.
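As a rough illustration of the abstention mechanism described above (a generic confidence-thresholded plug-in rule, not the specific procedure from the talk), the sketch below automates a prediction only when the estimated top-class probability clears a threshold tau and otherwise returns an indecision; the function name and threshold value are illustrative assumptions.

import numpy as np

def selective_predict(probs, tau=0.7):
    """Predict the argmax class when the top posterior probability
    reaches tau; otherwise return -1 to mark an indecision (the
    observation is not automated)."""
    preds = probs.argmax(axis=1)
    confident = probs.max(axis=1) >= tau
    preds[~confident] = -1
    return preds

# Toy example: estimated class probabilities for three observations.
probs = np.array([[0.95, 0.05],   # confident  -> automated decision
                  [0.55, 0.45],   # ambiguous  -> indecision
                  [0.20, 0.80]])  # confident  -> automated decision
print(selective_predict(probs, tau=0.70))   # -> [ 0 -1  1]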

We provide a full characterization of the minimax risk in selective classification, proving key continuity and monotonicity properties that enable optimal indecision selection. Our results extend to hypothesis testing, where we control type II error given a fixed type I error, introducing a novel perspective in selective inference. We analyze the impact of estimating the regression function η, showing that plug-in classifiers remain consistent and that accuracy-based calibration effectively controls indecision levels. Additionally, we develop finite-sample calibration methods and identify cases where no training data is needed under the Monotone Likelihood Ratio (MLR) property. In the binary Gaussian mixture model, we establish sharp phase transition results, demonstrating that minimal indecisions can yield near-optimal accuracy even with suboptimal class separation. These findings highlight the potential of selective classification to significantly reduce misclassification rates with a relatively small cost in terms of indecisions.
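The accuracy-based calibration mentioned above can be pictured as a simple grid search over thresholds on held-out data: take the smallest threshold whose accuracy on the accepted points reaches the target, so the indecision rate is kept as low as possible. This is only a sketch under assumed scikit-learn-style probability arrays (probs_cal, y_cal are hypothetical names); it does not reproduce the finite-sample guarantees or the MLR-based results discussed in the talk.

import numpy as np

def calibrate_threshold(probs, labels, target_acc=0.99):
    """On held-out calibration data, return the smallest threshold whose
    accuracy over the accepted (non-abstained) points reaches target_acc,
    together with the resulting indecision rate.  Illustrative only."""
    preds = probs.argmax(axis=1)
    conf = probs.max(axis=1)
    for tau in np.linspace(0.5, 1.0, 101):
        accepted = conf >= tau
        if not accepted.any():
            break  # every point would become an indecision
        acc = (preds[accepted] == labels[accepted]).mean()
        if acc >= target_acc:
            return tau, 1.0 - accepted.mean()
    return 1.0, 1.0   # target accuracy unreachable without full abstention

# Usage with hypothetical held-out arrays probs_cal (n x K) and y_cal (n,):
# tau, indecision_rate = calibrate_threshold(probs_cal, y_cal, target_acc=0.99)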

Bio: Brad Rava is a Lecturer (Assistant Professor) in Business Analytics at the University of Sydney's Business School. He received his Ph.D. in Statistics at the University of Southern California's Marshall School of Business, housed in the Department of Data Sciences and Operations, advised by Dr. Gareth James and Dr. Xin Tong.

Brad's research interests revolve around developing methods that rigorously communicate uncertainty in high-risk scenarios. Some of his research draws upon Human-AI collaboration, Selective Classification, Fairness in Machine Learning, Empirical Bayes techniques, Multiple Hypothesis Testing, and High Dimensional Statistics.
