[Tensor Learning Team Seminar] Talk by Prof. John Paisley (Columbia University) on Gaussian Process Tilted Nonparametric Density Estimation using Fisher Divergence Score Matching

2025/07/01 (Tue)
06:00–07:00

Organizer: RIKEN AIP Public

Date, Time and Location:
July 1, 2025, 15:00 - 16:00 (JST)
This lecture will be held both in person at the AIP open space and online by Zoom.
*AIP open space is available to AIP researchers only
Title: Gaussian Process Tilted Nonparametric Density Estimation using Fisher Divergence Score Matching
Abstract: We present three Fisher divergence (FD) minimization algorithms for learning Gaussian process (GP) based score models for lower dimensional density estimation problems. By representing the GP part of the score as a linear function using the random Fourier feature (RFF) approximation, we show that all learning problems can be solved in closed form. This includes the basic and noise conditional versions of the Fisher divergence, as well as a novel alternative to noise conditional FD models based on variational inference (VI). Here, we propose using an ELBO-like optimization of the approximate posterior with which we derive a Fisher variational predictive distribution. The RFF representation of the GP, which is functionally equivalent to a single layer neural network score model with cosine activation, provides a unique linear form for which all VI expectations are in closed form. We demonstrate our three learning algorithms, as well as a MAP baseline algorithm, on several low dimensional density estimation problems.
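To give a concrete sense of the closed-form learning the abstract describes, the sketch below implements the simplest unconditional case in one dimension: a linear-in-parameters score model built on random Fourier features (RFF), fit by minimizing the Hyvärinen score matching objective, which for this model reduces to solving a linear system. This is a toy illustration under our own assumptions, not the speaker's actual algorithm; all names, bandwidths, and sample sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 1-D standard normal, whose true score is s(x) = -x.
X = rng.normal(0.0, 1.0, size=5000)

# Random Fourier features approximating an RBF kernel (unit bandwidth).
D = 100
omega = rng.normal(0.0, 1.0, size=D)      # random frequencies
b = rng.uniform(0.0, 2 * np.pi, size=D)   # random phases

def feats(x):
    # psi(x): D-dimensional RFF feature vector for each scalar in x.
    return np.sqrt(2.0 / D) * np.cos(np.outer(x, omega) + b)

def dfeats(x):
    # psi'(x): elementwise derivative of the features w.r.t. x.
    return -np.sqrt(2.0 / D) * omega * np.sin(np.outer(x, omega) + b)

# Hyvarinen score matching for the linear score model s(x) = w @ psi(x):
#   J(w) = E[ 0.5 * (w @ psi(x))**2 + w @ psi'(x) ]
# which is quadratic in w, so the minimizer is closed form:
#   w = -A^{-1} g,  A = E[psi psi^T],  g = E[psi'].
Psi = feats(X)                                   # (N, D)
dPsi = dfeats(X)                                 # (N, D)
A = Psi.T @ Psi / len(X) + 1e-6 * np.eye(D)      # small ridge for stability
g = dPsi.mean(axis=0)
w = -np.linalg.solve(A, g)

def score(x):
    return feats(x) @ w

# Near the data bulk the learned score should track the true score -x.
print(score(np.array([-1.0, 0.0, 1.0])))  # should be near [1, 0, -1]
```

The same linear-in-features structure is why, as the abstract notes, the noise conditional and variational variants also stay tractable: every expectation involves only quadratic forms in `w`.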
Bio: John Paisley is an Associate Professor in the Department of Electrical Engineering at Columbia University, where he is also a member of the Data Science Institute. His research interests include Bayesian models and inference, with applications to machine learning problems. Before joining Columbia in 2013, he was a postdoctoral researcher in the computer science departments at Princeton University and UC Berkeley. He received the BSE and PhD degrees in Electrical and Computer Engineering from Duke University in 2004 and 2010, respectively.