Insu Han Profile
Insu Han

@insu_han

Followers
115
Following
93
Media
3
Statuses
9

Postdoc at Yale Univ.

Joined September 2011
@insu_han
Insu Han
3 years
Most infinitely wide NTK and NNGP kernels are based on the ReLU activation. In our new paper, we propose a method for computing neural kernels with *general* activations. For homogeneous activations, we approximate the kernel matrices with linear-time sketching algorithms.
7
14
66
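To make the "general activations" point concrete, here is a minimal illustrative sketch (not the paper's code): the dual kernel of an activation s is k(x, y) = E_w[s(wᵀx) s(wᵀy)] with w ~ N(0, I), and for s = erf there is a classical closed form (Williams, 1997) that a Monte Carlo estimate can be checked against. All names here are made up for illustration.

```python
import numpy as np
from math import erf

erf_v = np.vectorize(erf)  # vectorized scalar erf (avoids a scipy dependency)

def erf_kernel_closed(x, y):
    # Williams' closed form for the erf dual kernel:
    # (2/pi) * arcsin( 2 x.y / sqrt((1 + 2|x|^2)(1 + 2|y|^2)) )
    return (2 / np.pi) * np.arcsin(
        2 * (x @ y) / np.sqrt((1 + 2 * x @ x) * (1 + 2 * y @ y))
    )

def erf_kernel_mc(x, y, m=200_000, seed=0):
    # Monte Carlo estimate of E_w[erf(w.x) erf(w.y)], w ~ N(0, I)
    W = np.random.default_rng(seed).standard_normal((m, x.size))
    return float(np.mean(erf_v(W @ x) * erf_v(W @ y)))

x = np.array([0.3, 0.2, 0.1])
y = np.array([0.1, 0.4, 0.2])
print(erf_kernel_closed(x, y), erf_kernel_mc(x, y))
```

The same Monte Carlo definition works for any activation; the point of the paper's machinery is to avoid both the per-activation closed forms and the Monte Carlo cost.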
@insu_han
Insu Han
3 years
We open-source the NNGP and NTK for the new activations within the Neural Tangents dev branch, along with the sketching algorithm. Joint work with Amir Zandieh @ARomanNovak @hoonkp @Locchiu @aminkarbasi.
0
2
3
@insu_han
Insu Han
3 years
Still, computing full NTK matrices is a big pain; e.g., a 5-layer convolutional NTK requires 151 GPU hours. We accelerate NTK approximation with sketching techniques and provide a tight point-wise error bound. Our approximation takes only 1.5 GPU hours (a ~106× speedup) 🫢.
0
0
4
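A toy illustration of the idea behind sketching (NOT the paper's algorithm): replace the exact n×n kernel matrix K with Φ Φᵀ, where Φ is a cheap n×m random-feature matrix. For the ReLU dual kernel on unit vectors, K has a known closed form (the arc-cosine kernel), so the approximation error can be measured directly.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 5, 4, 200_000

X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # unit-norm rows

# Exact ReLU dual kernel on unit vectors (arc-cosine kernel):
# k(x, y) = (sin t + (pi - t) cos t) / (2 pi), t = angle between x and y
C = np.clip(X @ X.T, -1.0, 1.0)
T = np.arccos(C)
K_exact = (np.sin(T) + (np.pi - T) * C) / (2 * np.pi)

# Random features: phi(x) = relu(W x) / sqrt(m), so Phi @ Phi.T -> K_exact
W = rng.standard_normal((m, d))
Phi = np.maximum(X @ W.T, 0.0) / np.sqrt(m)
K_approx = Phi @ Phi.T

print(np.max(np.abs(K_approx - K_exact)))
```

The n×n exact computation is replaced by an n×m feature matrix; the paper's contribution is getting such feature maps for NTK/NNGP with a provable point-wise error bound rather than this naive Monte Carlo rate.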
@insu_han
Insu Han
3 years
We study another simple approach for the dual kernel by Gauss-Hermite quadrature:
[image attached]
0
0
3
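A minimal sketch of the Gauss-Hermite idea (illustrative interface, not the paper's code): the dual kernel k(ρ) = E[s(u) s(v)] over a bivariate Gaussian with correlation ρ is approximated by a tensor product of 1-D Gauss-Hermite rules, checked here against the exact ReLU (arc-cosine) formula.

```python
import numpy as np

def dual_kernel_gh(s, rho, n=100):
    t, w = np.polynomial.hermite.hermgauss(n)   # physicists' nodes/weights
    z = np.sqrt(2.0) * t                        # rescale to N(0, 1) nodes
    U = z[:, None]                              # u = z1
    V = rho * z[:, None] + np.sqrt(1.0 - rho**2) * z[None, :]  # v = rho z1 + sqrt(1-rho^2) z2
    Wgt = w[:, None] * w[None, :]               # tensor-product weights
    return float(np.sum(Wgt * s(U) * s(V)) / np.pi)

relu = lambda x: np.maximum(x, 0.0)

def relu_dual_exact(rho):
    # Arc-cosine closed form for the ReLU dual kernel
    t = np.arccos(rho)
    return (np.sin(t) + (np.pi - t) * np.cos(t)) / (2 * np.pi)

print(dual_kernel_gh(relu, 0.5), relu_dual_exact(0.5))
```

The quadrature never needs a closed-form dual kernel, only the ability to evaluate the activation at the nodes.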
@insu_han
Insu Han
3 years
In addition, we show how to automatically compute the dual kernel of the derivative without differentiating the activation, which is useful for characterizing the NTK when the activation is unknown (e.g., normalized Gaussian) or when the dual kernel of the derivative is unavailable in closed form (e.g., GeLU, ELU).
[image attached]
0
0
1
@insu_han
Insu Han
3 years
Our derivations are based on (1) an explicit expression of the dual kernel in terms of Hermite polynomials and (2) the fact that Hermite polynomials can play the role of random features for monomial kernels. Together, these allow inputs from the entire *R^d* space.
0
0
1
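Point (1) can be sketched as follows (illustrative code, not the paper's implementation): expand the activation in probabilists' Hermite polynomials; the dual kernel then becomes a power series in the input correlation ρ, namely k(ρ) = Σₙ aₙ² ρⁿ with aₙ the normalized Hermite coefficients of the activation. The helper names below are made up.

```python
import numpy as np

def hermite_coeffs(s, N, n_quad=200):
    # a_n = E[s(z) He_n(z)] / sqrt(n!) for z ~ N(0,1), via Gauss-Hermite
    t, w = np.polynomial.hermite.hermgauss(n_quad)
    z = np.sqrt(2.0) * t
    wz = w / np.sqrt(np.pi)                     # weights for E over N(0, 1)
    He_prev, He = np.ones_like(z), z            # He_0 = 1, He_1 = z
    coeffs = [np.sum(wz * s(z) * He_prev)]      # a_0
    fact = 1.0
    for n in range(1, N + 1):
        fact *= n
        coeffs.append(np.sum(wz * s(z) * He) / np.sqrt(fact))
        He_prev, He = He, z * He - n * He_prev  # He_{n+1} = z He_n - n He_{n-1}
    return np.array(coeffs)

relu = lambda x: np.maximum(x, 0.0)
a = hermite_coeffs(relu, N=25)

rho = 0.5
k_series = np.sum(a**2 * rho ** np.arange(a.size))

# Compare with the exact ReLU dual kernel (arc-cosine kernel):
t = np.arccos(rho)
k_exact = (np.sin(t) + (np.pi - t) * np.cos(t)) / (2 * np.pi)
print(k_series, k_exact)
```

The monomials ρⁿ in this series are exactly the monomial kernels that point (2) approximates with random features.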
@insu_han
Insu Han
3 years
We first characterize the kernel function of a single-layer neural network (a.k.a. the dual kernel) for various activations. This is a key building block for the NNGP and NTK of deeper architectures.
[image attached]
0
0
5
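For reference, the dual kernel of an activation s is defined as k(x, y) = E_w[s(wᵀx) s(wᵀy)] with w ~ N(0, I). A minimal Monte Carlo sketch of this definition (illustrative, not the paper's method), checked against the known arc-cosine closed form for ReLU on unit vectors:

```python
import numpy as np

def dual_kernel_mc(s, x, y, m=300_000, seed=1):
    # Monte Carlo estimate of E_w[s(w.x) s(w.y)], w ~ N(0, I)
    W = np.random.default_rng(seed).standard_normal((m, x.size))
    return float(np.mean(s(W @ x) * s(W @ y)))

relu = lambda z: np.maximum(z, 0.0)

# Unit vectors at a 60-degree angle:
x = np.array([1.0, 0.0])
y = np.array([0.5, np.sqrt(3) / 2])

t = np.arccos(np.clip(x @ y, -1.0, 1.0))
k_exact = (np.sin(t) + (np.pi - t) * np.cos(t)) / (2 * np.pi)
print(dual_kernel_mc(relu, x, y), k_exact)
```

Stacking these dual-kernel maps layer by layer is what yields the NNGP and NTK of deeper architectures.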
@insu_han
Insu Han
5 years
RT @dohmatobelvis: Good news: Our paper on Scalable learning and MAP inference in nonsymmetric Determinantal Point Processes…
0
6
0
@insu_han
Insu Han
5 years
RT @dohmatobelvis: Happy to share our recent preprint with .@mikegartrell, Insu Han, V.-E. Brunel, J. Gillenwater,.on Scalable learning and….
0
5
0