Vincent Adam Profile
Vincent Adam

@vincentadam87

Followers: 339 · Following: 152 · Media: 6 · Statuses: 93

Machine Learning Research at @UPFBarcelona

Barcelona
Joined June 2012
@vincentadam87
Vincent Adam
4 years
Our #NeurIPS2021 paper "Dual Parameterization of Sparse Variational Gaussian Processes" is on arXiv. We introduce an alternative 'dual' parameterization for SVGP, leading to faster inference and learning. With @edchangy, @arnosolin & @EmtiyazKhan. (1/n)
[image]
Replies: 2 · Retweets: 13 · Likes: 97
@vincentadam87
Vincent Adam
2 years
RT @RLSummerSchool: And that's a wrap for #RLSS2023! After the chess tournament on Tuesday evening, yesterday we had our last lectures on n…
Replies: 0 · Retweets: 5 · Likes: 0
@vincentadam87
Vincent Adam
2 years
RT @RLSummerSchool: 🎉Exciting news! Applications for the Reinforcement Learning Summer School Barcelona 2023 are now open! Don't miss your…
Replies: 0 · Retweets: 14 · Likes: 0
@vincentadam87
Vincent Adam
3 years
We are hosting a *Reinforcement Learning Summer School* (RLSS) 🤖 at University Pompeu Fabra in 🌴 Barcelona 🌞 this year. Great place, great topics, great line-up. Spread the word!
@RLSummerSchool
RLSS Barcelona
3 years
We are thrilled to announce that the Reinforcement Learning Summer School 2023 will take place in Barcelona from June 26th to July 5th. It will provide an in-depth introduction to RL principles and algorithms. Applications will open soon. Stay tuned!
Replies: 0 · Retweets: 2 · Likes: 7
@vincentadam87
Vincent Adam
3 years
Sharing notebooks I made. Topic: a gentle introduction to deep learning with jax. Requires: little to no coding experience with Python. Content: defining & optimizing losses via (stochastic) gradient descent, from linear regression to image classification.
github.com/vincentadam87/intro_to_jax — Introductory notebooks to deep learning using jax.
Replies: 0 · Retweets: 1 · Likes: 11
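As a taste of the kind of exercise such notebooks usually start from, here is a minimal jax sketch of the first step, fitting a linear regression by gradient descent (illustrative only, not code taken from the intro_to_jax repo):

```python
# Minimal jax sketch: fit y = w*x + b by gradient descent on a squared loss.
# Illustrative only; not taken from the intro_to_jax notebooks.
import jax
import jax.numpy as jnp

x = jax.random.normal(jax.random.PRNGKey(0), (100,))
noise = 0.1 * jax.random.normal(jax.random.PRNGKey(1), (100,))
y = 2.0 * x + 1.0 + noise  # ground truth: w = 2, b = 1

def loss(params, x, y):
    w, b = params
    return jnp.mean((w * x + b - y) ** 2)  # mean squared error

grad_fn = jax.jit(jax.grad(loss))  # compiled gradient w.r.t. params

params = (0.0, 0.0)
for _ in range(500):
    grads = grad_fn(params, x, y)
    params = tuple(p - 0.1 * g for p, g in zip(params, grads))

print(params)  # should approach (2.0, 1.0)
```

The same loop structure carries over unchanged when the linear model is swapped for a neural network and the loss for a classification objective, which is the progression the tweet describes.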
@vincentadam87
Vincent Adam
3 years
RT @VinFL: New great day at the Reinforcement Learning summer school with @herkevanhoof, @thomaskipf, @pcastr and a practical session led…
Replies: 0 · Retweets: 6 · Likes: 0
@vincentadam87
Vincent Adam
3 years
Looking forward to it. I'll talk about optimization methods for Gaussian variational inference for Gaussian process models (and sparse extensions).
@avt_im
Alexander Terenin
3 years
The next speaker in the virtual seminar series is Vincent Adam, who will give a talk on "Dual Parameterization of Sparse Variational Gaussian Processes" - Zoom links have been sent, but you can still find us on YouTube at the scheduled time!
Replies: 0 · Retweets: 1 · Likes: 10
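For context, the workhorse objective in (sparse) Gaussian variational inference for GP models is the ELBO; schematically, with inducing variables u and a Gaussian approximation (standard notation, assumed here rather than taken from the talk):

```latex
\mathcal{L}(q) = \sum_{n=1}^{N} \mathbb{E}_{q(f_n)}\!\left[\log p(y_n \mid f_n)\right]
               - \mathrm{KL}\!\left(q(\mathbf{u}) \,\|\, p(\mathbf{u})\right),
\qquad q(\mathbf{u}) = \mathcal{N}(\mathbf{m}, \mathbf{S}).
```

Optimization methods for this problem differ mainly in how they parameterize (m, S) and how they update those parameters.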
@vincentadam87
Vincent Adam
3 years
[Life update] I have just moved to Barcelona 😎 and joined the AI/ML group at @DTIC_UPF. Looking forward to 2 years of serious RL theory with @neu_rips @vicen__gomez, Anders Jonsson and the team. I'll be an independent post-doctoral researcher on a Spanish grant (Maria Zambrano).
Replies: 5 · Retweets: 1 · Likes: 64
@vincentadam87
Vincent Adam
4 years
RT @arnosolin: We are presenting 'Dual Parameterization of Sparse Variational Gaussian Processes' \w @vincentadam87, @edchangy, @EmtiyazKha…
Replies: 0 · Retweets: 11 · Likes: 0
@vincentadam87
Vincent Adam
4 years
The dual parameterization leads to faster inference: natural gradient updates take a very simple form and require only local gradients of the expected log-likelihood terms in the variational objective (ELBO), with no need to differentiate the KL term. (4/n)
[image]
Replies: 1 · Retweets: 0 · Likes: 2
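A schematic form of the update described here, in the style of natural-gradient / conjugate-computation variational inference (the notation is illustrative, not lifted from the paper): with dual parameters λ and expectation parameters μ of q,

```latex
\lambda^{(t+1)} = (1 - \rho)\,\lambda^{(t)}
               + \rho\, \nabla_{\mu} \sum_{n} \mathbb{E}_{q}\!\left[\log p(y_n \mid f_n)\right],
```

so each step touches only the local expected log-likelihood gradients, and the KL term never has to be differentiated.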
@vincentadam87
Vincent Adam
4 years
The dual parameterization allows us to derive a better objective (a tighter bound) in a Variational Expectation-Maximization (VEM) algorithm for learning the kernel parameters of GP models. This tighter bound leads to faster learning. We call our extended VEM algorithm t-SVGP. (3/n)
[image]
Replies: 1 · Retweets: 0 · Likes: 1
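For reference, variational EM alternates between the approximate posterior q and the kernel hyperparameters θ (schematic, standard form):

```latex
\text{E-step:}\quad q^{(k+1)} = \arg\max_{q}\; \mathcal{L}\big(q, \theta^{(k)}\big),
\qquad
\text{M-step:}\quad \theta^{(k+1)} = \arg\max_{\theta}\; \mathcal{L}\big(q^{(k+1)}, \theta\big).
```

The claim in the tweet is that running the M-step on the tighter, dual-parameterized bound makes hyperparameter learning converge faster.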
@vincentadam87
Vincent Adam
4 years
The 'dual' parameterization of the posterior process in SVGP shares the site-based structure of the approximation used in Expectation Propagation, separating prior and data contributions, with 2n parameters. Sites can be tied to reduce storage complexity to O(m^2). (2/n)
[image]
Replies: 1 · Retweets: 0 · Likes: 2
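Schematically, the site-based structure described here looks like the following (notation illustrative, not taken from the paper):

```latex
q(f) \propto p(f) \prod_{n=1}^{N} t_n(f_n),
\qquad
t_n(f_n) \propto \exp\!\Big(\lambda_{n,1} f_n - \tfrac{1}{2}\, \lambda_{n,2} f_n^2\Big),
```

i.e. the prior times one Gaussian 'site' per data point, giving the 2n parameters (λ_{n,1}, λ_{n,2}); tying the sites to the m inducing variables is what brings storage down to O(m^2).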
@vincentadam87
Vincent Adam
4 years
I'm happy to announce I will never win a best reviewer award: I always ask for at most 3 papers, conferences sum the good AC reports, and I will never make the cut.
Replies: 1 · Retweets: 0 · Likes: 7
@vincentadam87
Vincent Adam
4 years
RT @SecondmindLabs: @EmtiyazKhan recently gave a great talk at Secondmind Labs, and he was kind enough to allow us to share it with everyon…
Replies: 0 · Retweets: 3 · Likes: 0
@vincentadam87
Vincent Adam
4 years
RT @SecondmindLabs: Secondmind is opening a Senior Research Engineer position to work on our Gaussian process and Bayesian optimisation ope…
Replies: 0 · Retweets: 9 · Likes: 0
@vincentadam87
Vincent Adam
4 years
In brief, it is an (opinionated) history of deep learning up to 2020, covering its successes and the investment by the GAFAM. It goes behind the scenes, focusing on the personalities who pushed the vision and the engineering, the convincing of investors, and the ideologies behind it. (2/2)
Replies: 0 · Retweets: 0 · Likes: 0
@vincentadam87
Vincent Adam
4 years
I recommend 'Genius Makers' by Cade Metz, which could be renamed 'an enthusiastic history of the deep learning revolution'. (1/2)
Replies: 1 · Retweets: 0 · Likes: 5
@vincentadam87
Vincent Adam
4 years
RT @SecondmindLabs: Secondmind is opening five research placement positions in the area of Gaussian process models and Bayesian optimisatio…
Replies: 0 · Retweets: 17 · Likes: 0
@vincentadam87
Vincent Adam
4 years
RT @arnosolin: Our paper with @wil_j_wil and @vincentadam87 "Sparse Algorithms for Markovian Gaussian Processes" will be presented at #AIST…
Replies: 0 · Retweets: 6 · Likes: 0