Viet Tran

@viettran86

Followers 140 · Following 264 · Media 30 · Statuses 353

Bayes is AI and AI is Bayes

United Kingdom
Joined November 2017
@viettran86
Viet Tran
7 years
RT @shivon: Cards Against Machine Learning
@viettran86
Viet Tran
7 years
RT @latentjasper: Bayesian optimization played a significant role for AlphaGo! "prior to the match with Lee Sedol, we tuned the latest Alp….
@viettran86
Viet Tran
7 years
RT @Reza_Zadeh: Best paper award #NeurIPS2018 main idea: Defining a deep residual network as a continuously evolving system & instead of up….
@viettran86
Viet Tran
7 years
A very inspiring personal story on AI research. It’s no longer a privilege reserved solely for big organizations; people can do it alone nowadays, as long as they persevere and work hard. Kudos to Alexia’s well-deserved success 👏👏.
@jm_alexia
Alexia Jolicoeur-Martineau
7 years
7/ It attracted the attention of even @goodfellow_ian who had me present it to @GoogleAI. Then, I presented my work at @GoogleAI Montreal, @SymposiumAi, MILA, #socml2018, and #WiML2018 at #NeurIPS2018. It has been widely used and I reached 10 citations after only 6 months.
@viettran86
Viet Tran
7 years
There’s a lot more to learn from the community, though. I don’t think incremental research is bad; success or failure, we need both to move forward. But we should step back sometimes and see the big picture. Simplifying is the core of (Bayes/AI) learning, I think.
@dennybritz
Denny Britz
7 years
To ask the right questions it’s useful to keep some distance from the field. If you only read the latest NeurIPS papers that everyone reads, you might get locked into a frame of mind and work on exactly the same things as everyone else. Excellent read:
@viettran86
Viet Tran
7 years
Anyway, I’m delighted that Bayes has been revived across many fields. Ten years ago, I went to Paris to learn about ML/AI (I studied robotics before that), and I’ve stuck with Bayes ever since. Bayesian AI still has a very long way to go, but I think it’s the only way we can achieve human-level AI.
@viettran86
Viet Tran
7 years
He also used the Negative Binomial (NB) distribution to fit and deduce the referees’ acceptance rate (35%). It seems interesting. Without the NB, I couldn’t have derived the closed-form MAP (i.e. optimal) number of PCA components in my recent paper either.
@viettran86
Viet Tran
7 years
I solved a 50-year-old challenge: a closed-form optimal estimate of VA in Y = VA + Z for PCA, without overfitting. We found that SNR > -10 dB is the limit. This is my first step toward avoiding overfitting in #NeuralNet completely! paper code
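For readers curious about the NB trick above, here is a minimal hypothetical sketch (my own illustration, not code from either paper): simulate “rejections before acceptance” counts with an assumed true acceptance probability of 0.35, then recover the rate with a method-of-moments Negative Binomial fit.

```python
# Hypothetical illustration: recover an acceptance rate from count data
# with a Negative Binomial (NB) fit. Data and parameters are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated "rejections before acceptance" counts, with a true per-review
# acceptance probability of 0.35 (an assumption for this sketch).
true_p = 0.35
counts = rng.negative_binomial(n=1, p=true_p, size=1000)

# Method-of-moments fit of NB(r, p): mean = r(1-p)/p and var = r(1-p)/p^2,
# so p = mean/var and r = mean^2 / (var - mean).
m, v = counts.mean(), counts.var()
p_hat = m / v
r_hat = m * m / (v - m)
print(f"estimated acceptance rate p = {p_hat:.3f} (true {true_p})")

# Sanity check: log-likelihood under scipy's NB parameterization (n=r, p=p).
print("log-likelihood:", stats.nbinom.logpmf(counts, r_hat, p_hat).sum())
```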
@viettran86
Viet Tran
7 years
In the paper “You’ve come a long way, Bayesians”, he summarized 40 years of Bayesian philosophy. Here he also explained intuitively why Bayes is consistent with the axioms of probability (check out his paper “Varieties of Bayesianism” or my PhD thesis for details).
@viettran86
Viet Tran
7 years
While looking for a neural-net Python tutorial for beginners, I came across a nice blog by Prof. @jweisber at the University of Toronto @UofT. He uses the same excellent Coursera course by @AndrewYNg but in a simpler form. What’s more, it turns out that he is a cool Bayesian!
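For flavor, here is a generic minimal example in the spirit of such beginner tutorials (not Prof. Weisberg’s actual code): logistic regression as a one-neuron “network” trained with the cross-entropy gradient rule derived in Ng’s course.

```python
# Generic beginner-style sketch (not from the blog): logistic regression
# as a one-neuron "network" trained by gradient descent.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2))          # toy 2-D inputs
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # linearly separable labels

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(1000):
    p = 1 / (1 + np.exp(-(X @ w + b)))     # sigmoid activation
    w -= lr * X.T @ (p - y) / len(y)       # gradient of mean cross-entropy
    b -= lr * np.mean(p - y)

print("training accuracy:", np.mean((p > 0.5) == y))
```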
@viettran86
Viet Tran
7 years
“The most powerful A.I. systems… use techniques like deep learning as just one element in a very complicated ensemble of techniques, ranging from the statistical technique of Bayesian inference to deductive reasoning.” A good read from Prof. @GaryMarcus!
@viettran86
Viet Tran
7 years
A nice illustration of SGD’s drawbacks, extracted from their recent paper:
@viettran86
Viet Tran
7 years
A really cool paper on how to run gradient descent on Bayesian neural nets efficiently (and, as a bonus, it’s relevant to my Copula Variational paper 😀). They also showed that traditional SGD over the parameters has a fundamental flaw: it may not converge to the optimal distribution. 1/2
@EmtiyazKhan
Emtiyaz Khan
7 years
I will be giving a talk on Dec. 2 at 4pm at the Symposium on Advances in Approximate Bayesian Inference (AABI) 2018 in Montreal. Talk title: “Fast yet simple natural gradient descent in variational inference”. Slides [. Hope to see you there. 1/3
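A minimal 1-D sketch of the kind of natural-gradient update the talk title refers to (my own toy paraphrase under a Gaussian variational family, not the paper’s implementation): the variational precision tracks a moving average of the expected negative Hessian of the log-joint, and the mean takes a step preconditioned by that precision.

```python
# Toy natural-gradient variational inference: fit q(w) = N(mu, 1/S) to a
# 1-D Bayesian logistic-regression posterior. Data and steps are assumptions.
import numpy as np

rng = np.random.default_rng(2)

# Toy data: 1-D logistic regression, true weight 1.5, N(0, 1) prior on w.
x = rng.standard_normal(200)
y = (rng.random(200) < 1 / (1 + np.exp(-1.5 * x))).astype(float)

def grad_hess_logjoint(w):
    """Gradient and Hessian of log p(y, w | x) at scalar w."""
    p = 1 / (1 + np.exp(-w * x))
    g = np.sum((y - p) * x) - w          # likelihood grad + prior grad
    h = -np.sum(p * (1 - p) * x**2) - 1  # likelihood Hess + prior Hess
    return g, h

mu, S = 0.0, 1.0   # variational mean and precision (init at the prior)
beta = 0.1         # step size
for _ in range(200):
    # Monte Carlo estimates of E_q[grad] and E_q[-Hessian] with 10 samples.
    w_samples = mu + rng.standard_normal(10) / np.sqrt(S)
    gh = [grad_hess_logjoint(w) for w in w_samples]
    g_bar = np.mean([g for g, _ in gh])
    h_bar = np.mean([-h for _, h in gh])

    # Natural-gradient update: the precision is a moving average of the
    # expected negative Hessian; the mean takes a Newton-like step.
    S = (1 - beta) * S + beta * h_bar
    mu = mu + beta * g_bar / S

print(f"q(w) = N({mu:.3f}, {1 / S:.4f})")  # mean should land near 1.5
```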
@viettran86
Viet Tran
7 years
“Two approaches in particular stand out to Herbrich. One is the revival of a technique called Bayesian learning (…). The other machine learning technique that Herbrich sees drawing increased attention is something called a spiking neural network.”
@vladtenevx
Vlad Tenev Commentary
7 years
A few of @Amazon's leading #ML scientists provide their perspective on what the sold-out NeurIPS Conference tells us about the future of #AI #AlexaAI @dilekhakkanitur @rherbrich @SilkeGo @MckBrickl @iamashbrown.
@viettran86
Viet Tran
7 years
RT @Reza_Zadeh: Best Paper Award #ICLR18: finds flaw in ADAM proof of convergence, shows failure on specific toy convex problem & suggests….
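The flaw is easy to reproduce. Below is a sketch of the stochastic toy problem from that paper as I recall it (the hyperparameters are illustrative assumptions, and Adam’s bias correction is omitted for brevity): a rare large gradient makes the expected gradient positive, so the constrained optimum is x = -1, yet Adam typically drifts to the wrong end, x = +1.

```python
# Toy online convex problem in the spirit of Reddi et al. (ICLR'18):
# g = 1010 with prob. 0.01, else g = -10, with x constrained to [-1, 1].
import numpy as np

rng = np.random.default_rng(4)
T = 200_000
spikes = rng.random(T) < 0.01   # rare steps with the huge gradient

x, m, v = 0.0, 0.0, 0.0
beta1, beta2, lr, eps = 0.9, 0.99, 0.01, 1e-8
for t in range(T):
    g = 1010.0 if spikes[t] else -10.0   # E[g] = 0.2 > 0, so optimum is x = -1
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g
    x -= lr * m / (np.sqrt(v) + eps)     # bias correction omitted for brevity
    x = min(1.0, max(-1.0, x))           # project onto the feasible set [-1, 1]

print(f"Adam ends at x = {x:+.3f}; the true optimum is x = -1")
```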
@viettran86
Viet Tran
7 years
I solved a 50-year-old challenge: a closed-form optimal estimate of VA in Y = VA + Z for PCA, without overfitting. We found that SNR > -10 dB is the limit. This is my first step toward avoiding overfitting in #NeuralNet completely! paper code
@viettran86
Viet Tran
7 years
Our ICASSP paper is published! The journal version will be up tomorrow. Given Y = VA + Z, we found that the non-overfitting limit for estimating V and A is SNR > -10 dB for PCA/MUSIC. paper: code (+ journal): explanation:
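As a rough illustration of the Y = VA + Z setting (a simulation sketch only, not the paper’s closed-form estimator), the following measures how PCA’s subspace estimate degrades as the SNR drops through the -10 dB region:

```python
# Simulate Y = V A + Z and check how well PCA recovers span(V) at
# different SNRs. Dimensions and seeds are assumptions for this sketch.
import numpy as np

rng = np.random.default_rng(3)
m, n, k = 50, 2000, 3  # m-dim observations, n samples, rank-k signal

V = np.linalg.qr(rng.standard_normal((m, k)))[0]  # orthonormal basis
A = rng.standard_normal((k, n))                    # latent coefficients

for snr_db in (0.0, -10.0, -20.0):
    signal = V @ A
    noise_power = np.mean(signal**2) / 10 ** (snr_db / 10)
    Y = signal + rng.standard_normal((m, n)) * np.sqrt(noise_power)

    # PCA via SVD: the top-k left singular vectors estimate span(V).
    U = np.linalg.svd(Y, full_matrices=False)[0][:, :k]

    # Largest principal angle between span(U) and span(V); 0 deg = perfect.
    s = np.linalg.svd(U.T @ V, compute_uv=False)
    angle = np.degrees(np.arccos(np.clip(s.min(), -1.0, 1.0)))
    print(f"SNR {snr_db:+.0f} dB -> worst principal angle = {angle:.1f} deg")
```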
@viettran86
Viet Tran
7 years
I have a feeling that Sir Atiyah has solved it. His approach is pretty clever, since π, e, and α form a fundamental connection between maths and physics. I am now reading his proof carefully; anyone care to join me xD? His proof is already up:
@viettran86
Viet Tran
7 years
The biggest hit-or-miss in maths today: live streams of Sir Atiyah’s attempted proof of the Riemann Hypothesis, one of the hardest problems in mathematics! Fingers crossed!!!!! I wish him well.
@viettran86
Viet Tran
7 years
“People say, ‘we know mathematicians do all their best work before they’re 40’,” says Atiyah. “I’m trying to show them that they’re wrong. That I can do something when I’m 90.”
@stevenstrogatz
Steven Strogatz
7 years
Uh oh. I have a bad feeling about this. Famed mathematician Michael Atiyah claims proof of Riemann hypothesis
@viettran86
Viet Tran
7 years
RT @nolimits: When interviewing job candidates, I always refer to this passage from Jeff Bezos' 1998 shareholder letter. It's a great, simp….