
Viet Tran (@viettran86)
Bayes is AI and AI is Bayes
United Kingdom · Joined November 2017
Followers: 140 · Following: 264 · Media: 30 · Statuses: 353
RT @latentjasper: Bayesian optimization played a significant role for AlphaGo! "prior to the match with Lee Sedol, we tuned the latest Alp….
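[For context, a minimal sketch of the kind of Bayesian-optimization loop being described, assuming a single hyperparameter and a made-up black-box objective. The function `objective`, the search grid, and all settings below are illustrative stand-ins, not the AlphaGo setup.]

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    # Hypothetical expensive black-box (e.g., win rate vs. a hyperparameter).
    return -np.sin(3 * x) - x ** 2 + 0.7 * x

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 2.0, size=(3, 1))      # a few initial evaluations
y = objective(X).ravel()
grid = np.linspace(-1.0, 2.0, 500).reshape(-1, 1)

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6, normalize_y=True)

for _ in range(10):                           # budget of 10 expensive evaluations
    gp.fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    best = y.max()
    # Expected-improvement acquisition: how much we expect to beat `best`.
    z = (mu - best) / np.maximum(sigma, 1e-12)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = grid[np.argmax(ei)].reshape(1, 1)
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next).ravel())

print("best hyperparameter:", X[np.argmax(y)], "value:", y.max())
```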
RT @Reza_Zadeh: Best paper award #NeurIPS2018 main idea: Defining a deep residual network as a continuously evolving system & instead of up….
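[A rough sketch of that main idea; nothing here is from the paper's code. A residual block computes x + f(x), which is exactly one Euler step of dx/dt = f(x, t), so integrating the same dynamics with many small steps gives the "continuously evolving" view. The tiny tanh network f below is an illustrative stand-in; the actual paper trains through adaptive ODE solvers with the adjoint method.]

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 4)) * 0.1, rng.normal(size=(4, 4)) * 0.1

def f(x, t):
    # Illustrative dynamics function (stands in for a residual block).
    return np.tanh(x @ W1) @ W2

x = rng.normal(size=(1, 4))                 # input features

# Plain ResNet view: a stack of discrete residual blocks.
h = x
for t in range(4):
    h = h + f(h, t)                         # x_{t+1} = x_t + f(x_t, t)

# Neural-ODE view: the same dynamics, integrated with small Euler steps.
z, n_steps = x, 400
dt = 4.0 / n_steps
for i in range(n_steps):
    z = z + dt * f(z, i * dt)               # dz/dt = f(z, t)

print("ResNet output:", h)
print("ODE output:   ", z)
```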
A very inspiring personal story on AI research. It’s not a privilege reserved solely for big organizations anymore; people can do it alone nowadays as long as they persevere and work hard. Kudos to Alexia for her well-deserved success 👏👏.
7/ It even attracted the attention of @goodfellow_ian, who had me present it to @GoogleAI. I then presented my work at @GoogleAI Montreal, @SymposiumAi, MILA, #socml2018, and #WiML2018 at #NeurIPS2018. It has been widely used, and I reached 10 citations after only 6 months.
There’s a lot more to learn from the community, though. I don’t think incremental research is bad; success or failure, we need both to move forward. But we should step back sometimes and see the big picture. Simplifying is the core of (Bayes/AI) learning, I think.
To ask the right questions, it’s useful to keep some distance from the field. If you only read the latest NeurIPS papers that everyone reads, you might get locked into a frame of mind and work on exactly the same things as everyone else. Excellent read:
He also used the Negative Binomial (NB) distribution to fit and deduce the referees’ acceptance rate (35%). It seems interesting. Without the NB, I couldn’t have derived the closed-form MAP (i.e., optimal) number of PCA components in my recent paper either.
I solved a 50-year-old challenge: a closed-form optimal estimate of VA in Y = VA + Z in PCA, without overfitting. We found that SNR > -10 dB is the limit. This is my first step toward avoiding overfitting in #NeuralNet completely! paper code
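[To make the model concrete, a toy sketch of the setting Y = VA + Z (a rank-k signal observed in noise), with a generic eigenvalue-threshold rank estimate standing in for the actual estimator. The closed-form MAP criterion and the SNR > -10 dB limit are results of the paper itself; the Marchenko-Pastur-style threshold below is only an illustrative substitute.]

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k, noise_var = 200, 1000, 5, 1.0

V = rng.normal(size=(n, k))                 # mixing matrix
A = rng.normal(size=(k, m))                 # k latent sources
Z = rng.normal(scale=np.sqrt(noise_var), size=(n, m))
Y = V @ A + Z                               # observations: signal plus noise

# Eigenvalues of the sample covariance of Y, largest first.
evals = np.linalg.eigvalsh(Y @ Y.T / m)[::-1]

# Illustrative rank rule: keep components whose eigenvalue clearly exceeds
# the noise bulk edge (NOT the paper's closed-form MAP criterion).
k_hat = int(np.sum(evals > noise_var * (1 + np.sqrt(n / m)) ** 2))
print("true rank:", k, "estimated rank:", k_hat)
```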
When looking for a NeuralNet Python tutorial for beginners, I came across a nice blog by Prof. @jweisber at the University of Toronto @UofT. He used the same excellent Coursera course by @AndrewYNg, but in a simpler form. What’s more, it turns out that he is a cool Bayesian!
“The most powerful A.I. systems… use techniques like deep learning as just one element in a very complicated ensemble of techniques, ranging from the statistical technique of Bayesian inference to deductive reasoning.” A good read from Prof. @GaryMarcus!
A really cool paper on how to run gradient descent on a Bayesian NeuralNet efficiently (and, as an extra point, it’s relevant to my Copula Variational paper 😀). They also showed that traditional SGD over the parameters has a fundamental flaw (it may not converge to the optimal distribution). 1/2
I will be giving a talk on Dec. 2 at 4pm at the Symposium on Advances in Approximate Bayesian Inference (AABI) 2018 in Montreal. Talk title: “Fast yet simple natural gradient descent in variational inference”. Slides [. Hope to see you there. 1/3
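[Not the talk itself, but a toy illustration of the idea in the title, assuming the textbook special case where the target posterior is also Gaussian. For an exponential-family q, the natural gradient of KL(q || p) with respect to the natural parameters equals the plain gradient with respect to the mean parameters, which is simply eta - eta_star, so a step size of 1 lands on the target exactly; plain SGD on (mu, var) would instead need a carefully tuned, much smaller step.]

```python
import numpy as np

m0, s0sq = 2.0, 0.01                           # target p = N(m0, s0sq)
eta_star = np.array([m0 / s0sq, -0.5 / s0sq])  # target natural parameters

def to_eta(mu, var):
    # Natural parameters of N(mu, var): (mu/var, -1/(2 var)).
    return np.array([mu / var, -0.5 / var])

def from_eta(eta):
    var = -0.5 / eta[1]
    return eta[0] * var, var

# Natural-gradient descent on KL(q || p) in natural-parameter space.
# grad_m KL = eta - eta_star, and the natural gradient w.r.t. eta equals
# the plain gradient w.r.t. the mean parameters m, so each update is just
# an interpolation toward eta_star.
mu, var = 0.0, 1.0
for step in range(3):
    eta = to_eta(mu, var)
    eta = eta - 1.0 * (eta - eta_star)         # rho = 1: exact in one step
    mu, var = from_eta(eta)
    print(step, mu, var)
```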
“Two approaches in particular stand out to Herbrich. One is the revival of a technique called Bayesian learning (…). The other machine learning technique that Herbrich sees drawing increased attention is something called a spiking neural network.”
A few of @Amazon's leading #ML scientists provide their perspective on what the sold-out NeurIPS Conference tells us about the future of #AI #AlexaAI @dilekhakkanitur @rherbrich @SilkeGo @MckBrickl @iamashbrown.
RT @Reza_Zadeh: Best Paper Award #ICLR18: finds flaw in ADAM proof of convergence, shows failure on specific toy convex problem & suggests….
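[The fix that paper proposes (AMSGrad) is tiny in code: keep a running maximum of the second-moment estimate so the effective per-coordinate step size can never grow back. A minimal numpy sketch of both update rules on a stand-in quadratic objective; this does not reproduce the paper's counterexample, which needs rare large gradients in an online setting.]

```python
import numpy as np

def adam_step(x, g, m, v, vmax, t, lr=0.01, b1=0.9, b2=0.999,
              eps=1e-8, amsgrad=False):
    m = b1 * m + (1 - b1) * g                 # first-moment estimate
    v = b2 * v + (1 - b2) * g * g             # second-moment estimate
    vmax = max(vmax, v)                       # AMSGrad's only change:
    v_used = vmax if amsgrad else v           # step sizes can never grow back
    m_hat = m / (1 - b1 ** t)                 # bias corrections
    v_hat = v_used / (1 - b2 ** t)
    x = x - lr * m_hat / (np.sqrt(v_hat) + eps)
    return x, m, v, vmax

# Minimize the toy objective f(x) = x**2 with both rules.
for amsgrad in (False, True):
    x, m, v, vmax = 5.0, 0.0, 0.0, 0.0
    for t in range(1, 5001):
        g = 2.0 * x                           # gradient of x**2
        x, m, v, vmax = adam_step(x, g, m, v, vmax, t, amsgrad=amsgrad)
    print("AMSGrad:" if amsgrad else "Adam:   ", round(x, 6))
```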
RT @nolimits: When interviewing job candidates, I always refer to this passage from Jeff Bezos' 1998 shareholder letter. It's a great, simp….