Aakash Kumar Nain

@A_K_Nain

Followers: 9,152
Following: 676
Media: 170
Statuses: 11,808

Sr. ML Engineer | Keras 3 Collaborator | @GoogleDevExpert in Machine Learning | @TensorFlow addons maintainer | ML is all I do | Views are my own!

New Delhi, India
Joined October 2016
Pinned Tweet
@A_K_Nain
Aakash Kumar Nain
10 months
After contributing to and maintaining TF-Addons for a while, here is the next big thing I always wanted to work on. I have been a Keras power user since 2016 and always wanted to build it along with the core devs. I finally got the chance to do it, and that too for all three major…
@fchollet
François Chollet
10 months
We're launching Keras Core, a new library that brings the Keras API to JAX and PyTorch in addition to TensorFlow. It enables you to write cross-framework deep learning components and to benefit from the best that each framework has to offer. Read more:
127
829
4K
2
5
71
@A_K_Nain
Aakash Kumar Nain
5 years
As promised, here is the first super clean notebook showcasing @TensorFlow 2.0. An example of end-to-end DL with interpretability. Cc: @fchollet @random_forests @DynamicWebPaige PS: Wait for more!
11
284
1K
@A_K_Nain
Aakash Kumar Nain
3 years
People who say "Machine Learning is easy" are way more famous on Twitter.🤦🤦 I don't know when I will come around to that notion, but even today, after 4 years of working in ML, I still find it harder than a lot of other things that I do!
37
50
754
@A_K_Nain
Aakash Kumar Nain
4 years
Someone asked me today: "What would be an ideal job for you?" Umm.. getting paid to read/write/discuss research papers, implement the best ideas in an open-source deep learning library, and write a huge amount of Python code!
15
35
535
@A_K_Nain
Aakash Kumar Nain
5 years
How quickly can we build a captcha reader using deep learning? Check out for yourself the captcha cracker built with @TensorFlow 2.0 and #Keras
6
120
464
@A_K_Nain
Aakash Kumar Nain
2 years
No matter how long you have been doing ML, go read @fchollet's Deep Learning with Python, 2e. Refreshing content all the way! 👌👌🥳
8
16
388
@A_K_Nain
Aakash Kumar Nain
3 years
The only problem I see in pursuing ML as a career option: Every day you check your Twitter feed and papers on arXiv, and realize how much there is to learn! Imposter Syndrome is very real!
12
26
345
@A_K_Nain
Aakash Kumar Nain
2 years
Major difference between learning about GANs vs learning about Diffusion models: GANs: Pretty good literature. A lot of exceptional tutorials, code, blogposts, etc. Diffusion Models: Very few tutorials and blogposts, and hardly any good code.
11
19
334
@A_K_Nain
Aakash Kumar Nain
4 years
As promised, here is my new blogpost explaining the latest research from Google Research and Brain team. I liked this paper a lot because instead of building models with billions of params, it focuses on fundamental aspects.
4
68
325
@A_K_Nain
Aakash Kumar Nain
4 years
I don’t like to brag about my code but I think I did a good job on this one. Apart from code quality, the mental model that an API provides plays a very important role, and Keras does that for me. PS: @fchollet Thanks for the support and guidance 🙏🏼🙏🏼
@fchollet
François Chollet
4 years
Convert horses to zebras with this CycleGAN model by @A_K_Nain , now on : Definitely the most concise and elegant implementation of CycleGAN I've seen anywhere -- around 350 lines end-to-end. Train on 8 GPUs by just adding 1 line.
3
53
305
4
19
284
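The "add 1 line to train on 8 GPUs" remark in the quoted tweet almost certainly refers to wrapping model construction in a tf.distribute strategy scope. Here is a hedged sketch of that pattern with a stand-in model rather than the actual CycleGAN generators and discriminators:

```python
# Sketch of the "1 line" multi-GPU idea: a MirroredStrategy scope.
# A placeholder Sequential model stands in for the CycleGAN networks.
import numpy as np
import tensorflow as tf
from tensorflow import keras

strategy = tf.distribute.MirroredStrategy()  # the "1 line": use all local GPUs

with strategy.scope():
    model = keras.Sequential([
        keras.layers.Dense(64, activation="relu", input_shape=(32,)),
        keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# Batches are automatically split across the available replicas during fit().
x, y = np.random.rand(256, 32), np.random.rand(256, 1)
model.fit(x, y, epochs=2, batch_size=64)
```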
@A_K_Nain
Aakash Kumar Nain
3 years
I am very happy to announce today, an interactive web app for my annotated research papers repo developed using "Wave", an open-source Python framework launched by @h2oai a few days ago to build interactive apps.
6
44
278
@A_K_Nain
Aakash Kumar Nain
4 years
A recent paper, *AugMix*, from Google and DeepMind showed huge improvements in accuracy as well as robustness. The original code is in PyTorch, and I ported it to TF2.0. Check it out: Cc: @DanHendrycks @balajiln @barret_zoph et al.
2
57
255
@A_K_Nain
Aakash Kumar Nain
4 years
This came in today. Thank you @TensorFlow team ❤️🍻🍻
5
1
253
@A_K_Nain
Aakash Kumar Nain
5 years
After this meeting, all I am saying is that the new features coming to Keras are gonna blow your mind. A few more months! We will see where each framework stands in the *rankings* next year 😎😎
@edd
Edd Wilder-James
5 years
It's happening! First Keras community call.
2
21
111
3
29
229
@A_K_Nain
Aakash Kumar Nain
4 years
I am announcing a new project today: **keras_jax**. I love Keras and I see JAX as a potential future framework. So, why not combine both! I won't be able to do it alone, though; it needs community support. Lemme know if you are interested!
30
7
220
@A_K_Nain
Aakash Kumar Nain
4 years
I want to start reading about Graph NNs but I have two questions in my mind: 1. Applications of GNNs 2. Which paper should I start with?
15
34
213
@A_K_Nain
Aakash Kumar Nain
2 years
Today, I am starting a series of new tutorials: **Building models in JAX** Here is the first tutorial in this series:
3
32
211
@A_K_Nain
Aakash Kumar Nain
2 years
🔥 Tutorial Alert! 🔥 The third part of the DDPM series is now available: **A deep dive into DDPMs** Here is the link to the repo containing the notebook: A thread 👇
2
46
210
@A_K_Nain
Aakash Kumar Nain
2 years
The diffusion models series is now on a proper blog, making it easier for people to access the material, because GitHub still has issues rendering notebooks. Enjoy reading! More DDPM posts coming soon! 🍻🍻 cc: @rishabh16_ @RisingSayak
2
30
194
@A_K_Nain
Aakash Kumar Nain
4 years
Resolutions for 2020: 1. Write at least one research paper 2. Contribute more to open source 3. Do more calisthenics and HIIT workout 4. Travel like crazy 5. Teach people ML and TF2.0
6
10
190
@A_K_Nain
Aakash Kumar Nain
3 years
Still waiting for the day when I can load a notebook in Github without clicking "reload" x times!
6
11
183
@A_K_Nain
Aakash Kumar Nain
4 years
Unpopular take but things I would have loved to see being discussed about GPT-3: 1. Hardware requirements and training time 2. Energy consumption and carbon footprint 3. FLOPS and knowledge distillation 4. Production coverage 5. Biases and proposed solutions. But it seems no one is interested!
18
15
184
@A_K_Nain
Aakash Kumar Nain
2 years
🔥 Annotated Paper alert! 🔥 Most people must have heard about this paper in the past few weeks: **Understanding diffusion models** IMO, this paper deserves every bit of attention. What does the paper cover? Or why should you read it? A thread👇
2
24
169
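For readers who haven't seen the paper, these are the standard DDPM relations it builds on: the forward (noising) process, its closed form, and the simplified noise-prediction objective. Notation follows Ho et al.'s DDPM paper, not necessarily the annotated paper's exact symbols.

```latex
% Forward (noising) process and its closed form
q(x_t \mid x_{t-1}) = \mathcal{N}\!\bigl(x_t;\ \sqrt{1-\beta_t}\,x_{t-1},\ \beta_t \mathbf{I}\bigr),
\qquad
q(x_t \mid x_0) = \mathcal{N}\!\bigl(x_t;\ \sqrt{\bar{\alpha}_t}\,x_0,\ (1-\bar{\alpha}_t)\mathbf{I}\bigr),
\qquad
\bar{\alpha}_t = \prod_{s=1}^{t}(1-\beta_s)

% Simplified training objective: predict the added noise
L_{\text{simple}} = \mathbb{E}_{t,\,x_0,\,\epsilon\sim\mathcal{N}(0,\mathbf{I})}
\Bigl[\bigl\lVert \epsilon - \epsilon_\theta\bigl(\sqrt{\bar{\alpha}_t}\,x_0 + \sqrt{1-\bar{\alpha}_t}\,\epsilon,\ t\bigr)\bigr\rVert^2\Bigr]
```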
@A_K_Nain
Aakash Kumar Nain
1 year
The code example for Denoising Diffusion Probabilistic Models in Keras is live on the site! 🥳🥳 What's in the code example, and why should you go through it? A thread 👇
2
33
171
@A_K_Nain
Aakash Kumar Nain
4 years
Finally! I completed the blogpost for the SwAV paper. It took me a lot of time, but I put just enough detail in the blogpost so that most people find it easy to grasp the maths. Lemme know in this thread if you like it.
4
26
165
@A_K_Nain
Aakash Kumar Nain
4 years
If you still haven't read the new paper from the Google Brain team, or if there is something that wasn't clear in the first read, here is the summary: “Self-training with Noisy Student” by Aakash Nain
2
36
154
@A_K_Nain
Aakash Kumar Nain
4 years
AugMix in TF2.0 1. Fully modular code 2. Custom train/eval loops with tf.function 3. **Custom EarlyStopping for the custom train loop.** 4. Checkpoint manager 5. Parallelized data generators @fchollet @random_forests
2
30
151
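A minimal sketch of the custom-training-loop ingredients listed in the tweet above (a tf.function-compiled train step plus a checkpoint manager). This is not the AugMix port itself; the model, optimizer, and loss here are placeholders.

```python
import tensorflow as tf
from tensorflow import keras

model = keras.Sequential([keras.layers.Dense(10)])      # placeholder model
optimizer = keras.optimizers.Adam()
loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)

@tf.function  # compile the step into a graph, as in point 2 of the tweet
def train_step(x, y):
    with tf.GradientTape() as tape:
        logits = model(x, training=True)
        loss = loss_fn(y, logits)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

# Checkpoint manager for the custom loop (point 4)
ckpt = tf.train.Checkpoint(model=model, optimizer=optimizer)
manager = tf.train.CheckpointManager(ckpt, "./checkpoints", max_to_keep=3)
```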
@A_K_Nain
Aakash Kumar Nain
3 years
Small things that make me smile. cc: @fchollet
1
2
151
@A_K_Nain
Aakash Kumar Nain
3 years
I deleted this tweet because I forgot to mention the year. People, this was in **2016** and I was describing my experience when I started to look out just after graduation. Please stop bombarding my DMs with advice 😂😂. I have been working as an ML engineer for a very long time now.
5
1
145
@A_K_Nain
Aakash Kumar Nain
4 years
This arrived today! I absolutely love Pytest and always try to incorporate it in my workflow. Wanna read it right away but I have a blogpost to finish first. Thanks @brianokken for such a wonderful resource
4
7
144
@A_K_Nain
Aakash Kumar Nain
4 years
With no MS/PhD and having not published any paper yet, I can totally relate to these points. Don't know how it will turn out in the end, but the stress and the anxiety are real. To prove that you are good enough, one has to invest a lot of time apart from the day job. Not good IMHO.
@hardmaru
hardmaru
4 years
“To put it mildly, the competition seems absolutely insane to me, and seems to be exponentially increasing. Many top researchers claim they wouldn't have made it if the competition was like this when they started.”
33
99
761
5
6
145
@A_K_Nain
Aakash Kumar Nain
2 years
I have witnessed the era where CNNs exploded like anything. I have witnessed the era where Transformers were used for almost everything. Now I am witnessing the era where Diffusion models are combining vision, NLP, and speech like anything in generative modeling. 🔥🔥🔥
3
8
143
@A_K_Nain
Aakash Kumar Nain
5 years
In the last few days, I used almost all the major DL libraries for some task, and here is my conclusion: Keras still has the best API to date.
6
8
143
@A_K_Nain
Aakash Kumar Nain
4 years
Is it just me, or does anybody else also feel that the ML community on Twitter went from "amazing" to "toxic" in just two years? (2018 -> 2020)
18
2
139
@A_K_Nain
Aakash Kumar Nain
4 years
Here is the third annotated research paper: SwAV - Unsupervised Learning of Visual Features by Contrasting Cluster Assignments. IMHO, this is one of the best papers on SSL. It is also one of the papers with many heavy mathematical terms.
3
17
136
@A_K_Nain
Aakash Kumar Nain
2 years
Good engineering requires a lot of patience and attention to the smallest details. If you are building something in a hurry (for whatever reason), it will come back to haunt you in the form of technical debt. And that, my friend, is the day you'll realize that engineering is an art! 🍻
4
9
134
@A_K_Nain
Aakash Kumar Nain
2 years
This was delivered today. Pretty sure the next few months are going to be 🔥 Thanks @fchollet @dabeaz for these gems! 🥳🥳
6
4
132
@A_K_Nain
Aakash Kumar Nain
8 months
🔥 Tutorial alert! 🔥 As promised, here is the latest tutorial on parallelization and distributed training in JAX, powered by Keras Core! 💪😍 cc: @SingularMattrix @fchollet
2
21
129
@A_K_Nain
Aakash Kumar Nain
5 years
Implementing Deeplab_v3 in TF2.0 was fun. As I said earlier also, functional API in tf.keras is all we need to do amazing things. Check it out: cc: @random_forests @DynamicWebPaige @fchollet
1
31
126
@A_K_Nain
Aakash Kumar Nain
4 years
1/n: I think I never got a chance to properly thank the people who helped me in my ML journey. Given that the world is going through a difficult time, I think I should do it now. Thanks 1. @AndrewYNg for that ML course on Coursera. (2015-16) 2. @karpathy for the exceptional CS231n
2
11
121
@A_K_Nain
Aakash Kumar Nain
2 years
Posting it a bit late but I recently won the Google OSS expert prize on @kaggle for my work on JAX! Thanks everyone for the support! 🍻🍻
6
5
124
@A_K_Nain
Aakash Kumar Nain
3 years
💥💥New annotated paper!💥💥 1/ This paper from Google Research is a very detailed study on different normalizers like BN, LayerNorm, etc. and how they hold up for different scenarios. The paper is well laid out and written in a clear fashion
2
15
124
@A_K_Nain
Aakash Kumar Nain
4 years
One of the most important concepts for which I have never found an intuitive explanation is “Lipschitz continuity”. It would be great if @3blue1brown could make a video on that, but I doubt he is going to notice this tweet. So RT! 😅😅
6
33
122
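For reference, the definition being asked about: a function f is L-Lipschitz if its output can change at most L times as fast as its input, i.e.

```latex
\lVert f(x) - f(y) \rVert \;\le\; L\,\lVert x - y \rVert \qquad \text{for all } x, y
```

For a differentiable f on ℝⁿ this is equivalent to bounding the gradient norm by L everywhere.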
@A_K_Nain
Aakash Kumar Nain
3 years
No matter what people say, IMO research papers from @GoogleAI and @facebookai are among the best ones. Why? 1. Simple ideas 2. Explain most of the things clearly 3. Relatively short 4. Good ablation studies The one I am reading now is 🔥
5
5
121
@A_K_Nain
Aakash Kumar Nain
2 years
UNet doesn't get the attention it deserves. Sure, ResNets are great, but look at the number of problem statements where UNets have proven to be exceptional.
10
9
119
@A_K_Nain
Aakash Kumar Nain
3 years
🔥 Annotated Paper 🔥 Today we will be looking at another good paper from Google Research and the Brain team: **Domain Agnostic Contrastive Learning** The paper is short, very well laid out, and was accepted at ICML 2021.
1
19
119
@A_K_Nain
Aakash Kumar Nain
4 years
As promised, here is the first annotated paper. This should give you an overview of what goes on in my mind when I am reading a paper (may or may not be useful for you). cc: @alisher_ai @suzatweet @bhutanisanyam1 PS: My handwriting isn't bad, I swear!😂
13
13
117
@A_K_Nain
Aakash Kumar Nain
4 years
Finally did some small experiments using the *sine* activation as suggested in SIREN. Doing some more (large-scale) experiments before jumping to conclusions.
7
18
116
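A rough sketch of the kind of sine-activation layer the tweet refers to, assuming the usual SIREN recipe (frequency factor omega_0 = 30 and the uniform initialization described in the paper); the actual experiments mentioned above may differ.

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

# Hypothetical sine-activated dense layer, SIREN-style: y = sin(omega_0 * (Wx + b))
class SineDense(keras.layers.Layer):
    def __init__(self, units, omega_0=30.0, is_first=False, **kwargs):
        super().__init__(**kwargs)
        self.units, self.omega_0, self.is_first = units, omega_0, is_first

    def build(self, input_shape):
        fan_in = int(input_shape[-1])
        # SIREN-style uniform init; hidden layers scale the bound by 1/omega_0.
        limit = 1.0 / fan_in if self.is_first else np.sqrt(6.0 / fan_in) / self.omega_0
        init = keras.initializers.RandomUniform(-limit, limit)
        self.w = self.add_weight(shape=(fan_in, self.units), initializer=init)
        self.b = self.add_weight(shape=(self.units,), initializer="zeros")

    def call(self, x):
        return tf.sin(self.omega_0 * (tf.matmul(x, self.w) + self.b))
```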
@A_K_Nain
Aakash Kumar Nain
5 years
If you really want to learn about DS/ML, then instead of following frauds like @sirajraval, do @kaggle. You will learn so much within a few months. Best part: People there really care about the community and share knowledge selflessly.
2
14
116
@A_K_Nain
Aakash Kumar Nain
1 year
Diffusion models will unify vision, NLP, and speech. Transformer-based diffusion models will be the key factor driving it!
3
10
117
@A_K_Nain
Aakash Kumar Nain
4 years
My maths isn't that bad but had there been someone like @3blue1brown to teach us during my school time, I bet I would have chosen Mathematician as my career. PS: I have always loved Maths deeply. And ML/DL is something that I deeply care about, especially DL for vision
4
4
114
@A_K_Nain
Aakash Kumar Nain
2 years
1/ Last month @rishabh16_ and I decided to put together educational material for Diffusion Models. Today we are happy to share the first notebook with you. This notebook is an optional read, and its purpose is to serve as a refresher on random variables. Repo:
1
20
115
@A_K_Nain
Aakash Kumar Nain
4 years
Back to @kaggle, and here is another notebook showing how to override the `train` and `test` steps of a @TensorFlow (Keras) model so you don't have to hand-write functionality like callbacks for a fully custom training loop.
3
20
111
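The pattern being referenced is Keras's "customize what happens in fit()" hook: subclass keras.Model and override train_step/test_step so that model.fit() keeps handling callbacks, metrics, and logging. A hedged sketch of that pattern (TF 2.x-era API, not the exact notebook):

```python
import tensorflow as tf
from tensorflow import keras

class CustomModel(keras.Model):
    def train_step(self, data):
        x, y = data
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)
            loss = self.compiled_loss(y, y_pred)
        grads = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        self.compiled_metrics.update_state(y, y_pred)
        return {m.name: m.result() for m in self.metrics}

    def test_step(self, data):
        x, y = data
        y_pred = self(x, training=False)
        self.compiled_loss(y, y_pred)
        self.compiled_metrics.update_state(y, y_pred)
        return {m.name: m.result() for m in self.metrics}
```

Usage: build the graph with the functional API and wrap it, e.g. `model = CustomModel(inputs, outputs)`, then `model.compile(...)` and `model.fit(...)` as usual; callbacks and the progress bar keep working.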
@A_K_Nain
Aakash Kumar Nain
4 years
JAX will rule this decade. You can screenshot this tweet if you don't believe it.
@GoogleDeepMind
Google DeepMind
4 years
Our researchers are excited to announce the release of two more JAX libraries: Optax for optimisation, and Chex for writing better tests and reliable code. Optax: Chex:
4
109
542
11
3
110
@A_K_Nain
Aakash Kumar Nain
5 months
Keras 3 is another reminder of why the mental model matters. Developing an API is one thing; developing an API with a good mental model is a whole other level of engineering. PS: It's always amazing to collaborate with @fchollet on OSS. Glad I have contributed to this! Go Keras! 💥
@fchollet
François Chollet
5 months
Big news: we just released Keras 3.0! ▶ Run Keras on top of JAX, TensorFlow, and PyTorch ▶ Train faster with XLA compilation ▶ Unlock training runs with any number of devices & hosts via the new Keras distribution API It's live on PyPI now! 🚀
76
641
3K
1
7
110
@A_K_Nain
Aakash Kumar Nain
3 years
OMG OMG!! My transformers are finally working 🥺🥺😭😭🎉🎉🥳🥳🥳
3
1
109
@A_K_Nain
Aakash Kumar Nain
4 years
Okay, this is new for me. I never tried this and this IMHO is a big deal.
4
12
104
@A_K_Nain
Aakash Kumar Nain
4 years
Okay, enough points and opinions heard about "Transformers replacing CNNs". Here is my take: 1/ 1. Transformers are good, no doubt about that, but they are also on the "very heavy side" for production 2. Deploying transformers on the edge would be a huge pain
3
11
106
@A_K_Nain
Aakash Kumar Nain
4 years
LinkedIn should be renamed the "emotional storytelling" platform. 😤😤
8
4
106
@A_K_Nain
Aakash Kumar Nain
3 years
Remember, there is a shortage of ML people in the industry, but from the selection process to the interviews, everything is so deeply broken that not everyone gets a fair chance.
8
1
104
@A_K_Nain
Aakash Kumar Nain
2 years
Writing another JAX tutorial. This will be the last tutorial focused on the fundamentals of JAX in the TF-JAX tutorials series. After this, we will be moving to building NNs in JAX using a high-level API. Expect the tutorial by EOW! 🍻🥳
2
5
104
@A_K_Nain
Aakash Kumar Nain
2 years
Keras was the first API where I placed my big bets (you can read that blogpost from 2017 on Medium) when I started my ML journey. Keras is ❤️ Keras: the best high-level API. JAX: the best low-level API. Thanks @fchollet 🍻🍻
@fchollet
François Chollet
2 years
Keras turns 7 tomorrow -- it hit GitHub on March 27, 2015. Crazy how fast that went by... It's been amazing to see the project and its community take off over the years since! I feel like we're still just getting started.
15
79
876
0
7
98
@A_K_Nain
Aakash Kumar Nain
2 years
This paper was one of the best papers I read a while ago. I annotated it as well. You can find the annotation here:
@GoogleAI
Google AI
2 years
Understanding how neural networks reach decisions can be challenging, but is also valuable for uses from image analysis to scientific discovery. A new approach, called StylEx, discovers and visualizes how disentangled attributes affect a classifier →
25
516
2K
0
9
97
@A_K_Nain
Aakash Kumar Nain
4 years
Pheww! I was able to finish the blogpost in time. If you want to learn about the latest research paper Meta Pseudo Labels, you can read about it here:
1
19
97
@A_K_Nain
Aakash Kumar Nain
3 years
Everyone on LinkedIn is now a career coach for getting a job in FAANG! If I ever get into FAANG, I will make sure I write a book on getting a job in FAANG.
13
3
96
@A_K_Nain
Aakash Kumar Nain
4 years
This is gold! 🔥
@paperswithcode
Papers with Code
4 years
🎉 Papers with Code partners with arXiv! Code links are now shown on arXiv articles, and authors can submit code through arXiv. Read more:
46
2K
7K
1
8
94
@A_K_Nain
Aakash Kumar Nain
3 years
The wait is over! The first tutorial on JAX in the TF-JAX tutorials series is out. Here are a few things that you will learn: 1. Why JAX? 2. DeviceArray 3. Numpy vs JAX-numpy 4. Automatic Differentiation Check it out and let me know what you think.
0
21
94
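A tiny taste of the topics listed above, assuming nothing beyond jax and jax.numpy: the NumPy-like API, jit compilation, and jax.grad for automatic differentiation. This is an illustration, not an excerpt from the tutorial.

```python
import jax
import jax.numpy as jnp

# jax.numpy mirrors the NumPy API, arrays live on device,
# and jax.grad gives automatic differentiation.
def loss(w, x, y):
    pred = jnp.dot(x, w)
    return jnp.mean((pred - y) ** 2)

grad_fn = jax.jit(jax.grad(loss))          # compile + differentiate w.r.t. w
w = jnp.zeros(3)
x = jnp.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
y = jnp.array([1.0, 2.0])
print(grad_fn(w, x, y))                    # gradient of the MSE w.r.t. w
```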
@A_K_Nain
Aakash Kumar Nain
6 years
Nice work @TensorFlow team for cleaning all the mess in the documentation and putting Keras and Eager first. Thanks @fchollet for such a great API for DL. Love for Keras will never end.
1
9
94
@A_K_Nain
Aakash Kumar Nain
3 years
As an experienced ML guy, my advice to every graduate trying hard to get into ML: Go for an MS if you can afford it. Life would be much easier in terms of the job search. Trust me, that much struggle just to find a good job isn't worth it. Rejections are hard. PS: I didn't do an MS, so...
12
3
92
@A_K_Nain
Aakash Kumar Nain
2 years
1/ Happy to announce that the second part in the DDPMs tutorial series is now available: **Gaussian Distribution in the context of DDPMs** Here is the link to the repo containing the notebook:
1
21
90
@A_K_Nain
Aakash Kumar Nain
4 years
In these tough times, this made me really really happy! Learn more about Keras here:
3
2
92
@A_K_Nain
Aakash Kumar Nain
3 years
The wait is over! I have finished updating the first tutorial in TF/JAX tutorials series. Check it out! Hope you like it. Lemme know what you think (in this thread) 🎉🎉🥳🥳🔥🔥
8
10
90
@A_K_Nain
Aakash Kumar Nain
3 years
After trying out many things over many years, I realized that spending time on data pipelines (collection, cleaning, etc.) in real-world scenarios is far more important than replacing ResNet50 with something like EfficientNet and doing more and more hyperparameter tuning.
@fchollet
François Chollet
3 years
ML researchers work with fixed benchmark datasets, and spend all of their time searching over the knobs they do control: architecture & optimization. In applied ML, you're likely to spend most of your time on data collection and annotation -- where your investment will pay off.
20
344
2K
2
4
90
@A_K_Nain
Aakash Kumar Nain
3 years
I love doing ML research but I also wish to be as good as @fchollet @SingularMattrix someday to be able to develop amazing libraries. Deep dive into Keras and JAX has been a wonderful ride so far 🥳🥳
4
1
87
@A_K_Nain
Aakash Kumar Nain
3 years
This is one of the most fun exercises I did last week. The best thing about a good API with a great mental model is that it makes the implementation of even complex tasks very easy. Keras is ❤️
@fchollet
François Chollet
3 years
In this code walkthrough, we build an image captioning Transformer -- in about 200 lines of beautiful, clean code ✨ Don't miss it:
7
105
494
7
12
85
@A_K_Nain
Aakash Kumar Nain
2 years
🔥 Tutorial Alert! 🔥 When it comes to Deep Learning, Augmentation is an essential component of the training data pipeline. This tutorial aims at teaching you to write a data augmentation pipeline in TF/JAX. Here is the link to the @kaggle notebook:
1
11
84
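A rough sketch of a tf.data augmentation pipeline of the kind the tutorial describes; the ops, dataset, and parameters here are illustrative choices, not the notebook's exact ones.

```python
import tensorflow as tf

def augment(image, label):
    image = tf.cast(image, tf.float32) / 255.0
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_brightness(image, max_delta=0.1)
    # pad-and-crop: resize to 40x40 with padding, then take a random 32x32 crop
    image = tf.image.random_crop(tf.image.resize_with_pad(image, 40, 40), [32, 32, 3])
    return image, label

(x_train, y_train), _ = tf.keras.datasets.cifar10.load_data()
ds = (
    tf.data.Dataset.from_tensor_slices((x_train, y_train))
    .shuffle(10_000)
    .map(augment, num_parallel_calls=tf.data.AUTOTUNE)
    .batch(128)
    .prefetch(tf.data.AUTOTUNE)
)
```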
@A_K_Nain
Aakash Kumar Nain
3 years
I didn't join Reddit because it was toxic. I joined Twitter because finding ML resources was easy and it was peaceful. Now, ML Twitter is extremely toxic IMHO. Don't know when but I think I would consider leaving the platform
1
0
88
@A_K_Nain
Aakash Kumar Nain
4 years
Implemented **Integrated Gradients** in @TensorFlow. It was a fun exercise with lots of learning from the original implementation. Also, submitted a PR to for the same. Check it out: cc: @fchollet @GokuMohandas
0
13
84
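The core of Integrated Gradients is small enough to sketch: average the gradients of the target-class score along a straight path from a baseline to the input, then scale by (input − baseline). This is a generic illustration, not the implementation or PR mentioned in the tweet; it assumes a classifier that maps a batch of images to class probabilities.

```python
import tensorflow as tf

def integrated_gradients(model, x, baseline, target_class, steps=50):
    # x and baseline: single images of shape [H, W, C]
    alphas = tf.reshape(tf.linspace(0.0, 1.0, steps + 1), [-1, 1, 1, 1])
    interpolated = baseline + alphas * (x - baseline)      # path from baseline to x
    with tf.GradientTape() as tape:
        tape.watch(interpolated)
        probs = model(interpolated)[:, target_class]       # target-class scores
    grads = tape.gradient(probs, interpolated)
    # trapezoidal average of the gradients along the path
    avg_grads = tf.reduce_mean((grads[:-1] + grads[1:]) / 2.0, axis=0)
    return (x - baseline) * avg_grads                      # per-pixel attributions
```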
@A_K_Nain
Aakash Kumar Nain
2 years
There is a reason why I consider as one of the best resources. Amazing tutorial!
@fchollet
François Chollet
2 years
New tutorial on : generating images using denoising diffusion implicit models. Start experimenting with your own image generation models!
7
71
424
0
5
82
@A_K_Nain
Aakash Kumar Nain
4 years
I am literally tired of this. (In a party) Person1: You don't drink? You never done that? Me: Never felt the need to! Person2: And you don't smoke as well? Me: Absolutely no. Person3: And you are a **pure** veg? Me: Yes. Never understood why people freak out🤦‍♂️🤦‍♂️🤦‍♂️
11
4
86
@A_K_Nain
Aakash Kumar Nain
2 years
100+ github stars on the diffusion models repo. Thank you everyone for the support. I hope you liked the content! 🙏🙏🥳🥳🍻 cc: @RisingSayak @rishabh16_
1
6
83
@A_K_Nain
Aakash Kumar Nain
4 years
Github is down. Well, it's 2020!
2
9
81
@A_K_Nain
Aakash Kumar Nain
3 years
People who are unfollowing because of recent threads on Covid... go ahead. I am not here to please anyone!
3
1
81
@A_K_Nain
Aakash Kumar Nain
4 years
Debugging RNNs and GANs are the two toughest things, period.
7
2
81
@A_K_Nain
Aakash Kumar Nain
5 years
Have you ever seen anyone doing marketing for numpy/scipy? No, right? This is why these are "truly" open-source and this is also the reason why everyone loves numpy!
5
7
80
@A_K_Nain
Aakash Kumar Nain
10 months
2012: Everywhere I go, I see a web dev 2019: Everywhere I go, I see a DS 2020: Everywhere I go, I see a MLE 2023: Everywhere I go, I see a so called LLM expert 🤣🤣🤣🤣
7
12
81
@A_K_Nain
Aakash Kumar Nain
4 years
They sacrificed their lives so that we can enjoy our freedom. Never forget that! Saying that I am grateful for everything my country has given isn't enough. Words aren't enough to express that gratitude and those feelings. "Meri jaan Tiranga hai!" Happy Independence Day! 💥💥🇮🇳
4
2
80
@A_K_Nain
Aakash Kumar Nain
3 years
What you are seeing in the image below is known as a "dibiya", a type of kerosene lamp. In the early 2000s, I had only this to study by at night. Electricity used to be a luxury! This reminded me of my progress, and that's been the best feeling of the last few days. PC: Google Images
Tweet media one
6
3
78
@A_K_Nain
Aakash Kumar Nain
4 years
Looking for a mentor/group of researchers doing cutting-edge research in semi-supervised learning. Aiming for at least 1-2 research papers by the end of 2020. RTs would be very helpful! 🙏🙏
11
41
77
@A_K_Nain
Aakash Kumar Nain
4 years
So many debates on Twitter/Reddit/HN regarding TensorFlow vs PyTorch. Literally mind-blowing! My take: Blaming TF2.0 for the mistakes of TF1.x isn't fair. Also, most people have realized that they need to learn both.
9
2
77
@A_K_Nain
Aakash Kumar Nain
2 years
I have updated the TF-JAX tutorials repo. We have covered all the fundamental building blocks you need to understand the two frameworks better. Also, this is the first repo you can treat as a one-stop resource to learn about @TensorFlow and JAX.
2
12
75
@A_K_Nain
Aakash Kumar Nain
3 years
Computer Vision is eternal love! ❤️❤️
2
4
79
@A_K_Nain
Aakash Kumar Nain
2 years
🔥Annotated Paper alert!🔥 Today's annotated paper is the latest research from the Google and Waymo teams: PolyLoss Why should you read this paper? 👇
2
9
76
@A_K_Nain
Aakash Kumar Nain
4 years
It’s a shame that ML engineers at LinkedIn aren’t using GPT-3 to filter out and block anything related to: 1. Emotional/motivational posts 2. Story-tellers 3. Any post with the statement “Do you agree?” 😤😤 4. Political agendas
12
2
77
@A_K_Nain
Aakash Kumar Nain
3 years
1.5k stars on my annotated research papers repo! I started doing this only a few months back and didn't anticipate that it would be liked by so many people. I started it because I wasn't able to print papers during the pandemic, and it has really taken off. Thank you everyone! 🙏❣️
2
1
77
@A_K_Nain
Aakash Kumar Nain
4 years
As promised, here is a @kaggle notebook showcasing the use of an **Endpoint** layer for the CTC loss function used for building a **Captcha Reader** in @TensorFlow cc: @fchollet @random_forests @DynamicWebPaige PS: PR for keras.io on the way!
@A_K_Nain
Aakash Kumar Nain
4 years
The Lambda layer in Keras is really handy for small use cases, but as the solution becomes complex, e.g. including arbitrary losses, it's not the best solution at all. Today or tomorrow, I am gonna put up a notebook to show how you can do the same thing in a more elegant way!
3
1
50
0
11
76
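The "Endpoint layer" idea described above can be sketched as a small custom layer that attaches the CTC loss with add_loss and simply passes the predictions through, instead of squeezing the loss into a Lambda layer. A hedged reconstruction of that pattern, not the exact notebook:

```python
import tensorflow as tf
from tensorflow import keras

class CTCLayer(keras.layers.Layer):
    def call(self, y_true, y_pred):
        # y_true: padded label sequences, y_pred: per-timestep softmax output
        batch = tf.cast(tf.shape(y_true)[0], tf.int64)
        input_len = tf.cast(tf.shape(y_pred)[1], tf.int64) * tf.ones((batch, 1), tf.int64)
        label_len = tf.cast(tf.shape(y_true)[1], tf.int64) * tf.ones((batch, 1), tf.int64)
        loss = keras.backend.ctc_batch_cost(y_true, y_pred, input_len, label_len)
        self.add_loss(loss)       # loss travels with the model itself
        return y_pred             # predictions pass through unchanged
```

In the functional model the labels become a second Input and the layer is applied as `output = CTCLayer()(labels, softmax_out)`; for inference you build a second model that stops at the softmax output, so no Lambda-layer loss plumbing is needed.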
@A_K_Nain
Aakash Kumar Nain
3 years
The first tutorial in the TF/JAX tutorial series will probably be out today or tomorrow. Come learn the basics in a different way! 🥳🥳🥳
4
7
76
@A_K_Nain
Aakash Kumar Nain
4 years
The number of things that I have to learn and do today is 10x more than all the things combined that I have learned in the last 4 years. You gotta move fast to be in the game or you just watch yourself losing out
3
3
73
@A_K_Nain
Aakash Kumar Nain
4 years
Woke up at 6am. Played football from 7am-9am. Came back home. Found a bug in my model. 9am-3pm: Debugging debugging.....no breakfast, no lunch, no bathing! 😡😡 Sometimes I literally hate DL!
10
1
72