Bharath Ramsundar

@rbhar90

Followers
12,041
Following
10,718
Media
299
Statuses
14,945
@rbhar90
Bharath Ramsundar
2 months
Respectfully, AI today is not like atomic science before the Manhattan project. The problem is there's a terrible Wizard of Oz effect. Even smart people who understand LLMs intellectually get fooled by the seeming intelligence of ChatGPT etc. Smooth text interpolation + a giant…
89
196
1K
@rbhar90
Bharath Ramsundar
1 month
This picture has been sticking in my brain. This is a good exemplar of why LLMs as computing platforms don't make sense to me. They learn the input distribution biases and all, and not necessarily any underlying structure or meaning. It's the hallucination problem in another form
@infobeautiful
Information is Beautiful
1 month
Ask ChatGPT to pick a number between 1 and 100 - which does it pick? (by @Leniolabs_ )
Tweet media one
433
860
10K
85
98
995
@rbhar90
Bharath Ramsundar
1 month
My mental model of an LLM is an interpolative vector database. Larger LLMs correspond to bigger databases. The vector embeddings allow for interpolative behavior to generalize to some degree due to rich embeddings. Given this model, I am trying to understand the scientific…
31
82
728
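The "interpolative vector database" mental model can be made concrete with a toy sketch. Everything here is hypothetical illustration: the character-trigram "embedding" is a crude stand-in for a learned embedding, and the `VectorDB` class is invented for this example. The point is that answering a query reduces to nearest-neighbor retrieval over stored vectors.

```python
import math
from collections import Counter

def embed(text, n=3):
    # Toy "embedding": bag of character trigrams (a stand-in for a learned vector)
    t = f"  {text.lower()}  "
    return Counter(t[i:i + n] for i in range(len(t) - n + 1))

def cosine(a, b):
    # Cosine similarity between two sparse count vectors
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

class VectorDB:
    """Minimal vector store: answer a query by returning the nearest stored text."""
    def __init__(self):
        self.items = []

    def add(self, text):
        self.items.append((embed(text), text))

    def query(self, text):
        q = embed(text)
        return max(self.items, key=lambda item: cosine(q, item[0]))[1]

db = VectorDB()
db.add("the capital of France is Paris")
db.add("water boils at 100 degrees Celsius")
print(db.query("what is the capital of France?"))
```

Retrieval here interpolates between stored entries via embedding similarity but never constructs an answer that isn't already in the store, which is the tweet's point about generalization versus underlying structure.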
@rbhar90
Bharath Ramsundar
6 years
The point of a PhD isn't really novel thinking or research even. It's more about learning how to interface with the centuries old chain of academic literature. The PhD trains you to construct memories ("papers") that are appended onto this chain for preservation.
12
99
593
@rbhar90
Bharath Ramsundar
6 years
I have a personal goal to spend more time being bored. If you're always busy, always excited, there's no room for mental rest and contemplation. But if you're bored, the mind wanders. I've come up with some of my best ideas when bored out of my head. You might too
27
76
577
@rbhar90
Bharath Ramsundar
6 years
I'm excited to announce that my book with @Reza_Zadeh , "TensorFlow for Deep Learning" will be out on March 1st! Check it out for an introduction to the fundamentals of deep learning that focuses on conceptual understanding
13
82
433
@rbhar90
Bharath Ramsundar
5 months
Repeatedly hyping AGI fears is an excellent PR strategy. Almost no downside to the company if AGI doesn't happen in the next 5 years. It does tremendous damage to the field though and misguides policy makers. It's increasingly important to push back against the hype
12
44
306
@rbhar90
Bharath Ramsundar
6 years
Two parallel trends I see in AI. First, a growing realization that deep learning isn't going to get to AGI. Second, the best AI work continues to build and extend on core deep learning insights. It sounds contradictory, but deep learning is a building block not the solution
12
45
305
@rbhar90
Bharath Ramsundar
4 years
I've written up a brief paper review of "Molecular Attention Transformers," an intriguing new model for molecular machine learning that blends graph convolutional methods with transformer architectures
3
61
292
@rbhar90
Bharath Ramsundar
6 years
This is one of the most innovative deep learning papers I've seen in a while. Uses @PyTorch to construct differentiable dynamic programs. Allows for the use of backprop for PGM inference in a structured fashion
@arthurmensch
Arthur Mensch
6 years
Our work w/ @mblondel_ml 'Differentiable Dynamic Programming for Structured Prediction and Attention' was accepted at @icmlconf ! Sparsity and backprop in CRF-like inference layers using max-smoothing, application in text + time series (NER, NMT, DTW)
Tweet media one
Tweet media two
Tweet media three
9
200
651
1
79
288
@rbhar90
Bharath Ramsundar
7 years
What can't deep learning do? Worth putting together a list of known failures to guide algorithmic development
14
70
280
@rbhar90
Bharath Ramsundar
6 months
I like this emerging view of LLMs as approximate retrieval engines, something like a continuous analog of a database. Practical, powerful, and thankfully distant from AGI mythology
@ylecun
Yann LeCun
6 months
Don't confuse the approximate retrieval abilities of LLMs for actual reasoning abilities.
29
103
708
13
27
266
@rbhar90
Bharath Ramsundar
3 years
The "12 steps to Navier Stokes" tutorial series is amazing. I've had trouble building a mental model of Navier Stokes but the 12 notebooks really break down how the equation works into bite size chunks
2
37
249
@rbhar90
Bharath Ramsundar
9 months
1/ The new paper on Transformer limits () has me thinking a lot. I've tried a lot to use ChatGPT and found it a lacking tool for my applications. It does an excellent surface level job but often fails to do deeper analyses
9
42
251
@rbhar90
Bharath Ramsundar
5 years
Nowadays I learn new mathematics primarily through coding. I build a model system in Python, write unit tests, construct examples and iterate. There are limits to this style, but it's surprisingly powerful. I suspect the math classes of the future will adopt this system
11
27
242
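The workflow the tweet describes (build a model system in Python, write unit tests, iterate) can be sketched with a unit-test-style check of a calculus fact. This is a minimal illustration, not the author's actual exercises.

```python
import math

def numeric_derivative(f, x, h=1e-6):
    # Central finite difference: error is O(h^2) for smooth f
    return (f(x + h) - f(x - h)) / (2 * h)

# "Unit tests" for the claim d/dx sin(x) = cos(x)
for x in [0.0, 0.5, 1.0, 2.0, -1.3]:
    assert abs(numeric_derivative(math.sin, x) - math.cos(x)) < 1e-8
```

Constructing examples like this and watching the asserts pass (or fail) is the coding analogue of checking a theorem against small cases.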
@rbhar90
Bharath Ramsundar
3 years
The Feynman lectures are available publicly online on Caltech's website. This is a great resource for learning more about physics
6
66
230
@rbhar90
Bharath Ramsundar
6 years
Recently, I've started reading some pure math again. Unlike when I was younger, I have no particular aims or goals and am mainly looking for beautiful things. Maybe unsurprisingly, I find myself starting to get ideas I couldn't digest before
8
10
219
@rbhar90
Bharath Ramsundar
5 months
This is a good example showing LLMs are basically powerful autocomplete tools backed by large databases. I think 23 triggered a numerical flow and it kept rolling. The world model is limited at best. This isn't AGI
@abacaj
anton
5 months
Even the smartest LLM is "dumb" and overfit. Change a few numbers and watch them collapse
Tweet media one
305
263
3K
22
32
213
@rbhar90
Bharath Ramsundar
5 years
I'm sure this blog post is excellent, but @Medium 's paywall is preventing me from reading it. Proliferating paywalls on blog articles is an unpleasant future. It feels like inviting academic paywalls back into the tech world. Let's start avoiding paywalled services like @Medium
@quocleix
Quoc Le
5 years
Nice blog post titled "The Quiet Semi-Supervised Revolution" by Vincent Vanhoucke. It discusses two related works by the Google Brain team: Unsupervised Data Augmentation and MixMatch.
Tweet media one
4
293
982
15
25
208
@rbhar90
Bharath Ramsundar
8 months
I'm starting to worry silly fears around AI doom could cut off societally useful scientific AI. An LLM trained on a trillion molecules is by no means superintelligent, or even intelligent outside chemistry, but could fall afoul of bad regulation
@norabelrose
Nora Belrose
9 months
I'm opposed to any AI regulation based on absolute capability thresholds, as opposed to indexing to some fraction of state-of-the-art capabilities. The Center for AI Policy is proposing thresholds which already include open source Llama 2 (7B). This is ridiculous.
Tweet media one
56
37
410
18
31
208
@rbhar90
Bharath Ramsundar
6 years
Deep learning is becoming a subdiscipline of software engineering and systems software. Academic machine learners are drifting into causality, quantum ML, probabilistic programming and neuroscience. This is actually healthy. I think the lemon has been juiced thoroughly already
5
36
205
@rbhar90
Bharath Ramsundar
27 days
AI agents make for very cool demos but I'm not sure when you'd actually want one. LLMs are still pretty mediocre at instruction following. An agent with chained LLM calls can go off the rails pretty easily. I think most of us don't want uncontrolled systems; rather we want tools…
27
18
200
@rbhar90
Bharath Ramsundar
5 years
I think programming language work doesn't get enough credit for deep learning. Automatic differentiation is what's enabled deep learning frameworks. If backprop had to be implemented by hand, deep learning would never have taken off
5
21
199
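The claim can be made concrete with a minimal forward-mode autodiff sketch using dual numbers. This is an illustrative toy, not how TensorFlow or PyTorch actually implement backprop (they use reverse mode), but it shows why nobody has to derive gradients by hand.

```python
class Dual:
    """Forward-mode automatic differentiation via dual numbers (val + grad * eps)."""
    def __init__(self, val, grad=0.0):
        self.val, self.grad = val, grad

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.grad + other.grad)
    __radd__ = __add__

    def __mul__(self, other):
        # Product rule happens automatically: (uv)' = u'v + uv'
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.grad * other.val + self.val * other.grad)
    __rmul__ = __mul__

def derivative(f, x):
    # Seed the gradient with 1.0 and read the derivative off the result
    return f(Dual(x, 1.0)).grad

# d/dx (3x^2 + 2x) = 6x + 2, so at x = 4 the derivative is 26
assert derivative(lambda x: 3 * x * x + 2 * x, 4.0) == 26.0
```

Every deep learning framework generalizes exactly this trick: write the forward computation once and the chain rule is applied mechanically.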
@rbhar90
Bharath Ramsundar
4 years
I'd really love to see an "American Shenzhen" develop. The ability to cheaply build hardware prototypes locally would be an amazing resource for researchers and entrepreneurs
10
14
196
@rbhar90
Bharath Ramsundar
2 months
I strongly disagree with this report. Those of us who disagree and don't think AGI is coming immediately need to get the word out there more and push back against bad policy recommendations by Doomers. There are real-world repercussions to bad geopolitical policy
@billyperrigo
Billy Perrigo
2 months
🚨Exclusive: a report commissioned by the U.S. government says advanced AI could pose an "extinction-level threat to the human species" and calls for urgent, sweeping new regulations
149
224
626
13
39
195
@rbhar90
Bharath Ramsundar
5 years
I loved @fchollet 's recent paper. One of the few papers about AGI that actually makes you think. Definitely recommend a careful read
@EmilWallner
Emil Wallner
5 years
François Chollet’s core point: We can't measure an AI system's adaptability and flexibility by measuring a specific skill. With unlimited data, models memorize decisions. To advance AGI we need to quantify and measure ***skill-acquisition efficiency***. Let’s dig in👇
Tweet media one
13
279
1K
0
33
189
@rbhar90
Bharath Ramsundar
6 years
Deep learning for gravitational wave detection! Deep learning is really taking off in science. I suspect it's because a new wave of physics grad students who've done the ML coursework now understand both physics and ML and can find applications.
2
61
172
@rbhar90
Bharath Ramsundar
5 years
I enjoy working with code written by non-engineers. Oftentimes, in not following "best practices," they come up with creative solutions to problems. Novelty becomes unwieldy at scale but is beautiful in the small
7
11
167
@rbhar90
Bharath Ramsundar
6 months
Scientists at @OpenAI , if you really see evidence of apocalyptic capabilities, you have a moral duty to publish asap. Prove it so us skeptics don't keep fighting you foolishly. If you don't have evidence, why are you raising public fears? Show proof or stop hyping please.
@rbhar90
Bharath Ramsundar
6 months
If this is true, please publish openly. I don't see evidence provided for these claims from any published material. I've tried open versions of ChatGPT etc. It's nowhere near this capable
5
6
74
9
24
167
@rbhar90
Bharath Ramsundar
6 years
Random forests: a truly magical and underappreciated algorithm. My first query to newcomers to deep learning is "Have you tried a random forest to baseline?"
2
27
167
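The "baseline first" advice can be illustrated with a from-scratch toy random forest: bootstrap-resampled decision stumps with majority voting. This is a deliberately minimal sketch (real random forests grow full trees and also subsample features; in practice you'd reach for a library implementation such as scikit-learn's `RandomForestClassifier`).

```python
import random
from collections import Counter

def fit_stump(X, y):
    # Exhaustively pick the (feature, threshold) split with the fewest errors
    best = None
    for f in range(len(X[0])):
        for t in set(row[f] for row in X):
            left = [yi for row, yi in zip(X, y) if row[f] <= t]
            right = [yi for row, yi in zip(X, y) if row[f] > t]
            lc = Counter(left).most_common(1)[0][0] if left else y[0]
            rc = Counter(right).most_common(1)[0][0] if right else y[0]
            preds = [lc if row[f] <= t else rc for row in X]
            errs = sum(p != yi for p, yi in zip(preds, y))
            if best is None or errs < best[0]:
                best = (errs, f, t, lc, rc)
    return best[1:]

def fit_forest(X, y, n_trees=25, seed=0):
    random.seed(seed)
    forest = []
    for _ in range(n_trees):
        idx = [random.randrange(len(X)) for _ in X]  # bootstrap resample
        forest.append(fit_stump([X[i] for i in idx], [y[i] for i in idx]))
    return forest

def predict(forest, row):
    # Majority vote across the ensemble
    votes = Counter(l if row[f] <= t else r for f, t, l, r in forest)
    return votes.most_common(1)[0][0]

# Tiny 1-D dataset: class 0 near zero, class 1 near ten
X = [[v] for v in [0, 1, 2, 3, 4, 10, 11, 12, 13, 14]]
y = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
forest = fit_forest(X, y)
print(predict(forest, [1]), predict(forest, [13]))
```

Even this crude version nails a separable problem, which is exactly why it makes a sobering baseline before reaching for a deep network.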
@rbhar90
Bharath Ramsundar
5 months
OpenAI does a lot of damage with AGI mythologizing. Failure to be honest about what's in the training data and how that explains seemingly intelligent behavior is core. This is why we need open LLMs. Lying about AGI will damage the field and prevent useful applications
@rbhar90
Bharath Ramsundar
5 months
Goes to @ylecun 's point that LLMs do well when they are fed the answer... I'm being a little unfair but the hype cycle needs to deflate a bit
0
2
26
4
20
160
@rbhar90
Bharath Ramsundar
6 years
One of life's trickiest skills is learning to learn new things. In a new blog post, I share some tips for learning to learn in everyday life, such as greedy learning, pushing yourself, and reframing new concepts in terms of the everyday.
3
44
160
@rbhar90
Bharath Ramsundar
6 years
I'm excited to announce that @deep_chem 2.0 has just been released! We've significantly refactored and extended our TensorGraph framework (built on @TensorFlow ) to let us support more deep learning chem/bio/science applications. Please check it out!
3
46
157
@rbhar90
Bharath Ramsundar
3 years
I've put together a @deep_chem model wishlist with a collection of models we'd like to see added to DeepChem. Molecular ML, materials ML, physics ML and more needed. Please take a look and contribute!
3
30
155
@rbhar90
Bharath Ramsundar
4 years
Doing 0.1 seconds of molecular dynamics is crazy. For context, integration steps happen at the femtosecond scale. That's 100 trillion integration steps!
@foldingathome
Folding@home
4 years
@nvidia We have analyzed our dataset (capturing 0.1 seconds of simulation, the largest in history!) to identify over 50 novel pockets across many #COVID19 proteins! These “cryptic pockets” are candidates for targeting with antivirals.
Tweet media one
2
41
145
6
27
150
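The arithmetic behind the claim, spelled out as simulated time divided by the integration timestep (working in powers of ten to keep the count exact):

```python
# Time scales as powers of ten, in seconds
SIM_TIME_EXP = -1    # 0.1 s of simulated dynamics
TIMESTEP_EXP = -15   # 1 femtosecond per integration step

steps = 10 ** (SIM_TIME_EXP - TIMESTEP_EXP)
print(f"{steps:,} steps")  # 100,000,000,000,000
```

That is 10^14 = 100 trillion steps, matching the tweet's figure.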
@rbhar90
Bharath Ramsundar
2 years
Noether's theorem should be taught earlier in physics (maybe even at the high school level). The idea that energy is just the conserved quantity of a time invariant system for example is a simple definition of the otherwise mysterious term "energy"
10
16
145
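The definition the tweet alludes to can be stated precisely. For a Lagrangian $L(q, \dot{q})$ with no explicit time dependence, a short chain-rule calculation plus the Euler--Lagrange equation yields the conserved energy:

```latex
% Expand the total time derivative of L:
\frac{dL}{dt}
  = \frac{\partial L}{\partial q}\,\dot{q}
  + \frac{\partial L}{\partial \dot{q}}\,\ddot{q}
  + \frac{\partial L}{\partial t}
% Substitute the Euler--Lagrange equation
% \partial L/\partial q = \frac{d}{dt}\frac{\partial L}{\partial \dot{q}}:
\frac{dL}{dt}
  = \frac{d}{dt}\!\left(\frac{\partial L}{\partial \dot{q}}\,\dot{q}\right)
  + \frac{\partial L}{\partial t}
% Hence the quantity
E \;:=\; \frac{\partial L}{\partial \dot{q}}\,\dot{q} \;-\; L
% satisfies dE/dt = -\partial L/\partial t = 0
% whenever L has no explicit time dependence.
```

So "energy" is precisely the Noether charge of time-translation symmetry, which is the simple definition the tweet advocates teaching.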
@rbhar90
Bharath Ramsundar
6 years
TensorFlow 1.7 release candidate is out. Biggest change is that eager mode is moved into the main library. This is going to dramatically change how users write TensorFlow code. Looking forward to more dynamic graphs!
1
54
144
@rbhar90
Bharath Ramsundar
7 months
I think one of the biggest mistakes I made when starting research was trying to become a "great researcher". It's often more useful to just explore for fun or curiosity. The paper I worked hardest on so far still has 0 citations, and that's OK
@sivareddyg
Siva Reddy
7 months
My student sent me this list saying they have to improve themselves in many areas. Such a list can do more harm than good. While I appreciate the author's intention to motivate one for greatness, I don't think it can be planned. But you can plan to be a "good researcher."
13
61
580
6
15
149
@rbhar90
Bharath Ramsundar
5 months
Join the new @deep_chem Discord server if you are interested in open source scientific machine learning! We're building up a good community to discuss scientific ML, LLMs and more
3
25
147
@rbhar90
Bharath Ramsundar
7 years
I've written a new blog post on "Machine Learning with Small Data." Small data ML is the future :)
8
41
147
@rbhar90
Bharath Ramsundar
6 years
Data is the new oil, but perhaps simulators are the new data.
5
24
148
@rbhar90
Bharath Ramsundar
4 years
Something worth noting here is that Baidu is actively covering up the infrastructure of genocide. If you're accepting a computer vision paper from Baidu researchers, consider what that technology could be used for
@meghara
Megha Rajagopalan
4 years
How were we able to do this analysis? . @alisonkilling , Christo Buschek and I stumbled upon a strange phenomenon on China's Baidu Maps — light gray tiles appearing over known Xinjiang camp locations. By finding more gray tiles, @alisonkilling thought we could find more camps.
Tweet media one
12
201
531
1
45
147
@rbhar90
Bharath Ramsundar
5 years
Some things take a really long time to learn. I think it took me 5 years before I could read and write Tamil well. There are some concepts in algebra I learned in college and I'm just starting to get now. Being willing to patiently learn something over years pays dividends
5
11
147
@rbhar90
Bharath Ramsundar
2 months
Another excellent take. Most of us sharing skepticism love AI and have been working on it for over a decade. Being honest is sustainable. Hype and pump-and-dump crater careers and mislead the public
@fchollet
François Chollet
2 months
Of course factualism doesn't sell -- if I wanted to be more of an AI influencer I would have to be constantly tweeting about how AI is going to replace all programmers and doctors and so on in less than a year. That sounds exciting and positive, and it gets great engagement!…
26
32
615
5
20
100
@rbhar90
Bharath Ramsundar
2 months
I've personally struggled to get much out of generative AI tools. They're fun distractions but the hallucinations mean I'd rather just google+wiki most of the time. But smart people I know seem to love them. Perhaps it's a matter of working style?
49
4
145
@rbhar90
Bharath Ramsundar
6 years
Blockchains will make game theory mainstream in the way machine learning made linear algebra mainstream
@naval
Naval
6 years
Eventually, every game theory textbook will have a chapter on public blockchains.
27
203
1K
5
35
142
@rbhar90
Bharath Ramsundar
4 months
My two cents: rumors like this have been circulating for a while. It's excellent marketing, but I'm not buying it. I think OpenAI has far less than they imply.
@JacquesThibs
Jacques
4 months
I’ve heard the same (~“most papers are bad; very behind actual SOTA”) from someone at a top lab @dwarkesh_sp @tszzl has said similar things. GPT-4 finished training *1.5 years ago*. What do we expect world-class researchers at OAI have done since then, twiddled their thumbs?
Tweet media one
12
15
252
10
6
140
@rbhar90
Bharath Ramsundar
7 years
First two chapters of my and @Reza_Zadeh 's book "Tensorflow for deep learning" available for free on @matroid blog
0
52
137
@rbhar90
Bharath Ramsundar
6 years
Leaving academia can be scary if you're a serious scientist. But it's deeply freeing as well. Being able to evaluate papers without worrying about alienating influential professors who could nix your tenure frees the mind. Academic hierarchy does a lot of unnoticed harm
3
20
135
@rbhar90
Bharath Ramsundar
4 years
Differentiable programming seems like a really interesting extension to differential geometry. It's particularly interesting since manifolds are usually very smooth objects and programs combinatorial objects but now we're seeing a blending of the two subjects
4
23
136
@rbhar90
Bharath Ramsundar
3 months
A lot of OpenAI's work is taking existing community models, running them at large scale, and then shipping them out online. Good systems software. But it's tiring how it's captured the AGI discourse especially when they don't publish or give proper credit
3
15
137
@rbhar90
Bharath Ramsundar
6 months
This is an important debate. I am nowhere near as qualified, but I also respectfully disagree with the doomer position. It's important for scientists who disagree to speak up so there isn't the impression of a scientific consensus on AI doom
@geoffreyhinton
Geoffrey Hinton
6 months
Yann LeCun thinks the risk of AI taking over is miniscule. This means he puts a big weight on his own opinion and a miniscule weight on the opinions of many other equally qualified experts.
638
484
4K
7
8
133
@rbhar90
Bharath Ramsundar
4 years
Maybe it's my time in quarantine, but I'll share a couple unfiltered opinions about startups. Starting off, I think culture setting exercises in general are silly. Writing down a few generic statements ("We live for excellence") doesn't say much
5
23
135
@rbhar90
Bharath Ramsundar
4 years
This implementation of Genomic-ULMFiT provides a neat application of modern NLP models to genomic classification. Pretraining on a large corpus of genetic sequences is used to boost downstream genome sequence classification problems
0
26
133
@rbhar90
Bharath Ramsundar
6 years
This is a really interesting paper on biological deep learning. The researchers construct an end-to-end model of a moth's olfactory network, trained with a Hebbian rule. The system is capable of low-data learning, picking up new smells with just 10 examples
3
23
128
@rbhar90
Bharath Ramsundar
6 years
Blockchain is revolutionary for reasons I've seen few people express. It's not Bitcoin's valuation, but fundamental algorithmic advances that have the potential to remake Tech. My new blog post, "Why Blockchain Could (One Day) Topple Google," explains
5
24
100
@rbhar90
Bharath Ramsundar
6 years
How do you solve a truly hard problem? My undergrad math advisor said it's a bit like trying to climb a mountain. You circle and look for a path to the peak that no one else has noticed. Often the searcher with the most time and patience wins. Genius isn't what it appears from afar
4
26
122
@rbhar90
Bharath Ramsundar
6 years
Learning how to read code is a skill worth developing. Not straightforward, since it requires you to form a mental model of code execution, but broad reading will teach you idioms and design patterns you didn't realize could exist. Pick a GitHub repo you like and start reading!
3
24
121
@rbhar90
Bharath Ramsundar
5 years
I lost interest in pure mathematics after starting CS grad school, but I've recently been getting back to it after following some interesting mathematicians on Twitter. The human angle makes such a difference. Seeing what people are excited about cuts through the noise
5
6
118
@rbhar90
Bharath Ramsundar
6 years
Medium is really pushing their subscription model recently. So, users write free essays which we pay Medium to read? Sounds a lot like old school scientific publisher paywalls. No thanks!
6
21
119
@rbhar90
Bharath Ramsundar
6 years
Thriving in deep learning requires unlearning mathematics. Proofs are not to be trusted and only empirical results can be believed. Blockchain is the opposite. In a world where good empirical results are extraordinarily hard, solid proofs provide critical guarantees.
7
26
116
@rbhar90
Bharath Ramsundar
6 years
Have you ever wondered what a PhD is like? Check out my new essay, "A PhD in Snapshots." I've put together 10 progress reports I wrote charting the ups and downs and sideways of the PhD. It's a long and windy path!
2
21
118
@rbhar90
Bharath Ramsundar
6 years
I'll be defending my thesis, "Molecular Machine Learning with DeepChem" on December 12th from 3:15-5:15 at Stanford. The first portion is open to the public, so please come by if you're interested in learning about molecular ML!
9
14
113
@rbhar90
Bharath Ramsundar
6 years
Excited to announce that @deep_chem has crossed 1,000 stars on GitHub! Lots of interest in deep learning for the life sciences. Excited to see how our users leverage these tools to discover cures for diseases that have no treatments today!
1
30
118
@rbhar90
Bharath Ramsundar
5 years
Google Earth now offers up-close pictures of backyards. This is pretty intrusive, folks. I'm able to see my parents' backyard and their neighbors' yards in high detail. When was consent asked for this?
11
16
108
@rbhar90
Bharath Ramsundar
5 months
I really recommend this paper. Reading it highlights just how many details are left out in most "open" LLMs. The ML community should return to its strong open source roots and make more fully open LLMs to advance the fundamental science of the space
2
19
109
@rbhar90
Bharath Ramsundar
6 years
Another cool deep learning for physics paper.
@abursuc
Andrei Bursuc
6 years
tempoGAN: nice use of adversarial training for super-resolution of temporally consistent fluid flows using a Volumetric-GAN
Tweet media one
Tweet media two
1
40
99
2
31
112
@rbhar90
Bharath Ramsundar
2 months
@schwabpa The conversational behavior really creates a powerful illusion of presence. It's fascinating how it enables us to mirror ourselves onto the chatbot. Also why these things are so dangerous as therapy agents etc. They tell us completions we want to hear pseudo-authoritatively
5
12
116
@rbhar90
Bharath Ramsundar
3 years
Something that surprised me when I learned it was that federal grants in the US provide ~$75K/year/grad student. But the grad student only gets about a ~$25K stipend with the rest going to the university. It feels like the ratio should be inverted with $50K stipend
10
8
114
@rbhar90
Bharath Ramsundar
4 years
There are a lot of talented folks working on not terribly useful products. The ongoing crises are a good opportunity to reevaluate whether what you're working on is meaningful and worth your energy.
9
19
114
@rbhar90
Bharath Ramsundar
3 years
This article suggests that biological neurons are roughly analogous to 5-8 layer deep networks (w/ 256 nodes/layer). This seems intuitively reasonable. Deep learning practitioners know that the intelligence of a deep network is usually quite limited
1
20
107
@rbhar90
Bharath Ramsundar
28 days
This has triggered a lot of controversy. To share a more positive take, I got my start in research in high school with Intel STS. My high school didn't have supporting programs so I emailed a lot of faculty and researchers. A couple were kind enough to give me pointers which…
@thegautamkamath
Gautam Kamath
30 days
NeurIPS 2024 will have a track for papers from high schoolers.
Tweet media one
79
92
595
14
11
110
@rbhar90
Bharath Ramsundar
6 years
Check out our new paper on "Spatial Graph Convolutions for Drug Discovery." Converts a 3D macro molecular structure into a graph structure that it feeds into a graph convolutional deep network. Matches state-of-art with end-to-end learning.
3
44
105
@rbhar90
Bharath Ramsundar
5 years
Something like 8 years ago, I did my first internship at Google. I remember working through a simple mapreduce job and launching it on 1000 cores. It was an exhilarating experience and one that pushed me towards CS grad school.
2
6
106
@rbhar90
Bharath Ramsundar
6 years
Physicists love Boltzmann distributions partly because of their analytic tractability. Computer scientists less so because of their computational intractability. Efficient generative neural models of physical systems will likely gain in prominence over coming years.
1
23
104
@rbhar90
Bharath Ramsundar
5 years
If you're working on AI, ask yourself why progress in AI is a genuine human good? Most applications of AI so far seem to involve public face recognition and audio surveillance. We need more recognition that research is being used for regressive purposes
12
21
103
@rbhar90
Bharath Ramsundar
6 years
Deep learning developers should be concerned about Google's extensive patents in the deep learning space. If the patents were truly for defensive purposes, Google could open source them. Note that they haven't.
@mark_riedl
Mark Riedl
6 years
Serious question: if Google changed its license on future versions of TensorFlow, and a company using an older Apache 2.0 licensed version reimplemented newer functionality... that would violate copyright? Precedent set by Oracle vs Google over the Java API?
2
8
36
3
38
95
@rbhar90
Bharath Ramsundar
4 years
A brief personal announcement: I’ve departed my role as CTO at @computable_io . It’s been a pleasure working with a talented team to develop and ship the world's first protocol for decentralized data cooperatives!
1
2
100
@rbhar90
Bharath Ramsundar
5 years
I'm reading "Surveillance Capitalism". It's making me take a hard look at big tech and Google. I've been in and around these companies for the last decade. Many friends and mentors I deeply respect there. But I can't say I deeply grasped the consequences of the business model
2
9
98
@rbhar90
Bharath Ramsundar
6 years
Jupyter notebooks now support interactive C++. Mind blown. Will make C++ dev so much nicer. H/T @cxhrndz
3
51
95
@rbhar90
Bharath Ramsundar
3 years
I'll just go ahead and make a prediction that AlphaFold will win a Nobel. Accessible, believable proteomes for humans and other species too, wow. There are a lot of proteins in there we barely understand that we now have a structure for
3
14
97
@rbhar90
Bharath Ramsundar
2 months
Part of the reason LLMs cause so much hype and confusion is that they are genuinely new bits of software. We don't yet have common understanding of what approximate retrieval from large compressed text/image/video databases can do. It's too easy to fall into the trap of imagining…
10
12
97
@rbhar90
Bharath Ramsundar
6 years
I've written a new blog post, "The Advent of Huang's Law," which makes the case that the continuing improvements in GPUs alongside the steady spread of deep learning applications indicates the start of a new epoch in modern computer science.
1
32
96
@rbhar90
Bharath Ramsundar
5 years
I think real breakthroughs in AI will come from students who are currently studying things like quantum computing, low-energy circuit design, neuroscience, or cryptography. The field of machine learning, like graphics before it, is tilting towards applications right now
3
13
95
@rbhar90
Bharath Ramsundar
6 years
Digesting very complex ideas can take years. Oftentimes, the concept won't click until you face an external challenge that requires you to master the concept to proceed. It's really important to be patient. The fact you don't get something now doesn't mean 5 years out you can't.
4
13
95
@rbhar90
Bharath Ramsundar
3 years
I'm really excited to be able to announce that @deep_chem 2.4.0 is out! Over a year's worth of hacking from an entire team of superb developers. DeepChem has been basically rewritten at this point and supports many new use cases
@deep_chem
The DeepChem Project
3 years
DeepChem 2.4.0 is out!! This release features over a year's worth of development work. Support for TensorFlow 2 / PyTorch. Significantly improved production readiness. Faster datasets. More MoleculeNet datasets, materials science support and more!
1
29
124
5
17
94
@rbhar90
Bharath Ramsundar
7 years
Check out first two chapters of "Tensorflow for Deep Learning", my new book with @Reza_Zadeh and @OReillyMedia !
5
28
93
@rbhar90
Bharath Ramsundar
6 years
When you're trying to digest a deep technical idea, taking it head-on is almost always a bad strategy. Instead take a strafing run against it; skim it trying to pick out one weak point, simple enough that you can get it. That gives you a toehold. Repeat patiently till it cracks
4
16
93
@rbhar90
Bharath Ramsundar
9 months
Some sad news. Bram Moolenaar, the creator of vim, has just passed
5
19
93
@rbhar90
Bharath Ramsundar
6 years
One of the tougher things about learning new ideas is letting go of the idea that you're an "expert". Really new ideas make you question many of your deeply held instincts and will often make you feel stupid. Embracing the bewilderment will make you more creative
2
18
91
@rbhar90
Bharath Ramsundar
6 years
Spending a little time in a library drives home the limitations of recommendation algorithms. It's not possible to walk around Amazon and discover hidden gems that you'd have never thought to open if they weren't in front of your eyes
7
19
91
@rbhar90
Bharath Ramsundar
6 years
If you want to do good science, read widely. Papers in your field, in adjacent fields, and even in disciplines you know nothing about. Curiosity is a really good guide here. If you think something looks cool, just go ahead and read it!
1
22
89
@rbhar90
Bharath Ramsundar
5 years
The reason Ethereum deeply excites me is that it's the first attempt to build a permissionless nation state. A rudimentary financial system has been built. Storage, communication, identity and more are in the works. Governance is painfully forming. This is what big tech should be
1
16
83
@rbhar90
Bharath Ramsundar
2 years
Floating an idea to any @GoogleAI engineers listening, perhaps consider rolling Jax out to an open source foundation like @PyTorch has done. Google's track record with open source maintenance isn't great. Take the opportunity to make sure your amazing work isn't lost!
2
4
84
@rbhar90
Bharath Ramsundar
7 months
This is an excellent summary of logical holes in AGI fears. Why do we expect recursive self improvement? Usually iterating systems hit a fixed point fairly rapidly. Exponentials are usually S curves. AGI fear mongering seems self serving for organizations that benefit from it
@martin_casado
martin_casado
7 months
@Simeon_Cps what's particularly strange about this discourse is that so much of it is counter to what we know about systems: - complex systems hit diminishing marginal returns - even a billion years of evolution has not evolved the ability to auto-improve intelligence - centralized…
10
3
44
16
12
85
@rbhar90
Bharath Ramsundar
5 years
I'm excited to announce @deep_chem 2.2 has just been released! This new version contains improvements to protein structure handling, better support for image datasets, and brings @deep_chem closer to being a general library for the deep life sciences
1
23
85
@rbhar90
Bharath Ramsundar
4 months
Attacks on open source AI are mounting. If you think open source AI can be a source of good, please speak up. Bad regulation could choke the field and useful applications
8
22
83