Samip Dahal Profile
Samip Dahal

@samipddd

Followers: 724 · Following: 3K · Media: 11 · Statuses: 157

working on the theory of general intelligence.

SF
Joined June 2016
@samipddd
Samip Dahal
11 months
there is a good chance that it is real in a much longer timescale.
0
0
3
@samipddd
Samip Dahal
11 months
why is Gaia still a taboo?
1
0
1
@samipddd
Samip Dahal
11 months
the good news is that we have access to a decent chunk of the computational universe. so we can bootstrap non-obvious (intelligent) steps with more gradient descent. "emergence".
0
0
2
@samipddd
Samip Dahal
11 months
we gotta beat gradient descent with more gradient descent. it's gradient descent + raw compute. if this turns out to be true, it's a striking fact about our universe that all we can do is the most obvious thing (take a small, local step in this large continuous space).
2
0
1
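the "small, local step" the tweet describes can be sketched in a few lines. this is a minimal illustration of plain gradient descent on a 1-D quadratic loss, not anyone's production training loop; the function names and learning rate are made up for the example.

```python
# minimal gradient descent: repeatedly take a small, local step
# downhill in parameter space. loss(theta) = (theta - 3)^2.

def grad(theta):
    # derivative of (theta - 3)^2
    return 2.0 * (theta - 3.0)

def gradient_descent(theta=0.0, lr=0.1, steps=100):
    for _ in range(steps):
        theta -= lr * grad(theta)  # the "most obvious thing": one local step
    return theta

print(gradient_descent())  # converges toward the minimum at 3.0
```

the striking part the tweet points at is that this update rule, plus scale, is essentially the whole recipe.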
@samipddd
Samip Dahal
1 year
flexing parameter count in LLMs is going to look pretty stupid in retrospect. better models have **fewer** parameters, not more. you should instead be flexing how much compute your model is able to spend at inference time. current models cannot spend much time on inference.
0
0
2
@samipddd
Samip Dahal
1 year
all the physics that has been discovered so far describes properties of the dream universe that our minds create. not the underlying reality that allows such minds to exist in the first place.
0
0
1
@samipddd
Samip Dahal
1 year
physics people have mostly stopped asking the most fundamental questions (what the universe actually is, how it exists, the role of minds/observers like us). the community as a whole seems to have convinced itself that those answers are out of reach.
0
0
3
@samipddd
Samip Dahal
1 year
the universe actually exists. how fucking amazing is that.
0
0
4
@samipddd
Samip Dahal
2 years
Looking back, intelligence will look insanely cheap/easy to build. And LLMs are the opposite of that, although they are the thing closest to general intelligence right now (won't last long).
0
1
3
@samipddd
Samip Dahal
2 years
LLMs won't win the AI race. The thing we've clearly seen is that scaling works and neural nets are enough. But we'll build much more efficient/capable models than LLMs. Think of the human brain, to begin with. or GNNs (for certain super difficult algorithmic reasoning tasks).
2
0
3
@samipddd
Samip Dahal
2 years
the interesting thing about turing machines is that despite being crazy powerful, the first thing we knew about them was what they couldn't do in theory (all the non-computable things) and not what they could do.
0
0
3
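the classic example of what turing machines can't do is the halting problem, proved by diagonalization before any useful program had ever run. here's a toy sketch of that argument: `halts` is a hypothetical oracle (no such total function can exist), and `paradox` builds the program that defeats any candidate oracle.

```python
# sketch of Turing's diagonal argument. `halts` is a *hypothetical*
# oracle claiming to decide whether a program halts; paradox() returns
# a program g that does the opposite of whatever the oracle predicts.

def paradox(halts):
    def g():
        if halts(g):       # oracle says g halts...
            while True:    # ...so g loops forever
                pass
        # oracle says g loops forever, so g halts immediately
    return g

# feed it an oracle that always answers "loops forever":
g = paradox(lambda f: False)
g()  # returns immediately -- contradicting the oracle's answer
```

whichever answer the oracle gives about g is wrong, so no correct `halts` can exist.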
@samipddd
Samip Dahal
2 years
autoregressive LLMs are just not it if we're talking about superintelligence. time to move on.
0
0
2
@samipddd
Samip Dahal
2 years
@LukeGessler
Luke Gessler
2 years
this paper's nuts. for sentence classification on out-of-domain datasets, all neural (Transformer or not) approaches lose to good old kNN on representations generated by… gzip
[image]
0
0
0
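the quoted paper's trick is easy to sketch: use gzip's compressed length as a similarity signal (normalized compression distance) and classify with kNN. this is a toy illustration of the idea, not the paper's exact implementation; the training sentences below are made up.

```python
# kNN text classification with gzip: two texts that share structure
# compress better together, so their normalized compression distance
# (NCD) is small.
import gzip

def clen(s: str) -> int:
    return len(gzip.compress(s.encode()))

def ncd(a: str, b: str) -> float:
    # NCD(a, b) = (C(ab) - min(C(a), C(b))) / max(C(a), C(b))
    cab = clen(a + " " + b)
    return (cab - min(clen(a), clen(b))) / max(clen(a), clen(b))

def knn_predict(query, train, k=3):
    # train: list of (text, label); majority vote among k closest by NCD
    neighbors = sorted(train, key=lambda tl: ncd(query, tl[0]))[:k]
    labels = [label for _, label in neighbors]
    return max(set(labels), key=labels.count)
```

no parameters, no training — the compressor is the representation, which is exactly why the result reads as an indictment of the neural baselines.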
@samipddd
Samip Dahal
2 years
okay everybody this is what AGI looks like. that's all -- those 14 lines
[image]
2
0
4
@samipddd
Samip Dahal
2 years
in fact, i applied to the Thiel fellowship as a sophomore a couple of years ago saying i wanted to do code generation with NNs and got rejected lol. it was a batshit crazy idea tho, the best you could do was train a 50M parameter model to barely write one line of python.
0
0
1
@samipddd
Samip Dahal
2 years
one trend that doesn't get emphasized enough is how we are completely redoing programming with ML. before GPT-3, it was not a serious thing in any sense. TranX --a semantic parser!!-- was the best code generation algorithm. the vibe shifted rapidly with GPT-3.
1
1
4
@samipddd
Samip Dahal
2 years
Langchain is the definition of abstractions that are not needed. It's like 2 functions expanded to 200 files and 200 different concepts. Oh and it is incidentally also one of the most popular github repos right now.
@minimaxir
Max Woolf
2 years
The Problem With LangChain
[image]
0
0
2
@samipddd
Samip Dahal
2 years
RT @thesephist: You guys are telling me we are going to invent literal superintelligence and we are going to interact with it by sending te….
0
10
0
@samipddd
Samip Dahal
2 years
We live in exciting times: coding will be completely automated BUT theoretical computer science will become the biggest field.
0
0
0
@samipddd
Samip Dahal
2 years
the best programmers i know are there to solve problems or build something -- instead of just 'coding'. you can describe what you want in natural language, some UI, or some easy formal language.
0
0
0