Simons Institute for the Theory of Computing
@SimonsInstitute
Followers: 9K · Following: 851 · Media: 552 · Statuses: 1K
The world's leading venue for collaborative research in theoretical computer science. Follow us at https://t.co/KvcuGI7WM0.
Berkeley, CA
Joined January 2018
4/4 “I claim this is the future of LLMs. My prediction is that in a couple of years people are going to look back and think it’s crazy to cram all of this into the model weights,” @Cornell's Kilian Weinberger at the Simons Institute. Video: https://t.co/uwbmOr1rD1
3/4 Large Memory Language Model (LMLM) “is a very simple trick…basically a pre-training method that separates tail knowledge and common knowledge”: Kilian Weinberger at the Simons Institute’s workshop on The Future of Language Models and Transformers. https://t.co/uwbmOr1rD1
2/4 “We can make the model much smaller, faster to train, cheaper to operate. And we are no longer limited by the model weights for the tail knowledge. We can make this external database really big,” @Cornell's Kilian Weinberger at the Simons Institute. https://t.co/uwbmOr1rD1
1/4 LLMs learn language competency, factual common knowledge and factual tail knowledge. Factual knowledge has a heavy tail and should be stored in a database. “It doesn’t belong in the model weights”: @Cornell's Kilian Weinberger at the Simons Institute. https://t.co/uwbmOr1rD1
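The thread above describes the LMLM idea at a high level: keep rare "tail" facts in an external database that the model queries, rather than memorizing them in its weights. A toy sketch of that separation, assuming a hypothetical lookup step (all names and the database contents here are illustrative, not from the actual LMLM work):

```python
# Illustrative sketch: tail knowledge lives in an external key-value store,
# while the "model" handles language competency and common knowledge.
# The routing rule below is a stand-in for what a trained model would learn.

tail_db = {
    # fact key -> fact value; this store can grow without retraining the model
    "capital(Burkina Faso)": "Ouagadougou",
}

def generate(prompt: str) -> str:
    """Toy decoder: defer tail-knowledge spans to the database lookup
    instead of recalling them from model weights."""
    if "capital of Burkina Faso" in prompt:
        return tail_db.get("capital(Burkina Faso)", "[unknown]")
    return "common-knowledge answer from model weights"

print(generate("What is the capital of Burkina Faso?"))  # -> Ouagadougou
```

Because facts sit in `tail_db` rather than in parameters, the database can be made "really big" or updated independently of the model, which is the advantage the 2/4 tweet points to.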
2/2 "Jailbreaking is a serious threat in agentic settings. Increased threat surfaces make it even more vulnerable." @sivareddyg of @Mila_Quebec at the Simons Institute, on the robustness of jailbreaking across aligned LLMs, reasoning models, and agents. Video: https://t.co/5Vi1KsKSUN
1/2 "In the LLM scenario...jailbreaking is very powerful. In agentic scenarios, it gets even more dangerous." @sivareddyg of @Mila_Quebec at the Simons Institute's workshop on Safety-Guaranteed LLMs. Video: https://t.co/5Vi1KsKSUN
2/2 "So how about getting rid of [self & goals] and minimizing [affordances] to its simplest possible form." @Yoshua_Bengio's Richard M. Karp talk at the Simons Institute on "Superintelligent Agents Pose Catastrophic Risks." Video: https://t.co/wLA72znf8x
1/2 "You could have bad goals and you could be smart, but if you can’t do anything in the world, then you can’t do a lot of harm. The trio is the thing that kills us." @Yoshua_Bengio, at his Richard M. Karp Distinguished Lecture at the Simons Institute: https://t.co/wLA72znf8x
2/2 Soledad Villar (@JohnsHopkins) identified 3 types of symmetries: those that come from observed regularities of physics; symmetries that come from choice of mathematical representations; & symmetries in the parameter space. Simons Institute talk video: https://t.co/OjrSzMpJDU
1/2 "Symmetries play a fundamental role in machine learning." Soledad Villar of @JohnsHopkins speaking at the Simons Institute workshop on Randomness, Invariants, and Complexity. Video: https://t.co/OjrSzMpJDU
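One concrete instance of the symmetries Villar's talk concerns is permutation invariance: a function on a set should not depend on the arbitrary ordering of its elements. A minimal sketch, assuming a simple sum-pooling representation (this example is illustrative and not taken from the talk):

```python
# Sum pooling over elementwise features is invariant to input permutations:
# reordering the set cannot change the output, by commutativity of addition.
import itertools

def pool(xs):
    """Permutation-invariant set representation via sum pooling."""
    return sum(x * x for x in xs)  # elementwise feature, then sum

xs = [3.0, 1.0, 2.0]
# Check invariance over every ordering of the input set.
invariant = all(pool(list(p)) == pool(xs) for p in itertools.permutations(xs))
print(invariant)  # -> True
```

This is the kind of symmetry that comes from the choice of mathematical representation (how a set is encoded), as opposed to symmetries observed in physics or symmetries of the parameter space.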
Join us next Tuesday, 12/9 for Jon Kleinberg's talk in our Theoretically Speaking public lecture series. Registration is required. https://t.co/uU1y7Co9zU
1/3 "What are the conditions for an AI system in the future to cause catastrophic harm?" Turing Award winner @Yoshua_Bengio asked, during his Richard M. Karp Distinguished Lecture at the Simons Institute earlier this year.
3/3 "The only thing...we can manage is to make sure they don’t have bad intentions. Of course, intentions can come from malicious humans, or ... from the AI themselves." @Yoshua_Bengio at his Richard M. Karp Distinguished Lecture at the Simons Institute. https://t.co/wLA72zmHiZ
2/3 "It needs to have the capability to cause the harm: the intelligence and the affordance. And it needs to have the intention," said @Yoshua_Bengio at the Simons Institute. "It’s very unlikely that we’ll stop the train of capability." Video: https://t.co/wLA72zmHiZ
2/2 "The general belief is that networking is not working efficiently." Chen Avin of @bengurionu at the Simons Institute workshop on Managing Specialized and Heterogeneous Architectures. Video: https://t.co/YdeHRTXRqT