
Jack Cole
@MindsAI_Jack
2K Followers · 6K Following · 35 Media · 957 Statuses
AI Researcher, Clinical Psychologist, App Dev
Illinois, USA
Joined April 2022
Thanks Greg. We are looking forward to a great competition, trying to progress towards AGI, and contributing to open source and science. I would like to introduce our great new teammate Dr. Dries Smit @DriesSmit1. He's cooking up other hot things in the lab most of the time.
The mission of ARC Prize is to open source progress towards systems that can generalize. The energy from @MindsAI_Jack and team to push the frontier forward helps do that. Excited to have them full time on ARC-AGI-2.
RT @arcprize: Impressive work by @makingAGI and team. No pre-training or CoT with material performance on ARC-AGI. > With only 27 million p…
Great work by Giotto and ARChitects!
We've had 3 leaders in the past 7 days for @arcprize. The top-score prize ($50K pool) is heating up.
RT @rohanpaul_ai: The paper shows that LLMs can explain rules yet still stumble when asked to carry them out. Most people try bigger model…
RT @pmddomingos: Transformers are the standard model of AI. They’re both:
- The result of a long series of breakthroughs
- Surprisingly pow…
RT @MLStreetTalk: Fantastic result from OAI achieving IMO gold medal level performance. However -- we disagree with the premise that super…
RT @fchollet: Today we're releasing a developer preview of our next-gen benchmark, ARC-AGI-3. The goal of this preview, leading up to the…
RT @sama: Today we launched a new product called ChatGPT Agent. Agent represents a new level of capability for AI systems and can accompli…
RT @AdamZweiger: Come check out our ICML poster on combining Test-Time Training and In-Context Learning for on-the-fly adaptation to novel…
RT @rohanpaul_ai: Big LLMs always run every layer, no matter how simple or hard the prompt. This paper shows that skipping or looping laye…
RT @GregKamradt: One of my favorite things about @arcprize, new ideas are needed which leads to novel research. Open papers like this by @P…
RT @rohanpaul_ai: Meta’s reply to Stargate comes through Prometheus at 1 GW and Hyperion at 2 GW, running multi-billion-dollar GPU clusters…
RT @EMostaque: fwiw the amount of compute needed for Kimi K2 is around about the same as DeepSeek V3/R1 ($5m worth). More stable training,…
RT @DrTechlash: 🚨The UK AISI identified four methodological flaws in AI "scheming" studies (deceptive alignment) conducted by Anthropic, MT…
Exactly. RL can help a model develop patterned exploration skills and some degree of self-verification, but that is not adaptable to every new problem.
Scaling up RL is all the rage right now; I had a chat with a friend about it yesterday. I'm fairly certain RL will continue to yield more intermediate gains, but I also don't expect it to be the full story. RL is basically "hey, this happened to go well (/poorly), let me slightly…
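The "this went well, nudge it up slightly" intuition in the tweet above is essentially the REINFORCE policy-gradient update. A minimal sketch of that update on a hypothetical 3-armed bandit (the reward means, learning rate, and variable names here are illustrative assumptions, not anything from the thread):

```python
import numpy as np

# Hypothetical 3-armed bandit: arm 2 pays the most on average, so its
# sampling probability should drift upward as rewards reinforce it.
rng = np.random.default_rng(0)
logits = np.zeros(3)                     # policy parameters, one logit per arm
true_means = np.array([0.1, 0.3, 0.8])   # assumed mean reward of each arm
lr, baseline = 0.1, 0.0

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for _ in range(2000):
    probs = softmax(logits)
    a = rng.choice(3, p=probs)           # act by sampling from the policy
    r = rng.normal(true_means[a], 0.1)   # observe a noisy reward
    baseline += 0.05 * (r - baseline)    # running average of past rewards
    grad = -probs                        # d log pi(a) / d logits ...
    grad[a] += 1.0                       # ... for the sampled action
    logits += lr * (r - baseline) * grad # went well? nudge pi(a) up slightly

best_arm = int(np.argmax(softmax(logits)))
```

After training, `best_arm` should settle on arm 2. Note that every update is a small, locally informed adjustment to behavior that already happened, which is the limitation the tweet is pointing at: this mechanism reinforces patterns, it does not by itself adapt to genuinely novel problems.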
RT @ADarmouni: Program Synthesis approach breakthrough in ARC-AGI through Self-Play. 📖 Read 201: « Self-Improving Language Models for Evolu…