Abhiram Singh Profile
Abhiram Singh

@symmkey

Followers
119
Following
187
Media
23
Statuses
297

Principal Engineer @AWS. Formerly @ Amazon Robotics and IBM. Tweet about #engineering #distributedsystems #software #ai #space #physics. Tweets are personal.

Joined March 2015
@symmkey
Abhiram Singh
5 months
Claude 3.7 Sonnet from Anthropic took on Pokémon Red with no prior training, using only vision, memory, and function calls. With its extended thinking mode, it reasoned through challenges, defeating 3 gym leaders by dynamically adapting to the game’s mechanics. This is.
0
0
0
@symmkey
Abhiram Singh
6 months
DeepSeek-R1 discovered a powerful compute multiplier for LLM reasoning capabilities. Instead of scaling through brute-force compute or complex architectures, they found an elegant shortcut. The Multiplier (GRPO):
• Generate multiple solutions & let them compete.
• Learn from.
1
0
2
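The "generate multiple solutions & let them compete" step can be sketched in a few lines. This is a minimal illustration of group-relative scoring as described in the GRPO paper, not DeepSeek's actual implementation; the function name and the example rewards are mine:

```python
import statistics

def group_relative_advantages(rewards):
    """GRPO-style advantages: score each sampled solution relative to the
    mean/std of its own sample group, with no learned value function."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against a uniform group
    return [(r - mean) / std for r in rewards]

# e.g., 4 sampled solutions to one prompt, scored 1/0 by a verifier
advs = group_relative_advantages([1.0, 0.0, 1.0, 0.0])
```

Solutions that beat their own group's average get positive advantage and are reinforced; the "competition" is entirely relative, which is what makes the recipe cheap.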
@symmkey
Abhiram Singh
6 months
too much computing is like too much money. they both stifle innovation.
0
0
0
@symmkey
Abhiram Singh
6 months
Tweet media one
0
5K
0
@symmkey
Abhiram Singh
7 months
2024 Belongs to LLM-Based Coding Agents and Why We Still Need Time-Honored Coding Best Practices (For Now). Of all the remarkable AI breakthroughs emerging this year, LLM-based coding agents have been turning the most heads. Their ability to generate workable code on the fly is.
0
0
0
@symmkey
Abhiram Singh
7 months
Transformers: The Surprising One-Stop Shop for Deep Learning and AI. People have been wary of “all-in-one” solutions in computing for ages—especially in distributed systems, where a single architecture can seem riskier than more specialized approaches. As Tanenbaum and van Steen.
0
0
1
@symmkey
Abhiram Singh
8 months
Some Thoughts on Scaling Model Reasoning with Test-Time Compute. How do models determine compute allocation at test time, and what does “difficulty” really mean for LLMs? Determining how much compute (time) to apply at inference is probably far from exact. Models may gauge complexity by.
0
0
0
@symmkey
Abhiram Singh
9 months
One of the best perks of driving an EV? Guilt-free idling. Imagine sitting in your car, waiting for your kid’s math class to end—no emissions, no engine noise, and no wasted fuel. You can keep the air on, charge your phone, or even catch up on some podcasts, all while being.
0
0
0
@symmkey
Abhiram Singh
9 months
I’ve been thinking about how current AI coding assistants are pretty impressive. They handle code completion, offer contextual suggestions, optimize algorithms, and can even help write small-scale applications with a bit of tinkering. I think the next logical step from here.
0
0
0
@symmkey
Abhiram Singh
9 months
Influence Without Authority is So Tough—But So Worth It When You Nail It. Trying to get things done without a fancy title or authority is tough. You can’t just tell people what to do. Instead, you’ve got to get them on board by earning their trust, proving your value, and showing.
0
0
0
@symmkey
Abhiram Singh
10 months
comfort is an addiction.
0
0
0
@symmkey
Abhiram Singh
10 months
Can We Build Deterministic Systems on Top of Probabilistic Foundations Like LLMs? It's a fascinating question that we don't have a definitive answer to yet. However, we can draw an intriguing parallel from the world of physics. In quantum mechanics, the microscopic world is.
0
0
0
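One practical pattern for getting stable macro-level behavior out of a stochastic sampler is self-consistency voting. This is a minimal sketch of that general idea, not something the tweet itself names; the example answers are made up:

```python
from collections import Counter

def majority_vote(samples):
    """Sample a stochastic model several times and keep the most common
    answer; the aggregate is far more stable than any single draw, much
    like macroscopic determinism emerging from microscopic randomness."""
    return Counter(samples).most_common(1)[0][0]

# five independent samples from the same prompt
answer = majority_vote(["42", "42", "17", "42", "42"])
```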
@symmkey
Abhiram Singh
11 months
My speculative take—based on publicly available information—on how the recently released OpenAI o1 model advances the state of the art (SOTA) in reasoning tasks is that it combines existing techniques in a new, scalable recipe. A key shift is moving significant compute from.
0
0
0
@symmkey
Abhiram Singh
11 months
this one tiny paragraph explains what 'inference' is really about better than entire volumes of literature on ML.
Tweet media one
0
0
0
@symmkey
Abhiram Singh
11 months
perfection is not a computable operation.
0
0
0
@symmkey
Abhiram Singh
11 months
As Builders, Should We Wait for LLMs to Become Deterministic? Lessons from Computing. Unpredictability, often seen as a limitation, can be a powerful tool in computing:
1/ Evolutionary Algorithms use randomness through mutations and crossovers to explore vast search spaces,
0
0
0
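The evolutionary-algorithm point is easy to make concrete with a toy genetic algorithm on the classic OneMax problem (maximize the number of 1-bits). A minimal sketch under assumptions of my own choosing (population size, mutation of one bit per child); not from the thread:

```python
import random

def one_max_ga(length=20, pop_size=30, generations=60, seed=0):
    """Toy genetic algorithm: randomness (crossover + mutation) drives
    the search toward the all-ones bitstring."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    fitness = sum  # fitness = number of 1-bits
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)      # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(length)] ^= 1   # flip one bit (mutation)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = one_max_ga()
```

Pure random search over 2^20 bitstrings would be hopeless; structured randomness converges in a few dozen generations.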
@symmkey
Abhiram Singh
11 months
The Long History of Tech’s Unintended Side Effects: Serendipity or Systematic Design? Tech history is full of unexpected side effects that revolutionized fields beyond their original purpose. GPUs, for example, were designed to solve a specific class of problems very efficiently.
0
0
0
@symmkey
Abhiram Singh
1 year
Why LLM Agents Have Taken Off in Coding Domains but Not in Others - A Thought. LLM-based coding agents have gained significant traction, particularly because the code they generate can be easily verified and refined using existing tools. This success is largely due to the fact.
0
0
0
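The "easily verified and refined" point can be sketched with the simplest possible verifier: does a candidate even parse? This is my own minimal illustration of the generate-verify-select loop, not a description of any particular agent; real agents swap in linters, type checkers, and unit tests as the verifier:

```python
def verify(code: str):
    """Cheap automatic check: does the candidate compile? Code domains
    have verifiers like this for free; most other domains do not."""
    try:
        compile(code, "<candidate>", "exec")
        return True, ""
    except SyntaxError as e:
        return False, str(e)

def select_first_valid(candidates):
    """Return the first LLM-generated candidate that passes verification;
    the failure message could instead be fed back for another attempt."""
    for code in candidates:
        ok, _err = verify(code)
        if ok:
            return code
    return None

best = select_first_valid(["def f(: pass", "def f(x):\n    return x + 1"])
```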
@symmkey
Abhiram Singh
1 year
Computer geeks, what's one accessory under $100 that you absolutely love?
0
0
1
@symmkey
Abhiram Singh
1 year
Today, OpenAI announced GPT-4o-mini: 15 cents per million input tokens, 60 cents per million output tokens, MMLU of 82%, and fast real-time responses. The trend of rapidly reducing inference costs continues and is gaining momentum. These smaller but highly capable models will.
5
0
1
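At those announced prices, per-request cost is simple arithmetic. A quick sketch using the tweet's numbers (the token counts in the example are made up):

```python
def request_cost_usd(input_tokens, output_tokens,
                     in_per_m=0.15, out_per_m=0.60):
    """Cost of one call at per-million-token prices: $0.15 in, $0.60 out,
    as announced for GPT-4o-mini."""
    return input_tokens / 1e6 * in_per_m + output_tokens / 1e6 * out_per_m

# a typical chat turn: 2,000 tokens in, 500 tokens out
cost = request_cost_usd(2_000, 500)  # 0.0003 + 0.0003 = $0.0006
```

At six hundredths of a cent per turn, inference cost stops being the bottleneck for many applications, which is the momentum the tweet points at.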