Explore tweets tagged as #TextGrad
@thomasahle
Thomas Ahle
1 year
Exciting new paper: TextGrad: Automatic “Differentiation” via Text! This is a DSPy-like framework for optimizing prompts in a composite LLM system. However, there is one major difference! In DSPy the idea is (basically):
- Forward pass: Each Module (LLM call) generates/picks a
Tweet media one
Tweet media two
Tweet media three
15
64
315
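For context on what the framework announced above looks like in code, here is a minimal sketch of its PyTorch-style loop, adapted from the `textgrad` README quickstart; the engine name, question, and evaluation instruction are placeholders, not the paper's exact setup.

```python
# Minimal TextGrad loop, adapted from the `textgrad` README quickstart.
# Engine name, question, and evaluation instruction are placeholders.
import textgrad as tg

tg.set_backward_engine("gpt-4o")  # LLM that writes the textual "gradients"

model = tg.BlackboxLLM("gpt-4o")
question = tg.Variable(
    "If a train travels 60 km in 45 minutes, what is its average speed in km/h?",
    role_description="question to the LLM",
    requires_grad=False,
)

# Forward pass: the module produces an answer we want to improve.
answer = model(question)
answer.set_role_description("concise and accurate answer to the question")

# Textual loss: an LLM critiques the answer instead of computing a number.
loss_fn = tg.TextLoss("Evaluate the answer for correctness and clarity.")
loss = loss_fn(answer)

# Backward pass propagates the critique; TGD rewrites the variable.
loss.backward()
optimizer = tg.TGD(parameters=[answer])
optimizer.step()

print(answer.value)
```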
@ivibecode
build.dev
2 months
AlphaEvolve broke a 56-year matrix-math record, but the real story was how it used a novel gradient-based optimization to rewrite its own code. Add TextGrad’s language approach, and you get a numeric and textual self-refining dev loop. RL hype is loud,
0
0
1
@tom_doerr
Tom Dörr
1 year
Agents 2.0 seems to use techniques similar to those in TextGrad
Tweet media one
1
2
5
@TommyFalkowski
Tommy Falkowski
1 year
Has anyone already tried out TextGrad? Looks like some kind of successor to DSPy
Tweet media one
2
0
1
@jiqizhixin
机器之心 JIQIZHIXIN
2 months
Evolution, meet AI agents. 🚀 Meet EvoAgentX: an open-source platform that evolves multi-agent workflows with LLMs — no manual configs, no static pipelines. It auto-generates, executes, and optimizes agents using TextGrad, AFlow, & MIPRO across tasks like QA, code, and math.
Tweet media one
3
3
15
@zereraz
Sahebjot Singh
1 year
TextGrad: a framework performing automatic “differentiation” via text. TextGrad backpropagates textual feedback provided by LLMs to improve individual components of a compound AI system.
Tweet media one
0
0
4
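As a rough illustration of what "backpropagating textual feedback" means for a compound system, here is a from-scratch sketch over a two-stage pipeline; the `llm()` helper and all prompts are hypothetical stand-ins, not TextGrad's actual implementation.

```python
# From-scratch sketch of "textual backprop" through a two-stage pipeline.
# `llm(prompt)` is a hypothetical stand-in for any chat-completion call.

def llm(prompt: str) -> str:
    return f"<LLM response to: {prompt[:40]}...>"  # replace with a real client

def run_pipeline(extract_prompt, answer_prompt, document, question):
    # Forward pass: two chained LLM "modules".
    facts = llm(f"{extract_prompt}\n\nDocument:\n{document}")
    answer = llm(f"{answer_prompt}\n\nFacts:\n{facts}\n\nQuestion: {question}")
    return facts, answer

def textual_backprop(extract_prompt, answer_prompt, document, question, critique):
    # Backward pass: an LLM turns the downstream critique into feedback
    # (a "textual gradient") for each component, then rewrites that component.
    facts, answer = run_pipeline(extract_prompt, answer_prompt, document, question)
    grad_answer = llm(
        f"Answer:\n{answer}\nCritique:\n{critique}\n"
        f"How should the instruction that produced the answer change?\n{answer_prompt}"
    )
    grad_extract = llm(
        f"Extracted facts:\n{facts}\nfed into an answer that drew this critique:\n{critique}\n"
        f"How should the extraction instruction change?\n{extract_prompt}"
    )
    # "Optimizer step": apply each piece of feedback to its own prompt.
    new_answer = llm(f"Rewrite using the feedback.\nInstruction:\n{answer_prompt}\nFeedback:\n{grad_answer}")
    new_extract = llm(f"Rewrite using the feedback.\nInstruction:\n{extract_prompt}\nFeedback:\n{grad_extract}")
    return new_extract, new_answer

new_extract, new_answer = textual_backprop(
    "Extract the key facts.", "Answer using only the facts.",
    document="...", question="...",
    critique="The answer ignored the second paragraph.",
)
```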
@techwith_ram
𝗿𝗮𝗺𝗮𝗸𝗿𝘂𝘀𝗵𝗻𝗮— 𝗲/𝗮𝗰𝗰
5 months
Stanford's TextGrad optimizes AI models using text-based feedback instead of numerical gradients, improving AI performance in science Q&A, medicine, drug discovery, and multimodal reasoning.
Tweet media one
0
17
73
@gabrielchua_
gabriel
1 year
trying `textgrad` and it's really just prompt engineering under the hood? for example, there's a textual gradient descent prompt, and "momentum" is done by adding earlier iterations of the prompt. well, at least the prompts are conveniently stored in `optimizer_prompts.py`
Tweet media one
1
0
6
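The "momentum" pattern described in the tweet above, feeding earlier prompt iterations back into the optimizer call, can be sketched as follows. `llm()` and the optimizer wording are hypothetical paraphrases of the idea, not the literal contents of `optimizer_prompts.py`.

```python
# Sketch of "textual momentum": the optimizer LLM sees earlier prompt
# iterations alongside the current feedback, so revisions build on each
# other instead of oscillating. All wording here is a hypothetical
# paraphrase, not the literal contents of `optimizer_prompts.py`.

def llm(prompt: str) -> str:
    return f"<LLM response to: {prompt[:40]}...>"  # replace with a real client

def tgd_step(prompt: str, feedback: str, history: list[str]) -> str:
    past = "\n".join(f"Iteration {i}: {p}" for i, p in enumerate(history))
    optimizer_prompt = (
        "You are optimizing an instruction for an LLM.\n"
        f"Earlier iterations (momentum context):\n{past}\n"
        f"Current instruction:\n{prompt}\n"
        f"Feedback on its outputs:\n{feedback}\n"
        "Write an improved instruction."
    )
    return llm(optimizer_prompt)

history: list[str] = []
prompt = "Answer the question briefly."
for _ in range(3):
    history.append(prompt)
    feedback = "Answers are too terse; show the key reasoning step."  # placeholder
    prompt = tgd_step(prompt, feedback, history)
```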
@thomasahle
Thomas Ahle
1 year
A while ago I wrote a thread about #TextGrad, which is an alternative prompt optimization method based on "natural language gradients". Cool! Since we are still waiting for @karpathy's video reimplementing this from scratch, I thought I had to make my own. So here is the
Tweet media one
Tweet media two
Tweet media three
Tweet media four
5
21
108
@luke_yun1
Luke Yun
5 months
TextGrad optimizes AI models using text-based feedback instead of numerical gradients, enabling iterative refinement across tasks like science Q&A, medical treatment planning, and drug discovery. This marks a shift in AI learning by leveraging "textual gradients" for adaptation.
Tweet media one
1
3
17
@james_y_zou
James Zou
5 months
⚡️Really thrilled that #textgrad is published in @nature today!⚡️ We present a general method for genAI to self-improve via our new *calculus of text*. We show how this optimizes agents🤖, molecules🧬, code🖥️, treatments💊, non-differentiable systems🤯 + more!
Tweet media one
Tweet media two
20
127
669
@MikeE_3_14
Mike Erlihson, Math PhD, AI
1 year
⚡️🚀Mike's daily paper, 23.06.24:⚡️🚀 TextGrad: Automatic “Differentiation” via Text. 1⃣ I'm a bit drunk after a few shots and beers at the cool one-shot event, but persistence with the daily reviews won out. Today's review discusses an approach that "projects" the gradient descent method (gradient descent, or simply
Tweet media one
Tweet media two
Tweet media three
1
2
5
@arankomatsuzaki
Aran Komatsuzaki
1 year
TextGrad: Automatic "Differentiation" via Text
- Backprops textual feedback provided by LLMs to improve individual components of a compound AI system
- 51% -> 55% on GPQA and 20% rel. gain on LeetCode-Hard
repo: abs:
Tweet media one
2
52
278
@TheAIObserverX
nat | localhost: auriel
1 year
TextGrad: Automatic "Differentiation" via Text ◼ 🚀 Introducing TextGrad, the new framework revolutionizing AI optimization! By harnessing LLM feedback, it enhances AI components, from coding to molecule design, without extra tuning. 🧠🔧 Boosts GPT-4o's accuracy & optimizes
Tweet media one
0
0
0
@RobRoyce_
Royce
8 months
Comparison of TextGrad vs. DSPy, both very compelling pieces of research from @Stanford
Tweet media one
0
1
4
@kirill_igum
Kirill Igumenshchev
1 year
The #TextGrad #LLM framework shows that optimizing the answer AND the #prompt using multistep #reflection greatly improves accuracy. They created a Python package with syntax similar to PyTorch's. The framework has nice logging, but I don't find it intuitive or extensible.
Tweet media one
Tweet media two
Tweet media three
0
0
0
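For readers curious about the PyTorch-like syntax mentioned above, here is a hedged sketch of optimizing the prompt itself (not just a single answer) with `textgrad`. The engine names, prompts, and the `system_prompt` argument to `BlackboxLLM` follow my reading of the library's README and may differ in detail.

```python
# Sketch of optimizing the *prompt* (not just one answer) with textgrad's
# PyTorch-like surface. Engine names, prompts, and data are placeholders,
# and the loop follows my reading of the library's README.
import textgrad as tg

tg.set_backward_engine("gpt-4o")  # critic/gradient engine

system_prompt = tg.Variable(
    "You are a careful math tutor. Think step by step.",
    requires_grad=True,  # the trainable component of the system
    role_description="system prompt for the task model",
)
model = tg.BlackboxLLM("gpt-4o-mini", system_prompt=system_prompt)
optimizer = tg.TGD(parameters=[system_prompt])

question = tg.Variable(
    "What is 12% of 250?", role_description="question", requires_grad=False
)
loss_fn = tg.TextLoss("Is the answer correct and clearly reasoned? Critique it.")

for _ in range(3):  # a few reflection steps
    optimizer.zero_grad()
    answer = model(question)
    loss = loss_fn(answer)
    loss.backward()   # textual feedback flows back to the system prompt
    optimizer.step()  # rewrites system_prompt in place

print(system_prompt.value)
```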
@_akhaliq
AK
1 year
TextGrad: Automatic "Differentiation" via Text. AI is undergoing a paradigm shift, with breakthroughs achieved by systems orchestrating multiple large language models (LLMs) and other complex components. As a result, developing principled and automated optimization
Tweet media one
6
59
262
@123wimi
Happy
1 year
1. 🚀 TextGrad uses text feedback for automatic differentiation, offering a new method for prompt optimization in large language models, effective in Q&A, molecular optimization, and radiotherapy planning.
2. 🔧 DSPy provides the algorithmic foundation for TextGrad to optimize
Tweet media one
1
1
3
@onjas_6
Jason Hu
1 year
Super excited to share our work on benchmarking LLM routers at the Compound AI Systems workshop at @Data_AI_Summit! Had the pleasure to chat with other masterminds in the space like @matei_zaharia and @ChenLingjiao. Found @ShengLiu_'s textgrad presentation to be particularly
Tweet media one
4
5
38