Reyna Abhyankar Profile
Reyna Abhyankar

@reyna_abhyankar

Followers 19 · Following 32 · Media 0 · Statuses 12

I like computers

Joined August 2019
@reyna_abhyankar
Reyna Abhyankar
1 month
Computer-Use Agents (CUAs) are improving every day but can take tens of minutes to complete simple tasks. We built OSWorld-Human, a benchmark that measures efficiency, a first step towards practical CUAs. Check out our blog post!
@yiying__zhang
Yiying Zhang
1 month
Computer-use AI agents (CUAs) are powerful, but way too slow. A 2-minute human task can take a CUA over 20 minutes! At WukLab, we're building faster CUAs. Recently, we created OSWorld-Human, a new benchmark to close the speed gap between humans and machines. Read our full blog.
Replies 0 · Reposts 2 · Likes 4
@reyna_abhyankar
Reyna Abhyankar
1 month
RT @yiying__zhang: Computer-use AI agents (CUAs) are powerful, but way too slow. A 2-minute human task can take a CUA over 20 minutes! At…
Replies 0 · Reposts 4 · Likes 0
@reyna_abhyankar
Reyna Abhyankar
5 months
RT @yiying__zhang: Boost your gen-AI workflow's quality by 2.8x with just $5 in 24 minutes! Check how Cognify autotunes gen-AI workflow's q…
github.com
Multi-Faceted AI Agent and Workflow Autotuning. Automatically optimizes LangChain, LangGraph, DSPy programs for better quality, lower execution latency, and lower execution cost. Also has a simple ...
Replies 0 · Reposts 4 · Likes 0
@reyna_abhyankar
Reyna Abhyankar
9 months
Check out our latest work: Cognify!
@yiying__zhang
Yiying Zhang
9 months
Struggling with developing high-quality gen-AI apps? Meet Cognify: an open-source tool for automatically optimizing gen-AI workflows. 48% higher generation quality, 9x lower cost, fully compatible with LangChain, DSPy, Python. Read & try Cognify: #GenseeAI
Replies 0 · Reposts 0 · Likes 2
@reyna_abhyankar
Reyna Abhyankar
9 months
RT @yiying__zhang: Struggling with developing high-quality gen-AI apps? Meet Cognify: an open-source tool for automatically optimizing gen-…
Replies 0 · Reposts 4 · Likes 0
@reyna_abhyankar
Reyna Abhyankar
11 months
RT @yiying__zhang: WukLab's new study reveals CPU scheduling overhead can dominate LLM inference time, up to 50% in systems like vLLM! Sched…
Replies 0 · Reposts 12 · Likes 0
@reyna_abhyankar
Reyna Abhyankar
1 year
RT @yiying__zhang: Join us at ICML in Vienna next Thursday 11:30-1pm local time (poster session 5) for our poster on InferCept (Augmented, o…
Replies 0 · Reposts 1 · Likes 0
@reyna_abhyankar
Reyna Abhyankar
1 year
RT @yiying__zhang: Today, LLMs are constantly being augmented with tools, agents, models, RAG, etc. We built InferCept [ICML'24], the first…
Replies 0 · Reposts 2 · Likes 0
@reyna_abhyankar
Reyna Abhyankar
1 year
RT @yiying__zhang: LLM prompts are getting longer and increasingly shared with agents, tools, documents, etc. We introduce Preble, the firs…
Replies 0 · Reposts 5 · Likes 0
@reyna_abhyankar
Reyna Abhyankar
2 years
RT @JiaZhihao: Generative LLMs are slow and expensive to serve. Their much smaller, distilled versions are faster and cheaper but achieve s…
Replies 0 · Reposts 73 · Likes 0