Explore tweets tagged as #LLMCompiler
@tom_doerr
Tom Dörr
10 months
"[ICML 2024] LLMCompiler: An LLM Compiler for Parallel Function Calling"
@sehoonkim418
sehoonkim
2 years
Besides latency and accuracy, LLMCompiler also enables cost savings of up to 7x, since it avoids running an unnecessary reasoning step for every function call. 🧵5/n
@sehoonkim418
sehoonkim
2 years
LLMCompiler can be used with open-source models (e.g. LLaMA-2) as well as OpenAI’s GPT models. Across various tasks with different parallel function-calling patterns, LLMCompiler consistently shows better accuracy and significant speed-ups compared to ReAct, as well as the recent
@sehoonkim418
sehoonkim
2 years
LLMCompiler has three components: (i) an LLM Planner that identifies an execution flow from user inputs, defining the different function calls and their dependencies; (ii) a Task Fetching Unit that dispatches the function calls that can be executed in parallel after
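The planner/dispatcher split described in that thread can be sketched in a few lines. This is a toy illustration, not the actual LLMCompiler API: the `$1`/`$2`/`$3` task names, the `TASKS` dict, and `run_plan` are all hypothetical stand-ins for the Planner's dependency graph and the Task Fetching Unit.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical plan: "$1" and "$2" have no dependencies, "$3" needs both.
TASKS = {
    "$1": {"fn": lambda deps: 2 + 3, "deps": []},
    "$2": {"fn": lambda deps: 4 * 5, "deps": []},
    "$3": {"fn": lambda deps: deps["$1"] + deps["$2"], "deps": ["$1", "$2"]},
}

def run_plan(tasks):
    """Toy Task Fetching Unit: repeatedly dispatch every task whose
    dependencies are resolved, running independent tasks in parallel."""
    results = {}
    with ThreadPoolExecutor() as pool:
        pending = dict(tasks)
        while pending:
            ready = [t for t in pending
                     if all(d in results for d in pending[t]["deps"])]
            if not ready:
                raise ValueError("cyclic dependencies in the plan")
            futures = {t: pool.submit(pending[t]["fn"],
                                      {d: results[d] for d in pending[t]["deps"]})
                       for t in ready}
            for t, fut in futures.items():
                results[t] = fut.result()
                del pending[t]
    return results

print(run_plan(TASKS))  # → {'$1': 5, '$2': 20, '$3': 25}
```

Since `$1` and `$2` have no dependencies, they are submitted to the pool in the same batch and run concurrently; `$3` is only dispatched once both of their results are in.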
@llama_index
LlamaIndex 🦙
2 years
Build Custom Agent Loops 💫🛠️. You may want to build agentic reasoning beyond what’s available in prepackaged agent frameworks (ReAct, ToT, LLMCompiler). This lets you tackle complex questions over your data in the way best suited to your use case. We’re excited to launch a new
@llama_index
LlamaIndex 🦙
2 years
🚨 SOTA Parallel Function Calling Agents in @llama_index 🚨. The LLMCompiler project by Kim et al. (@berkeley_ai) is a state-of-the-art agent framework that enables 1) DAG-based planning, and 2) parallel function execution, making it much faster than sequential approaches like
@LangChainAI
LangChain
1 year
🗒️ LLMCompiler: blazing-fast agent execution 👾
🔀 Plan tasks as a DAG
🎏 Stream parallelized task execution (while the planner is still thinking!)
🗣️ Respond or replan
Build it for yourself in LangGraph!
Python:
YouTube:
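The plan → execute → respond-or-replan cycle that tweet describes can be sketched as a small loop. Everything below is a stub standing in for LLM calls — `plan_fn`, `execute_fn`, and `join_fn` are hypothetical names for illustration, not LangGraph's API:

```python
def agent_loop(plan_fn, execute_fn, join_fn, max_rounds=3):
    """Sketch of the loop: plan tasks, execute the plan, then either
    respond with an answer or replan with the new results in context."""
    context = {}
    for _ in range(max_rounds):
        plan = plan_fn(context)           # plan tasks (a DAG in the real thing)
        context.update(execute_fn(plan))  # execute (in parallel in the real thing)
        done, answer = join_fn(context)   # respond, or go around again?
        if done:
            return answer
    return None

# Hypothetical stubs: round 1 fetches a fact, round 2 can then answer.
plan_fn = lambda ctx: ["answer"] if "fact" in ctx else ["lookup"]
execute_fn = lambda plan: {"fact": 21} if "lookup" in plan else {"answer": 2 * 21}
join_fn = lambda ctx: ("answer" in ctx, ctx.get("answer"))

print(agent_loop(plan_fn, execute_fn, join_fn))  # → 42
```

The first round's plan cannot answer yet, so the joiner sends the loop back to replan; with the looked-up fact in context, the second plan produces the answer.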
@jerryjliu0
Jerry Liu
2 years
A holy grail for agents is combining parallel function execution with query planning capabilities. The recent LLMCompiler paper by Kim et al. (@berkeley_ai) does exactly that, and I’m excited to introduce an integration with @llama_index 🔌. Here’s how it works 👇 1. Plan:
[Quotes @llama_index’s “SOTA Parallel Function Calling Agents” post above]
@llama_index
LlamaIndex 🦙
2 years
Our first webinar of 2024 explores how to build efficient, performant agentic software 🎉. We’re excited to host @sehoonkim418 and @amir__gholami to present LLMCompiler: an agent compiler for parallel multi-function planning/execution. Previous frameworks for agentic
@diogosantosbr
Diogo Santos
2 years
How can we make LLM agents work together efficiently on complex, large-scale tasks? 🤔 LLMCompiler is a tool that compiles an effective plan for executing multiple tasks in parallel. It helps create scalable LLM applications, identifies tasks for parallel execution, and manages
@amir__gholami
Amir Gholami
1 year
Excited to announce that SqueezeLLM and LLMCompiler have been accepted to ICML 2024! 🎉 SqueezeLLM addresses massive outliers in LLMs through a dense-and-sparse decomposition: the massive outliers are efficiently isolated in the sparse part, and the remainder is
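The “dense-and-sparse decomposition” mentioned there can be illustrated in miniature. This is a toy sketch under stated assumptions (a 1-D weight list, a fixed outlier count), not SqueezeLLM's actual algorithm — the real method additionally quantizes the dense part, which is omitted here:

```python
def dense_and_sparse(weights, num_outliers=1):
    """Toy split: move the largest-magnitude weights into a sparse part
    kept at full precision, leaving a dense remainder with no extreme
    outliers; dense[i] + sparse[i] reconstructs weights[i] exactly."""
    cutoff = sorted((abs(w) for w in weights), reverse=True)[num_outliers - 1]
    sparse = [w if abs(w) >= cutoff else 0.0 for w in weights]
    dense = [w - s for w, s in zip(weights, sparse)]
    return dense, sparse

weights = [0.12, -0.30, 8.5, 0.07]        # 8.5 plays the massive outlier
dense, sparse = dense_and_sparse(weights)
print(max(abs(d) for d in dense))  # → 0.3
```

With the outlier isolated in the sparse part, the dense remainder spans a much narrower range, which is what makes it friendlier to low-bit quantization.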
@ankurkumarz
Ankur Kumar
2 years
LLMCompiler based approach for parallel processing 👇
@llama_index
LlamaIndex 🦙
2 years
Here are 7 challenges that AI engineers must solve in order to build large-scale intelligent agents (“LLM OSes”):
1️⃣ Improving Accuracy: Make sure agents can solve hard tasks well.
2️⃣ Moving beyond serial execution: identify parallelizable tasks and run them accordingly.
3️⃣
@andysingal
Ankush Singal
2 years
LLMCompiler with @llama_index: Revolutionizing Multi-Function Calling with Parallel Execution. Link:
@amir__gholami
Amir Gholami
1 year
Will LLMs disrupt modern e-commerce and web navigation? 🤖🛍️ We recently tested LLMCompiler on the WebShop dataset, and it outperformed ReAct with 20% higher accuracy. So we decided to test this on a real website: we asked LLMCompiler to buy an On running shoe and gave it browser
@rohanpaul_ai
Rohan Paul
10 months
LLMCompiler is a framework that enables efficient and effective orchestration of parallel function calling with LLMs, including both open-source and closed-source models, by automatically identifying which tasks can be performed in parallel and which ones are interdependent.
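The “parallel vs. interdependent” distinction in that summary boils down to grouping a dependency graph into topological levels. A minimal sketch — the task names and the `parallel_batches` helper are hypothetical, for illustration only:

```python
def parallel_batches(deps):
    """Group tasks into batches: every task in a batch has all of its
    dependencies satisfied by earlier batches, so the tasks within one
    batch are mutually independent and can run in parallel."""
    remaining = dict(deps)
    done, batches = set(), []
    while remaining:
        batch = sorted(t for t, d in remaining.items() if set(d) <= done)
        if not batch:
            raise ValueError("cyclic dependencies")
        batches.append(batch)
        done.update(batch)
        for t in batch:
            del remaining[t]
    return batches

# "search_a" and "search_b" are independent; "summarize" depends on both.
print(parallel_batches({
    "search_a": [],
    "search_b": [],
    "summarize": ["search_a", "search_b"],
}))  # → [['search_a', 'search_b'], ['summarize']]
```

Each inner list can be dispatched concurrently, while the outer list order preserves the interdependencies.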
@aitoolhouse
AI Toolhouse - AI Tools Repository
1 year
2. Meta LLM Compiler. Meta introduces the Meta LLM Compiler, built on Meta Code Llama, featuring advanced code optimization and compiler capabilities. Available in 7B and 13B models, it aids in code size optimization and disassembly tasks. #Meta #LLMCompiler #AI
@opstreedevops
OpsTree Solutions
1 year
🚀 556 Billion Tokens: The AI Revolution Begins! Meta's AI redefines programming with custom languages, making coding faster and more efficient. 📊💥 #CodeOptimization #LLMCompiler #TechInnovation #MetaLLM #CompilerEngineering #AI #MachineLearning #FutureTech #CodeRevolution
@zzwz
不鍊金丹不坐禪
4 months
For multi-step tasks like the ones our hacked-together DeepSearchAgent handles, does emitting all actions in a single output count as a crude hybrid of ReWOO & LLMCompiler? 😂
> - ReWOO
> - LLMCompiler
@sehoonkim418
sehoonkim
2 years
As for future work, it would be interesting to explore LLMCompiler in conjunction with the ongoing work that adopts an operating-systems perspective on LLMs. In particular, incorporating parallel function-calling capability could pave the way for executing complex,
@vlruso
Vlad Ruso PhD
2 years
UC Berkeley Researchers Introduce LLMCompiler: An LLM Compiler that Optimizes the Parallel Function Calling Performance of LLMs. #ai #itinai #ainews #new #trend