Explore tweets tagged as #CodeLlama
@reach_vb
Vaibhav (VB) Srivastav
1 year
New Drop: CodeGeeX4 9B by ChatGLM 🔥
> Beats CodeLlama 70B (7x its size); competitive with DeepSeek Coder 33B.
> Multilingual code generation model, continually trained on ChatGLM 9B.
> Up to 128K context.
> Supports code completion and generation, code interpreter, web search, function …
2 · 22 · 136
@reach_vb
Vaibhav (VB) Srivastav
1 year
WAIT, it's not over; Meta just dropped the LLM Compiler! 🧑‍💻
> Beats GPT-4 on code size improvement and disassembly.
> Achieves 77% of the optimising potential of an autotuning search and a 45% disassembly round trip.
> Built on top of CodeLlama with improved code optimisation and …
4 · 51 · 334
@tekwendell
wendell
1 year
Whoa, ROCm has come a long way quickly. This little 17GB Sapphire Pulse 7600 XT zips right along driving CodeLlama 13B!
10 · 22 · 133
@mammothcompany
The Mammoth Corporation - #1 Crypto & AI Resource
7 months
Want to harness the power of Llama 2 and Ollama for AI-driven creativity? 🤖
1️⃣ Run and chat with Llama 2 in the command line.
2️⃣ Write code effortlessly with Code Llama.
3️⃣ Run an uncensored LLM model with Ollama.
#AI #LLMs #Llama2 #Ollama #CodeLlama #Python #MachineLearning
0 · 0 · 2
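The three steps above boil down to pulling a model and talking to Ollama's local HTTP API. A minimal stdlib-only Python sketch, assuming an Ollama server running at its default port 11434 (endpoint path and payload fields follow Ollama's documented `/api/generate` format):

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(model: str, prompt: str) -> bytes:
    """Encode a non-streaming /api/generate request body."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def generate(model: str, prompt: str) -> str:
    """POST the prompt to a locally running Ollama server; return the reply text."""
    req = request.Request(OLLAMA_URL, data=build_payload(model, prompt),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:  # requires `ollama serve` to be running
        return json.loads(resp.read())["response"]

# Usage (with the server up and models pulled via `ollama pull llama2` etc.):
# generate("llama2", "Tell me a story.")
# generate("codellama", "Write a Python function that reverses a string.")
```

The same request shape works for any model tag Ollama knows about, which is why swapping Llama 2 for Code Llama is a one-word change.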
@HowardBGil
Howard Gil 🖇️ r/accoon
1 year
Living in this absolute unit for another week and can’t get the Starlink working. Thankfully we live in a post-e/acc world and local LLMs exist 📟. Hit me with some offline coding tools 🙏🏽. I have @ollama running CodeLlama etc. on @msty_app 🦙
1 · 2 · 20
@PrathameshD_8
Prathamesh Devadiga
5 months
Built a multi-agent system with DSPy using the CoT (Chain of Thought) and ReAct mechanisms! It does code reviews, code optimization, and writes test cases! All of this was tested out locally using codellama:7b and llama3.2:3b!
4 · 13 · 140
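For flavor, the chain-of-thought half of such a pipeline amounts to wrapping each agent's task in a "think step by step" prompt. A plain-Python sketch — a hypothetical illustration of multi-agent CoT prompting, not the DSPy API, and the role names are made up:

```python
def cot_prompt(role: str, code: str) -> str:
    """Wrap code in a chain-of-thought instruction for one agent role."""
    return (f"You are a {role} agent. Think step by step, then give your answer.\n"
            f"Code under review:\n{code}\n\nReasoning:")

# Hypothetical agent roles mirroring the tweet: review, optimize, write tests.
AGENT_ROLES = ["code review", "code optimization", "test writing"]

def pipeline_prompts(code: str) -> list[str]:
    """One CoT prompt per agent; each would be sent to a local model
    such as codellama:7b or llama3.2:3b."""
    return [cot_prompt(role, code) for role in AGENT_ROLES]
```

DSPy expresses the same idea declaratively (its `ChainOfThought` and `ReAct` modules), so the handwritten prompt strings above disappear behind module signatures.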
@Saboo_Shubham_
Shubham Saboo
1 year
AI software engineer using OpenDevin and Codellama running locally on your computer (100% free and without internet):
12 · 98 · 704
@OpenxAINetwork
OpenxAI
4 months
New AI Model Added! StarCoder2 is live on OpenxAI.
StarCoder2 features:
✅ 3B, 7B, & 15B parameter models for any coding need.
✅ 16K+ context window for large, complex codebases.
✅ Outperforms CodeLlama-34B with more power at half the size.
✅ Beats DeepSeekCoder-33B in …
1 · 0 · 10
@BlessingMwiti
Blessing Mwiti 
3 days
I am testing local Ollama with CodeLlama 7B Instruct on an EliteBook 850 G6 with 16GB RAM running Hackintosh Sequoia, and trying it out with the Continue extension installed in VS Code, which provides an agent mode just like Cursor. 🧵
1 · 0 · 4
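Wiring the Continue extension to a local Ollama model is a small config fragment. A sketch assuming Continue's JSON config format and its Ollama provider — field names and the model tag may differ across Continue versions:

```json
{
  "models": [
    {
      "title": "CodeLlama 7B Instruct (local)",
      "provider": "ollama",
      "model": "codellama:7b-instruct"
    }
  ]
}
```

With that in place, Continue routes its chat and edit requests to the local Ollama server instead of a hosted API, which is what makes the Cursor-like agent mode work offline.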
@androiddevnotes
Android Dev Notes
1 year
Kotlin ML Pack: Technical Report. "In this technical report, we present three novel datasets of Kotlin code: KStack, KStack-clean, and KExercises. We also describe the results of fine-tuning CodeLlama and DeepSeek models on this data."
1 · 8 · 45
@arnitly
Arnav Jaitly
6 months
I spent my weekend learning about agentic workflows and playing around with the smolagents library released by @huggingface. With this, I built a pretty cool Travel Assistant Agent (I call it Tracy). I use CodeLlama-34b-Instruct-hf for my LLM agents in the backend to do a …
3 · 0 · 5
@m_matsubara
松原正和 (m.matsubara)
1 year
The AI assistant feature of A5:SQL Mk-2 now supports Azure OpenAI and Ollama. That said, with Ollama models at the level of phi3 or codellama:7b, answer accuracy seems a bit rough: they produce SQL that simply errors out. Still, local LLMs are the stuff of dreams, so…
1 · 2 · 53
@deejayyhu
Chief Twat
1 year
Let me tell you what it takes to have a copilot on your own machine that works offline: download Ollama, next, next, finish.
ollama run codellama:13b-code
1 · 2 · 7
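The `-code` variants of CodeLlama also support fill-in-the-middle completion, not just left-to-right generation. A small Python sketch of building such an infill prompt, assuming the `<PRE>`/`<SUF>`/`<MID>` sentinel format from the Code Llama model card — exact token spelling and spacing here are an assumption:

```python
def infill_prompt(prefix: str, suffix: str) -> str:
    """Build a Code Llama fill-in-the-middle prompt: the model is asked to
    generate the code that belongs between `prefix` and `suffix`."""
    return f"<PRE> {prefix} <SUF>{suffix} <MID>"

# e.g. ask the model to fill in a function body, given the code before and after:
prompt = infill_prompt("def fib(n):\n", "\nprint(fib(10))")
```

This is the mechanism editor copilots rely on: the cursor position splits the file into prefix and suffix, and the model completes the middle.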
@e2enetworks
E2E Networks
2 months
CodeLlama runs like a charm on TIR. From idea to AI pair programmer in just:
• 7 minutes to deploy
• 7 days to train your coding agent
• 7-day money-back guarantee if it’s not the right fit
Write better code. Build faster. Try CodeLlama on TIR →
0 · 1 · 0
@_philschmid
Philipp Schmid
1 year
Will LLMs soon compile and optimize our code? 🤔 Last week, @AIatMeta released a new LLM called LLM-Compiler, trained on over a billion tokens of LLVM-IR, x86_64, ARM, and CUDA code. TL;DR:
🦙 Based on CodeLlama 7B and 13B.
🔍 Trained on an additional 546 billion tokens of …
2 · 26 · 93
@TheBidouilleur
Quentin 🦋 (Hold)
1 year
I tested codellama (70 billion parameters, ~40GB) on a Xeon E5-1630 with 64GB of RAM (no GPU) at OVH. And well, it really struggles! About 4 minutes to answer my question; I'm going to switch to a smaller model 😄
3 · 2 · 10
@cwrichardson
Chris Richardson
1 year
MaxLOL. What up sister! Word to your mom. … In the middle of a conversation where I was trying to have local AI help with TypeScript, I switched from codegemma:7b to codellama:13b. #aisafety #pdoom #ai
0 · 0 · 0
@SQLGene
Eugene Meidinger
7 months
Aaaaaah! CodeLlama is on to us!
0 · 0 · 1
@kalyan_kpl
Kalyan KS
8 months
BudgetMLAgent: Cost-Effective Multi-Agent LLM System. This paper introduces BudgetMLAgent, a cost-effective LLM multi-agent system for automating machine learning tasks. Analysis shows that (1) no-cost and low-cost models such as Gemini-Pro, Mixtral, and CodeLlama perform far …
0 · 0 · 8