
Nate Sesti
@NateSesti
Followers
4K
Following
81
Media
8
Statuses
130
Coding @continuedev, Publicly Thinking @ https://t.co/p11OSyonc3, (no longer) Studying Physics @ MIT ('23). Continue is hiring! https://t.co/UcuvyI7yh9
Joined July 2018
today @continuedev released v1 of tab autocomplete. it's 100% local and open-source. for the next few months i'm going to share (live, as i learn) the tricks that improve our acceptance rate. if you follow along you might learn.
3
6
46
🦀 @rustlang 🐍 @ThePSF 🐫 @OCamlLang ☕️ @java 🐘 @official_php 🐦 @SwiftLang 🦎 @ziglang what could go wrong?
0
0
4
⚡️ excited to share that @continuedev is releasing our open-source tab-autocomplete! ⚡️

why might you want to use it?
- local (code remains on your machine)
- customizable (change model, temp, and more)
- open-source
- and free!

here's how we built it:
1
6
26
better cross-encoders are a big deal. out of 300,000 loc, @Voyage_AI_ embeddings + reranker found 300 loc that allowed for basically the answer i'd give as the author
4
5
21
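The embeddings-plus-reranker result above can be sketched as a two-stage pipeline: a cheap embedding-similarity pass shortlists candidates from the whole codebase, then a cross-encoder reranks the shortlist. The scorers below are toy stand-ins for real models (e.g. Voyage AI's embeddings and reranker), not the actual Continue implementation.

```python
def embed(text):
    # Hypothetical stand-in for an embedding model: bag-of-words counts.
    counts = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts

def similarity(a, b):
    # Dot product over shared words; a real system would use cosine similarity.
    return sum(a[w] * b[w] for w in a if w in b)

def cross_encoder_score(query, snippet):
    # Hypothetical stand-in for a cross-encoder, which sees query and snippet
    # together; here, the fraction of query words appearing in the snippet.
    q_words = set(query.lower().split())
    s_words = set(snippet.lower().split())
    return len(q_words & s_words) / max(len(q_words), 1)

def retrieve(query, snippets, shortlist_size=10, final_size=3):
    q_vec = embed(query)
    # Stage 1: shortlist by embedding similarity (fast, scales to 300k loc).
    shortlist = sorted(snippets, key=lambda s: similarity(q_vec, embed(s)),
                       reverse=True)[:shortlist_size]
    # Stage 2: rerank the shortlist with the cross-encoder (slower, sharper).
    return sorted(shortlist, key=lambda s: cross_encoder_score(query, s),
                  reverse=True)[:final_size]
```

The point of the second stage is precision: the embedding pass can afford to be sloppy over 300,000 lines, because the reranker only has to be right about the shortlist.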
open-source is never far behind.
@huybery and I are discussing reproducing Devin. Come join us and see if we can make something great together!
1
0
10
v2.
🛠️ Announcing Tools! Continue can now
- navigate your repo
- create files
- search the web
- run terminal commands (after you approve)
- use custom tools from @AnthropicAI's MCP

Here Continue builds a much more professional personal site for @NateSesti in just a few minutes
0
1
10
The problem "given a question and a directory, find the 8,192 most relevant tokens" turns out to have some depth.
Today we're releasing codebase retrieval! If you want to edit a large codebase but don't know where to start, just use ⌘+⏎ and Continue will automatically pull in the relevant snippets of code.
1
0
9
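One way to frame "find the 8,192 most relevant tokens" is as a packing problem: given scored snippets, greedily fill a fixed token budget from the top down. The token counting below is a whitespace-split approximation and the greedy strategy is an illustration, not Continue's actual selection logic.

```python
def pack_context(snippets, token_budget=8192):
    """snippets: list of (relevance_score, text) pairs.
    Returns the chosen texts and the number of tokens used."""
    chosen = []
    used = 0
    # Take snippets best-first, skipping any that would bust the budget.
    for score, text in sorted(snippets, key=lambda p: p[0], reverse=True):
        cost = len(text.split())  # crude token estimate; real systems use the model's tokenizer
        if used + cost <= token_budget:
            chosen.append(text)
            used += cost
    return chosen, used
```

The depth comes from everything this sketch hides: how the scores are produced, how snippets are chunked so they remain coherent, and what to do when a high-scoring snippet only barely doesn't fit.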
seeing really impressive autocomplete results from @deepseek_ai's 1.3b model. it's become clear that:
a) at least 1/2 the work is in constructing the right prompt, making the model's job easier
b) once you do this, small (local!) models will shine
0
0
5
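The "constructing the right prompt" half can be sketched as fill-in-the-middle (FIM) prompting: give the model the code before and after the cursor, plus extra context as comments. The `<PRE>`/`<SUF>`/`<MID>` markers below follow CodeLlama's infill format and are illustrative only; DeepSeek's models use their own special tokens, and this is not Continue's actual prompt builder.

```python
def build_fim_prompt(prefix, suffix, context_snippets=(), comment="#"):
    """Assemble a fill-in-the-middle prompt.

    prefix/suffix: code before/after the cursor.
    context_snippets: e.g. definitions pulled from other files, prepended
    as comments so the model sees them without confusing them for the
    file being edited.
    """
    context = "".join(f"{comment} {line}\n"
                      for snip in context_snippets
                      for line in snip.splitlines())
    return f"<PRE> {context}{prefix} <SUF>{suffix} <MID>"
```

A small model given the right definitions in-context has far less to infer, which is exactly why prompt construction is half the work.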
whenever you type the opening parenthesis of a function call, @continuedev's autocomplete will now use the language server protocol to add the function definition to the prompt.
1
0
4
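The trick above can be sketched as a keystroke hook: on "(", resolve the symbol just before the cursor and pull in its definition. The lookup table below is a hypothetical stand-in for a real LSP client, which would issue a `textDocument/definition` request for the symbol.

```python
import re

# Hypothetical symbol -> definition-source table; a real implementation
# would get this from the language server, not a dict.
DEFINITIONS = {
    "fetch_user": "def fetch_user(user_id: int) -> dict:\n    ...",
}

def context_for_keystroke(line_before_cursor, typed_char):
    """Return definition text to add to the prompt, or None."""
    if typed_char != "(":
        return None
    # The identifier immediately before the "(" is the function being called.
    match = re.search(r"(\w+)$", line_before_cursor)
    if not match:
        return None
    return DEFINITIONS.get(match.group(1))
```

The payoff is that the model sees the callee's signature at exactly the moment it must predict the arguments.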
This is why Continue helps you collect your own "development data" (dumped to a local .jsonl file). Training on fine-grained dev data isn't currently commonplace, but there have been explorations like Google Research's DIDACT. And the longer you've been
The ideal training data for an LLM is not what you wrote. It's the full sequence of your internal thoughts and all the individual edits while you wrote it. But you make do with what there is.
0
0
2
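Development-data collection of this kind can be sketched as appending one JSON object per event to a local .jsonl file. The field names below are hypothetical, not Continue's actual schema.

```python
import json

def log_dev_event(path, event_type, payload):
    # One JSON object per line: easy to append to, easy to stream later.
    record = {"type": event_type, **payload}
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def read_dev_events(path):
    # Read the log back as a list of dicts, skipping blank lines.
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]
```

The .jsonl format matters here: because each edit or accepted completion is its own line, the file captures the *sequence* of changes, not just the final text.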
they won't all be this mundane, but autocomplete optimization #1 is very necessary: debouncing (modified) and a related trick.
1
0
3
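Debouncing can be sketched as: every keystroke schedules a completion request, but the request only fires if no newer keystroke arrives within the debounce window. This assumes an async pipeline, and the delay value is illustrative, not Continue's actual setting.

```python
import asyncio

class Debouncer:
    def __init__(self, delay=0.025):
        self.delay = delay
        self._pending = None

    async def debounce(self, coro_fn):
        # The user is still typing: cancel the previously scheduled request.
        if self._pending is not None:
            self._pending.cancel()
        self._pending = asyncio.ensure_future(self._run(coro_fn))
        return self._pending

    async def _run(self, coro_fn):
        # Wait out the debounce window; if we survive it, fire the request.
        await asyncio.sleep(self.delay)
        return await coro_fn()
```

Without this, a burst of five keystrokes means five model calls, four of which are stale before they return; with it, only the last keystroke costs anything.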
if you want to know more, please follow along! there's much more where this came from, including topics here that i haven't yet discussed:
0
0
3
@erhartford @DrTBehrens @ollama @continuedev @BrianRoemmele looks like someone shared setup below, but also just dm'd! autocomplete was only in pre-release for a bit and has improved a ton since, so very possible this is what happened.
0
0
3
@Bharathi19145 @kirat_tw @continuedev Just made a quick fix; this should now be solved. Let me know if not!
0
0
3
@metcalfc @Ollama_ai @continuedev It should! I've got Continue x Ollama working on my own Windows machine with WSL. And I believe all instances of WSL would share a loopback interface.
0
1
1
@FilterPunk @continuedev @MetaAI @replicatehq @togethercompute You can edit the `server_url` of the model you are using in the config file. For example with Ollama: `default=Ollama(model="codellama", server_url="<your_hosted_endpoint>")`. Many other options as well if you're self-hosting:
0
0
3
@Nigh8w0lf @MikeBirdTech @erhartford @SourcegraphCody @ollama @continuedev I took a deeper look, solved a problem with cancelling requests. likely this is what you were experiencing, but if not feel free to dm me; would love to make sure this is squared away!
0
0
2
@gasatrya @continuedev @AnthropicAI for now, but support for openai, lmstudio, ollama, and more coming soon!
1
0
2
debouncing and reusing requests:
1
0
2
@S1lv3rd3m0n @continuedev we're always improving : ) some of the most exciting stuff recently in my opinion:
- tool use
- predicted outputs for faster edits
- brand new edit UI
- multi-file edits
- instant apply
- and most importantly a lot of polish

if you give it a try, would love feedback!
1
0
3
@erhartford @SourcegraphCody @ollama With @continuedev you can use any model from any provider for chat, Codellama-70b on Ollama included! And we're releasing tab-autocomplete tomorrow (also allowing any model, and access to configure basically every setting you could want).
4
0
2
@MikeBirdTech @erhartford @SourcegraphCody @ollama @continuedev Personally I have Ollama running in the background. If you're on Windows or want a UI, LM Studio is fantastic. And I learned about Jan just recently, but am impressed so far.
1
0
1
@menjilx @continuedev @code you might be able to fix it by adjusting the theme like here, but even better is if we can solve it in Continue. would you be able to share what theme you're using and i can try to figure out what variable is causing this? feel free to dm me if it's easier
1
0
1
using the language server protocol to add function/type definitions to context:.
1
0
1
@menjilx @continuedev @code ok very good to know. made an issue to track this: i'll share updates there instead of twitter.
1
0
1
@0xblacklight @continuedev make sure you're on pre-release and using claude (for now), and then you'll see the "tools" menu in the toolbar. it comes with these tools built in, but you can install any mcp server to add more!
0
0
1
@MikeBirdTech @erhartford @SourcegraphCody @ollama @continuedev It does! This and a lot more is editable in config.json, and you can also switch between multiple chat models with a keyboard shortcut:
1
0
1
this is another chance to use the "ast path". we check whether any parent node is of the "statement_block" type (a fancy way of saying you're in a `{ }` in .js/.ts) and whether the cursor is at the start of this block. if so, then we use multi-line.
at the core of most tab autocomplete systems is a tool called tree-sitter. tree-sitter makes it fast and easy to parse abstract syntax trees in any programming language. where we've found it extremely helpful so far is by using the "ast path".
1
0
1
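The statement_block check above can be sketched as follows, assuming the "ast path" is the list of node types from the tree-sitter root down to the node at the cursor (the real implementation walks actual tree-sitter nodes rather than a list of strings).

```python
def should_complete_multiline(ast_path, cursor_offset, block_start_offset):
    """Decide between single-line and multi-line completion.

    ast_path: node types from root to cursor, e.g. as tree-sitter reports
    them for .js/.ts ("statement_block" means the cursor is inside `{ }`).
    """
    # Multi-line only if some parent node is a statement_block AND the
    # cursor sits at the very start of that block.
    in_block = "statement_block" in ast_path
    return in_block and cursor_offset == block_start_offset
```

The intuition: at the top of an empty `{ }` the model is probably writing a whole body, so a multi-line completion is useful; mid-block, a single line is far less likely to overrun what the user wanted.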
@cheesecakegood @continuedev @AnthropicAI absolutely, there's a PR in the works for this. clearer err messages and nicer display of them should be in pre-release next week.
0
0
1