Will Kurt

@willkurt

Followers: 7K · Following: 15K · Media: 364 · Statuses: 4K

“It was a strange dream.”

Seattle, WA
Joined April 2007
@willkurt
Will Kurt
5 months
🥳 Check out Token-Explorer! 🤖 Interact with and explore LLM token generation!
Features:
- Step through token selection
- Remove tokens to explore alt paths
- Fork prompts and quickly switch between them
- Visualize all token probabilities and entropy!
- OSS (GitHub link in replies)
2
9
45
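A minimal sketch of the per-step view described in the tweet above: reading out the next-token probability distribution and its entropy at a single position. This assumes a generic Hugging Face transformers setup rather than Token-Explorer's own code; the model name and prompt are placeholders.

```python
# Sketch: next-token probabilities and entropy for a prompt.
# Model name and prompt are placeholders, not Token-Explorer internals.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "It was a strange dream"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]            # scores for the next token
probs = torch.softmax(logits, dim=-1)                  # full next-token distribution
entropy = -(probs * torch.log(probs + 1e-12)).sum()    # entropy of that distribution

top = torch.topk(probs, k=10)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx.item())!r:>12}  p={p.item():.4f}")
print(f"entropy: {entropy.item():.3f} nats")
```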
@willkurt
Will Kurt
4 months
RT @BEBischof: Hell yeah, blog came out about my failure funnel! Come to @DataCouncilAI to see me explain it in depth!
[Link card: hex.tech, "Using GPT-4.1 as a case study of our framework for impactful LLM evaluation"]
0
2
0
@willkurt
Will Kurt
4 months
There are many things that aren't ideal about our current timeline… but we do have juice box sake!
[Image attached]
0
0
6
@willkurt
Will Kurt
4 months
Support for Structured Outputs in Token Explorer should be coming soon!
[Image attached]
0
1
10
@willkurt
Will Kurt
4 months
RT @venturetwins: A summary of consumer AI
[Image attached]
0
590
0
@willkurt
Will Kurt
4 months
Had an absolute blast joining @cameron_pfiffer to chat with @CShorten30 about all the cool stuff we're working on at @dottxtai!
@CShorten30
Connor Shorten
4 months
Structured Outputs: The Building Blocks for Reliable AI! 🏗️ I am SUPER EXCITED to publish our newest Weaviate Podcast featuring Will Kurt (@willkurt) and Cameron Pfiffer (@cameron_pfiffer) from @dottxtai! 🎙️🎉 Dottxt is the company behind Outlines, reshaping how we control LLM…
1
3
10
@willkurt
Will Kurt
4 months
RT @cameron_pfiffer: Does anyone use exllamav2 with Outlines? If so, what has your experience been?
0
2
0
@willkurt
Will Kurt
4 months
So excited to share what @cameron_pfiffer and I have been up to at @dottxtai! It was absolutely fantastic to work with Andrew and the incredible team at @DeepLearningAI. We hope this course makes it easier to start getting reliable outputs from your LLM using structured generation!
@AndrewYNg
Andrew Ng
4 months
New Short Course: Getting Structured LLM Output! Learn how to get structured outputs from your LLM applications in this course, built in partnership with @dottxtai and taught by @willkurt, a Founding Engineer, and @cameron_pfiffer, Developer Relations Engineer. It's…
1
5
24
@willkurt
Will Kurt
4 months
RT @cameron_pfiffer: Who are the best public speakers/presenters in AI? Looking for people to give interesting talks for AI by the Bay…
0
2
0
@willkurt
Will Kurt
5 months
RT @colin_fraser: tremendous alpha right now in sending your wife photos of y'all converted to Ren and Stimpy characters.
0
18
0
@willkurt
Will Kurt
5 months
It's always odd to me when people gush about OpenAI releasing image models that can only just now do things open models have been doing for about a year!
@jfischoff
Jonathan Fischoff
5 months
Skill issue. @OpenAI cooked here.
[Image attached]
1
1
4
@willkurt
Will Kurt
5 months
RT @RohanInference: I was interested in learning about Bayesian statistics and came across this fantastic book written by @willkurt! Highl…
0
1
0
@willkurt
Will Kurt
5 months
Choosing structure goes hand in hand with prompt engineering! A quick demo; here are the steps:
- Clone the initial prompt 5 times
- See what the unstructured output is
- Then generate through 4 different date formats and see how they compare!
We can see the model tends to…
1
0
2
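One plausible way to reproduce the date-format comparison above with @dottxtai's Outlines is sketched below: run the same prompt unconstrained once, then constrained by a different date regex each time. The model name and regex patterns are placeholders, and the calls follow the pre-1.0 `outlines.generate` interface, which may differ in newer releases.

```python
# Sketch: same prompt, several date formats, via regex-constrained generation.
# Model name and patterns are illustrative; API shape is the pre-1.0 Outlines one.
import outlines

model = outlines.models.transformers("microsoft/Phi-3-mini-4k-instruct")
prompt = "The report is due on "

date_formats = {
    "unstructured": None,
    "ISO":          r"\d{4}-\d{2}-\d{2}",
    "US":           r"\d{2}/\d{2}/\d{4}",
    "long form":    r"[A-Z][a-z]+ \d{1,2}, \d{4}",
    "short year":   r"\d{2}/\d{2}/\d{2}",
}

for name, pattern in date_formats.items():
    if pattern is None:
        generator = outlines.generate.text(model)        # unconstrained baseline
        print(f"{name}: {generator(prompt, max_tokens=12)}")
    else:
        generator = outlines.generate.regex(model, pattern)
        print(f"{name}: {generator(prompt)}")
```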
@willkurt
Will Kurt
5 months
TokenExplorer now has basic support for structured generation implemented! At any point in the prompt you can toggle on predefined structure (note the 'Struct' label turns green) and it will constrain the output using @dottxtai's outlines-core! Not quite ready for a PR but…
2
2
11
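For context, here is a toy sketch of the idea behind that constraint: before choosing each token, mask out every candidate that would violate the structure. This is not the outlines-core API; outlines-core derives the allowed token set per step from a regex or JSON-schema automaton, whereas this sketch uses a fixed digits-only allow-list purely for illustration.

```python
# Toy constrained decoding: mask every token outside an allow-list before
# choosing the next token. Real structured generation (outlines-core) computes
# the allowed set per step from a regex/JSON-schema automaton instead.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Hypothetical structure: only digit-like tokens are allowed.
allowed = [i for i in range(len(tokenizer)) if tokenizer.decode(i).strip().isdigit()]

ids = tokenizer("The answer is", return_tensors="pt").input_ids
for _ in range(4):
    with torch.no_grad():
        logits = model(ids).logits[0, -1]
    mask = torch.full_like(logits, float("-inf"))
    mask[allowed] = 0.0                                   # keep only allowed tokens
    next_id = (logits + mask).argmax()                    # greedy pick within the mask
    ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)

print(tokenizer.decode(ids[0]))
```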
@willkurt
Will Kurt
5 months
I must say we do some cool things internally!
@remilouf
Rémi 📎
5 months
Vagueposting
[Image attached]
0
0
5
@willkurt
Will Kurt
5 months
It's funny how often you hear "you can't tell how uncertain an LLM is", but here is a case where you can investigate exactly what the model was thinking and how close it was to getting the correct answer!
@alonsosilva
Alonso Silva (e/acc) 💸
5 months
TokenExplorer: 9.9 or 9.11, which one is bigger?
[Image attached]
0
1
6
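A sketch of one way to probe that uncertainty outside TokenExplorer: score the total log-probability the model assigns to each candidate answer as a continuation of the question. The helper function and model name below are illustrative, assuming the plain transformers route.

```python
# Sketch: compare the log-probability the model gives to "9.9" vs "9.11" as
# the continuation of the question. Model name is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def continuation_logprob(prompt: str, continuation: str) -> float:
    """Sum of log-probs the model assigns to `continuation` following `prompt`."""
    prompt_ids = tokenizer.encode(prompt, return_tensors="pt")
    cont_ids = tokenizer.encode(continuation, add_special_tokens=False)
    ids = torch.cat([prompt_ids, torch.tensor([cont_ids])], dim=-1)
    with torch.no_grad():
        logprobs = torch.log_softmax(model(ids).logits[0], dim=-1)
    # The logits at position i predict the token at position i + 1.
    offset = prompt_ids.shape[1]
    return sum(logprobs[offset + k - 1, tok].item() for k, tok in enumerate(cont_ids))

question = "Q: Which is bigger, 9.9 or 9.11?\nA: The bigger number is"
for answer in [" 9.9", " 9.11"]:
    print(f"{answer.strip()}: {continuation_logprob(question, answer):.3f}")
```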
@willkurt
Will Kurt
5 months
RT @alonsosilva: TokenExplorer: 9.9 or 9.11, which one is bigger?
[Image attached]
0
2
0
@willkurt
Will Kurt
5 months
I think manually sampling high-entropy prompts is my new favorite way to write! I had a short prompt about Burroughs' cut-up technique and took the model from there. Most fun I've had with an LLM in a while!
[Image attached]
0
0
7
@willkurt
Will Kurt
5 months
It's fun to try to keep the prompt in a state of high entropy!
[Image attached]
4
0
15
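A sketch of what that manual, entropy-guided sampling loop could look like outside Token-Explorer, under the same transformers assumptions as the earlier snippets: at each step, print the next-token entropy and the top candidates, let a human pick one, and repeat.

```python
# Sketch: interactive manual sampling. At each step show the next-token entropy
# and top candidates, and let the user choose which token to append.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "The cut-up method treats the page as"
for _ in range(20):                                     # number of manual steps
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        probs = torch.softmax(model(ids).logits[0, -1], dim=-1)
    entropy = -(probs * torch.log(probs + 1e-12)).sum().item()
    top = torch.topk(probs, k=5)
    print(f"\n{text!r}\nnext-token entropy: {entropy:.2f} nats")
    for i, (p, idx) in enumerate(zip(top.values, top.indices)):
        print(f"  [{i}] {tokenizer.decode(idx.item())!r}  p={p.item():.3f}")
    choice = int(input("pick a token index: "))
    text += tokenizer.decode(top.indices[choice].item())

print("\nfinal text:", text)
```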
@willkurt
Will Kurt
5 months
Here's a link to the repo, and some images of the "probability" and "entropy" visualization modes!
[Images: "probability" and "entropy" visualization modes]
1
3
8