Explore tweets tagged as #logprobs
@_jennhu
Jennifer Hu
2 months
Excited to share a new preprint w/ @Michael_Lepori & @meanwhileina! A dominant approach in AI/cogsci uses *outputs* from AI models (e.g. logprobs) to predict human behavior. But how does model *processing* (across a forward pass) relate to human real-time processing? 👇 (1/12)
@stevendcoffey
Steve ☕️
1 month
Launching today in the OpenAI API:
- Deep Research in the Responses API!
- Web search with o3/o4-mini!
- Webhooks!
- Logprobs in the Responses API!
Please try to enjoy all of these ships equally 😌
@WankyuChoi
C.W.K.
3 hours
# Why logarithms come up so much with LLMs. Jigsaw - Agile Community Rules Classification: using AI models to help moderators uphold community-specific norms. If you print out the actual token log probabilities (logprobs), they come out like the screenshot: the model picks just one of the two, Yes/No
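The Yes/No setup described above turns into a usable score by renormalising the two candidates' logprobs with a softmax over just that pair. A minimal, self-contained sketch — the function name and the logprob values are invented for illustration:

```python
import math

def yes_no_score(token_logprobs):
    """Normalize raw logprobs for the 'Yes' and 'No' candidate tokens
    into a probability that the answer is 'Yes' (softmax over the pair)."""
    ly, ln = token_logprobs["Yes"], token_logprobs["No"]
    m = max(ly, ln)  # subtract the max for numerical stability
    ey, en = math.exp(ly - m), math.exp(ln - m)
    return ey / (ey + en)

# Hypothetical logprobs such as a moderation classifier might emit:
score = yes_no_score({"Yes": -0.05, "No": -3.2})
print(round(score, 3))  # → 0.959
```

Renormalising over the two labels (rather than exponentiating "Yes" alone) matters because the model spreads some probability mass over unrelated tokens.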
@brianchristian
Brian Christian
1 month
FAQ: Don’t LLM logprobs give similar information about model “values”? Surprisingly, no! Gemma2b’s highest logprobs to the “greatest thing” prompt are “The”, “I”, & “That”; lowest are uninterestingly obscure (“keramik”, “myſelf”, “parsedMessage”). RMs are different.
@EHuizenga
Erwin Huizenga
2 months
We have now re-enabled logprobs in the Gemini API on Vertex AI. This means you can see token probabilities to:
✅ Get quantifiable classification scores
✅ Build smarter autocomplete
✅ Evaluate RAG grounding
Dive into the model's decision-making. Intro to logprobs:
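One of the uses listed above, smarter autocomplete, typically means ranking candidate completions by their token logprobs, length-normalised so longer completions aren't unfairly penalised. A toy, provider-agnostic sketch — the candidate texts and per-token logprobs are made up:

```python
def rank_completions(candidates):
    """Rank autocomplete candidates by average token logprob
    (length-normalised average, higher is better)."""
    def avg_logprob(c):
        return sum(c["token_logprobs"]) / len(c["token_logprobs"])
    return sorted(candidates, key=avg_logprob, reverse=True)

# Hypothetical per-token logprobs for three candidate completions:
cands = [
    {"text": "the capital of France", "token_logprobs": [-0.2, -0.1, -0.3, -0.2]},
    {"text": "a large city",          "token_logprobs": [-1.1, -0.9, -1.4]},
    {"text": "the capital",           "token_logprobs": [-0.2, -0.1]},
]
best = rank_completions(cands)[0]["text"]
print(best)  # → "the capital" (highest average logprob)
```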
@nutlope
Hassan
3 months
Trying something new: a 2-minute tutorial on an underrated feature in LLMs! ✨ Learn how to get an LLM to tell you how confident it is in its answer (e.g. 92% confident) using logprobs!
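The "92% confident" number comes from exponentiating the summed logprobs of the answer's tokens, which gives their joint probability. A self-contained sketch with invented logprob values (not the tutorial's actual code):

```python
import math

def answer_confidence(token_logprobs):
    """Joint probability of the sampled answer tokens: exp of the summed
    logprobs, reported as a percentage."""
    return 100 * math.exp(sum(token_logprobs))

# Hypothetical logprobs for the two tokens of an answer like " Paris":
print(f"{answer_confidence([-0.05, -0.03]):.0f}% confident")  # → 92% confident
```

Summing logprobs and exponentiating once is numerically safer than multiplying the raw probabilities token by token.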
@victor_explore
Victor
3 months
How confident is your LLM about its structured output? This video shows how to measure it:
- Understanding logprobs (logarithmic probabilities)
- Accessing token probabilities via the OpenAI API in Python
- Defining structured output schemas with Pydantic
- Getting confidence
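For structured output, the same idea extends to per-field confidence: group the tokens that generated each field's value and exponentiate the sum of their logprobs. A toy sketch — the field names and logprob values are made up, and schema validation (e.g. with Pydantic) is omitted:

```python
import math

def field_confidence(field_tokens):
    """Per-field confidence for a structured output: multiply the
    probabilities of the tokens that produced each field's value,
    i.e. exponentiate the sum of their logprobs."""
    return {field: math.exp(sum(lps)) for field, lps in field_tokens.items()}

# Hypothetical token logprobs grouped by the JSON field they generated:
conf = field_confidence({
    "sentiment": [-0.02],       # "positive" emitted as a single confident token
    "product":   [-0.4, -0.9],  # two shakier tokens
})
print({k: round(v, 2) for k, v in conf.items()})  # → {'sentiment': 0.98, 'product': 0.27}
```

Per-field scores are often more actionable than one whole-response number: you can re-ask only for the fields the model was unsure about.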
@aidanmantine
aidan
4 months
at the @AnthropicAI hackathon this weekend, built a model which can hide (and decode) hidden messages in its logprobs. output for "tell me a story"
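One way such a scheme can work, sketched as a toy: both sides run the same model, and each hidden bit selects between the top-2 candidate tokens at a step. This illustrates the general idea only, not the hackathon project's actual method:

```python
def encode_bits(step_candidates, bits):
    """Toy logprob steganography: at each generation step, emit the most
    likely token for bit 0 and the second most likely for bit 1."""
    return [cands[b] for cands, b in zip(step_candidates, bits)]

def decode_bits(step_candidates, tokens):
    """Recover the bits by checking each token's rank in that step's
    top-2 candidate list (requires the same model/logprobs on both sides)."""
    return [cands.index(t) for cands, t in zip(step_candidates, tokens)]

# Hypothetical top-2 candidates (ordered by logprob) at each of four steps:
steps = [["Once", "In"], [" upon", " a"], [" a", " the"], [" time", " day"]]
msg = [0, 1, 1, 0]
tokens = encode_bits(steps, msg)
assert decode_bits(steps, tokens) == msg
print("".join(tokens))  # → Once a the time
```

The catch, visible in the output, is that forcing second-choice tokens degrades fluency, which is exactly what makes the channel detectable.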
@lopuhin
Konstantin Lopukhin
4 months
Just released eli5 v0.14! eli5 helps debug and inspect ML classifiers, explaining their predictions clearly. Highlights:
- Support for scikit-learn 1.6+
- Python 3.9–3.13 compatibility
New experimental feature coming soon: explaining LLM predictions via token logprobs 👀
@MrRio
James Hall
7 months
Another weekend project: I've built another visualisation. Commercial LLMs often don't expose their logits/logprobs, i.e. the chance that the next token/word appears. What I've done:
- Pulled Llama 3.3 locally from HuggingFace, converted it to GGUF for llama.cpp.
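A minimal text-only stand-in for such a visualisation: convert each candidate's logprob back into a probability and draw a bar per token. The prompt and logprob values below are invented, and this is independent of llama.cpp:

```python
import math

def render_logprobs(top_logprobs, width=30):
    """Render top next-token candidates as text bars, one line per token,
    bar length scaled by the token's probability."""
    lines = []
    for token, lp in sorted(top_logprobs.items(), key=lambda kv: -kv[1]):
        p = math.exp(lp)  # logprob -> probability
        lines.append(f"{token!r:>10} {'#' * max(1, round(p * width))} {p:.0%}")
    return "\n".join(lines)

# Hypothetical top candidates after a prompt like "The capital of France is":
print(render_logprobs({" Paris": -0.11, " the": -2.5, " located": -3.2}))
```

Sorting by logprob before rendering makes the distribution's shape (one dominant token vs. a flat tail) visible at a glance.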
@aaron__vi
Aaron Villalpando
4 months
should we visualize the logprobs in the actual llm output tokens? hmmmm
@reverseame
reverseame
30 days
Influencing LLM Output using logprobs and Token Distribution #LLMOutput #logprobs #TokenDistribution #AIInfluence #SpamFilter
@selini0
🦋/acc @ 🌲🎗️
2 months
Qwen: "You are telling me I can learn with my own logprobs?". "No, I am telling when you are ready, you won't have to"
@prerationalist
prerat
1 year
>>> get_logprobs(prefix="President")
{
    " Putin": 0.61,
    " Trump": 0.32,
    " Harris": 0.03,
    " Zelenskyy": 0.03,
    " SolidGoldMagikarp": 0.01,
}
@Dinosn
Nicolas Krassas
2 months
Influencing LLM Output using logprobs and Token Distribution
@latentspacepod
Latent.Space
11 months
🆕 Why you should write your own LLM benchmarks, w/ Nicholas Carlini of @GoogleDeepMind. Covering his greatest hits:
- How I Use AI
- My benchmark for large language models
- Extracting Training Data from Large Language Models (RIP @openai logprobs)
Full episode below!
@krrish_dh
Krrish
4 months
5️⃣ new things @LiteLLM:
⚡️ Gemini - Return logprobs in response
🧹 UI - remove default key creation on user signup
💪 UI - allow team members to view all Team models
⏳ Databricks - claude thinking param support
⏳ Databricks - claude response_format param support
@ZainHasan6
Zain
4 months
TIL there is a library, openlogprobs, that allows you to extract logprobs even from closed APIs that don't tell you exact probabilities. They use the logit_bias param in model APIs that lets you nudge word choices up or down. They figured out how to use this nudging feature to
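The nudging trick can be sketched as a toy simulation: binary-search the smallest logit_bias that flips the API's argmax to a target token; that bias equals the logit gap between the current top token and the target. This illustrates the core idea only, not openlogprobs' actual implementation; all names and numbers below are invented:

```python
def make_api(logits):
    """Simulate a closed API: accepts a logit_bias dict, returns only the
    argmax token, never the underlying probabilities."""
    def call(logit_bias=None):
        biased = {t: z + (logit_bias or {}).get(t, 0.0) for t, z in logits.items()}
        return max(biased, key=biased.get)
    return call

def extract_logit_gap(call, target, hi=40.0, eps=1e-4):
    """Binary-search the smallest bias that makes `target` the argmax.
    That bias is the logit gap between the top token and `target`."""
    lo = 0.0
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if call({target: mid}) == target:
            hi = mid  # bias was enough; try smaller
        else:
            lo = mid  # not enough; try larger
    return hi

# Hidden logits the "API" never reveals directly:
api = make_api({"cat": 5.0, "dog": 3.2, "fish": 1.0})
print(round(extract_logit_gap(api, "dog"), 2))  # → 1.8, i.e. 5.0 - 3.2
```

Repeating this for every token of interest recovers the relative logits, from which (log)probabilities follow via a softmax.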
@chrypnotoad
Toad
8 months
Wonder what the logprobs of that was. "as the具体内容 of my thoughts"
@sam_paech
Sam Paech
3 months
I got antislop working with any openai-compatible completions endpoint that supports top_logprobs. Here it's generating an unslopped dataset via vllm. It's banning a list of strings that I've given it, plus some regexes for "it's not x, it's y" type phrases.
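The banning step can be sketched as: at each position, walk the top_logprobs candidates in descending order and skip any whose addition to the context would complete a banned phrase. A toy illustration of the idea, not the antislop implementation (a real sampler also backtracks and handles regexes):

```python
def pick_token(context, top_logprobs, banned):
    """Choose the highest-logprob candidate whose addition to the
    context does not complete a banned phrase."""
    for token, _ in sorted(top_logprobs.items(), key=lambda kv: -kv[1]):
        candidate = context + token
        if not any(phrase in candidate for phrase in banned):
            return token
    return None  # every candidate banned; a real sampler would backtrack

banned = ["a tapestry of"]
ctx = "The city was a tapestry"
# Hypothetical top candidates at this step:
tok = pick_token(ctx, {" of": -0.1, ",": -1.2, ".": -2.0}, banned)
print(repr(tok))  # → ',' (" of" would complete the banned phrase)
```

Because the check runs on context + candidate, multi-token slop phrases get cut off at the last token that would complete them.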