Explore tweets tagged as #LLMhallucination
@App8starz
Mohammad
26 days
Guessing is Rewarded, Honesty is Penalized: Why LLMs Hallucinate #ai #llm #openai #LLMHallucination https://t.co/uytbOxLJQs
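The incentive this tweet describes can be made concrete with one line of arithmetic: under binary right/wrong grading, abstaining always scores zero, so any guess with nonzero accuracy beats honesty. A minimal sketch (the 30% confidence figure is illustrative, not from the tweet):

```python
# Under 0/1 accuracy grading, abstaining ("I don't know") always scores 0,
# so a model is rewarded for guessing even at low confidence.
p_correct = 0.3  # illustrative: chance a guess happens to be right

expected_guess = p_correct * 1 + (1 - p_correct) * 0  # = 0.3
expected_abstain = 0.0                                # honesty earns nothing

print(f"guess: {expected_guess:.1f}, abstain: {expected_abstain:.1f}")
# A training/evaluation loop optimizing this metric favors confident guessing.
```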
@PulseOfAvalon
PULSE of AVALON | Music
2 months
I’m sorry, I can’t parse your enormous uploaded PDF containing all your lore and characters, but I’ll pretend like I did. #ChatGPT #LLMHallucination #Gaslighting
@TechUpdaterProo
Daily AI
27 days
OpenAI: LLMs Hallucinate Due to Training Rewarding Guessing. OpenAI researchers suggest language models hallucinate because current training and evaluation favor confident guesses ... https://t.co/TGR2LonCGF #OpenAI #LLMHallucination #AIResearch
@labelyourdata
Label Your Data
9 months
🧠 Struggling with LLM errors? Dive into our article on LLM hallucinations and learn how to identify, prevent, and tackle these tricky text errors. Read more: https://t.co/kKl2l0s80D #LLMs #LLMhallucination #AIhallucination
@managetech_inc
Managetech inc.
10 months
Hedge fund Balyasny's AI team is outperforming OpenAI #RAG #BalyasnyAI #FinanceBench #LLMhallucination https://t.co/1rwGFJSIqJ
@xLMCV
xLM, LLC - Continuous Validation
1 year
A new framework to detect hallucinations, or false information, in text generated by large language models (LLMs). https://t.co/StgL3SWsxt #ai #ArtificialIntelligence #LLM #LLMHallucination
@bitlauncherai
bitlauncher
1 year
1/🔐 Taming #LLMHallucination: Researchers unveil Chain-of-Verification to enhance trust in #LargeLanguageModels. #TrustworthyAI #NLProc
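Chain-of-Verification, as presented in the paper this thread references, drafts an answer, plans fact-checking questions, answers them independently of the draft, then revises. A minimal sketch, assuming a hypothetical llm(prompt) completion function you'd wire to your own model:

```python
# Chain-of-Verification (CoVe), minimal sketch.
# `llm` is a hypothetical completion function (prompt -> text); swap in
# whatever chat/completions client you actually use.

def llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model call here")

def chain_of_verification(question: str) -> str:
    # 1. Draft a baseline answer (may contain hallucinations).
    draft = llm(f"Answer the question:\n{question}")

    # 2. Plan verification questions that fact-check the draft.
    plan = llm(
        "List short fact-checking questions, one per line, for this answer:\n"
        f"{draft}"
    )
    checks = [q for q in plan.splitlines() if q.strip()]

    # 3. Answer each check independently, without showing the draft,
    #    so errors in the draft cannot leak into the verification.
    evidence = [f"Q: {q}\nA: {llm(q)}" for q in checks]

    # 4. Revise the draft against the independently gathered answers.
    return llm(
        f"Question: {question}\nDraft answer: {draft}\n"
        f"Verification:\n{chr(10).join(evidence)}\n"
        "Rewrite the answer, correcting anything the verification contradicts."
    )
```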
@bitlauncherai
bitlauncher
1 year
2/📚 Groundbreaking paper tackles the unsolved hallucination mystery plaguing AI. #GroundbreakingResearch #AIPaper #LLMHallucination
@Neha04Sri
Neha Srivastava
4 months
37% of GPT-4 outputs contain factual errors (Stanford study). LLMs don't lie - they hallucinate. And that's more dangerous. Why 'plausible fiction' from AI is the biggest threat to truth in tech: https://t.co/dh6bCenlIK #LLMHallucination #AI #ContentWriter
@BrikeshKumar_
Brikesh Kumar
1 year
Reviewing how hallucinations occur in #LLMs and new methods to improve #ErrorDetection. Explore insights on error categorization, probing classifiers, and token selection to enhance #AISystems reliability. #FutureOfAI #AI #MachineLearning #LLMHallucination https://t.co/kCQwrhkYZx
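A probing classifier of the kind this tweet mentions is typically a small linear model trained on a model's hidden states to predict whether an output is correct. A minimal sketch with random stand-in features and labels (real use would extract hidden states from the LLM at the answer tokens):

```python
# Probing-classifier sketch for error detection: fit a linear probe on
# hidden-state vectors labeled correct (0) vs hallucinated (1).
# Features and labels here are random stand-ins, not real model data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
hidden = rng.normal(size=(1000, 768))   # stand-in hidden-state vectors
labels = rng.integers(0, 2, size=1000)  # stand-in correctness labels

X_train, X_test, y_train, y_test = train_test_split(
    hidden, labels, test_size=0.2, random_state=0
)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"probe accuracy: {probe.score(X_test, y_test):.2f}")  # ~0.5 on noise
```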
@rishikant9
Rishi
4 months
I did not realise this until I reviewed each of them one by one; it wasn't just a case of getting a name wrong, it was actual fabrication of complete records. #llmhallucination #chatgpt4o
@WhyLabs
WhyLabs
2 years
Fact or Fiction?🤔 #LLMs can "hallucinate" (invent) data & deliver it with complete conviction. In this post, we break down the challenges of #LLMHallucination detection & illustrate some of the approaches presented in a prominent research paper. https://t.co/0gYMemuKRW
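One widely cited detection approach of the kind this post surveys is sampling-based consistency checking (SelfCheckGPT-style): resample the model and measure how stable the answer is, since fabricated details tend to vary across samples. A minimal sketch, with a hypothetical sample_llm() call and a crude word-overlap agreement score:

```python
# Sampling-based consistency check, minimal sketch: a claim that stays
# stable across resampled answers is more likely grounded; one that
# varies is more likely hallucinated.
# `sample_llm` is a hypothetical function returning one sampled answer.

def sample_llm(prompt: str) -> str:
    raise NotImplementedError("plug in a temperature>0 model call here")

def word_overlap(a: str, b: str) -> float:
    """Crude agreement score: Jaccard overlap of word sets."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def consistency_score(prompt: str, answer: str, n: int = 5) -> float:
    """Mean agreement of `answer` with n fresh samples (1.0 = consistent)."""
    samples = [sample_llm(prompt) for _ in range(n)]
    return sum(word_overlap(answer, s) for s in samples) / n
```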
@TruLensML
TruLens
2 years
Looking for a free, open source way to fight #LLMhallucination? Read more in @diginomica about how TruLens just might be the answer that you're looking for. Kudos to @glawton for the deep dive!
@truera_ai
TruEra
2 years
Trying to build a trustworthy and effective LLM app? Hallucination is your biggest challenge. In @diginomica, @glawton deep dives into TruEra's hallucination detection & mitigation workflows. https://t.co/RNauZElFBJ #AIObservability #LLMObservability #LLMapps #AIQuality
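Detection workflows like the ones described above commonly include a groundedness check: does the retrieved context actually entail each sentence of the answer? A generic sketch of that idea (not TruEra's or TruLens's actual API), using an off-the-shelf NLI model from Hugging Face:

```python
# Generic groundedness check: score whether each answer sentence is
# entailed by the retrieved context via an off-the-shelf NLI model.
# Low entailment probability flags a likely hallucinated sentence.
from transformers import pipeline

nli = pipeline("text-classification", model="microsoft/deberta-large-mnli")

def groundedness(context: str, answer: str) -> list[tuple[str, float]]:
    """Entailment probability of each answer sentence given the context."""
    scores = []
    for sent in answer.split(". "):
        if not sent.strip():
            continue
        out = nli({"text": context, "text_pair": sent}, top_k=None)
        p_entail = next(r["score"] for r in out if r["label"] == "ENTAILMENT")
        scores.append((sent, p_entail))
    return scores
```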