Eric Daimler (@ead)

Followers 83K · Following 7K · Media 234 · Statuses 1K

Fast & Safe AI scaled by Category Theory; Working to have everyone on the field for our digital future; Fmr CS Prof. @CarnegieMellon; Obama Admin Alum

SF | NY
Joined July 2007
Eric Daimler (@ead) · 10 months
Thanks for a good conversation @iuea_uganda
instagram.com
Eric Daimler (@ead) · 10 months
Looking forward to #FII8 as we continue to explore the exciting possibilities of SAFE AI. #AI #FutureOfWork #DataDriven #NeurosymbolicAI (5/5).
Eric Daimler (@ead) · 10 months
The future of AI is about more than just chatbots. It's about building systems that understand the meaning of data, not just the patterns. It's about empowering humans to make better decisions, faster, and with greater confidence. 4/5.
Eric Daimler (@ead) · 10 months
LLMs are great for learning, but they can't guarantee accuracy in high-stakes situations. That's where neurosymbolic AI comes in, combining the power of LLMs with the rigor of symbolic reasoning for true AI safety. 3/5.
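The neurosymbolic pattern this tweet describes can be sketched in a few lines: a statistical model proposes a fluent answer, and a symbolic layer accepts it only if it satisfies hard constraints. Note that `llm_propose` and the profit rule below are hypothetical stand-ins for illustration, not any real API.

```python
def llm_propose(prompt: str) -> dict:
    # Hypothetical LLM output: fluent and plausible,
    # but not guaranteed to be internally consistent.
    return {"revenue": 120, "costs": 50, "profit": 80}

def symbolic_check(answer: dict) -> bool:
    # A hard constraint the answer must satisfy exactly:
    # profit must equal revenue minus costs.
    return answer["profit"] == answer["revenue"] - answer["costs"]

proposal = llm_propose("Summarize Q3 financials")
if symbolic_check(proposal):
    result = proposal
else:
    result = None  # reject: the fluent answer violated a known rule

print(result)  # -> None, because 120 - 50 != 80
```

The symbolic check contributes the guarantee the LLM alone cannot: any answer that reaches `result` provably satisfies the rule.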
Eric Daimler (@ead) · 10 months
We explored the 'cost disease' in IT and healthcare, and how AI can automate inefficiencies to boost productivity. But we also highlighted the limitations of current AI, especially Large Language Models (LLMs). 2/5.
Eric Daimler (@ead) · 10 months
At #FII7, we dove deep into the practical side of AI. Forget flashy chatbots; the real transformation lies in tackling the 'boring' but critical tasks that underpin our industries. 1/5.
Eric Daimler (@ead) · 10 months
But in other cases, they can have serious consequences, especially when LLMs are used in high-stakes applications like manufacturing, energy, or finance. (13/13).
Eric Daimler (@ead) · 10 months
The result is almost humorous. In some cases, these errors can be harmless, like a slightly off-kilter autocorrect suggestion or a room painted darker than intended. (12/13).
Eric Daimler (@ead) · 10 months
The report in the image of this thread isn’t any one company’s quarterly report. It is rather the accumulation of the most common words and numbers contained in quarterly reports. (11/13).
Eric Daimler (@ead) · 10 months
It isn’t literally the arithmetic average, but rather the most common number in a distribution of numbers. (10/13).
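The mean-versus-mode distinction in this tweet is easy to see with Python's statistics module (the figures below are purely illustrative, not from any real report):

```python
from statistics import mean, mode

# Illustrative revenue figures from five made-up quarterly reports
figures = [100, 100, 100, 250, 400]

print(mean(figures))  # 190 -- the arithmetic average
print(mode(figures))  # 100 -- the most common value, which is what an
                      # "averaging machine" trained on these reports
                      # would tend to reproduce
```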
Eric Daimler (@ead) · 10 months
In the example below I am asking for a presentation of a 3Q24 review. I had supplied absolutely no numbers in the prompt. However, the LLM looked at similar quarterly reports and used not just the most common language but the most common numbers. (9/13).
Eric Daimler (@ead) · 10 months
LLMs famously fail at generating truly random numbers. Their selection of supposedly random numbers has been shown to produce the answer ‘42’ far more often than chance would. (8/13).
Eric Daimler (@ead) · 10 months
This same principle applies to the much more sophisticated LLMs as we apply the analogy to words and sentences. (7/13).
Eric Daimler (@ead) · 10 months
We can see this with numbers too. If we have an even spread of numbers from 1 to 100, the average is about 50. But if we add a bunch of 90s and 100s to the distribution, the average gets pulled higher. (6/13).
Eric Daimler (@ead) · 10 months
To help along this process, we might think of a painter mixing colors. For whatever reason, maybe one painter just consistently mixes colors that end up being a little too dark. This is their own bias, expressed in colors. (5/13).
Eric Daimler (@ead) · 10 months
This makes them powerful, but also prone to errors when the averages don’t align with the specifics we need. How can I reason about the biases inherent in the LLMs? (4/13).
Eric Daimler (@ead) · 10 months
When given a question, they produce (or autocomplete) the most likely answer based on averages of what they’ve learned from the information on which they have been trained. (3/13).
Eric Daimler (@ead) · 10 months
Despite what some people say, they aren’t ‘thinking’ in the way we do. They are calculating probabilities across trillions of data points. (2/13).
Eric Daimler (@ead) · 10 months
AIs are like a giant averaging machine. Do you ever wonder why autocorrect is so often bad? These tools, and their modern children, the popular Large Language Models (LLMs), operate by pulling patterns from massive datasets. (1/13)
[Image attached: the generated quarterly-report example referenced in (9/13) and (11/13)]
Eric Daimler (@ead) · 10 months
Let's build AI that's not just advanced, but trustworthy and effective in solving the real problems we all face. #AI #FutureOfWork #NeurosymbolicAI #WKF #AIApplications #DataIntegrity #ConexusAI 5/5.