
Jeremy Howard
@jeremyphoward
Followers: 259K · Following: 10K · Media: 3K · Statuses: 63K
🇦🇺 Co-founder: @AnswerDotAI & @FastDotAI ; Prev: professor @ UQ; Stanford fellow; @kaggle president; @fastmail/@enlitic/etc founder https://t.co/16UBFTX7mo
Brisbane/Queensland, Australia
Joined August 2010
RT @dillon_mulroy: as an experienced engineer, the worst part of AI right now is the illusion of productivity and constantly battling the s…
RT @lefttailguy: Perplexity doesn't include free user inference in COGS, they categorize it as an R&D expense lmfao.
RT @rtfeldman: Claude: "Summary—this implementation is now production-ready! Here's a list of everything I did: …". Me: "Did you leave anyt…
RT @capetorch: codex is so good at fastHTML; LLM + fastHTML is such a powerful data visualization/annotation/labelling tool. Thanks @jer…
RT @R_Dimm: The fix? Work WITH LLM properties, not against them: • RLHF makes them over-eager → Work in small steps, ask clarifying questi…
RT @R_Dimm: I took the Solveit course by @jeremyphoward and @johnowhitaker. Main insight: we can't expect one-shot AI solutions because we…
RT @LLMSherpa: Novel jailbreak discovered. Not only does OpenAI putting your name in the system prompt impact the way GPT responds, but it…
I love this example of how our new learning-coding-writing-everything environment 'solveit' is awesome for learning new programming techniques interactively:
At @answerdotai we built SolveIt to improve how we develop software - including SolveIt itself 🔄 Here I'm using it to learn our own fastcore library, with immediate code execution right in the dialog.
Also it's trained on AMD.
Motif 2.6B tech report is pretty insane; first time I've seen a model with differential attention and polynorm trained at scale! > It's trained on 2.5T tokens, with a "data mixture schedule" to continuously adjust the mixture over training. > They use WSD with a "Simple moving
RT @philtrem22: In defense of screen time. Please read the article before forming an opinion.
rachel.fast.ai
an AI researcher going back to school for immunology
RT @allgarbled: True sign of corporate stagnation is that Amazon, who basically owns the ebook market, still has not integrated an LLM into…
It's been nearly 6 months, and this might have come true. But not quite how he meant it… If AI is 10x more verbose than human coders, then AI could be writing 90% of the code without humans writing any less.
"In the next 3 to 6 months, AI is writing 90% of the code, and in 12 months, nearly all code may be generated by AI." - Anthropic CEO, Dario Amodei. Also Anthropic's open roles:
We made this btw. And we created a new way of working with AI to take advantage of it. We let the first 1000 people in to try the first version a year ago, and the results have been amazing. We'll let the next batch in soon-ish, so watch this space…
@jeremyphoward @hive_echo If only there was good integration with strong coding language models, and they were aware of cells and current variable state, that would be amazing!
RT @xlr8harder: Grok-2 has been "open sourced" but has one of the worst licenses of any recent major open weights release. Given that it'…
RT @xeophon_: I wonder whether we'll see a deliberate countermovement. I started using LLMs *less* for writing due to the slop it produces…