Abby

@anmorgan2414

271 Followers · 1K Following · 130 Media · 541 Statuses

Joined February 2012
Abby @anmorgan2414 · 7 days
RT @akshay_pachaar: Let's compare GPT-5 and Claude Opus-4.1 for code generation:
Abby @anmorgan2414 · 7 days
If you’re still around and you’re interested in observability, evaluation, and optimization of LLMs, check out Opik, Comet’s open source LLM eval framework (9/9).
github.com
Debug, evaluate, and monitor your LLM applications, RAG systems, and agentic workflows with comprehensive tracing, automated evaluations, and production-ready dashboards. - comet-ml/opik
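For readers who want to try it, here is a minimal sketch of tracing a call with Opik's Python SDK. It assumes the `opik` package exposes a `track` decorator as the comet-ml/opik README describes; the `answer` function and its echo body are hypothetical stand-ins for a real LLM call.

```python
# Minimal sketch, assuming the `opik` package's `track` decorator works as
# described in the comet-ml/opik README. `answer` is a hypothetical placeholder.
from opik import track

@track  # records this call's inputs, outputs, and latency as a trace in Opik
def answer(question: str) -> str:
    # Placeholder for a real LLM call (e.g., an OpenAI or LiteLLM request).
    return f"Echo: {question}"

if __name__ == "__main__":
    print(answer("What does 'LLM training' mean today?"))
```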
Abby @anmorgan2414 · 7 days
▪️Emerging methods like instruction-augmented pretraining, multi-phase curricula, continual pretraining, and reinforcement pretraining (RPT). (7/9).
arxiv.org
In this work, we introduce Reinforcement Pre-Training (RPT) as a new scaling paradigm for large language models and reinforcement learning (RL). Specifically, we reframe next-token prediction as a...
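To make the RPT framing concrete, here is a toy sketch (my own illustration, not the paper's code) of next-token prediction scored with a verifiable 0/1 reward, which an RL objective such as a policy-gradient update would then optimize in place of cross-entropy:

```python
# Toy illustration of the RPT idea from the abstract above: next-token
# prediction reframed as a task with a verifiable reward. Not the paper's code.
from typing import List

def rpt_reward(predicted_token: int, ground_truth_token: int) -> float:
    # Verifiable reward: 1 if the model's predicted next token matches the corpus.
    return 1.0 if predicted_token == ground_truth_token else 0.0

def episode_rewards(predictions: List[int], targets: List[int]) -> List[float]:
    # One "episode" per context position; these rewards would feed an RL
    # objective rather than a cross-entropy loss.
    return [rpt_reward(p, t) for p, t in zip(predictions, targets)]

print(episode_rewards([42, 7, 99], [42, 8, 99]))  # -> [1.0, 0.0, 1.0]
```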
Abby @anmorgan2414 · 7 days
Inconsistent terminology, shifting methodologies, and blurred boundaries between pretraining, fine-tuning, and alignment make it increasingly difficult to answer a basic question: What does “LLM training” really mean today? (4/9)
Abby @anmorgan2414 · 7 days
As models evolve into multimodal, instruction-following systems, the training pipeline has become less standardized and more opaque. (3/9).
Abby @anmorgan2414 · 7 days
Pretraining defines everything from what a model knows to how it reasons. If you want to understand why LLMs behave the way they do, you need to understand how they’re pretrained. But here’s the problem: (2/9).
Abby @anmorgan2414 · 7 days
The line between pretraining and fine-tuning is blurrier than ever. Here’s why that matters. 🧵 (1/9)
Abby @anmorgan2414 · 9 days
RT @akshay_pachaar: Let's compare OpenAI gpt-oss and Qwen-3 on maths & reasoning:
Abby @anmorgan2414 · 2 months
So excited for this incredible milestone! Huge thanks to the entire community for making this happen! 🏆🤩 @Cometml
Comet @Cometml · 2 months
⭐ Opik has officially passed 10,000 GitHub Stars! ⭐ Everyone who works at Comet has contributed to Opik over the last 9 months, but the key to Opik’s rapid growth has been its community, meaning you 🤩. In that light, we want to thank some of the people who’ve helped us 🧵
Abby @anmorgan2414 · 2 months
RT @svpino: Build your first AI agent + MCP Server in Python. Here is everything you need to build your first AI agent in less than 20 min….
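For context on how small the server half of that pairing can be, here is a hedged sketch of a bare-bones MCP server using the official `mcp` Python SDK's FastMCP helper; the server name and `add` tool are hypothetical examples, not taken from the linked thread.

```python
# Minimal sketch of an MCP server, assuming the official `mcp` Python SDK's
# FastMCP helper. The server name and `add` tool are hypothetical examples.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers and return the sum."""
    return a + b

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default, so an agent can attach to it
```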
Abby @anmorgan2414 · 3 months
RT @Cometml: ⚡LLMs are exciting to build with — until you remember you’re on the hook for what they say 😅. We’ve built something to catch t….
Abby @anmorgan2414 · 3 months
RT @vincent_koc: @17jmumf “LLM engineers need to think like a data scientist, but build like an engineer”. Superb quote on the mindset gap….
Abby @anmorgan2414 · 4 months
RT @MITDeepLearning: ⭐️⭐️ Lecture 9 of @MITDeepLearning 2025 is now available online #FREE for ALL! Should there be a Hippocratic Oath for….
Abby @anmorgan2414 · 4 months
RT @harjtaggar: Managing AI agents by writing evals is so much less awkward than those coffee walks where you figure out the right amount o….
Abby @anmorgan2414 · 4 months
RT @vincent_koc: Last week I got the chance to share my story at TEDxKL's salon session on rebuilding communities and having my little @TED….
Abby @anmorgan2414 · 4 months
RT @_mchenco: our workers ai team sprinted through saturday to get llama 4 up, learned a lot over the last 24h (and still learning) - want….