
Abby (@anmorgan2414)
271 Followers · 1K Following · 130 Media · 541 Statuses · Joined February 2012
If you’re still around and you’re interested in observability, evaluation, and optimization of LLMs, check out Opik, Comet’s open source LLM eval framework (9/9).
github.com
Debug, evaluate, and monitor your LLM applications, RAG systems, and agentic workflows with comprehensive tracing, automated evaluations, and production-ready dashboards. - comet-ml/opik
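Not from the thread itself, but for context on what Opik's tracing looks like: a minimal sketch, assuming the opik Python package and its track decorator; the functions are illustrative stand-ins, not real retrieval or LLM calls.

```python
# Hedged sketch: assumes the opik package (pip install opik) and its
# @track decorator. Each decorated call is logged as a span, so the
# nested calls below show up as one trace in the Opik dashboard.
from opik import track

@track
def retrieve_context(question: str) -> str:
    # Illustrative stand-in for a retriever; logged as a child span.
    return "Opik is Comet's open-source LLM evaluation framework."

@track
def answer(question: str) -> str:
    context = retrieve_context(question)
    # Illustrative stand-in for an LLM call; swap in a real client.
    return f"Answer based on: {context}"

if __name__ == "__main__":
    print(answer("What is Opik?"))
```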
▪️The critical role of data curation in scaling models responsibly. (8/9).
arxiv.org
We investigate the optimal model size and number of tokens for training a transformer language model under a given compute budget. We find that current large language models are significantly...
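The paper linked above is the Chinchilla scaling study. A back-of-the-envelope illustration of its headline result, assuming the standard C ≈ 6ND approximation for training FLOPs and the paper's roughly 20-tokens-per-parameter compute-optimal ratio:

```python
# Hedged back-of-the-envelope Chinchilla-style sizing. Assumes the
# common approximation C ~= 6 * N * D for training FLOPs and the
# paper's ~20 tokens-per-parameter compute-optimal rule of thumb.
import math

def compute_optimal(c_flops: float, tokens_per_param: float = 20.0):
    """Return (params N, tokens D) solving C = 6*N*D with D = r*N."""
    n = math.sqrt(c_flops / (6.0 * tokens_per_param))  # N = sqrt(C / 6r)
    return n, tokens_per_param * n

if __name__ == "__main__":
    # Chinchilla's budget was ~5.76e23 FLOPs; this recovers roughly
    # its 70B-parameter, 1.4T-token configuration.
    n, d = compute_optimal(5.76e23)
    print(f"params ~ {n / 1e9:.0f}B, tokens ~ {d / 1e12:.1f}T")
```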
▪️Emerging methods like instruction-augmented pretraining, multi-phase curricula, continual pretraining, and reinforcement pretraining (RPT). (7/9).
arxiv.org
In this work, we introduce Reinforcement Pre-Training (RPT) as a new scaling paradigm for large language models and reinforcement learning (RL). Specifically, we reframe next-token prediction as a...
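To make the RPT idea concrete: the reward is verifiable because it comes straight from the corpus, with no learned reward model; the policy reasons, commits to a next-token prediction, and is scored against the true continuation. A toy sketch, using exact prefix matching as a simplification of the paper's boundary-aware matching:

```python
# Toy sketch of an RPT-style verifiable reward (a simplification:
# the paper matches predicted bytes against the ground-truth
# continuation at valid token boundaries).
def rpt_reward(prediction: str, true_continuation: str) -> float:
    """1.0 if the committed prediction matches the corpus, else 0.0."""
    pred = prediction.strip()
    return 1.0 if pred and true_continuation.startswith(pred) else 0.0

# Context: "The capital of France is" ... corpus continues "Paris."
print(rpt_reward("Paris", "Paris."))   # 1.0 -> reinforce this reasoning
print(rpt_reward("London", "Paris."))  # 0.0
```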
▪️Why “pretraining” is an overloaded term, and how definitions vary across contexts. (6/9).
arxiv.org
Pretrained Foundation Models (PFMs) are regarded as the foundation for various downstream tasks with different data modalities. A PFM (e.g., BERT, ChatGPT, and GPT-4) is trained on large-scale...
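One reason the term is overloaded: BERT-style and GPT-style "pretraining" optimize different self-supervised objectives, even though both reduce to cross-entropy. A toy sketch of the two losses, with random tensors standing in for real model outputs:

```python
# Toy contrast of two "pretraining" objectives. Random tensors stand
# in for a real model's logits; only the loss wiring is the point.
import torch
import torch.nn.functional as F

vocab, seq_len, batch = 100, 8, 2
tokens = torch.randint(0, vocab, (batch, seq_len))
logits = torch.randn(batch, seq_len, vocab)  # stand-in model output

# Masked-LM (BERT-style): predict only the ~15% of positions hidden
# from the model. Force one masked position so the loss is defined.
mask = torch.rand(batch, seq_len) < 0.15
mask[0, 0] = True
mlm_loss = F.cross_entropy(logits[mask], tokens[mask])

# Causal-LM (GPT-style): at every position, predict the next token.
clm_loss = F.cross_entropy(
    logits[:, :-1].reshape(-1, vocab), tokens[:, 1:].reshape(-1)
)
print(f"MLM loss {mlm_loss.item():.2f} | CLM loss {clm_loss.item():.2f}")
```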
I cover:
▪️The evolution from ULMFiT’s inductive transfer learning to InstructGPT’s three-stage pipeline. (5/9).
arxiv.org
Making language models bigger does not inherently make them better at following a user's intent. For example, large language models can generate outputs that are untruthful, toxic, or simply not...
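For reference, the three stages InstructGPT stacked on top of pretraining, as a schematic sketch (placeholder function bodies, not the paper's code):

```python
# Schematic sketch of InstructGPT's three-stage pipeline; bodies are
# placeholders, not the paper's implementation.

def supervised_finetune(pretrained_lm, demonstrations):
    """Stage 1: fine-tune on human-written (prompt, response) pairs."""
    return pretrained_lm  # placeholder for SFT gradient steps

def train_reward_model(sft_policy, ranked_comparisons):
    """Stage 2: fit a scalar reward to human preference rankings."""
    return lambda prompt, response: 0.0  # placeholder scorer

def rlhf(sft_policy, reward_model, prompts, kl_coef=0.02):
    """Stage 3: PPO on the learned reward, with a KL penalty keeping
    the policy close to the SFT model (kl_coef is illustrative)."""
    return sft_policy  # placeholder for PPO updates

policy = supervised_finetune("pretrained-lm", demonstrations=[])
reward = train_reward_model(policy, ranked_comparisons=[])
aligned = rlhf(policy, reward, prompts=[])
```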
In this article, I unpack the foundational stage of the LLM pipeline: pretraining.
comet.com
From ULMFiT to instruction-augmented and RL-based methods, this article explains how LLM pretraining has evolved and why it still matters.
So excited for this incredible milestone! Huge thanks to the entire community for making this happen! 🏆🤩 @Cometml
⭐ Opik has officially passed 10,000 GitHub Stars! ⭐ Everyone who works at Comet has contributed to Opik over the last 9 months, but the key to Opik’s rapid growth has been its community—meaning you 🤩. In that light, we want to thank some of the people who’ve helped us 🧵
RT @svpino: Build your first AI agent + MCP Server in Python. Here is everything you need to build your first AI agent in less than 20 min….
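The retweeted thread is truncated here, but for context: a minimal MCP server sketch, assuming the official mcp Python SDK and its FastMCP helper (server name and tool are illustrative):

```python
# Hedged sketch of a minimal MCP server, assuming the official Python
# SDK (pip install mcp) and its FastMCP helper. Name and tool logic
# are illustrative.
from mcp.server.fastmcp import FastMCP

server = FastMCP("demo-tools")

@server.tool()
def add(a: int, b: int) -> int:
    """Add two numbers; exposed as a tool an agent can call."""
    return a + b

if __name__ == "__main__":
    server.run()  # serves over stdio for an MCP-capable client/agent
```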
RT @Cometml: ⚡LLMs are exciting to build with — until you remember you’re on the hook for what they say 😅. We’ve built something to catch t….
RT @vincent_koc: @17jmumf “LLM engineers need to think like a data scientist, but build like an engineer”. Superb quote on the mindset gap….
RT @MITDeepLearning: ⭐️⭐️ Lecture 9 of @MITDeepLearning 2025 is now available online #FREE for ALL!. Should there be a Hippocratic Oath for….
RT @harjtaggar: Managing AI agents by writing evals is so much less awkward than those coffee walks where you figure out the right amount o….
RT @vincent_koc: Last week I got the chance to share my story at TEDxKL's salon session on rebuilding communities and having my little @TED….
RT @_mchenco: our workers ai team sprinted through saturday to get llama 4 up,. learned a lot over the last 24h (and still learning) - want….