Ejento Ai Profile
Ejento Ai (@EjentoAI)

Followers: 2 · Following: 0 · Media: 133 · Statuses: 167

Joined June 2025

@EjentoAI · 3 days
paper:

@EjentoAI · 3 days
🚀 Google DeepMind just dropped one of the most important papers on agentic AI this year, and it changes how we think about scaling multi-agent systems. Instead of relying on intuition or hype, the team builds a quantitative science around agent coordination, running 180…

@EjentoAI · 4 days
🚀 A rare paper that finally explains why RL sometimes works, and why it often doesn't. This paper offers one of the clearest deep dives yet into how reasoning actually develops inside LMs, and the results challenge a lot of assumptions in the current RL-reasoning hype cycle. The…

@EjentoAI · 5 days
🚨 LLMs Are Getting Smarter: Here's How Structured Knowledge Is Quietly Transforming Them. This new survey is one of the most comprehensive deep dives into a question everyone in AI is asking right now: how do we make LLMs reliable, grounded, and reasoning-capable, not just bigger?

@EjentoAI · 7 days
paper:

@EjentoAI · 7 days
🚀 When Academic Writing Meets Agentic AI: PaperDebugger Turns Overleaf Into a Fully Autonomous Co-Author. Academic writing has always suffered from one big bottleneck: tooling that lives outside the writing environment. Copy-paste workflows, broken context, lost revision…

@EjentoAI · 10 days
Read more:

@EjentoAI · 10 days
🚨 AI agents just learned how to actually learn. Most AI agents today can recall facts. But ask them to learn how to do something new (a procedure, a workflow, a step-by-step method) after deployment, in the real world? Almost none can do it. That's the gap this new paper…
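
To make the gap concrete, here is a minimal sketch of what post-deployment procedural memory could look like: an agent that stores and replays step-by-step workflows it picks up after deployment. The class names and storage scheme are illustrative assumptions, not the paper's actual mechanism.

```python
# Minimal sketch of post-deployment procedural learning (illustrative only;
# the paper's actual mechanism is not shown in this excerpt).
from dataclasses import dataclass, field

@dataclass
class Procedure:
    name: str
    steps: list[str]   # ordered natural-language steps
    uses: int = 0      # how often the agent has replayed it

@dataclass
class ProceduralMemory:
    store: dict[str, Procedure] = field(default_factory=dict)

    def learn(self, name: str, steps: list[str]) -> None:
        """Persist a new workflow observed or demonstrated after deployment."""
        self.store[name] = Procedure(name, steps)

    def recall(self, name: str) -> list[str] | None:
        """Retrieve a stored workflow so the agent can execute it step by step."""
        proc = self.store.get(name)
        if proc is None:
            return None
        proc.uses += 1
        return proc.steps

memory = ProceduralMemory()
memory.learn("file_expense_report", [
    "open the expenses portal",
    "attach receipts",
    "submit for manager approval",
])
print(memory.recall("file_expense_report"))
```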

@EjentoAI · 12 days
link to paper:

@EjentoAI · 12 days
🔥 Meta just released a hard-hitting reality check on scaling LLM training, and it's not the story we've been telling ourselves. If you've been assuming that "just add more GPUs" is the golden path to faster, cheaper training… this new study from FAIR turns that idea upside down…

@EjentoAI · 13 days
🤔 What if a swarm of bees is actually… a single reinforcement learning agent? This paper is absolutely mind-blowing. We've all heard the phrase "hive mind", but this work gives it mathematical teeth and reframes collective intelligence in a way that cuts across biology, economics…
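
One way to give the "single agent" reading mathematical teeth (a sketch under our own assumptions; the paper's exact construction is not shown in this excerpt) is to treat the colony's joint state and joint action as one MDP with a shared policy and a single colony-level return:

```latex
% Illustrative formalization (our assumption, not necessarily the paper's):
% the whole swarm is modeled as one agent in a single MDP.
\begin{align*}
  s_t &= \bigl(s_t^{(1)}, \dots, s_t^{(N)}\bigr)
      && \text{joint state of all $N$ bees} \\
  a_t &= \bigl(a_t^{(1)}, \dots, a_t^{(N)}\bigr)
      && \text{joint action} \\
  \pi_\theta(a_t \mid s_t) &= \prod_{i=1}^{N} \pi_\theta\bigl(a_t^{(i)} \mid s_t\bigr)
      && \text{one shared, colony-level policy} \\
  J(\theta) &= \mathbb{E}_{\pi_\theta}\Bigl[\sum_{t \ge 0} \gamma^{t}\, r(s_t, a_t)\Bigr]
      && \text{a single colony-level return}
\end{align*}
```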

@EjentoAI · 14 days
paper:

@EjentoAI · 14 days
⚡ Scaling isn't the only path to smarter models; sometimes it's about training them better. This paper takes a refreshing stance in a landscape dominated by "bigger is better." Instead of pushing parameter counts, the authors show how methodology, not magnitude, can unlock…

@EjentoAI · 17 days
One of the most interesting directions in multimodal AI right now is rethinking how LLMs gain new modalities, and this paper introduces a surprisingly effective alternative to the "train a giant VLM" approach. Instead of merging vision and language into one huge model, the…
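
The post cuts off before naming the paper's mechanism, so as a generic illustration of the adapter-style alternative it alludes to (our assumption, not confirmed as this paper's method): freeze a vision encoder and an LLM, and train only a small projection that maps image features into the LLM's embedding space.

```python
# Generic adapter-style alternative to training one giant VLM: keep the vision
# encoder and the LLM frozen, train only a small connector. This is a common
# pattern; the excerpt does not confirm it is this paper's method.
import torch
import torch.nn as nn

class VisionToLLMAdapter(nn.Module):
    def __init__(self, vision_dim: int, llm_dim: int):
        super().__init__()
        # The only trainable piece: a small MLP connector.
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, image_features: torch.Tensor) -> torch.Tensor:
        # image_features: (batch, num_patches, vision_dim) from a frozen encoder.
        # Returns pseudo-token embeddings a frozen LLM can consume.
        return self.proj(image_features)

adapter = VisionToLLMAdapter(vision_dim=1024, llm_dim=4096)
fake_patches = torch.randn(2, 256, 1024)   # stand-in for frozen encoder output
visual_tokens = adapter(fake_patches)      # shape: (2, 256, 4096)
print(visual_tokens.shape)
```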

@EjentoAI · 19 days
Google recently released this paper, and some people are already calling it "Attention Is All You Need 2.0." And honestly… we get why. This work introduces Nested Learning (NL), a framework that argues the deep-learning architectures we've been scaling for a decade are…

@EjentoAI · 20 days
🚀 GPT-5 isn't just assisting science; it's starting to collaborate in it. After reading "Early Science Acceleration Experiments with GPT-5", one thing is clear: AI is beginning to contribute real scientific insight across fields. 🔹 Re-deriving research results: GPT-5…

@EjentoAI · 21 days
Reinforcement learning for LLM agents just got a serious upgrade. The new Agent-R1 paper takes a deep look at one of the biggest challenges in AI right now: ➡️ How do we train language models not just to answer, but to act, plan, use tools, and adapt across multi-turn…
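
For intuition, here is a minimal, self-contained sketch of what multi-turn RL for a tool-using agent involves: roll out an episode of tool calls, then credit each turn with the discounted return that follows it. The toy environment and REINFORCE-style credit assignment are our own illustrative assumptions, not Agent-R1's actual recipe.

```python
# Toy multi-turn RL loop for a tool-using agent (illustrative; not Agent-R1's
# training algorithm, which this excerpt does not describe).
import random

class ToyEnv:
    """Stub environment: rewards answering, charges a small cost per tool call."""
    def __init__(self):
        self.turns = 0

    def step(self, action: str):
        self.turns += 1
        done = action == "answer" or self.turns >= 8
        reward = 1.0 if action == "answer" else -0.1
        return f"obs_{self.turns}", reward, done

def policy(history):
    # Stand-in for the LM policy: pick a tool call or a final answer.
    return random.choice(["search", "calculate", "answer"])

def rollout(policy, env):
    """One multi-turn episode: alternate actions and observations until done."""
    history, trajectory, done = [], [], False
    while not done:
        action = policy(history)
        obs, reward, done = env.step(action)
        trajectory.append((action, reward))
        history.append((action, obs))
    return trajectory

def discounted_returns(trajectory, gamma: float = 0.99):
    """REINFORCE-style credit assignment: each turn is credited with the
    discounted return of everything after it, so early tool choices are
    reinforced by the episode's eventual outcome."""
    ret, returns = 0.0, []
    for _, reward in reversed(trajectory):
        ret = reward + gamma * ret
        returns.append(ret)
    return list(reversed(returns))

traj = rollout(policy, ToyEnv())
print(traj, discounted_returns(traj))
```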

@EjentoAI · 25 days
🚨 New Research Alert: Meta just showed that you can reach SOTA LLM performance using… simple weight averaging. The paper "Souper-Model: How Simple Arithmetic Unlocks State-of-the-Art LLM Performance" proposes SoCE (Soup of Category Experts), a smarter way to combine multiple…
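
The arithmetic itself really is simple. Below is a minimal sketch of weight averaging ("model souping") across same-architecture checkpoints; SoCE's category-expert weighting is a refinement the post doesn't spell out, so the optional non-uniform weights here are our assumption.

```python
# Minimal sketch of weight averaging across same-architecture checkpoints.
# Uniform averaging shown; the non-uniform weights are a stand-in for
# SoCE-style expert weighting, which this excerpt does not detail.
import torch
import torch.nn as nn

class Tiny(nn.Module):
    """Toy stand-in for same-architecture fine-tuned checkpoints."""
    def __init__(self, dim: int = 8):
        super().__init__()
        self.fc = nn.Linear(dim, dim)

def soup(models: list[nn.Module], weights: list[float] | None = None) -> dict:
    """Return a state dict that is the (weighted) average of the models'."""
    if weights is None:
        weights = [1.0 / len(models)] * len(models)   # uniform soup
    avg = {k: torch.zeros_like(v) for k, v in models[0].state_dict().items()}
    for model, w in zip(models, weights):
        for k, v in model.state_dict().items():
            avg[k] += w * v
    return avg

experts = [Tiny(), Tiny(), Tiny()]      # pretend these are category experts
merged = Tiny()
merged.load_state_dict(soup(experts))   # one model from simple arithmetic
# Non-uniform weighting (our illustrative stand-in for SoCE):
merged.load_state_dict(soup(experts, weights=[0.5, 0.3, 0.2]))
```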