Mark

@mkovarski

Followers: 2K
Following: 68K
Media: 1K
Statuses: 44K

AI

Somewhere Magical
Joined February 2011
@NGO275
Shu
11 hours
DEEP Robotics' Lite 3 exceeded my expectations. Dog robots are easy to dismiss, but its movements were remarkably stable. It already holds a dominant position in industrial applications, but consumer use cases are still being explored.
3
78
294
@mkovarski
Mark
19 minutes
@humanoidsdaily
Humanoids daily
3 hours
A notable strategic pivot from Xpeng's AI Day: The 'Iron' humanoid is being designed for deep customization, including "different body shapes and sexes". CEO He Xiaopeng detailed plans for "bionic muscles," "full coverage soft skin", and options to "choose the sex", comparing
0
0
1
@tslaming
Ming
7 hours
XPeng made several significant announcements about its AI roadmap at its AI Day 2025 event today:
✅ VLA 2.0, a versatile new physical model for vehicles, robots, and flying cars, will be open-source, with Volkswagen as its first partner.
✅ XPeng announced its plan to launch
39
48
284
@mathusmassias
Mathurin Massias
7 hours
🌀 New paper on the generation phases of Flow Matching: https://t.co/tzG2kPVGsE
Are FM & diffusion models nothing else than denoisers trained at every noise level? In theory yes, *if trained optimally*. But in practice, do all noise levels matter equally?
3
32
193
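For context on the question raised in the flow matching tweet above, here is a minimal, hedged sketch of the standard linear-path conditional flow matching objective, which is what makes the "denoiser trained at every noise level" reading concrete. The names (`velocity_model`, `flow_matching_loss`) are illustrative and not taken from the linked paper.

```python
import torch

def flow_matching_loss(velocity_model, x1):
    """Linear-path conditional flow matching: regress the model onto the
    constant velocity (x1 - x0) along the straight line from noise to data."""
    x0 = torch.randn_like(x1)                              # pure-noise endpoint
    t = torch.rand(x1.shape[0], *([1] * (x1.dim() - 1)))   # one noise level per sample
    xt = (1 - t) * x0 + t * x1                             # point on the interpolation path
    target = x1 - x0                                       # conditional velocity field
    pred = velocity_model(xt, t)
    return ((pred - target) ** 2).mean()                   # uniform weight over noise levels
```

Note that the weighting over t is implicitly uniform here; whether all noise levels should in fact be weighted equally is exactly the question the tweet raises.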
@FredericLambert
Fred Lambert
12 hours
Xpeng’s humanoid robot is giving me a tour of its HQ experience right now. Staff said there’s zero teleoperation.
93
127
782
@XRoboHub
RoboHub🤖
6 hours
Galbot, collaborating with top-tier research teams including Peking University, University of Adelaide, and Zhejiang University, launched NavFoM, a new cross-embodiment, omni-view foundational navigation model. Three key application models extend its utility from indoor to urban
@XRoboHub
RoboHub🤖
5 months
GALBOT just launched TrackVLA, a product-grade, end-to-end embodied FSD large model. This pure-vision, language-instructed, autonomously reasoning, zero-shot generalizable embodied VLA model is already powering Unitree's robot dogs. They're now flexibly and autonomously
4
15
42
@Meituan_LongCat
Meituan LongCat
1 hour
⭐ Explore UNO-Bench: A Unified Omni-Modal Benchmark
🚀 Unified Framework: Efficient uni/omni-modal understanding evaluation
📊 Comprehensive Evaluation: 44 task types across 5 modality combinations; Perception & Reasoning coverage
🗒️ High-Quality Dataset: human centric
3
6
38
@f14bertolotti
Francesco Bertolotti
8 hours
This author shows that grokking can arise purely from an initial and quick overfitting of data followed by a movement on the zero-loss region guided purely by weight decay. 🔗 https://t.co/hGF7vkMS8b
2
9
82
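As a hedged sketch of the mechanism described in the grokking tweet above (not the author's code): once the model has quickly overfit and the training loss is essentially zero, the data-loss gradient vanishes, so a decoupled weight-decay term alone keeps moving the weights along the zero-loss region toward lower-norm solutions.

```python
def decoupled_update(w, data_grad, lr=1e-3, weight_decay=0.01):
    """One decoupled-weight-decay step, simplified to plain SGD for illustration.
    After the quick overfitting phase, data_grad is ~0, so the decay term alone
    drives the slow drift along the zero-train-loss region."""
    w = w - lr * weight_decay * w   # decay term: active even at zero training loss
    w = w - lr * data_grad          # data term: ~0 once the model has overfit
    return w
```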
@Dheemanthredy
Dheemanth Reddy
20 hours
Here is maya1, our open-source voice model. We're building the future of voice intelligence; the @mayaresearch_ai team is incredible, and this is a remarkable moment.
113
234
2K
@CharlieDreemur
Dreemur
1 day
👀 Seeing Eye is here! Can text-only LLMs do multimodal reasoning efficiently? 🤔 YES! ✨ Excited to share our new work: SeeingEye: Agentic Information Flow Unlocks Multimodal Reasoning in Text-Only LLMs
Paper: https://t.co/rWssauEyay
Code: https://t.co/YXAg5HLzFT
1
2
6
@cheryyun_l
Yongyuan Liang
17 hours
Unified multimodal models can generate text and images, but can they truly reason across modalities? 🎨 Introducing ROVER, the first benchmark that evaluates reciprocal cross-modal reasoning in unified models, the next frontier of omnimodal intelligence. 🌐 Project:
4
20
82
@chuanyang_jin
Chuanyang Jin
14 hours
I gave a talk on the Era of Real-World Human Interaction at @Google. It's great to see frontier AI labs like Google taking a strong interest in understanding users and evolving their models through user interaction. Yes, while today's AI can win gold at the IMO, it often struggles
4
14
91
@jay_azhang
Jay A
2 days
Season 1 of Alpha Arena has officially ended. Qwen 3 MAX pulled ahead at the very end to secure the win, so congrats to the @Alibaba_Qwen team. Thanks to everyone who tuned in to our first experiment in understanding how LLMs handle the noisy, adversarial, non-stationary world of
125
110
1K
@BAAIBeijing
BAAI
15 hours
Real!!! BAAI THOR is here. Inspired by human reflexes, we’ve achieved robust whole-body reactions, enabling humanoids to handle intense interactions in the real world. It’s learned — not programmed. Watch it in action and read the full report 👇 📄 Paper:
3
28
106
@paulnovosad
Paul Novosad
20 hours
What happens when online job applicants start using LLMs? It ain't good.
1. Pre-LLM, cover letter quality predicts your work quality, and a good cover gets you a job
2. LLMs wipe out the signal, and employer demand falls
3. Model suggests high ability workers lose the most
1/n
43
231
1K
@preslav_nakov
Preslav Nakov
1 day
🎉 MBZUAI at #EMNLP2025! Proud to share that MBZUAI will present 78 papers at EMNLP 2025 in Suzhou, China 🇨🇳 — spanning LLMs, multilingual AI, ethics, safety & more. 👏 Congrats to our faculty, students & collaborators! See the full list below 👇 #AI #NLP #LLMs #Research
2
6
34
@vedant_gupta_16
Vedant Gupta
1 day
Excited to introduce DEPS (Discovery of GenEralizable Parameterized Skills) at #NeurIPS2025! DEPS learns interpretable parameterized skills that drastically improve generalisation to unseen tasks, especially in data-constrained settings and on out-of-distribution tasks. (1/n)
1
12
20
@luisa_zintgraf
Luisa Zintgraf
22 hours
Excited to share our new paper, "DataRater: Meta-Learned Dataset Curation"! We explore a fundamental question: How can we *automatically* learn which data is most valuable for training foundation models? Paper: https://t.co/N2ozU2RXWb (to appear at @NeurIPSConf). Thread 👇
7
38
247
@hayou_soufiane
Soufiane Hayou
18 hours
🎯 Just released a new preprint that proves LR transfer under μP.
-> The Problem: When training large neural networks, one of the trickiest questions is: what learning rate should I use? [1/n] 🧵
Link: https://t.co/cnYtpfVHpE
7
30
222
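As a rough, assumption-laden illustration of what "LR transfer under μP" means in practice: under μP with Adam, a learning rate tuned at a small base width is reused at larger widths, with the learning rate of matrix-like parameters scaled down by the width ratio. The function below is a sketch under those assumptions, not code from the preprint; the exact per-parameter rules come from the μP parametrization tables.

```python
def mup_adam_lr(base_lr: float, base_width: int, width: int, is_matrix_like: bool) -> float:
    """Illustrative μP-style Adam rule: matrix-like parameters (hidden and
    output weight matrices) use an LR that shrinks like 1/width, so a value
    tuned at base_width can be reused at larger widths; vector-like parameters
    (embeddings, biases, norm gains) keep the base LR."""
    if is_matrix_like:
        return base_lr * base_width / width
    return base_lr

# Example with hypothetical numbers: an LR tuned at width 256, reused at width 4096.
lr_large = mup_adam_lr(base_lr=1e-3, base_width=256, width=4096, is_matrix_like=True)
```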
@stanfordnlp
Stanford NLP Group
17 hours
.@stanfordnlp (well, some of us) are off in 苏州市 (Suzhou) for EMNLP 2025 (@emnlpmeeting). Here are a few of our papers:
• Identifying Unlearned Data in LLMs via Membership Inference Attacks https://t.co/bhT9nFUh4g
• In-Context Learning Boosts Speech Recognition via
2
6
55