Emerson Segura Profile
Emerson Segura

@emerson

Followers
944
Following
4K
Media
37
Statuses
962

CTO, ML, Research

Joined April 2007
@emerson
Emerson Segura
23 days
AMD MI300X and MI355: faster than Nvidia Blackwell GPUs using vLLM and SGLang (serving sketch below). #AMD #vllm #sglang #nvidia #Meta
0
0
2
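A minimal sketch of the kind of serving workload behind this comparison, assuming a vLLM install (the ROCm build exposes the same Python API as the CUDA one); the model name and prompt are illustrative and not taken from the tweet:

```python
# Hedged sketch: offline batch generation with vLLM. The same API runs on AMD
# ROCm builds as on CUDA; model name and prompt here are illustrative only.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")
params = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(["Summarize what HBM capacity means for LLM serving."], params)
for out in outputs:
    print(out.outputs[0].text)
```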
@emerson
Emerson Segura
1 month
RT @hrw: 36 years on, and the Chinese government still seeks to erase the memory of the June 1989 #TiananmenMassacre. World leaders should….
0
225
0
@emerson
Emerson Segura
1 month
Great new book by @keerthanpg! #ai #robotics. Get the ebook or the tree version here: AI for Robotics: Toward Embodied and General Intelligence in the Physical World
0
1
7
@emerson
Emerson Segura
2 months
Robots are coming! The state of humanoid robotics in 2025 (Unitree G1 gets Dex3 finger upgrade)
1
0
2
@emerson
Emerson Segura
2 months
Great news for the global economy: "U.S. and China agreed to suspend most tariffs while trade negotiations continue." Washington slashes levies on China to 30%, while Beijing cuts tariffs on the U.S. to 10%, with more trade negotiations planned.
0
0
1
@emerson
Emerson Segura
4 months
RT @ylecun: New paper: turns out you can train deep nets without normalization layers by replacing them with a parameterized tanh().
0
568
0
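The idea in that paper (often called Dynamic Tanh, or DyT) can be sketched as a drop-in replacement for a normalization layer: a learnable scalar steepness inside a tanh, followed by the usual per-channel affine. The PyTorch below is a hedged sketch of that idea; the initialization value is an assumption, not taken from the paper.

```python
# Hedged sketch of a "parameterized tanh" layer used in place of LayerNorm.
# alpha_init is an assumed default, not a value quoted from the paper.
import torch
import torch.nn as nn

class DynamicTanh(nn.Module):
    def __init__(self, dim: int, alpha_init: float = 0.5):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(alpha_init))  # learnable scalar steepness
        self.gamma = nn.Parameter(torch.ones(dim))            # per-channel scale
        self.beta = nn.Parameter(torch.zeros(dim))            # per-channel shift

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Squash activations with a learnable steepness instead of normalizing
        # by batch/layer statistics, then apply the usual affine transform.
        return self.gamma * torch.tanh(self.alpha * x) + self.beta
```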
@emerson
Emerson Segura
9 months
RT @rao2z: On the Stone Soup of LLM Reasoning #SundayHarangue . Stone soup is the European folk story where some clever travelers convince….
0
59
0
@emerson
Emerson Segura
9 months
Marc Andreessen & Robert Nishihara (a16z) speaking about AI and startups: the health care and defense sectors, and building companies in the current environment. Great advice and perspective on creating new startups and companies. (#Ray, #Anyscale)
1
0
2
@emerson
Emerson Segura
9 months
Sergey Edunov, Meta, head of Llama 4 training, speaks about building Llama 1, 2, and 3 and working on Llama 4 at Meta. Covers scaling laws and details of pre- and post-training (at a conference on scaling AI with #Ray and #Anyscale).
0
0
1
@emerson
Emerson Segura
10 months
o1 demo on its launch day in SF @OpenAI. Thanks to @MindsDB for hosting, and to @swyx and @romainhuet.
1
1
10
@emerson
Emerson Segura
10 months
RT @igorsushko: Ukraine has the largest delegation in the country's history at the 2024 Paris Paralympics. Guess why. .
0
4K
0
@emerson
Emerson Segura
10 months
RT @CerebrasSystems: Introducing Cerebras Inference ‣ Llama3.1-70B at 450 tokens/s – 20x faster than GPUs ‣ 60c per M tokens – a fifth the….
0
303
0
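A quick back-of-the-envelope check on the numbers quoted in that retweet (throughput and price are from the tweet; the GPU baseline is just the implied 450/20):

```python
# Sanity-check the quoted Cerebras Inference figures: time and cost to
# generate 1M tokens, plus the GPU throughput implied by "20x faster".
tokens = 1_000_000
cerebras_tps = 450          # tokens per second, from the tweet
price_per_m = 0.60          # "60c per M tokens"

seconds = tokens / cerebras_tps
print(f"1M tokens at {cerebras_tps} tok/s: {seconds / 60:.1f} minutes, ${price_per_m:.2f}")
print(f"Implied GPU baseline: ~{cerebras_tps / 20:.1f} tok/s")
```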
@emerson
Emerson Segura
10 months
Grace Hopper, a driving force behind COBOL and early compilers, talk at the NSA, 1982.
0
0
1
@emerson
Emerson Segura
11 months
RT @LGSpace: A bumblebee can only fly for about 40 mins between feeding. But we've lost 97% of our wildflower meadows. So each nectar-rich….
0
605
0
@emerson
Emerson Segura
11 months
RT @Acyn: Giffords: Our lives can change so quickly. Mine did when I was shot. But I never gave up hope. I chose to make a new start and lo….
0
2K
0
@emerson
Emerson Segura
11 months
RT @gunsnrosesgirl3: Dog asks passing human to save his friend .
0
411
0
@emerson
Emerson Segura
11 months
Nvidia AI chips delayed to Q4 or Q1 of 2025! The B100, B200, and GB200 use advanced packaging techniques from TSMC to connect multiple dies on one substrate; the root cause of the delay is unclear. (source: link below) #Nvidia #AIart #ai #GPU #blackwell #tsmc @Nvidia @TSMC @intel @amd
1
0
1
@emerson
Emerson Segura
1 year
Llama 3.1 versus Anthropic Claude:
0
0
3
@emerson
Emerson Segura
1 year
Llama 3.1 (405B, 70B, 8B) details:
15 trillion tokens pretrained!
>128k context length
Better than GPT-4o/Claude in over 90% of benchmarks
820GB is the size of the large base model (rough math below)
Fine-tuned models coming next
(benchmarks below) #LLaMA3 #llama Llama 3.1 405B, Llama 3.1 8B
1
0
5
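The ~820GB figure is consistent with simple parameter-count arithmetic: 405B parameters at 2 bytes each (bf16) comes to roughly 810GB, with the remainder plausibly checkpoint overhead. A quick check:

```python
# Rough sanity check of the ~820GB base-model size quoted above:
# 405B parameters stored in bf16 (2 bytes per parameter).
params = 405e9
bytes_per_param = 2  # bf16
print(f"{params * bytes_per_param / 1e9:.0f} GB")  # ~810 GB, close to the quoted 820GB
```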