Hagay Lupesko

@hagay_lupesko

Followers: 307 · Following: 518 · Media: 4 · Statuses: 128

Building the world's fastest inference service and ML stack @ Cerebras as the SVP of Cloud and Inference.

San Francisco Bay Area
Joined September 2016
@andrewdfeldman
Andrew Feldman
4 months
@chamath @JonathanRoss321 @GroqInc According to OpenRouter, @CerebrasSystems is more than twice as fast as @GroqInc. Find today's data here: https://t.co/oFj460mmnh
2
23
187
@bigdata
Ben Lorica 罗瑞卡
2 years
🆕💡🎧 DBRX Unpacked - featuring @hagay_lupesko @Databricks: 🔥 Open LLM bridging quality & cost for AI apps ⚡️ Mixture-of-experts architecture for efficiency ⚙️ Optimized for enterprise needs: code gen, long context https://t.co/1QZnVkeHOu
0
7
10
@DbrxMosaicAI
Databricks Mosaic Research
2 years
🎧 Our VP of Engineering @hagay_lupesko joins the Artificial Intelligence podcast to share insights on how our #GenAI platform enables #LLM training at scale. Give a listen!
open.spotify.com
The Artificial Intelligence Podcast · Episode
0
2
10
@DbrxMosaicAI
Databricks Mosaic Research
3 years
📣Announcing MosaicML Inference 📣 Ever wanted a text or image generation API that doesn’t make you send data to a third party? Or a cheaper solution than paying by the token? Or an easy way to get a trained model into production? We can help with that. 🧵
15
99
633
@mvpatel2000
Mihir Patel
3 years
@jefrankle @landanjs Now, with @MosaicML, it's absurdly easy. Here's the loss curve from the week. I launched the runs and didn't have to touch them once, including when things broke during my Saturday brunch with @vivek_myers. It crashed, identified the failure, and restarted by itself 🤯
1
5
44
@DbrxMosaicAI
Databricks Mosaic Research
3 years
🚨 A few months ago we announced that you can train Stable Diffusion from scratch for less than $125k using the MosaicML platform. A major price drop is coming...and we have the training run to back it up. Stay tuned for a major announcement this week!
0
9
70
@GradFlowTech
Gradient Flow
3 years
The prevailing approach for developers today is to use proprietary LLMs through APIs. Hagay Lupesko @MosaicML on why factors such as domain specificity, security, privacy, regulation, and IP protection and control will prompt organizations to invest in their own custom LLMs
0
5
3
@DbrxMosaicAI
Databricks Mosaic Research
3 years
Woo hoo! 🙌 What an honor to make the @Forbes AI 50 List. MosaicML empowers you to build your own #GenerativeAI. Train, finetune, and deploy your custom #LLM today: https://t.co/os6ZHHhjFe
13
44
328
@NaveenGRao
Naveen Rao
3 years
Tune in to hear from our CTO @hanlintang !
@DbrxMosaicAI
Databricks Mosaic Research
3 years
Set your alarm ⏰ for tomorrow at 9:50 AM: MosaicML CTO @hanlintang joins the virtual #MLOpsCommunity LLMs in Production Conference to share insights on efficiently scaling & deploying #LLMs. Register here: https://t.co/MRXotAFdmu
1
5
12
@demaria_michael
Michael DeMaria
3 years
A quintessential example of the importance of data privacy and why companies should build and use their own models. Demonstrates the significance of what @NaveenGRao, @jefrankle, and @hagay_lupesko are building at @MosaicML. https://t.co/iAIa8cOj45
techradar.com
Samsung meeting notes and new source code are now in the wild after being leaked in ChatGPT
0
1
10
@DbrxMosaicAI
Databricks Mosaic Research
3 years
Think it’s too hard—or too expensive—to train your own GPT or diffusion models from scratch? Think again. We built the MosaicML platform to tackle the challenges of training large models and unleash the power of #generativeAI. Learn more: https://t.co/k7aJYFYaeJ
databricks.com
Latest blogs from the team at Mosaic Research
3
10
47
@ml_hardware
Abhi Venigalla
3 years
Been hinting at this blog for a while and it's finally here! The Streaming team @MosaicML has built an open source library (`mosaicml-streaming`) for efficiently loading training data from object stores like S3, GCS, OCI, and more. https://t.co/y9a5vOSZws
databricks.com
Loading your training data becomes an escalating challenge as datasets grow and the number of nodes scales. We built StreamingDataset to make training on large datasets from cloud...
3
23
130
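For context on how that library is used, here is a minimal sketch of the reader side of `mosaicml-streaming`, assuming a dataset already written in the library's MDS shard format; the S3 path and cache directory are hypothetical placeholders.

```python
# Minimal sketch of reading training data with the open source
# `mosaicml-streaming` library (pip install mosaicml-streaming).
# The S3 path and cache directory are hypothetical placeholders, and
# the dataset is assumed to already be written in MDS shard format.
from torch.utils.data import DataLoader
from streaming import StreamingDataset

# Shards are fetched from object storage lazily and cached locally,
# so training can start before the full dataset is downloaded.
dataset = StreamingDataset(
    remote="s3://my-bucket/my-dataset",  # hypothetical remote location
    local="/tmp/streaming-cache",        # local shard cache
    shuffle=True,                        # deterministic, resumable shuffling
    batch_size=16,
)

# A plain PyTorch DataLoader works on top of StreamingDataset.
loader = DataLoader(dataset, batch_size=16)
for batch in loader:
    ...  # training step goes here
```

Because shards are pulled on demand and cached, each node only downloads the data it actually reads, which is what keeps loading from stores like S3, GCS, and OCI efficient as node counts grow.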
@DbrxMosaicAI
Databricks Mosaic Research
3 years
Our VP of Engineering @hagay_lupesko gave a great talk at last week's #PyTorchConference on how easy it is to speed up #ML model #training with our Composer tool.⚡️ Couldn't make it to New Orleans? Catch the recap here!
0
5
11
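For a sense of what the workflow in that talk looks like, here is a minimal sketch using Composer's public Trainer API. The synthetic random-tensor dataset is a stand-in so the snippet runs end to end, and the specific algorithms shown are illustrative choices, not necessarily the ones from the talk.

```python
# Minimal sketch of speeding up training with the open source
# Composer library (pip install mosaicml). The data is synthetic
# and the chosen speedup algorithms are illustrative.
import torch
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import resnet50
from composer import Trainer
from composer.algorithms import BlurPool, ChannelsLast, LabelSmoothing
from composer.models import ComposerClassifier

# Wrap a standard torchvision model so Composer can train it.
model = ComposerClassifier(resnet50(), num_classes=1000)

# Synthetic stand-in data so the sketch is self-contained.
train_loader = DataLoader(
    TensorDataset(torch.randn(64, 3, 224, 224),
                  torch.randint(0, 1000, (64,))),
    batch_size=16,
)

trainer = Trainer(
    model=model,
    train_dataloader=train_loader,
    max_duration="1ep",  # duration strings: epochs ("ep"), batches ("ba"), ...
    # Speedup methods compose declaratively; Composer injects them
    # into the training loop at the right points.
    algorithms=[
        BlurPool(),           # anti-aliased downsampling
        ChannelsLast(),       # NHWC memory format for faster convolutions
        LabelSmoothing(0.1),  # soften one-hot targets for regularization
    ],
)
trainer.fit()
```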
@hagay_lupesko
Hagay Lupesko
3 years
Folks at @awscloud know their stuff 👇🏽
@DbrxMosaicAI
Databricks Mosaic Research
3 years
The team at @awscloud shows how to use our #Composer open-source library to train ResNet-50 on the ImageNet dataset with industry-standard accuracy - for less than the cost of a large pizza. 🍕🍕🍕 #efficientML #AI
0
0
2
@hagay_lupesko
Hagay Lupesko
3 years
Excited and proud to share what we've been working on! 👇🏽
@DbrxMosaicAI
Databricks Mosaic Research
3 years
📢 MosaicML Cloud is now available for early access! Create advanced AI models faster and cheaper than you thought possible.
0
0
2
@DbrxMosaicAI
Databricks Mosaic Research
3 years
Our open source Composer library has everything you need to improve the ML model training experience. Today we're dropping our 0.10 release: @cometml integration, automatic eval batch size selection, API improvements, and streaming #datasets.
0
6
28
@DbrxMosaicAI
Databricks Mosaic Research
3 years
How much do you think it costs to train a GPT-3 quality model from scratch?
15
7
27
@hagay_lupesko
Hagay Lupesko
3 years
Dream team at @MosaicML 🚀
@juliechoi
Julie Shin Choi (she/her)
3 years
I am very grateful to be working with @emilyhutson and @gloriakillen again as we build @MosaicML. That is all. #BestOfPeople #WomeninAI #WomenLeading
1
0
2
@hagay_lupesko
Hagay Lupesko
3 years
I'll be at MLSys on Tuesday 8/30 - would love to connect and meet up, DM me!
@DbrxMosaicAI
Databricks Mosaic Research
3 years
Who’s coming to MLSys 2022 next week? We’re excited to announce that we’re a Gold Sponsor - come check out our booth to talk about creating cutting edge techniques for ML scalability and efficient training.
0
0
4
@DbrxMosaicAI
Databricks Mosaic Research
3 years
How many hours of your life do you lose to debugging CUDA OOM errors? Automatic gradient accumulation can save you valuable time:
databricks.com
With automatic gradient accumulation, Composer lets users seamlessly change GPU types and number of GPUs without having to worry about batch size. CUDA out of memory errors are a thing of the past!
0
4
17
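As a rough illustration of the feature described above: Composer's trainer can choose the microbatch size automatically and retry a step after a CUDA OOM with a smaller split. A minimal sketch, assuming a recent Composer release (newer versions expose this as device_train_microbatch_size="auto"; earlier ones used grad_accum="auto") and an available CUDA device; the data is synthetic.

```python
# Minimal sketch of Composer's automatic gradient accumulation.
# Recent Composer releases expose it as device_train_microbatch_size="auto";
# earlier releases used grad_accum="auto". "auto" requires a CUDA device,
# and the data here is synthetic.
import torch
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import resnet50
from composer import Trainer
from composer.models import ComposerClassifier

model = ComposerClassifier(resnet50(), num_classes=1000)
train_loader = DataLoader(
    TensorDataset(torch.randn(32, 3, 224, 224),
                  torch.randint(0, 1000, (32,))),
    batch_size=16,
)

trainer = Trainer(
    model=model,
    train_dataloader=train_loader,
    max_duration="1ep",
    # On a CUDA out-of-memory error, Composer shrinks the microbatch
    # and retries the step, keeping the optimizer-level batch size
    # fixed across GPU types and counts.
    device_train_microbatch_size="auto",
)
trainer.fit()
```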