Adrian Romo
@AdrianRomo97
Followers
11K
Following
5K
Media
34
Statuses
208
👨🏻‍💻 Backend Engineering Maestro 🎶 Music Enthusiast & Lasagna Connoisseur 🧙🏻‍♂️ ML Sorcerer Passionate Dreamer Extraordinaire
México
Joined August 2012
Save one dollar a day for one year and you will have like a million dollars in only one year.
0
0
1
AudioGPT: Understanding and Generating Speech, Music, Sound, and Talking Head abs: https://t.co/fzLeQnVFHd
21
281
1K
We're releasing GPT-4, a large multimodal model (image & text in, text out) which is a significant advance in both capability and alignment. Still limited in many ways, but passes many qualification benchmarks like the bar exam & AP Calculus:
openai.com
We've created GPT-4, the latest milestone in OpenAI's effort in scaling up deep learning. GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs) that, while less...
148
1K
7K
Ok so, that moment when you solve a bug that had been bothering you for days is beautiful.
0
0
0
Great to join @Espressive_AI as a Backend Dev ☺️
Welcome new hires! We are so excited to have you on the team @Espressive_AI #NewHires #Welcome #LifeAtEspressive
0
0
2
Best of AI Twitter (Sept. 11-18):
- LLMs learn to use software and execute code
- Git Re-Basin: a technique to "merge" deep NN models
- Meta spins off an independent PyTorch Foundation
- Stalker tools, courtesy of computer vision + CCTV
... and more: 1/13
2
78
553
Stable Diffusion implemented using @Tensorflow and #Keras.
- Converted pre-trained models
- Easy to understand code
- Minimal code footprint
Code: https://t.co/oPFfTcn7zz
Google Colab with @Gradio demo: https://t.co/41UCRNZbpg
26
316
2K
We have some exciting updates to SayCan! Together with the updated paper, we're adding new resources to learn more about this work: Interactive site: https://t.co/91QCpwFP3u Blog posts: https://t.co/8fRebheNej and https://t.co/nvEn4Tfxy9 Video:
Super excited to introduce SayCan ( https://t.co/NWyvPubhmE): 1st publication of a large effort we've been working on for 1+ years Robots ground large language models in reality by acting as their eyes and hands while LLMs help robots execute long, abstract language instructions
3
55
235
What does an iPhone eat for breakfast... Siri-al 🥹
19
16
226