Joan Romano

@joanromano

Followers
740
Following
3K
Media
233
Statuses
4K

Engineer @GooglePhotos at @Google, previously @GoogleMaps @Canva | SpaniAustralian 🇪🇸🇦🇺 all the way from Barcelona | No shortcuts to places worth going to

Sydney, New South Wales
Joined November 2009
@karpathy
Andrej Karpathy
1 month
Imo this is along the lines of how talking to an LLM via text is like typing into a DOS Terminal and "GUI hasn't been invented yet" of some of my earlier posts. The GUI is an intelligent canvas.
71
138
3K
@carlosalcaraz
Carlos Alcaraz
2 months
VAMOOOOOOOS!!!! 🇪🇸
1K
2K
42K
@sundarpichai
Sundar Pichai
2 months
Our TPUs are headed to space! Inspired by our history of moonshots, from quantum computing to autonomous driving, Project Suncatcher is exploring how we could one day build scalable ML compute systems in space, harnessing more of the sun’s power (which emits more power than 100
826
2K
17K
@GoogleQuantumAI
Google Quantum AI
3 months
Hear from Michel Devoret, our Chief Scientist of Quantum Hardware, on our latest breakthrough algorithm: Quantum Echoes. His early work on superconducting artificial atoms laid the foundation for the Willow chip, enabling verifiable quantum advantage.
32
184
720
@lexfridman
Lex Fridman
6 months
Here's my 6 hour conversation with @dhh, a legendary programmer, creator of Ruby on Rails, author, and race car driver. This was a fun and inspiring conversation on everything from the future of programming & AI to the nature of happiness & productivity to the value of family,
500
906
8K
@rolandgarros
Roland-Garros
7 months
It's yours Carlitos 🏆 #RolandGarros
28
613
8K
@joanromano
Joan Romano
7 months
🎉🎉🎉
@googlephotos
Google Photos
7 months
For the past 10 years, we've loved being a home to your 9T+ photos & videos! Now with 1.5B+ monthly users, you’ve made it so much more! Come celebrate our birthday with tips & a peek at new features β†’ https://t.co/pjFuRIlR3R
0
0
0
@joanromano
Joan Romano
8 months
Gosh luckily, I thought I was the only weirdo in the room
@dhh
DHH
8 months
@thekitze I'm using LLMs all day long, but I'm not letting it write my code. It's looking up APIs, it's explaining concepts, but I want to reserve the fun part of programming for myself: Actually writing code!
0
0
0
@joanromano
Joan Romano
11 months
Such a wild ride, here’s to 20 more years of it 🍩
@Google
Google
11 months
It’s been 20 years since @GoogleMaps hit the map 🗺️ After two decades of makeovers, updates and AI, here are our 20 favorite things you can do with Maps ↓
0
0
1
@joanromano
Joan Romano
1 year
That was actually fun 🚀🚀🚀
@TechCrunch
TechCrunch
1 year
Google Maps is rolling out speedometer, speed limits on iPhone and CarPlay globally
0
0
5
@lexfridman
Lex Fridman
1 year
Here's my conversation with Pieter Levels (@levelsio), self-taught developer and entrepreneur who designed, programmed, shipped, and ran over 40 startups, many of which are hugely successful. In most cases, he did it all by himself, while living the digital nomad life in over 40
441
1K
10K
@karpathy
Andrej Karpathy
1 year
Every time I diversify I lose money
561
361
10K
@joanromano
Joan Romano
2 years
Interestingly, the recently released Gemma 2 https://t.co/58xdfKhHIl seems to use a sliding window attention mechanism as well: "We alternate between a local sliding window attention (Beltagy et al., 2020a,b) and global attention (Luong et al., 2015) in every other
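The alternation the quoted report describes can be sketched as a per-layer mask choice. This is an illustrative NumPy sketch, not Gemma 2's actual code; the assumption that even layers are local and odd layers are global is mine and may not match the real model's ordering:

```python
import numpy as np

def layer_mask(layer_idx, t, window_size):
    """Return the (t, t) boolean attention mask for a given layer index."""
    i, j = np.indices((t, t))
    causal = j <= i                       # standard causal constraint
    if layer_idx % 2 == 0:
        # local layer: sliding-window causal attention
        return causal & (j > i - window_size)
    # global layer: full causal attention
    return causal

# layers alternate between local and global masks
masks = [layer_mask(l, t=5, window_size=2) for l in range(4)]
```

Each even-indexed layer restricts every query to the most recent `window_size` keys, while odd-indexed layers keep the full causal receptive field.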
@joanromano
Joan Romano
2 years
While playing around with simple self attention mechanisms, I got curious about different types of self attention implementations that I had not seen in https://t.co/DZDtm4jfqL Came across Sliding Window attention, which turns out to be a simple yet powerful variation
0
0
0
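As a rough NumPy sketch of the idea (the `window_size` name and shapes here are illustrative, not the gist's actual code): each position attends only to the most recent `window_size` positions, itself included:

```python
import numpy as np

def sliding_window_attention(q, k, v, window_size):
    """Causal attention where each query attends only to the last
    `window_size` key positions (itself included)."""
    t, d = q.shape
    scores = q @ k.T / np.sqrt(d)          # (t, t) raw attention scores
    i = np.arange(t)[:, None]              # query positions
    j = np.arange(t)[None, :]              # key positions
    # keep j in (i - window_size, i]: causal AND within the window
    mask = (j <= i) & (j > i - window_size)
    scores = np.where(mask, scores, -np.inf)
    # row-wise softmax; masked entries become exactly 0
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v, weights

rng = np.random.default_rng(0)
q = rng.standard_normal((6, 4))
k = rng.standard_normal((6, 4))
v = rng.standard_normal((6, 4))
out, w = sliding_window_attention(q, k, v, window_size=3)
# each row of w has at most window_size non-zero weights
```

Because the mask is banded, memory for the score matrix can in principle shrink from O(t²) to O(t · window_size), which is the main draw of the variation.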
@joanromano
Joan Romano
2 years
Sources: - Build a Large Language Model (From Scratch) https://t.co/DZDtm4jfqL
0
0
2
@joanromano
Joan Romano
2 years
Same implementation as Causal Attention (https://t.co/5MMWZISvXZ), except: 1. Adds a window_size param to the constructor, determining the sliding window size 2. Main change is in how we create and apply the mask 3. Rest remains almost the same 4. Also removed the dropout layer for
1
0
0
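The mask change in point 2 can be shown in isolation (a hypothetical sketch; the class structure of the linked implementation is omitted, and only the mask construction differs between the two variants):

```python
import numpy as np

def causal_mask(t):
    # standard causal mask: position i attends to every j <= i
    i, j = np.indices((t, t))
    return j <= i

def sliding_window_mask(t, window_size):
    # same as causal, but also drops keys older than window_size steps
    i, j = np.indices((t, t))
    return (j <= i) & (j > i - window_size)

full = causal_mask(4)                     # lower-triangular band of width t
windowed = sliding_window_mask(4, 2)      # lower-triangular band of width 2
```

Everything else in the attention forward pass (scores, softmax, weighted sum of values) is untouched, which is why the change amounts to one extra term in the mask expression.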
@joanromano
Joan Romano
2 years
Implementing dropout: 1. Add a new Dropout layer at the end, before computing values - Randomly sets a fraction of input units to 0 at each update during training time 2. The exact placement of dropout can vary between implementations - Some might apply dropout to the
1
0
0
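A minimal sketch of step 1, assuming the common "inverted dropout" formulation (scale survivors by 1/(1-p) at training time so the expected activation is unchanged and inference is the identity); the class name and seed handling are illustrative:

```python
import numpy as np

class Dropout:
    """Inverted dropout: during training, zero a fraction `p` of inputs
    and rescale the rest by 1/(1-p); at inference time, pass through."""
    def __init__(self, p=0.5, seed=0):
        self.p = p
        self.rng = np.random.default_rng(seed)

    def __call__(self, x, training=True):
        if not training or self.p == 0.0:
            return x
        keep = self.rng.random(x.shape) >= self.p   # Bernoulli keep-mask
        return x * keep / (1.0 - self.p)

drop = Dropout(p=0.5)
x = np.ones((4, 4))
y = drop(x, training=True)   # roughly half the entries zeroed, survivors scaled
z = drop(x, training=False)  # identity at inference
```

Applied to attention, the layer typically sits on the softmaxed attention weights before they multiply the values, though as the thread notes the exact placement varies between implementations.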