Joan Romano
@joanromano
Followers
740
Following
3K
Media
233
Statuses
4K
Engineer @GooglePhotos at @Google, previously @GoogleMaps @Canva | SpaniAustralian 🇪🇸🇦🇺 all the way from Barcelona | No shortcuts to places worth going to
Sydney, New South Wales
Joined November 2009
Imo this is along the lines of some of my earlier posts: talking to an LLM via text is like typing into a DOS terminal before the GUI has been invented. The GUI is an intelligent canvas.
71
138
3K
Our TPUs are headed to space! Inspired by our history of moonshots, from quantum computing to autonomous driving, Project Suncatcher is exploring how we could one day build scalable ML compute systems in space, harnessing more of the sun's power (which emits more power than 100
826
2K
17K
Hear from Michel Devoret, our Chief Scientist of Quantum Hardware, on our latest breakthrough algorithm: Quantum Echoes. His early work on superconducting artificial atoms laid the foundation for the Willow chip, enabling verifiable quantum advantage.
32
184
720
Here's my 6 hour conversation with @dhh, a legendary programmer, creator of Ruby on Rails, author, and race car driver. This was a fun and inspiring conversation on everything from the future of programming & AI to the nature of happiness & productivity to the value of family,
500
906
8K
🎉🎉🎉
For the past 10 years, we've loved being a home to your 9T+ photos & videos! Now with 1.5B+ monthly users, you've made it so much more! Come celebrate our birthday with tips & a peek at new features → https://t.co/pjFuRIlR3R
0
0
0
Gosh, luckily. I thought I was the only weirdo in the room
@thekitze I'm using LLMs all day long, but I'm not letting them write my code. They're looking up APIs and explaining concepts, but I want to reserve the fun part of programming for myself: actually writing code!
0
0
0
Such a wild ride, here's to 20 more years of it
It's been 20 years since @GoogleMaps hit the map 🗺️ After two decades of makeovers, updates and AI, here are our 20 favorite things you can do with Maps →
0
0
1
Here's my conversation with Pieter Levels (@levelsio), self-taught developer and entrepreneur who designed, programmed, shipped, and ran over 40 startups, many of which are hugely successful. In most cases, he did it all by himself, while living the digital nomad life in over 40
441
1K
10K
Interestingly, the recently released Gemma 2 https://t.co/58xdfKhHIl seems to use a combination of sliding window attention as well: "We alternate between a local sliding window attention (Beltagy et al., 2020a,b) and global attention (Luong et al., 2015) in every other
While playing around with simple self-attention mechanisms, I got curious about types of self-attention implementations I hadn't seen in https://t.co/DZDtm4jfqL. Came across sliding window attention, which turns out to be a simple yet powerful variation
0
0
0
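A minimal sketch of the alternating pattern that quote describes, in PyTorch: even layers get a banded (sliding window) causal mask, odd layers a full causal mask. The per-layer even/odd scheduling and the helper names (make_mask, window_size) are my assumptions for illustration; the truncated quote doesn't pin down the exact arrangement.

```python
import torch

def causal_mask(seq_len: int) -> torch.Tensor:
    # True marks positions a query may attend to: key j <= query i.
    return torch.tril(torch.ones(seq_len, seq_len)).bool()

def sliding_window_mask(seq_len: int, window_size: int) -> torch.Tensor:
    # Causal mask restricted to the last window_size keys:
    # query i may attend only to keys j with i - window_size < j <= i.
    ones = torch.ones(seq_len, seq_len)
    band = torch.tril(ones) * torch.triu(ones, diagonal=-(window_size - 1))
    return band.bool()

def make_mask(layer_idx: int, seq_len: int, window_size: int) -> torch.Tensor:
    # Alternate local sliding-window attention with global (full causal) attention.
    if layer_idx % 2 == 0:
        return sliding_window_mask(seq_len, window_size)
    return causal_mask(seq_len)

print(make_mask(0, 6, 3).int())  # banded lower triangle (local layer)
print(make_mask(1, 6, 3).int())  # full lower triangle (global layer)
```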
Same implementation as Causal Attention ( https://t.co/5MMWZISvXZ), except:
1. Adds a window_size param to the constructor, which determines the sliding window size
2. The main change is in how we create and apply the mask
3. The rest remains almost the same
4. Also removed the dropout layer for
1
0
0
Sources:
- Build a Large Language Model (From Scratch) https://t.co/DZDtm4jfqL
0
0
0
Implementing dropout:
1. Add a new Dropout layer at the end, before computing the values
- Randomly sets a fraction of input units to 0 at each update during training
2. The exact placement of dropout can vary between implementations
- Some might apply dropout to the
1
0
0
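A minimal sketch of point 1, applying nn.Dropout to the attention weights after the softmax and before multiplying by the values; as point 2 notes, other placements (e.g. on the raw attention scores) also appear in practice. The tensors here are stand-ins, not code from the thread.

```python
import torch
import torch.nn as nn

torch.manual_seed(1)
dropout = nn.Dropout(p=0.5)  # zeroes ~half the inputs during training, scales the rest by 1/(1-p)

attn_weights = torch.softmax(torch.randn(1, 6, 6), dim=-1)
values = torch.randn(1, 6, 8)

dropout.train()   # dropout is active only in training mode
context = dropout(attn_weights) @ values
dropout.eval()    # at inference time dropout is the identity
context_eval = dropout(attn_weights) @ values
print(context.shape, context_eval.shape)  # torch.Size([1, 6, 8]) twice
```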