Ruslan Abdikeev

@aruslan

Followers
553
Following
5K
Media
157
Statuses
4K

Ex-musician enjoying fragments of unevenly distributed future and having fun with Google Lens and C++ at Google SF. https://t.co/ucHNiFUQbX

San Francisco, CA
Joined May 2007
@lauriewired
LaurieWired
3 months
The biggest predictor of coding ability is Language Aptitude. Not Math. A study published in Nature found that numeracy accounts for just 2% of skill variance. Meanwhile, the neural behaviors associated with language accounted for 70% of skill variance.
407
1K
11K
@BartWronsk
Bart Wronski 🇺🇦🇵🇸
2 years
I am excited to finally share our recent paper "Filtering After Shading With Stochastic Texture Filtering" (with Matt Pharr, @marcosalvi, and Marcos Fajardo), published at ACM I3D'24 / PACM CGIT, where we won the best paper award! 1/N
7
99
485
@aruslan
Ruslan Abdikeev
2 years
The 'High-throughput FizzBuzz' solution by ais523 is awe-inspiring. I was blown away in the third phase, when the FizzBuzz bytecode interpreter, AVX2, and elegant decimal arithmetic techniques all come together seamlessly: https://t.co/7qfYXTrGYJ
0
2
6
@Rainmaker1973
Massimo
2 years
100 years of Star Wars (with Midjourney and Pika Labs) [📹 douggypledger] https://t.co/NqtskkphMq
296
3K
16K
@maximemaisonneu
Maxime Maisonneuve
2 years
I'm glad to see the excitement about iOS 17's Object Capture feature in my latest tweet! For you, I've created a new demo of our AR Code application, currently in development. We're still in the heart of Paris, and here we highlight this remarkable statue in the garden of the Palais-Royal.
35
231
1K
@aruslan
Ruslan Abdikeev
3 years
Great compendium on Transformers training and inference, and super useful links to latency and FLOPS angles: https://t.co/1pxlPmb2Gu
@AiEleuther
EleutherAI
3 years
The most common question we get about our models is "will X fit on Y GPU?" This, and many more questions about training and inferring with LLMs, can be answered with some relatively easy math. By @QuentinAnthon15, @BlancheMinerva, and @haileysch__ https://t.co/3PqbxSAKEB
0
0
0
@aruslan
Ruslan Abdikeev
3 years
Andrej does it again! Fantastic video lecture! Also, a great way to "introduce" Shannon's classic 1948 paper on information theory :)
@karpathy
Andrej Karpathy
3 years
🔥 New (1h56m) video lecture: "Let's build GPT: from scratch, in code, spelled out." https://t.co/2pKsvgi3dE We build and train a Transformer following the "Attention Is All You Need" paper in the language modeling setting and end up with the core of nanoGPT.
0
3
8
@levwalkin
Lev Walkin
3 years
Sunken ships and ChatGPT. A thread. In the forties and fifties, atmospheric and underwater testing of atomic weapons created excess background radiation, roughly +10% over the natural level. 1/8
3
12
68
@aruslan
Ruslan Abdikeev
3 years
...and now let's go deeper into the hallucinating world of chatGPT:
@jamesjyu
james yu
3 years
This GPT virtual machine post is only the tip of the iceberg. @joshlabau and I have discovered that text-davinci-003 has the capability to do something we're calling HALLUCINATED SCRIPTS Buckle up for a thread, this one is mind bending 🤯 https://t.co/RGIjpEyQRK
0
0
3
@aruslan
Ruslan Abdikeev
3 years
This is absolutely insane: "We can chat with this Assistant chatbot, locked inside the alt-internet attached to a virtual machine, all inside ChatGPT's imagination." Building A Virtual Machine inside ChatGPT
0
0
3
@aruslan
Ruslan Abdikeev
3 years
Capturing and Animation of Body and Clothing from Monocular Video (Videos and code are "coming soon".) https://t.co/PIoSIbP5jj
0
0
0
@KevEdwardsRetro
Kevin Edwards ( Retro Videogame development )
3 years
30 years ago we were working on SNES 'Spider-Man and the X-Men in Arcade's Revenge'. We had very little time for this project and had 3 very experienced coders on-board to get it done. Here's a FAX from production at Acclaim to give you a flavour of the pressure we were under 1/2
148
580
3K
@aruslan
Ruslan Abdikeev
3 years
That's going to be fun! While 0x00 after free() is less impactful than 0xDD, I'm sure tons of bugs are going to be uncovered. I'd obviously have preferred 0xDD, because 0x00 is going to _hide_ new bugs in addition to uncovering the old ones, but let's think of it as a trajectory.
1
1
0
@dougallj
Dougall
3 years
New blog post: Faster zlib/DEFLATE decompression on the Apple M1 (and x86) https://t.co/qMVA6kAlMT
16
43
205
@aruslan
Ruslan Abdikeev
3 years
One more item checked off my secret "that's impossible" list. The previous one was a Blade Runner / Enemy of the State image enhancement. We truly live in the future. "Do you have the same one, but with mother-of-pearl buttons?" "Unfortunately, no." "No? We'll keep looking."
@Google
Google
3 years
Did you know you can search with text and image at the same time? Explore Google Search Multisearch, and more.
0
0
3
@ArturSgi
Artur
3 years
At first I didn't get it, and then I really got it
298
844
12K
@aruslan
Ruslan Abdikeev
3 years
AI is lazy: solving easy puzzles is simpler than learning to escape the sandbox. Once the puzzles get hard, would AI learn to break the box? "Other judging systems perform full sand-boxing of the computation to prevent a generated code sample from doing harm like deleting files."
@davisblalock
Davis Blalock
3 years
"Language Models Can Teach Themselves to Program Better" This paper changed my thinking about what future language models will be good at, mostly in a really concerning way. Let's start with some context: [1/11]
0
0
1
@_karthikramani
Karthik Ramani
3 years
Rare technical commentary from Pat in the Intel earnings call: “Our software release on our discrete graphics, right, was clearly underperforming.” 1/2
1
2
3
@aruslan
Ruslan Abdikeev
3 years
Has anyone come across a technical or poetic translation of the word "connascence"? I got stuck the other day in the middle of a talk about "cohesion" ("связность") and "coupling" ("зацепление") when I suddenly realized I don't know an established Russian translation for it, neither as a word nor as a term. Any ideas?
4
0
1