Chris Botica Profile
Chris Botica

@chrisbotica

Followers: 180 · Following: 9K · Media: 182 · Statuses: 857

building AI for defense 🇺🇸, Soli Deo Gloria

Joined September 2020
@chrisbotica
Chris Botica
1 month
It is entirely possible that Charlie Kirk's killer can turn to Jesus, repent, meet Charlie in heaven, and be best of friends for eternity
0
0
0
@thenewarea51
Thenewarea51
1 month
The greatest hummingbird feeder ever created, a KC-135 bird feeder! 😎
718
7K
71K
@MathMatize
MathMatize Memes
5 months
BREAKING: First order of Math Pope Leo XIV is to update the notation of cross products
19
523
5K
@MattWalshBlog
Matt Walsh
7 months
Christians are the most persecuted group in the world. Their plight is almost entirely ignored by corporate media. The horrible atrocities in Syria are part of this longstanding pattern. Pray for our persecuted brothers and sisters across the globe.
3K
10K
57K
@thedankoe
DAN KOE
8 months
“Everyone is going to make their own apps with AI” My friend, you don't even make your own food. You'll still pay a few bucks to use an app.
419
647
11K
@itzik009
Timmy Ghiurau
8 months
🚀 Want to be one of the first to test AI-powered gaming? 🎮 If you love Minecraft, live in the U.S., and are 18+, I have something special for you. 👀 DM me if you want to test something cutting-edge before anyone else. Only 50 spots available! https://t.co/bYcefbVi5e 🔥
2
4
16
@TolkienWonder
The Wonder of Tolkien
8 months
A good life consists of both
10
195
2K
@DimitrisPapail
Dimitris Papailiopoulos
8 months
o3 can't multiply beyond a few digits... But I think multiplication, addition, maze solving, and easy-to-hard generalization are actually solvable on standard transformers... with recursive self-improvement. Below is the accuracy of a tiny model teaching itself how to add.
56
118
1K
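The accuracy plot from the original post isn't reproduced here. As a rough illustration of the recursive self-improvement recipe the post describes, here is a minimal sketch, assuming a self-consistency filter and a stand-in model; the names (`TinyAdder`, `self_improve`) are hypothetical, not the author's code: train on easy sums, then let the model pseudo-label harder ones for its own next training round.

```python
# Hypothetical sketch of easy-to-hard self-improvement on addition.
import random
from collections import Counter

class TinyAdder:
    """Stand-in for a small transformer; answers noisily for illustration."""
    def fit(self, data): ...                 # training step elided
    def sample_answer(self, a, b):
        return a + b if random.random() > 0.1 else a + b + random.choice([-1, 1])

def make_problems(digits, n):
    lo, hi = 10 ** (digits - 1), 10 ** digits - 1
    return [(random.randint(lo, hi), random.randint(lo, hi)) for _ in range(n)]

def majority_answer(model, a, b, k=9):
    """Self-consistency filter: sample k answers, keep a strict majority."""
    votes = Counter(model.sample_answer(a, b) for _ in range(k))
    ans, count = votes.most_common(1)[0]
    return ans if count > k // 2 else None

def self_improve(model, max_digits=10, per_round=1000):
    """Seed with 1-digit sums, then bootstrap to longer operands using the
    model's own majority-voted answers as pseudo-labels."""
    data = [((a, b), a + b) for a, b in make_problems(1, per_round)]
    model.fit(data)
    for digits in range(2, max_digits + 1):
        for a, b in make_problems(digits, per_round):
            ans = majority_answer(model, a, b)
            if ans is not None:              # keep only confident answers
                data.append(((a, b), ans))
        model.fit(data)                      # retrain on the grown set

self_improve(TinyAdder())
```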
@LilSketchy2
๐–‘๐–Ž๐–‘๐–˜k๐–Š๐–™๐–ˆ๐–.๐–Š๐–™๐– โ™ก ้‚ชๆ•™
9 months
Oranges are literally pre-sliced and you don't believe in God
272
6K
50K
@chrisbotica
Chris Botica
9 months
it was foretold
@jimcramer
Jim Cramer
9 months
When will they let NVDA go higher???
0
0
1
@probflow
rami
9 months
R1 CoT be like
2
9
61
@jxmnop
Jack Morris
9 months
with o1 and now R1, models are now generating tens of thousands of tokens to solve hard problems. o3 is likely generating hundreds of thousands or millions of tokens. apparently the tokens for solving one task in ARC-AGI with the slowest o3 model cost over $3000. this is why i
71
107
1K
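The dollar figure is plain token arithmetic. A back-of-the-envelope check follows, where the per-token price is an assumption for illustration rather than a quoted rate:

```python
# Rough cost arithmetic for long reasoning traces.
PRICE_PER_1M_TOKENS = 60.0   # assumed $/1M output tokens, not an official rate
for tokens in (10_000, 100_000, 1_000_000, 50_000_000):
    print(f"{tokens:>12,} tokens -> ${tokens / 1e6 * PRICE_PER_1M_TOKENS:,.2f}")
# At this assumed price, a $3000 task implies roughly 50M generated tokens.
```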
@chrisbotica
Chris Botica
9 months
wait, wait
@teortaxesTex
Teortaxes▶️ (DeepSeek Twitter 🐋 die-hard fan, 2023 – ∞)
9 months
somebody wake up Yud
0
0
1
@MathMatize
MathMatize Memes
9 months
BREAKING: Donald Trump announces he intends to change the name of The Chinese Remainder Theorem to The USA Remainder Theorem
36
455
4K
@finbarrtimbers
finbarr
10 months
the RL with Execution Feedback (RLEF) paper is super neat. the authors get feedback from running code in a multi-turn setup and use it as a reward signal
6
46
340
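A minimal sketch of that loop, assuming a `policy.generate(context) -> str` interface; the function names are hypothetical stand-ins for the paper's setup, not its code: run the candidate program against tests, feed the failure back as context, and reward only final success.

```python
# Sketch: multi-turn code generation with execution feedback as the reward.
import os, subprocess, tempfile

def run_tests(code: str, tests: str, timeout: float = 5.0):
    """Execute candidate code plus tests in a subprocess; return (passed, stderr)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code + "\n\n" + tests)
        path = f.name
    try:
        proc = subprocess.run(["python", path], capture_output=True,
                              text=True, timeout=timeout)
        return proc.returncode == 0, proc.stderr
    except subprocess.TimeoutExpired:
        return False, "timeout"
    finally:
        os.remove(path)

def rollout(policy, prompt: str, tests: str, max_turns: int = 3) -> float:
    """Each failure's stderr goes back into the context; the scalar reward
    used for the RL update is 1.0 only if a final attempt passes."""
    context = prompt
    for _ in range(max_turns):
        code = policy.generate(context)
        passed, err = run_tests(code, tests)
        if passed:
            return 1.0
        context += f"\n# previous attempt failed:\n# {err}\n"
    return 0.0
```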
@chrisbotica
Chris Botica
10 months
For to us a child is born, to us a son is given, and the government will be on his shoulders. And He will be called Wonderful Counselor, Mighty God, Everlasting Father, Prince of Peace
0
0
1
@chrisbotica
Chris Botica
10 months
Intelligence downstream of agency
@iamgingertrash
simp 4 satoshi
10 months
The core innovation we're pursuing is that interacting with AI should be an interface to loop over actions, not chat. We think AGI is downstream of agency - the ability to take actions, ground them in reality, and accomplish medium- to longer-term tasks. A wordcel could never.
0
0
1
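As a sketch of what "an interface to loop over actions" could look like in code: an entirely hypothetical structure, not their product, in which the model proposes an action, a tool grounds it in a real result, and the loop repeats until done.

```python
# Hypothetical action loop: act, observe, repeat, instead of chat turns.
from dataclasses import dataclass

@dataclass
class Step:
    action: str
    observation: str

def act_loop(model, tools: dict, goal: str, max_steps: int = 10):
    """model.choose returns (action_name, argument); 'done' ends the episode."""
    history: list[Step] = []
    for _ in range(max_steps):
        action, arg = model.choose(goal, history)
        if action == "done":
            return arg                       # final answer
        result = tools[action](arg)          # ground the action in reality
        history.append(Step(f"{action}({arg})", result))
    return None
```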
@scaling01
Lisan al Gaib
10 months
META JUST KILLED TOKENIZATION!!! A few hours ago they released "Byte Latent Transformer", a tokenizer-free architecture that dynamically encodes bytes into patches and achieves better inference efficiency and robustness! (I was just talking about how we need dynamic
98
709
5K
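The post is cut off, but the mechanism it names is grouping raw bytes into variable-length patches, with boundaries placed where the next byte is hard to predict. A toy sketch of that idea follows; the bigram surprisal statistic and threshold are stand-ins for Meta's small learned entropy model, not their method:

```python
# Toy entropy-based byte patching: open a new patch at hard-to-predict bytes,
# so downstream compute concentrates on high-entropy regions of the input.
import math
from collections import Counter, defaultdict

def surprisals(data: bytes):
    """Estimate per-byte surprisal from bigram counts within the data itself."""
    pairs = defaultdict(Counter)
    for a, b in zip(data, data[1:]):
        pairs[a][b] += 1
    out = [0.0]                              # first byte has no context
    for a, b in zip(data, data[1:]):
        p = pairs[a][b] / sum(pairs[a].values())
        out.append(-math.log2(p))
    return out

def patch(data: bytes, threshold: float = 2.0):
    """Split bytes into variable-length patches at high-surprisal positions."""
    s = surprisals(data)
    patches, start = [], 0
    for i in range(1, len(data)):
        if s[i] > threshold:
            patches.append(data[start:i])
            start = i
    patches.append(data[start:])
    return patches

print(patch(b"aaaaaaXbbbbbbYcccccc"))        # [b'aaaaaa', b'Xbbbbbb', b'Ycccccc']
```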
@nrehiew_
wh
10 months
This paper from Meta proposes a method where the model reasons not in token space but directly in its hidden state. The authors also do a lot of cool interpretability work in this paper. Aesthetically, I like it a lot, and it's simple to implement
@nrehiew_
wh
11 months
I'm convinced that reasoning (ICL or CoT) should be done in latent space, and that what we currently do, forcing it to happen in human-readable token space, is suboptimal
21
86
1K
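The mechanism both posts gesture at is easy to sketch: instead of decoding a chain-of-thought token and re-embedding it at each step, feed the last hidden state straight back in as the next input embedding. A toy version follows, with placeholder dimensions and names; it is not the paper's implementation:

```python
# Toy "reasoning in latent space": loop the final hidden state back as the
# next input embedding for a few silent steps, then decode an answer token.
import torch
import torch.nn as nn

class LatentReasoner(nn.Module):
    def __init__(self, vocab=100, d=64, latent_steps=4):
        super().__init__()
        self.embed = nn.Embedding(vocab, d)
        self.core = nn.TransformerEncoderLayer(d, nhead=4, batch_first=True)
        self.head = nn.Linear(d, vocab)
        self.latent_steps = latent_steps

    def forward(self, tokens):                    # tokens: (batch, seq)
        h = self.embed(tokens)                    # (batch, seq, d)
        for _ in range(self.latent_steps):
            out = self.core(h)                    # run the shared block
            thought = out[:, -1:, :]              # last hidden state, no decode
            h = torch.cat([h, thought], dim=1)    # appended as a new "input"
        return self.head(self.core(h)[:, -1, :])  # decode only at the end

logits = LatentReasoner()(torch.randint(0, 100, (2, 8)))
print(logits.shape)                               # torch.Size([2, 100])
```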
@SakanaAILabs
Sakana AI
10 months
Introducing An Evolved Universal Transformer Memory https://t.co/IKt4l4YPvA Neural Attention Memory Models (NAMMs) are a new kind of neural memory system for Transformers that not only boost their performance and efficiency but are also transferable to other foundation models,
8
118
472
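The announcement is truncated, but the recoverable gist is a learned controller that decides which entries of a transformer's KV cache to keep. A toy stand-in follows: a hand-written usage score instead of NAMMs' learned evaluation of attention, so it illustrates only the evict-unused-memory idea:

```python
# Toy KV-cache pruning by attention usage, a hand-rolled stand-in for the
# learned memory model NAMMs describe: keep the tokens attention actually used.
import torch

def prune_kv(keys, values, attn, keep: int):
    """keys/values: (T, d); attn: (recent_queries, T) attention rows.
    Score each cached token by total attention received; keep the top-k."""
    scores = attn.sum(dim=0)                       # usage per cached token
    idx = scores.topk(keep).indices.sort().values  # preserve original order
    return keys[idx], values[idx]

T, d = 128, 64
k, v = torch.randn(T, d), torch.randn(T, d)
attn = torch.softmax(torch.randn(16, T), dim=-1)   # 16 recent query rows
k2, v2 = prune_kv(k, v, attn, keep=32)
print(k2.shape, v2.shape)                          # (32, 64) each
```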