Alex Mizrahi

@killerstorm

Followers
5K
Following
107K
Media
469
Statuses
22K

Blockchain tech guy, made world's first token wallet and decentralized exchange protocol in 2012; CTO ChromaWay / Chromia

Kyiv, Ukraine
Joined July 2008
@killerstorm
Alex Mizrahi
8 days
o3 agrees, but it shows a bit more independent thinking:
[image]
@killerstorm
Alex Mizrahi
8 days
Grok 4 agrees it's Vector DB & AI:
[image]
@killerstorm
Alex Mizrahi
8 days
It's also very expensive compared to the cost of LLM inference: as people have estimated, building a Perplexity clone is easy, except that the search API cost would eat all the revenue.
@killerstorm
Alex Mizrahi
8 days
rely on something like Google search, but it's rather awful: it's essentially 'web scraping'-level data quality, the authenticity is usually unknown, it might contain deliberate prompt injections, etc.
@killerstorm
Alex Mizrahi
8 days
I'm rooting for DeFi and game projects on Chromia, but I believe Vector DB & AI have the biggest potential. A searchable global knowledge database could be a holy grail for AI agents, since your output can only be as good as your input data. Currently AI agents largely
@killerstorm
Alex Mizrahi
8 days
What is the most interesting area for Chromia's growth?
@killerstorm
Alex Mizrahi
15 days
RT @rg_eis6:
[image]
@killerstorm
Alex Mizrahi
1 month
you only need to define items & quests and make a front-end.
@killerstorm
Alex Mizrahi
1 month
We will now work on open-sourcing components to empower more developers to build high-quality games on Chromia. E.g. if you want to build a space strategy game with base building, crafting, etc., I think a lot of the MNA backend components would just fit in; then
@killerstorm
Alex Mizrahi
1 month
What's on-chain in MNA? Basically, everything except player movements. That is, all items in the inventory, quests, crafting, building, etc. are on-chain!
@killerstorm
Alex Mizrahi
1 month
it only takes a few clicks to get in, playing directly in a browser on macOS or Windows, with graphics easily matching or exceeding mainstream games of this genre! Great work by the team, and all the people who made WebGPU possible.
@killerstorm
Alex Mizrahi
1 month
Previously, on-chain gaming was associated with huge onboarding friction (need a wallet!), poor UX (sign transactions), and overall subpar game experience and graphics. But as the new version of MNA is based on WebGPU, we can clearly see it doesn't have to be this way:
@MyNeighborAlice
Alice
1 month
Chapter One is live. The new adventure begins NOW. 🌸 🎮 Start playing with one click: After years of building, the gates of Lummelunda are open for a brand-new journey. 💚 Today marks the official release of My Neighbor Alice – Chapter One: A New
@killerstorm
Alex Mizrahi
1 month
RT @Chromia: 🎉 THE DAY WE’VE ALL BEEN WAITING FOR IS HERE! 🏝️ A fully on-chain virtual world built to last, with no gas fees! Today, My Ne…
@killerstorm
Alex Mizrahi
1 month
RT @MyNeighborAlice: A new friend has just arrived in the Archipelago… and it’s extra pudgy. 🐧 You heard that right, we’re beyond excited…
@killerstorm
Alex Mizrahi
1 month
If you want to do experiments like this, here's a tool I made: (It can be a bit more automatic than plain Claude Code when you want it to actually run the training process.) And example output:
@killerstorm
Alex Mizrahi
1 month
Apparently yes: even a single injected input token embedding can have a meaningful effect on perplexity, after a little training of a single MLP. So perhaps it could be used as a more efficient form of RAG, or as a memory for an agent, etc.
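To make the claim concrete, here is a toy numpy illustration (not the author's actual script, and a linear stand-in rather than a real LLM): a single trained "soft token" injected into the input lowers the negative log-likelihood of a target token. The soft token is trained directly here; the experiment in the tweet instead trains a small MLP that produces it from a sentence embedding.

```python
import numpy as np

rng = np.random.default_rng(1)
VOCAB, D = 20, 8

# Toy stand-in "LM": next-token logits from the mean of the input embeddings.
W_out = rng.normal(size=(VOCAB, D))
embed_table = rng.normal(size=(VOCAB, D))

def nll(seq, target):
    """Negative log-likelihood of `target` under the toy model."""
    logits = W_out @ seq.mean(axis=0)
    logits -= logits.max()  # numerical stability
    return float(np.log(np.exp(logits).sum()) - logits[target])

prompt = embed_table[[3, 7, 11]]
target = 5
base = nll(prompt, target)  # NLL without any injected embedding

# Train one injected soft token by gradient descent on the target's NLL.
soft = np.zeros(D)
onehot = np.eye(VOCAB)[target]
lr = 0.5
for _ in range(2000):
    seq = np.vstack([soft, prompt])
    logits = W_out @ seq.mean(axis=0)
    p = np.exp(logits - logits.max())
    p /= p.sum()
    # d NLL / d soft, via the chain rule through the mean pooling
    soft -= lr * (W_out.T @ (p - onehot)) / len(seq)

tuned = nll(np.vstack([soft, prompt]), target)
print(base, tuned)  # the trained soft token lowers the target's NLL
```

Even in this tiny linear setting, one injected vector is enough to steer the output distribution, which is the effect the tweet reports at LLM scale.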
@killerstorm
Alex Mizrahi
1 month
Alright, but can we use an existing sentence embedding to guide a model? I couldn't find a paper which answers this question, so I asked Claude to write a script to check this. And in only 23 minutes Claude wrote a nice report.
[image]
@killerstorm
Alex Mizrahi
1 month
tokens can take information from. E.g. "prefix-tuning" works basically like a prompt, but in latent space: it's found by a training process similar to fine-tuning, and it has been found to be equivalent to full-parameter fine-tuning, at least on some tasks.
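To illustrate the idea, a minimal numpy sketch of prefix-tuning: the "model" here is a frozen linear read-out (a made-up stand-in for a transformer), and only the prefix (virtual-token) embeddings are trained, steering the frozen model toward a target output.

```python
import numpy as np

rng = np.random.default_rng(0)
D, PREFIX_LEN = 8, 2

# Frozen "model": a fixed linear read-out over the mean of the input embeddings.
W = rng.normal(size=(1, D))

def model(seq):
    """seq: (T, D) array of input embeddings -> scalar output."""
    return float(W @ seq.mean(axis=0))

x = rng.normal(size=(4, D))  # fixed "prompt" embeddings (never updated)
target = 3.0

# Prefix-tuning: only these virtual-token embeddings receive gradients.
prefix = np.zeros((PREFIX_LEN, D))
lr = 0.5
for _ in range(2000):
    seq = np.concatenate([prefix, x])
    err = model(seq) - target
    # d(model)/d(prefix row) = W / T, because of the mean pooling
    grad = err * np.tile(W / len(seq), (PREFIX_LEN, 1))
    prefix -= lr * grad

print(round(model(np.concatenate([prefix, x])), 3))  # → 3.0
```

The frozen weights `W` never change; the learned prefix alone moves the output to the target, which is the sense in which a latent-space "prompt" can substitute for fine-tuning.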
@killerstorm
Alex Mizrahi
1 month
In RAG set-ups, a sentence embedding is computed to find relevant pieces of text, but the embedding itself is not sent to the model. With lower-level APIs, we have the ability to provide vectors to the model either as input token embeddings, or as key-value pairs which later
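A minimal numpy sketch of the first option, splicing an external vector into the sequence of input embeddings. The embedding table and dimensions are toy stand-ins, not any real model; in Hugging Face `transformers` this path roughly corresponds to passing `inputs_embeds` to a model's forward call instead of `input_ids`.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and embedding table standing in for an LLM's input layer.
VOCAB, D_MODEL = 100, 16
embed_table = rng.normal(size=(VOCAB, D_MODEL))

def embed_tokens(token_ids):
    """Normal path: token ids -> input embeddings via table lookup."""
    return embed_table[token_ids]

def embed_with_injection(token_ids, injected_vec, position):
    """Low-level path: splice an external vector (e.g. a projected
    sentence embedding) into the sequence of input embeddings."""
    seq = embed_tokens(token_ids)
    return np.concatenate(
        [seq[:position], injected_vec[None, :], seq[position:]], axis=0
    )

prompt = np.array([5, 17, 42])
sentence_vec = rng.normal(size=D_MODEL)  # pretend this came from an embedding model
inputs_embeds = embed_with_injection(prompt, sentence_vec, position=1)
print(inputs_embeds.shape)  # (4, 16): one extra "soft token" in the sequence
```

The model downstream cannot tell the injected row from a real token embedding, which is what makes this a channel for feeding vectors (rather than text) into an LLM.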
@killerstorm
Alex Mizrahi
1 month
Typically, when you use an LLM via an API, it's simple: prompt text goes in, generated text comes out. (Without loss of generality we can ignore multimodality here.) Internally, LLM generation produces a lot of embedding vectors, but they are not exposed in any way.
[image]