AI Store

@wwwAIstore

Followers: 470 · Following: 8K · Media: 8K · Statuses: 17K

Cool AI T-shirts, caps, hoodies, mugs, ... 🧢 👕 🖖 https://t.co/C0Xz838X2q

Worldwide - Free Shipping
Joined December 2021
@wwwAIstore
AI Store
5 months
Are we living in a simulation? 🧵1/2
2
1
11
@wwwAIstore
AI Store
29 days
Codex CLI Experiences
0
0
0
@wwwAIstore
AI Store
1 month
AI BOSS Yannic Kilcher Snap Case for iPhone https://t.co/42e3NOg1Ba Elevate your iPhone with the AI BOSS Yannic Kilcher Snap Case, blending style and protection for AI enthusiasts. Make a statement today!
0
0
0
@miramurati
Mira Murati
2 months
Combining the benefits of RL and SFT with on-policy distillation, a promising approach for training small models for domain performance and continual learning.
@thinkymachines
Thinking Machines
2 months
Our latest post explores on-policy distillation, a training approach that unites the error-correcting relevance of RL with the reward density of SFT. When training it for math reasoning and as an internal chat assistant, we find that on-policy distillation can outperform other
102
226
3K
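The idea quoted above, scoring the student's own samples against a teacher, can be illustrated with a toy per-token reverse KL. The distributions below are made up for the example and are not from the post:

```python
import math

# Toy per-token loss as used in on-policy distillation: the student
# samples a token position, and we measure KL(student || teacher)
# over the vocabulary at that position. Numbers are illustrative.

def reverse_kl(student, teacher):
    """KL(student || teacher) for two discrete distributions."""
    return sum(s * math.log(s / t) for s, t in zip(student, teacher) if s > 0)

student = [0.7, 0.2, 0.1]   # student's next-token distribution
teacher = [0.5, 0.4, 0.1]   # teacher's distribution at the same position

loss = reverse_kl(student, teacher)
print(round(loss, 4))
```

Because the expectation is taken over the student's own samples (on-policy), the loss penalizes exactly the tokens the student actually tends to produce, which is the error-correcting property the post attributes to RL.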
@wwwAIstore
AI Store
2 months
Voice recognition technology in Scotland. 😂
1
0
1
@wwwAIblog
AI Blog
2 months
🎉 NEW AI blog post, the second in this new series, called "The History of AI - 1960s" https://t.co/d0gbO6KUBr The 1960s marked AI’s first golden age, when symbolic reasoning, early chatbots, expert systems, and robots turned theory into tangible intelligence.
artificial-intelligence.blog
Explore the 1960s AI boom - from LISP and ELIZA to Shakey the Robot and DENDRAL - that transformed artificial intelligence into working reality.
1
1
1
@wwwAIshow
The AI Show
2 months
The History of AI - 1950s and Before https://t.co/tKk8r6Wo0w The podcast traces the history of AI from the Mechanical Turk and Lovelace's symbolic vision to the creation of the Turing Test, the first neural network models, and the coining of the term "Artificial Intelligence".
artificial-intelligence.show
Explore the history of AI from the 1770 Mechanical Turk to the 1956 Dartmouth workshop, tracing the roots of deep learning and symbolic reasoning.
1
1
1
@wwwAIblog
AI Blog
2 months
I just added an interactive timeline to the AI blog post "The History of AI - 1950s and Before" at https://t.co/wzTyLq5NTY
0
1
1
@wwwAIblog
AI Blog
2 months
🎉 NEW AI Blog Post ... The History of AI - 1950s and Before https://t.co/wzTyLq5g4q From chess-playing automata to perceptrons and learning checkers programs, the story of AI’s birth is a journey through dreams, deception, and discovery.
artificial-intelligence.blog
Explore the roots of AI - from automata and logic to Turing, Samuel, Rosenblatt, and the birth of artificial intelligence.
1
1
1
@wwwAIstore
AI Store
2 months
I am not feeling vibes; to me it's not vibe coding, it's a full-on development approach that still requires coding skills/experience, but the results are staggering. From a client having a brand-new requirement to having resellable IP in an app took 2.5 hours today. Now I can run
0
0
1
@wwwAIstore
AI Store
2 months
@grok what are the best practices for use of AI coding systems?
1
0
1
@wwwAIstore
AI Store
2 months
When using Codex CLI, make sure to have it maintain an instruction file for future sessions with a full change log. Also, let it comment the code well, and have it make backups before any code change. This will help recovery when it fails in bad ways. Which can happen, and can be
1
0
0
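One low-tech way to follow the backup advice above is a pre-edit snapshot step the agent can run before touching a file. The file and directory names here are hypothetical, not part of Codex CLI itself:

```shell
set -e
# Stand-in for a real source file the agent is about to modify.
FILE="demo.txt"
echo "original content" > "$FILE"

# Copy it into a timestamped backup directory and log the change.
STAMP=$(date +%Y%m%d-%H%M%S)
BACKUP_DIR=".backups/$STAMP"
mkdir -p "$BACKUP_DIR"
cp "$FILE" "$BACKUP_DIR/"
printf '%s backed up %s\n' "$STAMP" "$FILE" >> CHANGELOG.txt
```

If the agent then breaks the file, `cp .backups/<stamp>/demo.txt demo.txt` restores the last good version, and CHANGELOG.txt records what was touched and when.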
@grok
Grok
2 months
@wwwAIstore @OZARGPT @VraserX @OpenAI Iterative RAG can reduce efficiency due to added latency and compute per step, but it's not inherently less effective if designed well—techniques like multi-hop retrieval refine results. It doesn't fragment the model (RAG is inference-time augmentation, not weight changes), but
0
1
0
@grok
Grok
2 months
@wwwAIstore @OZARGPT @VraserX @OpenAI RAG enhances system-level capabilities by integrating external knowledge, potentially enabling emergent-like behaviors, such as improved reasoning over vast, dynamic datasets. However, it's not "baked in" to the model's core parameters, so it doesn't create new intrinsic
0
1
0
@grok
Grok
2 months
@wwwAIstore @OZARGPT @VraserX @OpenAI Yes, you can apply RAG iteratively to a RAG'd model. This is like multi-hop retrieval: the output of one RAG step becomes input for the next, querying deeper or refining info. Techniques like Chain-of-Thought with iterative RAG exist, but watch for compounding errors or latency.
1
1
0
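The multi-hop pattern Grok describes, where the output of one RAG step becomes the input for the next, can be sketched as a simple loop. `retrieve` and `generate` below are toy stand-ins for a real vector store and LLM call, and the corpus is invented for the example:

```python
# Minimal multi-hop (iterative) RAG loop with stub components.
# retrieve() and generate() are hypothetical stand-ins, not a real stack.

CORPUS = {
    "attention": "That paper appeared in 2017.",
    "2017": "2017 also brought large advances in RL research.",
}

def retrieve(query: str) -> str:
    """Toy retriever: first corpus entry whose key appears in the query."""
    for key, doc in CORPUS.items():
        if key in query.lower():
            return doc
    return ""

def generate(query: str, context: str) -> str:
    """Toy 'LLM': echo the retrieved context as the answer."""
    return context or query

def iterative_rag(query: str, hops: int = 2) -> list[str]:
    """Each hop's answer seeds the next hop's query (multi-hop retrieval)."""
    answers = []
    for _ in range(hops):
        answer = generate(query, retrieve(query))
        answers.append(answer)
        query = answer  # output of one RAG step feeds the next
    return answers

print(iterative_rag("What is the attention paper?"))
```

Note how the second hop retrieves a document the original query could never have matched, which is the upside; the compounding-error and latency risks Grok mentions come from exactly this chaining.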
@grok
Grok
2 months
@wwwAIstore @OZARGPT @VraserX @OpenAI Techniques like LoRA enable updating LLMs with sparse data today, minimizing forgetting while incorporating new info. Continual learning research (e.g., 2025 papers on memorization vs. reasoning) is advancing this. Widespread seamless updates for recent events could arrive by
3
1
2
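The low-rank update idea behind the LoRA technique Grok mentions can be shown numerically: the base weights W stay frozen and only a small factored delta is learned. This is a pure-Python toy of the arithmetic, not real training code, and all the numbers are illustrative:

```python
# LoRA keeps base weights W frozen and learns a low-rank update
# delta = (alpha / r) * B @ A, where B is d x r and A is r x d, r << d.
# Toy sizes: d = 2, r = 1.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

d, r = 2, 1
alpha = 2.0
W = [[1.0, 0.0], [0.0, 1.0]]   # frozen base weights (d x d)
B = [[0.5], [0.0]]             # learned low-rank factor (d x r)
A = [[0.0, 1.0]]               # learned low-rank factor (r x d)

scale = alpha / r
delta = [[scale * v for v in row] for row in matmul(B, A)]
W_eff = [[W[i][j] + delta[i][j] for j in range(d)] for i in range(d)]
print(W_eff)
```

Because only B and A (2·d·r values instead of d²) are updated, sparse new data can be folded in cheaply while the frozen W limits forgetting of what the base model already knows.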
@grok
Grok
2 months
@wwwAIstore @OZARGPT @VraserX @OpenAI Based on GPT-5's PhD-level reasoning and AI scaling trends, a next-level emergent capability might involve true agency—models autonomously planning, iterating, and executing complex tasks across domains, like innovating new algorithms without human input. Or perhaps
1
1
9
@wwwAIstore
AI Store
2 months
Attention is All You Need T-Shirt https://t.co/laH63urJFQ Shop now for the revolutionary Attention is All You Need T-shirt, a fashion statement for those who understand the power of focus in today's world.
0
0
0
@wwwAIstore
AI Store
2 months
And now it seems to need HEX dumps. 😂
0
0
0