Fastino Labs

@fastinoAI

Followers 247 · Following 44 · Media 4 · Statuses 38

Building the first foundational model for agent personalization.

Palo Alto, CA
Joined July 2024
@fastinoAI
Fastino Labs
1 month
Introducing GLiNER-2: Fastino’s next-gen open-source model for unified entity extraction, classification & structured parsing.
• NER, classification & JSON in one blazing-fast pass
• ⚡ <150 ms CPU latency
• 🧩 Apache-2.0 + hosted API
Built by @fastino_ai, unveiled live at …
github.com
fastino-ai/GLiNER2: Unified Schema-Based Information Extraction
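As a concrete illustration of the label-driven extraction behind this announcement, below is a minimal sketch using the earlier open-source `gliner` package's documented Python interface. GLiNER-2 extends this idea to classification and structured JSON output behind a schema-based API, so the checkpoint id and the exact call shape here are illustrative assumptions rather than GLiNER-2's own interface (see the fastino-ai/GLiNER2 repo for that).

    # Minimal sketch: label-driven (prompt-free) entity extraction in the GLiNER family.
    # GLiNER-2 adds classification and structured JSON output in the same pass; this uses
    # the predecessor `gliner` package. The checkpoint id below is illustrative.
    from gliner import GLiNER

    model = GLiNER.from_pretrained("urchade/gliner_small-v2.1")

    text = "Fastino Labs, based in Palo Alto, raised a $17.5M seed round led by Khosla Ventures."
    labels = ["company", "investor", "money", "location"]

    # predict_entities returns a list of dicts with "text", "label", and "score" keys.
    for ent in model.predict_entities(text, labels, threshold=0.5):
        print(f'{ent["text"]} -> {ent["label"]} ({ent["score"]:.2f})')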
@Linkup_platform
Linkup
28 days
We’re kicking off an SF hackathon with our friends at @GoSenso this Saturday! A massive warehouse is buzzing with 400 hackers building all day. Always a pleasure to feel this kind of on-the-ground energy! cc @fastinoAI @gladly @Meet_campfire
@ash_csx
Ash Lewis
28 days
Heating up at the self-evolving agents hack in SF today 👀 @fastinoAI
@fastinoAI
Fastino Labs
28 days
Early preview from our researcher @var6595 of a new foundational model for personalization we've been building at Fastino. At the self-evolving agents hack https://t.co/nRDd9SSE6j
@DhruvAtreja1
Dhruv Atreja
2 months
Had a lot of fun working on this :)
@rohanpaul_ai
Rohan Paul
2 months
This paper builds a tough test for proactive agents and shows current models still fail often. The test is called PROBE (Proactive Resolution of Bottlenecks) and it checks 3 steps: search, identify the real problem, and execute the fix. Each task gives a big pile of emails, docs, …
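To make the three-step protocol concrete, here is a small, hypothetical sketch of what a PROBE-style check could look like. The task fields, the agent interface (search/identify/execute), and the scoring are assumptions for illustration only, not the paper's actual harness.

    # Hypothetical sketch of a PROBE-style evaluation: search -> identify -> execute.
    # The agent object and all field names are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class ProbeTask:
        documents: list[str]      # the pile of emails, docs, etc. given to the agent
        true_bottleneck: str      # ground-truth problem the agent should surface
        accepted_fixes: set[str]  # actions that count as resolving the bottleneck

    def run_probe_task(agent, task: ProbeTask) -> dict:
        retrieved = agent.search(task.documents)   # step 1: search the material
        bottleneck = agent.identify(retrieved)     # step 2: identify the real problem
        action = agent.execute(bottleneck)         # step 3: execute the fix
        return {
            "identified": bottleneck == task.true_bottleneck,
            "resolved": action in task.accepted_fixes,
        }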
@scottcjohnston
Scott Johnston
1 month
#developers, Building an #LLM-powered app, but no GPU? No problem! Check out @fastinoAI's latest OSS model!
(Quoted tweet: @fastinoAI's GLiNER-2 announcement above.)
@ash_csx
Ash Lewis
1 month
Big milestone for our team: GLiNER-2 is live! One model for NER, classification & structured parsing in a single pass. <150 ms CPU latency, open-source (Apache-2.0) + hosted API. 🔗 https://t.co/H2U41R631j Incredible work by @urchadeDS and the @fastinoAI research crew after …
github.com
fastino-ai/GLiNER2: Unified Schema-Based Information Extraction
@ash_csx
Ash Lewis
5 months
Great to see @fastinoAI featured three times in @insightpartners model landscape!
@ash_csx
Ash Lewis
5 months
The Fastino team hiking around the Peak District after a day of coding 🥾🏔️ @fastinoAI #HiringNow #TechJobs #aijobs #SanFranciscoJobs
@ash_csx
Ash Lewis
5 months
Starting the 2025 @fastinoAI team summit! 🇬🇧🦊
@jonchu
Jon Chu // Khosla Ventures
7 months
@wolfejosh you might want to take a look at @fastinoAI
@wolfejosh
Josh Wolfe
7 months
i am convinced on-device inference will dominate next, and my theory of the case is that memory (flash/NAND, etc.) players will move in here (SK, Micron). jensen/nvda want u to believe u need giant clusters for inference, and that may be true for 50% of your off-device search queries …
@fastinoAI
Fastino Labs
7 months
Huge thanks to @mspiro3 and @InsightPartners for helping make our NYC rooftop happy hour a success! Missed @fastinoAI and @george_onx this time? Sign up for first dibs on our next event: https://t.co/asqmicrxzJ
@fastinoAI
Fastino Labs
7 months
Fastino trains cutting-edge language models on <$100K of gaming GPUs. No racks of H100s. No $100M burn. Just smart engineering. A new path for enterprise AI—accurate, fast, and cost-effective. 🔗 Tom’s Hardware feature: https://t.co/QKxoxCm8Ue cc: @tomshardware
tomshardware.com
One more reason to worry about GPU availability.
@fastinoAI
Fastino Labs
7 months
We just dropped our first deep dive on Fastino's TLMs, which are purpose-built to outperform generalist LLMs like GPT-4o on high-scale enterprise tasks.
🦊 Millisecond latency
🦊 Benchmarked against real-world use cases
🦊 Inference on CPU and low-end GPU
Read the full launch …
@scottcjohnston
Scott Johnston
7 months
#developers, Need to run an LLM locally, but the usual suspects are too big to fit on your laptop? Check out @fastinoAI's task-specific language models (TLMs): they fit on a laptop, no GPU required ... and FREE ;-)! https://t.co/10TKSJTzRZ https://t.co/SpO46aVMxz
fastino.ai
The world's first foundation model for adaptive personalization, providing dynamic user-level context and memory for your application.
@tbpn
TBPN
7 months
We asked @george_onx about why TLMs are better than traditional LLMs. "LLMs aren't built for the enterprise. They're built for consumers." "They help you code and get food recipes. They're not built for high-scale enterprise tasks." "Enterprises are spending millions a month on …"
@fastinoAI
Fastino Labs
7 months
BIG NEWS: Fastino raises $17.5M Seed to launch TLMs – Task-Specific Language Models that beat GPT on accuracy and latency. Led by @jonchu at @khoslaventures + joined by @gkm1 at @insightpartners, @AntonioGracias at @valorep, @scottcjohnston (ex-Docker CEO), and @l2k (CEO of …
@george_onx
George Maloney
7 months
Excited to talk about @fastinoAI and our TLM launch on the @TBPN show with @johncoogan and @jordihays today at 2pm PST!
@tbpn
TBPN
7 months
Morning. Here are our guest call-ins today:
- @StanfordReview (The Stanford Review)
- @morganhousel (Collab Fund)
- @sonyatweetybird (Sequoia)
- @wquist (Slow Ventures)
- @mehul (Matic Robots)
- @george_onx (Fastino)
- Aidan Dewar (Nourish)
See you on the stream.
@aaliya_va
Aaliya
7 months
Fastino @fastinoAI just raised $17.5M in Seed funding. 🔥 But here's what makes them different: They're not building massive foundation models. They're building small, purpose-driven AI.
✅ Trained for specific tasks
✅ Built with under $100K in compute
✅ Lightning-fast …
@fastinoAI
Fastino Labs
7 months
Our very own @george_onx is on @tbpn today w/ @johncoogan and @jordihays at 1:45pm PST. Don't miss out on a great conversation!
@TechCrunch
TechCrunch
7 months
Fastino trains AI models on cheap gaming GPUs and just raised $17.5M led by Khosla | TechCrunch
techcrunch.com
Tech giants like to boast about trillion-parameter AI models that require massive and expensive GPU clusters. But Fastino is taking a different approach.