Fastino Labs
@fastinoAI
Followers: 247 · Following: 44 · Media: 4 · Statuses: 38
Building the first foundational model for agent personalization.
Palo Alto, CA
Joined July 2024
Introducing GLiNER-2 - Fastino’s next-gen open-source model for unified entity extraction, classification & structured parsing. • NER, classification & JSON in 1 blazing-fast pass • ⚡ <150 ms CPU latency • 🧩 Apache-2.0 + hosted API Built by @fastino_ai — unveiled live at
github.com
Unified Schema-Based Information Extraction. Contribute to fastino-ai/GLiNER2 development by creating an account on GitHub.
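For readers unfamiliar with the "one pass" idea in the announcement above: a unified extractor takes a single schema describing entity types, classification labels, and an output shape, and returns everything from one call over the text. The snippet below is a toy, regex-based illustration of that interface shape only — it is not GLiNER-2's actual API; the `extract` function and schema keys are assumptions made for illustration.

```python
import re

def extract(text, schema):
    """Toy single-pass extractor: entities via regex, classification via
    keyword counting, assembled into one JSON-like result. Illustrative only."""
    # Entity extraction: each entity type maps to a regex pattern.
    entities = {
        etype: re.findall(pattern, text)
        for etype, pattern in schema["entities"].items()
    }
    # Classification: pick the label whose keywords appear most often.
    scores = {
        label: sum(text.lower().count(kw) for kw in kws)
        for label, kws in schema["labels"].items()
    }
    label = max(scores, key=scores.get)
    # Structured output: entities + label from a single pass over the text.
    return {"entities": entities, "label": label}

schema = {
    "entities": {"money": r"\$[\d.]+[MK]?", "handle": r"@\w+"},
    "labels": {"funding": ["raises", "seed"], "product": ["model", "launch"]},
}
result = extract("Fastino raises $17.5M Seed, says @fastinoAI", schema)
print(result)
```

The point of the unified design is that one schema and one forward pass replace separate NER, classifier, and parser calls; a real model replaces the regexes and keyword counts above with learned extraction.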
We’re kicking off an SF hackathon with our friends at @GoSenso this Saturday! A massive warehouse is buzzing with 400 hackers building all day. Always a pleasure to feel this kind of on-the-ground energy! cc @fastinoAI @gladly @Meet_campfire
Early preview from our researcher @var6595 of a new foundational model for personalization we've been building at Fastino. At the self-evolving agents hack https://t.co/nRDd9SSE6j
Had a lot of fun working on this :)
This paper builds a tough test for proactive agents and shows current models still fail often. The test is called PROBE (Proactive Resolution of Bottlenecks) and it checks 3 steps: search, identify the real problem, execute the fix. Each task gives a big pile of emails, docs,
#developers, Building an #LLM-powered app, but no GPU? No problem! Check out @fastinoAI's latest OSS model!
Big milestone for our team - GLiNER-2 is live! One model for NER, classification & structured parsing in a single pass. <150 ms CPU latency, open-source (Apache-2.0) + hosted API. 🔗 https://t.co/H2U41R631j Incredible work by @urchadeDS and the @fastinoAI research crew after
The Fastino team hiking around the peak district after a day of coding 🥾🏔️ @fastinoAI #HiringNow #TechJobs #aijobs #SanFranciscoJobs
@wolfejosh you might want to take a look at @fastinoAI
i am convinced on-device inference will dominate next, and my thesis in this case is that memory (flash/NAND, etc.) players will move in here (SK, Micron). jensen/nvda want u to believe u need giant clusters for inference, and that may be true for 50% of your off-device search queries
Huge thanks to @mspiro3 and @InsightPartners for helping make our NYC rooftop happy hour a success! Missed @fastinoAI and @george_onx this time? Sign up for first dibs on our next event: https://t.co/asqmicrxzJ
Fastino trains cutting-edge language models on <$100K of gaming GPUs. No racks of H100s. No $100M burn. Just smart engineering. A new path for enterprise AI—accurate, fast, and cost-effective. 🔗 Tom’s Hardware feature: https://t.co/QKxoxCm8Ue cc: @tomshardware
tomshardware.com
One more reason to worry about GPU availability.
We just dropped our first deep dive on Fastino's TLMs, which are purpose-built to outperform generalist LLMs like GPT-4o on high-scale enterprise tasks. 🦊 Millisecond latency 🦊 Benchmarked against real-world use cases 🦊 Inference on CPU and low-end GPU Read the full launch
#developers, Need to run an LLM locally, but the usual suspects are too big to fit on your laptop? Check out @fastinoAI's task-specific language models (TLMs): they fit on a laptop, no GPU required ... and FREE ;-)! https://t.co/10TKSJTzRZ
https://t.co/SpO46aVMxz
fastino.ai
The world's first foundation model for adaptive personalization, providing dynamic user-level context and memory for your application.
We asked @george_onx why TLMs are better than traditional LLMs. "LLMs aren't built for the enterprise. They're built for consumers. They help you code and get food recipes. They're not built for high-scale enterprise tasks." "Enterprises are spending millions a month on
BIG NEWS: Fastino raises $17.5M Seed to launch TLMs – Task-Specific Language Models that beat GPT on accuracy and latency. Led by @jonchu at @khoslaventures + joined by @gkm1 at @insightpartners, @AntonioGracias at @valorep, @scottcjohnston (ex-Docker CEO), and @l2k (CEO of
Excited to talk about @fastinoAI and our TLM launch on the @TBPN show with @johncoogan and @jordihays today at 2pm PST!
Morning. Here are our guest call-ins today: - @StanfordReview (The Stanford Review) - @morganhousel (Collab Fund) - @sonyatweetybird (Sequoia) - @wquist (Slow Ventures) - @mehul (Matic Robots) - @george_onx (Fastino) - Aidan Dewar (Nourish) See you on the stream.
Fastino @fastinoAI just raised $17.5M in Seed funding. 🔥 But here's what makes them different: They're not building massive foundation models. They're building small, purpose-driven AI. ✅ Trained for specific tasks ✅ Built with under $100K in compute ✅ Lightning-fast
Our very own @george_onx is on @tbpn today w/ @johncoogan and @jordihays at 1:45pm PST. Don't miss out on a great conversation!
Fastino trains AI models on cheap gaming GPUs and just raised $17.5M led by Khosla | TechCrunch
techcrunch.com
Tech giants like to boast about trillion-parameter AI models that require massive and expensive GPU clusters. But Fastino is taking a different approach.