
Brian Scanlan
@brian_scanlan
Followers: 3K · Following: 22K · Media: 299 · Statuses: 5K
https://t.co/JleL2wKqeE
Swords, Dublin, Ireland
Joined May 2009
RT @Padday: Delighted to have Brett Chen from @perplexity_ai join us, @cognition and @harvey for our AI event in SF next week! We'll be go….
0
20
0
RT @ESYudkowsky: @JohnArnoldFndtn It's easier when there's a cheaper better substitute, and let us all hail the inventor who made it easier….
0
2
0
A Honeycomb heatmap of a fleet whose downstream is rate limiting, with linear backoff and retry in the client.
0
0
2
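For readers unfamiliar with the pattern that heatmap tweet describes, here is a minimal Python sketch of linear backoff and retry in a client. The names (call, RateLimitedError) are hypothetical, not from any real library; the point is that a fixed linear schedule with no jitter makes a whole fleet's retries land in evenly spaced waves, which is the kind of banded pattern a Honeycomb heatmap of such a fleet tends to show.

import time

class RateLimitedError(Exception):
    """Hypothetical error raised when the downstream responds with HTTP 429."""

def call_with_linear_backoff(call, max_attempts=5, base_delay_s=1.0):
    # Linear backoff: sleep base_delay_s * attempt between tries (1s, 2s, 3s, ...).
    # Without jitter, every client in the fleet retries on the same schedule,
    # so load arrives at the rate-limited downstream in synchronised waves.
    for attempt in range(1, max_attempts + 1):
        try:
            return call()
        except RateLimitedError:
            if attempt == max_attempts:
                raise
            time.sleep(base_delay_s * attempt)

Exponential backoff with jitter is the usual way to break up that synchronisation.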
Logging into all-the-slacks is the final boss of setting up a new laptop.
1
0
4
RT @gregolsent: And by "~instant resolution by Fin" we DO mean instant! We are holding Fin's median reply time at ~7.5 seconds and it keeps….
0
3
0
Round numbers are fun. Working on the next one.
Fin just passed 1M resolutions / week (up from 600k a quarter ago). 1 resolution = 1 customer query that’s been successfully resolved in the eyes of the customer, which often requires back and forth and work across multiple systems. Using the following numbers from our customer
1
0
15
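Working the tweet's own figures: 1,000,000 / 600,000 ≈ 1.67, so resolutions per week grew by roughly two-thirds over the quarter.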
Startup idea - SSO but it logs you into all the apps you use first thing in the morning.
3
1
19
RT @ray_cun: Irish people used to emigrate to find work. Now we're a country where people immigrate for work, and we're better off because….
0
5
0
Most of the time that compaction has kicked in for me, @claudeai is already in a giant hole that it's dug and I need to start again. "You're absolutely right. I didn't need to refactor those tests. Attempting a more simple approach. Gurgling."
0
0
12
Here's a great writeup by my colleague Ketan about what we do to achieve world-class uptime on Fin. This is actually differentiating heavy lifting.
fin.ai
Building reliable large language model (LLM) inference is still an emerging discipline. Although the field has matured considerably in recent years, we are far from the level of dependability seen in...
0
2
14
World-class uptime and performance are far from trivial to achieve when you're building with LLMs right now. You don't need all-the-nines of uptime if you're generating Ghibli pics or summarising a meeting, but if Fin can't answer because of LLM downtime, we lose money.
1
2
17
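The linked writeup isn't reproduced here, but for context, one common way to keep answering through a single model provider's outage is cross-provider failover with per-provider retries. The sketch below is a generic Python illustration under that assumption, not a description of how Fin is actually built; the provider callables are hypothetical stand-ins for real SDK calls.

import time
from typing import Callable, Sequence

def answer_with_failover(
    prompt: str,
    providers: Sequence[Callable[[str], str]],
    retries_per_provider: int = 2,
    backoff_s: float = 0.5,
) -> str:
    # Try each LLM provider in order; retry transient failures with a short
    # backoff, and only give up once every provider has been exhausted.
    last_error = None
    for call_provider in providers:
        for attempt in range(1, retries_per_provider + 1):
            try:
                return call_provider(prompt)
            except Exception as exc:  # in practice, catch provider-specific errors
                last_error = exc
                time.sleep(backoff_s * attempt)
    raise RuntimeError("all LLM providers failed") from last_error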
Last year Intercom's use of AWS produced 41.069 MTCO2e (metric tonnes of CO2 equivalent). This is equivalent to the electricity used by 8 average homes in the USA. The vast majority of it came from the Sydney region - Australia loves burning coal.
1
1
12
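Sanity-checking the equivalence from the tweet's own numbers: 41.069 / 8 ≈ 5.1 tCO2e per home, which is in line with the EPA's equivalency of roughly 5 tonnes of CO2 for an average US home's annual electricity use (an external figure, not from the tweet).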
RT @ciaran_lee: The things that make a difference at scale are so wild. Using ignored_columns in Rails results in SQL queries that list ea….
0
10
0
RT @darraghcurran: broadly i find people fall into one of two camps. camp 1) don’t believe or don’t care about AIs impact and just get back….
0
2
0
RT @haridigresses: Gross margins are in fact the single most important indicator of business quality -- not some abstract projection of fut….
0
22
0
RT @BessemerVP: 🚨 Most companies are adding AI features. @intercom is rebuilding itself to become AI-native. If you're a SaaS founder, bu….
0
7
0
RT @ciaran_lee: Resolution rate for @Fin_ai normally increases ~1% per month. Generally you would expect increases to get harder over time….
0
6
0