Rachel Rapp

@rapprach

Followers 200 · Following 3K · Media 41 · Statuses 207

Posts mainly about AI, tech, and food | @basetenco EMEA | Recovering academic | Probably eating right now

Berlin
Joined May 2020
@rapprach
Rachel Rapp
3 days
"Should I buy a house or put money into my startup? I went with the startup. And I'm still a tenant in the same place I moved into 10 years ago." Sat down with Vasilije Markovic (@tricalt), CEO/Founder at cognee, to talk about AI memory + lessons for early founders in Europe.
2
0
6
@rapprach
Rachel Rapp
3 days
Listen (or watch) to the full chat on Spotify (https://t.co/yFnfwdOUT0) or YouTube.
0
0
0
@rapprach
Rachel Rapp
4 days
APT outperforms frontier models from OpenAI, Anthropic, and Google on agentic benchmarks; it feels like not enough people are talking about this.
@basetenco
Baseten
4 days
Agents that don't hallucinate? Meet APT: @ScaledCognition's Agentic Pretrained Transformer — the only frontier model for CX that eliminates hallucinations. We've been partners (and fans) of the Scaled Cognition team from launch day to massive scale, working with their engineers
0
0
2
@rapprach
Rachel Rapp
6 days
LFG Germany 👏 I like to pretend I love open-source models equally but Flux is my favorite image model ngl
@bfl_ml
Black Forest Labs
8 days
We've raised $300M in Series B funding from Salesforce Ventures and Anjney Midha (AMP) FLUX is used by millions every month and powers production workflows across the world's leading platforms. This funding will allow us to invest deeply in research and build the foundations
0
0
1
@rapprach
Rachel Rapp
7 days
Genuinely excited for this. Join us for breakfast tomorrow at Merantix! After going from being the only female engineer at multiple companies to being 1 of 3 in a 15-person lab (which was considered diverse!), being a part of this feels like a privilege.
0
0
3
@rapprach
Rachel Rapp
14 days
Excited to finally share a project @blwiertz and I have been working on: the {Tech: Europe} Podcast. European tech has a storytelling problem. We keep hearing that you need SF to ship something impactful, while builders across Europe quietly ship here, without the limelight.
4
0
15
@rapprach
Rachel Rapp
20 days
Hosting the AI in Action Track at AI Summit Europe today. Come say hi ☺️ (and learn about AI + cybersecurity, building MarTech agents, and working with legacy systems)
1
0
7
@rapprach
Rachel Rapp
25 days
Gamma hit $100M ARR, 70M+ users, and a $2.1B valuation in their Series B — but we've been Gamma fans since day one.
@basetenco
Baseten
25 days
Working with the @GammaApp team never quite feels like work, and that’s how their product feels. "Criminally fun." We are honored to be long-term partners and power Gamma’s inference needs as they push the envelope on how we present ideas. Congratulations on the Series B!
0
0
6
@drfeifei
Fei-Fei Li
28 days
“AI has never been more exciting. Generative AI models such as LLMs have moved from research labs to everyday life, becoming tools of creativity, productivity, and communication for billions of people. Yet they remain wordsmiths in the dark; eloquent but inexperienced,
4
6
114
@rapprach
Rachel Rapp
26 days
Insane. + Inference by Baseten.
@theworldlabs
World Labs
26 days
Introducing Marble by World Labs: a foundation for a spatially intelligent future. Create your world at https://t.co/V267VJu1H9
0
0
10
@rapprach
Rachel Rapp
27 days
Kimi K2 Thinking 🤝 Baseten Inference Stack. Open-source ftw
@amiruci
Amir Haghighat
27 days
A few days ago Kimi K2 Thinking significantly narrowed the capability gap between open and closed LLMs. Today Baseten is the only provider to deliver over 100 tok/sec on this massive 1T-parameter model.
0
0
7
@rapprach
Rachel Rapp
1 month
I'm a big user of tools like Grammarly and Superhuman Mail. Thinking about what's in store for the Superhuman suite threw me back to a convo Philip and I had with Agustín from their AI team about how to build performant embedding pipelines from the ground up. 🤓 Convos with
2
1
5
@rapprach
Rachel Rapp
2 months
Surprising: production-ready gpt-oss running at 0.11 sec TTFT. Not surprising: Baseten claiming the "fastest" title from itself... again
@basetenco
Baseten
2 months
This week, Baseten's model performance team unlocked the fastest TPS and TTFT for gpt-oss 120b on @nvidia hardware. When gpt-oss launched we sprinted to offer it at 450 TPS... now we've exceeded 650 TPS and 0.11 sec TTFT... and we'll keep working to keep raising the bar. We are
0
0
4
@rapprach
Rachel Rapp
2 months
Our marketing team recently moved to Linear and it's been great. Excited to spend less time buried in Slack messages now too.
@linear
Linear
2 months
New: Linear Agent for Slack. Mention @linear in discussions on Slack and the Linear agent will create issues informed by your conversation's context.
0
0
4
@rapprach
Rachel Rapp
2 months
Even amazon dot com went down today, but Baseten inference stayed up 😉 Running across 9+ clouds, powered by Baseten's MCM. 💚
@basetenco
Baseten
2 months
We're seeing the massive AWS outage. The Baseten web app is down, but inference, new deploys, training jobs, and the model management APIs are unaffected.
0
0
5
@rapprach
Rachel Rapp
2 months
Thanks @Redisinc for putting together an awesome panel on AI agents in London. Real-time web search (Tavily) + optimized memory (cognee) + high-performance caching (Redis) + the fastest inference (Baseten) = the golden agent stack. 🥇
0
0
3
@rapprach
Rachel Rapp
2 months
Really looking forward to this panel with @tricalt and @EBB_DataSnake tomorrow. Shoot me a message if you're around
@basetenco
Baseten
2 months
If you're in London, catch Rachel Rapp with our friends from Tavily and cognee at Redis Released. From building and deploying the fastest agentic systems to industry trends, they'll break down what the agentic tech stack looks like in a live panel this Thursday.
1
0
5
@rapprach
Rachel Rapp
2 months
Can always tell who’s flying to the same conference as me based on the amount of Patagonia on the plane
0
0
0
@rapprach
Rachel Rapp
2 months
We can region-lock, colocate, and scale workloads massively worldwide (rumor has it we were the first inference provider in Australia!). Partnerships with teams like Nebius make that possible.
@nebiusai
Nebius
2 months
Videogen workloads run longer than others, stress GPU memory and magnify the impact of latency. Today’s blog shows how @basetenco’s optimized runtime (including topology-aware parallelism) and orchestration operate on our clusters: https://t.co/pJhNc13b6a #ModelInference
0
0
3