Phind Profile
Phind

@phindsearch

Followers
9K
Following
127
Media
7
Statuses
105

AI answer engine for complex questions.

San Francisco
Joined June 2022
@ycombinator
Y Combinator
15 days
Congrats to @phindsearch on their $10.4M round! Walls of text are so 2022. On Phind, every answer is now an interactive mini-app. It's a step towards their vision of creating a personal internet just for you. https://t.co/JO0vbeF5BO
21
18
269
@phindsearch
Phind
15 days
Walls of text are so 2022. Introducing Phind 3, where every answer is now an interactive mini-app. It's a step towards our vision of creating on-demand software and a personal internet just for you.
3
5
43
@phindsearch
Phind
10 months
We're excited to launch Phind 2 today! The new Phind is able to go beyond text to present answers visually with inline images, diagrams, cards, and other widgets to make answers much more delightful. Phind is also now able to seek out information on its own. If it needs more
16
16
127
@phindsearch
Phind
1 year
Introducing Phind-405B, our new flagship model! Phind-405B scores 92% on HumanEval, matching Claude 3.5 Sonnet. We're particularly happy with its performance on real-world tasks, especially when it comes to designing and implementing web apps. Our focus on technical topics
30
84
596
@phindsearch
Phind
2 years
GPT-4o is now available in Phind for all paid users! https://t.co/buRxH7XegW
19
4
96
@phindsearch
Phind
2 years
We are excited and proud to be a signatory of SV Angel's Open Letter on AI:
openletter.svangel.com
Build AI for a Better Future
4
3
21
@garrytan
Garry Tan
2 years
"We find that Phind-70B is in the same quality realm as GPT-4 Turbo for code generation and exceeds it on some tasks. Phind-70B is significantly faster than GPT-4 Turbo, running at 80+ tokens per second to GPT-4 Turbo's ~20 tokens per second." What a launch! 👀
22
47
675
@wholemars
Whole Mars Catalog
2 years
Introducing Phind-70B – closing the code quality gap with GPT-4 Turbo while running 4x faster
1
6
53
@bindureddy
Bindu Reddy
2 years
Phind-70b - A CodeLlama fine tune that claims to have GPT-4 code gen! Coming soon to open source 👏👏
10
50
243
@omarsar0
elvis
2 years
Phind-70B looks like a big deal! Phind-70B closes the code generation quality gap with GPT-4 Turbo and is 4x faster. Phind-70B can generate 80+ token/s (GPT-4 is reported to generate ~20 tokens/s). Interesting to see that inference speed is becoming a huge factor in comparing
8
105
521
@paulg
Paul Graham
2 years
I'm very impressed by the way Phind, despite being the tiniest of startups, has managed to keep up with the giants. Phind-70B beats GPT-4 Turbo at code generation, and runs 4x faster. There is definitely still room for startups in this game.
84
228
3K
@phindsearch
Phind
2 years
Introducing Phind-70B, our largest and most capable model to date! We think it offers the best overall user experience for developers amongst state-of-the-art models. https://t.co/wkA2unqhME
31
65
442
@phindsearch
Phind
2 years
Join us for our San Francisco meetup on February 6th! We'd love to meet you and hear about how we can keep making Phind better for you. And, of course, food and drinks will be provided :) https://t.co/CDnu7pwiHA
3
2
18
@phindsearch
Phind
2 years
Announcing much faster Phind Model inference for Pro and Plus users. Your request will be served by a dedicated cluster powered by NVIDIA H100s for the lowest latency and a generation speed of up to 100 tokens per second. If you're not yet a Pro user, join us at
38
4
81
@phindsearch
Phind
2 years
🚀 Introducing GPT-4 with 32K context for Phind Pro users. If you're not yet a subscriber, join us at https://t.co/buRxH7XegW.
8
4
48
@phindsearch
Phind
2 years
🚀 While ChatGPT is pausing signups, Phind continues to be better at programming while being 5x faster. We've been rapidly adding capacity and it's only getting faster. Check it out ➡️
15
16
117
@TimSuchanek
Tim Suchanek
2 years
I don't know how I lived without this. @phindsearch
4
6
45
@phindsearch
Phind
2 years
@elonmusk Seems to do just fine @elonmusk
16
11
189
@swyx
swyx
2 years
🆕 pod: Beating GPT-4 with Open Source LLMs https://t.co/4a9XODI6Qx with @MichaelRoyzen of @phindsearch! The full story of how Phind finetuned CodeLlama to: - reach 74.7% HumanEval vs 45% base model - 2x'ed GPT4 context window to 16k tokens - 5x faster than GPT-4 (100 tok/s)
10
44
372
@pseudokid
raymel 👋
2 years
Phind - @phindsearch worked pretty well for my tiny Bash script. It even adopted how I use a third-party script (jq). All on first try. See it in action:
@paulg
Paul Graham
2 years
Phind can now beat GPT-4 at programming, and does it 5x faster. https://t.co/Ixj9M5rx5K
6
3
34