
SUN YOUNG HWANG
@SOSOHAJALAB
Followers 727 · Following 23K · Media 368 · Statuses 7K
CEO, AI (LLM/LMM/MLX) engineer, and enthusiast with a non-tech, construction-field background. Two kids, fencer, F1 lover, but loves tech more.
Seoul, Republic of Korea
Joined May 2023
Another super paper on LLM memory.
Brilliant memory framework proposed in this paper. MemOS makes remembering a first-class system call. LLMs forget stuff fast and retraining them costs a fortune. MemOS treats memories like files in an operating system, letting a model write, move, and retire knowledge on the
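The "memories as files" idea above can be sketched as a toy store with write/move/retire operations. All names here are illustrative inventions, not MemOS's actual API:

```python
# Toy illustration of the "memory as files" idea: memories live in named
# tiers (like directories) and can be written, moved between tiers, or
# retired (deleted). Hypothetical names -- NOT the real MemOS interface.
from dataclasses import dataclass, field


@dataclass
class MemoryStore:
    tiers: dict = field(default_factory=lambda: {"working": {}, "archive": {}})

    def write(self, tier: str, key: str, content: str) -> None:
        # "System call" to persist new knowledge in a tier.
        self.tiers[tier][key] = content

    def move(self, key: str, src: str, dst: str) -> None:
        # Demote or promote knowledge between tiers without rewriting it.
        self.tiers[dst][key] = self.tiers[src].pop(key)

    def retire(self, tier: str, key: str) -> None:
        # Explicitly forget, instead of waiting for context to fall off.
        del self.tiers[tier][key]


store = MemoryStore()
store.write("working", "user_prefs", "prefers concise answers")
store.move("user_prefs", "working", "archive")  # cold knowledge -> archive
store.retire("archive", "user_prefs")           # forget it entirely
```

The point of the sketch is that forgetting becomes an explicit, cheap operation instead of full retraining.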
Possible to ship to Korea?
Opening orders for Reachy Mini today, our open-source desktop robot for AI builders, starting at $299! Fully integrated with @LeRobotHF & @huggingface for the whole community to build AI apps for it (like this dancing one). We'll probably ship a first batch of a hundred this
RT @AndrewYNg: New Course: Post-training of LLMs. Learn to post-train and customize an LLM in this short course, taught by @BanghuaZ, Assis…
The Godsloth @UnslothAI
We made step-by-step guides to fine-tune & run every single LLM! What you'll learn:
• Technical analysis + bug fixes explained for each model
• Best practices & optimal settings
• How to fine-tune with our notebooks
• Directory of model variants
I need this.
Hugging Face just dropped Reachy Mini: an expressive, open-source robot designed for human-robot interaction, creative coding, and AI experimentation. Fully programmable in Python (and soon JavaScript, Scratch) and priced from $299. Reachy Mini measures 11″/28 cm in height and
Hugging Face always delivers.
We just released the best 3B model, 100% open-source: open dataset, architecture details, exact data mixtures, and the full training recipe including pre-training, mid-training, post-training, and synthetic data generation, for everyone to train their own. Let's go, open-source AI!
Unstoppable AI models on lmarena.
3 new models live in the Arena today!
• Mistral Small 2506: latest 24B open model (Apache-2.0), tuned for efficiency by @MistralAI
• Imagen 4 Ultra: latest text-to-image from @GoogleDeepMind
• Ideogram v3 Quality: latest text-to-image model from @Ideogram_AI
Your votes
Why don't you guys fine-tune your own data with a good model using THE BEST METHOD, @unsloth?! I've made more than 50 models with @unsloth, and this is one of our best domain models ever.
You can utilize our Gemma 3n multimodal and fine-tuning Kaggle notebook for any submission to the $150,000 challenge! The $10,000 is specifically for the Unsloth track, but you can submit it for the main track as well! Kaggle notebook:
Love mem0 so much! But memvid (repo below; I'm not a contributor) is really getting better at memory! It's still improving! @Gradio can do many things, btw.
Easily use Mem0 add and search tools with the @OpenAI Agents SDK:
• add_memory: saves information to memory
• get_memory: retrieves relevant past information
Persistent memory in your agent workflow in just a few lines of code. Docs below.