Moh Baddar

@mbaddar2

Followers: 199
Following: 224
Media: 112
Statuses: 373

AI/ML Engineer | LLM-Dev, Ops and Evaluation | Applied Math PhD | Subscribe ✉️ https://t.co/rc9q8SYRnU

Berlin, Germany
Joined January 2025
@mbaddar2
Moh Baddar
6 months
Check this star-worthy GitHub repo for a comprehensive overview of RAG approaches. Concepts: 1️⃣ Routing and Query Construction 2️⃣ Indexing & Advanced Retrieval 3️⃣ Reranking.
1
19
71
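As a concrete taste of the reranking step, here is a minimal sketch (not taken from the linked repo) using sentence-transformers' CrossEncoder; the model name and the retrieved chunks are illustrative placeholders.

# Re-score a handful of retrieved chunks against the query with a cross-encoder
# and keep the best ones as context for the generator.
from sentence_transformers import CrossEncoder

query = "How does reranking improve RAG answers?"
retrieved_chunks = [
    "Reranking re-scores retrieved passages with a stronger model.",
    "Indexing splits documents into chunks and embeds them.",
    "Query construction rewrites the user question for the retriever.",
]

# A cross-encoder scores each (query, passage) pair jointly, which is slower
# but more accurate than the bi-encoder used for first-stage retrieval.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
scores = reranker.predict([(query, chunk) for chunk in retrieved_chunks])

# Keep the top-scoring chunks for the generation step.
top_chunks = [c for _, c in sorted(zip(scores, retrieved_chunks), reverse=True)][:2]
print(top_chunks)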
@mbaddar2
Moh Baddar
7 days
I thought of you when I read this quote from "The Daily Stoic: 366 Meditations on Wisdom, Perseverance, and the Art of Living: Featuring new translations of Seneca, Epictetus, and Marcus Aurelius (English Edition)" by Ryan Holiday and Stephen Hanselman: "God, grant me the…
0
0
0
@mbaddar2
Moh Baddar
14 days
Despite the hype around Large Language Models (#LLMs), few people talk about the backbone technology that made LLMs possible: the Transformer architecture. Check my post below for a very quick overview of Transformers and how they model "context" in language models.
@mbaddar2
Moh Baddar
14 days
Just as blockchain serves as the foundational technology behind cryptocurrencies, Transformers are the core architecture powering Large Language Models (LLMs). At their essence, Transformers offer a sophisticated mechanism to capture context within a sequence of tokens, such as…
0
0
1
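To make the "capturing context" idea concrete, here is a tiny scaled dot-product self-attention sketch in PyTorch; the shapes and random weights are purely illustrative, not a full Transformer.

import torch
import torch.nn.functional as F

seq_len, d_model = 4, 8                     # 4 tokens, 8-dim embeddings
x = torch.randn(seq_len, d_model)           # token embeddings

W_q = torch.randn(d_model, d_model)
W_k = torch.randn(d_model, d_model)
W_v = torch.randn(d_model, d_model)

Q, K, V = x @ W_q, x @ W_k, x @ W_v
# Each token attends to every other token; the softmax weights say how much
# context each token pulls from the rest of the sequence.
attn = F.softmax(Q @ K.T / d_model**0.5, dim=-1)   # (seq_len, seq_len)
contextualized = attn @ V                          # context-aware token vectors
print(attn.shape, contextualized.shape)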
@mbaddar2
Moh Baddar
1 month
If you are in the business of adopting Language Models into your software application or solution, then the ability to load, configure, tune and run a local model is must-have knowledge. One crucial aspect affecting the cost and performance of model tuning and inference (i.e.…
0
0
2
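A hedged sketch of what "load, configure and run a local model" can look like with Hugging Face transformers; the model id "distilgpt2" is just a small placeholder, swap in your own local checkpoint.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "distilgpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # dtype choice directly affects memory use and speed
    device_map="auto",            # place weights on GPU if available (needs accelerate)
)

inputs = tokenizer("Local language models are", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))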
@mbaddar2
Moh Baddar
1 month
Data types have a great impact on model training and generation time. Check out BFloat16: it keeps float32's dynamic range in half the memory, so it avoids the underflow (and the division-by-zero errors underflow can trigger downstream) that plagues float16. Check this…
0
0
5
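A quick illustration of the point (my own sketch, not from the attached post): bfloat16 keeps float32's 8-bit exponent, so values that underflow to zero in float16 survive, while still using only 2 bytes per value.

import torch

tiny = 1e-8
print(torch.tensor(tiny, dtype=torch.float16))    # tensor(0.)  -> underflows
print(torch.tensor(tiny, dtype=torch.bfloat16))   # ~1e-08      -> range preserved

# Compare representable ranges: float16 tops out near 6.5e4, bfloat16 near 3.4e38.
print(torch.finfo(torch.float16).max, torch.finfo(torch.bfloat16).max)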
@mbaddar2
Moh Baddar
1 month
Acquiring the knowledge to build LLM-powered software solutions is a must for all tech entrepreneurs. Despite the fact that the existing trend is choosing between different 3rd-party large models (#OpenAI ChatGPT and #DeepSeek), many overlook the great potential of small and tiny…
0
0
2
@mbaddar2
Moh Baddar
1 month
Exposing an ML model to users via an API is a critical step in making the model reusable by different services and modules in your system. Check this Step-by-Step Guide to Deploying Machine Learning Models with FastAPI and Docker.
machinelearningmastery.com
We’ll take it from raw data all the way to a containerized API that’s ready for the cloud.
0
1
2
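A minimal sketch of the idea behind that guide (the model path and input schema below are hypothetical placeholders, not the article's exact code): wrap a trained model behind a /predict route, then containerize the app with Docker.

from fastapi import FastAPI
from pydantic import BaseModel
import joblib

app = FastAPI()
model = joblib.load("model.joblib")   # hypothetical path to a trained sklearn model

class Features(BaseModel):
    values: list[float]

@app.post("/predict")
def predict(features: Features):
    # Run the model on the incoming feature vector and return a JSON response.
    prediction = model.predict([features.values])[0]
    return {"prediction": float(prediction)}

# Run locally with:  uvicorn main:app --host 0.0.0.0 --port 8000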
@mbaddar2
Moh Baddar
1 month
Being able to run local language models is a super-power for small and medium enterprises. It is cost effective, more secure, and gives you more control over your data. If you are interested in a quick hands-on showing how to load and run a simple local model with a few lines of Python…
0
0
0
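The "few lines of Python" version might look like this sketch using transformers' pipeline API; "distilgpt2" is just a small placeholder model that downloads once and then runs fully locally.

from transformers import pipeline

# Build a local text-generation pipeline and generate a short continuation.
generator = pipeline("text-generation", model="distilgpt2")
result = generator("Running a local language model is", max_new_tokens=25)
print(result[0]["generated_text"])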
@mbaddar2
Moh Baddar
2 months
Evaluating Large Language Models (#LLMs) is a key step in properly adopting them in any software solution. However, the two key challenges in evaluating LLMs are: (1) Generality of tasks: there is a false perception, due to the availability of dozens of user-facing LLM…
0
0
0
@mbaddar2
Moh Baddar
2 months
One of the essential steps in adopting Large Language Models in any application is understanding how the complete system will be evaluated. An important angle for understanding the evaluation process is understanding the "set of tasks" the LLM is required to achieve. Is…
0
0
0
@mbaddar2
Moh Baddar
2 months
One of the challenging topics when it comes to developing an LLM-powered application is "Benchmarking". Any benchmark has 3 main components: 1️⃣ Task: what kind of tasks are we evaluating the model on? Text completion, logical inference, or summarization? 2️⃣ Data: from…
0
1
2
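A toy sketch of how those components fit together; the task, the data and the metric below are made up for illustration, not taken from any real benchmark.

def tiny_benchmark(model_fn):
    # Data: a handful of (prompt, expected answer) pairs for one task (QA).
    dataset = [
        ("What is the capital of France?", "paris"),
        ("How many legs does a spider have?", "8"),
    ]
    # Metric: exact-match accuracy over the dataset.
    correct = 0
    for prompt, expected in dataset:
        answer = model_fn(prompt).strip().lower()
        correct += int(answer == expected)
    return correct / len(dataset)

# Usage: plug in any callable that maps a prompt string to an answer string.
print(tiny_benchmark(lambda prompt: "Paris" if "France" in prompt else "6"))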
@mbaddar2
Moh Baddar
2 months
One of the interesting, yet challenging, topics related to LLMs is benchmarking 🌡️. Due to the complex nature of #LLMs, in addition to their complex output format (free text), there are numerous benchmarks that measure the performance of LLMs on different tasks. To learn more…
@mbaddar2
Moh Baddar
2 months
There are literally dozens of benchmarks for (I would say hundreds of) LLMs, both proprietary and open source. One of the problems I have faced while trying to understand the benchmarks, despite the existence of numerous evaluation frameworks, is finding a simple illustration, apart from…
0
0
2
@mbaddar2
Moh Baddar
2 months
If you mean business with LLMs, then one of the technical approaches you must be aware of is building a self-hosted LLM framework that suits your technical and business needs. However, one of the trickiest challenges you might face during the design phase is selecting the best…
@mbaddar2
Moh Baddar
2 months
One of the pressing questions I have in mind when thinking about local-LLM solutions is: which model should I use as a base model? Which one is suitable for which use case? Despite this question sounding simple, it is very hard and subjective to answer. However, there are many…
0
0
0
@mbaddar2
Moh Baddar
2 months
One of the benchmarks that is simple in design but hard to score well on is "HellaSwag". The design is simple: you give the model a context and 4 candidate endings (a, b, c, d) that are designed to confuse the model. Only one is right. The metric is the…
0
1
6
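The usual scoring recipe for HellaSwag-style multiple choice (as in common evaluation harnesses) is to compute the model's length-normalized log-likelihood of each candidate ending and pick the highest; here is a rough sketch with a placeholder model and a made-up example, not the official harness code.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2").eval()

context = "She poured the batter into the pan and"
endings = ["baked it in the oven.", "threw the oven into the pan.",
           "painted the pan blue.", "read the batter a story."]

def ending_score(context, ending):
    ctx_ids = tok(context, return_tensors="pt").input_ids
    full_ids = tok(context + " " + ending, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Log-probability of each ending token given everything before it.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    ending_ids = full_ids[0, ctx_ids.shape[1]:]
    token_lp = log_probs[ctx_ids.shape[1] - 1:, :].gather(1, ending_ids[:, None])
    return token_lp.mean().item()   # length-normalized score

best = max(range(len(endings)), key=lambda i: ending_score(context, endings[i]))
print("predicted ending:", endings[best])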
@mbaddar2
Moh Baddar
2 months
You can access it easily through @huggingface.
huggingface.co
0
0
1
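Loading it might look like this sketch; "Rowan/hellaswag" is the commonly used dataset id on the Hub, adjust if needed.

from datasets import load_dataset

hellaswag = load_dataset("Rowan/hellaswag", split="validation")
example = hellaswag[0]
print(example["ctx"])        # the context to be completed
print(example["endings"])    # four candidate endings
print(example["label"])      # index of the correct ending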
@mbaddar2
Moh Baddar
2 months
One of the crucial steps in developing LLM-powered applications is evaluating the quality of results. There are literally hundreds of benchmarks that evaluate different models against different tasks. Basically, each benchmark is a set of data with some kind of expected…
1
1
5
@mbaddar2
Moh Baddar
2 months
If you are in the business of adopting LLMs into your application, you might consider options beyond the @OpenAI or @Gemini APIs: you can run your own LLM framework on your own infrastructure. Why? It is cost efficient, fully customizable, and lets you control and secure your own data. However, a…
@mbaddar2
Moh Baddar
2 months
If you want to build self-hosted LLM-powered software solutions, one of the main design decisions you need to make is: which base model am I going to use? There are, literally, hundreds of open-source models: @llama, @deepseek_ai, @MistralAI, the GPT family, etc. Usually…