Alejandro Piad Morffis

@alepiad

Followers: 17K · Following: 39K · Media: 379 · Statuses: 19K

Democratizing knowledge one keystroke at a time. PhD in NLP, full-time professor, CS Department @UdeLaHabana. Co-founder @syalia_srl.

Havana, Cuba 🇨🇺
Joined September 2011
@alepiad
Alejandro Piad Morffis
11 months
But this is not all. NNs are also, in practice, extremely well-suited to take advantage of the immense increase in data and compute availability of the last couple of decades. But that's a story for another thread.
@alepiad
Alejandro Piad Morffis
11 months
6) NNs adapt to (almost) any problem

Almost any task can be framed as the problem of minimizing the error of some differentiable loss function: binary cross-entropy for classification, MSE for regression, contrastive loss for similarity, ...
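As a quick illustration of this point, here is a sketch of those three losses in PyTorch (my framework choice, not the thread's; the tensors are random stand-ins for real data):

```python
import torch
import torch.nn.functional as F

# Binary cross-entropy for classification: logits vs. 0/1 targets.
logits = torch.randn(8, 1)
labels = torch.randint(0, 2, (8, 1)).float()
bce = F.binary_cross_entropy_with_logits(logits, labels)

# MSE for regression: predicted vs. true continuous values.
preds, targets = torch.randn(8, 1), torch.randn(8, 1)
mse = F.mse_loss(preds, targets)

# A contrastive-style loss for similarity: pull same-class pairs together,
# push different-class pairs at least `margin` apart.
emb_a, emb_b = torch.randn(8, 16), torch.randn(8, 16)
same = torch.randint(0, 2, (8,)).float()  # 1 = same class, 0 = different
dist = F.pairwise_distance(emb_a, emb_b)
margin = 1.0
contrastive = (same * dist.pow(2)
               + (1 - same) * (margin - dist).clamp(min=0).pow(2)).mean()
```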
@alepiad
Alejandro Piad Morffis
11 months
5) NNs are (relatively) easy to train

Regardless of the complexities of the architecture, there is a single, universal algorithm called Backpropagation that can compute how to modify the weights to decrease the prediction error. All NNs are, at least in principle, trainable.
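A minimal sketch of that universal recipe, assuming PyTorch's autograd as the implementation: the same three calls train whatever architecture you swap into `model`:

```python
import torch
import torch.nn as nn

# A toy network; backpropagation does not care about the architecture.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

x, y = torch.randn(32, 4), torch.randn(32, 1)

loss = loss_fn(model(x), y)  # forward pass: compute the prediction error
loss.backward()              # backpropagation: a gradient for every weight
optimizer.step()             # nudge the weights downhill
optimizer.zero_grad()        # reset for the next batch
```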
@alepiad
Alejandro Piad Morffis
11 months
4) NNs disentangle data

Also known as manifold learning: the idea that NNs project the input data into a very high-dimensional space where the thing you want to learn is easy (e.g., there is a linear separator that classifies images of different types).
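A small experiment sketching the claim, with PyTorch and scikit-learn as assumed tools: train a net on the (nonlinearly separable) two-moons dataset, then check that a plain linear classifier works on the learned features:

```python
import torch
import torch.nn as nn
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression

# Two interleaved half-moons: not linearly separable in the input space.
X, y = make_moons(n_samples=500, noise=0.1, random_state=0)
X_t = torch.tensor(X, dtype=torch.float32)
y_t = torch.tensor(y, dtype=torch.float32)

# Everything before the last layer acts as a learned feature map.
features = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 32), nn.ReLU())
head = nn.Linear(32, 1)
opt = torch.optim.Adam(list(features.parameters()) + list(head.parameters()), lr=0.01)

for _ in range(500):
    loss = nn.functional.binary_cross_entropy_with_logits(
        head(features(X_t)).squeeze(), y_t)
    opt.zero_grad()
    loss.backward()
    opt.step()

# In the learned feature space, a plain linear model separates the classes.
Z = features(X_t).detach().numpy()
print(LogisticRegression().fit(Z, y).score(Z, y))  # close to 1.0
```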
@alepiad
Alejandro Piad Morffis
11 months
3) NNs learn to abstract automatically

Representation learning is the technical term. We can often interpret a deep neural network as a sequence of increasingly complex pattern detectors built from simpler patterns. Each layer learns a more abstract representation of the input.
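One way to see the hierarchy: stack a few convolutional stages and peek at the shape of each intermediate representation (the per-stage comments are the usual caricature of what such layers tend to learn, not a guarantee):

```python
import torch
import torch.nn as nn

# A small image classifier; each stage builds on the previous one's patterns.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),   # edges, colors
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),  # textures, simple shapes
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),  # object parts
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 10),                           # whole-object classes
)

# Trace one image through the stack, printing each layer's output shape.
x = torch.randn(1, 3, 32, 32)
for layer in model:
    x = layer(x)
    print(type(layer).__name__, tuple(x.shape))
```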
@alepiad
Alejandro Piad Morffis
11 months
2) Neural networks are very flexible

We have a gazillion layer types and architectures to encode all sorts of domain knowledge: convolutions for images, recurrent networks for sequences, transformers for context awareness, siamese networks for similarity, ...
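For concreteness, here is what that zoo looks like in PyTorch (again an assumed framework; the sizes are arbitrary):

```python
import torch.nn as nn

conv = nn.Conv2d(3, 64, kernel_size=3)                   # local patterns in images
rnn = nn.LSTM(input_size=128, hidden_size=256)           # order in sequences
attn = nn.TransformerEncoderLayer(d_model=128, nhead=8)  # global context

# A "siamese" network is not a special layer at all: it is the same encoder
# applied to two inputs, whose outputs you then compare.
encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))
```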
@alepiad
Alejandro Piad Morffis
11 months
1) Neural networks are universal approximators

This means that, theoretically, there is a big enough, super simple NN that can approximate any (sensible) mathematical function. (Now, that network is usually not the one you actually want.)
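A toy demonstration of the theorem's spirit, assuming PyTorch: a single hidden layer fitting sin(x) on an interval:

```python
import torch
import torch.nn as nn

# One hidden layer is all the theorem needs; width is what buys accuracy.
net = nn.Sequential(nn.Linear(1, 256), nn.Tanh(), nn.Linear(256, 1))
opt = torch.optim.Adam(net.parameters(), lr=0.01)

x = torch.linspace(-3.14, 3.14, 512).unsqueeze(1)
y = torch.sin(x)  # a perfectly "sensible" target function

for _ in range(2000):
    loss = nn.functional.mse_loss(net(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(loss.item())  # tiny: the shallow net has fit sin on this interval
```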
@alepiad
Alejandro Piad Morffis
11 months
Why are (artificial) neural networks so powerful? No, it's not because they are "biologically-inspired." That has little to do with it. It boils down to:
- Their theoretical properties.
- Their practical capacity to scale.
Let me explain 👇
@alepiad
Alejandro Piad Morffis
1 year
If you're the kind of nerd (like me) who prefers to chat with an LLM in the terminal instead of a website or app, check this project I'm working on. It's still pretty rough around the edges, but it knows Bash and it can mess with your system in fun and unexpected ways.
@alepiad
Alejandro Piad Morffis
1 year
Here's a fun little LLM-based project I'm working on: https://t.co/gguHa8JFcF It's a bot that lives in your terminal and knows a bit of Bash, Python, and Linux. It can run commands and create files and maybe wipe out your hard drive if you're not looking. Take it for a spin?
github.com
An AI-powered assistant for your terminal and editor - apiad/lovelaice
@mjovanc
marcus
1 year
@alepiad Hey Alejandro! We are building a machine learning community here on X. Would you mind helping us share it so more enthusiasts can find it? I would appreciate it a lot! 🦾 https://t.co/8k9fANRJcy
twitter.com
A community for ML professionals and enthusiasts.
@ogrisinger
C Anthony Risinger
1 year
@YiMaTweets @alepiad @OpenAI @AnthropicAI @sama @ai_ctrl (EXPERIMENTAL section of README) #RIPrompt now encouraging the generation of internal 'AoT Symbolic Plans' to later follow and guide critical thought and output sections 🤩 for Claude projects
riprompt.com
RIPrompt is a framework for creating auto-evolving and context-aware prompt systems. It uses dynamic equations and symbolic representations to generate highly adaptive and intelligent responses.
@alepiad
Alejandro Piad Morffis
1 year
But all is not lost. We have a bunch of tricks to extend the reasoning capabilities of LLMs, as OpenAI o1 shows. I lay it out in this article if you want to read more.
@alepiad
Alejandro Piad Morffis
1 year
Reason 3: Inability to Loop. LLMs are not Turing-complete simply because they can't loop indefinitely. This means some semi-decidable problems are forever outside the reach of (pure) language models.
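A toy illustration (mine, not the thread's) of the kind of computation that needs an unbounded loop:

```python
# "Does this process ever reach 1?" is the flavor of question that needs an
# unbounded loop: there is no way to know in advance how many steps to budget.
# A pure LLM spends a fixed amount of computation per emitted token, so this
# kind of open-ended search is out of reach unless you bolt a loop on outside.
def collatz_steps(n: int) -> int:
    steps = 0
    while n != 1:  # no a-priori bound on how long this runs
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27))  # 111 steps; for arbitrary n, nobody can say beforehand
```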
@alepiad
Alejandro Piad Morffis
1 year
Reason 2: Bounded Computation. Each token processed requires constant computation. This means the total computational budget is determined by the input size. But we know some problems (including logical reasoning) require an exponentially large computation.
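A back-of-the-envelope sketch of the mismatch; the model dimensions are made up but typical, and the FLOP formulas are the standard rough estimates:

```python
# Rough transformer forward-pass cost over n tokens: attention is ~n^2 * d per
# layer, the MLPs are ~8 * n * d^2 per layer. Both are polynomial in n, and the
# total budget is fixed once the prompt is fixed.
def forward_flops(n_tokens: int, d_model: int = 4096, n_layers: int = 32) -> int:
    attention = n_layers * n_tokens**2 * d_model
    mlps = n_layers * n_tokens * 8 * d_model**2
    return attention + mlps

print(f"{forward_flops(1_000):.3e}")  # ~4e12: the whole budget for a 1K prompt
print(f"{2**100:.3e}")                # ~1.3e30: worst-case search over 100 booleans
```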
@alepiad
Alejandro Piad Morffis
1 year
Reason 1: Stochastic Sampling. LLMs rely on probabilities to pick the next token. Even when you fix the temperature, randomness is still built into the language modeling paradigm. But logic is anything but random.
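A minimal sketch of that sampling step, assuming PyTorch (the logits are invented):

```python
import torch

logits = torch.tensor([2.0, 1.0, 0.5, -1.0])  # model scores for 4 candidate tokens

def sample_token(logits: torch.Tensor, temperature: float = 1.0) -> int:
    # Temperature rescales the logits, but the pick is still a random draw.
    probs = torch.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1).item()

# Ten samples at a fairly low temperature: mostly token 0, but not always.
print([sample_token(logits, temperature=0.7) for _ in range(10)])
# Only greedy decoding (argmax, "temperature 0") removes the randomness,
# and that is a special-cased escape from the probabilistic paradigm.
```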
@alepiad
Alejandro Piad Morffis
1 year
LLMs cannot reason. Despite their impressive capabilities, all LLMs, including OpenAI o1, are still fundamentally limited by design constraints that make them incapable of true, open-ended reasoning. Let's break it down. 🧵 (1/5)