Nabla
@nablaml
Followers 102 · Following 55 · Media 2 · Statuses 16
Scientific Computing Software in Python+Mojo: https://t.co/mT3sLuIXjs
Joined July 2024
Learn more about the project:
nablaml.com
Automatic differentiation, JIT compilation, and GPU acceleration in Python with Mojo and MAX.
Nabla is now on GitHub Sponsors! We are building a fast, customizable, and educational ML framework. Our roadmap: automated ND-parallelism, reviving the Nabla Mojo API, and one more thing to bring AI training to non-coders... Help us build the future of ML:
github.com
Support nabla-ml's open source work
Here is a glimpse into the newly added GPU support for Nabla: automatic device placement, custom Mojo kernel integration, and huge speedups on modern @AMD and @nvidia hardware. Shout out to @Modular and @LambdaAPI for making this all possible! More: https://t.co/oWWcvHGEkr
Visualizing SGD doing its thing with Nabla + MAX + Matplotlib + NumPy: https://t.co/i0HU3i8qvW
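Not the linked notebook, but a minimal sketch of the same idea using only NumPy and Matplotlib (the loss surface, learning rate, and step count are stand-ins): run plain gradient descent on a 2D quadratic bowl and plot the trajectory over the contours.

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-in 2D loss surface: an anisotropic quadratic bowl.
def grad(w):
    return np.array([w[0], 4.0 * w[1]])

# Plain SGD (no minibatch noise here), recording the trajectory.
w, lr = np.array([-2.5, 1.5]), 0.1
path = [w.copy()]
for _ in range(50):
    w = w - lr * grad(w)
    path.append(w.copy())
path = np.array(path)

# Contours of the surface plus the optimizer's path.
xs, ys = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-2, 2, 200))
zs = 0.5 * xs ** 2 + 2.0 * ys ** 2
plt.contour(xs, ys, zs, levels=20)
plt.plot(path[:, 0], path[:, 1], "o-", markersize=3)
plt.title("SGD trajectory on a quadratic bowl")
plt.show()
```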
I am starting a (notebook) series on training transformers with Nabla. Part 1 is a side-by-side (Nabla vs. JAX) toy implementation from scratch: https://t.co/RxSX9urQEi
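To give a flavor of what a "toy implementation from scratch" involves, here is a single-head self-attention forward pass in jax.numpy. This is a conceptual sketch, not the notebook's code; the point of the side-by-side series is that the Nabla version reads much the same way.

```python
import jax
import jax.numpy as jnp

def self_attention(x, wq, wk, wv):
    # Single-head scaled dot-product self-attention: (seq, d) -> (seq, d).
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / jnp.sqrt(q.shape[-1])
    return jax.nn.softmax(scores, axis=-1) @ v

key = jax.random.PRNGKey(0)
k1, k2, k3, k4 = jax.random.split(key, 4)
x  = jax.random.normal(k1, (8, 16))    # 8 tokens, model dim 16
wq = jax.random.normal(k2, (16, 16))
wk = jax.random.normal(k3, (16, 16))
wv = jax.random.normal(k4, (16, 16))
print(self_attention(x, wq, wk, wv).shape)  # (8, 16)
```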
Great work building on top of MAX and Mojo, bringing a cool new approach to AI training into the Modular ecosystem. Amazing work @tilli_fe!
JAX vs. Nabla: An initial speed comparison (on CPU) for training an MLP on a simple regression task. The full notebook: https://t.co/yuwuMNyHxj
Automatic Vectorization (vmap) in action: Write a program once, then use it for any batched input. If applied correctly, this can greatly reduce the number of for-loops and speed up a program. Learn more about visualizing program transformations: https://t.co/qK2yPpkqly
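The pattern described above, shown here with JAX's vmap (the semantics Nabla's vmap follows, per the posts below): write the function for a single example, then lift it over a batch axis without writing the loop yourself.

```python
import jax
import jax.numpy as jnp

def predict(w, x):
    # Written for a single example x of shape (3,).
    return jnp.tanh(w @ x)

w = jnp.ones((4, 3))
xs = jnp.ones((32, 3))           # a batch of 32 examples

# vmap maps predict over axis 0 of xs while keeping w fixed (in_axes=(None, 0)).
batched = jax.vmap(predict, in_axes=(None, 0))
print(batched(w, xs).shape)      # (32, 4)
```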
JAX vs. Nabla: An initial speed comparison (on CPU) for training an MLP on a simple regression task. The full notebook: https://t.co/yuwuMNyHxj
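Roughly what the JAX side of such a benchmark looks like (a stand-in sketch, not the linked notebook): a small MLP fitted to a 1D regression target with jit-compiled gradient steps.

```python
import jax
import jax.numpy as jnp

def init(key, sizes=(1, 64, 64, 1)):
    # One (weights, bias) pair per layer.
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (m, n)) * 0.1, jnp.zeros(n))
            for k, m, n in zip(keys, sizes[:-1], sizes[1:])]

def mlp(params, x):
    for w, b in params[:-1]:
        x = jnp.tanh(x @ w + b)
    w, b = params[-1]
    return x @ w + b

def loss(params, x, y):
    return jnp.mean((mlp(params, x) - y) ** 2)

@jax.jit
def step(params, x, y, lr=1e-2):
    grads = jax.grad(loss)(params, x, y)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

key = jax.random.PRNGKey(0)
x = jnp.linspace(-3, 3, 256).reshape(-1, 1)
y = jnp.sin(x)                       # simple regression target
params = init(key)
for _ in range(1000):
    params = step(params, x, y)
print(loss(params, x, y))
```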
I am reverse-engineering JAX from scratch in Python, but instead of using XLA, I am using NumPy @numpy_team and MAX @Modular for CPU/GPU acceleration. Working so far: function transforms like vmap, grad, and jit, some built-in nn/ modules, and pip install. https://t.co/YPS3CSm67d
github.com
Machine Learning library for the emerging Mojo/Python ecosystem - nabla-ml/nabla
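The payoff of JAX-style function transforms is that they compose. Shown here in JAX notation, which the post above says Nabla mirrors (any Nabla-specific spelling would be an assumption, so it is not shown): per-example gradients come from stacking grad, vmap, and jit.

```python
import jax
import jax.numpy as jnp

def loss(w, x, y):
    # Scalar loss for a single example.
    return (jnp.dot(w, x) - y) ** 2

# grad -> per-example gradient, vmap -> batch it, jit -> compile the result.
per_example_grads = jax.jit(jax.vmap(jax.grad(loss), in_axes=(None, 0, 0)))

w = jnp.ones(3)
xs = jnp.ones((8, 3))
ys = jnp.zeros(8)
print(per_example_grads(w, xs, ys).shape)  # (8, 3): one gradient per example
```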
Say hello to the Modular forum, home to many Mojo-related discussions: https://t.co/MEwpY9xLMq
forum.modular.com
Today we are releasing a research preview of NABLA - a framework for differentiable programming in Mojo. Nabla aims to bring to Mojo what parts of JAX and PyTorch brought to Python: a high-level API...
There are many rough edges and features that still need to be implemented (operator coverage, GPU support, etc.), but the core AD engine has proven effective in initial tests. Thank you for checking out Nabla!
Unlike previous attempts (e.g. Endia) that failed by trying to rebuild the entire stack, Nabla was designed from the ground up as a direct wrapper around Mojo and MAX, inheriting their performance guarantees. Code, examples, and our roadmap:
github.com
Machine Learning library for the emerging Mojo/Python ecosystem - nabla-ml/nabla
Ok, but what even is Differentiable Programming? How does Automatic Differentiation work? https://t.co/08xBy83bGT
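Not from the linked post, but a compact way to see the core idea of reverse-mode AD: every intermediate value records its parents and a local rule for pushing gradients back to them, and backward() replays those rules in reverse topological order.

```python
class Value:
    """A scalar that remembers its parents and a local backward rule (reverse-mode AD)."""
    def __init__(self, data, parents=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents
        self._backward = lambda: None   # pushes self.grad onto the parents

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # Topologically order the graph, then apply each local rule in reverse.
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward()

# f(x, y) = x*y + x  =>  df/dx = y + 1 = 5 and df/dy = x = 3 at (x, y) = (3, 4).
x, y = Value(3.0), Value(4.0)
f = x * y + x
f.backward()
print(x.grad, y.grad)   # 5.0 3.0
```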
Introducing NABLA - Differentiable Programming in Mojo (a research preview). Nabla aims to bring to Mojo what parts of JAX/PyTorch brought to Python: a high-level API for general program transformations, including vmap, jit, vjp, jvp & grad. Learn more:
nablaml.com
Automatic differentiation, JIT compilation, and GPU acceleration in Python with Mojo and MAX.
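For the two transforms in that list that are least familiar, here is the JAX counterpart of what the preview describes (shown in JAX because its API is documented; NABLA's Mojo spelling may differ): jvp pushes a tangent vector forward through a function, vjp pulls a cotangent back, and grad is a vjp of a scalar function seeded with 1.0.

```python
import jax
import jax.numpy as jnp

def f(x):
    return jnp.sin(x) * x      # elementwise R^n -> R^n

x = jnp.array([0.5, 1.0, 2.0])
v = jnp.ones_like(x)

# Forward mode: directional derivative J(x) @ v alongside the primal output.
y, jvp_out = jax.jvp(f, (x,), (v,))

# Reverse mode: a function that computes u @ J(x) for any cotangent u.
y2, f_vjp = jax.vjp(f, x)
(vjp_out,) = f_vjp(v)

# For this elementwise f the Jacobian is diagonal, so both results match.
print(jvp_out, vjp_out)
```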