Nabla

@nablaml

102 Followers · 55 Following · 2 Media · 16 Statuses

Scientific Computing Software in Python+Mojo: https://t.co/mT3sLuIXjs

Joined July 2024
@nablaml
Nabla
4 months
Nabla is now on GitHub Sponsors 🥳 We are building a fast, customizable & educative ML framework. Our roadmap: Automated ND-parallelism, reviving the Nabla Mojo API, and one more thing to bring AI training to non-coders... Help us build the future of ML:
github.com
Support nabla-ml's open source work
@nablaml
Nabla
4 months
Here is a glimpse into the newly added GPU support for Nabla 🤗: Automatic device placement, custom Mojo kernel integration, and huge speedups on modern @AMD and @nvidia hardware. Shout out to @Modular and @LambdaAPI for making this all possible! More: https://t.co/oWWcvHGEkr
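A minimal sketch of what the "write it once, let the runtime place it" workflow described above could look like from user code. The import, nb.tanh, and nb.randn are assumed names, and the automatic-placement behaviour is taken from the post's description rather than verified API; jit is one of the transforms Nabla lists elsewhere.

import nabla as nb                       # assumed import name

def predict(w, x):
    # ordinary array code, with no explicit device annotations
    return nb.tanh(x @ w)                # nb.tanh is an assumed helper name

w = nb.randn((512, 512))                 # nb.randn is an assumed initializer
x = nb.randn((1024, 512))

fast_predict = nb.jit(predict)           # jit is one of Nabla's advertised transforms
y = fast_predict(w, x)                   # per the post, CPU/GPU placement is automatic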
@tilli_fe
TilliFe
4 months
Visualizing SGD doing its thing 🥾 Nabla + MAX + Matplotlib + NumPy: https://t.co/i0HU3i8qvW
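For context, here is a self-contained NumPy + Matplotlib sketch of this kind of visualization (a gradient-descent trajectory plotted over a loss surface). It is not the linked notebook and uses no Nabla/MAX code; the quadratic loss is chosen purely for readability.

import numpy as np
import matplotlib.pyplot as plt

def grad(w):
    # gradient of the quadratic bowl loss(w) = 0.5 * (3*w0^2 + w1^2)
    return np.array([3.0 * w[0], w[1]])

w = np.array([-2.5, 2.0])              # starting point
lr = 0.1
path = [w.copy()]
for _ in range(50):                    # plain gradient-descent steps
    w = w - lr * grad(w)
    path.append(w.copy())
path = np.array(path)

# contour plot of the loss surface with the optimization path drawn on top
xs, ys = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
zs = 0.5 * (3.0 * xs**2 + ys**2)
plt.contour(xs, ys, zs, levels=20)
plt.plot(path[:, 0], path[:, 1], "o-", markersize=3)
plt.title("Gradient descent on a quadratic bowl")
plt.show()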
@tilli_fe
TilliFe
6 months
I am starting a (notebook) series on training transformers with Nabla. Part 1 is a side-by-side (Nabla vs. JAX) toy implementation from scratch: https://t.co/RxSX9urQEi
@clattner_llvm
Chris Lattner
6 months
Great work building on top of MAX and Mojo, bringing a cool new approach to AI training into the Modular ecosystem. Amazing work @tilli_fe!
@tilli_fe
TilliFe
6 months
JAX vs. Nabla: An initial speed comparison (on cpu) for training an MLP on a simple regression task. 🤗 The full Notebook: https://t.co/yuwuMNyHxj
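A rough sketch of how such a CPU timing comparison can be set up. The code below uses the real JAX API; the Nabla side is only indicated in a trailing comment, since its function names (nb.grad, nb.jit) are assumed to mirror JAX per these posts and are not copied from the original notebook.

import time
import jax
import jax.numpy as jnp

def mse_loss(w, x, y):
    # a deliberately tiny one-layer model so the example stays readable
    return jnp.mean((jnp.tanh(x @ w) - y) ** 2)

key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (1024, 16))
y = jnp.sin(x[:, :1])
w = jax.random.normal(key, (16, 1))

step = jax.jit(jax.grad(mse_loss))     # compiled gradient of the loss w.r.t. w
step(w, x, y).block_until_ready()      # warm-up so compilation is not timed

t0 = time.perf_counter()
for _ in range(200):
    step(w, x, y).block_until_ready()
print(f"JAX: {(time.perf_counter() - t0) / 200 * 1e3:.3f} ms/step")

# The Nabla side would mirror this with `import nabla as nb` and
# nb.jit(nb.grad(mse_loss)) (assumed names), keeping the loss unchanged.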
@tilli_fe
TilliFe
6 months
Automatic Vectorization (vmap) in action: Write a program once, then use it for any batched input. If applied correctly, this can greatly reduce the number of for-loops and speed up a program. 🎓 Learn more about visualizing program transformations: https://t.co/qK2yPpkqly
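A small sketch of the write-once-then-batch pattern described above. It assumes Nabla's vmap follows the JAX-style signature (including the in_axes argument); the import and the nb.tanh/nb.randn helpers are assumptions, not verbatim API.

import nabla as nb                      # assumed import name

def predict(w, x):
    # written for a single example: x has shape (features,)
    return nb.tanh(w @ x)               # nb.tanh is an assumed helper name

w = nb.randn((4, 3))                    # assumed initializer name
xs = nb.randn((128, 3))                 # a whole batch of inputs

# map over the leading axis of xs only; in_axes follows the JAX convention
batched_predict = nb.vmap(predict, in_axes=(None, 0))
ys = batched_predict(w, xs)             # shape (128, 4), no Python for-loop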
@tilli_fe
TilliFe
6 months
JAX vs. Nabla: An initial speed comparison (on cpu) for training an MLP on a simple regression task. 🤗 The full Notebook: https://t.co/yuwuMNyHxj
@tilli_fe
TilliFe
6 months
I am reverse-engineering JAX from scratch in Python, but instead of using XLA, I am using NumPy @numpy_team and MAX @Modular for CPU/GPU acceleration. 🐍🫦 Working: Function transforms like vmap, grad, jit etc., some built-in nn/ modules, pip-install. https://t.co/YPS3CSm67d
github.com
Machine Learning library for the emerging Mojo/Python ecosystem - nabla-ml/nabla
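A hedged quick-start sketch based on the features listed above (pip install plus the grad/jit transforms). The package name and the nb.sum/nb.array helpers are assumptions; grad and jit are the transforms the post names.

#   pip install nabla-ml                # assumed package name
import nabla as nb

def f(x):
    return nb.sum(x * x)                # simple scalar-valued function

x = nb.array([1.0, 2.0, 3.0])
print(nb.grad(f)(x))                    # expected gradient: [2.0, 4.0, 6.0]
print(nb.jit(f)(x))                     # same function, traced and compiled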
@tilli_fe
TilliFe
7 months
Can you guess the output shape? 🤨
@nablaml
Nabla
7 months
There are many rough edges and features that still need to be implemented (operator coverage, GPU support, etc.), but the core AD engine has proven effective in initial tests. Thank you for checking out Nabla!
@nablaml
Nabla
7 months
Unlike previous attempts (e.g. Endia) that failed by trying to rebuild the entire stack, Nabla was designed from the ground up as a direct wrapper around Mojo and MAX, so it inherits their performance guarantees. Code, examples and our roadmap:
github.com
Machine Learning library for the emerging Mojo/Python ecosystem - nabla-ml/nabla
@nablaml
Nabla
7 months
Ok, but what even is Differentiable Programming? How does Automatic Differentiation work? https://t.co/08xBy83bGT
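As a self-contained illustration of the idea, here is forward-mode automatic differentiation with dual numbers in plain Python: each value carries its derivative, and the sum/product rules propagate it through ordinary code. This shows the concept only and says nothing about how Nabla itself is implemented.

class Dual:
    """A value paired with its derivative, propagated through ordinary code."""
    def __init__(self, value, deriv):
        self.value = value               # f(x)
        self.deriv = deriv               # f'(x)

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other, 0.0)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other, 0.0)
        # product rule: (f * g)' = f' * g + f * g'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

def f(x):
    return x * x * x + x * 2             # f(x) = x^3 + 2x, plain Python code

x = Dual(3.0, 1.0)                       # seed the derivative dx/dx = 1
y = f(x)
print(y.value, y.deriv)                  # 33.0 and f'(3) = 3*9 + 2 = 29.0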
@nablaml
Nabla
7 months
Introducing NABLA - Differentiable Programming in Mojo (A Research Preview). Nabla aims to bring to Mojo what parts of JAX/PyTorch brought to Python: a high-level API for general program transformations, including vmap, jit, vjp, jvp & grad. Learn more:
nablaml.com
Automatic differentiation, JIT compilation, and GPU acceleration in Python with Mojo and MAX.
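A hedged sketch of the listed transforms beyond grad (jvp for forward mode, vjp for reverse mode), assuming they follow the JAX-style signatures these posts compare against; the import and every helper name here are assumptions rather than confirmed Nabla API.

import nabla as nb                       # assumed import name

def f(x):
    return nb.sum(nb.sin(x))             # nb.sum / nb.sin are assumed helpers

x = nb.array([0.0, 1.0, 2.0])

# forward mode: push a tangent vector through f
y, y_dot = nb.jvp(f, (x,), (nb.array([1.0, 0.0, 0.0]),))

# reverse mode: get a pullback, then pull a scalar cotangent back to the input
y, pullback = nb.vjp(f, x)
(x_bar,) = pullback(nb.array(1.0))       # x_bar should equal cos(x) elementwise

print(y, y_dot, x_bar)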