
onnxruntime (@onnxruntime)
Cross-platform training and inferencing accelerator for machine learning models.
Joined September 2018
Followers: 1K · Following: 453 · Media: 45 · Statuses: 283
Run PyTorch models in the browser, on mobile and desktop, with #onnxruntime, in your language and development environment of choice 🚀
RT @Szymon_Lorenz: Developers, don't overlook the power of Swift Package Manager! It simplifies dependency management and promotes modulari….
#ONNX Runtime saved the day with our interoperability and ability to run locally on-client and/or in the cloud! Our lightweight solution gave them the performance they needed, with quantization & configuration tooling. Learn how they achieved this in this blog!
Give yourself a treat (like this adorable 🐶 deserves) and read this blog on how to use #ONNX Runtime on #Android!
Quick intro to @onnxruntime and applying #machinelearning on Android.
📢 This new blog by @tryolabs is awesome! Learn how to fine-tune an NLP model and accelerate it with #ONNXRuntime!
Maximize the power of LLMs! 💬 Our step-by-step guide covers fine-tuning for specific NLP tasks w/ GPT-3, OPT, & T5. We shared everything from building custom datasets to optimizing inference time with @huggingface 🤗 Optimum and @onnxai. 🚀 #LargeLanguageModels
Join us live TODAY! We will be talking to Akhila Vidiyala and Devang Aggarwal on the AI Show with Cassie! We will show how developers can use #huggingface #optimum #Intel to quantize models and then use #OpenVINO with #ONNXRuntime to accelerate performance. 👇
👀
🚀 Want easier and faster training for your models on GPUs? Thanks to the @onnxruntime backend, 🤗 Optimum can help you achieve 39%-130% acceleration with just a few lines of code changed. Check out our benchmark results NOW! 👀
RT @onnxai: We are seeking your input to shape the ONNX roadmap! Proposals are being collected until January 24, 2023 and will be discussed…
RT @Jhuaplin: Imagine the frustration of, after applying optimization tricks, finding that the data copying to GPU slows down your "MUST-BE…
RT @efxmarty: Want to use TensorRT as your inference engine for its speedups on GPU but don't want to go into the compilation hassle? We've…
📣 The new version of #ONNXRuntime, v1.13.0, was just released! Check out the release notes and video from the engineering team to learn more about what's in this release! 📝📽️
👀
Next up from #ONNXCommunityDay: Accelerating Machine Learning w/ @ONNXRuntime & @HuggingFace! In this session, @jeffboudier will show the latest solutions from #HuggingFace to deploy models at scale w/ great performance leveraging #ONNX & #ONNXRuntime.
RT @loretoparisi: Finally tokenization with Sentence Piece BPE now works as expected in #NodeJS #JavaScript with the tokenizers library 🚀! Now…
RT @anton_lozhkov: 🏭 The hardware optimization floodgates are open! 🔥 Diffusers 0.3.0 supports an experimental ONNX exporter and pipeline f…
RT @OverNetE: 💡 Senior Research & Development Engineer at @deltatre, @tinux80 is also a #MicrosoftMVP and an Intel Software Innovator. 📊 Don't miss…
RT @exendahal: @jfversluis What about a video on ONNX Runtime? Here is the official documentation and a MAUI example…
RT @OpenAtMicrosoft: The natural language processing library Apache OpenNLP is now integrated with ONNX Runtime! Get the details and a tuto…
In this article, a community member used #ONNXRuntime to try out a GPT-2 model that generates English sentences, from the Ruby language: