onnxruntime

@onnxruntime

1K Followers · 453 Following · 45 Media · 283 Statuses

Cross-platform training and inferencing accelerator for machine learning models.

Joined September 2018
@onnxruntime
onnxruntime
1 year
ONNX Runtime & DirectML now support Phi-3 mini models across platforms & devices! Plus, the new ONNX Runtime Generate() API simplifies LLM integration into your apps. Try Phi-3 on your favorite hardware! Read more: #ONNX #DirectML #Phi3
3
14
30
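The Generate() API mentioned above drives a token-by-token decode loop. A minimal Python sketch follows; the `og.*` call names and the model directory are assumptions based on early releases of the `onnxruntime-genai` package (check its docs for your installed version), and the Phi-3 chat template is shown as a plain string helper:

```python
# Phi-3 mini expects its chat template wrapped around the user message.
PHI3_TEMPLATE = "<|user|>\n{message}<|end|>\n<|assistant|>\n"

def build_prompt(message: str) -> str:
    """Wrap a user message in the Phi-3 chat template."""
    return PHI3_TEMPLATE.format(message=message)

def generate(model_dir: str, message: str, max_length: int = 256) -> str:
    """Sketch of a Generate() API loop; API names are assumptions."""
    import onnxruntime_genai as og  # pip install onnxruntime-genai (assumed)
    model = og.Model(model_dir)      # e.g. a Phi-3 mini ONNX model folder
    tokenizer = og.Tokenizer(model)
    params = og.GeneratorParams(model)
    params.set_search_options(max_length=max_length)
    generator = og.Generator(model, params)
    generator.append_tokens(tokenizer.encode(build_prompt(message)))
    while not generator.is_done():   # decode one token per iteration
        generator.generate_next_token()
    return tokenizer.decode(generator.get_sequence(0))
```

The same loop shape applies to other supported LLMs; only the chat template changes.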
@onnxruntime
onnxruntime
2 years
Run PyTorch models in the browser, on mobile and desktop, with #onnxruntime, in your language and development environment of choice 🚀
0
5
16
@onnxruntime
onnxruntime
2 years
RT @Szymon_Lorenz: Developers, don't overlook the power of Swift Package Manager! It simplifies dependency management and promotes modulari…
0
1
0
@onnxruntime
onnxruntime
2 years
#ONNX Runtime saved the day with our interoperability and our ability to run locally on the client and/or in the cloud! Our lightweight solution gave them the performance they needed, with quantization & configuration tooling. Learn how they achieved this in this blog!
0
5
9
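The quantization mentioned above maps float weights to 8-bit integers via a scale and zero point. A pure-Python sketch of the affine scheme follows; this illustrates the arithmetic only, not ONNX Runtime's actual implementation (which lives in `onnxruntime.quantization`):

```python
def quantize(values, num_bits=8):
    """Affine-quantize floats to signed num_bits integers.

    Returns (quantized ints, scale, zero_point) such that
    value ≈ (q - zero_point) * scale.
    """
    qmin, qmax = -(1 << (num_bits - 1)), (1 << (num_bits - 1)) - 1
    lo, hi = min(min(values), 0.0), max(max(values), 0.0)  # range must cover 0
    scale = (hi - lo) / (qmax - qmin) or 1.0
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map quantized integers back to approximate floats."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.0, 0.0, 0.5, 2.0]
q, scale, zp = quantize(weights)
restored = dequantize(q, scale, zp)
print(restored)  # each value within one scale step of the original
```

Int8 weights take 4x less space than float32, at the cost of at most half a scale step of rounding error per value.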
@onnxruntime
onnxruntime
2 years
Give yourself a treat (like this adorable 🐶 deserves) and read this blog on how to use #ONNX Runtime on #Android!
@surfaceduodev
SurfaceDuoDev
2 years
Quick intro to @onnxruntime and applying #machinelearning on Android.
0
1
0
@onnxruntime
onnxruntime
2 years
📢 This new blog by @tryolabs is awesome! Learn how to fine-tune an NLP model and accelerate it with #ONNXRuntime!
@tryolabs
Tryolabs
2 years
Maximize the power of LLMs! 💬 Our step-by-step guide covers fine-tuning for specific NLP tasks w/ GPT-3, OPT, & T5. We shared everything from building custom datasets to optimizing inference time with @huggingface 🤗 Optimum and @onnxai 🚀 #LargeLanguageModels
0
2
4
@onnxruntime
onnxruntime
2 years
Join us live TODAY! We will be talking to Akhila Vidiyala and Devang Aggarwal on the AI Show with Cassie! We will show how developers can use #huggingface #optimum #Intel to quantize models and then use the #OpenVINO execution provider for #ONNXRuntime to accelerate performance. 👇
1
1
6
@onnxruntime
onnxruntime
2 years
In this blog, we discuss how to make huge models like #BERT smaller and faster with #Intel #OpenVINO, the Neural Network Compression Framework (NNCF), and #ONNX Runtime through #Azure! 👇
0
2
5
@onnxruntime
onnxruntime
2 years
👀
@Jhuaplin
Jingya Huang
2 years
🚀 Want easier and faster training for your models on GPUs? Thanks to the @onnxruntime backend, 🤗 Optimum can help you achieve 39% - 130% acceleration with just a few lines of code change. Check out our benchmark results NOW! 👀
0
2
1
@onnxruntime
onnxruntime
3 years
RT @onnxai: We are seeking your input to shape the ONNX roadmap! Proposals are being collected until January 24, 2023 and will be discussed…
0
3
0
@onnxruntime
onnxruntime
3 years
RT @Jhuaplin: Imagine the frustration of, after applying optimization tricks, finding that the data copying to GPU slows down your "MUST-BE…
0
15
0
@onnxruntime
onnxruntime
3 years
RT @efxmarty: Want to use TensorRT as your inference engine for its speedups on GPU but don't want to go into the compilation hassle? We've…
0
19
0
@onnxruntime
onnxruntime
3 years
📣 The new version of #ONNXRuntime, v1.13.0, was just released! Check out the release notes and video from the engineering team to learn more about what's in this release! 📝📽️
2
1
4
@onnxruntime
onnxruntime
3 years
👀
@onnxai
ONNX
3 years
Next up from #ONNXCommunityDay: Accelerating Machine Learning w/ @ONNXRuntime & @HuggingFace! In this session, @jeffboudier will show the latest solutions from #HuggingFace to deploy models at scale w/ great performance leveraging #ONNX & #ONNXRuntime.
0
1
3
@onnxruntime
onnxruntime
3 years
RT @loretoparisi: Finally tokenization with SentencePiece BPE now works as expected in #NodeJS #JavaScript with the tokenizers library 🚀! Now…
0
5
0
@onnxruntime
onnxruntime
3 years
RT @anton_lozhkov: 🏭 The hardware optimization floodgates are open! 🔥 Diffusers 0.3.0 supports an experimental ONNX exporter and pipeline f…
0
16
0
@onnxruntime
onnxruntime
3 years
RT @OverNetE: 💡 A Senior Research & Development Engineer at @deltatre, @tinux80 is also a #MicrosoftMVP and an Intel Software Innovator. 📊 Don't miss…
0
2
0
@onnxruntime
onnxruntime
3 years
RT @exendahal: @jfversluis What about a video on ONNX Runtime? Here is the official documentation and a MAUI example…
0
2
0
@onnxruntime
onnxruntime
3 years
RT @OpenAtMicrosoft: The natural language processing library Apache OpenNLP is now integrated with ONNX Runtime! Get the details and a tuto…
0
6
0
@onnxruntime
onnxruntime
3 years
In this article, a community member used #ONNXRuntime from the Ruby language to try out a GPT-2 model that generates English sentences:
1
3
8