Dong He (@dongheuw)
Scaling LLM training @meta superintelligence. PhD @uwcse.
Menlo Park, CA · Joined September 2022
174 Followers · 158 Following · 1 Media · 14 Statuses
Today is the start of a new era of natively multimodal AI innovation. Today, we’re introducing the first Llama 4 models: Llama 4 Scout and Llama 4 Maverick — our most advanced models yet and the best in their class for multimodality. Llama 4 Scout • 17B-active-parameter model
832 replies · 2K reposts · 13K likes
Benchmarks today have become less informative to the communities they are meant to serve: researchers, developers, and users. TaskMeAnything automatically generates benchmarks based on the user's needs and application.
Having trouble finding a benchmark for your use case? Introducing TaskMeAnything, a benchmark generation engine that creates VQA benchmarks on demand for assessing multimodal language models like GPT-4o. Website: https://t.co/USdD8a8mvK
1 reply · 4 reposts · 47 likes
AI2 presents Task Me Anything: a benchmark generation engine that produces benchmarks tailored to a user's needs. proj: https://t.co/t535rLf2ek abs: https://t.co/cD6X2sCw3n
1 reply · 52 reposts · 206 likes
Next week I will be presenting Tensor Query Processor (TQP) at the PyTorch Conference! https://t.co/RkmScf37ou
0 replies · 2 reposts · 18 likes
We were thrilled to welcome industry partners & friends to #UWAllen’s research showcase last week featuring a talk by #MacFellow @YejinChoinka, the People’s Choice Awards, and @MadronaVentures giving the coveted Madrona Prize to @uw_db! #MondayMotivation https://t.co/nTmEL7sTRr
0 replies · 5 reposts · 13 likes
I also see a full paper on this new "tensor query processing" engine at VLDB'22 itself: https://t.co/awOv3Kup4A Cool follow up to HummingBird. Interesting to see it can beat even BlazingSQL on TPC-H! Awesome work @dongheuw @scnakandala @MatteInter @carlo_curino & team. 👏
1 reply · 1 repost · 7 likes
I’ll present my work on accelerating a common class of queries for deep neural network interpretation, interpretation-by-example queries, at #VLDB2022 #VLDB22 in person at 10:30 am on Thursday in C2.2. Excited to see you there! Also, stop by my poster at the poster session!
Vol. 15, No. 1 → DeepEverest: Accelerating Declarative Top-K Queries for Deep Neural Network Interpretation
0 replies · 1 repost · 9 likes
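As a hypothetical illustration of the kind of "interpretation by example" query DeepEverest targets, here is a naive PyTorch sketch of a top-k-by-activation query over stored activations. The function name, shapes, and data are made up; DeepEverest's contribution is answering such queries declaratively and efficiently, which this sketch does not attempt.

```python
# Hypothetical, naive version of an interpretation-by-example top-k query:
# find the k inputs whose activation of one neuron is highest.
import torch

def topk_by_neuron(activations: torch.Tensor, neuron: int, k: int = 5):
    """activations: [num_examples, num_neurons] matrix of stored layer activations."""
    values, indices = torch.topk(activations[:, neuron], k)  # top-k over one activation column
    return indices, values  # example ids and their activation values

acts = torch.rand(10_000, 512)              # made-up activation store
ids, vals = topk_by_neuron(acts, neuron=42, k=5)
print(ids, vals)
```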
Thus, TQP free-rides on billions of dollars of hardware/software investments for ML. If you don't believe this could work, we also have a demo for TQP on Wednesday! We'll show TQP's integration with TensorBoard, and how TQP accelerates ML + SQL queries end-to-end on GPU. (3/3)
0 replies · 0 reposts · 2 likes
The core idea is to compile SQL queries into tensor programs. We design and implement a query processor, TQP, on top of PyTorch so that SQL queries can be executed on CPUs, GPUs, and any other hardware supported by tensor computation runtimes like PyTorch, ONNX, etc. (2/3)
1 reply · 0 reposts · 2 likes
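To make the (2/3) idea above concrete, here is a minimal sketch, not TQP's actual compiler: a hand-written tensor program for a simple SELECT ... WHERE ... query, where the predicate becomes a boolean mask and the aggregate becomes a reduction, so the same code runs unchanged on CPU or GPU. All column names and data below are invented.

```python
# Illustrative sketch only (not TQP): the core idea of mapping a SQL query
# onto tensor operations so it runs wherever PyTorch runs.
# Query: SELECT AVG(price) FROM orders WHERE qty > 10 AND region = 3
import torch

def run_query(qty: torch.Tensor, region: torch.Tensor, price: torch.Tensor) -> torch.Tensor:
    mask = (qty > 10) & (region == 3)   # WHERE clause as a boolean mask
    selected = price[mask]              # predicate-driven selection
    return selected.mean()              # AVG aggregate as a tensor reduction

device = "cuda" if torch.cuda.is_available() else "cpu"
qty    = torch.randint(0, 50, (1_000_000,), device=device)
region = torch.randint(0, 5,  (1_000_000,), device=device)
price  = torch.rand(1_000_000, device=device) * 100
print(run_query(qty, region, price))    # same code path on CPU or GPU
```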
Looking forward to our in-person presentation at #VLDB2022 #VLDB22! @MatteInter and I will present the world's first query processor on tensor computation runtimes, Tensor Query Processor (TQP), on Tuesday morning. Hope to see you there! cc @GraySystemsLab @uwcse (1/3)
1 reply · 3 reposts · 11 likes