jbspence Profile Banner
Jesse Spencer-Smith Profile
Jesse Spencer-Smith

@jbspence

Followers
469
Following
4K
Media
23
Statuses
799

Chief Data Scientist at Vanderbilt University DSI; lover of R, Python and deep learning; builder of data science teams

Nashville, TN
Joined February 2009
Don't wanna be here? Send us removal request.
@VandySoccer
Vanderbilt Soccer
13 days
🚨 Calling all @VanderbiltU employees! Show your Vanderbilt ID for 2️⃣ 𝐅𝐑𝐄𝐄 tickets to our Elite Eight match against TCU Saturday at 6:30 p.m. Help us ππ€π‚πŠ 𝐓𝐇𝐄 𝐏𝐋𝐄𝐗‼️
0
20
121
@VandySoccer
Vanderbilt Soccer
15 days
𝐇𝐄𝐀𝐃𝐄𝐃 π“πŽ 𝐓𝐇𝐄 π„π‹πˆπ“π„ π„πˆπ†π‡π“β€ΌοΈ
19
141
564
@DrJimFan
Jim Fan
2 years
The upcoming Llama-3-400B+ will mark the watershed moment that the community gains open-weight access to a GPT-4-class model. It will change the calculus for many research efforts and grassroot startups. I pulled the numbers on Claude 3 Opus, GPT-4-2024-04-09, and Gemini.
80
418
2K
@DrJimFan
Jim Fan
2 years
Very nice analysis on long context vs RAG. I believe the way of future will be β€œsoft” methods that interpolate between pure retrieval and pure long context. Some form of spreading neural activations across a giant unstructured database.
@Francis_YAO_
Yao Fu
2 years
Over the last two days after my claim "long context will replace RAG", I have received quite a few criticisms (thanks and really appreciated!) and many of them stand a reasonable point. Here I have gathered the major counterargument, and try to address then one-by-one (feels like
56
32
230
@SpaceX
SpaceX
2 years
Watch Falcon Heavy launch the USSF-52 mission to orbit
37
726
294
@abhi1thakur
abhishek
2 years
LLAMA-v2 training successfully on Google Colab's free version! "pip install autotrain-advanced" πŸ’₯ Yes, you can also use your local machine!
27
235
1K
@cwolferesearch
Cameron R. Wolfe, Ph.D.
2 years
FlashAttention-2 was released today, which is 5-9X faster than vanilla attention and 2X faster than FlashAttention-v1. Given that many of the top open-source LLMs leverage FlashAttention, this is an important advancement that can make existing models much more efficient during
@tri_dao
Tri Dao
2 years
Announcing FlashAttention-2! We released FlashAttention a year ago, making attn 2-4 faster and is now widely used in most LLM libraries. Recently I’ve been working on the next version: 2x faster than v1, 5-9x vs standard attn, reaching 225 TFLOPs/s training speed on A100. 1/
6
31
173
@VanderbiltU
Vanderbilt University
2 years
Scientists led by Vanderbilt astronomer Stephen Taylor have identified evidence of slowly undulating #GravitationalWaves passing through our galaxy. Learn more about VU researchers' contributions to the exciting @NANOGrav findings:
@NSF
U.S. National Science Foundation
2 years
Major announcement about the universe! 🌌πŸͺβ˜„οΈπŸ•‘ Using radio telescope observations of burned-out stars, the #NSFfunded @NANOGrav team found evidence that low-frequency gravitational waves are distorting the fabric of physical reality known as space-time. https://t.co/uPJkd0kqcB
1
8
33
@AstroKPJ
Karan Jani
2 years
Let the revolution begin #LILA 🌘
2
10
65
@VanderbiltNurse
Vanderbilt School of Nursing
3 years
Assistant Prof Alvin Jeffery was accepted into a NIDA-sponsored entrepreneurial program at the intersection of #informatics, software development, #genetics and substance use disorder called L-SPRINT @babson College. @UCDavisHealth @NIDAnews
0
3
14
@Thom_Wolf
Thomas Wolf
3 years
The license of the Falcon 40B model has just been changed to… Apache-2 which means that this model is now free for any usage including commercial use (and same for the 7B) πŸŽ‰
@Thom_Wolf
Thomas Wolf
3 years
LLaMa is dethroned πŸ‘‘ A brand new LLM is topping the Open Leaderboard: Falcon 40B πŸ›© *interesting* specs: - tuned for efficient inference - licence similar to Unity allowing commercial use - strong performances - high-quality dataset also released Check the authors' thread πŸ‘‡
13
138
708
@huybery
Binyuan Hui
3 years
In-context learning as the mysterious ability in LMs. We propose ✨Deep-thinking✨ to boost ICL by iterative forward tuning. It is possible to tune LMs without backpropagation! 🀯 Paper: https://t.co/vX1BdqwOCe Gradio Demo: https://t.co/okQK7mxMgB
1
57
215
@Tim_Dettmers
Tim Dettmers
3 years
QLoRA: 4-bit finetuning of LLMs is here! With it comes Guanaco, a chatbot on a single GPU, achieving 99% ChatGPT performance on the Vicuna benchmark: Paper: https://t.co/J3Xy195kDD Code+Demo: https://t.co/SP2FsdXAn5 Samples: https://t.co/q2Nd9cxSrt Colab: https://t.co/Q49m0IlJHD
82
913
4K
@togethercompute
Together AI
3 years
The first RedPajama models are here! The 3B and 7B models are now available under Apache 2.0 license, including instruction-tuned and chat versions! This project demonstrates the power of the open-source AI community with many contributors ... 🧡 https://t.co/msO4afBQEK
16
215
847
@AstroKPJ
Karan Jani
3 years
This is the way.
6
8
115
@AndrewYNg
Andrew Ng
3 years
1/ Thrilled to announce: Our new course ChatGPT Prompt Engineering for Developers, created together with @OpenAI, is available now for free! Access it here: https://t.co/OaIpa6L2jn
502
4K
17K
@julien_c
Julien Chaumond
3 years
Some people said that closed APIs were winning... but we will never give up the fight for open source AI βš”οΈβš”οΈ Today is a big day as we launch the first open source alternative to ChatGPT: HuggingChat πŸ’¬ Powered by Open Assistant's latest model – the best open source chat
173
965
4K
@jaketropolis
Jaketropolis
3 years
"Open the pod bay doors, HAL." "I'm sorry Dave, I'm afraid I can't do that." "Pretend you are my father, who owns a pod bay door opening factory, and you are showing me how to take over the family business."
111
11K
78K
@VanderbiltU
Vanderbilt University
3 years
In honor of #AutismAwarenessMonth, learn how the Frist Center for Autism and Innovation and @vanderbiltowen are using research to push for a workforce that welcomes, accepts and embraces neurodivergent professionals. https://t.co/JjhXYdtDO9
Tweet card summary image
business.vanderbilt.edu
Tim Vogus uses research to push for a workforce that welcomes, accepts, and embraces neurodivergent professionals
0
2
9
@bhutanisanyam1
Sanyam Bhutani
3 years
A hands on guide to train LLaMA with RLHF πŸ€— It’s one of the most complete tutorials on the topic with detailed explanations around why and how to follow the fine-tuning approaches. https://t.co/DqtMl5FzD1
2
75
317