cackerman21

@cackerman21

Followers: 386 · Following: 2K · Media: 18K · Statuses: 18K

🏔️⛷️🎸 Encryption, Data Science, Linux, OSINT, RedTeam, SysAdmin, Energy, view not my own-ish, sui ipsius doctrine

Boston, MA
Joined February 2021
@cackerman21 · 2 months
MCPVerse: An Expansive, Real-World Benchmark for Agentic Tool Use https://t.co/2D5P0ajoI0 #LLM
@cackerman21 · 3 months
Large Language Models and How To Use Them https://t.co/tsh3bV60BJ
@cackerman21 · 4 months
Apple trained a large language model to efficiently understand long-form video https://t.co/naai4HiORv
@cackerman21 · 5 months
Estimating the historical impact of outbreak response immunisation programmes across 210 outbreaks in low and middle-income countries https://t.co/nl7tiEk4RH https://t.co/wRV9BGdOiy
@cackerman21 · 6 months
Using AI (wav2vec) to revive a dying language https://t.co/YWpCozX8Ce https://t.co/wOJQI99sCO
@cackerman21 · 6 months
Journalism, media, and technology trends and predictions 2025 https://t.co/G3NGrIm7pE
@cackerman21 · 6 months
Looming Commercial Real Estate (CRE) Exposure Debt Crisis 2025-27 - Maturity Wall
Dime Bank: CRE exposure equal to 602% of its equity
Eagle: 571%
OZK: 566%
Live Oak: 550%
Merchants: 539%
Flagstar: 539%
ServisFirst: 538%
First Foundation: 513%
Provident: 488%
First United: 478%
@cackerman21 · 7 months
Three Pointers for Effective and Accurate LLM Integration https://t.co/O5zZ8ZD3hB
@cackerman21 · 7 months
Off-the-Shelf Large Language Models Are Unreliable Judges https://t.co/coimlWoK8N
@cackerman21 · 7 months
Poster: Leveraging Large Language Models for Detecting OS-level Ransomware https://t.co/Aqq4wFjXHS
@cackerman21 · 7 months
Table Meets LLM: Can Large Language Models Understand Structured Table Data? A Benchmark and Empirical Study https://t.co/bzjjbCorg3
@cackerman21 · 7 months
Replication for Language Models: Problems, Principles, and Best Practices for Political Science https://t.co/T0ekU9sSTx
@cackerman21 · 7 months
Red Hat AI Inference Server 3.0 LLM Compressor - Compressing large language models with the LLM Compressor library https://t.co/xOTprGChoC
@cackerman21 · 7 months
Balancing Innovation and Rigor: Guidance for the Thoughtful Integration of Artificial Intelligence for Evaluation https://t.co/XPx5CQcY3f
@cackerman21 · 7 months
Tele-LLMs: A Series of Specialized Large Language Models for Telecommunications https://t.co/HLO8Wjc9Ce
@cackerman21 · 7 months
DCE-LLM: Dead Code Elimination with Large Language Models https://t.co/yYDaCHZPrg
@cackerman21 · 7 months
HealthBench: Evaluating Large Language Models Towards Improved Human Health https://t.co/tRAUf78m3a
@cackerman21 · 7 months
Do Large Language Models (Really) Need Statistical Foundations? https://t.co/zuCwEy6yQF
@cackerman21 · 7 months
BankGPT: the use of Large Language Models in official communications https://t.co/yaVoxUy5Dd
@cackerman21 · 7 months
Soft Prompting for Unlearning in Large Language Models https://t.co/9ojmhsHHDf