Explore tweets tagged as #Datasets
@PythonPr
Python Programming
3 days
Types of Datasets
2
33
138
@omoalhajaabiola
Omoalhaja
9 days
Free Datasets to practice data analytics projects
1. Enron Email Dataset
Data Link: https://t.co/DG1aEo3RTh
2. Chatbot Intents Dataset
Data Link: https://t.co/DKZlo6J76s
3. Flickr 30k Dataset
Data Link: https://t.co/0PdyEl0yvJ
4. Parkinson Dataset
Data Link:
11
151
718
@icookmiracles
Miracle
2 days
Kinda wild how under the radar $OPAN @Opanarchyai is right now, sitting under $1m mcap while quietly building what feels like the Hugging Face of robotics. If Hugging Face became the go-to hub for AI models and datasets, OPAN is doing the same for robotics, creating an open
3
7
25
@hitasyurek
hitas.base.eth
28 minutes
AI agents are absolutely crushing it in DeFi rn 🔥 These bots are executing trades in milliseconds, analyzing massive datasets, and eliminating human error while we sleep. @Velvet_Capital DeFAI is the future... no cap. Are you bullish on AI-powered portfolio management or still
8
0
9
@KryptoInsider1
Krypto Insider 💫
15 minutes
🚨 $NATIX is a great example of how Web3 can solve real-world problems. One of the biggest challenges in autonomous driving is data collection. It’s incredibly difficult and expensive. Most open source datasets only have a few thousand hours of driving data. ▪️
4
2
6
@jamiekingston
Jamie Kingston
12 minutes
$RCHV Archivas is a decentralized storage layer on $BNB Chain, using PoIS to create "living memory" that verifies and adds meaning to data. As BSC's first, it's AI-powered, rewarding useful items like models and datasets, not random bytes, while burning $RCHV tokens per
2
14
32
@lorengirll
Lorencia
3 hours
Data has its own kind of gravity: the more you collect, the stronger it pulls. That’s why large datasets naturally attract apps, users, and infrastructure; they create their own orbit of value and interaction. @genome_protocol studies this effect to build fairer data ecosystems
40
1
42
@e_opore
Dhanian 🗯️
3 days
Pre-training Objectives for LLMs ✓ Pre-training is the foundational stage in developing Large Language Models (LLMs). ✓ It involves exposing the model to massive text datasets and training it to learn grammar, structure, meaning, and reasoning before it is fine-tuned for
20
45
298
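The objective this tweet describes is usually causal language modeling: predict each next token from the ones before it and minimize the negative log-likelihood. A minimal sketch in plain Python — the toy vocabulary and fixed probability table are illustrative assumptions standing in for a real neural network:

```python
import math

# Toy next-token model: P(next | prev) as a lookup table.
# In a real LLM, a neural network produces these probabilities.
probs = {
    ("the", "cat"): 0.5,
    ("cat", "sat"): 0.8,
    ("sat", "down"): 0.9,
}

def nll(tokens):
    """Average negative log-likelihood of a token sequence.

    Pre-training minimizes exactly this quantity over massive text
    corpora: each position is scored on predicting the next token.
    """
    loss = 0.0
    for prev, nxt in zip(tokens, tokens[1:]):
        p = probs.get((prev, nxt), 1e-9)  # unseen pairs get a tiny prob
        loss += -math.log(p)
    return loss / (len(tokens) - 1)  # average per predicted token

print(round(nll(["the", "cat", "sat", "down"]), 4))
```

Lower loss means the model assigns higher probability to the text it sees, which is what "learning grammar, structure, meaning" cashes out to during pre-training.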
@pushpendratips
Pushpendra Tripathi
6 days
Deepfake tech, but for everyone. No datasets. No setup. No patience needed. Upload one photo — watch AI do the rest. @higgsfield_ai | #HiggsfieldFaceSwap
11
4
21
@imtommitchell
Tom Mitchell
2 days
Thinking like a Data Analyst didn’t come easily for me. I'd spend hours staring at datasets, unsure what questions to ask. I knew I had to change if I wanted to have a successful career in data. Luckily, I found a way through. Here's what I did in 4 steps:
10
58
432
@laravelbackpack
Backpack for Laravel
3 days
#Laravel lazy() vs get() Did you know... You can stream large datasets from the DB using lazy() — way more memory-efficient than get().
3
15
114
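The trade-off in this tip exists outside Laravel too: fetching every row into memory at once versus streaming rows one at a time. A rough analogue with Python's stdlib sqlite3 — the table name and data are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(i, f"user{i}") for i in range(10_000)])

# get()-style: materialize every row in memory at once.
all_rows = conn.execute("SELECT * FROM users").fetchall()

# lazy()-style: the cursor is an iterator, so rows are fetched as
# needed and memory use stays flat however large the table gets.
count = 0
for row in conn.execute("SELECT * FROM users"):
    count += 1

print(len(all_rows), count)
```

Both paths see the same 10,000 rows; only the streaming one avoids holding them all at once, which is the point of `lazy()`.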
@Parajulisaroj16
PYOFLIFE.COM
9 days
📌📚 This step-by-step guide will explore the intricacies of analyzing complex survey data using the powerful R programming language. https://t.co/hKrDZvpIIF #DataScience #rstats #DataScientist #StatisticalLearning #machinelearning #datasets #datavisualizations
3
40
160
@OticGroup
Otic Group
3 days
Speaking at the 2nd Annual AI in Health Africa Conference in Kampala, our AI Systems Engineer, @dylan_katamba, tackled a deeply important topic: how to prevent bias and ensure transparency in healthcare AI datasets and
0
6
11
@worldbankdata
World Bank Data
1 hour
Datasets are cited in countless ways — acronyms, aliases, partial names. #AI can learn to recognize them all, thanks to synthetic training data that mirror real citation patterns. Here we explain how: https://t.co/NXgVukhOrW
0
2
3
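One way to read "synthetic training data that mirror real citation patterns": take each dataset title and programmatically generate the acronym, alias, and partial-name surface forms it might appear under, then use those as labeled training examples. A hypothetical sketch — the variant rules below are my own illustration, not the World Bank's actual method:

```python
def citation_variants(title: str) -> set[str]:
    """Generate surface forms a dataset title might take in running text."""
    words = title.split()
    # Acronym from the capitalized words, e.g. "LSMS".
    acronym = "".join(w[0] for w in words if w[0].isupper())
    variants = {title, acronym}
    # Partial names: progressively drop trailing words.
    for i in range(2, len(words)):
        variants.add(" ".join(words[:i]))
    return variants

v = citation_variants("Living Standards Measurement Study")
print(sorted(v))
```

Pairing each variant with the canonical title gives a recognizer many ways of seeing the same dataset, which is the pattern the tweet describes.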
@UK_CEH
UKCEH
5 days
Long-term monitoring tells a story of change. For 80 years scientists have monitored #Windermere and nearby lakes in Cumbria, tracking how climate change and pollution are reshaping our freshwater ecosystems. It’s one of the longest lake datasets anywhere in the world! 🌎 1/
1
1
3
@zazik_13
ZAZIK
20 hours
One prompt = one model, or how Domain-Specialized Meta-Agents make it possible. In current AI systems, if you want an agent for a specific domain, you have to train one: spend time, data, money, and compute... Each agent = a new model, new datasets, and new resource
18
0
24
@eidon_ai
EIDON AI
6 days
Decentralised Robotics 🧵 I/ Building datasets for embodied AI is tough — humanoid robots need real-world human motion task data, but collecting it at scale has been limited to research lab projects or closed source big labs. At Eidon, we started with our wearable IMU trackers.
3
14
64
@EU_opendata
data.europa.eu
8 days
🎨 Over three insightful sessions, the data.europa academy hosted the workshop 'Visualising Data for Impact' with Alberto Cairo, empowering participants to turn complex datasets into clear, ethical, and engaging visuals. Read more 👉 https://t.co/rw9UTAccVO
0
3
4
@Valentina1Ghost
Valentina (โ–,โ–)
6 hours
Ritual Data Provenance: verifiable datasets for AI. Models are only as trustworthy as the data that shapes them. Today that data moves through scripts and storage buckets with little trace of origin or integrity. Ritual Data Provenance makes datasets first-class on chain
12
0
15
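A minimal ingredient of what this last tweet calls verifiable datasets is a content hash: anyone holding the bytes can recompute the digest and compare it with the one recorded on chain, so tampering or substitution is detectable. A sketch with Python's stdlib — the chunk size and the tiny CSV payload are illustrative assumptions, not anything from Ritual:

```python
import hashlib
import io

def dataset_digest(stream, chunk_size=8192):
    """SHA-256 over a dataset's bytes, streamed so large files
    never need to fit in memory at once."""
    h = hashlib.sha256()
    for chunk in iter(lambda: stream.read(chunk_size), b""):
        h.update(chunk)
    return h.hexdigest()

data = b"id,label\n1,cat\n2,dog\n"
digest = dataset_digest(io.BytesIO(data))
print(digest)
```

Publishing `digest` alongside a dataset pins its exact contents; a full provenance system would add signatures and lineage on top, but integrity checking starts here.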