
K-State Data Science (@kstate_bigdata)
Followers: 2K · Following: 20K · Media: 82 · Statuses: 11K
Part of @kstate Center for #ArtificialIntelligence & #DataScience (#CAIDS) constellation. #BigData in #research, #science, #business, #courses, #academics
Manhattan, KS · Joined April 2014
Joint work with @KSUEntomology, led by PI @BrianSpiesman with co-PIs @banazir (William H. Hsu) & @bmccornack, with collaborators at @UWMadison, @xercessociety, @UnivOfKansas, & @RyersonU. #Nature #ScientificReports #ksukdd.
Our work with @BrianSpiesman's lab on #BeeMachine (a collaboration among @KSUEntomology / @kstateag, @kstate_bigdata / @kstate_CS / @KStateEngg, @xercessociety, @kuengineering, & @RyersonCompSci) is out in @nature @SciReports today! #BeeConservation #DeepLearning #ComputerVision
RT @RayFernando1337: Goodbye Claude Code. I hate to say this but Cursor + Claude 4 Sonnet Thinking (Max) 600k context is KING!!! Latest…
RT @_avichawla: A graph-powered all-in-one RAG system! RAG-Anything is a graph-driven, all-in-one multimodal document processing RAG syste…
RT @minchoi: Here is how I did it. Bookmark this for later: 1. Grab a profile image. 2. Video containing audio of your choice (up to 60 s…
apps.apple.com: Pika is in early access beta. Download the app to join the Waitlist. Already have an invite code? Lucky you! Download the app and enter the code to get started. Introducing Pika, the first-ever…
RT @minchoi: This is wild. I just transformed Grok Companion Ani into a photorealistic actor and used Pika's new lipsync model for voice…
RT @_avichawla: That's a wrap! If you found it insightful, reshare it with your network. Find me → @_avichawla. Every day, I share tutori…
RT @_avichawla: Finally, the video shows prompting the LLM before and after fine-tuning. After fine-tuning, the model is able to generate…
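A minimal sketch of that before/after comparison, assuming the `model` and `tokenizer` from the loading step of this thread and Unsloth's `for_inference` helper; the prompt and decoding settings here are illustrative assumptions, not from the thread:

```python
from unsloth import FastLanguageModel

# Switch the (fine-tuned) model into Unsloth's fast inference mode.
FastLanguageModel.for_inference(model)

# Illustrative multilingual prompt; run the same prompt before and
# after training to compare the model's reasoning.
messages = [{"role": "user", "content": "Explique, en français: 17 * 23 = ?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```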
RT @_avichawla: 6️⃣ Train. With that done, we initiate training. The loss is generally decreasing with steps, which means the model is bei…
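The training step itself is one call, assuming the `trainer` built in the Trainer step of this thread; the returned `TrainOutput` carries the average loss, which should generally trend downward:

```python
# Kick off fine-tuning; per-step losses are logged every `logging_steps`.
stats = trainer.train()
print(stats.training_loss)  # average training loss over the run
```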
RT @_avichawla: 5️⃣ Define Trainer. Here, we create a Trainer object by specifying the training config, like learning rate, model, tokenize…
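A sketch of that Trainer setup with TRL's `SFTTrainer`; all hyperparameters shown are placeholder assumptions, not values from the (truncated) tweet:

```python
from trl import SFTConfig, SFTTrainer

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,              # newer TRL versions use processing_class=
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",    # column produced in the dataset-prep step
        per_device_train_batch_size=1,
        gradient_accumulation_steps=4,
        max_steps=60,                 # placeholder; tune for your data
        learning_rate=2e-4,
        logging_steps=1,
        output_dir="outputs",
    ),
)
```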
RT @_avichawla: 4️⃣ Prepare dataset. Before fine-tuning, we must prepare the dataset in a conversational format: - We standardize the data…
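The tweet is cut off, but a common Unsloth pattern for this step is to normalize records and render them with the model's chat template; both the `standardize_sharegpt` helper and the `messages` column name are assumptions here:

```python
from unsloth.chat_templates import standardize_sharegpt

# Normalize raw records into a consistent role/content schema (assumed helper).
dataset = standardize_sharegpt(dataset)

def to_text(examples):
    # Render each conversation as a plain string via the chat template.
    texts = [
        tokenizer.apply_chat_template(c, tokenize=False, add_generation_prompt=False)
        for c in examples["messages"]  # column name assumed
    ]
    return {"text": texts}

dataset = dataset.map(to_text, batched=True)
```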
RT @_avichawla: 3️⃣ Load dataset. We'll fine-tune gpt-oss and help it develop multi-lingual reasoning capabilities. So we load the multi-l…
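The dataset name is truncated in the tweet; assuming the multilingual reasoning set used in Unsloth's public gpt-oss examples (`HuggingFaceH4/Multilingual-Thinking`), loading is one call:

```python
from datasets import load_dataset

# Dataset id is an assumption based on Unsloth's public gpt-oss examples.
dataset = load_dataset("HuggingFaceH4/Multilingual-Thinking", split="train")
print(dataset[0])  # inspect one record before formatting
```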
RT @_avichawla: 2️⃣ Define LoRA config. We'll use LoRA for efficient fine-tuning. To do this, we use Unsloth's PEFT and specify: - The mod…
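A sketch of that LoRA step via Unsloth's PEFT wrapper; the rank, alpha, and target modules are typical defaults, assumed rather than read from the truncated tweet:

```python
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                      # LoRA rank (assumed)
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules=[           # attention + MLP projections, a typical choice
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
)
```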
RT @_avichawla: 1️⃣ Load the model. We start by loading the gpt-oss (20B variant) model and its tokenizer using Unsloth. Check this 👇 http…
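A sketch of that loading step; the checkpoint id and sequence length are assumptions based on Unsloth's naming conventions:

```python
from unsloth import FastLanguageModel

# Checkpoint id assumed; 4-bit quantization keeps the 20B model on a single GPU.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gpt-oss-20b",  # assumed id
    max_seq_length=2048,
    load_in_4bit=True,
)
```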
RT @_avichawla: Today, let's learn how to fine-tune OpenAI's latest gpt-oss locally. We'll give it multilingual reasoning capabilities as…
RT @chatgpt21: I'm actually really surprised GPT 5 thinking doesn't hallucinate this. Most AI models will hallucinate a benchmark if they…
RT @kdnuggets: In technical terms, the F-distribution helps you compare variances. #statology
statology.org
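For instance, an F-test for equal variances takes the ratio of two sample variances and reads a p-value off the F-distribution; a minimal SciPy sketch with made-up data:

```python
import numpy as np
from scipy import stats

# Two illustrative samples whose variances we want to compare.
a = np.array([4.2, 5.1, 3.8, 6.0, 5.5, 4.9])
b = np.array([3.9, 4.0, 4.1, 3.7, 4.3, 4.2])

# F statistic: ratio of the unbiased sample variances.
F = np.var(a, ddof=1) / np.var(b, ddof=1)
dfn, dfd = len(a) - 1, len(b) - 1

# Two-sided p-value under H0: equal variances.
p = 2 * min(stats.f.sf(F, dfn, dfd), stats.f.cdf(F, dfn, dfd))
print(f"F = {F:.3f}, p = {p:.4f}")
```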
RT @minchoi: Less than 29 hours ago, OpenAI dropped GPT-5. And some people are calling it the best model. Some not so much. 10 wild exam…
RT @minchoi: ChatGPT Reddit is blowing up. ChatGPT users are literally cancelling over GPT-5