Rediminds, Inc
@rediminds
Followers: 266 · Following: 1K · Media: 1K · Statuses: 3K
🤖 AI Innovators | 🛠 Crafting Industry Disruptors | 🌟 Fueling Growth Through Partnership | Let's lead the future, together. #AI #IndustryLeaders
Southfield, MI
Joined February 2020
Two years ago, we embarked on a journey to democratize and preserve human intellect through our DigitalMe project. While AI technology has evolved rapidly since then, our vision remains as relevant as ever. This throwback showcases our early implementation using D-ID, ElevenLabs…
Boston Dynamics just pulled the curtain back on the product version of Atlas, a fully electric humanoid aimed squarely at industrial work, with 2026 deployments already committed (Hyundai + Google DeepMind) and production starting immediately. What’s being positioned is a…
bostondynamics.com
Boston Dynamics will manufacture the product version of its humanoid robot immediately with deployments scheduled at Hyundai and Google DeepMind.
NEWS: Boston Dynamics has just released a new video of its upgraded next-generation humanoid robot called Atlas.
• 4-hour battery. Self-swappable for continuous operation
• 6 feet 2 inches tall
• Weight: 198 lbs
• 56 total degrees of freedom
• Now fully electric, ditching…
Robotics is finally getting its “unbox it → talk to it → it works” moment. 🤖✨ Galaxea Dynamics just open-sourced the G0 Plus VLA model and launched a Pick Up Anything demo showing zero-shot embodied intelligence: give the robot a natural-language instruction and it executes…
We just open-sourced G0 Plus VLA model & launched "Pick Up Anything" demo. See our robot perform diverse real-world tasks through pure language. No specialized training needed. That's zero-shot embodied intelligence. #VLA #Robotics #OpenSource 🔗Try now: https://t.co/Dt6Qq3oVoi
Robotics is officially graduating from grippers to hands. Scaling VLAs to dexterous hands isn’t just about adding parameters. It’s an explosion of action space, constant occlusions, and brutal data scarcity. GR-Dexter tackles that as a system, pairing a compact 21-DoF hand with…
Scaling vision-language-action (VLA) models to high-DoF dexterous hands has long been a "holy grail" challenge due to the high-dimensional action space and data scarcity. As a wrap-up of the year 2025, we are releasing GR-Dexter, a holistic hardware-model-data framework for…
As we close out 2025, here's something fun to explore over the break! ✈️🌍 An open-source 3D flight simulator built with Cesium, React & TypeScript that lets you fly aircraft or drive a car across real-world terrain powered by Google Earth data. It's more of a float point on a…
github.com
Contribute to WilliamAvHolmberg/cesium-flight-simulator development by creating an account on GitHub.
Stop paying frontier-model prices for “what’s the weather?” ☁️💸 UIUC’s Unified Lab just open-sourced LLMRouter, a unified routing framework that bundles 16+ routing strategies (single-round, multi-round, agentic, and personalized) under one consistent interface, so teams can go…
Open-sourcing the first unified LLM routing library 🔥 Meet LLMRouter: 16+ routers in ONE framework. Stop reimplementing routing papers from scratch; simply run pip install llmrouter-lib and instantly deploy SOTA LLM routing tailored to your exact needs! 🔗 Get Started with…
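The core idea behind LLM routing can be sketched in a few lines: send easy queries to a cheap model and hard ones to a frontier model. The scoring heuristic, threshold, and model names below are invented for illustration and are not LLMRouter's actual API — check its docs after `pip install llmrouter-lib` for the real interface.

```python
# Toy single-round cost-aware router (illustrative; not LLMRouter's API).
from dataclasses import dataclass

@dataclass
class Route:
    model: str
    cost_per_1k_tokens: float

CHEAP = Route("small-local-model", 0.0002)      # hypothetical model names
FRONTIER = Route("frontier-model", 0.03)

def complexity_score(query: str) -> float:
    """Crude proxy for difficulty: longer, multi-step questions score higher."""
    words = query.split()
    multi_step = sum(kw in query.lower() for kw in ("explain", "prove", "compare", "derive"))
    return min(1.0, len(words) / 50 + 0.3 * multi_step)

def route(query: str, threshold: float = 0.5) -> Route:
    """Route to the frontier model only when the query looks hard."""
    return FRONTIER if complexity_score(query) >= threshold else CHEAP

print(route("What's the weather in Detroit?").model)        # small-local-model
print(route("Compare and derive the convergence rates of SGD "
            "and Adam, and explain the role of curvature.").model)  # frontier-model
```

Real routers replace the heuristic with learned classifiers, multi-round feedback, or per-user personalization, but the interface — query in, model choice out — stays the same.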
If the context window is the new desktop, then most of us have been prompting like we organize files: chaos everywhere, hoping the right tab is open. In Google Antigravity, the shift is philosophical and practical: instead of stuffing everything into the model’s head, the agent…
Say goodbye to context clutter. @kevinhou22 explains how Antigravity keeps context clean by letting the model manage its own attention span. Learn more → https://t.co/otIjru7Cvr
#DEVcember
Pittsburgh just became one of the hardest real-world benchmarks for robotics. ❄️🧊⛰️ rivr is stress-testing an autonomous delivery robot in conditions that are brutal even for humans: snow, ice, steep hills, and stairs; the kind of terrain that exposes every weakness in…
An autonomous @rivr_tech delivery robot being tested in Pittsburgh. Snow, ice, hills, and stairs everywhere: at times it genuinely felt dangerous even for a human to go outside. Really a great stress test for the robots.
Robotics just entered its everyday-competence era, and it’s happening through fine-tuning, not sci-fi magic. 🦾🏠 Physical Intelligence (Pi) just dropped a set of fully autonomous demos where their latest model (a fine-tuned π0.6) can: 🍳 wash a frying pan with soap + water 🪟…
We got our robots to wash pans, clean windows, make peanut butter sandwiches, and more! Fine-tuning our latest model enables all of these tasks, and this has interesting implications for robotics, Moravec's paradox, and the future of large models in embodied AI. More below!
The agentic era isn’t just about writing code faster. It’s about shipping across the whole stack (web + backend + Android + iOS) while juggling Rust, Java, Go, C++, Kotlin, Obj-C, and TS/JS like it’s normal. MiniMax just released M2.1, tuned for real, complex engineering workflows: • Multi-language…
MiniMax M2.1 is officially live 🚀 Built for real-world coding and AI-native organizations, from vibe builds to serious workflows. A SOTA 10B-activated OSS coding & agent model, scoring 72.5% on SWE-multilingual and 88.6% on our newly open-sourced VIBE-bench, exceeding leading…
Most robot demos stop at pick-and-place. The real bottleneck is what comes next: repeatable, high-precision assembly — aligning parts, inserting screws, torquing to spec, thousands of times a day. 🔩🤖 Kyber Labs just showed a highly dexterous robotic hand doing autonomous assembly +…
Demo showing our system doing autonomous assembly of a part! What else should we have it do? And if you want a hand, wait list is open here! https://t.co/bh9Xv05eje
Firecrawl just turned the entire web into a programmable data layer. 🔍⚙️ Their new /agent API lets you describe the dataset you want (YC W24 companies with founders, all Nike Air Jordan listings with prices, every AI paper on arXiv), and an autonomous agent searches, navigates…
firecrawl.dev
Firecrawl /agent is a magic API that searches, navigates, and gathers data from even the most complex websites. Describe what data you want and agent handles the rest.
Introducing /agent by Firecrawl 🪄 Just describe what you need, with or without a URL, and /agent searches, navigates, and gathers information from the widest range of websites, reaching data no other API can. Try out the research preview today.
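To make the describe-what-you-want pattern concrete, here is a minimal sketch of assembling such a request body: a natural-language prompt plus a simple schema naming the columns you want back. The field names ("prompt", "schema") are assumptions for illustration, not Firecrawl's documented contract — consult the /agent docs for the real one.

```python
# Sketch of building a dataset-extraction request body (field names assumed).
import json

def build_agent_request(prompt: str, fields: dict[str, str]) -> str:
    """Assemble a JSON body: a plain-English description of the dataset
    plus a schema mapping each desired column to a type."""
    body = {
        "prompt": prompt,
        "schema": {name: {"type": t} for name, t in fields.items()},
    }
    return json.dumps(body, indent=2)

payload = build_agent_request(
    "All YC W24 companies with their founders",
    {"company": "string", "founders": "array"},
)
print(payload)
```

The agent side then does the hard part — searching, navigating, and normalizing pages into rows that match the schema — which is exactly what makes this a data layer rather than a scraper.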
AI agents just learned to speak UI. Google has open-sourced Agent-to-User Interface (A2UI), a common language that lets agents stream live, native interfaces instead of walls of text. Think flight cards, dynamic forms, date pickers, dashboards… all generated on the fly…
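The shift from prose to structure can be illustrated with a toy payload: the agent emits a declarative component tree and the client renders it natively. The component schema below is invented to show the idea and is not A2UI's actual format.

```python
# Toy "flight card" an agent might stream instead of text (schema invented).
import json

def flight_card(flight: str, depart: str, arrive: str, gate: str) -> dict:
    """Build a declarative UI tree: a card with a title, a row, and a badge."""
    return {
        "component": "card",
        "children": [
            {"component": "text", "style": "title", "value": flight},
            {"component": "row", "children": [
                {"component": "text", "value": f"Depart {depart}"},
                {"component": "text", "value": f"Arrive {arrive}"},
            ]},
            {"component": "badge", "value": f"Gate {gate}"},
        ],
    }

msg = flight_card("DL 482", "DTW 09:15", "SFO 11:42", "A34")
print(json.dumps(msg, indent=2))
```

Because the payload is data rather than markup or prose, the same message can render as a native card on mobile, a widget on web, or a voice summary — which is the point of a common agent-to-UI language.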
Text-to-video was just the warm-up. Now it’s text-to-world. 🌍🎮 Here’s why this matters for digital twins, robotics, and simulation-first AI:
🕒 Real-time streaming worlds – 24 FPS long-horizon video with interactive camera and character control.
📐 Stable 3D geometry over time…
🚀🚀🚀 Introducing HY World 1.5 (WorldPlay)! We have now open-sourced the most systemized, comprehensive real-time world-model framework in the industry. In HY World 1.5, we develop WorldPlay, a streaming video diffusion model that enables real-time, interactive world modeling…
Reading existing code is still one of the most expensive bottlenecks in software, especially in regulated environments where tribal knowledge lives in people’s heads and legacy repos. Google just launched Code Wiki (public preview): a continuously updated, structured wiki for a…
Experience instant code understanding with the Code Wiki public preview → https://t.co/qdQjpeZTxc Key features include: 🔄 Automatic updates after each change 🧠 Context-aware Gemini chat 📊 Auto-generated architecture diagrams
A year ago, MCP was basically a clean idea for connecting models to real tools. Today, it’s graduating into neutral governance under the Linux Foundation’s Agentic AI Foundation (AAIF), which is how you know it’s turning into infrastructure, not a trend. Why this matters…
github.blog
MCP is moving to the Linux Foundation. Here's how that will affect developers.
It started as a small idea to connect AI models to developer workflows. It turned into one of the fastest-growing open standards in the industry. 🚀 Now, the Model Context Protocol is officially joining the @linuxfoundation. Hear from the engineers and maintainers of GitHub…
Most AI agents live in the browser. But the real world runs on 3B+ Android devices: truck cabs, warehouses, clinics, factory floors. 📱 Meet Android Use from Action State Labs, an open-source library that gives AI agents hands on native Android apps: • Reads the Android…
Meet Android Use - an open source library that gives AI agents hands to control native Android apps. It bypasses expensive vision models to run on cheap hardware, automating field ops in places laptops can't go. Watch Android Use in action:
Continuous Integration just got a new teammate: Continuous AI. ⚙️🤖 Jules now ships Scheduled Tasks and Suggested Tasks:
🔁 Scheduled Tasks – put routine work on autopilot:
• Nightly package releases
• Weekly dependency bumps
• Monthly backlog sweeps
Jules can publish…
Introducing Scheduled Tasks. Continuous Integration meets Continuous AI. Set the frequency: nightly package releases, weekly dependency updates, or monthly backlog sweeps. Jules can not only publish the release but also fix any errors while doing it, all in the background while you…
Data centers should live where power is abundant and cooling is free. 🌍🚀 VC legend Gavin Baker just called it. And Starcloud is already there, with the first NVIDIA H100 operating in orbit, turning sunlight and deep-space cold into a native AI compute platform. Why it matters…
“The most important thing that’s going to happen in the world in the next 3-4 years is data centers in space” - @GavinSBaker We have the first NVIDIA H100 operating in space at @Starcloud_Inc_ 🚀
AI is colliding with the limits of the grid. ⚡✈️ Boom Supersonic just unveiled Superpower, a 42 MW natural-gas turbine, derived from its supersonic engine program, built to power advanced AI data centers and accelerate the return of supersonic passenger travel. • 42 MW in a…
Introducing Superpower, a natural gas turbine by Boom that delivers reliable energy to AI data centers while accelerating the return of supersonic passenger travel.
What does an agent-first IDE actually feel like in practice? 🧠💻 Google DeepMind’s Antigravity reimagines the dev environment around AI agents instead of files and tabs: • Agent Manager: your mission control, where you orchestrate tasks, review artifacts, and manage autonomy…
What does an agent-first IDE look like? Our Head of Product Engineering breaks down the unique interaction surfaces that make Antigravity work: the Editor, the Browser, and the Agent Manager.