MIT researchers “speak objects into existence” using AI and robotics
The speech-to-reality system combines 3D generative AI and robotic assembly to create objects on demand.
MIT researchers at the School of Architecture and Planning developed a speech-to-reality system that combines generative AI, natural language processing, and robotic assembly to fabricate physical objects from spoken prompts.
A system developed by researchers at MIT CSAIL and LIDS uses rigorous mathematics to ensure robots flex, adapt, and interact with people and objects safely and precisely. It keeps robots flexible and responsive while mathematically guaranteeing they won’t exceed safe force limits.
MIT researchers developed an aerial microrobot that can fly with speed and agility comparable to real insects. The research opens the door to future bug-sized robots that could aid in search-and-rescue missions.
When applied to fields like computer vision and robotics, next-token and full-sequence diffusion models involve capability trade-offs. Next-token models can generate sequences that vary in length.
Google DeepMind's new AI models, built on Google's Gemini foundation model, are making robots fold origami and slam-dunk tiny basketballs. Gemini Robotics can interpret and act on text, voice, and ...