robotics

  • Top Space and Defense Tech Startups at Disrupt


TechCrunch's Startup Battlefield pitch contest highlighted seven standout startups in space and defense technology. Airbility is developing a two-seat electric vertical take-off and landing (eVTOL) aircraft whose distinctive VTOL design and electric propulsion system are aimed at enhanced maneuverability. Astrum offers a propellantless space propulsion system that eliminates onboard fuel, potentially extending spacecraft lifespans and reducing costs for deep space exploration. A fintech-like platform provides risk analysis for spacecraft, enabling insurance coverage and fostering new forms of credit in the space industry. Endox combines AI and robotics to inspect and maintain U.S. military equipment, while Hance develops an AI neural network to enhance real-time audio in unpredictable environments. Skylark's self-learning AI is designed for machine use in safety applications, addressing the challenge of processing information at the edge. Lastly, Skyline offers GPS-independent navigation software that uses AI to counter GPS jamming. These innovations matter because they push the boundaries of technology in critical sectors, potentially transforming how we explore space and strengthen defense capabilities.

    Read Full Article: Top Space and Defense Tech Startups at Disrupt

  • EngineAI T800: Humanoid Robot’s Martial Arts Moves


    The EngineAI T800 humanoid robot has demonstrated remarkable capabilities in executing complex martial arts maneuvers, showcasing advancements in robotics and artificial intelligence. Engineered to mimic human movements with precision, the T800's performance highlights significant progress in developing robots that can perform dynamic physical tasks with agility and control. This breakthrough could have profound implications for various fields, including robotics, AI research, and industries requiring precise physical operations, as it points to a future where robots may assist or even replace humans in physically demanding roles. Understanding the potential of such technology is crucial as it could revolutionize the way humans interact with machines and redefine labor across numerous sectors.

    Read Full Article: EngineAI T800: Humanoid Robot’s Martial Arts Moves

  • Egocentric Video Prediction with PEVA


Predicting Ego-centric Video from human Actions (PEVA) is a model for whole-body conditioned egocentric video prediction: it predicts future video frames from past frames and specified actions. The model leverages Nymeria, a large dataset that pairs real-world egocentric video with body pose capture, allowing it to simulate physical human actions from a first-person perspective. PEVA is trained as an autoregressive conditional diffusion transformer, which helps it handle the complexities of human motion, including high-dimensional and temporally extended actions.
    
    PEVA represents each action as a high-dimensional vector capturing full-body dynamics and joint movements, using a 48-dimensional action space for detailed motion representation. The model employs techniques such as random timeskips, sequence-level training, and action embeddings to better capture motion dynamics and activity patterns. At test time, PEVA generates future frames by conditioning on past frames, using an autoregressive rollout strategy that predicts and appends frames iteratively. This allows the model to maintain visual and semantic consistency over extended prediction horizons, producing coherent video sequences. Across a range of evaluation metrics, PEVA outperforms baseline models in generating high-quality egocentric video and maintaining coherence over long time horizons. The authors acknowledge, however, that PEVA is an early step toward fully embodied planning, with limitations in long-horizon planning and task-intent conditioning; future directions include extending PEVA to interactive environments and integrating high-level goal conditioning. This research advances the development of world models for embodied agents, which are crucial for robotics and AI-driven environments.
Why this matters: Understanding and predicting human actions in egocentric video is crucial for developing advanced AI systems that can interact seamlessly with humans in real-world environments, enhancing applications in robotics, virtual reality, and autonomous systems.
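The autoregressive rollout described above can be sketched in a few lines. This is a toy illustration only, not PEVA's implementation: `predict_next_frame` is a hypothetical stand-in for the conditional diffusion transformer, and the frame and action shapes are placeholders. Only the 48-dimensional action space and the predict-then-append loop come from the article.

```python
import numpy as np

ACTION_DIM = 48  # whole-body action vector size, per the PEVA description

def predict_next_frame(past_frames, action):
    """Hypothetical stand-in for PEVA's conditional diffusion transformer.

    A real model would denoise a new frame conditioned on the frame history
    and the action embedding; here we just nudge the last frame by the
    action's mean value to keep the example runnable.
    """
    return past_frames[-1] + action.mean()

def rollout(initial_frames, actions):
    """Autoregressive rollout: each predicted frame is appended to the
    context and conditions the next prediction, as in PEVA's test-time
    strategy."""
    frames = list(initial_frames)
    for action in actions:
        frames.append(predict_next_frame(frames, action))
    return frames[len(initial_frames):]  # return only the predictions

# Toy usage: one 4x4 "past frame" and three identical whole-body actions.
frames0 = [np.zeros((4, 4))]
acts = [np.full(ACTION_DIM, 0.5) for _ in range(3)]
preds = rollout(frames0, acts)
print(len(preds))  # number of predicted frames
```

The key property illustrated is that errors and context accumulate: frame *t+1* is generated from a history that already contains the model's own prediction for frame *t*, which is why maintaining consistency over long horizons is the hard part.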

    Read Full Article: Egocentric Video Prediction with PEVA