Press Release
Agibot Releases the Industry's First Open-Source Robot World Model Platform – Genie Envisioner
Shanghai, China, 17th Oct 2025 — Agibot has officially launched Genie Envisioner (GE), a unified world model platform for real-world robot control. Departing from the traditional fragmented data-training-evaluation pipeline, GE integrates future frame prediction, policy learning, and simulation evaluation for the first time into a closed-loop architecture centered on video generation. This enables robots to perform end-to-end reasoning and execution—from seeing to thinking to acting—within the same world model. Trained on 3,000 hours of real robot data, GE-Act not only significantly surpasses existing state-of-the-art (SOTA) methods in cross-platform generalization and long-horizon task execution but also opens a new technical pathway for embodied intelligence, from visual understanding to action execution.
Current robot learning systems typically adopt a phased development model—data collection, model training, and policy evaluation—where each stage is independent and requires specialized infrastructure and task-specific tuning. This fragmented architecture increases development complexity, prolongs iteration cycles, and limits system scalability. The GE platform addresses this by constructing a unified video-generative world model that integrates these disparate stages into a closed-loop system. Built upon approximately 3,000 hours of real robot manipulation video data, GE establishes a direct mapping from language instructions to the visual space, preserving the complete spatiotemporal information of robot-environment interactions.

01/ Core Innovation: A Vision-Centric World Modeling Paradigm
The core breakthrough of GE lies in constructing a vision-centric modeling paradigm based on world models. Unlike mainstream Vision-Language-Action (VLA) methods that rely on Vision-Language Models (VLMs) to map visual inputs into a linguistic space for indirect modeling, GE directly models the interaction dynamics between the robot and the environment within the visual space. This approach fully retains the spatial structures and temporal evolution information during manipulation, achieving more accurate and direct modeling of robot-environment dynamics. This vision-centric paradigm offers two key advantages:
Efficient Cross-Platform Generalization Capability: Leveraging powerful pre-training in the visual space, GE-Act requires minimal data for cross-platform transfer. On new robot platforms like the Agilex Cobot Magic and Dual Franka, GE-Act achieved high-quality task execution using only 1 hour (approximately 250 demonstrations) of teleoperation data. In contrast, even models like π0 and GR00T, which are pre-trained on large-scale multi-embodiment data, underperformed GE-Act with the same amount of data. This efficient generalization stems from the universal manipulation representations learned by GE-Base in the visual space. By directly modeling visual dynamics instead of relying on linguistic abstractions, the model captures underlying physical laws and manipulation patterns shared across platforms, enabling rapid adaptation.

Accurate Execution Capability for Long-Horizon Tasks: More importantly, vision-centric modeling endows GE with powerful future spatiotemporal prediction capabilities. By explicitly modeling temporal evolution in the visual space, GE-Act can plan and execute complex tasks requiring long-term reasoning. In ultra-long-step tasks such as folding a cardboard box, GE-Act demonstrated performance far exceeding existing SOTA methods. Taking box folding as an example, this task requires the precise execution of over 10 consecutive sub-steps, each dependent on the accurate completion of the previous ones. GE-Act achieved a 76% success rate, while π0 (specifically optimized for deformable object manipulation) reached only 48%, and UniVLA and GR00T failed completely (0% success rate). This enhancement in long-horizon execution capability stems not only from GE’s visual world modeling but also benefits from the innovatively designed sparse memory module, which helps the robot selectively retain key historical information, maintaining precise contextual understanding in long-term tasks. By predicting future visual states, GE-Act can foresee the long-term consequences of actions, thereby generating more coherent and stable manipulation sequences. In comparison, language-space-based methods are prone to error accumulation and semantic drift in long-horizon tasks.

02/ Technical Architecture: Three Core Components
Based on the vision-centric modeling concept, the GE platform consists of three tightly integrated components:
GE-Base: Multi-View Video World Foundation Model: GE-Base is the core foundation of the entire platform. It employs an autoregressive video generation framework, segmenting output into discrete video chunks, each containing N frames. The model’s key innovations lie in its multi-view generation capability and sparse memory mechanism. By simultaneously processing inputs from three viewpoints (head camera and two wrist cameras), GE-Base maintains spatial consistency and captures the complete manipulation scene. The sparse memory mechanism enhances long-term reasoning by randomly sampling historical frames, enabling the model to handle manipulation tasks lasting several minutes while maintaining temporal coherence.
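The sparse memory mechanism lends itself to a short illustration. The sketch below builds a model context from the most recent frames plus a random sample of older history; the function name and the recent/sparse split sizes are assumptions for illustration, not Agibot's implementation.

```python
import random

def sample_sparse_memory(history, n_recent=3, n_sparse=4, seed=None):
    """Illustrative sparse-memory sampler (sizes are assumed, not GE's):
    keep the n_recent newest frames for local continuity, and randomly
    sample n_sparse frames from the older history for long-horizon context."""
    rng = random.Random(seed)
    recent = history[-n_recent:]
    older = history[:-n_recent]
    sparse = sorted(rng.sample(older, min(n_sparse, len(older))))
    return sparse + recent
```

Because only a handful of historical frames enter the context, the cost of each prediction stays flat even as an episode grows to several minutes.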

Training uses a two-stage strategy: first, temporal adaptation training (GE-Base-MR) with multi-resolution sampling at 3-30Hz makes the model robust to different motion speeds; subsequently, policy alignment fine-tuning (GE-Base-LF) at a fixed 5Hz sampling rate aligns with the temporal abstraction of downstream action modeling. The entire training was completed in about 10 days using 32 A100 GPUs on the AgiBot-World-Beta dataset, comprising approximately 3,000 hours and over 1 million real robot data instances.

GE-Act: Parallel Flow Matching Action Model: GE-Act serves as a plug-and-play action module, converting the visual latent representations from GE-Base into executable robot control commands through a lightweight architecture with 160M parameters. Its design cleverly parallels GE-Base’s visual backbone, using DiT blocks with the same network depth as GE-Base but smaller hidden dimensions for efficiency. Via a cross-attention mechanism, the action pathway fully utilizes semantic information from visual features, ensuring generated actions align with task instructions.

GE-Act’s training involves three stages: action pre-training projects visual representations into the action policy space; task-specific video adaptation updates the visual generation component for specific tasks; task-specific action fine-tuning refines the full model to capture fine-grained control dynamics. Notably, its asynchronous inference mode is key: the video DiT runs at 5Hz for single-step denoising, while the action model runs at 30Hz for 5-step denoising. This “slow-fast” two-layer optimization enables the system to complete 54-step action inference in 200ms on an onboard RTX 4090 GPU, achieving real-time control.
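The "slow-fast" interleaving can be pictured as a call schedule for the two pathways at the stated rates. The sketch below is only an illustration of the scheduling idea; the event-list representation is an assumption, not GE-Act's actual runtime.

```python
def slow_fast_timeline(duration_s, video_hz=5, action_hz=30):
    """Interleaved call schedule for the asynchronous loop: 'video' events
    refresh the shared latent (one denoising step each, per the text), and
    'action' events decode commands from the latest latent (5-step denoising)."""
    events = [(i / video_hz, "video") for i in range(int(duration_s * video_hz))]
    events += [(i / action_hz, "action") for i in range(int(duration_s * action_hz))]
    # At coincident timestamps, run the video refresh first so the action
    # head always decodes from the freshest latent.
    events.sort(key=lambda e: (e[0], e[1] != "video"))
    return events
```

Over one second this yields 5 latent refreshes and 30 action decodes, matching the stated 5Hz/30Hz split.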

GE-Sim: Hierarchical Action-Conditioned Simulator: GE-Sim extends GE-Base’s generative capability into an action-conditioned neural simulator, enabling precise visual prediction through a hierarchical action conditioning mechanism. This mechanism includes two key components: Pose2Image conditioning projects 7-degree-of-freedom end-effector poses (position, orientation, gripper state) into the image space, generating spatially aligned pose images via camera calibration; Motion vectors calculate the incremental motion between consecutive poses, encoded as motion tokens and injected into each DiT block via cross-attention.
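At its core, Pose2Image conditioning combines standard camera projection with pose differencing. The following is a minimal sketch assuming a pinhole camera model and element-wise deltas; the function names and intrinsic parameters are illustrative, not taken from the GE-Sim implementation.

```python
def project_to_pixels(position_cam, fx, fy, cx, cy):
    """Pinhole projection of the end-effector position (already transformed
    into the camera frame via extrinsic calibration) into pixel coordinates.
    The intrinsics fx, fy, cx, cy are illustrative values."""
    x, y, z = position_cam
    return fx * x / z + cx, fy * y / z + cy

def pose_delta(pose_a, pose_b):
    """Incremental motion between consecutive 7-DoF poses
    (x, y, z, orientation terms, gripper state): the element-wise delta
    that would be encoded into a motion token in the described scheme."""
    return [b - a for a, b in zip(pose_a, pose_b)]
```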

This design allows GE-Sim to accurately translate low-level control commands into visual predictions, supporting closed-loop policy evaluation. In practice, action trajectories generated by the policy model are converted by GE-Sim into future visual states; these generated videos are then fed back to the policy model to produce the next actions, forming a complete simulation loop. Parallelized on distributed clusters, GE-Sim can evaluate thousands of policy rollouts per hour, providing an efficient evaluation platform for large-scale policy optimization. Beyond evaluation, GE-Sim also acts as a data engine, generating diverse training data by executing the same action trajectories under different initial visual conditions.
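The simulation loop described above reduces to alternating two callables. This is a structural sketch only; the policy/simulator interface shown here is an assumption, not GE-Sim's API.

```python
def closed_loop_rollout(policy, simulator, obs, steps=10):
    """Alternate between the policy (observation -> action chunk) and the
    action-conditioned simulator (observation, actions -> predicted future
    visual state), recording each step of the rollout."""
    trajectory = []
    for _ in range(steps):
        actions = policy(obs)
        obs = simulator(obs, actions)  # rendered future frames become the next observation
        trajectory.append((actions, obs))
    return trajectory
```

Many such rollouts can be launched in parallel, which is what makes simulator-based evaluation of thousands of policy variants per hour feasible.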

These three components work closely together to form a complete vision-centric robot learning platform: GE-Base provides powerful visual world modeling capabilities, GE-Act enables efficient conversion from vision to action, and GE-Sim supports large-scale policy evaluation and data generation, collectively advancing embodied intelligence.
EWMBench: World Model Evaluation Suite
Additionally, to evaluate the quality of world models for embodied tasks, the team developed the EWMBench evaluation suite alongside the core GE components. It provides comprehensive scoring across dimensions including scene consistency, trajectory accuracy, motion dynamics consistency, and semantic alignment. Subjective ratings from multiple experts showed high consistency with EWMBench rankings, validating its reliability for assessing robot task relevance. In comparisons with advanced models such as Kling, Hailuo, and OpenSora, GE-Base achieved top results on multiple key metrics reflecting visual modeling quality, aligning closely with human judgment.

Open-Source Plan & Future Outlook
The team will open-source all code, pre-trained models, and evaluation tools. Through its vision-centric world modeling, GE pioneers a new technical path for robot learning. The release of GE marks a shift for robots from passive execution towards active "imagine-verify-act" cycles. In the future, the platform will be expanded to incorporate more sensor modalities and to support full-body mobility and human-robot collaboration, continuing to advance the practical deployment of robots in intelligent manufacturing and service settings.
Media Contact
Organization: Shanghai Zhiyuan Innovation Technology Co., Ltd.
Contact Person: Jocelyn Lee
Website: https://www.zhiyuan-robot.com
Email: Send Email
City: Shanghai
Country: China
Release ID: 35600
The post Agibot Released the Industry First Open-Source Robot World Model Platform – Genie Envisioner appeared first on King Newswire. This content is provided by a third-party source. King Newswire makes no warranties or representations in connection with it. King Newswire is a press release distribution agency and does not endorse or verify the claims made in this release. If you have any complaints or copyright concerns related to this article, please contact the company listed in the ‘Media Contact’ section.
About Author
Disclaimer: The views, suggestions, and opinions expressed here are the sole responsibility of the experts. No Digi Observer journalist was involved in the writing and production of this article.
Press Release
COOFANDY & Christopher Bell: Dressing the Journey to Victory – A Partnership Story Racing Toward Martinsville Speedway
The partnership between COOFANDY and Joe Gibbs Racing (JGR) alongside their driver Christopher Bell, established earlier this year, has been a dynamic fusion of high-speed motorsport and sophisticated style. As COOFANDY prepares to sponsor the event at Martinsville Speedway on October 26, 2025, let’s revisit the key moments of this thrilling collaboration.
Partnership Journey Recap
May: The Collaboration Begins
During its 10th-anniversary celebrations, COOFANDY officially announced Christopher Bell as its global brand ambassador. The launch also featured the debut of the “Bell’s Picks” product collection and a creative comic series.
June: Father’s Day Special Event
COOFANDY organized a special fan viewing experience during the FireKeepers Casino 400 in Michigan, blending COOFANDY fans with the NASCAR community to celebrate Father’s Day together.
July: Online Interaction & JGR Headquarters Experience
Christopher Bell made a surprise appearance in COOFANDY’s New York live stream, recommending his favorite styles. Subsequently, the brand hosted the “Approaching the Legend Journey,” inviting influencers and fans for an exclusive behind-the-scenes tour of the legendary Joe Gibbs Racing headquarters.
Next Stop: Martinsville – A Crucial Battle in the NASCAR Playoffs
The partnership is accelerating towards its next highlight: the Xfinity 500 at Martinsville Speedway on October 26, 2025. This is not just another race on the calendar; it’s a critical elimination event in the NASCAR Playoffs Round of 8, where championship hopes are forged or shattered. COOFANDY’s sponsorship of Christopher Bell’s No. 20 Toyota at this pivotal moment underscores the brand’s pursuit of excellence and peak performance. It places COOFANDY at the heart of the action, connecting with millions of passionate fans worldwide during one of the season’s most intense and most-watched races.
Track Aesthetics: COOFANDY Exclusive Designs Debut
For this landmark race, COOFANDY’s brand identity will be prominently displayed through custom-designed assets that bridge fashion and function:
Car Livery: The No. 20 Toyota will feature a unique livery incorporating COOFANDY’s brand elements. The design seamlessly integrates the brand’s visual identity with dynamic racing aesthetics, using a combination of the brand’s signature colors and sleek graphics that embody both speed and sophistication. The livery is designed to stand out under the track lights, ensuring high visibility and a powerful brand statement.
Firesuit: Christopher Bell will wear a specially designed firesuit featuring COOFANDY’s brand elements and logos. Beyond brand display, this suit reflects a balance of the brand’s elegant style and the rigorous technical demands of a professional driver.
Beyond the Track: COOFANDY’s New Chapter in Sports Marketing
The collaboration with NASCAR and a top-tier driver like Christopher Bell is a strategic cornerstone for COOFANDY’s global marketing expansion. This move leverages NASCAR’s immense popularity and emotional connection to authentically engage with a vast and loyal audience. It interprets COOFANDY’s “Dress the Journey” philosophy in a high-performance environment, linking the brand with values of excellence, precision, and the pursuit of victory. This partnership serves as a powerful engine for enhancing international brand awareness and connecting with new consumers who share a passion for sports and lifestyle.
Conclusion
COOFANDY sincerely thanks all fans for their support throughout this partnership. Don’t miss the next chapter: watch Christopher Bell drive the COOFANDY-branded car at Martinsville Speedway on October 26th. Stay tuned for more updates.
For more information, please visit the COOFANDY website and Amazon storefront, or connect with COOFANDY on Facebook and Instagram.
COOFANDY
Charlotte Liu
New York, US
Press Release
A New Era in the Crypto Market: ETH Volume Bot Redefines Success for Token Projects
The Ethereum ecosystem continues to evolve rapidly, with new token projects emerging every day. For many developers, maintaining transparent, consistent, and data-driven liquidity management on decentralized exchanges (DEXs) remains one of the biggest challenges. ETH Volume Bot, a blockchain automation platform, aims to support these needs by offering analytical and operational tools that help projects monitor, manage, and automate their on-chain trading activity in a secure and compliant way.

A Technology-Driven Approach to On-Chain Activity
ethvolumebot.com provides automated infrastructure to assist token teams in managing liquidity, transaction execution, and on-chain analytics on Ethereum-based DEXs. The platform leverages automation to improve transaction efficiency and to help projects better understand their market presence through advanced data insights.
Since its introduction, the system has been utilized by numerous Ethereum-based initiatives to streamline operational processes and optimize smart contract interactions within transparent and regulated frameworks.
Introducing the Batch Transaction Queue (BTQ)
One of ETH Volume Bot’s key innovations is the Batch Transaction Queue (BTQ) — a mechanism designed to optimize transaction efficiency and reduce gas expenditure on the Ethereum network.
BTQ enables multiple small transactions to be processed in a bundled and gas-efficient manner, helping project teams lower operational costs while maintaining transaction transparency and traceability on-chain.
This technology contributes to a more efficient use of network resources, minimizing redundant transactions and improving on-chain data consistency. By reducing gas costs, BTQ enhances accessibility for smaller or early-stage blockchain projects.
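The release does not describe BTQ's internals, but a gas-capped greedy bundler conveys the core idea of sharing fixed per-transaction overhead across many small operations. Everything below (the field names, the single gas cap, the ordering policy) is a hypothetical sketch.

```python
def batch_transactions(pending, max_batch_gas):
    """Greedily pack pending transactions into bundles whose total estimated
    gas stays under max_batch_gas, preserving submission order. Field names
    and the gas-cap heuristic are illustrative assumptions."""
    batches, current, used = [], [], 0
    for tx in pending:
        if current and used + tx["gas"] > max_batch_gas:
            batches.append(current)   # close the full bundle
            current, used = [], 0
        current.append(tx)
        used += tx["gas"]
    if current:
        batches.append(current)
    return batches
```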
Advanced Controls and Analytics
The platform’s automation framework allows project teams to define operational parameters with precision, while the real-time analytics dashboard provides comprehensive visibility into performance metrics.
Teams can track liquidity distribution, trading patterns, and historical data, enabling informed, evidence-based decision-making.
The system integrates seamlessly with leading decentralized exchanges such as Uniswap, SushiSwap, and 1inch, ensuring compatibility with Ethereum-standard liquidity environments.
Security and Non-Custodial Design
Security and control remain top priorities. ETH Volume Bot follows a 100% non-custodial architecture, meaning users maintain full ownership and access to their assets at all times.
All operations are executed directly through Web3 wallets such as MetaMask or WalletConnect, ensuring that no funds are ever transferred to third-party custody.
The platform’s smart contracts have undergone independent security audits, validating their reliability and operational safety.
Transparency and Compliance
ETH Volume Bot emphasizes transparency, auditability, and compliance as fundamental principles of its design.
All on-chain activities are publicly verifiable, and the system operates strictly as a technological and analytical tool — not a financial advisory or promotional mechanism. Its purpose is to empower blockchain projects to manage their operations responsibly and within ethical standards.
About ETH Volume Bot
ETH Volume Bot is a blockchain automation and analytics platform that helps token projects manage transaction efficiency, liquidity operations, and smart contract activity across decentralized exchanges.
The system’s modular infrastructure is built for transparency, security, and operational scalability within the Ethereum ecosystem.
Official website: https://www.ethvolumebot.com
Media Contact
Organization: ETH Volume Bot
Contact Person: Aglae Bergnaum
Website: https://www.ethvolumebot.com
Email: Send Email
Country: United States
Release ID: 35647
Press Release
13-Year-Old Samanyu Sathyamoorthi Wins Curiosity Innovation Award at Global AI Summit with MyChemLab.ai, Aiming to Solve Worldwide Chemistry Lab Access Crisis
Innovative Virtual Chemistry Platform Leverages Google’s Gemini AI to Create Accessible, Risk-Free STEM Learning for Millions of Students Lacking Hands-On Experience.
United States, 18th Oct 2025 – Samanyu Sathyamoorthi, a 13-year-old innovator, 8th-grade student at John M. Horner Middle School in Fremont, CA, and future attendee of Iron Horse Middle School in San Ramon, CA, has been recognized on the global stage for his pioneering work in educational technology.
Samanyu won the prestigious Achievement: Curiosity/Innovation Award at the ISF Global Junicorn & AI Summit 2025, a student-focused innovation event held May 29–30, 2025, at Texas State University in San Marcos, Texas. The summit convened the world’s most promising student innovators, known as “Junicorns,” to present how they are using cutting-edge AI and technology to solve significant, real-world problems. Samanyu’s platform, MyChemLab.ai, stood out for its profound potential to democratize science education globally.
Photo: Samanyu Sathyamoorthi receiving his Curiosity/Innovation Award for MyChemLab.ai at the ISF Global Junicorn & AI Summit 2025, Texas State University, San Marcos, TX.
Disrupting the Chemistry Education Gap with AI
MyChemLab.ai is a revolutionary AI-powered virtual chemistry laboratory honored for its mission to make science education more accessible, interactive, and equitable worldwide. The platform’s creation is a direct response to a critical global crisis: the lack of functioning, hands-on chemistry laboratories.
According to research cited by Samanyu, an estimated 15 million students in the U.S. and 75 million students in India—a population equivalent to the combined populations of New York and California—lack the necessary infrastructure for practical, hands-on chemistry experience. This systemic deficit, particularly for students in grades 6 through 12, severely limits opportunities for experimentation, diminishes scientific curiosity, and curtails pathways into vital STEM careers.
“MyChemLab aims to tear down traditional barriers to science education and create equity,” explains Samanyu. “By providing an immersive, risk-free platform accessible on any device, students can experiment with elements and observe reactions that would typically require expensive, dangerous, or unavailable real-world equipment. It’s about making chemistry fun and accessible for every student, anytime and anywhere.”
Technical Depth and The Gemini AI Core
The effectiveness of MyChemLab.ai lies in its sophisticated technical architecture and the intelligent simulation engine at its core. Built on a modern full-stack foundation utilizing React.js for a dynamic front-end and Node.js for the back-end, the application secures student data and class progress via Firebase.
The platform’s core innovation is its deep integration with the Gemini AI Platform. The AI functions as a dynamic reaction engine, simulating complex chemical interactions with realistic fidelity. This capability allows students to manipulate environmental variables—including adjustable Pressure, Temperature, and Reaction Time controls—to observe cause-and-effect in real-time, replicating the unpredictability and rigor of a physical lab without any associated risk or cost.
Key features driving this educational depth include:
- A comprehensive chemical database containing over 115 elements and 30 compounds.
- Realistic outcome modeling via the Gemini AI Platform.
- Separate teacher and student portals designed to seamlessly integrate into classroom curricula, enabling assignments, collaborative learning, and standardized assessments.
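The release does not detail how the slider values reach the Gemini-based reaction engine. A hypothetical sketch of assembling a simulation request from those variables might look like the following; the wording, units, and parameter names are all assumptions, not MyChemLab.ai's actual integration.

```python
def build_reaction_prompt(reactants, temperature_k, pressure_atm, time_s):
    """Assemble a simulation request from the experiment's slider values.
    Entirely hypothetical: field names and phrasing are illustrative only."""
    return (
        "Simulate the chemical reaction between "
        + " and ".join(reactants)
        + f" at {temperature_k} K and {pressure_atm} atm over {time_s} s. "
        "Describe the products, observable changes, and safety notes."
    )
```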
The project is already demonstrating tangible impact, currently serving 40 active users and undergoing live testing in classrooms across the U.S. and India.
Photo: Samanyu Sathyamoorthi presenting MyChemLab.ai at the ISF Global Junicorn & AI Summit 2025, Texas State University, San Marcos, TX.
Validation and Educational Impact
The project has garnered significant praise from both the technical and educational communities, validating its promise as a transformative learning tool.
“It’s truly impressive to see someone your age harness AI for such a complex subject like chemistry,” said Dr. Sumathy Kumar, Ph.D., Chemistry Educator in India. “Projects like yours show us the incredible potential of young minds and remind us that the future of science is bright.”
Mitran, Director of Marketing at the ISF Global Summit, echoed this sentiment, adding, “I’m really impressed with the depth you’ve covered—from the secure login flow to the way you’ve integrated interactive sliders for variable manipulation. This is professional-grade application development.”
Educators have been instrumental in guiding the platform’s development. Ms. Corine Benedetti, a FUSD Teacher, provided constructive input, noting, “As someone with limited experience in chemistry, I think it could be very helpful to have a tutorial or suggested experiment section. That feedback will definitely help guide its next phase of development.” This practical feedback confirms MyChemLab.ai’s relevance and accessibility for a broad range of student and educator proficiency levels.
Photo: A screenshot of MyChemLab.ai’s virtual experiment interface, featuring sliders for temperature, pressure, and time.
Next Steps and Expanding the Vision
Following his success at the ISF Summit, Samanyu is preparing to submit MyChemLab.ai to the prestigious 2025 Congressional App Challenge, which celebrates student innovation in computer science. His goal is to represent his schools and community at the national level, continuing to showcase how technology can equalize learning opportunities. He will be representing California’s 10th Congressional District (Rep. Mark DeSaulnier) from Iron Horse Middle School in San Ramon.
Looking ahead, Samanyu plans to dramatically enhance the platform’s pedagogical power:
- Integrating a built-in, real-time AI chatbot tutor that explains complex chemical principles and provides academic assistance.
- Introducing advanced Augmented Reality (AR) and Virtual Reality (VR) experiences for truly immersive experimentation.
- Adding gamification elements like badges and leaderboards to boost long-term student engagement and interest.
“Winning this award motivates me to keep building tools that make science accessible,” Samanyu shared. “I want every student, no matter where they are, to have the chance to explore the magic of chemistry.”
About ISF Global Junicorn & AI Summit
The Innovation STEM Foundation (ISF) hosts the Global Junicorn & AI Summit annually to inspire and recognize young innovators in AI, robotics, and sustainability. The 2025 event featured over 300 student innovators from 10+ countries, with judging panels composed of academic and technology leaders.
Media Contact
Rocky Zester
crazyme2207@gmail.com
Project Website: www.mychemlab.ai
Student Innovation Channel:
https://www.youtube.com/channel/UCJ6UEab559ioCDRZ7LH_6sw/
Fremont & San Ramon, California
Media Contact
Organization: Silicon Stem
Contact Person: Rocky Zester
Website: https://www.mychemlab.ai/
Email: Send Email
Country: United States
Release ID: 35641