Forge Lab is a web-based AI training and simulation tool for vision-guided robots.

Forge Lab makes AI training and simulation for vision-guided robotics fast and accessible.

Manufacturing engineers can receive ready-to-deploy vision programs and test their automation strategies faster.


Apera AI has released Forge Lab, an AI-powered robotic vision training portal and simulation environment. The full simulation environment enables complete testing of automation strategies within hours, after which the Forge engine completes the vision programming within 24 hours. Manufacturers can significantly shorten implementation times for vision-guided robotic cells and ensure that their automation investments perform to expectations.

Visitors to Automate 2024 can experience Forge Lab in action from May 6-9, 2024 at booth 1857.

Forge Lab gives users of the company’s AI-powered vision software direct access to an AI training portal and simulation environment. It allows users to test an application in simulation before building a robotic cell, then creates a complete AI-powered vision program.

What drove Forge Lab’s creation?

Working with customers, we noticed that there wasn’t an automation design tool that bridged robotic simulation and vision programming. Robots could be seen moving in simulation programs, but vision programming was still reserved for those with expertise. There was no way of predicting whether the simulated vision program would work in real life.

Forge Lab was created to bridge this gap and bring Apera’s simulation and automated vision training to more people. Our key goal was to make the process of designing and testing vision-guided robots faster and easier.

Simulating the built environment

Your strategy for the vision-guided robotic cell can be tested and iterated in a simulation environment. You can upload a standard or custom end-of-arm tool, select pick points, and run the vision program in simulation. Simulating the application allows engineers to optimize their picking strategy before spending on a robotic cell.

Forge Lab users can run a full simulation of their vision-guided robotic cell.
Forge Lab provides a simulation environment for testing gripping strategies with custom or standard end-of-arm tools. Automation professionals can test their strategies and eliminate design mistakes before building the robotic cell.

Forge Lab was created based on customer requirements to shorten robotic cell implementation times and rigorously test automation strategies. For multi-site manufacturers such as automotive OEMs and Tier 1 suppliers, Forge Lab allows automation teams to develop identical or similar robotic cells for bin picking, material handling, machine tending, or assembly.

Users of Forge Lab do not need a physical camera setup or robot. Using the web-based app, in-house automation teams and system integrators can complete proofs of concept in hours, not weeks or months.

The vision training process

Forge Lab uses a CAD model for AI vision training, which is completed within 24 hours.

Forge Lab enables control of the vision training timeline. Users upload their CAD model and receive a ready vision program within 24 hours, which can be deployed to a robotic cell running Vue robotic vision software. This lets users bypass days or weeks of vision programming in a lab environment.

Apera AI’s Vue vision software provides industry-leading total vision cycle times as low as 0.3 seconds, compared with the 3-8 seconds conventional 3D vision cameras take to provide the same robotic guidance instructions.

Apera AI trains the user’s part in its AI engine over approximately 1 million simulated cycles to reach greater than 99.99% reliability. The user receives a ready vision program, saving days or weeks of vision programming time.

Forge Lab will be available in Summer 2024 for Apera AI customers.

Want to know more?