For many augmented reality (AR) experiences on Meta Quest devices, the Quest boundary (the "Guardian") can unnecessarily interrupt the user experience when a large physical space is required. While native apps can disable the boundary, this capability isn’t directly available for web-based experiences. Thankfully, there is a way to disable the boundary completely.
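One commonly documented approach (a general Quest developer technique, not an iQ3Connect API, and subject to change with Meta's system software) is to pause the Guardian over ADB with developer mode enabled on the headset:

```
# Requires developer mode and a USB or Wi-Fi ADB connection.
# The setting resets when the headset reboots.
adb shell setprop debug.oculus.guardian_pause 1

# Re-enable the boundary:
adb shell setprop debug.oculus.guardian_pause 0
```

Because the setting is applied by the system rather than by an individual app, it also covers WebXR sessions running in the browser.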
We’re excited to announce that iQ3Connect 2025.2 (February 2025) will introduce improved 3D model rendering, elevating the realism of your XR experiences and training modules. iQ3Connect has traditionally prioritized XR performance and technical accuracy over rendering quality, since the added cost and time to develop visually appealing 3D models often isn’t justified for technically focused training. This update reflects the evolving availability of visually realistic models.
As businesses increasingly invest in visually realistic models for use cases like marketing, and as public 3D model libraries become more accessible and cost-effective, we aim to ensure these assets can seamlessly enhance XR training experiences without significant additional investment. Importantly, our commitment to XR performance and technical accuracy remains unchanged. These rendering improvements are strategically designed to support your use cases without pursuing full photorealism.
The first phase of these updates introduces the automatic import of key material properties from GLB files, making it easier to incorporate visually realistic elements into your XR projects.
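To make that concrete, here is a minimal sketch of the kind of physically based rendering (PBR) material properties a GLB file carries. It uses three.js purely for illustration; iQ3Connect's importer is internal, and this is not its API:

```typescript
// Load a GLB and log the PBR material properties that drive visual
// realism: base color, metalness, roughness, emissive, and texture maps.
import { Mesh, MeshStandardMaterial } from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

new GLTFLoader().load('model.glb', (gltf) => {
  gltf.scene.traverse((node) => {
    if (node instanceof Mesh && node.material instanceof MeshStandardMaterial) {
      const m = node.material;
      console.log(node.name, {
        baseColor: '#' + m.color.getHexString(),   // baseColorFactor
        metalness: m.metalness,                    // metallicFactor
        roughness: m.roughness,                    // roughnessFactor
        emissive: '#' + m.emissive.getHexString(), // emissiveFactor
        hasBaseColorTexture: m.map !== null,
        hasNormalTexture: m.normalMap !== null,
      });
    }
  });
});
```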
Try it out yourself by signing up for a free trial.
iQ3Connect v2025.1 brings major updates, enhancing web-based spatial training and XR content creation. Key features include:
Touch-based AR alignment for seamless virtual/physical overlay
New actions and triggers for better interactivity and navigation
Enhanced tracking for effortless performance monitoring and adaptive experiences
A simplified UI/UX for easier training creation
Explore a sampling of the new capabilities of iQ3Connect 2025.1 below.
Touch-based AR Alignment allows precise virtual/physical alignment using controllers or hands. It’s compatible with any AR headset, works without markers, and aligns to any surface, regardless of size, shape, or material.
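Marker-free alignment of this kind ultimately reduces to solving for a rigid transform between touched physical points and their virtual counterparts. A minimal sketch of the two-point case, using three.js math types as an assumption for illustration (iQ3Connect's actual solver is not public):

```typescript
import { Matrix4, Quaternion, Vector3 } from 'three';

// Given two reference points touched in the physical room (p1, p2) and
// the matching anchor points defined on the virtual model (v1, v2),
// solve for the yaw rotation + translation that overlays the model
// onto the room. Assumes both spaces share the same up axis (Y).
function alignTwoPoints(v1: Vector3, v2: Vector3, p1: Vector3, p2: Vector3): Matrix4 {
  // Headings between the point pairs, flattened onto the floor plane.
  const vDir = v2.clone().sub(v1).setY(0).normalize();
  const pDir = p2.clone().sub(p1).setY(0).normalize();
  const yaw = new Quaternion().setFromUnitVectors(vDir, pDir);

  // Rotate the first virtual anchor, then translate it onto the first
  // physical touch point.
  const offset = p1.clone().sub(v1.clone().applyQuaternion(yaw));
  return new Matrix4().compose(offset, yaw, new Vector3(1, 1, 1));
}
```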
XR Action - Move Objects lets creators define which virtual objects users can interact with. Whitelist objects to allow interaction with a few, or blacklist objects to lock some while keeping others interactive.
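In code terms, the whitelist/blacklist distinction is a simple policy check; the names below are illustrative only, since the Studio exposes this without scripting:

```typescript
// Whitelist: only the listed objects are movable.
// Blacklist: the listed objects are locked; everything else is movable.
type MoveObjectsPolicy =
  | { mode: 'whitelist'; objects: string[] }
  | { mode: 'blacklist'; objects: string[] };

function canMove(objectName: string, policy: MoveObjectsPolicy): boolean {
  const listed = policy.objects.includes(objectName);
  return policy.mode === 'whitelist' ? listed : !listed;
}
```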
XR Action - Wayfinding displays a virtual path to a target location and can trigger events when the user arrives, using a User Position Trigger.
XR Trigger - User Position spawns actions based on the user’s proximity to a specific object or location.
XR Trigger - Object-to-Object Proximity spawns actions based on the distance between two virtual objects, such as verifying correct placement.
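Both position triggers come down to the same per-frame distance test, where the subject is either the user's head or a second object's origin. A hedged sketch with hypothetical names (in practice these triggers are configured visually, not scripted):

```typescript
import { Vector3 } from 'three';

interface ProximityTrigger {
  target: Vector3;      // watched object or location
  radius: number;       // trigger distance in meters
  onEnter: () => void;  // action to spawn on arrival
  fired: boolean;       // prevents re-firing while inside
}

// Call once per frame. `subject` is the user's position for a User
// Position trigger, or another object's position for object-to-object.
function updateTrigger(trigger: ProximityTrigger, subject: Vector3): void {
  const inside = subject.distanceTo(trigger.target) <= trigger.radius;
  if (inside && !trigger.fired) {
    trigger.fired = true;
    trigger.onEnter();
  } else if (!inside) {
    trigger.fired = false; // re-arm after the subject leaves
  }
}
```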
Tracking, Logic, and Variables are now available directly in the Training Studio, eliminating the need for scripting. The visual interface allows easy tracking of user outcomes (e.g., time, incorrect steps, answers) and dynamic adjustment of training based on performance and selections using variables and logic.
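As a rough mental model of what the visual variables and logic replace (hypothetical names; no scripting is required in the Studio), adaptive training boils down to branching on tracked outcomes:

```typescript
// Outcomes tracked during a session.
interface SessionVars {
  elapsedSeconds: number;
  incorrectSteps: number;
}

// Branch the training based on performance.
function nextModule(vars: SessionVars): string {
  if (vars.incorrectSteps > 3) return 'remediation';
  if (vars.elapsedSeconds > 600) return 'guided-walkthrough';
  return 'advanced-scenario';
}
```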