The computing landscape is undergoing a profound transformation with the emergence of spatial computing platforms (VR and AR). As we step into this new era, the intersection of virtual reality, augmented reality, and on-device machine learning presents unprecedented opportunities for developers to create experiences that seamlessly blend digital content with the physical world.
The introduction of visionOS marks a significant milestone in this evolution. Apple's spatial computing platform combines sophisticated hardware with powerful development frameworks, enabling developers to build applications that can understand and interact with the physical environment in real time. This convergence of spatial awareness and on-device machine learning opens up new possibilities for object recognition and tracking applications that were previously difficult to implement.
What We're Building
In this guide, we'll build an app that showcases the power of on-device machine learning in visionOS. The app will recognize and track a diet soda can in real time, overlaying visual indicators and information directly in the user's field of view.
Our app will leverage several key technologies in the visionOS ecosystem. When a user runs the app, they're presented with a window containing a rotating 3D model of our target object along with usage instructions. As they look around their environment, the app continuously scans for diet soda cans. Upon detection, it displays dynamic bounding lines around the can and places a floating text label above it, all while maintaining precise tracking as the object or the user moves through space.
Before we begin development, let's make sure we have the necessary tools and understanding in place. This tutorial requires:
- The latest version of Xcode 16 with the visionOS SDK installed
- visionOS 2.0 or later running on an Apple Vision Pro device
- Basic familiarity with SwiftUI and the Swift programming language
The development process will take us through several key stages, from capturing a 3D model of our target object to implementing real-time tracking and visualization. Each stage builds upon the previous one, giving you a thorough understanding of building features powered by on-device machine learning for visionOS.
Building the Foundation: 3D Object Capture
The first step in creating our object recognition system is capturing a detailed 3D model of our target object. Apple provides a powerful app for this purpose: Reality Composer, available for iOS through the App Store.
When capturing a 3D model, environmental conditions play a crucial role in the quality of the results. Setting up the capture environment properly ensures we get the best possible data for our machine learning model. A well-lit space with consistent lighting helps the capture system accurately detect the object's features and dimensions, and the diet soda can should sit on a surface with good contrast, making it easier for the system to distinguish the object's boundaries.
The capture process begins by launching the Reality Composer app and selecting "Object Capture" from the available options. The app guides us through positioning a bounding box around our target object. This bounding box is critical because it defines the spatial boundaries of the capture volume.

Once we've captured all the details of the soda can with the help of the in-app guide and processed the images, a .usdz file containing our 3D model is created. This file format is specifically designed for AR/VR applications and contains not just the visual representation of our object, but also essential information that will be used in the training process.
Training the Reference Model
With our 3D model in hand, we move to the next crucial phase: training our recognition model with Create ML. Apple's Create ML app provides a straightforward interface for training machine learning models, including specialized templates for spatial computing.
To begin the training process, we launch Create ML and select the "Object Tracking" template from the Spatial category. This template is specifically designed for training models that can recognize and track objects in three-dimensional space.

After creating a new project, we import our .usdz file into Create ML. The system automatically analyzes the 3D model and extracts the key features that will be used for recognition. The interface provides options for configuring how the object should be recognized in space, including viewing angles and tracking preferences.
Once you've imported the 3D model and reviewed it from various angles, go ahead and click "Train". Create ML will process the model and begin the training phase, during which the system learns to recognize our object from various angles and under different conditions. Training can take several hours as the system builds a comprehensive understanding of the object's characteristics.

The output of this training process is a .referenceobject file, which contains the trained model data optimized for real-time object detection in visionOS. This file encapsulates all of the learned features and recognition parameters that will enable our app to identify diet soda cans in the user's environment.
The successful creation of our reference object marks an important milestone in our development process. We now have a trained model capable of recognizing our target object in real time, setting the stage for implementing the actual detection and visualization functionality in our visionOS application.
Initial Project Setup
Now that we have our trained reference object, let's set up the visionOS project. Launch Xcode and select "Create a new Xcode project". In the template selector, choose visionOS under the platforms filter and select "App". This template provides the basic structure needed for a visionOS application.

In the project configuration dialog, configure your project with these primary settings:
- Product Name: SodaTracker
- Initial Scene: Window
- Immersive Space Renderer: RealityKit
- Immersive Space: Mixed
After project creation, we need to make a few important modifications. First, delete the file named ToggleImmersiveSpaceButton.swift, as we won't be using it in this implementation.
Next, we'll add our previously created assets to the project. In Xcode's Project Navigator, locate the "RealityKitContent.rkassets" folder and add the 3D object file ("SodaModel.usdz"). This 3D model will be used in our informational view. Then create a new group named "ReferenceObjects" and add the "Diet Soda.referenceobject" file we generated with Create ML.
The final setup step is to configure the permission required for object tracking. Open your project's Info.plist file and add a new key: NSWorldSensingUsageDescription. Set its value to "Used to track diet sodas". This permission is required for the app to detect and track objects in the user's environment.
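As a runtime sanity check, you can also ask ARKit for world-sensing authorization explicitly before starting detection. Below is a minimal sketch assuming the standard ARKitSession authorization API; the helper name and call site are our own and not part of the project template.
import ARKit

// Hypothetical helper: confirm world-sensing access before running object tracking.
// The system prompt shown to the user is backed by the NSWorldSensingUsageDescription string above.
func requestWorldSensingAccess(using session: ARKitSession) async -> Bool {
    let results = await session.requestAuthorization(for: [.worldSensing])
    return results[.worldSensing] == .allowed
}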
With these setup steps complete, we have a properly configured visionOS project ready for implementing the object tracking functionality.
Entry Point Implementation
Let's start with SodaTrackerApp.swift, which was automatically created when we set up the visionOS project. We need to modify this file to support object tracking. Replace the default implementation with the following code:
import SwiftUI

/**
 SodaTrackerApp is the main entry point for the application.
 It configures the app's window and immersive space, and manages
 the initialization of object detection capabilities.

 The app automatically launches into an immersive experience
 where users can see Diet Soda cans being detected and highlighted
 in their environment.
 */
@main
struct SodaTrackerApp: App {
    /// Shared model that manages object detection state
    @StateObject private var appModel = AppModel()

    /// System environment value for launching immersive experiences
    @Environment(\.openImmersiveSpace) var openImmersiveSpace

    var body: some Scene {
        WindowGroup {
            ContentView()
                .environmentObject(appModel)
                .task {
                    // Load and prepare object detection capabilities
                    await appModel.initializeDetector()
                }
                .onAppear {
                    Task {
                        // Launch immediately into the immersive experience
                        await openImmersiveSpace(id: appModel.immersiveSpaceID)
                    }
                }
        }
        .windowStyle(.plain)
        .windowResizability(.contentSize)

        // Configure the immersive space for object detection
        ImmersiveSpace(id: appModel.immersiveSpaceID) {
            ImmersiveView()
                .environment(appModel)
        }
        // Use mixed immersion to blend virtual content with reality
        .immersionStyle(selection: .constant(.mixed), in: .mixed)
        // Hide system UI for a more immersive experience
        .persistentSystemOverlays(.hidden)
    }
}
The key aspect of this implementation is the initialization and management of our object detection system. When the app launches, we initialize our AppModel, which handles the ARKit session and object tracking setup. The initialization sequence matters:
.task {
    await appModel.initializeDetector()
}
This asynchronous initialization loads our trained reference object and prepares the ARKit session for object tracking. We want it to happen before the immersive space, where the actual detection takes place, is opened.
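One way to make that ordering explicit is to chain both steps inside a single task. A sketch of that variant, not the template's code:
.task {
    // Hypothetical variant: finish loading the reference object first,
    // then open the immersive space where detection runs.
    await appModel.initializeDetector()
    await openImmersiveSpace(id: appModel.immersiveSpaceID)
}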
The immersive space configuration is particularly important for object tracking:
.immersionStyle(selection: .constant(.mixed), in: .mixed)
The mixed immersion style is essential for our object tracking implementation because it lets RealityKit blend our visual indicators (bounding boxes and labels) with the real-world environment where we're detecting objects. This creates a seamless experience in which virtual content accurately aligns with physical objects in the user's space.
With these modifications to SodaTrackerApp.swift, our app is ready to begin the object detection process, with ARKit, RealityKit, and our trained model working together in the mixed reality environment. In the next section, we'll examine the core object detection functionality in AppModel.swift, another file that was created during project setup.
Core Detection Model Implementation
AppModel.swift, created during project setup, serves as our core detection system. This file manages the ARKit session, loads our trained model, and coordinates the object tracking process. Let's look at its implementation:
import SwiftUI
import RealityKit
import ARKit

/**
 AppModel serves as the core model for the soda can detection application.
 It manages the ARKit session, handles object tracking initialization,
 and maintains the state of object detection throughout the app's lifecycle.

 This model is designed to work with visionOS's object tracking capabilities,
 specifically optimized for detecting Diet Soda cans in the user's environment.
 */
@MainActor
@Observable
class AppModel: ObservableObject {
    /// Unique identifier for the immersive space where object detection occurs
    let immersiveSpaceID = "SodaTracking"

    /// ARKit session instance that manages the core tracking functionality
    /// This session coordinates with visionOS to process spatial data
    private var arSession = ARKitSession()

    /// Dedicated provider that handles the real-time tracking of soda cans
    /// This maintains the state of currently tracked objects
    private var sodaTracker: ObjectTrackingProvider?

    /// Collection of reference objects used for detection
    /// These objects contain the trained model data for recognizing soda cans
    private var targetObjects: [ReferenceObject] = []

    /**
     Initializes the object detection system by loading and preparing
     the reference object (Diet Soda can) from the app bundle.

     This method loads a pre-trained model that contains spatial and
     visual information about the Diet Soda can we want to detect.
     */
    func initializeDetector() async {
        guard let objectURL = Bundle.main.url(forResource: "Diet Soda", withExtension: "referenceobject") else {
            print("Error: Failed to locate reference object in bundle - ensure Diet Soda.referenceobject exists")
            return
        }
        do {
            let referenceObject = try await ReferenceObject(from: objectURL)
            self.targetObjects = [referenceObject]
        } catch {
            print("Error: Failed to initialize reference object: \(error)")
        }
    }

    /**
     Starts the active object detection process using ARKit.

     This method initializes the tracking provider with the loaded reference objects
     and begins the real-time detection process in the user's environment.

     Returns: An ObjectTrackingProvider if successfully initialized, nil otherwise
     */
    func beginDetection() async -> ObjectTrackingProvider? {
        guard !targetObjects.isEmpty else { return nil }
        let tracker = ObjectTrackingProvider(referenceObjects: targetObjects)
        do {
            try await arSession.run([tracker])
            self.sodaTracker = tracker
            return tracker
        } catch {
            print("Error: Failed to initialize tracking: \(error)")
            return nil
        }
    }

    /**
     Terminates the object detection process.

     This method safely stops the ARKit session and cleans up
     tracking resources when object detection is no longer needed.
     */
    func endDetection() {
        arSession.stop()
    }
}
At the core of our implementation is ARKitSession, visionOS's gateway to spatial computing capabilities. The @MainActor attribute ensures our object detection operations run on the main thread, which is important for staying in sync with the rendering pipeline.
private var arSession = ARKitSession()
private var sodaTracker: ObjectTrackingProvider?
private var targetObjects: [ReferenceObject] = []
The ObjectTrackingProvider is a specialized component in visionOS that handles real-time object detection. It works in conjunction with ReferenceObject instances, which contain the spatial and visual information from our trained model. We keep these as private properties to ensure proper lifecycle management.
The initialization step is particularly important:
let referenceObject = try await ReferenceObject(from: objectURL)
self.targetObjects = [referenceObject]
Here we load our trained model (the .referenceobject file we created in Create ML) into a ReferenceObject instance. This process is asynchronous because the system needs to parse and prepare the model data for real-time detection.
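The same loading step extends naturally if you later train additional reference objects. Below is a minimal sketch that loads every .referenceobject file in the bundle; the helper name and bulk-loading approach are assumptions, not part of this project.
import Foundation
import ARKit

// Hypothetical bulk loader: collect every bundled .referenceobject file
// instead of looking up a single resource by name.
func loadAllReferenceObjects() async -> [ReferenceObject] {
    let urls = Bundle.main.urls(forResourcesWithExtension: "referenceobject", subdirectory: nil) ?? []
    var objects: [ReferenceObject] = []
    for url in urls {
        do {
            objects.append(try await ReferenceObject(from: url))
        } catch {
            print("Skipping \(url.lastPathComponent): \(error)")
        }
    }
    return objects
}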
The beginDetection method sets up the actual tracking process:
let tracker = ObjectTrackingProvider(referenceObjects: targetObjects)
try await arSession.run([tracker])
When we create the ObjectTrackingProvider, we pass in our reference objects. The provider uses these to determine the detection parameters — what to look for, which features to match, and how to track the object in 3D space. The ARKitSession.run call activates the tracking system and begins real-time analysis of the user's environment.
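Object tracking isn't available in every context (for example, it may not be available in the visionOS Simulator), so it can be worth checking for support before running the session. A minimal sketch, assuming the provider's standard isSupported check; the wrapper function is our own:
import ARKit

// Hypothetical pre-flight check before calling beginDetection().
@MainActor
func startTrackingIfPossible(using appModel: AppModel) async -> ObjectTrackingProvider? {
    guard ObjectTrackingProvider.isSupported else {
        print("Object tracking is not supported in this environment")
        return nil
    }
    return await appModel.beginDetection()
}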
Immersive Experience Implementation
ImmersiveView.swift, provided in our initial project setup, manages the real-time object detection visualization in the user's space. This view processes the continuous stream of detection data and creates visual representations of detected objects. Here's the implementation:
import SwiftUI
import RealityKit
import ARKit

/**
 ImmersiveView is responsible for creating and managing the augmented reality
 experience where object detection occurs. This view handles the real-time
 visualization of detected soda cans in the user's environment.

 It maintains a collection of visual representations for each detected object
 and updates them in real time as objects are detected, moved, or removed
 from view.
 */
struct ImmersiveView: View {
    /// Access to the app's shared model for object detection functionality
    @Environment(AppModel.self) private var appModel

    /// Root entity that serves as the parent for all AR content
    /// This entity provides a consistent coordinate space for all visualizations
    @State private var sceneRoot = Entity()

    /// Maps unique object identifiers to their visual representations
    /// Enables efficient updating of individual object visualizations
    @State private var activeVisualizations: [UUID: ObjectVisualization] = [:]

    var body: some View {
        RealityView { content in
            // Initialize the AR scene with our root entity
            content.add(sceneRoot)

            Task {
                // Begin object detection and monitor changes
                let detector = await appModel.beginDetection()
                guard let detector else { return }

                // Process real-time updates for object detection
                for await update in detector.anchorUpdates {
                    let anchor = update.anchor
                    let id = anchor.id

                    switch update.event {
                    case .added:
                        // Object newly detected - create and add its visualization
                        let visualization = ObjectVisualization(for: anchor)
                        activeVisualizations[id] = visualization
                        sceneRoot.addChild(visualization.entity)
                    case .updated:
                        // Object moved - update its position and orientation
                        activeVisualizations[id]?.refreshTracking(with: anchor)
                    case .removed:
                        // Object no longer visible - remove its visualization
                        activeVisualizations[id]?.entity.removeFromParent()
                        activeVisualizations.removeValue(forKey: id)
                    }
                }
            }
        }
        .onDisappear {
            // Clean up AR resources when the view is dismissed
            cleanupVisualizations()
        }
    }

    /**
     Removes all active visualizations and stops object detection.

     This ensures proper cleanup of AR resources when the view is no longer active.
     */
    private func cleanupVisualizations() {
        for (_, visualization) in activeVisualizations {
            visualization.entity.removeFromParent()
        }
        activeVisualizations.removeAll()
        appModel.endDetection()
    }
}
The core of our object tracking visualization is the detector's anchorUpdates stream. This ARKit feature provides a continuous flow of object detection events:
for await update in detector.anchorUpdates {
    let anchor = update.anchor
    let id = anchor.id

    switch update.event {
    case .added:
        // Object first detected
    case .updated:
        // Object position changed
    case .removed:
        // Object no longer visible
    }
}
Each ObjectAnchor carries crucial spatial data about the detected soda can, including its position, orientation, and bounding box in 3D space. When a new object is detected (the .added event), we create a visualization that RealityKit renders at the correct position relative to the physical object. As the object or the user moves, the .updated events keep our virtual content precisely aligned with the real world.
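For reference, here's a small sketch of the kind of data each anchor exposes; the logging helper is purely illustrative:
import ARKit

// Illustrative helper that prints the spatial data from a detected anchor.
func logAnchor(_ anchor: ObjectAnchor) {
    // 4x4 transform from the immersive space origin to the anchor
    let transform = anchor.originFromAnchorTransform
    let position = transform.columns.3
    print("Tracked:", anchor.isTracked)
    print("Position:", position.x, position.y, position.z)
    // Estimated physical size of the detected object, in meters
    print("Extent:", anchor.boundingBox.extent)
}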
Visual Feedback System
Create a new file named ObjectVisualization.swift to handle the visual representation of detected objects. This component is responsible for creating and managing the bounding box and text overlay that appear around detected soda cans:
import RealityKit
import ARKit
import UIKit
import SwiftUI

/**
 ObjectVisualization manages the visual elements that appear when a soda can is detected.
 This class handles both the 3D text label that appears above the object and the
 bounding box that outlines the detected object in space.
 */
@MainActor
class ObjectVisualization {
    /// Root entity that contains all visual elements
    var entity: Entity

    /// Entity specifically for the bounding box visualization
    private var boundingBox: Entity

    /// Width of bounding box lines - 0.003 provides good visibility without being too intrusive
    private let outlineWidth: Float = 0.003

    init(for anchor: ObjectAnchor) {
        entity = Entity()
        boundingBox = Entity()

        // Set up the main entity's transform based on the detected object's position
        entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
        entity.isEnabled = anchor.isTracked

        createFloatingLabel(for: anchor)
        setupBoundingBox(for: anchor)
        refreshBoundingBoxGeometry(with: anchor)
    }

    /**
     Creates a floating text label that hovers above the detected object.

     The text uses the Avenir Next font for readability in AR space and
     is positioned slightly above the object for clear visibility.
     */
    private func createFloatingLabel(for anchor: ObjectAnchor) {
        // 0.06 units provides a good text size for viewing at typical distances
        let labelSize: Float = 0.06

        // Use Avenir Next for its readability and modern appearance in AR
        let font = MeshResource.Font(name: "Avenir Next", size: CGFloat(labelSize))!
        let textMesh = MeshResource.generateText("Diet Soda",
                                                 extrusionDepth: labelSize * 0.15,
                                                 font: font)

        // Create a material that keeps the text clearly visible against any background
        var textMaterial = UnlitMaterial()
        textMaterial.color = .init(tint: .orange)
        let textEntity = ModelEntity(mesh: textMesh, materials: [textMaterial])

        // Position the text above the object with enough clearance to avoid intersection
        textEntity.transform.translation = SIMD3(
            anchor.boundingBox.center.x - textMesh.bounds.max.x / 2,
            anchor.boundingBox.extent.y + labelSize * 1.5,
            0
        )
        entity.addChild(textEntity)
    }

    /**
     Creates a bounding box visualization that outlines the detected object.

     Uses a semi-transparent magenta color to provide a clear
     but non-distracting visual boundary around the detected soda can.
     */
    private func setupBoundingBox(for anchor: ObjectAnchor) {
        let boxMesh = MeshResource.generateBox(size: [1.0, 1.0, 1.0])

        // Create a single material for all edges with a magenta color
        let boundsMaterial = UnlitMaterial(color: .magenta.withAlphaComponent(0.4))

        // Create all edges with a uniform appearance
        for _ in 0..<12 {
            let edge = ModelEntity(mesh: boxMesh, materials: [boundsMaterial])
            boundingBox.addChild(edge)
        }
        entity.addChild(boundingBox)
    }

    /**
     Updates the visualization when the tracked object moves.

     This ensures the bounding box and text maintain accurate positioning
     relative to the physical object being tracked.
     */
    func refreshTracking(with anchor: ObjectAnchor) {
        entity.isEnabled = anchor.isTracked
        guard anchor.isTracked else { return }

        entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
        refreshBoundingBoxGeometry(with: anchor)
    }

    /**
     Updates the bounding box geometry to match the detected object's dimensions.

     Creates a precise outline that accurately matches the physical object's boundaries.
     */
    private func refreshBoundingBoxGeometry(with anchor: ObjectAnchor) {
        let extent = anchor.boundingBox.extent
        boundingBox.transform.translation = anchor.boundingBox.center

        for (index, edge) in boundingBox.children.enumerated() {
            guard let edge = edge as? ModelEntity else { continue }

            switch index {
            case 0...3: // Horizontal edges along the width
                edge.scale = SIMD3(extent.x, outlineWidth, outlineWidth)
                edge.position = [
                    0,
                    extent.y / 2 * (index % 2 == 0 ? -1 : 1),
                    extent.z / 2 * (index < 2 ? -1 : 1)
                ]
            case 4...7: // Vertical edges along the height
                edge.scale = SIMD3(outlineWidth, extent.y, outlineWidth)
                edge.position = [
                    extent.x / 2 * (index % 2 == 0 ? -1 : 1),
                    0,
                    extent.z / 2 * (index < 6 ? -1 : 1)
                ]
            case 8...11: // Depth edges
                edge.scale = SIMD3(outlineWidth, outlineWidth, extent.z)
                edge.position = [
                    extent.x / 2 * (index % 2 == 0 ? -1 : 1),
                    extent.y / 2 * (index < 10 ? -1 : 1),
                    0
                ]
            default:
                break
            }
        }
    }
}
The bounding box construction is a key aspect of the visualization. Rather than using a single box mesh, we assemble 12 individual edges that form a wireframe outline. This approach provides better visual clarity and allows more precise control over the appearance. The edges are positioned using SIMD3 vectors for efficient spatial calculations:
edge.position = [
    extent.x / 2 * (index % 2 == 0 ? -1 : 1),
    extent.y / 2 * (index < 10 ? -1 : 1),
    0
]
This positioning ensures each edge aligns exactly with the detected object's dimensions. The calculation uses the object's extent (width, height, depth) and creates a symmetric arrangement around its center point.
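As a concrete check of the math, take a hypothetical can about 6.6 cm wide, 12 cm tall, and 6.6 cm deep (made-up dimensions for illustration). The sketch below evaluates the vertical-edge formula for that extent:
import simd

// Worked example: where the four vertical (height) edges land for an assumed extent.
let extent = SIMD3<Float>(0.066, 0.12, 0.066)   // width, height, depth in meters
let positions = (4...7).map { index in
    SIMD3<Float>(
        extent.x / 2 * (index % 2 == 0 ? -1 : 1),
        0,
        extent.z / 2 * (index < 6 ? -1 : 1)
    )
}
// The four vertical edges sit at the corners (±0.033, 0, ±0.033),
// and each is scaled to the can's full height by refreshBoundingBoxGeometry.
print(positions)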
This visualization system works in conjunction with our ImmersiveView to create real-time visual feedback. As the ImmersiveView receives position updates from ARKit, it calls refreshTracking on the visualization, which updates the transform matrices to maintain precise alignment between the virtual overlays and the physical object.
Informational View

ContentView.swift, provided in our project template, handles the informational interface for the app. Here's the implementation:
import SwiftUI
import RealityKit
import RealityKitContent

/**
 ContentView provides the main window interface for the application.

 It displays a rotating 3D model of the target object (Diet Soda can)
 along with clear instructions on how to use the detection feature.
 */
struct ContentView: View {
    // State that drives the continuous rotation animation
    @State private var rotation: Double = 0

    var body: some View {
        VStack(spacing: 30) {
            // 3D model display with rotation animation
            Model3D(named: "SodaModel", bundle: realityKitContentBundle)
                .padding(.vertical, 20)
                .frame(width: 200, height: 200)
                .rotation3DEffect(
                    .degrees(rotation),
                    axis: (x: 0, y: 1, z: 0)
                )
                .onAppear {
                    // Start the continuous rotation animation
                    withAnimation(.linear(duration: 5.0).repeatForever(autoreverses: true)) {
                        rotation = 180
                    }
                }

            // Instructions for users
            VStack(spacing: 15) {
                Text("Diet Soda Detection")
                    .font(.title)
                    .fontWeight(.bold)

                Text("Hold your diet soda can in front of you to see it automatically detected and highlighted in your space.")
                    .font(.body)
                    .multilineTextAlignment(.center)
                    .foregroundColor(.secondary)
                    .padding(.horizontal)
            }
        }
        .padding()
        .frame(maxWidth: 400)
    }
}
This implementation displays our 3D-scanned soda model (SodaModel.usdz) with a rotating animation, giving users a clear reference for what the system is looking for. The rotation also helps users understand how to present the object for optimal detection.
With these components in place, our application provides a complete object detection experience. The system uses our trained model to recognize diet soda cans, creates precise visual indicators in real time, and offers clear user guidance through the informational interface.
Conclusion

In this tutorial, we've built a complete object detection system for visionOS that showcases the integration of several powerful technologies. Starting from 3D object capture, through ML model training in Create ML, to real-time detection with ARKit and RealityKit, we've created an app that seamlessly detects and tracks objects in the user's space.
This implementation is just the beginning of what's possible with on-device machine learning in spatial computing. As hardware continues to evolve with more powerful Neural Engines and dedicated ML accelerators, and as frameworks like Core ML mature, we'll see increasingly sophisticated applications that can understand and interact with the physical world in real time. The combination of spatial computing and on-device ML opens up possibilities ranging from advanced AR experiences to intelligent environmental understanding, all while preserving user privacy and keeping latency low.