[SOLVED] What is ARAnchor exactly?


I’m trying to understand and use ARKit. But there is one thing that I cannot fully understand.

Apple said about ARAnchor:

A real-world position and orientation that can be used for placing objects in an AR scene.

But that’s not enough. So my questions are:

  • What is ARAnchor exactly?
  • What are the differences between anchors and feature points?
  • Is ARAnchor just part of feature points?
  • And how does ARKit determine its anchors?


Updated: March 26, 2022.



ARAnchor is an invisible null object that can hold a 3D model at the anchor’s position in world space. Think of an ARAnchor as a transform node with local axes (you can translate, rotate and scale it) for your model. Every 3D model has a pivot point, right? So this pivot point must meet the ARAnchor.

If you do not use anchors in your ARKit/RealityKit app, your 3D models may drift from where they were placed, which will dramatically impact your app’s realism and user experience. Anchors are therefore crucial elements of any AR scene.

According to the 2017 ARKit documentation:

ARAnchor is a real-world position and orientation that can be used for placing objects in AR Scene. Adding an anchor to the session helps ARKit to optimize world-tracking accuracy in the area around that anchor, so that virtual objects appear to stay in place relative to the real world. If a virtual object moves, remove the corresponding anchor from the old position and add one at the new position.

ARAnchor is the parent class for all other anchor types existing in the ARKit framework, hence all these subclasses inherit from it. You can also create a plain ARAnchor yourself with the ARAnchor(transform:) initializer and add it to the session. I must also say that ARAnchor and feature points have nothing in common. Feature points are rather for successful tracking and for debugging.

ARAnchor doesn’t automatically track a real-world target. If you need automation, use the renderer(...) or session(...) delegate methods, which ARKit calls for you when you conform to the ARSCNViewDelegate or ARSessionDelegate protocol, respectively.
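Here’s a minimal sketch of that manual workflow (the function name is mine), assuming sceneView is an ARSCNView with a running world-tracking session:

func addAnchorInFrontOfCamera(in sceneView: ARSCNView) {
    // Current camera pose; bail out if the session has no frame yet
    guard let cameraTransform = sceneView.session.currentFrame?.camera.transform
    else { return }

    // Shift the anchor 60 cm along the camera's forward (-Z) axis
    var translation = matrix_identity_float4x4
    translation.columns.3.z = -0.6
    let transform = simd_mul(cameraTransform, translation)

    // ARKit reports this anchor back via session(_:didAdd:) / renderer(_:didAdd:for:)
    sceneView.session.add(anchor: ARAnchor(transform: transform))
}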

Here’s an image with a visual representation of a plane anchor. But keep in mind: by default, you can neither see a detected plane nor its corresponding ARPlaneAnchor. So, if you want to see any anchor in your scene, you have to "visualize" it, for example using three thin SCNCylinder primitives.
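Here’s a sketch of that idea (the helper name, sizes and colors are my own): a gizmo node built from three thin SCNCylinders that makes an anchor’s local axes visible.

import SceneKit
import UIKit

func axisGizmo(length: CGFloat = 0.1) -> SCNNode {
    let gizmo = SCNNode()
    let colors: [UIColor] = [.red, .green, .blue]          // X, Y, Z axes

    for (index, color) in colors.enumerated() {
        let cylinder = SCNCylinder(radius: 0.001, height: length)
        cylinder.firstMaterial?.diffuse.contents = color
        let axis = SCNNode(geometry: cylinder)

        switch index {
        case 0:
            axis.eulerAngles.z = -.pi / 2                  // rotate the Y-aligned cylinder onto X
            axis.position.x = Float(length) / 2
        case 1:
            axis.position.y = Float(length) / 2            // SCNCylinder is Y-aligned by default
        default:
            axis.eulerAngles.x = .pi / 2                   // rotate the Y-aligned cylinder onto Z
            axis.position.z = Float(length) / 2
        }
        gizmo.addChildNode(axis)
    }
    return gizmo
}

Inside renderer(_:didAdd:for:) you can then call node.addChildNode(axisGizmo()) to mark every new anchor.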


In ARKit, ARAnchors can be added to your scene automatically in several scenarios:

  • ARPlaneAnchor

    • If the horizontal and/or vertical planeDetection instance property is enabled, ARKit can add ARPlaneAnchors to the current session. Note that enabling planeDetection can considerably increase the time required for the scene-understanding stage (see the configuration sketch after this list).
  • ARImageAnchor (conforms to ARTrackable protocol)

    • This type of anchor contains information about the position and orientation of a detected image (the anchor is placed at the image’s center) in a world-tracking session. To activate it, use the detectionImages instance property. In ARKit 2.0 you can track up to 25 images in total; in ARKit 3.0 and ARKit 4.0 – up to 100 images. But in both cases, no more than 4 images simultaneously. It was promised that ARKit 5.0 would detect and track up to 100 images at a time, but it’s still not implemented yet.
  • ARBodyAnchor (conforms to ARTrackable protocol)

    • Since ARKit 3.0 you can enable body tracking by running your session with ARBodyTrackingConfiguration(). You’ll get an ARBodyAnchor at the Root Joint of the CG skeleton, i.e. at the pelvis of the tracked character.
  • ARFaceAnchor (conforms to ARTrackable protocol)

    • A face anchor stores information about a face’s topology and pose, as well as its expression, which you can detect with the front TrueDepth camera or with a regular RGB camera. When a face is detected, the anchor is attached slightly behind the nose, at the center of the face. In ARKit 2.0 you can track just one face; in ARKit 3.0 – up to 3 faces simultaneously. In ARKit 4.0 the number of tracked faces depends on the TrueDepth sensor and CPU: smartphones with a TrueDepth camera track up to 3 faces, and smartphones with an A12+ chipset but without a TrueDepth camera can also track up to 3 faces.
  • ARObjectAnchor

    • This anchor type keeps information about the six degrees of freedom (position and orientation) of a real-world 3D object detected in a world-tracking session. Remember that you need to specify ARReferenceObject instances for the detectionObjects property of the session configuration.
  • AREnvironmentProbeAnchor

    • A probe anchor provides environmental lighting information for a specific area of space in a world-tracking session. ARKit uses it to supply reflective shaders with environmental reflections.
  • ARParticipantAnchor

    • This is an indispensable anchor type for multiuser AR experiences. To employ it, set the isCollaborationEnabled instance property of your world-tracking configuration to true, and use the MultipeerConnectivity framework to send collaboration data between devices.
  • ARMeshAnchor

    • Using LiDAR, ARKit subdivides the reconstructed real-world scene surrounding the user into mesh anchors with corresponding polygonal geometry. Mesh anchors constantly update their data as ARKit refines its understanding of the real world, although the mesh isn’t intended to reflect physical changes in real time. Sometimes your reconstructed scene can have 50 anchors or even more, because each classified object (wall, chair, door or table) gets its own anchor. Each ARMeshAnchor stores data about its vertices, vertex normals, faces, and one of eight classification cases.
  • ARGeoAnchor (conforms to ARTrackable protocol)

    • In ARKit 4.0+ there’s a geo anchor (a.k.a. location anchor) that tracks a geographic location using GPS, Apple Maps and additional environment data coming from Apple servers. This type of anchor identifies a specific area in the world that the app can refer to. When a user moves around the scene, the session updates the location anchor’s transform based on the anchor’s coordinates and the device’s compass heading. Look at the list of supported cities.
  • ARAppClipCodeAnchor (conforms to ARTrackable protocol)

    • This anchor tracks the position and orientation of an App Clip Code in the physical environment in ARKit 4.0+. You can use App Clip Codes to let users discover your App Clip in the real world. There are NFC-integrated and scan-only App Clip Codes.
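Here’s a minimal sketch (the function name and the resource group name "AR Resources" are assumptions) of a single world-tracking configuration that can produce several of the anchor types listed above:

import ARKit

func runConfiguredSession(on sceneView: ARSCNView) {
    let config = ARWorldTrackingConfiguration()

    // ARPlaneAnchor: detect horizontal and vertical surfaces
    config.planeDetection = [.horizontal, .vertical]

    // ARImageAnchor: detect reference images stored in an asset catalog group
    config.detectionImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                                              bundle: nil)
    config.maximumNumberOfTrackedImages = 4

    // ARMeshAnchor: scene reconstruction, available on LiDAR-equipped devices only
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.meshWithClassification) {
        config.sceneReconstruction = .meshWithClassification
    }

    sceneView.session.run(config)
}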


There are also other common approaches to creating anchors in an AR session:

  • Hit-Testing methods

    • Tapping on the screen projects a point onto an invisible detected plane, placing an ARAnchor where the imaginary ray intersects this plane. By the way, the ARHitTestResult class and its corresponding hit-testing methods for ARSCNView and ARSKView are deprecated in iOS 14, so you have to get used to ray-casting.
  • Ray-Casting methods

    • If you’re using ray-casting, tapping on the screen results in a projected 3D point on an invisible detected plane. You can also perform ray-casting between positions A and B in a 3D scene. The main difference from hit-testing is that ARKit can keep refining a ray cast as it learns more about detected surfaces, and that ray-casting can be 2D-to-3D or 3D-to-3D (see the sketch after this list).
  • Feature Points

    • Special yellow points that ARKit automatically generates on high-contrast edges of real-world objects can give you a place to put an ARAnchor.
  • ARCamera’s transform

    • The iPhone camera’s position and orientation (a simd_float4x4 transform) can easily be used as a place for an ARAnchor.
  • Any arbitrary World Position

    • Place a custom world anchor at any arbitrary transform in your scene. In RealityKit you can create it with AnchorEntity(.world(transform: mtx)).
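Here’s a minimal sketch of the ray-casting approach from the list above, assuming arView is a RealityKit ARView and point is a tap location in view coordinates:

func addAnchor(at point: CGPoint, in arView: ARView) {
    // Cast a ray from the screen point onto any estimated plane
    guard let result = arView.raycast(from: point,
                                  allowing: .estimatedPlane,
                                 alignment: .any).first
    else { return }

    // Pin the hit's world transform with a regular ARAnchor
    let anchor = ARAnchor(name: "raycastAnchor", transform: result.worldTransform)
    arView.session.add(anchor: anchor)
}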

This code snippet shows you how to use an ARPlaneAnchor in the delegate method renderer(_:didAdd:for:):

func renderer(_ renderer: SCNSceneRenderer, 
             didAdd node: SCNNode, 
              for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor 
    else { return }

    // Grid is a custom SCNNode subclass that visualizes the plane's extent
    let grid = Grid(anchor: planeAnchor)
    node.addChildNode(grid)
}


AnchorEntity is the alpha and omega of RealityKit. According to the 2019 RealityKit documentation:

AnchorEntity is an anchor that tethers virtual content to a real-world object in an AR session.

The RealityKit framework and the Reality Composer app were released at WWDC 2019. They introduced a new class named AnchorEntity. You can use AnchorEntity as the root point of any hierarchy of entities, and you must add it to the scene’s anchors collection. AnchorEntity automatically tracks its real-world target. In RealityKit and Reality Composer, AnchorEntity sits at the top of the hierarchy. One such anchor is able to hold a hundred models, and in that case it’s more stable than using 100 separate anchors, one per model.

Let’s see how it looks in code:

func makeUIView(context: Context) -> ARView {
    let arView = ARView(frame: .zero)
    // Experience is the Swift file auto-generated by Reality Composer
    let modelAnchor = try! Experience.loadModel()
    arView.scene.anchors.append(modelAnchor)
    return arView
}

AnchorEntity has three components:

  • Anchoring component
  • Transform component
  • Synchronization component
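A quick way to see all three (a sketch; what exactly gets printed is an implementation detail):

let anchor = AnchorEntity(world: [0, 0, -1])

// Each of the three components is present on a freshly created AnchorEntity
print(anchor.components[AnchoringComponent.self] as Any)
print(anchor.components[Transform.self] as Any)
print(anchor.components[SynchronizationComponent.self] as Any)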

To find out the difference between ARAnchor and AnchorEntity look at THIS POST.

Here are the nine AnchorEntity cases available in RealityKit 2.0 for iOS:

// Fixed position in the AR scene
AnchorEntity(.world(transform: mtx)) 

// For body tracking (a.k.a. Motion Capture)
AnchorEntity(.body)

// Pinned to the tracking camera
AnchorEntity(.camera)

// For face tracking (Selfie Camera config)
AnchorEntity(.face)

// For image tracking config
AnchorEntity(.image(group: "GroupName", name: "forModel"))

// For object tracking config
AnchorEntity(.object(group: "GroupName", name: "forObject"))

// For plane detection with surface classification
AnchorEntity(.plane([.any], classification: [.seat], minimumBounds: [1, 1]))

// When you use ray-casting
AnchorEntity(raycastResult: myRaycastResult)

// When you use ARAnchor with a given identifier
AnchorEntity(.anchor(identifier: uuid))

// Creates anchor entity on a basis of ARAnchor
AnchorEntity(anchor: arAnchor) 
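Typical usage of one of these cases (the asset name toy.usdz is an assumption): pin a model to the first suitable horizontal plane and register the anchor with the scene.

let planeAnchor = AnchorEntity(.plane([.horizontal],
                        classification: [.any],
                         minimumBounds: [0.2, 0.2]))

// Attach a model and add the anchor to the scene's anchors collection
planeAnchor.addChild(try! ModelEntity.loadModel(named: "toy.usdz"))
arView.scene.anchors.append(planeAnchor)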

And here are the only two AnchorEntity cases available in RealityKit 2.0 for macOS:

// Fixed world position in VR scene
AnchorEntity(.world(transform: mtx))

// Camera transform
AnchorEntity(.camera)
It’s also worth saying that you can use any subclass of ARAnchor for your AnchorEntity needs:

func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {

    guard let faceAnchor = anchors.first as? ARFaceAnchor 
    else { return }

    arView.session.add(anchor: faceAnchor)

    self.anchor = AnchorEntity(anchor: faceAnchor)
    anchor.addChild(model)
    arView.scene.anchors.append(self.anchor)
}

Reality Composer’s anchors:

At the moment (February 2022) Reality Composer has just 4 types of AnchorEntities:


// 1a
AnchorEntity(plane: .horizontal)

// 1b
AnchorEntity(plane: .vertical)

// 2
AnchorEntity(.image(group: "GroupName", name: "forModel"))

// 3
AnchorEntity(.face)
// 4
AnchorEntity(.object(group: "GroupName", name: "forObject"))

AR USD Schemas

And of course, I should say a few words about preliminary anchors. There are 3 preliminary anchoring types (as of February 2022) for those who prefer Python scripting for USDZ models – plane, image and face preliminary anchors. Look at this USDA snippet to find out how to implement the image-anchoring schema (the image file name is a placeholder):

def Cube "ImageAnchoredBox" (prepend apiSchemas = ["Preliminary_AnchoringAPI"])
{
    uniform token preliminary:anchoring:type = "image"
    rel preliminary:imageAnchoring:referenceImage = <ImageReference>

    def Preliminary_ReferenceImage "ImageReference"
    {
        uniform asset image = @image.png@
        uniform double physicalWidth = 45
    }
}

Visualizing AnchorEntity

Here’s an example of how to visualize an anchor in RealityKit (macOS version).

import AppKit
import RealityKit

class ViewController: NSViewController {
    @IBOutlet var arView: ARView!
    var model = Entity()
    let anchor = AnchorEntity()

    fileprivate func visualAnchor() -> Entity {

        let colors: [SimpleMaterial.Color] = [.red, .green, .blue]

        for index in 0...2 {
            // One thin box per axis: X (red), Y (green), Z (blue)
            let box: MeshResource = .generateBox(size: [0.20, 0.005, 0.005])
            let material = UnlitMaterial(color: colors[index])
            let entity = ModelEntity(mesh: box, materials: [material])

            if index == 0 {
                entity.position.x += 0.1

            } else if index == 1 {
                entity.transform = Transform(pitch: 0, yaw: 0, roll: .pi/2)
                entity.position.y += 0.1

            } else if index == 2 {
                entity.transform = Transform(pitch: 0, yaw: -.pi/2, roll: 0)
                entity.position.z += 0.1
            }
            self.model.addChild(entity)
        }
        model.scale *= 1.5
        return self.model
    }

    override func awakeFromNib() {
        anchor.addChild(self.visualAnchor())
        arView.scene.addAnchor(anchor)
    }
}


Answered By – Andy Jazz
