
LiDAR Research — Analyzing Apple’s LiDAR Scanner


8.10.2020


Aim of the Research

The aim of this research is to explore the capabilities of the LiDAR Scanner used in the iPad Pro 2020 and iPhone 12 Pro. We’ll weigh the pros and cons of this solution, along with potential issues, and check whether it enhances the overall performance of ARKit.

Project Assumptions

The project was designed to validate whether the LiDAR Scanner in the new iPad Pro significantly influences the quality of augmented reality (AR) applications.

A set of dedicated tests is needed to answer the following questions:

  • Does LiDAR reduce the floating effect in virtual objects? Is LiDAR able to improve the stability of the scene in low lighting conditions?
  • How complex is the SDK mesh?
  • Can we export the selected mesh/area?
  • Can we manipulate the mesh?
  • How can we create different levels of detail and connect them together?
  • Can we interact with the detected mesh?
  • How does LiDAR react to an outstretched hand? Is it going to consider it as a part of the environment and create a mesh for the hand?
  • Does LiDAR influence the detection of flat, printed symbols?
  • Does LiDAR influence the detection of 3D symbols?

Results of Our LiDAR Research

According to our tests, the iPad LiDAR Scanner significantly improves the reconstruction of the real world. It detects surfaces, walls, and surrounding objects quickly and accurately. It also performs depth scanning of the area, enabling a system-level implementation of occlusion effects.
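To picture what that system-level support looks like in code, here’s a minimal sketch of enabling scene reconstruction and occlusion with RealityKit. The function name is my own; the APIs are standard ARKit/RealityKit.

import ARKit
import RealityKit

// A minimal sketch: turn on LiDAR scene reconstruction and let the
// resulting mesh occlude virtual content.
func enableSceneReconstruction(on arView: ARView) {
    let configuration = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        // Ask ARKit to build a mesh of the environment from LiDAR data.
        configuration.sceneReconstruction = .mesh
    }
    arView.session.run(configuration)

    // The system-based occlusion mentioned above: reconstructed
    // real-world geometry hides virtual objects behind it.
    arView.environment.sceneUnderstanding.options.insert(.occlusion)
}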

Despite the advantages mentioned above, the LiDAR Scanner doesn’t address many of the biggest AR issues. The new sensor doesn’t make the scene more stable, and it doesn’t solve the floating and flickering problems either.

The performance in low light conditions also remains unimproved. What’s more, these issues can affect topology detection and generate irrelevant artifacts.

To sum up, the quick and accurate LiDAR sensor makes AR applications more immersive thanks to system-based occlusion. On the other hand, it’s not a universal solution that would solve all the main problems and take mobile augmented reality to another level.

Adding a new scanner to iPad devices is another step in the evolution rather than a revolution in itself.


Answering Project Questions

Does LiDAR reduce the floating effect in virtual objects? Is LiDAR able to improve the stability of the scene in low lighting conditions?

Unfortunately, the performance of an iPad with the new LiDAR sensor in low lighting conditions is exactly the same as the performance of other devices. The iPad deals with the same issues, including floating and flickering. I believe this is caused by the lack of extra feature points.

How complex is the SDK mesh?

According to our tests, the complexity of the mesh is satisfactory.

ARKit generates around 30,000 vertices on average for a room full of details. In the worst-case scenario, the room topology consisted of 63,000 vertices.

According to the Unreal Engine documentation, the suggested vertex count for a single mesh in a mobile app is below 65,535 (1, 2). Keep in mind, however, that a high vertex count alone doesn’t mean the application will slow down. The complexity of the scene depends on many other factors, including lighting, physics, etc.

According to industry experts from discussion boards, we can assume that modern mobile devices are able to process more complicated scenes (3, 4, 5, 6).
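For reference, vertex counts like the ones quoted above can be measured by summing the vertex buffers of every mesh anchor in the current frame. A sketch (totalMeshVertices is a hypothetical helper; it assumes a running session with scene reconstruction enabled):

import ARKit

// A sketch: total vertex count across all mesh anchors in a frame.
func totalMeshVertices(in frame: ARFrame) -> Int {
    frame.anchors
        .compactMap { $0 as? ARMeshAnchor }
        .reduce(0) { $0 + $1.geometry.vertices.count }
}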

What’s more, the other tests, conducted in a large corridor and involving laser scanning of a large area, weren’t too challenging for the device either.

When it comes to the complexity and appearance of the mesh, it’s a pleasant surprise. We compared the results with our 6D.ai test from last year, and LiDAR turns out to be much better.

See our 6D.ai test number one and 6D.ai test number two.

LiDAR and its mesh algorithm do a great job with connected triangles, which is not true for its competitor.

Can we export the selected mesh/area?

Yes. Due to time constraints, I haven’t done that in this project, but the documentation makes the path clear: the detected mesh is exposed as ARMeshGeometry, accessed through ARMeshAnchor. These anchors are available in the structure of the ARKit session, and from there you can write the export (see the sketch after the list below).

Data structures:

ARMeshAnchor — exposes a geometry property of type ARMeshGeometry

geometry — contains the vertex buffer (an ARGeometrySource) plus normals and more information about the structure

faces — an ARGeometryElement, the index buffer describing the triangles of the detected area
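Based on these structures, an export could look roughly like the sketch below. It walks every ARMeshAnchor in a frame and serializes the geometry to Wavefront OBJ text; the function name and the OBJ target are my own choices, not part of the SDK.

import ARKit

// A sketch: serialize all mesh anchors in a frame to OBJ text.
// Assumes a running session with scene reconstruction enabled.
func exportOBJ(from frame: ARFrame) -> String {
    var obj = ""
    var vertexOffset = 0

    for anchor in frame.anchors.compactMap({ $0 as? ARMeshAnchor }) {
        let vertices = anchor.geometry.vertices   // ARGeometrySource
        let faces = anchor.geometry.faces         // ARGeometryElement

        // Read each vertex from the Metal buffer and move it from
        // anchor-local space into world space.
        for i in 0..<vertices.count {
            let pointer = vertices.buffer.contents()
                .advanced(by: vertices.offset + vertices.stride * i)
            let v = pointer.assumingMemoryBound(to: (Float, Float, Float).self).pointee
            let world = anchor.transform * SIMD4<Float>(v.0, v.1, v.2, 1)
            obj += "v \(world.x) \(world.y) \(world.z)\n"
        }

        // Faces are triangles; ARKit currently stores 32-bit indices.
        let indexPointer = faces.buffer.contents()
        for face in 0..<faces.count {
            var corners: [String] = []
            for corner in 0..<faces.indexCountPerPrimitive {
                let byteOffset = (face * faces.indexCountPerPrimitive + corner) * faces.bytesPerIndex
                let index = indexPointer.advanced(by: byteOffset)
                    .assumingMemoryBound(to: UInt32.self).pointee
                // OBJ indices are global and 1-based.
                corners.append(String(Int(index) + vertexOffset + 1))
            }
            obj += "f " + corners.joined(separator: " ") + "\n"
        }
        vertexOffset += vertices.count
    }
    return obj
}

The resulting string can be written to a file and opened in any 3D tool for inspection.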

Can we manipulate the mesh?

The answer is no, with a small caveat.

All properties representing the geometry are read-only and expose only getters. Based on that, we assume the creators don’t want us to modify this mesh.

Additionally, ARKit and the LiDAR Scanner keep updating the mesh in subsequent frames, which would likely overwrite our changes.

However, it’s not a total limitation. We can still create a copy of the mesh, or disable updates by pausing the AR session and keeping the detected mesh in working memory. There are still options to customize the detected topology, as the sketch below shows.
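A sketch of the second option: grab the current mesh anchors, then pause the session so ARKit stops refining them (freezeReconstruction is a hypothetical helper name):

import ARKit

// A sketch: keep the last detected mesh and stop further updates.
func freezeReconstruction(of session: ARSession) -> [ARMeshAnchor] {
    let anchors = session.currentFrame?.anchors.compactMap { $0 as? ARMeshAnchor } ?? []
    session.pause() // no more mesh updates will arrive after this
    return anchors
}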

How can we create different levels of detail and connect them together?

ARKit optimizes the detected mesh automatically. This is most visible on flat walls: on the first scan, they’re built of many small triangles, and with every subsequent scan, ARKit merges the small triangles into bigger ones.

When it comes to the junctions, there weren’t any major issues. I haven’t spotted any holes or bigger artifacts. According to my observations, the algorithm is doing really well.

Unfortunately, I wasn’t able to render different levels of detail. It would be great if the mesh could become less detailed as the user moves farther from an object. In every test, no matter the user’s location, the detected mesh never changed its level of detail.

Can we interact with the detected mesh?

It depends on how we define interaction.

In general, the detected mesh mirrors and approximates the reality around the user. The user may interact with virtual objects in the scene; for example, physics-enabled virtual objects can be blocked by the detected geometry.
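In RealityKit, this kind of interaction boils down to two scene-understanding options (a sketch; the function name is my own):

import RealityKit

// A sketch: let physics-enabled entities collide with the detected mesh.
func enableMeshPhysics(on arView: ARView) {
    arView.environment.sceneUnderstanding.options.insert(.physics)
    arView.environment.sceneUnderstanding.options.insert(.collision)
}

Any entity with collision and physics components will then rest on, or bounce off, the surfaces LiDAR has reconstructed.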

When it comes to interacting with the mesh through, e.g., our hand or physical objects, it depends on the session configuration.

Normally, the LiDAR sensor detects people and moving objects and generates a mesh for them. Unfortunately, these meshes are not very precise, and the delay in updates is very noticeable (even up to 1-2 seconds).

Freshly detected elements can influence virtual objects, but they remain part of a larger whole. Because of this, it’s hard to create an effect where a real bowling ball hits virtual pins: the ball is considered part of the whole environment mesh, not a separate object with enabled physics. Another difficulty is the visible delay.

When it comes to hands being visible in the field of view, you can exclude them from the mesh. All you need to do is enable people segmentation in the AR session, as shown below.
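The configuration change is small on top of the usual world-tracking setup. A sketch (the helper name is my own; device support should be checked first, as here):

import ARKit

// A sketch: keep people (and hands) out of the reconstructed mesh.
func runWithPersonSegmentation(on session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        configuration.sceneReconstruction = .mesh
    }
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
        configuration.frameSemantics.insert(.personSegmentationWithDepth)
    }
    session.run(configuration)
}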

How does LiDAR react to an outstretched hand? Is it going to consider it as a part of the environment and create a mesh for the hand?

As I mentioned in the last answer, it depends on the AR session configuration. If the session has people segmentation enabled, LiDAR will try to omit elements of the human body.

I used the word try on purpose. Despite very good results, you can tell that ARKit updates the part of the environment covered by the hand, which may lead to small artifacts. During testing, they were quite irrelevant and unlikely to affect the experience. However, it’s good to keep that in mind when working on a project with complex physics.

If people segmentation is disabled, LiDAR will scan the hand and generate a mesh for it. The scan won’t be very precise, and there will be a visible delay.

Does the LiDAR Scanner influence the detection of flat, printed symbols?

I haven’t observed a significant advantage of an iPad with LiDAR over other devices. The results were very similar, so you could say that LiDAR doesn’t affect the detection of flat, printed symbols.

Does LiDAR influence the detection of 3D symbols?

Tests in this area were quite complicated and ambiguous. The number of detected feature points was quite similar to the results from, e.g., the iPhone SE 2020, so I believe LiDAR doesn’t affect the detection of 3D objects.

This is probably caused by the lack of extra feature points from the LiDAR camera, which the tests of the other question groups also confirmed.

Potential Extensions

In the future, it would be good to test the recognition of the environment together with the classification of the detected structures. The release of scene reconstruction means ARKit can estimate whether a surface is a floor, a table, or maybe a piece of furniture to sit on. This is useful in application logic, e.g., to place an object on a given surface.
I’ve omitted this aspect in my research, as I wasn’t able to evaluate it properly. It would be good to compare it with another solution, e.g., Unity MARS.
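For anyone picking this up: classification is requested with the .meshWithClassification option, and each face of the mesh then carries an ARMeshClassification value such as .floor, .table, or .seat. A sketch of reading it (the helper is hypothetical):

import ARKit

// Requested at session setup with:
// configuration.sceneReconstruction = .meshWithClassification

// A sketch: read the classification of a single face of the mesh.
// ARKit stores one UInt8 raw value per face in the classification source.
func classification(ofFace index: Int, in geometry: ARMeshGeometry) -> ARMeshClassification {
    guard let source = geometry.classification else { return .none }
    let pointer = source.buffer.contents()
        .advanced(by: source.offset + source.stride * index)
    let rawValue = pointer.assumingMemoryBound(to: UInt8.self).pointee
    return ARMeshClassification(rawValue: Int(rawValue)) ?? .none
}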

Recommendations for Commercial Projects

My main recommendation for commercial projects is not to treat the LiDAR Scanner as a one-size-fits-all solution. The first announcements were very optimistic and described Apple’s LiDAR scanner as a huge revolution. Unfortunately, practice proves otherwise. The scanner is used mostly for environment recognition, without affecting session stability. As of today, that is LiDAR’s only specific role in mobile AR.
