The LiDAR Scanner is Apple’s implementation of light detection and ranging (LiDAR) technology: a laser-based distance measurement method. There’s been a lot of talk about the capabilities of Apple's LiDAR Scanner — the company calls it a “breakthrough” technology that gives the iPad Pro 2020 “a more detailed understanding of a scene.” We decided to analyze the capabilities of the LiDAR Scanner and see if it delivers on that promise.
The aim of this research is to explore the capabilities of the LiDAR Scanner used in the iPad Pro 2020 and iPhone 12 Pro. We’re going to research the pros and cons of this solution, along with potential issues. We’ll also see if it enhances the overall performance of ARKit.
The project aimed to validate whether the LiDAR Scanner in the new iPad Pro significantly influences the quality of augmented reality (AR) applications.
A set of dedicated tests was needed to answer these questions; our findings are summarized below.
According to our tests, the iPad LiDAR Scanner significantly improves the reconstruction of the real world. It detects surfaces, walls, and surrounding objects quickly and accurately. What’s more, it also performs depth scanning of the area and allows a system-based implementation of occlusion effect algorithms.
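The system-based occlusion mentioned above is opt-in. A minimal sketch of how it can be enabled, assuming `arView` is an existing RealityKit `ARView` (the variable name is ours, not from the original tests):

```swift
import ARKit
import RealityKit

// Assumption: `arView` is an existing RealityKit ARView in your app.
let config = ARWorldTrackingConfiguration()
if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
    // LiDAR-based mesh reconstruction of the surroundings.
    config.sceneReconstruction = .mesh
}
// Let the reconstructed mesh occlude virtual content that sits
// behind real-world objects.
arView.environment.sceneUnderstanding.options.insert(.occlusion)
arView.session.run(config)
```

The `supportsSceneReconstruction` check matters because scene reconstruction is only available on LiDAR-equipped devices.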
Despite the advantages mentioned above, the LiDAR Scanner doesn’t address many of the biggest AR issues. The new sensor doesn’t make the scene more stable, and it doesn’t solve the floating and flickering problems either.
The performance in low light conditions also remains unimproved. What’s more, these issues can affect topology detection and generate spurious artifacts.
To sum up, the quick and accurate LiDAR sensor makes AR applications more immersive thanks to system-based occlusion. On the other hand, it’s not a universal solution that would solve all the main problems and take mobile augmented reality to another level.
Adding a new scanner to iPad devices is another step in the evolution rather than a revolution in itself.
Unfortunately, the performance of an iPad with the new LiDAR sensor in low lighting conditions is exactly the same as the performance of other devices. The iPad struggles with the same issues, including floating and flickering. I believe this is caused by the lack of extra feature points.
According to our tests, the complexity of the mesh is satisfactory.
ARKit generates around 30,000 vertices for a room full of details. In the worst-case scenario, the room topology consisted of 63,000 vertices.
According to the Unreal Engine documentation, the suggested vertex count for a single mesh in a mobile app is below 65,535 (1, 2). Keep in mind, however, that vertex count alone doesn’t determine whether the application will slow down. The complexity of the scene depends on many other factors, including lighting, physics, etc.
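The 65,535 figure corresponds to the largest index representable in a 16-bit index buffer (`UInt16.max`). A quick sketch of checking whether a detected mesh still fits that budget (the helper name is ours, for illustration only):

```swift
// 65,535 == UInt16.max: meshes at or below this vertex count can be
// indexed with a 16-bit index buffer, which mobile renderers prefer.
func fitsSixteenBitIndexBuffer(vertexCount: Int) -> Bool {
    return vertexCount <= Int(UInt16.max)  // 65_535
}
```

By this measure, the average room we scanned (≈30,000 vertices) sits comfortably within the budget, and even the worst case (63,000) squeezes in.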
What’s more, the other tests conducted in a large corridor and the ones that involved laser scanning of a large area weren't too challenging for the device.
When it comes to the complexity and the appearance of the mesh, it’s positively surprising. We compared the results to our 6D.ai test from last year, and LiDAR turns out to be much better.
LiDAR and its mesh algorithm do a great job with connected triangles, which is not true for its competitor.
Yes, sure. Due to time constraints, I haven’t done that in this project. According to the documentation, the detected mesh is exposed as ARMeshGeometry, accessed through an ARMeshAnchor. Of course, these anchors are available in the structure of the ARKit session. From there, you can move on to write the export.
ARMeshAnchor — includes the geometry property, of type ARMeshGeometry
Geometry — includes the vertex buffer (vertices, an ARGeometrySource) and more information about the structure
Faces — an ARGeometryElement with the triangle indices of the detected area
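A minimal sketch of what such an export could look like. The assumption here is that the vertex positions and triangle indices have already been copied out of the ARGeometrySource/ARGeometryElement buffers; `SimpleMesh` and `objString` are hypothetical helpers for illustration, not ARKit types:

```swift
// Hypothetical container for data already copied out of ARMeshGeometry.
struct SimpleMesh {
    var vertices: [(Float, Float, Float)]  // positions from the vertex buffer
    var faces: [(Int, Int, Int)]           // triangle indices, 0-based
}

// Serialize the mesh to Wavefront OBJ text, a simple interchange format.
func objString(from mesh: SimpleMesh) -> String {
    var out = ""
    for v in mesh.vertices {
        out += "v \(v.0) \(v.1) \(v.2)\n"
    }
    for f in mesh.faces {
        // OBJ face indices are 1-based.
        out += "f \(f.0 + 1) \(f.1 + 1) \(f.2 + 1)\n"
    }
    return out
}
```

The resulting string can be written to a file and opened in any 3D tool that reads OBJ.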
The answer is no, with a small “but.”
All properties representing the geometry are read-only; they expose only getters. Based on that, we assume the creators don’t want us to modify this mesh.
Additionally, ARKit and the LiDAR Scanner keep updating the mesh in subsequent frames, which would overwrite our changes anyway.
However, it’s not a total limitation. We can still create a copy of the mesh or disable updates by turning off the AR session and leaving the detected mesh in working memory. There are still options to customize the detected topology.
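A sketch of the two options mentioned above, assuming `arView` is an existing RealityKit `ARView` with scene reconstruction running (the variable name is ours):

```swift
import ARKit
import RealityKit

// Assumption: `arView` is an existing ARView with scene reconstruction on.

// Option 1: pause the session. Mesh updates stop, and the detected
// ARMeshAnchors stay in working memory.
arView.session.pause()

// Option 2: keep tracking, but stop reconstruction updates by re-running
// the session with scene reconstruction disabled. Existing anchors are
// kept because we don't pass .removeExistingAnchors.
let config = ARWorldTrackingConfiguration()
config.sceneReconstruction = []
arView.session.run(config)
```

From that frozen state, you can copy the geometry out of the anchors and customize it in your own data structures.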
ARKit optimizes the detected mesh automatically. This is most visible on flat walls. On the first scan, they’re built of many small triangles. With every subsequent scan, ARKit merges the small triangles into bigger ones.
When it comes to the junctions, there weren’t any major issues. I haven’t spotted any holes or bigger artifacts. According to my observations, the algorithm is doing really well.
Unfortunately, I wasn’t able to observe different levels of detail. It would be great if the mesh became less detailed as the user moves further away from an object. In every test, no matter the user's location, the detected mesh didn’t change its level of detail.
It depends on how we define interaction.
In general, the detected mesh mirrors or approximates the reality around the user. The user may interact with virtual objects in the scene — e.g., physics-enabled objects can collide with and come to rest on the detected mesh.
When it comes to interacting with the mesh through, e.g., our hands or physical objects, it depends on the session configuration.
Normally, the LiDAR sensor detects people and moving objects and generates a mesh for them. Unfortunately, these meshes are not very precise, and the delay in updates is very visible (even up to 1–2 seconds).
Freshly detected elements can influence virtual objects, although they remain part of a larger whole. Because of this, it’s hard to create an effect where a real bowling ball hits virtual pins: the ball is considered part of the whole scene mesh, not a separate object with its own physics. Another difficulty in this case is the visible delay.
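For completeness, this is roughly how physics against the reconstructed mesh is switched on, assuming `arView` is an existing RealityKit `ARView` (the variable name is ours):

```swift
import ARKit
import RealityKit

// Assumption: `arView` is an existing RealityKit ARView.
let config = ARWorldTrackingConfiguration()
config.sceneReconstruction = .mesh

// RealityKit generates static collision shapes and physics bodies from the
// scene mesh, so physics-enabled virtual objects can rest on and collide
// with real surfaces. The limitations described above still apply: moving
// real objects become part of this one static mesh, with a visible delay.
arView.environment.sceneUnderstanding.options.formUnion([.physics, .collision])
arView.session.run(config)
```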
When it comes to hands being visible in the field of view, you can disable including them in the mesh. All you need to do is enable people segmentation in the AR session.
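A minimal sketch of that configuration; with people segmentation enabled, detected bodies (e.g., your hand) are excluded from the scene mesh:

```swift
import ARKit

let config = ARWorldTrackingConfiguration()
// People segmentation is only supported on some devices, so check first.
if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
    // Segment people out of the camera frame (with depth information),
    // so hands and bodies are not baked into the reconstructed mesh.
    config.frameSemantics.insert(.personSegmentationWithDepth)
}
```

There is also a plain `.personSegmentation` semantic without depth; which one fits depends on whether you also want people-aware occlusion.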
As I mentioned in the last answer, it depends on the AR session configuration. If the session has people segmentation enabled, LiDAR will try to omit elements of the human body.
I used the word try on purpose. Despite very good results, you can tell that ARKit updates the covered part of the environment. This may lead to small artifacts. During testing, they were negligible, and they’re unlikely to affect the experience. However, it’s good to keep that in mind when working on a project with complex physics.
If people segmentation is disabled, LiDAR will scan the hand and generate a mesh for it. The scan won’t be very precise, and there will be a visible delay.
I haven’t observed a significant advantage of an iPad with LiDAR over other devices. The results were very similar. I think you could say that LiDAR doesn’t affect the detection of flat surfaces.
Tests in this area were quite complicated and ambiguous. Based on the number of detected feature points and the fact that it’s quite similar to results from, e.g., iPhone SE 2020, I believe LiDAR doesn’t affect the detection of 3D objects.
This is probably caused by the lack of extra feature points from the LiDAR camera, which was also confirmed by our other test groups and features.
Using the LiDAR Scanner did not solve the main issues with mobile AR applications. In low lighting conditions, the application struggled with floating, flickering, and session instability when focusing on a plain surface for a longer period of time. All of the phenomena mentioned above were reproducible.
Additionally, in the case of losing the stability of an AR session, LiDAR had some extra problems. For instance, it was likely to generate the whole topology of the scene once again, or place virtual artifacts behind a wall.
In the future, it would be good to test the recognition of the environment together with the classification of the detected structures. With scene reconstruction, ARKit can estimate whether a surface is a floor, a table, or maybe a piece of furniture to sit on. This is useful in application logic, e.g., to place an object on a given surface.
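Such a test would start from the classified variant of scene reconstruction. A sketch, assuming a LiDAR-equipped device (the `describe` helper is ours, for illustration):

```swift
import ARKit

let config = ARWorldTrackingConfiguration()
if ARWorldTrackingConfiguration.supportsSceneReconstruction(.meshWithClassification) {
    // Each face of the reconstructed mesh then carries a classification.
    config.sceneReconstruction = .meshWithClassification
}

// Hypothetical helper turning a face's ARMeshClassification into a label.
func describe(_ classification: ARMeshClassification) -> String {
    switch classification {
    case .floor:   return "floor"
    case .table:   return "table"
    case .seat:    return "a piece of furniture to sit on"
    case .wall:    return "wall"
    case .ceiling: return "ceiling"
    case .door:    return "door"
    case .window:  return "window"
    default:       return "unclassified"
    }
}
```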
I’ve omitted this aspect in my research, as I wasn’t able to rate it accordingly. It would be good to compare this aspect with another solution, e.g., Unity MARS.
My main recommendation for commercial projects is not to treat the LiDAR Scanner as a one-size-fits-all solution. The first announcements were very optimistic and described the iPad LiDAR Scanner as a huge revolution. Unfortunately, practice proves otherwise. The scanner itself is used mostly for environment recognition, without affecting session stability. As of today, this is LiDAR's only specific role in mobile AR.
New to augmented reality? Read our in-depth guide to learn more about augmented reality and how it works.