Answering Project Questions
Does LiDAR reduce the floating effect in virtual objects? Is LiDAR able to improve the stability of the scene in low lighting conditions?
Unfortunately, the performance of an iPad with the new LiDAR sensor in low lighting conditions is exactly the same as that of other devices. The iPad struggles with the same issues, including floating and flickering. I believe this is caused by the lack of extra feature points in low light.
How complex is the SDK mesh?
According to our tests, the complexity of the mesh is satisfactory.
ARKit generates 30 thousand vertices on average for a room full of details. In the worst-case scenario, the room topology consisted of 63 thousand vertices.
According to the Unreal Engine documentation, the suggested number of vertices in a single mobile app mesh is less than 65,535 (1, 2). Keep in mind, however, that a high vertex count alone doesn’t mean the application will slow down. The complexity of the scene depends on many other factors, including lighting, physics, etc.
According to industry experts from discussion boards, we can assume that modern mobile devices are able to process more complicated scenes (3, 4, 5, 6).
What’s more, neither the tests conducted in a large corridor nor the ones that involved scanning a large area were too challenging for the device.
When it comes to the complexity and appearance of the mesh, the results are a pleasant surprise. We compared them to our 6D.ai test from last year, and LiDAR turns out to be much better.
See our 6D.ai test number one and 6D.ai test number two
LiDAR and its meshing algorithm do a great job of producing connected triangles, which can’t be said of its competitor.
Can we export the selected mesh/area?
Yes, sure. Due to time constraints, I haven’t done that in the project. According to the documentation, the detected mesh is exposed as ARMeshGeometry, accessed through ARMeshAnchor. Of course, these anchors are available in the structure of the ARKit session. From there, you can move on to writing the export.
ARMeshAnchor — includes the geometry property of type ARMeshGeometry
geometry — includes the vertex buffer (ARGeometrySource) and more information about the structure
faces — an index buffer (ARGeometryElement) describing the detected surface triangles
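To give you an idea of what the export could look like, here’s a minimal sketch that walks the ARMeshAnchor hierarchy described above and writes the detected geometry to the OBJ text format. All the types and properties come from the public ARKit API; the function name exportOBJ(from:) is my own, and I haven’t run this on a device as part of the project.

```swift
import ARKit

// Hypothetical helper: dump all detected mesh anchors from a frame as OBJ text.
// Assumes the vertex format is float3, which is what ARKit currently provides.
func exportOBJ(from frame: ARFrame) -> String {
    var obj = ""
    var vertexOffset = 0
    for anchor in frame.anchors.compactMap({ $0 as? ARMeshAnchor }) {
        let geometry = anchor.geometry
        let vertices = geometry.vertices   // ARGeometrySource
        let faces = geometry.faces         // ARGeometryElement

        // Read each vertex out of the underlying MTLBuffer and transform it
        // from anchor-local coordinates to world space.
        for i in 0..<vertices.count {
            let pointer = vertices.buffer.contents()
                .advanced(by: vertices.offset + vertices.stride * i)
            let vertex = pointer.assumingMemoryBound(to: SIMD3<Float>.self).pointee
            let world = anchor.transform * SIMD4<Float>(vertex, 1)
            obj += "v \(world.x) \(world.y) \(world.z)\n"
        }

        // Faces are tuples of indices into the vertex buffer; OBJ is 1-based.
        let indexPointer = faces.buffer.contents()
        for f in 0..<faces.count {
            var corners = [String]()
            for c in 0..<faces.indexCountPerPrimitive {
                let byteOffset = (f * faces.indexCountPerPrimitive + c) * faces.bytesPerIndex
                let index = indexPointer.advanced(by: byteOffset)
                    .assumingMemoryBound(to: UInt32.self).pointee
                corners.append("\(Int(index) + 1 + vertexOffset)")
            }
            obj += "f " + corners.joined(separator: " ") + "\n"
        }
        vertexOffset += vertices.count
    }
    return obj
}
```

From there, the resulting string can be written to a file and opened in any 3D tool that reads OBJ.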
Can we manipulate the mesh?
The answer is no, with a small “but”.
All properties representing the geometry are read-only, and they only have the getter. Based on that, we assume that the creators don’t want us to modify this mesh.
Additionally, ARKit and the LiDAR Scanner keep updating the mesh in subsequent frames, which might overwrite our changes.
However, it’s not a total limitation. We can still create a copy of the mesh or disable updates by turning off the AR session and leaving the detected mesh in working memory. There are still options to customize the detected topology.
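As a sketch of the second option, the snippet below snapshots the detected anchors and then pauses the session so ARKit stops updating the topology. The function name freezeDetectedMesh(of:) is hypothetical; the ARSession calls are standard ARKit API, though I haven’t verified this exact flow on a device.

```swift
import ARKit

// Hypothetical helper: keep the detected mesh in working memory and
// stop ARKit from overwriting it by pausing the session.
func freezeDetectedMesh(of session: ARSession) -> [ARMeshAnchor] {
    // Grab the mesh anchors captured in the most recent frame.
    let meshAnchors = session.currentFrame?.anchors
        .compactMap { $0 as? ARMeshAnchor } ?? []

    // With the session paused, no further mesh updates arrive,
    // so the snapshot above stays stable for custom processing.
    session.pause()
    return meshAnchors
}
```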
How can we create different levels of details and connect them with each other?
ARKit optimizes the detected mesh automatically. This is most visible when you look at flat walls. On the first scan, they will be built of many small triangles. With every subsequent scan, ARKit keeps merging the small triangles into bigger ones.
When it comes to the junctions, there weren’t any major issues. I haven’t spotted any holes or larger artifacts. From my observations, the algorithm is doing really well.
Unfortunately, I wasn’t able to render different levels of detail. It would be great if the mesh could become less detailed as the user moves farther from the object. In every test, no matter the user’s location, the detected mesh didn’t change its level of detail.
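For context, this is all the control the API gives you: scene reconstruction is switched on as a whole, and the merging described above happens internally. A minimal configuration sketch (the function name makeMeshConfiguration() is my own):

```swift
import ARKit

// Enable LiDAR scene reconstruction. There is no parameter for mesh
// resolution or level of detail — ARKit manages triangle merging itself.
func makeMeshConfiguration() -> ARWorldTrackingConfiguration {
    let configuration = ARWorldTrackingConfiguration()

    // Guard against devices without the LiDAR Scanner.
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.meshWithClassification) {
        configuration.sceneReconstruction = .meshWithClassification
    }
    return configuration
}
```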
Can we interact with the detected mesh?
It depends on how we define interaction.
In general, the detected mesh mirrors/approximates the reality around the user. The user may interact with virtual objects in the scene — e.g., block the objects with enabled physics.
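With RealityKit, this kind of interaction is a matter of scene-understanding options. The sketch below (assuming an existing ARView; the function name enableMeshInteraction(on:) is hypothetical) makes the detected mesh occlude virtual content and act as a static collider for physics-enabled entities.

```swift
import ARKit
import RealityKit

// Hypothetical helper: let virtual objects rest on, collide with,
// and be hidden behind the LiDAR-detected mesh.
func enableMeshInteraction(on arView: ARView) {
    let configuration = ARWorldTrackingConfiguration()
    configuration.sceneReconstruction = .mesh
    arView.session.run(configuration)

    // The reconstructed mesh now occludes virtual content…
    arView.environment.sceneUnderstanding.options.insert(.occlusion)
    // …and behaves as a static physics body for entities with physics enabled.
    arView.environment.sceneUnderstanding.options.insert(.physics)
}
```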
When it comes to interacting with the mesh through, e.g., our hands or physical objects, it depends on the session configuration.
Normally, the LiDAR sensor detects people and moving objects and generates a mesh for them. Unfortunately, these meshes are not very precise, and the delay in updates is clearly visible (even up to 1-2 seconds).
Freshly detected elements can influence virtual objects, although they remain part of the larger environment mesh rather than separate bodies. Because of this, it’s hard to create an effect where a real bowling ball hits virtual pins: the ball is considered part of the environment, not a separate object with enabled physics. Another difficulty here is the visible delay.
When it comes to hands being visible in the field of view, you can exclude them from the mesh. All you need to do is enable people segmentation in the AR session.
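In code, that means adding a person-segmentation frame semantic to the configuration. A minimal sketch (the function name makePeopleAwareConfiguration() is my own):

```swift
import ARKit

// With person segmentation enabled, ARKit tries to omit detected people
// (e.g. your hands) from the reconstructed environment mesh.
func makePeopleAwareConfiguration() -> ARWorldTrackingConfiguration {
    let configuration = ARWorldTrackingConfiguration()
    configuration.sceneReconstruction = .mesh

    // Not every device supports this frame semantic, so check first.
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
        configuration.frameSemantics.insert(.personSegmentationWithDepth)
    }
    return configuration
}
```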
How does LiDAR react to an outstretched hand? Is it going to consider it as a part of the environment and create a mesh for the hand?
As I mentioned in the last answer, it depends on the AR session configuration. If the session has people segmentation enabled, LiDAR will try to omit elements of the human body.
I used the word “try” on purpose. Despite very good results, you can tell that ARKit updates the covered part of the environment. This may lead to small artifacts. During testing, they were quite insignificant, and they’re unlikely to affect the experience. However, it’s good to keep that in mind when working on a project with complex physics.
If people segmentation is disabled, LiDAR will scan the hand and generate a mesh for it. The scan won’t be very precise, and there will be a visible delay.
Does the LiDAR Scanner influence the detection of flat, printed symbols?
I haven’t observed a significant advantage of an iPad with LiDAR over other devices. The results were very similar. You could say that LiDAR doesn’t affect the detection of flat, printed symbols.
Does LiDAR influence the detection of 3D symbols?
Tests in this area were quite complicated and their results ambiguous. Based on the number of detected feature points, which is quite similar to the results from, e.g., the iPhone SE 2020, I believe LiDAR doesn’t affect the detection of 3D objects.
This is probably caused by the lack of extra feature points from the LiDAR camera, which tests of other groups and other features also confirmed.