The healthcare industry is projected to be one of the greatest beneficiaries of augmented reality technology. Learn how AR is already reshaping clinical practice.
Over the past century, clinical practice has undergone an almost unthinkable transformation. Just over 150 years ago, doctors didn’t even know they could transmit germs on their hands.
On maternity wards of the past, doctors would move from one female patient to the next, examining each without washing their hands in between, much less wearing gloves.
Needless to say, maternal death ran rampant.
So to think we can now use augmented reality during reconstructive surgery to locate bone fractures and blood vessels is quite an astonishing advancement.
Check out what else augmented reality makes possible in healthcare.
The potential of AR is visible across industries. From education to manufacturing to automotive, augmented reality fills gaps in workflows, offering tangible opportunities for improvement.
For example, AR can be used to eliminate inefficiencies in manufacturing — the constant loss of focus and the time engineers spend referring to paper instructions add up to considerable losses over time.
If you've ever struggled with a paper instruction for a shiny new IKEA bookshelf, quality assurance in the automotive industry is like the IKEA situation times 1,782.4.
For extremely precise and complex tasks, the help of computers is invaluable. And the better the augmented reality technology gets, the more advanced its use cases in clinical practice.
In fact, augmented reality in healthcare alone is projected to add a $47.9 billion boost to the global economy by 2022.
Clinical practice is evolving at light speed, but the sector still has multiple problem areas that can benefit from technological solutions.
Is AR an entirely new concept to you? Read our AR guide to learn the basics of augmented reality.
Reconstructive surgery calls for high precision to yield the expected results and improve a patient's recovery time.
Augmented reality goggles can be used to feed information from CT scans and MRI images directly into a surgeon’s field of vision. This way, the surgeon knows where blood vessels and bone fractures are and can increase the precision of the incision during reconstructive surgery.
First, surgeons perform diagnostic imaging on the patient. Then, the data from CT scans, MRI, and X-rays is digitized and transformed into a 3D model, which shows the location of soft tissues, blood vessels, and bones.
This rendering is then fed into the AR device. During the surgery, the 3D rendering is mapped on a patient’s body, providing the surgeon with critical information.
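The pipeline above (scan, segment, reconstruct, overlay) can be sketched in a few lines. The following is a purely illustrative Python sketch, not medical software: it only shows the first step, picking out bone voxels from a CT volume by a Hounsfield unit threshold, and the threshold value and function names are our own assumptions.

```python
import numpy as np

# Illustrative only: segment bone voxels from a CT volume by a
# Hounsfield unit (HU) threshold, the first step before a surface-
# reconstruction algorithm turns the voxels into a 3D mesh.

BONE_HU_THRESHOLD = 300  # assumed: cortical bone sits well above ~300 HU

def segment_bone(ct_volume: np.ndarray, threshold: float = BONE_HU_THRESHOLD) -> np.ndarray:
    """Return a boolean mask marking voxels that likely belong to bone."""
    return ct_volume > threshold

def bone_voxel_coordinates(mask: np.ndarray) -> np.ndarray:
    """Return (N, 3) voxel coordinates of the segmented region,
    which a meshing step would convert into the 3D model."""
    return np.argwhere(mask)

# Tiny synthetic "scan": soft tissue (~40 HU) with a block of bone (~1000 HU).
volume = np.full((8, 8, 8), 40.0)
volume[2:5, 2:5, 2:5] = 1000.0

mask = segment_bone(volume)
coords = bone_voxel_coordinates(mask)
print(coords.shape[0])  # 27 bone voxels (a 3 x 3 x 3 block)
```

In a real clinic, dedicated imaging software performs this segmentation and meshing, which is exactly the time-consuming step the article mentions.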
Thanks to this approach, surgeons don’t have to look back and forth between the patient and the images, or rely on audible Doppler ultrasound, which is currently the prevalent method during reconstructive surgeries.
The technology is far from mature, with many challenges waiting in line before mainstream adoption is possible.
For example, transforming information from CT scans and MRI into 3D models is time-consuming, so the ER won’t benefit from AR, at least for now.
Neurosurgeons at Johns Hopkins used augmented reality during spinal fusion surgery on June 8, 2020. Two days later, surgeons relied on an AR headset during the removal of a cancerous tumor from the spine of another patient. Both procedures were conducted using a pilot augmented reality headset from Augmedics.
In 2021, Dr. Harvinder Sandhu at Stamford Health performed a successful spinal surgery using AR goggles. The AR technology uses data from MRIs to overlay critical tissue around a patient's spine. Provided with detailed data projected directly on the retina, the doctor is able to perform more precise surgeries that speed up recovery and decrease the likelihood of infection.
Global skill gaps, pandemic-related border closures, and travel limitations stress the need for remote assistance solutions. Discover the Remote Assist App.
For many of us, technology is a staple commodity. It permeates a growing number of households, and factories increasingly rely on highly specialized machinery. And while technology has catapulted the evolution of our civilization, it’s prone to breaking.
But technicians who can fix problems may not always be available, and on-site service is often costly. Pandemic-related border closures and travel restrictions also limit the availability of specialists. In this context, remote assist solutions emerge as reliable support across industries.
In the last 30 years, the adoption of specific technologies in the US alone has skyrocketed.
In the business world, technology has spurred production and manufacturing on a palpable scale. According to the World Economic Forum, OECD producers that adopted technology “have grown at a rate of 3.5%, compared with an anaemic 0.5% for the laggards.”
Even small manufacturing facilities have seen increased diffusion of technology, leading to a more productive workforce and higher output.
Developing countries have also observed an increase in technological innovation, albeit on a smaller scale.
Global technological advancement and the ubiquitous presence of technology across sectors call for engineers and technicians who can not only drive that innovation but also maintain it.
Yet there’s an alarming shortage of skilled engineers and technicians who can carry out innovative infrastructure projects. On the other hand, less qualified personnel can complete tasks of varying complexity when given professional assistance.
Remote assistance might therefore become an affordable and scalable solution that bridges the gap between the need for support and the lack of on-site skill.
The app has two modes: consultant and user.
When a user connects, they choose a problem from a list of available topics.
The user then waits for a connection with a technician.
After logging in, the technician sees a list of active user sessions awaiting help for a given topic.
Once connected with the user, the technician has access to the user’s rear camera.
The technician communicates with the user.
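The two-mode flow described above, where users queue up under a topic and technicians pick sessions from a waiting list, can be modeled in a few lines. This is a minimal sketch with names of our own invention, not the actual app's API.

```python
from collections import defaultdict
from dataclasses import dataclass
from typing import Optional

# Minimal sketch (hypothetical names, not the real app's API) of the
# two modes: users wait under a topic; a technician sees and accepts
# waiting sessions for that topic.

@dataclass
class Session:
    user_id: str
    topic: str
    technician_id: Optional[str] = None  # None while the user waits

class SessionBoard:
    def __init__(self) -> None:
        self._waiting: dict[str, list[Session]] = defaultdict(list)

    def user_connects(self, user_id: str, topic: str) -> Session:
        """User mode: choose a topic and wait for a technician."""
        session = Session(user_id, topic)
        self._waiting[topic].append(session)
        return session

    def waiting_sessions(self, topic: str) -> list[Session]:
        """Consultant mode: list sessions awaiting help for a topic."""
        return list(self._waiting[topic])

    def technician_accepts(self, technician_id: str, topic: str) -> Session:
        """Pair the technician with the longest-waiting user for the topic."""
        session = self._waiting[topic].pop(0)
        session.technician_id = technician_id
        return session

board = SessionBoard()
board.user_connects("user-1", "pump-maintenance")
session = board.technician_accepts("tech-7", "pump-maintenance")
print(session.technician_id)  # tech-7
```

A production version would of course add authentication, reconnection handling, and the camera/voice channel on top of this matching logic.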
How we evaluated different input methods available for mobile augmented reality and why we didn’t choose a hand gesture-based interaction method.
In this article, we’ll explain why we considered incorporating hand gestures as an interaction method for one of our mobile XR projects, what options we evaluated, and why we ultimately decided not to follow that path, at least for the first release.
In April 2020, we started working on a new AR project for a large entertainment center in Las Vegas. The project was a treasure hunt-like experience to be deployed as an interactive installation.
During the initial discussions, iPhone 11 Pro Max was selected as the target platform. The devices would be encased in custom-designed handheld masks to amplify the feeling of immersion into the whole theme of the venue.
We chose that specific iPhone model for the horsepower necessary to handle the demanding particle-based graphics. The phone also offered a reasonably sized AR viewport while keeping the weight of the entire setup relatively low.
Considering the project’s scale, its heavy reliance on graphical assets, and our positive experience developing the Magic Leap One app for the HyperloopTT XR Pitch Deck, we selected Unity as our main development tool.
In the AR experience, the users would rent the phone-outfitted masks and look for special symbols all around the dim but colorfully lit, otherworldly venue. The app would recognize the custom-designed symbols covered with fluorescent paint and display mini-games over them for the users to enjoy.
We already had several challenges to solve.
The main one was detecting the symbols — there was nothing standard about them in terms of AR tracking. Also, the on-site lighting was expected to be atmospheric, meaning dark — totally AR-unfriendly.
The games themselves were another challenge. Users could approach and view the AR games from any distance, at any angle, and over any background. The graphics would consist mostly of very resource-intensive particle effects, as it was the theme of the app.
Plus, we only had a couple of months to develop the AR experience.
There was also one big question left: “How are the users going to interact with the app?”
During our work on a mobile AR project, we reviewed many non-touchscreen interaction methods. Read our findings and learn where to use different solutions.
For one of our recent projects — a mobile AR treasure-hunt type experience that consisted of a series of mini-games — we were challenged to find a way of interacting with the app that would be more engaging and fun than a touchscreen interface.
Since we’ve spent some time and effort reviewing various options, we thought it might be helpful to others if we shared our findings.
Our goal is to show you what’s out there and highlight the solutions that we find to be the most promising for mobile interactions.
Not all of the solutions target mobile AR or are actual controllers. But we thought it made sense to include them as well since, as you will see, some of them have the potential to be viable in the future or are simply really cool.
The list is by no means exhaustive, and we’d be happy to hear if you know of any awesome AR interaction solutions that we might have missed.
Litho is a very promising mobile AR controller. The device is small with a futuristic look, which matched the sci-fi theme of our project really well.
The initial research showed great potential, and, since we developed our app in Unity and Litho offers an SDK for it, we decided to take it to round two of our evaluation.
Our colleague Dominik prepared a prototype that allowed us to determine how well Litho fit in with the requirements of our project.
The SDK setup was straightforward and implementing the prototype took relatively little time. From the video, you can see that Litho requires minimal initial calibration. The interaction is based on a precise virtual laser pointer with a fixed origin. Litho offers 3 DOF readings, meaning you can detect how the hand is oriented in space, but not where it’s located. The quality of the interactions doesn’t depend on the lighting conditions.
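To make the 3 DOF limitation concrete: the controller reports orientation but not position, so the app casts the "laser pointer" ray from a fixed assumed origin in the reported direction. The following is our own illustration of that idea in Python, not code from Litho's SDK.

```python
import numpy as np

# Our own sketch of 3 DOF pointer input (not Litho's SDK): orientation
# is known (unit quaternion), position is not, so the ray starts at a
# fixed, assumed origin.

def rotate_by_quaternion(q: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    u = np.array([x, y, z])
    return 2 * np.dot(u, v) * u + (w * w - np.dot(u, u)) * v + 2 * w * np.cross(u, v)

FIXED_ORIGIN = np.array([0.0, -0.3, 0.0])  # assumed: roughly at the user's hip
FORWARD = np.array([0.0, 0.0, 1.0])

def pointer_ray(orientation: np.ndarray):
    """Return (origin, direction) of the virtual laser pointer."""
    return FIXED_ORIGIN, rotate_by_quaternion(orientation, FORWARD)

# A 90-degree rotation around the Y axis turns "forward" into "right".
half = np.sqrt(0.5)
origin, direction = pointer_ray(np.array([half, 0.0, half, 0.0]))
print(np.round(direction, 3))  # [1. 0. 0.]
```

Because only the direction changes, pointing feels precise, but the app can never know where the hand actually is, which is the gap discussed below.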
After evaluation, we decided not to use Litho for our entertainment experience, mainly because it didn’t offer hand position tracking, which we considered crucial for achieving a high fun factor.
This mattered because adding any kind of external controller made the logistics more complex, so the entertainment value it provided needed to be worth that extra cost.
Our app would be preinstalled on mobile devices, which would then be rented on the premises for the users to enjoy for a specified time.
Using any kind of controller required us to pair it with the devices, ensure it remained charged, and account for it when picking up the equipment once a visitor finished using the app.
Even though we decided not to use Litho in our project, I think it could work well in any kind of mobile AR application where the precision of the interaction is the key and where the users interact with the app regularly.
FinchDash / FinchShift are AR/VR motion controllers. A Unity SDK is available. Of the two, only FinchDash is described as supporting mobile platforms, which is a bit unfortunate as it only allows 3 DOF tracking.
Also, the controller’s rather unremarkable looks don’t fit well with themed experiences.
FinchShift, FinchDash's sibling device, uses an additional piece of equipment in the form of an armband to offer full 6 DOF input. It’s also more visually appealing in my opinion.
ManoMotion is a camera-based mobile hand tracking solution that offers a Unity SDK. It’s easy to set up and doesn’t require additional calibration. It can scale with your needs, from 2D tracking through 3D tracking of the hand position all the way to full-blown skeletal tracking with gesture recognition.
As with any camera-based solution though, it has some technical limitations that need to be considered.
The main one is the effective control area, which is only as large as the camera’s field of view. Since we’re discussing a mobile use case, this is even more significant: the device in our project would be held in one hand, and you can only extend the other so far before it becomes awkward.
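The control-area limitation can be quantified with a back-of-the-envelope pinhole-camera check: a camera-based tracker only sees the hand while it projects inside the image. The intrinsics below (resolution, focal length) are assumed values for illustration, not those of any specific phone.

```python
import numpy as np

# Back-of-the-envelope sketch of the "effective control area":
# a camera-based tracker works only while the hand projects inside
# the image. Intrinsics are assumed, not a real phone's.

IMAGE_W, IMAGE_H = 1920, 1080
FOCAL_PX = 1500.0               # assumed focal length in pixels
CX, CY = IMAGE_W / 2, IMAGE_H / 2

def in_camera_view(point_cam: np.ndarray) -> bool:
    """True if a 3D point (meters, camera coordinates, +Z forward)
    projects inside the image bounds."""
    x, y, z = point_cam
    if z <= 0:                  # behind the camera: never visible
        return False
    u = FOCAL_PX * x / z + CX
    v = FOCAL_PX * y / z + CY
    return 0 <= u < IMAGE_W and 0 <= v < IMAGE_H

# A hand 40 cm in front of the camera and 20 cm to the side is still
# visible, but the same sideways offset at only 15 cm depth is not.
print(in_camera_view(np.array([0.20, 0.0, 0.40])))  # True
print(in_camera_view(np.array([0.20, 0.0, 0.15])))  # False
```

The closer the hand is to the device, the narrower the usable sideways range, which is exactly why single-handed phone use makes this constraint bite.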
Another disadvantage is the reliance on computer vision algorithms, which makes the accuracy inconsistent across different lighting conditions. Colored light in particular can degrade the experience quite a bit.
That said, we had the chance to work with ManoMotion’s support on our challenging use case (dim colored lighting). It turns out that ManoMotion can adjust their algorithm if the target conditions are known in advance. In our case, this allowed us to achieve a similar level of accuracy in the challenging lighting as in optimal conditions, which was very impressive.
Google MediaPipe is an open-source camera-based hand tracking solution, similar to ManoMotion, and as such it shares the same limitations. In terms of platforms, it supports Android and iOS. But at the time of our research, it didn’t offer an officially supported Unity SDK.
ClayControl is another option in the category of camera-based hand tracking solutions. It seems to cover a wide range of platforms, including mobile, and has Unity on the list of compatible software platforms.
ClayControl’s website mentions low latency as one of the key selling points, which is interesting considering that solutions based on cameras and computer vision usually involve some degree of input lag. It seems there is no SDK openly available for it, so we didn’t have a chance to evaluate it.
Polhemus is a very promising wearable camera-less hand tracking solution. It’s based on 6 DOF sensors, which don’t require line of sight and can provide continuous tracking even in complete darkness.
At the time of our research, however, it was PC-only. On the website, you can find information about VR, but unfortunately no AR support yet.
Although Xsens DOT is a motion-tracking device and not an actual controller, it could be used as one with some additional work. It offers 3 DOF support, so the orientation data is accurate, but the position is only estimated.
It’s smaller than Litho, which itself is quite small. At the time of writing, I wouldn't consider it practical for typical AR interactions. It might be worth considering for more specific motion tracking needs.
FingerTrak is another promising wearable, this time in the form of a bracelet. It allows continuous hand pose tracking using relatively small thermal cameras attached to a wristband.
On its own FingerTrak allows detecting gestures without the field of view limitation, which affects the other camera-based solutions. It doesn’t look like a finished product yet, but it seems to be an interesting approach that could turn out well in the future.
Leap Motion Controller uses a relatively small sensor with cameras and a skeletal tracking algorithm to accurately detect the user’s hands. Unfortunately, at the time of writing, it doesn’t support mobile yet.
Wait, what? As surprising as it sounds, someone actually seems to have tried using AirPods Pro for 6 DOF tracking! This is more of a curio, but if the video is actually legitimate, AirPods open up some unexpected possibilities.
Ensuring your Android app development project is successful is easy when you know what’s involved in the process. Learn all about Android app development.
Shopping, organizing, scheduling, banking, working, or simply having fun: it’s often a mobile app that does the job, bringing convenience galore.
All right, so how come such a staggering number of apps fail?
That’s easy — because they don’t solve anything for their target audience. Indeed, it’s the failure to research the market that’s the top reason why mobile applications flop on app stores.
Whether we’re talking about iOS or Android app development, the principle is the same:
Do your research and battle test your app idea with potential users. Only then will you be able to use the application as a vehicle for business growth.
In this article, I’ll take you through the basic steps you need to take to make your Android development project successful.
A business analysis of your idea is the most important part of the app creation process.
I can’t stress this enough but validating your idea before investing money in its creation will help you approach the process with the necessary level of confidence backed by data.
Here, you have to consider concepts such as:
A key phase during idea validation is a detailed analysis of your competitors.
Looking at how your competitors solve for their audiences will help you build a better strategy for your product — you’ll know what worked for them and what has drawn negative feedback.
Similarly important is determining whether your product can be lucrative. To do this, look at what your competitors’ growth looks like. Have they developed a sizable customer base? Are they out looking for specialists to expand their business even more?
In essence, you validate your idea to learn if the product will pick up on the market and give you sustainable growth opportunities.
Use tools such as lean canvas for a deeper dive into your product analysis.
Be it an internal application for employees, a customer-facing app, or a game, design is important across all app types. Positive user experience comes from both how the app looks and how it works.
That said, when creating designs, make sure the designer understands the project and knows your target audience. The colors, typography, pictures, animations — all have to be aligned to reflect the needs and preferences of your users.
If you’re releasing your app into a highly competitive market with many similar apps, the design of the interface can make or break a deal.
For example, users might love the functionalities offered by your competitor. But if that product has a poor interface and usability, consumers would just as well leave the application in search of a better solution.
To be a viable alternative, however, your product needs seamless UI and UX.
So, regardless of application type, keep your design smart, intuitive, and clean. A cluttered interface is a sure-fire way to make your users wrinkle their noses.
Augmented reality is a flexible technology that blends digital information with the real world. Learn what augmented reality is in our guide.
We’re slowly entering the era of augmented reality dominance. With prices for AR hardware dropping and tech giants announcing the release of their AR devices in 2021–2022, mainstream AR adoption is just a matter of time.
But it’s no longer a far-flung future.
Dive deeper into the world of augmented reality in our guide to immersive tech.
With all the technology that goes into an AR experience, the definition of augmented reality is pretty straightforward.
Augmented reality (AR) is a computer-enhanced version of the physical world, with digital content used to amplify the user experience of reality.
Digital content in AR can range from simple graphics and animations to videos and GPS overlays. The rendered content can “feel” and respond to the user’s environment and the user can also manipulate the content (via gestures or movement, for example).
With all the theory checked off, let’s check out some of the most exciting augmented reality apps out there.
Mondly helps users learn 33 languages via adaptive learning, which customizes the curriculum based on user progress. The app offers AR-supported conversations, pronunciation advice, and advanced statistics to improve language learning and knowledge retention.
Mondly also features augmented reality lessons where you can view animals and objects in your space for deeper engagement and better experience.
An augmented reality language lesson. Source: Mondly
Now this is a hugely entertaining AR mobile app that sparks my imagination in just the right way.
Anyone a fan of Stranger Things here? The 80’s AR Portal gives you a chance to immerse yourself in a fun experience and feel the crazy retro-cosmic spirit of the 80s. It’s definitely not a perfect app, but the concept is alluring.
ARLOOPA is an educational app with a flair for entertainment. With the app, you can view multiple AR models in your space and engage with them.
Interestingly, the app has three different types of AR available for you to experience: marker-based, markerless, and geo-location.
The Human Anatomy Atlas 2021 is one of the best augmented reality examples out there. Excellent for medical students and health enthusiasts, the app features some of the most detailed human anatomy models, with tissues, muscles, bones, and nerves. All supported by interactive lectures.
And you can view all those fancy body parts in AR.
Bringing immersive technologies into the healthcare industry can help doctors explain complex procedures to patients and serve as an excellent reference during reconstructive surgeries.
We talked in detail about augmented reality in healthcare in one of our articles.
No respected AR app list can go without the IKEA home decor app.
IKEA Place lets users render augmented reality furniture in their living spaces. The app displays true-to-scale furniture, letting homeowners better visualize the items from IKEA’s catalog before deciding on a purchase.
For home decor, there’s also the amazing Houzz app, which got the Editor's Choice award on the app store.
Shapr3D is an award-winning app for 3D modeling workflows. It’s a professional CAD (computer-aided design) tool with robust capabilities. A workhorse for designers.
With the app’s expansion to macOS, Shapr3D iPad users can now collaborate on prototypes across devices.
In 2020, Shapr3D got Apple’s Design Award (the first CAD application to get it).
Canvas is a house-scanning AR application that uses Apple’s raw LiDAR data to create interior CAD models and floor plans.
The improved depth mapping of LiDAR technology results in more accurate models (the developer claims a 5x increase in accuracy compared to the non-LiDAR app version).