Failures are a natural event in the life cycle of a product, but they’re especially painful when a whole project ends up unsuccessful.
When a digital product fails, the team usually performs an inspection called a postmortem. The team gathers, analyzes what went wrong, tries to draw conclusions, and documents the findings. The learnings from that failure help the company adjust its processes to avoid repeating the same mistakes.
But what if we could look into the future before the project failed? At nomtek, we use a premortem — an attempt to predict and mitigate problems before they happen.
What Is a Postmortem?
Before explaining what a premortem is, let’s take a closer look at a postmortem.
In medical language, a postmortem is the procedure performed after a patient's death, where doctors and pathologists try to understand the cause of death. This kind of detailed analysis of why something as complex as a human body has failed can be applied to many other contexts: for example, when you need to understand what exactly went wrong in order to avoid it in the future.
Software development loves postmortems because they’re a great learning tool.
What Is a Premortem?
The main purpose of a premortem is to simulate failure before it happens.
More specifically, a premortem is a meeting that takes place before the first line of code is written. All team members involved in the project attend it.
Depending on the size of the project and the team, a premortem meeting lasts one to two hours.
We follow these steps:
#1. The scope of the project is explained so that each participant has a full picture of the situation and potential threats.
At nomtek, we do a premortem after the kick-off meeting, when the team has already had a chance to familiarize themselves with the current state of the project and plans for the future.
#2. The moderator announces that the project has failed — this is important. We are talking in the past tense: the project has ended. Participants must feel that the action has already happened.
#3. By using sticky notes or an agile retrospective tool, participants list the reasons why the project failed.
These reasons can be obvious ones like exceeding the budget, less obvious ones like the whole team falling ill with COVID and the timeline crumbling, or totally abstract ones like a falling meteorite.
We throw out all the ideas we have before moving on to the next stages.
#4. Then we proceed as in a typical agile retro. We group similar cards and vote for the most important problems.
A good criterion is to vote for the problems that are most likely to occur and that we have influence over. Sorry, we won't upvote the meteorite scenario ;)
#5. Here we discuss the problems and come up with solutions. There are different techniques for carrying out this step.
We simply give participants a moment to come up with specific actions and then post them simultaneously on the Slack channel where we can review the available solutions.
Experience tells us that the most unconventional ideas sometimes end up on the action point list, so it's important not to reveal solutions too early and disrupt the creative phase.
The selected solutions must be actionable, precise, and measurable. We can select more than one way of mitigating the risk of a single problem.
#6. The developed action points are assigned to the appropriate people, and then we set a regular checkpoint to check the progress, e.g., in two weeks or at the beginning of a regular retrospective if it’s a repetitive activity.
Most legendary products out there start with a minimum viable product (MVP) that expands and blooms into a feature-rich solution. But not all companies get the MVP stage right. Learn what a minimum viable product is and how to build it the right way.
What Is an MVP?
An MVP is a release of your product with a minimum set of features enough to validate your main hypothesis.
In other words, an MVP is a product that solves your users’ core problems.
Eric Ries says a minimum viable product “allows a team to collect the maximum amount of validated learning about customers with the least effort.”
An MVP is a low-risk way to answer the following questions:
Do people need that product?
Can I generate a following and a returning audience with it?
Is my way of solving the problem something that people really need and are willing to pay for?
To do that, you don’t need fireworks and dozens of sparkling features — you need the core functionality that gives you answers to the above questions. Glitter and complexity will come later, in the next iterations.
It’s worth keeping in mind that you don’t need an MVP at all to confirm a hypothesis or check if your idea is viable. You can first look into building a proof of concept (POC) or a prototype.
Developing a mobile app is a complex process that involves technical knowledge, product strategy experience, and good old organizational skills. Here are the basic questions you have to answer before developing a mobile app. Learn what to know before developing an app.
Does Your Business Need an App?
Mobile apps might be used by billions of users multiple times a day, but would your business actually benefit from an app?
In other words, do you have business goals, especially long-term ones, that a mobile app would support?
A mobile app should solve an existing problem that’s been vetted and — more importantly — verified to generate a demand for a solution.
For example, besides an in-depth business analysis, you can also check your website analytics to learn if your customers would find value in a mobile app.
If your website is seeing a lot of mobile traffic, dig into and analyze this data. Maybe you could create an app that delivers what mobile users are already coming to your website for. Check traffic sources and events in your analytics software to gain insight.
An app might also support an upcoming event your business is hosting. With an event-focused app, you can drive more engagement and facilitate communication between participants.
Can I Make an App with No Experience?
It all depends on the complexity of your app.
When you lack programming skills, you can consider using low- or no-code platforms to build a simple app. That said, low- or no-code solutions still call for a level of experience, but the entry threshold is lower than that of traditional programming.
So if you need an app that will support internal company processes for employees, low-code solutions can help build one that addresses simple tasks and workflows. Similarly, no-code platforms can also be used to build e-commerce shops.
But again, there’s a learning curve — getting enough expertise will take some time. And if you need complex programming to add additional functionality to your app, some serious skills are required, code or no code involved.
Also, low-code development doesn’t completely eliminate the need to hire a developer, but it can immensely support your dev team and let them focus on creating solutions for complex business processes.
And while you can take the time and effort to build a mobile app in a no-code platform, the technical side of an app is only one part of the story.
To build, release, and maintain a successful mobile application, you’ll need experience in idea validation, data analytics, and product design just to name three.
A mobile app agency brings experience galore to the table. You get the expertise of people who have gone through the process of idea validation many times before and know what it takes to make an app pick up on the market — sometimes without writing a single line of code.
Still, if you decide to pursue the process of mobile app development on your own, be sure to first validate your idea extensively using tools such as the lean canvas.
In this article, we’ll explain why we considered incorporating hand gestures as an interaction method for one of our mobile XR projects, what options we evaluated, and why we ultimately decided not to follow that path, at least for the first release.
A New XR Project
In April 2020, we started working on a new AR project for a large entertainment center in Las Vegas. The project was a treasure hunt-like experience to be deployed as an interactive installation.
During the initial discussions, iPhone 11 Pro Max was selected as the target platform. The devices would be encased in custom-designed handheld masks to amplify the feeling of immersion into the whole theme of the venue.
We chose that specific iPhone model for the horsepower necessary to handle the demanding particle-based graphics. The phone also had a reasonably sized AR viewport while keeping the weight of the entire setup relatively low.
Considering the project’s scale, its high dependency on the graphical assets, and our pleasant experiences with developing the Magic Leap One app for HyperloopTT XR Pitch Deck, we selected Unity as our main development tool.
The Heart and Soul of the Experience
In the AR experience, the users would rent the phone-outfitted masks and look for special symbols all around the dim but colorfully lit, otherworldly venue. The app would recognize the custom-designed symbols covered with fluorescent paint and display mini-games over them for the users to enjoy.
We already had several challenges to solve.
The main one was detecting the symbols — there was nothing standard about them in terms of AR-tracking. Also, the on-site lighting was expected to be atmospheric, meaning dark — totally AR-unfriendly.
The games themselves were another challenge. Users could approach and view the AR games from any distance, at any angle, and against any background. The graphics would consist mostly of very resource-intensive particle effects, as that was the theme of the app.
Plus, we only had a couple of months to develop the AR experience.
There was also one big question left: "How are the users going to interact with the app?"
Building good digital products is a combination of being innovative and following tested mobile app development methods. A proof of concept (POC), prototype, and minimum viable product (MVP) help test a product idea before you make a significant investment.
What are the differences between a POC, a prototype, and an MVP, and how do you choose the one that fits your project best? Read on for answers.
POC vs. MVP vs. Prototype: Short Definition
Proof of concept — A POC is a method of validating assumptions with target users and checking if your idea is feasible technically.
Prototype — A mobile app prototype evaluates the general “shape” of your idea (e.g., look, flow, user interaction).
Minimum viable product — An MVP is a fully working version of your product but with only the core features that let you collect initial user feedback.
In the world of mobile app development, a POC is a simple project that validates or demonstrates an idea. The purpose of a POC is to check if an idea can be developed and won’t consume excessive resources or time.
With a POC you essentially evaluate core functionality. If your app idea is complex, you can have many POCs to test each functionality.
User experience is pushed aside when you build a POC. That’s because it takes lots of time and work to create an optimal user experience, and that’s not the point of creating a POC. The goal is to validate technical capability.
Features of a proof of concept
Catch early investor interest. You can build a POC to present your idea to investors to acquire seed funding for further development.
Innovate. Innovation happens at the intersection of technological viability and market demand. A POC will help you check if your idea can be built using current technology.
Save time. When you check if your idea can be built, you automatically save time that would be wasted if you were to figure out technical viability issues once you hired developers and committed significant resources and time.
Pick the technology. Creating many POCs using different technologies can help you decide which technology stack is the most suitable for your project. This way, you’ll know early on what’s possible as you move forward and how to structure your product’s roadmap.
Check against the competition. If you plan to release a mobile application in a heavily competitive market, a POC will help you validate unique features in your offer. Your product will need to include a unique approach to solving the same problem to be a better alternative to what’s already out there.
Example of a proof of concept
PONS XR Interpreter
Companies around the world are increasingly embracing remote-work solutions and collaboration methods. We worked with PONS — a global publishing house and our long-term partner — to create a proof of concept for an XR cross-language communication solution supported by AI.
The POC helped validate if XR Interpreter could be used in a professional environment to make communication easier.
Mobile analytics gives you all the necessary insight into in-app user behaviors. The data from mobile analytics can help you make informed decisions about changes in your features or design. Mobile analytics is also invaluable in adjusting in-app processes and funnels because you can directly measure how every step affects user experience (UX).
What Is Mobile App Analytics?
Mobile analytics is the process of collecting and analyzing in-app data. This data gives you precise information about app performance and in-app user journey, letting you fix ineffective elements.
Mobile analytics is one of the key elements to improving conversions, retention, and user engagement. It’s the foundation of great digital products that engage target audiences.
How Does Mobile Analytics Work?
Mobile analytics software integrates with your mobile application to gather and analyze the data produced by users and the app itself. For example, an analytics tool can be embedded as an SDK into the code. You need different SDKs depending on the platform release of your app (e.g., iOS, Android).
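To make this concrete, here is a toy sketch of what an embedded analytics client does under the hood: it queues named events with properties before a real SDK would batch them off to a server. All class and method names below are hypothetical, not those of any actual analytics SDK.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of an embedded analytics client. A real SDK would
// also batch the queue and send it to a collection server.
class AnalyticsClient {
    private final List<Map<String, String>> queue = new ArrayList<>();

    // Record a named event along with its properties.
    void logEvent(String name, Map<String, String> props) {
        Map<String, String> event = new HashMap<>(props);
        event.put("event", name);
        queue.add(event);
    }

    // Number of events waiting to be flushed.
    int pendingEvents() {
        return queue.size();
    }
}

class Demo {
    public static void main(String[] args) {
        AnalyticsClient analytics = new AnalyticsClient();
        Map<String, String> props = new HashMap<>();
        props.put("step", "3");
        // e.g., track that a user completed the onboarding flow
        analytics.logEvent("onboarding_completed", props);
        System.out.println(analytics.pendingEvents());
    }
}
```

In a real app, each platform build (iOS, Android) would embed the vendor's SDK instead, but the shape of the integration — log named events at meaningful user actions — stays the same.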
Analytics lets you hypothesize, make assumptions, and evaluate experiments. It’s the key to refining and adjusting your product so that it matches the real-life behaviors that users display in your app.
With analytics, you know how users are using your app, what in-app actions they take, and what features they’re using (or not).
For example, you can check the completion of onboarding or how the buying process looks from the user’s perspective.
For one of our recent projects — a mobile AR treasure-hunt type experience that consisted of a series of mini-games — we were challenged to find a way of interacting with the app that would be more engaging and fun than a touchscreen interface.
Since we’ve spent some time and effort reviewing various options, we thought it might be helpful to others if we shared our findings.
Our goal is to show you what’s out there and highlight the solutions that we find to be the most promising for mobile interactions.
Not all of the solutions target mobile AR or are actual controllers. But we thought it makes sense to include them as well, since, as you will see, some of them have the potential to be viable in the future or are simply really cool.
The list is by no means exhaustive, and we’d be happy to hear if you know of any awesome AR interaction solutions that we might have missed.
What Mobile Augmented Reality App Interaction Methods Are There?
Litho

Litho is a very promising mobile AR controller. The device is small with a futuristic look, which matched the sci-fi theme of our project really well.
The initial research showed great potential, and, since we developed our app in Unity and Litho offers an SDK for it, we decided to take it to round two of our evaluation.
Our colleague Dominik prepared a prototype that allowed us to determine how well Litho fit in with the requirements of our project.
The SDK setup was straightforward and implementing the prototype took relatively little time. From the video, you can see that Litho requires minimal initial calibration. The interaction is based on a precise virtual laser pointer with a fixed origin. Litho offers 3 DOF readings, meaning you can detect how the hand is oriented in space, but not where it’s located. The quality of the interactions doesn’t depend on the lighting conditions.
After evaluation, we decided not to use Litho for our entertainment experience, mainly because it didn’t offer hand position tracking, which we considered crucial for achieving a high fun factor.
This mattered because adding any kind of external controller made the logistics more complex, so the entertainment value it provided needed to be worth that extra cost.
Our app would be preinstalled on mobile devices, which would then be rented on the premises for the users to enjoy for a specified time.
Using any kind of controller required us to pair it with the devices, ensure it remained charged, and account for it when picking up the equipment once a visitor finished using the app.
Even though we decided not to use Litho in our project, I think it could work well in any kind of mobile AR application where the precision of the interaction is the key and where the users interact with the app regularly.
FinchDash / FinchShift
FinchDash / FinchShift are AR/VR motion controllers. A Unity SDK is available. Of the two, only FinchDash is described as supporting mobile platforms, which is a bit unfortunate as it only allows 3 DOF tracking.
Also, the controller’s rather unremarkable looks don’t fit well with themed experiences.
FinchShift, FinchDash's sibling device, uses an additional piece of equipment in the form of an armband to offer full 6 DOF input. It’s also more visually appealing in my opinion.
ManoMotion

ManoMotion is a camera-based mobile hand tracking solution that offers a Unity SDK. It’s easy to set up and doesn’t require additional calibration. It can scale depending on the needs from 2D tracking through 3D tracking of the hand position and finally to a full-blown skeletal tracking with gesture recognition.
As with any camera-based solution though, it has some technical limitations that need to be considered.
The main one is that the effective control area is only as large as the camera's field of view. Since we're discussing a mobile use case here, this is even more significant: the device in our project would be held in one hand, and you can only extend the other one so far before it becomes awkward.
Another disadvantage is the reliance on computer vision algorithms, which causes the accuracy to be inconsistent across different lighting conditions. Especially colored light can degrade the experience quite a bit.
That said, we had the chance to work with ManoMotion’s support on our challenging use case (dim colored lighting). It turns out that ManoMotion can adjust their algorithm if the target conditions are known in advance. In our case, it allowed achieving a similar level of accuracy in the challenging lighting as in an optimal one, which was very impressive.
Google MediaPipe Hand Tracking
Google MediaPipe is an open-source camera-based hand tracking solution, similar to ManoMotion, and as such it shares the same limitations. In terms of platforms, it supports Android and iOS. But at the time of our research, it didn’t offer an officially supported Unity SDK.
ClayControl

ClayControl is another option in the category of camera-based hand tracking solutions. It seems to cover a wide range of platforms, including mobile, and has Unity on the list of compatible software platforms.
ClayControl’s website mentions low latency as one of the key selling points, which is interesting considering that solutions based on cameras and computer vision usually involve some degree of input lag. There seems to be no openly available SDK for it, so we didn’t have a chance to evaluate it.
Polhemus

Polhemus is a very promising wearable camera-less hand tracking solution. It’s based on 6 DOF sensors, which don’t require a field of view and can provide continuous tracking even in complete darkness.
At the time of our research, however, it was PC-only. On the website, you can find information about VR, but unfortunately no AR support yet.
Xsens DOT

Although Xsens DOT is a motion-tracking device and not an actual controller, it could be used as one with some additional work. It does offer 3 DOF support, so the orientation data is accurate, but the position is only estimated.
It’s smaller than Litho, which itself is quite small. At the time of writing, I wouldn't consider it practical for typical AR interactions. It might be worth considering for more specific motion tracking needs.
FingerTrak

FingerTrak is another promising wearable, this time in the form of a bracelet. It allows continuous hand pose tracking using relatively small thermal cameras attached to a wristband.
On its own FingerTrak allows detecting gestures without the field of view limitation, which affects the other camera-based solutions. It doesn’t look like a finished product yet, but it seems to be an interesting approach that could turn out well in the future.
Leap Motion Controller
Leap Motion Controller uses a relatively small sensor with cameras and a skeletal tracking algorithm to accurately detect the user’s hands. Unfortunately, at the time of writing, it doesn’t support mobile yet.
Apple AirPods Pro
Wait, what? As surprising as it sounds, someone actually seems to have tried using AirPods Pro for 6 DOF tracking! This is more of a curio, but if the video is actually legitimate, AirPods open up some unexpected possibilities.
To develop a great mobile app, you need a reliable partner who will help you build a product that aligns with your business goals. With many iOS development companies to choose from, it can be daunting to pick a trustworthy contractor.
The key is to find a partner who will not only develop your product but also actively participate in the idea validation process and further product refinement.
Learn what the iOS development process looks like and how to work with an iOS development company.
What Is Required for iOS App Development?
Apple has a whole set of resources and tools to help developers build apps that work on devices running iOS. That said, the iOS app development process is somewhat different from Android app development — to build an iOS app, you’ll need to equip yourself with a few prerequisite tools:
an Apple Mac
an Apple Developer account (priced at $99 yearly, or $299 for an enterprise account)
and Xcode to sign and publish a native iOS application
While an Apple Mac and an Apple Developer account are self-explanatory, Xcode is an integrated development environment (IDE) built specifically for creating native apps for Apple’s devices.
Why do most mobile apps fail? That’s easy: because they don’t solve anything for their target audience. Indeed, the failure to research the market is the top reason why mobile applications flop on app stores.
Whether we’re talking about iOS or Android app development, the principle is the same:
Do your research and battle test your app idea with potential users. Only then will you be able to use the application as a vehicle for business growth.
In this article, I’ll take you through the basic steps you need to take to make your Android development project successful.
Start with an In-Depth Idea Validation and Analysis
A business analysis of your idea is the most important part of the app creation process.
I can’t stress this enough: validating your idea before investing money in its creation will help you approach the process with the necessary level of confidence backed by data.
Here, you have to consider concepts such as:
Unique value proposition — how is your offer different from everything out there?
Target audience — who are your preferred users?
Reach channels — how will you reach your users with your mobile product?
Revenue channels — how will you monetize your product? (e.g., in-app sales, advertising, paid features)
A key phase during idea validation is a detailed analysis of your competitors.
Looking at how your competitors solve problems for their audiences will help you build a better strategy for your product — you’ll know what worked for them and what has drawn negative feedback.
Similarly important is determining whether your product can be lucrative. To do this, look at what your competitors’ growth looks like. Have they developed a sizable customer base? Are they out looking for specialists to expand their business even more?
In essence, you validate your idea to learn if the product will pick up on the market and give you sustainable growth opportunities.
Use tools such as lean canvas for a deeper dive into your product analysis.
Create Stunning, Engaging, and Simple UI and UX
Be it an internal application for employees, a customer-facing app, or a game, design is important across all app types. Positive user experience comes from both how the app looks and how it works.
That said, when creating designs, make sure the designer understands the project and knows your target audience. The colors, typography, pictures, animations — all have to be aligned to reflect the needs and preferences of your users.
If you’re releasing your app into a highly competitive market with many similar apps, the design of the interface can make or break a deal.
For example, users might love the functionalities offered by your competitor. But if that product has a poor interface and usability, consumers would just as well leave the application in search of a better solution.
To be a viable alternative, however, your product needs seamless UI and UX.
So, regardless of application type, keep your design smart, intuitive, and clean. A cluttered interface is a sure-fire way to make your users wrinkle their noses.
Animations are important. iOS developers seem to know that because applications in the App Store are usually much more polished than their Android counterparts.
What’s the reason behind this? Are Android devs simply lazy?
The problem is that for a long time Android SDK didn’t offer great tools for creating animations. This has been changing throughout the years and nowadays creating beautiful animations in Android is a lot easier.
At the I/O 2018 conference, Google introduced yet another great library — MotionLayout. In this article, we will take a deep dive into the world of MotionLayout and explore the countless possibilities it offers to developers.
So, what is MotionLayout? Simply put, MotionLayout is a ViewGroup that extends ConstraintLayout.
We can define and constrain children just like we would when using a standard ConstraintLayout. The difference is that MotionLayout builds upon its capabilities: we can now describe layout transitions and animate changes to view properties.
The amazing thing about MotionLayout is that it’s fully declarative. All transitions and animations can be described purely in XML.
Before we start playing with MotionLayout, we need to add suitable dependencies to a project.
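For example, the Gradle dependency might look like the sketch below. MotionLayout ships as part of ConstraintLayout 2.0 and above; the exact version number here is illustrative, so check for the latest ConstraintLayout 2.x release.

```groovy
dependencies {
    // MotionLayout is included in ConstraintLayout 2.0+ (version below is illustrative)
    implementation 'androidx.constraintlayout:constraintlayout:2.0.4'
}
```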
The most important parameter here is app:layoutDescription. It lets us point to a scene definition. In this scene, we will define our layout transitions.
Another interesting parameter is app:motionDebug="SHOW_ALL". Thanks to this parameter, our layout will show information helpful for debugging and adjusting animations, namely the path and progress of the animation.
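Putting the two parameters together, a minimal layout using MotionLayout might look like the sketch below (the view ID and scene file name are illustrative):

```xml
<androidx.constraintlayout.motion.widget.MotionLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    app:layoutDescription="@xml/motion_scene"
    app:motionDebug="SHOW_ALL">

    <!-- The view we will animate in the scene -->
    <View
        android:id="@+id/ball"
        android:layout_width="48dp"
        android:layout_height="48dp" />

</androidx.constraintlayout.motion.widget.MotionLayout>
```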
Now we need to define our scene. Take a look at the diagram below.
As you can see, MotionScene consists of two major blocks.
The Transition block contains several pieces of information:
The touch handler defines the way users will interact with the animation. The animation might be started with a click action, or a user might progress the animation with a swipe gesture.
KeyFrameSet will enable us to fully customize animations. We will take a detailed look at keyframes later in this article.
References to starting and final layout constraints.
Apart from that, we need to define a starting and a final constraint set. We don’t need to define constraints for views that don’t move during the animation.
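The structure described above can be sketched as a MotionScene skeleton (the IDs are illustrative and the inner blocks are elided):

```xml
<MotionScene xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:motion="http://schemas.android.com/apk/res-auto">

    <Transition
        motion:constraintSetStart="@id/start"
        motion:constraintSetEnd="@id/end"
        motion:duration="1000">
        <!-- Touch handler: <OnClick> or <OnSwipe> -->
        <!-- Optional <KeyFrameSet> for customizing the animation -->
    </Transition>

    <!-- Initial layout constraints -->
    <ConstraintSet android:id="@+id/start">
        <!-- <Constraint> per animated view -->
    </ConstraintSet>

    <!-- Final layout constraints -->
    <ConstraintSet android:id="@+id/end">
        <!-- <Constraint> per animated view -->
    </ConstraintSet>
</MotionScene>
```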
Now that we know the basics we can create our scene.
motion:constraintSetStart / motion:constraintSetEnd — reference the initial and final layout constraints
motion:duration — defines the duration of the animation. Note that it has no effect if the touch handler is defined as OnSwipe. In that case, the duration of the animation is defined by the velocity of the user's gesture and additional parameters that will be described later.
motion:dragDirection — determines the direction of the gesture that needs to be performed to progress the animation. If it’s equal to “dragRight,” a user needs to swipe from left to right to progress the animation. If a user swipes from right to left, the animation will be reversed.
motion:touchRegionId — defines the view that needs to be dragged to progress the animation. If it’s not defined, a user might swipe anywhere within MotionLayout.
motion:touchAnchorId — this parameter might be a little confusing. We need to tell MotionLayout how much the animation should progress for a given number of pixels the user drags their finger. The library determines how much a user's swipe gesture progresses the animation based on the distance between the starting and final position of the touchAnchor view.
motion:touchAnchorSide — determines the side of the touchAnchor view.
motion:dragScale — determines how much a swipe gesture will progress the animation. If dragScale is equal to 2 and a user's finger moves 2cm, the touch anchor view will move 4cm.
motion:maxAcceleration — determines how fast the animation will snap to initial or final state once the user releases their finger.
motion:maxVelocity — is similar to motion:maxAcceleration but determines the maximum velocity.
<ConstraintSet> — is a set of initial or final layout constraints. Each constraint defines attributes for a particular view. Note that not every view attribute can be declared as part of a constraint. It should describe the view's position or be one of the following:
If we need to animate a different view attribute, we should declare it as <CustomAttribute>. We will learn how to do this in the next section.
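As a quick preview, an illustrative scene that moves a view from left to right on a swipe and animates its background color via <CustomAttribute> might look like this (all IDs and values are assumptions for the sketch):

```xml
<MotionScene xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:motion="http://schemas.android.com/apk/res-auto">

    <Transition
        motion:constraintSetStart="@id/start"
        motion:constraintSetEnd="@id/end">
        <!-- Swiping right on the ball drives the animation forward -->
        <OnSwipe
            motion:dragDirection="dragRight"
            motion:touchRegionId="@id/ball"
            motion:touchAnchorId="@id/ball"
            motion:touchAnchorSide="right" />
    </Transition>

    <ConstraintSet android:id="@+id/start">
        <Constraint
            android:id="@id/ball"
            android:layout_width="48dp"
            android:layout_height="48dp"
            motion:layout_constraintStart_toStartOf="parent"
            motion:layout_constraintTop_toTopOf="parent">
            <!-- Attributes outside the supported set go into CustomAttribute -->
            <CustomAttribute
                motion:attributeName="backgroundColor"
                motion:customColorValue="#FF0000" />
        </Constraint>
    </ConstraintSet>

    <ConstraintSet android:id="@+id/end">
        <Constraint
            android:id="@id/ball"
            android:layout_width="48dp"
            android:layout_height="48dp"
            motion:layout_constraintEnd_toEndOf="parent"
            motion:layout_constraintTop_toTopOf="parent">
            <CustomAttribute
                motion:attributeName="backgroundColor"
                motion:customColorValue="#0000FF" />
        </Constraint>
    </ConstraintSet>
</MotionScene>
```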
Let’s take a look at the animation we created.
As you may remember, we enabled the debug overlay.
Thanks to this we can see text at the bottom describing the animation progress and animation frame rate. We can also see paths of our animated views.
One thing in 2020 was certain — the uncertainty. All the trends and projections announced for 2020 have either been modified or their emergence delayed. But in technology, we’ve seen an unprecedented evolution.
E-commerce sales soared, online education matured, and on-demand services rose to huge popularity. To meet the sudden customer demand, companies across the globe have increased their spending on digital transformation.
This demand has in turn spurred the growth and branching out of multiple related services. Will 2021 be a continuation of that expansion, or will other technologies see increased adoption?
5G Fuels the World
5G is a gateway to the realm of a mind-bending technological revolution. That’s not an overstatement — most of the technology trends of the future will be relying on that connectivity.
According to GSMA’s annual state of the global mobile economy report, 5G will account for 20% of global connections by 2025. And while the rollout pace isn’t yet astonishing, the hype around 5G keeps the public’s interest and curiosity up.
Activity trackers, phones, smartglasses, cars, cameras, and plenty of other devices collect data about you and your close ones. From your physical geographic location to your browsing history to even face recognition, companies have data galore about you.
The analysis of this highly personal data (your behavior, interests, and preferences) gathered by the Internet of Things devices is dubbed the Internet of Behavior (IoB).
The more we use online services and connected devices, the more personal data we leave behind. Companies know very well what our political preferences are, where we live, what we do, what we believe in, what our interests are, who we associate with, and so on.
This data along with a slew of information coming from devices that are yet to enter the market (e.g., smartglasses that know exactly where you look at any given moment) will give businesses an unprecedented wealth of information to use to influence our behavior.
But the IoB also brings several customer benefits — for example, the more data about driving patterns is collected from connected cars, the better driving experiences automotive companies can build.
In 2021, we’re likely to see companies use the data from connected devices to create extremely personalized offers and products.
We’re slowly entering the era of augmented reality dominance. With prices for AR hardware dropping and tech giants announcing the release of their AR devices in 2021–2022, mainstream AR adoption is just a matter of time.
But it’s no longer a far-flung future.
Dive deeper into the world of augmented reality in our guide to immersive tech.
Despite all the technology that goes into an AR experience, the definition of augmented reality is pretty straightforward.
Augmented reality (AR) is a computer-enhanced version of the physical world, with digital content used to amplify the user experience of reality.
Digital content in AR can range from simple graphics and animations to videos and GPS overlays. The rendered content can “feel” and respond to the user’s environment and the user can also manipulate the content (via gestures or movement, for example).
With all the theory checked off, let’s check out some of the most exciting augmented reality apps out there.
Mondly helps users learn 33 languages via adaptive learning, which customizes the curriculum based on user progress. The app offers AR-supported conversations, pronunciation advice, and advanced statistics to improve language learning and knowledge retention.
Mondly also features augmented reality lessons where you can view animals and objects in your space for deeper engagement and better experience.
An augmented reality language lesson. Source: Mondly
80's AR Portal
Now this is a hugely entertaining AR mobile app that sparks my imagination in all the right ways.
Anyone a fan of Stranger Things here? The 80’s AR Portal gives you a chance to immerse yourself in a fun experience and feel the crazy retro-cosmic spirit of the ’80s. It’s definitely not a perfect app, but the concept is alluring.
ARLOOPA is an educational app with a flair for entertainment. With the app, you can view multiple AR models in your space and engage with them.
Interestingly, the app has three different types of AR available for you to experience: marker-based, markerless, and geo-location.
Human Anatomy Atlas 2021
The Human Anatomy Atlas 2021 is one of the best augmented reality examples out there. Excellent for medical students and health enthusiasts, the app features some of the most detailed human anatomy models, with tissues, muscles, bones, and nerves. All supported by interactive lectures.
And you can view all those fancy body parts in AR.
Bringing immersive technologies into the healthcare industry can help doctors explain complex procedures to patients and serve as an excellent reference during reconstructive surgeries.
No self-respecting AR app list can go without the IKEA home decor app.
IKEA Place lets users render augmented reality furniture in their living spaces. The app displays true-to-scale furniture, letting homeowners better visualize items from IKEA’s catalog before deciding on a purchase.
For home decor, there’s also the amazing Houzz app, which got the Editor's Choice award on the App Store.
Shapr3D is an award-winning app for 3D modeling workflows. It’s a professional CAD (computer-aided design) tool with robust capabilities. A workhorse for designers.
With the expansion of the app to macOS, the Shapr3D iPad users can now collaborate on prototypes across devices.
Here at nomtek we pay special attention to application responsiveness. It’s also very important for apps we create to be intuitive for our users. Great animations play a huge part in how users perceive apps.
In Android's early days, we didn’t have much choice in the available solutions. Now there are hundreds of awesome libraries that make our life as app developers easier.
In this article, we will give you an overview of animation libraries that we use in our everyday work.
Advancing Clinical Practice with Augmented Reality
There’s still lots of research, refinement, and mainstream adoption needed for augmented reality to enhance clinical practice in healthcare systems across the world.
As the technology matures and overcomes technical and implementation hurdles, augmented reality will most likely become a go-to tool for treating a variety of ailments. Doctors will use AR solutions to improve patient outcomes and the quality of surgeries.
I’ve seen lots of articles about using various libraries and frameworks, clean code, and programming practices but almost none about organizing a developer’s work around implementing a feature. So I decided to share what works for me, and I hope it will help someone improve their process.
I’ll tell you how I analyze a user story, design and implement a solution, and prepare a merge request.
I’m currently working on an Android project that follows a Clean Architecture variation, with Gradle submodules containing features. All the examples in this article are based on that project.
Getting to Know the Feature
I start by reading the whole story and looking through the designs to load all the context into my head. Then I go through the story again, but this time listing everything that needs to be done.
And I do mean everything, not only the implementation pieces.
Need to add copy to the translation tool — list it. Need to talk with other teams about some integration details — list it.
Entries like “load data” are good for now. I worry about all the details later (fetching data from the API, caching locally, handling errors, etc.).
In a perfect world, the user stories that we pick up to work on should be small enough to result only in a few points on the list. Unfortunately, this is not always the case in the real world.
Making the Code Review Easy
I follow this pattern until I'm out of items on the sublist.
Then I prepare a pull request from the sub-branch to the feature branch: “OA-5_load_data” -> “OA-5.” This way, reviewers can get through all the code in smaller pieces that are easier to digest.
Before actually creating the pull request, I read through all the changes. Such a pre-review step makes the code review process easier. Most typos and styling issues are caught here and other developers don't have to point them out.
After merging all the pieces, I create another pull request — this time to the development branch: “OA-5” -> “develop.” In this one, reviewers can only skim through the code looking mostly for leftover developer tweaks (unnecessary logs, navigation shortcuts, etc.), since they've seen all of the code before.
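The flow described above (a sub-branch per task, a small pull request into the feature branch, then one pull request into develop) can be sketched with plain git commands. This is a minimal illustration run in a throwaway repository; the branch names come from the example, and a local `git merge --no-ff` stands in for merging a pull request:

```shell
#!/bin/sh
# Sketch of the branch-per-subtask flow, in a temporary repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev
git checkout -q -b develop
git commit -q --allow-empty -m "initial commit"

# Feature branch for the whole user story.
git checkout -q -b OA-5

# Sub-branch for one item on the task list.
git checkout -q -b OA-5_load_data
echo "load data" > feature.txt
git add feature.txt
git commit -q -m "OA-5 load data"

# Small pull request: sub-branch -> feature branch.
git checkout -q OA-5
git merge -q --no-ff -m "Merge OA-5_load_data" OA-5_load_data

# Final pull request: feature branch -> develop.
git checkout -q develop
git merge -q --no-ff -m "Merge OA-5" OA-5

git log --oneline --graph
```

Running the script leaves develop with one merge commit per “pull request,” so the final history mirrors the small-pieces review flow.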
Once this pull request is merged, the feature is ready for QA.
While learning is fun in itself, some classes can bore even the most curious person out there. And let’s face it: when you’re bored, the last thing you want to do is actually understand something.
With schools and universities in many countries still relying on remote education, engagement is more important than ever to maintain the quality of education.
Augmented reality helps teachers stimulate students’ interest by providing them with immersive visual experiences.
Google is developing its AR capabilities full-throttle. With the company’s Google for Education initiative, educators can use several 3D models to help students better understand concepts and retain knowledge.
Interestingly, researchers found that “Anatomy 4D mobile application was better able to hold the attention of the students than the anatomy notes.”
By leveraging augmented reality, students can receive practical experience in multiple fields in a simulated learning environment.
AR Facilitates the Understanding of Difficult Concepts
Augmented reality can help students understand difficult and abstract concepts — for example, how the human digestive system looks and works.
On a bigger scale, the enormity of the universe and our own solar system are difficult to comprehend, but augmented reality lets students get a more immersive view of such gargantuan structures.
Augmented reality lets students experience and understand scientific phenomena without actually running experiments.
For example, various chemical reactions are too dangerous to run in the classroom. By bringing these experiments into a tangible virtual environment, AR gives students access to many topics beyond reach.
Another way augmented reality makes learning immersive is by letting students manipulate objects and experience the resulting reactions and phenomena.
This way, students gain a better conceptual understanding of topics that could otherwise be inaccessible to them.
Augmented Reality Improves Learning Performance
Augmented reality helps students learn new concepts faster and more effectively.
For example, in a study evaluating zSpace, an AR/VR learning solution, researchers found that students understood concepts better when augmented reality was involved.
zSpace is a comprehensive solution built to create immersive classrooms for experiential learning. Using zSpace, educators can prepare custom learning material enriched with advanced augmented reality elements.
With tools such as zSpace, educators can craft curriculums in a way that makes it easier and faster for students to absorb information.
Medical students and engineers can perform procedures on virtual objects to practice their skills and gain the necessary knowledge.
AR Adds Context to Lessons
For history teachers, it can be tricky to maintain a seamless experience for students for the duration of the whole lesson.
Without an easily relatable context, students usually pore over dates and time periods — tangible visual material could help them digest the lesson better.
For example, when talking about hieroglyphs or historical structures, educators can render the object right in the classroom, and then deliver the lesson.
Building Immersive Homework with AR
Not all students are equally eager to do their homework. Some need a little more incentive.
Augmented reality helps make workbooks and textbooks more interesting by letting students engage in immersive 3D renderings of studied concepts.
While we’re yet to experience a full-scale AR disruption, the potential of this tech to revolutionize workflows and processes is already huge. Let’s see how and when businesses can use augmented reality.
Note: I’ll be using the terms augmented reality and mixed reality interchangeably throughout the article.
While the benefits can directly contribute to, say, increased sales in e-commerce businesses — Houzz’s customers are 11x more likely to make a purchase after using the company’s mobile AR feature — the AR technology isn’t universally viable for all sectors.
How to Implement AR in Your Business Strategy?
Before jumping into the world of augmented reality tech, there are a few questions you have to answer.
If you want to include AR in your business strategy, first ask yourself what it is specifically that you want to achieve through an AR solution.
To give you an example: a problem can be something missing in the workflow.
Let’s say quality assurance at your company takes a lot of time to complete. The reason might be that QA professionals need to comb through stacks of paper instructions to finish the process.
This inefficient approach results in a waste of time: seconds turn into minutes and minutes into hours. In the long term, it amounts to a significant drop in productivity.
Augmented reality could come in handy here by feeding all the steps and actions necessary to conduct a QA test into a mixed reality headset. The application would interact with and respond to the actions of the tester in real time.
Here’s Renault’s road to quality assurance supported by mixed reality:
Wondering what companies use augmented reality? Let’s look at some of the use cases of augmented reality across industries and sectors.
The manufacturing sector is expected to be one of the biggest beneficiaries of cross-reality solutions. Combined with the rollout of 5G connectivity that offers speeds 100x faster than 4G LTE, augmented reality can be a huge opportunity for manufacturing facilities to improve a number of their processes and workflows.
The upside of using AR in manufacturing facilities is that it’s a fraction of the cost compared to investing in complex hardware.
Besides, augmented reality makes exchanging information much more convenient since there are no physical restrictions such as cables or extra devices. Data is fed to the AR application virtually.
Onboarding. With AR, employees just starting out in a manufacturing plant could see interactive hints and instructions on how to use machinery, with all important information layered over the physical equipment. AR onboarding can increase employee safety and shave off ramp-up time.
See an example of using AR for training purposes at BMW:
Productivity. AR headsets equipped with AI technology could help employees get from point A to point B in the most efficient fashion, leading to potential productivity gains in the long term. Moreover, engineers could send a request for a specific part by simply pointing at it.
Operational information. With AR elements overlaid in a factory, manufacturing employees could have easy access to information about the performance of existing equipment and infrastructure. For example, interactive gauges over different areas on the assembly line offer real-time insight for employees.
Safety. Mixed reality solutions can inform employees about dangers (e.g., areas closed for maintenance/cleaning). And when an emergency happens, workers in need of help or assistance could transmit an interactive beam with their whereabouts.
Relying on complex technology and hardware, cars are becoming increasingly difficult for mechanics to maintain, as they may not yet have the know-how necessary for servicing. The pace of digital evolution in the automotive industry calls for improvements in various workflows and processes.
Remote assistance. AR-based remote assist lets engineers show other employees how to conduct complex repairs and service maintenance on vehicles.
Training. Cross-reality workshops are a safe and efficient way to share information. Engineers can enroll in digital training where they learn the workings of complex machinery and how to assemble various parts. Instructors show trainees how to disassemble an engine without actually taking it apart.
Prototyping. Every new iteration of a prototype can be costly for an automotive company. Car prototypes can cost upwards of $100,000. Immersive designs aid in iterating and improving on product designs without companies having to spend anything on expensive prototypes.
Now, $100,000 might not seem like a lot for big automotive companies, but when we’re talking about multiple iterations, it can amount to a substantial sum.
Content consumption. Enhanced with augmented reality, written content can be expanded to include immersive experiences. AR can also serve as a visual aid in non-fiction writing to better illustrate concepts and events.
Board games. Traditional board games could use augmented reality to enhance the level of immersion for gamers. Physical boards can be transformed from 2D experiences into interactive 3D adventures — the board stays the same while the elements turn virtual.
In the military, soldiers can use augmented reality that transforms sensor data into visual input to gain greater insight into their surroundings as well as to improve navigation.
Situational awareness. Sensors and cameras implemented in AR tech provide soldiers with more information regarding their surroundings. Other critical information can also be fed into a headset from headquarters.
Since 2018, the US Army has been looking into AR while developing the Integrated Visual Augmentation System (IVAS). The IVAS provides mission-critical information to soldiers on the battlefield; for example, the system can perform quick object identification.
Navigation. In aviation, augmented reality blends complex charts and maps into a pilot’s field of view, decreasing the need to check the information on displays.
The US Army is exploring the possibilities of augmented reality goggles for combat dogs. The idea is to give dogs in the field more contextual information, along with visual indicators that tell dogs where to go.
Ordinarily, soldiers guide their dogs with lasers or hand gestures. During a mission, however, it might not be possible for the soldier to be close enough to the dog to give it commands. This is where AR goggles step in, letting soldiers guide their dogs through visual cues rendered in the glasses. Additionally, the goggles attached to the dog’s head transmit what the dog sees back to the soldier.
Tenant instructions. Landlords renting apartments can use augmented reality to provide tenants with instructions. For example, to help tenants orient themselves around the apartment or explain how to use and locate different utilities.
Along with smart locks that eliminate the need for the landlord to hand the tenant the keys, augmented reality further decreases the necessity for contact.
Tenants simply put on a headset or turn on an app and explore the flat themselves with detailed instructions.
Immersive experiences. To advertise offered destinations and facilities, travel agencies can turn to augmented reality to create AR tour presentations. This way, customers get to experience interactive content and learn more about a destination. AR tour presentations also help travel agencies prepare offers with content customized to cater to different target audiences.
Moreover, to improve customer experience, travel agencies can equip their customers with advanced digital tour guides. These augmented reality guides can be further tweaked to offer memorable, personalized experiences to tourists in a given location. For example, an AR guide could contain sightseeing spots that match customer needs and preferences.
With relatively cheap implementation costs and easy accessibility, mobile AR is a great choice for a goal-oriented tech asset. Businesses can use augmented reality to increase customer engagement, improve brand promotion and awareness, and facilitate the creation of product demos.
Both Apple and Google are heavily developing their respective AR software development kits (SDKs). ARInsider projects that by the end of 2020, there will be over 2.5 billion AR-compatible mobile devices, compared to only 500 million active users. There’s a huge potential for commercial adoption of this tech.
The AR market alone is blooming, with various forecasts projecting it to reach between $30 and $80 billion in the upcoming years (1, 2, 3).
Because no costly hardware is necessary to bring it to users, augmented reality is that much more accessible. Virtual reality and mixed reality headsets, on the other hand, are still too expensive for such widespread adoption.
But what is all the fuss about? Can AR apps transform businesses and add tangible value? Check out how augmented reality is helping businesses.
Houzz enhances its selling and reach capabilities with AR, to help customers better visualize how a product would look in their house. A growing selection of products available for AR viewing helps Houzz fuel sales and increase engagement.
Once you’ve set up AR capability inside an ecommerce app, creating and adding 3D models of products is a relatively inexpensive way to bring AR to your customers.
Great-looking shoes don’t always look equally great on feet. Wanna Kicks gives its customers a chance to see how a pair of sneakers would look on their feet before making a purchase.
There’s still a huge opportunity in retail AR that will most likely be explored in the upcoming years. For example, customers could create a 3D image of their body to be used across shops for better fitting experiences. And I’m not talking here about a simple superimposition of an image of a shirt onto the picture of your body. A 3D scan would reflect how the material flows on different body shapes.
RoomScan Pro and RoomScan LiDAR
Using patented technology, RoomScan LiDAR creates floor plans using Apple’s newest hardware, including the LiDAR Scanner and the A12Z Bionic chip.
Multiple export options, floor plan views, and whole-floor scans are just some of the features RoomScan LiDAR offers its users. It’s a great tool for architects who want to quickly draw floor plans, or for homeowners.
AR Ruler App
How many times have you been in a situation where you had to measure something fast but had no tools to do it? Well, to the rescue in such situations come AR-based measuring apps such as AR Ruler App.
AR Ruler App lets users measure distance, angles, volume, and height, among many other available measurement options.
While there are still inaccuracies present in the app, with the introduction of the iPhone 12 Pro — which features Apple’s LiDAR Scanner — we can expect this family of apps to offer significantly better measurement accuracy.
SunSeeker is another tool valued by various professionals. From architects and homeowners to realtors and gardeners, SunSeeker gives users detailed information on how the sun operates in a given location.
MR, AR, XR, 3D, VR… makes your head spin just thinking about deciphering those, right?
Because it so happens that it might actually be a tricky thing to do.
As the features and capabilities of mixed reality, augmented reality, and virtual reality grow more aligned, it’s becoming increasingly difficult to define each clearly. Augmented reality and mixed reality, especially, have become problematic to define.
So without further ado, let’s see what mixed reality (MR) is.
Defining Mixed Reality: Different Takes on MR Tech
According to Deloitte, however, mixed reality is a subset of augmented reality.
But when we think about how the capabilities of augmented reality have grown over the years, AR can now be considered synonymous with mixed reality. At least, from a strictly technical point of view.
So why the disparity?
A few years back, most of us associated augmented reality solely with apps. At the time, however, AR apps and tech had rather limited capabilities in injecting interactive digital objects into the user’s environment.
The biggest problem was the lack of occlusion — in other words, the ability of 3D models to “hide” behind objects in the real world, as they would if the objects were really in our space. Its absence had the potential of ruining the perceived level of immersion for the user.
But in 2019, updates from Apple and Google eliminated the issue, letting digital AR objects better react to and coexist with the real-world environment.
Mixed reality used to be touted as superior to augmented reality precisely because of MR being the more immersive technology, capable of sensing and reacting to the world it was projected in.
That’s no longer the case.
The current AR technology lets mobile phones create high-resolution depth maps that accurately place objects in the real-world environment, taking into account other physical objects present in the space.
So why mixed reality? My guess is that the term mixed reality was coined to help users associate it with headset-based AR and remove any possible confusion with mobile AR.
This was a legitimate approach — a few years ago. Just think about the hype around Pokémon Go. It’s no wonder that an enterprise-targeting company such as Microsoft wanted to differentiate its technology.
And how does virtual reality fit into the picture?
Virtual reality completely occludes the user’s vision and replaces what the user sees with a simulation. Read more about virtual reality and augmented reality in our article.
There’s also Facebook’s take on VR in its Infinite Office, where the user sees a monochromatic version of their physical surroundings (as fed into the headset via a camera). This makes the experience fall somewhere on the spectrum of mixed reality if we use the definition proposed by Microsoft or Unity.
Another historically significant feature that is said to distinguish mixed reality from augmented reality is interactivity. When using Magic Leap One or HoloLens 2, users can manipulate digital objects with hand gestures.
Check out how our partnership with Jagermeister resulted in an interactive mixed reality experience where users could “touch” the music with their hands.
With mobile AR, on the other hand, users interact via on-screen gestures.
Lastly, mixed reality headsets simply offer a more powerful way of experiencing augmented reality. The headsets pack more sensors and give us a greater sense of immersion thanks to advanced optics through which the experience is delivered. But it’s still AR.
As you can see, the line between augmented reality and mixed reality is currently blurred to the point of nonexistence. But since the solutions that deliver AR via headsets are often called mixed reality, we’ll stick to this term for the rest of the article.