How should you integrate the cloud and IoT into your business strategy? Read on for answers.
Cloud computing has been the catalyst for achieving unimaginable goals for companies of all sizes. With the cloud, businesses can handle, manage, process, and analyze huge amounts of data.
Cloud computing enables startups, early-stage companies, and large enterprises to draw valuable business insights. Integrated with IoT, the cloud gives modern enterprises the ability to become more efficient at data analytics.
Enterprises use cloud computing services and IoT solutions to stay relevant and become more competitive. Jeff Weiner, the former CEO of LinkedIn, said the cloud gives modern companies access to the best innovation infrastructure.
How should you integrate the cloud and IoT into your business strategy? How do these two technologies tie into a single thread to make sense for a modern business? Answers below.
Cloud computing and IoT complement each other. Cloud computing allows IoT devices to record, capture, process, analyze, and store data at a massive scale.
Together, IoT and cloud computing services streamline cost-effective automation and data analytics.
When we look at the ways IoT helps modern business, user analytics is one of the major drivers for IoT adoption. Businesses use the cloud and IoT to analyze big data to reveal patterns, trends, and associations.
Modern IoT solutions are built on a basic premise — helping businesses optimize operations. Cloud computing helps process all the data generated by the IoT. Big data analytics plays a large role in making IoT solutions efficient at automation and optimization.
Cloud computing takes care of storage and security for an IoT-based app. At the same time, cloud computing acts as a bridge between the IoT platform and big data.
Cloud computing services enable IoT solutions to act intelligently by handling data storage, processing, and security.
Cloud computing paves the way for IoT devices to connect to the internet for data storage and processing. The data can then be used remotely by any other complementing technology, system, or solution.
Think of it like this: you ask Alexa, a consumer IoT device, to find you the nearest restaurant. Alexa connects to a cloud application like Google Maps and provides you with the results. The data feed (your voice), the IoT device (Alexa), and the cloud application (Google Maps) interact to give you the right result.
Storing the large data volumes generated by IoT devices can be a security nightmare. The cloud allows you to encrypt critical operational data while bringing down storage costs.
Your business doesn’t have to invest in server infrastructure and security. Plus, you get high-end security measures embedded in the cloud. Using the cloud for IoT helps in reducing the chances of leaks and cyber attacks.
Nomtek Labs is our internal research and development department where we explore technologies and how they can benefit businesses and users.
The world we live in is far from being idle. Technologies and industries evolve rapidly, with maddening speed at times. When Nomtek was founded in 2010, Apple was selling the iPhone 3G. It was the company’s second iPhone and the first to use the 3G network.
We hopped right in to participate in the development of the mobile world.
3 megabits per second — that’s the dizzying speed 3G promised. Sounds bleak compared to 100 megabits per second possible with 5G. But that’s how fast mobile life was spinning back then.
The Android ecosystem itself was also in its early stages. In October 2009, Android 2.0 Eclair was released, with the Motorola Droid reigning as the most popular mobile phone.
By today’s standards, it was a strange-looking smartphone, with a hidden keyboard at that.
Much and more has changed since Motorola Droid’s reign over ten years ago. IT systems have become denser, more complex, and more accessible.
Take a Japanese farmer who, in 2016, created a system that uses AI to classify cucumbers. Sophisticated technology such as deep learning has become increasingly present in areas commonly associated with manual labor.
3G was soon replaced by 4G in developed countries, dramatically improving network connectivity and changing how people consume content. 4G helped Netflix conquer the world of streaming services, giving users access to favorite films and series at home or on the go.
All these new connectivity technologies, more efficient chips, and the evolution of augmented and virtual reality can cause a reverberating wave of changes for industries and people across the globe.
At nomtek, we always knew that investing in our development was the best thing we could do, hence the idea for nomtek labs: our answer to the rapidly evolving world.
As a company made of people who relish discovering innovation, we don't intend to fall behind or rely solely on old technologies and methodologies.
We are tech enthusiasts who love exploring new sectors and playing with technology.
At nomtek, everyone can participate in a number of initiatives that boost knowledge and develop skills. We have weekly internal guild meetings, free time for self-development, and a budget for workshops and conferences.
Nomtek labs is one of these initiatives.
Ensuring your Android app development project is successful is easy when you know what’s involved in the process. Learn all about Android app development.
Shopping, organizing, scheduling, banking, working, or simply having fun: it's often a mobile app that does the solving, bringing convenience galore.
All right, so how come such a staggering number of apps fail?
That’s easy — because they don’t solve anything for their target audience. Indeed, it’s the failure to research the market that’s the top reason why mobile applications flop on app stores.
Whether we’re talking about iOS or Android app development, the principle is the same:
Do your research and battle test your app idea with potential users. Only then will you be able to use the application as a vehicle for business growth.
In this article, I’ll take you through the basic steps you need to take to make your Android development project successful.
A business analysis of your idea is the most important part of the app creation process.
I can't stress this enough: validating your idea before investing money in its creation will help you approach the process with the necessary level of confidence backed by data.
Here, you have to consider concepts such as:
A key phase during idea validation is a detailed analysis of your competitors.
Looking at how your competitors solve for their audiences will help you build a better strategy for your product — you’ll know what worked for them and what has drawn negative feedback.
Similarly important is determining whether your product can be lucrative. To do this, look at what your competitors' growth looks like. Have they developed a sizable customer base? Are they out looking for specialists to expand their business even more?
In essence, you validate your idea to learn if the product will pick up on the market and give you sustainable growth opportunities.
Use tools such as the Lean Canvas for a deeper dive into your product analysis.
Be it an internal application for employees, a customer-facing app, or a game, design is important across all app types. Positive user experience comes from both how the app looks and how it works.
That said, when creating designs, make sure the designer understands the project and knows your target audience. The colors, typography, pictures, animations — all have to be aligned to reflect the needs and preferences of your users.
If you’re releasing your app into a highly competitive market with many similar apps, the design of the interface can make or break a deal.
For example, users might love the functionalities offered by your competitor. But if that product has a poor interface and usability, consumers will just as well leave the application in search of a better solution.
To be a viable alternative, however, your product needs seamless UI and UX.
So, regardless of application type, keep your design smart, intuitive, and clean. A cluttered interface is a sure-fire way to make your users wrinkle their noses.
Learn how to design a chatbot assistant to support the operator in flight and accommodation searches.
Contemporary chatbots don't only support conversations but can also assist a human operator. They allow for parallel analysis of conversations, searching for answers in the background and suggesting responses that significantly shorten the operator's response time.
Their expansion has been triggered by the emergence of commercial platforms such as Dialogflow or wit.ai, which allow for quick and easy knowledge extraction while hiding complex Natural Language Processing (NLP) algorithms.
Chatbots are gaining popularity for many reasons. They:
The process of creating a bot based on recognizing intents and entities (parameters that describe an intent) can be divided into the following steps:
1. Designing a conversation. Communication with the chatbot should be similar to communication with people (within the limits of the bot type) and provide valuable information, so this should always be the first step.
2. Extraction of intents. Intents are the main actions the chatbot is able to serve. The key is to extract the abstract actions the user will be able to invoke by asking a question in natural language.
3. Extraction of entities, which serve as parameters supplied to intents.
4. Training the bot by asking it questions in natural language. What matters is not the quantity but the diversity of questions, i.e., rephrasing questions about the same intent.
5. Testing and retraining. The next step is to test the matching of intents and entities with a bigger number of users. At this stage, integration with more communication channels, e.g., Slack, Skype, or Facebook Messenger, might be crucial. Content gathered this way serves testing purposes and improves the bot.
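As a rough illustration of the intent and entity extraction steps above, here's a minimal Python sketch of pulling the top intent and its entities out of an NLU response. The response shape, entity names, and the helper functions are simplified assumptions modeled loosely on wit.ai-style output, not the actual API payload.

```python
# Simplified sketch of intent/entity extraction from an NLU response.
# The dictionary shape below is an assumption for illustration only.

def extract_top_intent(response, threshold=0.7):
    """Return (intent_name, confidence) if the best match clears the threshold."""
    intents = sorted(response.get("intents", []),
                     key=lambda i: i["confidence"], reverse=True)
    if intents and intents[0]["confidence"] >= threshold:
        return intents[0]["name"], intents[0]["confidence"]
    return None, 0.0

def extract_entities(response):
    """Flatten entities into a simple {entity_name: value} mapping."""
    return {name: hits[0]["value"]
            for name, hits in response.get("entities", {}).items() if hits}

# Example: "I am looking for a flight from Warsaw to Berlin tomorrow at 8:00 p.m."
sample = {
    "intents": [{"name": "search_flight", "confidence": 0.93},
                {"name": "accomodation_search", "confidence": 0.04}],
    "entities": {"origin": [{"value": "Warsaw"}],
                 "destination": [{"value": "Berlin"}],
                 "datetime": [{"value": "tomorrow 20:00"}]},
}
intent, confidence = extract_top_intent(sample)
entities = extract_entities(sample)
```

The operator-assist layer would then use the extracted intent and entities to run the actual flight or hotel search in the background.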
Our challenge was to design a bot that helps in the search for accommodation and flights. There are already a number of similar bots created by flight and hotel search companies, but we wanted to design a bot that uses the Polish language.
At the moment, Polish is not supported by Dialogflow, so we used wit.ai, which supports recognition of some units in Polish (in beta), e.g., date or location. We distinguished two main intents: search_flight and accomodation_search.
The next step was to create a training database (the corpus) containing different sentences related to the intents. To train a model that matches intents well, the diversity, not just the number, of sentences must be emphasized. Providing additional communication channels (Slack, Facebook Messenger) makes it easy for many different users to question the chatbot.
At this stage, the bot doesn't return any sensible answer. It only returns the result of intent processing and indicates the probability of its accuracy, which allows the user to assess the accuracy of the returned results and test other possibilities. Wit.ai collects the sentences and adds them to the training database, which improves the training of the recognition model.
A model trained this way could sufficiently support the operator. It allowed for prompt aggregation of databases containing intents and entities.
Examples of processing results:
1. “I am looking for a hotel for two people in Warsaw for the weekend”
2. “I am looking for an all-inclusive hotel on the Canary Islands in May”
3. “I am looking for a flight from Warsaw to Berlin tomorrow at 8:00 p.m.”
Precision and recall (also known as sensitivity) training results for the intents:
For comparison (chart below), the result of training the same intents with the first, ineffective method.
The ascending strategy was very unsuccessful. It consisted of identifying intents starting from short sentences, e.g., "I am looking for a hotel," and moving to longer ones: "I am looking for a double room in a hotel in Kolobrzeg." Increasing the sentence database didn't improve intent matching, so this strategy failed to train the model well. Eventually, we performed the training without the short sentences in the database.
As a result, we were able to better identify intents and entities.
A short explainer article on machine learning and artificial intelligence. Learn what machine learning is and how to use it.
Below, I'll give you an idea of what machine learning is and what you can expect from this arcane field.
Don't be afraid: rational expectations do not include Terminator-style killing machines or a rise of the robots. On the contrary, I will try to convince you that machine learning is a wonderful field, deeply rooted in science and engineering, that can give your organization a competitive advantage in the market.
Machine learning is not AI. You might be surprised, because AI and machine learning are terms that are often used interchangeably in articles, blog posts, etc. But the truth is that machine learning is "only" a subfield of the much broader field of Artificial Intelligence. Based on one of the most popular textbooks about artificial intelligence, "Artificial Intelligence: A Modern Approach" (AIMA), I created a picture that presents this fact.
Machine learning is a very important part of AI, but only a part of it. This should not be surprising: after all, AI is about agents acting rationally in an environment. To act rationally, you need much more than the ability to extract patterns from data. You need to receive perceptions, build knowledge about the world, reason rationally, and act. Machine learning is only one part of this great journey.
Supervised learning means that we train the model with data that contains inputs that are correctly labeled. For example, if we want to create a model that predicts housing prices, then in addition to input data (e.g., floor area, location, number of rooms, number of floors, etc.) we also have the price of each house. After training a model with enough samples (each sample represents a house), we expect the model to generalize: find correlations between input vectors and the output (price) and correctly predict prices for houses outside the training set. The most popular algorithms used for supervised learning are mentioned below:
This is a standard, highly interpretable method for regression, i.e., modeling the relationship between independent input variables and an output variable that depends on them. For example, this could be a simple linear dependency between floor area and house price, easily representable in 2D space.
A business case might be to understand product-sales drivers such as price, competitors' prices, quality, etc.
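To make the idea concrete, here is a toy sketch (not production code) that fits a one-variable linear regression by ordinary least squares; the floor-area and price numbers are made up for illustration.

```python
# Toy linear regression: fit price = slope * floor_area + intercept
# by ordinary least squares on a single feature.

def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope is the covariance/variance ratio; intercept follows from the means.
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical training data: floor area in square meters -> price in thousands.
areas = [40, 55, 70, 85, 100]
prices = [200, 275, 350, 425, 500]

slope, intercept = fit_line(areas, prices)
predicted = slope * 120 + intercept  # predicted price for a 120 m^2 house
```

On real data you would use a library implementation (e.g., scikit-learn) and many features, but the closed-form idea stays the same.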
This model is similar to linear regression but is used for classification tasks. Classification means we expect a binary value as output: the model predicts that something is either true or not. For example, in our case we might want to know whether a house will be sold in the next 6 months; the output is either yes or no.
Another business case might be deciding whether a loan will be repaid or not.
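A minimal sketch of how such a classifier could be trained with gradient descent, assuming a single made-up feature (normalized income) and labels for whether past loans were repaid; a real model would use many features and a library implementation.

```python
import math

# Toy logistic regression with one feature, trained by stochastic
# gradient descent on the log-loss.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(xs, ys, lr=0.5, epochs=2000):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(w * x + b)
            # Gradient of the log-loss for a single sample.
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

# Hypothetical data: low incomes defaulted (0), high incomes repaid (1).
incomes = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]
repaid_labels = [0, 0, 0, 1, 1, 1]

w, b = train(incomes, repaid_labels)
repaid = sigmoid(w * 0.85 + b) > 0.5      # classify a high-income applicant
defaulted = sigmoid(w * 0.15 + b) > 0.5   # classify a low-income applicant
```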
A decision tree is a highly interpretable model that can solve both classification and regression problems. It is highly interpretable because it is represented as a tree that splits data values at each branch depending on some features. In each leaf of the tree, there is an output value.
A business case in our house example would be to recognize which features are most important for determining house price. Another case might be to understand product features that make it most likely to buy.
This method is an example of ensemble learning: its result is a combination of many models of the same type. In the case of a random forest, it is easy to guess that the model type is, of course, a decision tree. A random forest improves accuracy over a single decision tree by averaging the results of many trees, each trained on a random subset of the data and features. However, we lose the high interpretability of a simple decision tree.
A business case might be, for example, predicting power usage in an electrical grid.
This is a classification technique. It uses Bayes' theorem to calculate the probability of an event based on knowledge of factors that might affect it. It is a rather simple technique, but for text categorization it is competitive with more advanced approaches.
A typical business case is to categorize emails as spam or not based on occurrences of words in the text.
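Here's a toy illustration of the idea with tiny hand-made word lists; a real spam filter would use a proper corpus and a library such as scikit-learn.

```python
import math
from collections import Counter

# Toy naive Bayes spam filter over word occurrences, with Laplace smoothing.

def train_nb(docs):
    # docs: list of (word_list, label) pairs, label in {"spam", "ham"}.
    word_counts = {"spam": Counter(), "ham": Counter()}
    class_counts = Counter()
    vocab = set()
    for words, label in docs:
        class_counts[label] += 1
        word_counts[label].update(words)
        vocab.update(words)
    return word_counts, class_counts, vocab

def classify(words, word_counts, class_counts, vocab):
    total = sum(class_counts.values())
    best_label, best_score = None, float("-inf")
    for label in class_counts:
        # Log prior plus Laplace-smoothed log likelihood of each word.
        score = math.log(class_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in words:
            score += math.log((word_counts[label][w] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

docs = [
    (["win", "money", "now"], "spam"),
    (["free", "money", "win"], "spam"),
    (["meeting", "tomorrow", "agenda"], "ham"),
    (["project", "meeting", "notes"], "ham"),
]
model = train_nb(docs)
label = classify(["win", "free", "money"], *model)
```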
This technique is usually used for classification but can also be applied to regression problems. The model represents training inputs as points in space and tries to find a gap between categories that is as wide as possible. When a test vector is checked, it falls on one side of this gap, and that side represents the result.
A nice business case might be to determine whether a user is likely to click on an ad or not.
Neural networks deserve a separate text of their own; here are just a few words to explain how they work. Neural networks try to mimic the way neural connections in the human brain work. They usually contain a few layers of neurons, where each neuron in one layer is connected to some neurons in the next layer. The first layer represents the input vector, and the last layer represents the output.
Neural networks are good at representing complicated non-linear dependencies between the input and output. They have been known since the 1950s, but until about the 2000s we did not have enough computing power to make them work for non-trivial examples. The huge improvement in the field was also made possible by the rediscovery of the backpropagation algorithm in the 1980s. There are different kinds of neural networks: standard neural networks, convolutional neural networks, and recurrent neural networks; however, I will not go into the details of their specifics here.
Possible use cases are broad: they can solve all of the use cases mentioned for the other models. Fine-tuned by an expert, they can achieve much better results than the models described previously. One of the typical cases is handwritten text recognition; it was the introduction of neural networks that made this task feasible for industry.
When we don't have labeled training data but still want to find patterns in it, we talk about unsupervised learning. Unsupervised learning algorithms try to infer structure from the given data; most commonly, they gather the training data into clusters (in multidimensional space). After training such a model, we can check to which cluster new input data belongs. For example, this seems like a good method for segmenting customers according to the different parameters we can gather about them. Some unsupervised algorithms are listed below:
This algorithm clusters the data into k separate groups (hence the name). Each group contains the input vectors that are nearest to one another in d-dimensional space, where d is the number of attributes in each vector.
The business case would be, for example, to find groups of houses that are similar to one another according to parameters such as price, price per square meter, city, location, number of floors, etc. Another obvious business case is the one mentioned earlier: segmenting customers into different groups to better target each group with marketing campaigns.
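A minimal one-dimensional sketch of the assignment/update loop, with made-up price-per-square-meter data and fixed initial centroids for reproducibility:

```python
# Toy 1D k-means: cluster prices per square meter into k groups.

def kmeans_1d(points, centroids, iterations=20):
    clusters = [[] for _ in centroids]
    for _ in range(iterations):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Two obvious price groups: cheap (~5k) and expensive (~12k) per square meter.
prices = [4.8, 5.1, 5.3, 11.9, 12.2, 12.6]
centroids, clusters = kmeans_1d(prices, centroids=[4.0, 13.0])
```

In practice, initial centroids are chosen randomly (or by k-means++) and the data has many dimensions, but the two-step loop is the same.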
This algorithm creates a hierarchical classification tree where each node represents a group. Subnodes of a node represent a further split of the node's group.
In a typical use case, you can cluster your customers into more and more detailed groups. As those groups are represented as a hierarchy, you can, for example, target a marketing campaign at a group represented by a node at any level of the tree, depending on your specific needs.
Recommender systems are not a separate technique. Rather, such systems usually use clustering algorithms to identify groups of similar users or items for which similar things should be recommended.
A use case might be to recommend which movies a user should watch using user similarity to other users.
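A toy sketch of that idea under stated assumptions: the users, movie titles, and ratings are invented, and similarity is plain cosine similarity over rating vectors (real recommenders are far more elaborate).

```python
import math

# Toy user-based recommendation: find the most similar user by cosine
# similarity over ratings, then suggest movies they rated highly.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

# Rows are users, columns are movies; 0 means "not rated yet".
movies = ["Matrix", "Titanic", "Inception", "Notebook"]
ratings = {
    "alice": [5, 1, 0, 1],
    "bob":   [5, 2, 4, 1],   # similar taste to alice
    "carol": [1, 5, 1, 5],
}

def recommend(user):
    others = {u: r for u, r in ratings.items() if u != user}
    nearest = max(others, key=lambda u: cosine(ratings[user], others[u]))
    # Recommend movies the similar user liked that `user` hasn't rated yet.
    return [m for m, mine, theirs
            in zip(movies, ratings[user], ratings[nearest])
            if mine == 0 and theirs >= 4]

suggestions = recommend("alice")
```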
Reinforcement learning is used when there is not enough data to learn from, or when a program needs to interact with its environment to receive feedback about the value of its actions. Over its lifetime, a reinforcement learning algorithm tries to maximize the reward it receives for its actions.
Some examples of use cases might be the optimization of trading strategy (receiving constant feedback about the rewards of previous decisions/actions). Another case could be to balance the load of electricity grids (also learning over time which actions maximize the balance of the grid).
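The reward-maximization loop can be sketched with a simple epsilon-greedy bandit; the three actions and their payoffs below are invented, and rewards are deterministic for clarity (real environments are noisy).

```python
import random

# Toy epsilon-greedy bandit: repeatedly pick one of three actions,
# observe a reward, and update running value estimates.

random.seed(42)
true_rewards = [1.0, 2.0, 1.5]   # hypothetical payoff of each action

def pull(arm):
    return true_rewards[arm]

estimates = [0.0, 0.0, 0.0]
counts = [0, 0, 0]
epsilon = 0.1

# Try every action once so the initial estimates are grounded.
for arm in range(3):
    counts[arm] += 1
    estimates[arm] = pull(arm)

for _ in range(1000):
    if random.random() < epsilon:
        arm = random.randrange(3)               # explore a random action
    else:
        arm = estimates.index(max(estimates))   # exploit the best-looking one
    reward = pull(arm)
    counts[arm] += 1
    # Incremental mean update of the action's value estimate.
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

best = estimates.index(max(estimates))  # the agent settles on action 1
```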