The Journey Of Your Data! From Your Sensors To Your Dashboards!

“The Journey Of Your Data! From Your Sensors To Your Dashboards!” is the title of the talk I gave at the last industry-science meetup organized by the University of South-Eastern Norway (USN) and Energy Valley. The meetup aimed to create an arena where regional high-tech industry players from the energy and engineering sector can meet and interact with strategic research initiatives, interdisciplinary research, and science groups from USN. Since the goal of the meetup was to show results from applying the latest state-of-the-art research to real industry problems, we decided to give a technology introduction to Qlarm. My talk was a high-level journey through the most important architectural design decisions one must make when designing a solution like Qlarm.

Qlarm

Qlarm is an intuitive cloud platform that enables condition monitoring, intelligent alarming, interactive notifications, reporting, and analytics tailored for industrial control systems and industrial IoT implementations. As a solution, Qlarm is the result of many years of experience in control systems, a deep understanding of the core industrial problems, and Nebb’s recent R&D activities in cloud technologies and machine learning. One of Nebb’s core services is the design and implementation of complex control systems in various domains, and the task at hand was to find a way to transform raw sensor data into meaningful information for our customers. Qlarm not only does that but goes a step further: with Nebb’s cloud-based industrial IoT and control system platform, we bring additional benefits to our customers.

Cloud Gateway

When you build a cloud industrial control system or industrial IoT solution, all the sensor data will arrive at a certain endpoint in your cloud solution. You might have a couple of gateways in between, or mesh communication among the sensors, but eventually the data will hit a certain cloud endpoint. The cloud endpoint acts as a central message hub for bi-directional communication between your IoT application and the devices it manages (a minimal sketch of a device talking to such an endpoint follows the list below). In general, the cloud gateway is defined by the following quality attributes:

  • Security: Security is equally important throughout the solution. The cloud endpoint is the entry point into our cloud solution, so it is essential that we control which sensor sends what kind of data in a given time frame.
  • Scalability: If our customers want to introduce more measuring points or more production units to monitor, then we must support that. An increased rate of data messages per second must be supported out of the box.
  • Interoperability: Modern cloud solutions must scale not only in the number of endpoints (industrial IoT or control systems) but also in their variety. The supported communication protocols define the interoperability of the solution. A couple of protocols are already accepted as standards, but different products still support and require different communication protocols.
  • Reliability: The cloud endpoint must be reliable and must not lose any sensor data. An integrated buffer is one way to achieve higher reliability.
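
To make this concrete, here is a minimal Python sketch of a device pushing a single reading to such a cloud endpoint, using the Azure IoT Hub device SDK purely as an example; the connection string, device ID, and sensor name are placeholders, not Qlarm’s actual implementation.

    import json
    from azure.iot.device import IoTHubDeviceClient, Message

    # Placeholder connection string issued by the cloud gateway (here, an IoT Hub).
    CONN_STR = "HostName=<hub>.azure-devices.net;DeviceId=<device>;SharedAccessKey=<key>"

    def send_reading(client: IoTHubDeviceClient, sensor_id: str, value: float) -> None:
        # Each reading travels as a small JSON message over a secured channel.
        msg = Message(json.dumps({"sensor": sensor_id, "value": value}))
        msg.content_type = "application/json"
        client.send_message(msg)

    client = IoTHubDeviceClient.create_from_connection_string(CONN_STR)
    client.connect()
    send_reading(client, "engine-2/temperature", 87.4)
    client.disconnect()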

Time-Series Database

We already know how to get the sensor data to the cloud. Before we do anything with the data, we need to store it, and to decide on the storage technology, we have to look at what the structure of our data actually is. Our data, both from industrial IoT and control systems, measures how things such as equipment, conditions, weather, etc., change over time. Time is not just a metric but the primary axis. This type of data is time-series data, and since we are dealing with time-series data, the logical solution is to use a time-series database (TSDB). Time-series databases have been among the fastest-growing database categories over the last two years, and there are solid reasons for that:

  • Scalability: Time-series data accumulates very quickly, and normal databases are not designed to handle that scale. Time-series databases handle it by introducing efficiencies that are only possible when you treat time as a first-class citizen.
  • Usability: TSDBs also typically include functions and operations common to time-series data analysis, such as data retention policies, continuous queries, and flexible time aggregations. A short sketch of writing and querying time-series data follows below.
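
As a rough illustration of both points, here is a small Python sketch that writes one reading and runs a windowed aggregation, using InfluxDB and its Python client purely as an example; the host, database, and measurement names are made up.

    from influxdb import InfluxDBClient

    # Placeholder host and database name.
    client = InfluxDBClient(host="localhost", port=8086, database="qlarm_demo")

    # One point per reading: time is the primary axis, tags identify the source.
    client.write_points([{
        "measurement": "temperature",
        "tags": {"unit": "engine-2"},
        "time": "2018-11-20T10:15:00Z",
        "fields": {"value": 87.4},
    }])

    # Flexible time aggregation: mean per 5-minute window over the last hour.
    result = client.query(
        'SELECT MEAN("value") FROM "temperature" '
        'WHERE time > now() - 1h GROUP BY time(5m)'
    )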

Serverless Architecture

So far, our data reaches the cloud, and we know how and where to store it, but there is still one thing missing between these two components. The cloud endpoint is not responsible for data storage, nor is the time-series database responsible for data ingestion. Looking back at the quality attributes of the first two components, what they have in common is scalability. Both the cloud endpoint and the time-series database can scale to support more messages per second. The component in the middle must support this scalability; otherwise, we lose the scalability of the whole solution. To achieve solution-wide scalability, we can use the Serverless Architecture pattern. Serverless Architecture is a software design pattern where applications are hosted by a third-party service, eliminating the need for server software and hardware management by the developer. Applications are broken down into individual functions that can be invoked and scaled individually. The main benefits of Serverless Architecture are near-instant, virtually unlimited scalability and pay-per-use pricing: if we have more load, the third-party provider will almost instantly scale to handle it, and we are charged only for the resources we actually use. If there is no load, the customers don’t have to pay anything.
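
A minimal sketch of such a middle component, assuming Azure Functions with the Python programming model; the function.json binding configuration is omitted, and store_reading is a hypothetical stand-in for the TSDB client call.

    import json
    import azure.functions as func

    def store_reading(reading: dict) -> None:
        # Hypothetical helper; in practice this would wrap the TSDB client
        # (e.g., the write_points call sketched in the previous section).
        print("storing", reading)

    # Entry point, bound (via function.json, omitted here) to the cloud
    # endpoint's event stream. The platform runs as many instances in
    # parallel as the incoming message rate requires, and we pay only
    # while the function is actually executing.
    def main(event: func.EventHubEvent):
        reading = json.loads(event.get_body().decode("utf-8"))
        store_reading(reading)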

Anomaly Detection with Serverless Architecture

Functional Dashboards

So far, we have managed to get the data from the sensors to the cloud, and what we have is data, but do we have information? From a developer’s point of view, the solution is almost ready, but from a customer’s point of view, we have nothing. The next step is to build a web application that displays the data. Showing the raw time-series data is not enough; customers demand meaningful information. Therefore, we need to design appropriate dashboards. There are multiple popular ways to categorize dashboards based on their purpose (analytical, strategic, operational, tactical, etc.). The operational dashboard is a digital version of the control room: it provides almost “real-time” information to the operators so they can act quickly. A very common dashboard type for this purpose is a “Live View” dashboard, where the latest sensor values are shown. However, showing the latest values from all the sensors and control systems is not useful enough; it quickly leads to a broken UX, because a human being cannot extract meaningful information from some 10,000 sensor values. One thing that can help is showing only the sensors or control systems with anomalies. But how do we detect an anomaly? One way is to use fixed thresholds: for instance, if the engine temperature is over 100 degrees, show it on the dashboard (see the sketch below). This approach is an improvement, but it still has disadvantages. First, entering all the thresholds is manual work, and second, the thresholds can change over time: equipment deteriorates physically, or the environment changes, and the thresholds are no longer valid. In a large-scale solution with thousands of industrial IoT sensors or control systems, this can be very difficult to maintain.
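
Here is a minimal sketch of the fixed-threshold approach; the sensor names and limits are hypothetical.

    # Manually configured limits: one entry per monitored signal.
    THRESHOLDS = {"engine-2/temperature": 100.0, "pump-1/pressure": 8.5}

    def exceeds_threshold(sensor_id: str, value: float) -> bool:
        limit = THRESHOLDS.get(sensor_id)
        return limit is not None and value > limit

    readings = {"engine-2/temperature": 104.2, "pump-1/pressure": 7.9}
    # Only readings over their limit reach the dashboard.
    alerts = {s: v for s, v in readings.items() if exceeds_threshold(s, v)}
    # alerts == {"engine-2/temperature": 104.2}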

Operational Dashboard

What if we could automatically detect an anomaly in the sensor data and display only the anomalies on the operational dashboard? In that case, we don’t overload the customer with information; we show only what matters: we have detected an anomaly in the second engine of production process X. Do you know the customer’s answer to this? Yes, that’s what we want! Great, we’ve found something that our customers need. Now we need to make it happen. Fortunately, there is a solution that fits into our architecture very well. For time-series anomaly detection, there are already mature algorithms that give highly reliable results, as well as libraries that implement them and allow them to run in a stateless fashion. Based on this, we can create a serverless component that runs the time-series anomaly detection algorithm on each stream of data and pushes only the detected anomalies to the dashboard.
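
As a sketch of the idea (not Qlarm’s actual algorithm), here is a simple rolling z-score detector in Python: a value is flagged when it deviates more than three standard deviations from a rolling window. A real deployment would load the recent window from storage on each invocation and likely use a more robust algorithm, but the per-stream shape is what matters for the serverless component.

    from collections import deque
    from statistics import mean, stdev

    class RollingZScoreDetector:
        def __init__(self, window: int = 60, threshold: float = 3.0):
            self.values = deque(maxlen=window)  # rolling window of recent values
            self.threshold = threshold

        def is_anomaly(self, value: float) -> bool:
            anomalous = False
            if len(self.values) >= 2:
                mu, sigma = mean(self.values), stdev(self.values)
                anomalous = sigma > 0 and abs(value - mu) / sigma > self.threshold
            self.values.append(value)
            return anomalous

    detector = RollingZScoreDetector()
    for v in [87.0, 87.3, 86.9, 87.1, 112.5]:  # the last value is the outlier
        if detector.is_anomaly(v):
            print("anomaly:", v)  # push only this value to the dashboard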

Qlarm Anomaly View

We live in a time when technology changes faster than ever. One wrong design decision in your solution can delay your time to market, and that can lead to lost opportunities. On this journey, we have shown how proper architectural reasoning can result in a scalable and functional product such as Qlarm. If Qlarm is what you need, or you want to find out more, feel free to reach out to us.

DevReach, Sofia, 2018 Impressions


A few weeks ago, I had the chance to attend the DevReach software development conference, which took place on November 12-14 in Sofia, Bulgaria. This year, the conference was a 3-day event: a pre-conference workshop on the first day, followed by 2 days of conference sessions.
DevReach strives to be the premier developer conference in Central and Eastern Europe. With more than 800 attendees from nearly 20 countries, DevReach marked its 10th edition this year. The conference is intended for IT professionals engaged or interested in application development. This event featured world-renowned industry experts who shared their knowledge in a stimulating, enjoyable, and friendly atmosphere. As a conference, DevReach offers the ideal opportunity to enhance your proficiency in software application development and boost your confidence.

I was told that this anniversary edition was going to be special, and before this, I had never had the chance to attend such a big event. It lived up to my expectations: there was so much quality content and so many good speakers that I wished I could have been in several places at once.

Looking at this year’s agenda, I decided that my focus would be getting up to date on .NET technology.

For that reason, I was looking forward to seeing Jon Galloway, who works at Microsoft on ASP.NET and Azure and serves as Executive Director of the .NET Foundation. I attended both of his presentations: .NET Core Today and Tomorrow and Blazor – A New Framework for Browser-based .NET Apps.
He introduced us to a lot of exciting new features in .NET Core 2.1 and .NET Core 2.2. The key focus was on performance improvements, and I must say, .NET Core’s performance has been improved enough to satisfy quite demanding requirements. He also presented an early look at .NET Core 3.0 and the experimental Blazor project, which is set to run .NET code in the browser via WebAssembly.
Blazor is a new experimental web UI framework from the ASP.NET team that aims to bring .NET applications into all browsers (including mobile) via WebAssembly. It allows us to build true full-stack .NET applications, sharing code across server and client, without the need for transpilation or plugins.
There were a few demos, and some of them failed during the presentation, but it was nice to have the chance to look at something new. Jon Galloway showed a modern, component-based architecture (inspired by modern SPA frameworks) at work, using it to build a responsive client-side UI. The demos covered both basic and advanced scenarios using Blazor’s components, router, DI system, and JavaScript interop.
In my opinion, the main disadvantage of Blazor is bandwidth: a lot of data is transferred between server and client, almost 1 MB on the initial request. Will it be accepted by developers? I guess it will be used in small applications where there is no need for client-side experts.

Interesting topics were presented by Jeremy Likness, a Cloud Developer Advocate for Azure at Microsoft. He covered some of the most interesting topics of the moment: Cosmos DB and serverless .NET technology.
His session Hiker’s Guide to the Cosmos (DB) introduced us to Cosmos DB. In his experience, developers are daunted by the triple punch of giving up relational databases like SQL, allowing their data to live in the cloud, and moving from hands-on hardware to turnkey solutions. He showed us the multi-model structure of Cosmos DB and its benefits. With the click of a button, Azure Cosmos DB lets you elastically and independently scale throughput and storage across any number of Azure’s geographic regions, and it offers throughput, latency, availability, and consistency guarantees with comprehensive service level agreements (SLAs), something no other database service can offer.
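
As a rough illustration (in Python, to keep all the examples in one language), here is what provisioning a container with its own throughput and writing an item looks like with the azure-cosmos SDK; the account endpoint, key, and names are placeholders.

    from azure.cosmos import CosmosClient, PartitionKey

    # Placeholder endpoint and key from the Azure portal.
    client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
    database = client.create_database_if_not_exists(id="demo")
    container = database.create_container_if_not_exists(
        id="readings",
        partition_key=PartitionKey(path="/deviceId"),
        offer_throughput=400,  # provisioned RU/s, scaled independently of storage
    )
    container.upsert_item({"id": "1", "deviceId": "engine-2", "temperature": 87.4})
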
Another interesting topic by Jeremy Likness was Code First in the Cloud: Serverless .NET with Azure.
The popularity of microservices, combined with the emergence of serverless solutions, has transformed how modern developers tackle cloud-native apps. Microsoft’s Azure cloud provides serverless offerings (including Azure Functions and Logic Apps) that let developers stand up integrated endpoints in the programming language of their choice, without having to worry about the supporting infrastructure. His examples showed how to develop serverless .NET apps and connect them with queues, web requests, and databases, or seamlessly integrate them with third-party APIs like Twitter and Slack.
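
As a tiny illustration of the programming model (sketched in Python rather than .NET, to keep all the examples in one language), an HTTP-triggered Azure Function is just a function; the function.json binding configuration is again omitted.

    import azure.functions as func

    # Invoked by the platform for each HTTP request; queues, databases, and
    # third-party APIs are connected the same way, via input/output bindings.
    def main(req: func.HttpRequest) -> func.HttpResponse:
        name = req.params.get("name", "world")
        return func.HttpResponse(f"Hello, {name}!", status_code=200)
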
I must say, his presentation refreshed our memory and understanding of serverless technology. We have been using Azure Functions in our applications for some time, but it is always nice to get additional information about the bottlenecks that can appear if the technology is not used properly.

I have always been fascinated by application security, and authentication is one of my favorite topics. That is why I decided to see the presentation by Chris Klug, The Whirlwind Tour of Authentication & Authorization with ASP.NET Core.
Authentication and authorization are not fun topics for most people. They are generally that thing that must be there but that nobody really cares about. On top of that, the requirements are a little different every time, and every time we must figure out how to write all the plumbing to get it done properly. It is security, after all.
In ASP.NET Core, Microsoft has made it easy to get it all done. In most cases, it takes only a few lines of code and some minor configuration, and you are up and running. However, if you don’t know what you are doing, it can be a daunting task.
For me, his presentation was the best: exactly what we all need, working code and examples, simply outstanding. He showed different types of authorization, how they are configured, and how they are used. There were social logins, local logins, even AD-based logins, which we can use in one of the products we are working on, where users come from Active Directory.
He also presented token-based logins for securing Web APIs: everything we need to properly set up and run authentication for our users in ASP.NET Core.

I am happy to say that I found the whole conference experience amazing. It was such a privilege for me to be able to attend, and I must express my thanks to Nebb for allowing me to be part of this year’s event.

Written by: Jovica Mitkovski
Category: Control Systems