Where did we come from?
IT (short for Information Technology) entered our daily lives in the early 1980s with the PC (Personal Computer). At that time, IT consisted of using computers to store, retrieve and manipulate data or information, often in a business context and in the consumer goods sector. Its characteristics were a single piece of equipment (the PC) and a single user (the human), with no connection to the outside world. The economic model consisted mainly of selling electronic equipment to the general public: the PC, made up of hardware and software. Technologically, this new economic model rested on (i) the development of microprocessors, (ii) the birth of a new operating system (OS), and (iii) the appearance of a human-machine interface (HMI) that humans could use directly with the help of a mouse.
A second step occurred in the early 1990s, when PCs became capable of being connected to each other over the telephone line: this was the starting point of the internet for the general public. This new stage was characterized by a single piece of equipment (the PC) and a single user (the human being), now connectable to the outside world. It was also the starting point for a new business model of providing answers to online questions via internet sites. New revenues appeared, based on redirecting potential customers, advertising and providing files.
On the technological front, this second step was based on the development of the microprocessor, the appearance of a human-machine interface (HMI) directly usable via a web browser, and the considerable increase in storage capacity in the back offices of new emerging companies such as Google.
Where are we now?
A third step occurred in early 2007 with the arrival of the first iPhone, and consisted of the fusion of information technologies with those of telecommunications. This gave rise to information and communication technologies, called ICT or more commonly the Mobile Internet, which offered easy “Peer-to-Peer” interactions between users. This third step was characterized by the appearance of a second piece of equipment in addition to the PC: the smartphone, intended for a single (human) user, with a wireless connection to the network. It also gave birth to a third piece of equipment: the tablet, a kind of hybrid device, sometimes used as a small computer, sometimes as a kind of large smartphone. This third step gave birth to two new economic models:
1. The first economic model is characterized by the fact that users produce the data themselves. This is what we call Web 2.0, defined by the free production of information by users for the benefit of the companies that adopt this model, such as Facebook. These companies thereby gain CAPEX-free production of data, the raw material of this new economy.
2. The second economic model is characterized by the fact that the service providers working through these new companies (service platforms) invest in their own equipment in order to deliver the service. This is what we call the Uber or Airbnb model. The service provider thus becomes an investor who is not remunerated as such by the platform, yet must also bear the depreciation of the equipment. Companies using this model therefore benefit from a CAPEX-free investment that makes the service possible. There can be no better “cost-killer”, since the cost is shifted onto the service provider.
This third step is technologically based on the development of microprocessors, the appearance of a new human-machine interface (HMI) directly usable from a store with “apps”, and significantly increased storage capacity, together with the use of wireless telecommunications for the internet. All this resulted in a “dematerialization” which, from the end user’s point of view, moved their storage space and data processing from local devices to remote servers, commonly called the Cloud.
All these activities have been made possible by major advances in microprocessors (ever more powerful), storage (ever greater capacity), increasingly user-friendly HMIs, and wireless telecommunications. It is very important to understand that all this data is generated and/or used by humans, via two devices: the PC and the smartphone, the tablet being simply a hybrid of the two. These two devices are controlled by proprietary environments: Windows, iOS and Android.
In this context, the evolution described below is unavoidable. Above all, it is important to understand that what we call the Internet of Things (IoT) consists of electronic devices directly or indirectly connected to the internet. All sectors of economic activity are and/or will be impacted: real estate, industry, agriculture, health, transport, education, etc.
Faced with such a quantity and diversity of IoT devices to manage, the processing of information quickly becomes colossal in terms of data, links, IT processing and storage, not to mention the impact this will have on legal processes, insurance and economic models. To face this emerging world, it is essential to set up an infrastructure capable of managing this massive influx from the Internet of Things. A new approach to Information and Communication Technologies (ICT) is therefore imperative, given that current ICTs depend solely on two electronic devices, the PC and the smartphone, connected to the internet and interacting under the control of a human being.
What are the main ICT sub-systems?
Information and Communication Technologies consist of the following elements: first, the electronic equipment, which consists essentially of a processor (CPU), an operating system (OS) and a Human-Machine Interface (HMI); then data centers, which are mainly sites for storage and mass processing using technologies such as AI (Artificial Intelligence); and finally, the telecommunications that connect these different subsystems.
What are the main challenges?
To avoid multiplying the IT resources of each IoT device, and indeed to reduce them, it must be possible to share the “hardware” resources of IoT devices among themselves. This is essential in order to achieve M2M (Machine to Machine), that is, the possibility for two IoT devices to perform their functions directly between themselves without connecting to the Cloud or to a third electronic device. We need a technology that can simultaneously mix and manage industrial time, known as “real time”, across multiple IoT devices, and human time, with its user interface. We also need to secure connected objects so that they do only what they were built for. In other words, we must secure the electronic device intrinsic to the object.
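The mixing of “industrial time” and “human time” described above can be illustrated with a minimal, hypothetical sketch (not SynapOS code): a single scheduler loop in which a hard-periodic sensor task always runs on its deadline, while a best-effort UI task fills the remaining ticks. The task names, tick granularity and period are illustrative assumptions.

```c
/* A minimal sketch of interleaving real time and human time in one loop.
   The 10-tick period and the task bodies are illustrative, not SynapOS. */

#define SENSOR_PERIOD 10 /* the sensor task must run every 10 ticks */

static int sensor_runs = 0;
static int ui_runs = 0;

static void sensor_task(void) { sensor_runs++; } /* real-time ("industrial") work */
static void ui_task(void)     { ui_runs++; }     /* best-effort HMI work */

/* Run the loop for `ticks` ticks: the real-time task always runs exactly
   on its deadline; the UI task gets every remaining tick. */
void run_scheduler(int ticks)
{
    for (int t = 0; t < ticks; t++) {
        if (t % SENSOR_PERIOD == 0)
            sensor_task(); /* deadline reached: real time wins */
        else
            ui_task();     /* otherwise serve the human user */
    }
}

int get_sensor_runs(void) { return sensor_runs; }
int get_ui_runs(void)     { return ui_runs; }
```

The point of the sketch is the priority rule: human-time work never delays a real-time deadline, yet both share the same execution resource.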
Then we must secure all the data manipulated by the IoT devices themselves. All this data is the property of the owner of the connected object, which is why a technology that is not open to all, and that remains under the control of the data’s owner, is mandatory. With such a large number of electronic devices, energy consumption becomes crucial, and we must adopt an ecological attitude towards the technology deployed for this new world of IoT.
Finally, execution must be decoupled from the Cloud, for security reasons but also because the connection to the Cloud can be momentarily interrupted (breakdown, maintenance, etc.). Data processing must then be able to run locally, completely independently of the Cloud, with or without an internet connection. This is what we call Edge Computing.
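The Edge Computing behaviour described here can be sketched as follows. This is a hypothetical illustration, not SynapOS code: measurements are always processed locally, and results are forwarded to the Cloud only when the link happens to be up, with a small local buffer absorbing outages. The queue size, the processing step and the upload stub are all assumptions.

```c
/* A minimal Edge Computing sketch: local processing never depends on
   connectivity; uploads are buffered while the Cloud link is down. */
#include <stdbool.h>

#define QUEUE_SIZE 32

static double pending[QUEUE_SIZE]; /* results awaiting upload */
static int pending_count = 0;
static int uploaded_count = 0;

/* Placeholder local computation; a real device would filter/aggregate here. */
static double process_locally(double raw) { return raw * 2.0; }

/* Stand-in for a network call to the Cloud. */
static void upload(double value) { (void)value; uploaded_count++; }

void handle_measurement(double raw, bool cloud_available)
{
    double result = process_locally(raw);  /* always runs, Cloud or not */
    if (cloud_available) {
        while (pending_count > 0)          /* flush the backlog first */
            upload(pending[--pending_count]);
        upload(result);
    } else if (pending_count < QUEUE_SIZE) {
        pending[pending_count++] = result; /* buffer until the link returns */
    }
}

int get_pending(void)  { return pending_count; }
int get_uploaded(void) { return uploaded_count; }
```

The design choice to sketch is that the device remains fully functional offline; the Cloud is an optional next level, not a prerequisite.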
What parts of the puzzle can we act on?
Over the past 40 years, we have made tremendous progress in processor power, storage capacity, telecommunications, and HMIs with their store-distributed apps. In contrast, in terms of the OS (operating system), we have consistently continued to use an OS dedicated to a single device for a single user. The deployment of IoT, however, requires an OS that is multi-user and can be shared between several devices.
This new OS will have to:
1. be deployable across several electronic devices, so that this set of devices can be seen as a single multi-core device, pooling all its hardware resources. We can then speak of a shareable OS;
2. be able to simultaneously be a real-time OS called RTOS (Real Time Operating System) and an OS for a human user called GPOS (General Purpose Operating System);
3. be reliable by construction, using FSM (Finite State Machine) technology, which brings determinism, a key element of security;
4. be secure – in addition to using the FSM technology that brings determinism, it will have to be structured as two subsets: the first serving as a security lock with the link to the hardware and the implementation of protocols, the second supporting applications;
6. be able to operate on a wide range of processors, from the smallest to the most powerful;
6. be energy efficient in its execution, not require large RAM (Random-Access Memory) and be able to run from Flash memory;
7. be able to run the current and future programming languages of developers;
8. be able to receive and execute another operating system, to maintain the software investment already deployed;
9. and come with its own Boot and Loader.
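The FSM (Finite State Machine) approach mentioned in points 3 and 4 can be sketched as a fixed transition table: for any (state, event) pair the next state is fully determined, so nothing outside the table can ever happen. The states and events below are illustrative examples, not SynapOS internals.

```c
/* A minimal deterministic FSM: behaviour is entirely described by a
   constant table, which is what makes it predictable and auditable.
   States and events are hypothetical examples. */

typedef enum { ST_IDLE, ST_RUNNING, ST_FAULT, NUM_STATES } state_t;
typedef enum { EV_START, EV_STOP, EV_ERROR, NUM_EVENTS } event_t;

/* transition[state][event] = next state */
static const state_t transition[NUM_STATES][NUM_EVENTS] = {
    /* EV_START    EV_STOP   EV_ERROR */
    { ST_RUNNING, ST_IDLE,  ST_FAULT }, /* from ST_IDLE    */
    { ST_RUNNING, ST_IDLE,  ST_FAULT }, /* from ST_RUNNING */
    { ST_FAULT,   ST_FAULT, ST_FAULT }, /* from ST_FAULT: latched */
};

state_t fsm_step(state_t current, event_t ev)
{
    return transition[current][ev]; /* deterministic by construction */
}
```

Because every possible behaviour is enumerated in the table, the machine can be exhaustively verified, which is the security argument behind determinism.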
This new OS, SynapOS, is protected by copyright and has been fully developed and documented without any contribution from third parties.
This new stage will enable the arrival of a fourth economic model which will be based on a first local processing of the data, which we will call Edge Cloud, before it is raised to the next level (Cloud) if necessary.
CEO of HyperPanel Lab, Co-founder of SynapOS
Copyright © 2018 HyperPanel Lab and SynapOS. All Rights Reserved.