Digital assistant for the control room — A battle for supremacy rages in the consumer world of voice-controlled digital assistants. Google, Amazon, Apple, Samsung, and Microsoft are the trailblazers in this market and want to make their assistants an integral part of our everyday lives. But when you look at industrial applications, the search still comes up empty. This is about to change: useful applications are manifold, and a prototype is now entering the pilot phase.
In 2018, visitors at electronics fairs such as CES, the Consumer Electronics Show in Las Vegas, or the Internationale Funkausstellung (IFA) in Berlin got a taste of the world of tomorrow: people will increasingly speak with things — with cars, notebooks, TVs, headphones, or microwaves. If you believe the manufacturers, digital (voice) assistants will become ubiquitous. Assistants like Apple’s Siri, Amazon’s Alexa, Microsoft’s Cortana, Samsung’s Bixby, or Google’s Assistant are becoming the control centers for networked devices, which we control by voice command. In fact, predictions of a triumph of these assistance systems are supported by a number of indicators: dispensing with input devices such as a keyboard or mouse, for example, frees users from cumbersome hardware and at the same time leaves both hands free while operating the device.
Interacting with devices that listen to us will also be much more intuitive. Recent technical advances in the field of natural language processing play a major role in this. Today, modern concepts — some making use of cloud computing and AI — allow for a much more extensive, more natural interaction with systems. At the moment, there are still a lot of questions regarding their uses and applications.
At Siemens, the development of a digital assistant for industrial applications starts by answering precisely these questions.
User Requirements Determine Product Development
At the Achema trade fair in June 2018, four young development engineers from Siemens introduced the prototype of a digital assistant during a customer event. With its help, the audience was able to interact with the process control system Simatic PCS 7 using voice commands. Johannes Vorsamer, Markus Krause, Kim Schreier, and Sven Hetterich form the core team that drove the development of this system at Siemens from idea to functional software in less than twelve months. The focus during development was, however, not on the product itself but on the system operators, maintenance crews, shift managers, and other users at the plant. “For us it was tremendously important not to create use cases from the product perspective,” software architect Vorsamer explains.
For this reason, many ideas derived from dedicated customer interviews flowed into the product. The team found that a multi-modal interaction concept, i.e. human-machine interaction via multiple human senses, could solve many of today’s challenges in the process industry. “During our surveys, the enormous strain on plant operators was repeatedly brought up,” reports Krause, software architect for TIA and Simatic PCS 7. “On the one hand, operators today perform far fewer manual interactions; on the other, their responsibilities have become much more varied.”
At the Heart of Process Control
In many cases, modern process plant control rooms are similar to communication centers in which information is managed and passed on by phone, e-mail, or verbally. This intermediary role takes a lot of the operators’ time and attention. Based on this fact alone, the developer team was able to derive four initial use cases for the assistant.
The system serves:
- operators as a source of information for questions regarding processes and process control;
- new employees as an intuitive introduction to get to know (control) systems;
- shift managers or supervisors as a central contact point for querying process and operating data;
- operating and maintenance crews as a tool for control or maintenance questions, e.g. in the form of context-based support during start-up or shut-down scenarios or through dialog-based step-by-step maintenance procedures.
Not Your Standard Consumer Assistant
“The core of our assistant is a chatbot that acts as a dialog moderator between system and user,” says Krause, explaining the basic functionality. Software developer Schreier adds: “Our system architecture is so flexible that in addition to language, other types of dialog are also possible, for instance gestures, facial expressions, or eye movements.” This multi-modal approach makes sense because, depending on the situation or surroundings, one form of communication can be more useful than another. Voice input, for instance, is of little use in the field due to noise, whereas gestures accepted by the system still work. The four Siemens employees do not want to tear down established methods, but simply to expand the possibilities of control systems with their idea: “Our goal is not to shift control systems completely to voice control,” says Vorsamer. “Many things are — and will remain — easier and faster to do with a keyboard or mouse. We do not want to interfere with classic process control. We simply want to make work easier and create real added value for users.”
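The article does not disclose the actual Siemens architecture, but the idea of a chatbot acting as a dialog moderator for multiple input modalities can be sketched in a few lines. The following is a minimal, purely illustrative Python sketch; every name (`Utterance`, `DialogModerator`, the keyword-based intent classifier) is an invented placeholder, not the real system.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Illustrative sketch only: a dialog "moderator" normalizes inputs from
# several modalities (voice, gesture, gaze) into intents and dispatches
# them to handlers that would query the control system. All names here
# are hypothetical; the real architecture is not public.

@dataclass
class Utterance:
    modality: str   # e.g. "voice", "gesture", "gaze"
    payload: str    # recognized input, e.g. transcribed text

class DialogModerator:
    def __init__(self) -> None:
        self._handlers: Dict[str, Callable[[str], str]] = {}

    def register(self, intent: str, handler: Callable[[str], str]) -> None:
        self._handlers[intent] = handler

    def _classify(self, utt: Utterance) -> str:
        # Trivial keyword matching stands in for a real NLU component.
        text = utt.payload.lower()
        if "temperature" in text or "level" in text:
            return "query_process_value"
        if "start" in text or "shut down" in text:
            return "procedure_support"
        return "fallback"

    def handle(self, utt: Utterance) -> str:
        intent = self._classify(utt)
        handler = self._handlers.get(
            intent, lambda p: "Sorry, I did not understand.")
        return handler(utt.payload)

moderator = DialogModerator()
moderator.register("query_process_value", lambda p: f"Looking up: {p}")
print(moderator.handle(Utterance("voice", "What is the reactor temperature?")))
```

Because the moderator only sees normalized `Utterance` objects, a gesture recognizer in the field and a speech recognizer in the control room could feed the same dialog logic — which is the point of the multi-modal design described above.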
(ID: 45966591 / Control &amp; Automation)