Meet SAM, the Secure Artificial Intelligent Messenger
Back in November, we told you about the very first ti&m code camp, where our employees were tasked with finding innovative solutions to several technical challenges. In today’s interview, we have decided to speak with the winning team about “SAM”, their artificial intelligence application, and find out more about how the solution was built, how accurate it is and what their plans for the future are.
ti&m: What is SAM?
SAM team: We had two different approaches going in. The first idea, from team member Kai Windhausen, was the Secure Artificial Intelligent Messenger, or SAM. We like to think of SAM as a person: a service you can communicate with and that can accomplish useful tasks automatically. We had this idea, but we didn’t know how we were going to realize it, or how we were going to integrate it into the ti&m channel suite. Our second approach was an older one. Here, our thinking was that we’d take an API and do something with it – for a better understanding of this approach, see VPRI’s “Call by Meaning” paper.
The result was that SAM would understand the meaning of an input, while the API defined what the system was capable of doing; we then had to link the two. In the end, we ended up with quite an interesting mobile app. SAM was able to, for instance, book an appointment. However, it was a bit hit-and-miss, which is what made the app so interesting. At times, SAM could be impressively precise; at other times, not so much. This was because, due to the time limitations of the code camp challenge, we didn’t have time to integrate a threshold mechanism. For example, we’d get an intent recognized with only 2% confidence and SAM would still try to do something with it. So you would say, “Order me a pizza tomorrow at 2 o’clock” and SAM would understand the correct time and date, but it would make an appointment for pizza instead of ordering one.
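The missing threshold mechanism described above could be sketched roughly as follows. This is an illustrative example, not the team’s actual code: the names `IntentGate`, `IntentResult` and the 0.6 cut-off are assumptions.

```java
// Hypothetical sketch of a confidence-threshold gate in front of intent
// execution. An NLU service returns an intent label plus a confidence
// score; SAM should only act when the score is high enough.
public class IntentGate {
    static final double CONFIDENCE_THRESHOLD = 0.6; // assumed cut-off

    // Minimal holder for an NLU result: intent label + score in [0, 1].
    record IntentResult(String intent, double confidence) {}

    // Act on the intent only when the NLU service is confident enough;
    // otherwise fall back to asking the user to rephrase.
    static String handle(IntentResult result) {
        if (result.confidence() < CONFIDENCE_THRESHOLD) {
            return "Sorry, I did not understand that. Could you rephrase?";
        }
        return "Executing intent: " + result.intent();
    }

    public static void main(String[] args) {
        // "Order me a pizza" misread as an appointment with 2% confidence
        // would now be rejected instead of acted upon.
        System.out.println(handle(new IntentResult("book appointment", 0.02)));
        System.out.println(handle(new IntentResult("book appointment", 0.93)));
    }
}
```

With a gate like this, the 2%-confidence pizza misunderstanding would prompt the user to rephrase rather than silently book a wrong appointment.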
How does SAM understand what I say?
SAM uses a third-party service to understand what the user says. The user’s input is sent to an external AI service, which has the task of translating the input into an intent. We used Microsoft’s Language Understanding Intelligent Service (LUIS). LUIS is based on a Recurrent Neural Network (RNN) that is specific to each app. This means the AI improves over time: the more you use it, the more accurate it becomes.
How does it work?
SAM was designed as a Channel Suite module, which makes it easy to integrate with and interact with the other Channel Suite micro-services. To explain how it works, it’s best to use an example: let’s say the appointment module wants to use the SAM module to book an appointment. To do so, the appointment module must first register with SAM. During this step, the module sends the following information:
- The intent that the module can satisfy, such as 'book appointment'.
- A set of example utterances, initially used to train SAM’s neural network to recognize the intent, e.g. 'I want to book an appointment' or 'I need an appointment'.
- A callback endpoint to be called by SAM when the user wants to book an appointment.
- Other information required by the appointment endpoint before being called. In this case, a date is required to book an appointment.
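The registration payload above could be modeled along these lines. This is a sketch under assumptions: the field names and the callback URL are illustrative, not the actual Channel Suite API.

```java
// Illustrative model of the information a module sends when it registers
// with SAM, mirroring the four items listed above.
import java.util.List;

public class Registration {
    record ModuleRegistration(
            String intent,                 // e.g. "book appointment"
            List<String> trainingExamples, // seed utterances for NLU training
            String callbackEndpoint,       // called once the intent is resolved
            List<String> requiredFields    // data SAM must collect first
    ) {}

    public static void main(String[] args) {
        ModuleRegistration appointment = new ModuleRegistration(
                "book appointment",
                List.of("I want to book an appointment",
                        "I need an appointment"),
                "http://appointment-module/api/book", // hypothetical URL
                List.of("date"));                     // a date is required
        System.out.println("Registered intent: " + appointment.intent());
    }
}
```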
Once the module has completed the registration, it’s SAM’s turn. When the SAM module receives an input string, it sends it to LUIS, which has already been trained with the examples provided by the registered modules. Once LUIS replies with an intent corresponding to the intention of the user, SAM checks whether there is a registered module that can satisfy that intent. If so, before calling the callback endpoint, it checks whether additional information is needed to satisfy the intent. In our example, a date is also required to book an appointment, so SAM asks the user for the missing data. Once it has collected the outstanding data, it calls the provided callback endpoint and finally fulfils the user’s intent.
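The dispatch flow just described can be sketched as follows. All names here are illustrative assumptions; the real module resolves intents via LUIS and talks to the Channel Suite micro-services over their registered endpoints.

```java
// Sketch of SAM's dispatch loop: match the NLU intent to a registered
// module, ask for any missing required data, then call the callback.
import java.util.List;
import java.util.Map;
import java.util.Optional;

public class Dispatcher {
    record Module(String intent, List<String> requiredFields) {}

    // Hypothetical registry populated during module registration.
    static final List<Module> REGISTERED =
            List.of(new Module("book appointment", List.of("date")));

    // Decide what SAM does with an intent returned by the NLU service,
    // given the data collected from the conversation so far.
    static String dispatch(String intent, Map<String, String> collected) {
        Optional<Module> match = REGISTERED.stream()
                .filter(m -> m.intent().equals(intent))
                .findFirst();
        if (match.isEmpty()) {
            return "No module can handle: " + intent;
        }
        // Ask for any required field the user has not yet provided.
        for (String field : match.get().requiredFields()) {
            if (!collected.containsKey(field)) {
                return "Please provide: " + field;
            }
        }
        return "Calling callback for: " + intent;
    }

    public static void main(String[] args) {
        System.out.println(dispatch("book appointment", Map.of()));
        System.out.println(dispatch("book appointment",
                Map.of("date", "tomorrow 2pm")));
    }
}
```

The first call would prompt for the missing date; once the date is supplied, the second call proceeds to the module’s callback.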
Is it accurate?
Based on the results we achieved after 30 hours spent on this project, the answer is yes! That does not mean the module is perfect and ready for market – there is still a lot of work to do – but what we have is impressive: our module can distinguish between different commands, even when they are similar to each other, and recognizes when a command cannot be handled.
Unfortunately, the accuracy of SAM depends mainly on the accuracy of the neural network used to map raw strings to intents – in our case LUIS – and on how well it has been trained. Because of that, the only way to improve the accuracy of the system is to improve the training of the neural network.
Which tech did you use?
We used Java and the Spring framework for the back-end, and Eureka for service discovery and integration with the Channel Suite modules. We built the mobile application with the React Native framework, using iOS dictation for speech-to-text. Finally, we used LUIS as the natural language processing service to map text to intents.
How was your team organized during the development?
We organized our team quite efficiently. From the get-go, we split the project into four distinct tasks, with each team member taking one:
- David Perrenoud worked on the development of the mobile application, and its integration with the speech-to-text services
- Fabio Brunori was primarily focused on integrating LUIS into our module
- Janik Lüthi developed all of the core functionalities of the SAM module itself, like the interactions with the mobile application and the business logic of the module
- Federico Bellini built the structure of the Channel Suite module and tried to integrate SAM into the other Channel Suite micro-services
Each member of our team came from a different background and had their own unique skill set, which was the strongest aspect of our team: together we had good coverage of the different technologies, frameworks and architectures our application was based on.
Did you have any difficulties during the development?
Besides the lack of sleep and excessive coffee consumption? Jokes aside, we really had a hard time maintaining focus due to lack of sleep. There were technical difficulties too! Because we weren’t familiar with all the services we wanted to use, we had to experiment a bit. We really wanted to use an Alexa device, so we brought one to the camp. We didn’t want to limit our product to one device, which is why we restricted our use of Alexa to the speech-to-text function. Even so, we found that integrating Alexa was going to be harder than expected, so we ended up using iOS dictation instead.
What about security?
We can profit from all the security layers that are already integrated into the Channel Suite. Initially, setting up the app is quite laborious, as the security requirements are quite strict. For certain customers, it may be undesirable to use external services like LUIS, so we aim to replace it with an internally hosted alternative – for example, an instance of Rasa NLU.
There are definitely a couple of rough edges we still have to iron out. For now, we have only installed it on our own phones. I guess we have to push the right people a bit to get a budget to optimize SAM. After that, we need to build a demo environment we can show our customers, to get their feedback and see where this can go. It’s way too much fun to just be put in a box!