AI and the cloud – a universal panacea?
AI has evolved into a real buzzword these days. But it’s such an emotionally loaded term that there’s now a wide gap between the idea and the reality. This article attempts to explain what AI is already capable of and why cloud technologies have such an important role to play in it.
So what exactly are we talking about? “Artificial Intelligence” (AI) refers to the basic idea of getting machines to perform “smart” tasks. The subordinate term “Machine Learning” (ML), by contrast, refers to the learning process by which a system acquires a given behavior based on data. In general terms, ML is learning based on patterns, examples and experience.
However, today’s algorithms are still comparatively simple. Until now, programming has primarily been concerned with defining rules by hand. With machine learning, it is the program itself that identifies these rules for us and continually improves them over time on the basis of data.
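The difference can be made concrete with a minimal sketch: instead of hand-coding a rule, the program derives it from labeled examples. Everything here – the spam scenario, the feature and the toy data – is invented purely for illustration.

```python
# A minimal sketch of "the program identifies the rules for us":
# instead of hand-coding a spam rule, we learn a threshold from
# labeled examples. All data here are made up for illustration.

def learn_threshold(samples):
    """Find the cut-off on a single feature (e.g. the number of
    suspicious words in an email) that classifies the examples best."""
    best_threshold, best_correct = 0, -1
    for candidate in range(0, 11):
        correct = sum(
            (count >= candidate) == is_spam
            for count, is_spam in samples
        )
        if correct > best_correct:
            best_threshold, best_correct = candidate, correct
    return best_threshold

# (suspicious-word count, is_spam) pairs - a toy training set
training_data = [(0, False), (1, False), (2, False),
                 (5, True), (7, True), (9, True)]

rule = learn_threshold(training_data)
print(f"learned rule: spam if count >= {rule}")
```

Fed with more (and more representative) data, the learned rule keeps improving – which is exactly the point of ML.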
AI is not all high tech and robotics
Artificial Intelligence is often associated with robots that may serve us at some future point in time. This picture is exaggerated by the media – AI often quite simply means that smart algorithms can use data to support processes. We already benefit from the assistance of many cloud-based AI applications: Google Maps automatically shows us the quickest route to our destination, households use intelligent devices to measure and independently adjust their electricity consumption, and the best email spam filters use AI. In business, too, machine learning can perform many practical tasks, from assisting with resource allocation to predicting and responding to future customer needs through to automating tedious and repetitive jobs.
Pretrained AI systems from the cloud
Various technology providers are keen to make artificial intelligence more accessible and are offering interfaces, known as APIs, that enable any organization to integrate pretrained AI systems into its own processes. Many APIs are already used for image analysis. Google’s AutoML, for instance, can be used to train an in‑house image recognition model by feeding it labeled example images of predefined types; the model learns from these and, with relatively little effort, becomes able to recognize and identify a wide range of images, e.g. of screws.
Data are an important part of AI development, and data‑driven tasks – that is to say, information‑based challenges – are a prerequisite for any AI application. However, the data do not have to come from within your own company. A large number of digital datasets – texts, satellite images and videos – are made freely available by communities interested in a wide variety of issues, from wine ratings to urban sounds through to consumer behavior on Black Friday.
Many businesses are already using image recognition and identification, and are able to train it with increasing specialization. With AutoML, Google Cloud is making a system available that anyone can use to build and train their own ML model without much prior knowledge.
Two examples: Disney uses AutoML Vision to assign products and product photos rapidly to specified categories in its online store. A product manager might upload photos of a T-shirt with a Spider-Man design to the website, and the product is immediately tagged with “Spider-Man,” “T-shirt” and “Marvel.”
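The tagging step itself is straightforward once a model exists. As a hedged sketch – the model output and the confidence threshold below are invented, not Disney’s or AutoML’s actual values – assume a trained image model returns (label, confidence) pairs, and every label above a cut-off becomes a product tag:

```python
# Hedged sketch of turning model predictions into product tags.
# The predictions and the 0.8 threshold are illustrative assumptions,
# not real AutoML Vision output.

def tags_from_predictions(predictions, min_confidence=0.8):
    """Keep only labels the model is sufficiently confident about."""
    return [label for label, score in predictions if score >= min_confidence]

# Hypothetical model output for a photo of a Spider-Man T-shirt
predictions = [("Spider-Man", 0.97), ("T-shirt", 0.95),
               ("Marvel", 0.88), ("mug", 0.12)]

print(tags_from_predictions(predictions))  # keeps the three confident labels
```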
The Zoological Society of London (ZSL) has pledged to protect animal species worldwide. To fulfill this mission effectively, it has installed so‑called camera traps across the globe, which allow animals to be counted and recorded in their natural habitat. ZSL has used AutoML Vision to train an ML model that recognizes which animals have been photographed, so they can be classified automatically. Work that used to take many weeks is thus done in a few hours, or even minutes.
Computing power from the cloud
What does all this have to do with the cloud, though? Machine learning, deep learning and artificial intelligence in general require huge amounts of processing power. One simple way to acquire this is to hire high-performance hardware in data centers that users can access via the Internet. The latest technology for delivering significant gains in processing power for AI systems is the Tensor Processing Unit (TPU). TensorFlow, the open source AI framework, is a key component of many AI and ML systems, and the TPU processor developed by Google is built to run TensorFlow at exceptional speeds, enabling AI systems to run between 15 and 30 times faster. This is equivalent to a leap of seven years into the future compared to previous development cycles. Google has installed large numbers of so‑called pods, in which TPUs are interconnected, in its data centers; a single pod delivers 11.5 petaflops of performance. Companies can scarcely maintain this sort of hardware on their own premises, which is why cloud computing plays a key role in the development of AI systems.
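What a 15–30x speed-up means in practice is easy to work out. The 30-hour baseline below is an invented, illustrative figure, not a benchmark:

```python
# Back-of-the-envelope arithmetic for the quoted 15-30x TPU speed-up.
# The 30-hour baseline training run is a hypothetical example.

def accelerated_hours(baseline_hours, speedup):
    """Training time after applying a given hardware speed-up factor."""
    return baseline_hours / speedup

baseline = 30.0  # hours on conventional hardware (illustrative)
print(accelerated_hours(baseline, 15))  # low end of the quoted range
print(accelerated_hours(baseline, 30))  # high end of the quoted range
```

A multi-day training job thus shrinks to something that fits into a working morning – which is what makes rapid, iterative model development feasible.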
Three points are key to the successful use of AI:
- It requires high-quality datasets that ML systems can use to recognize patterns. Ideally, ML systems should be trained with data that are representative of the real world.
- Good tools and frameworks are essential. Although a basic ML algorithm can be described in just a few minutes, it is complicated to implement well. A range of services that require no ML or programming know-how is therefore needed.
- Immense processing capacity is required. The cloud provides precisely this kind of high‑performance hardware that businesses and developers can use for their own ML models.
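The first point – representative data – can be checked with a simple hold-out test: train on one part of the data and measure accuracy on the rest. The 1-nearest-neighbour classifier and the toy numbers below are illustrative assumptions, not a production recipe:

```python
# Hold-out evaluation sketch: train on one part of the data,
# measure accuracy on unseen examples. Data are invented.

def predict_1nn(train, x):
    """Return the label of the training example closest to x."""
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

def accuracy(train, test):
    """Fraction of held-out examples the classifier labels correctly."""
    hits = sum(predict_1nn(train, x) == label for x, label in test)
    return hits / len(test)

# (feature value, label) pairs - e.g. some measured property of an animal
train = [(0, "cat"), (1, "cat"), (8, "dog"), (9, "dog")]
test = [(2, "cat"), (7, "dog")]

print(accuracy(train, test))
```

If accuracy on the held-out set is much worse than on the training set, the training data are probably not representative of the real world – precisely the failure mode the first point warns about.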
These simple steps will ensure you make a successful start with AI in the cloud.