Tomorrow is good: Human beings, machines with emotions?

Computers are good at abstract thinking; we are all too keen to delegate complex calculations to them in order to free ourselves from that chore. Yet there is also something threatening about the intelligence of machines. Robots and artificial intelligence (AI) force us to question our place in the world. What does it mean to be human? Where does the boundary lie between man and machine? What is man? That is the question Enlightenment philosopher Immanuel Kant pondered. Moral views shift along with technology. Our moral views on in vitro fertilization (IVF), for instance, have evolved considerably over the past decades, even to the extent that many people would find it unacceptable to refuse an eligible couple (within certain rules, such as age limits) an IVF procedure in the Netherlands or Belgium. In this context, reference is made to techno-moral change: the modification of moral beliefs as a consequence of technology.

Die as a cyborg

Machine and body will become more and more intertwined. Philosopher James Moor asserts that we are born today as human beings, but that many of us will die as cyborgs. Cyborg stands for ‘cybernetic organism’: partly human, partly computer. Moor’s claim is plausible, even though ‘cyborg’ may sound like science fiction. A good example is the pacemaker, which is in fact a miniature computer; some pacemakers are even connected to the internet. There are bionic limbs too, such as bionic arms for disabled veterans or people with congenital disabilities, as well as exoskeletons for patients with full paraplegia.

Knee and hip prostheses, for example, are implants in the body that we have been familiar with for some time. These are not computerized technologies, yet they have not diminished our human dignity or integrity; over time we have accepted them without any problems. Further developments, as yet unknown to us, may likewise broaden our sense of human dignity. Consequently, we should not be ‘automatically’ opposed to them.

Thanks to science and technology, human beings have been enhancing themselves for centuries, and the results are clearly apparent: we are living longer and healthier lives. The debate must now focus on ethical boundaries and problems. What is desirable? And what kind of cyborgs do we want to be? AI implants, for example, should not be accessible only to the happy few who can afford them, leaving the benefits to them alone. The principle of justice is important for ensuring fair, democratic access to technology, and harm, or the risk of harm, to the patient and to third parties obviously needs to be curtailed.

Are we expendable?

How unique is humankind? Are we replaceable by robots and AI systems? AI researcher Rodney Brooks thinks we should rid ourselves of the idea that we are special. We humans are ‘just’ machines with emotions. Not only can we build computers that recognize emotions; eventually we could also build emotions into them. According to Brooks, it will at some point even be possible to design a computer with real emotions and a state of consciousness. But he remains cautious and avoids saying when that will happen. That is wise, because the brain is extraordinarily complex. Not enough is known about its specific workings, or about the very long evolution that preceded it, let alone about how to replicate it just like that.


EU Commissioner Vestager to present new AI law at the start of 2020

Over the next three months, European Commissioner Margrethe Vestager will draft a new European law for AI. As of December, she will be responsible for the digitization of the European market. She plans to present the new AI law in March; after that, the European Parliament and the governments and parliaments of the Member States will have to approve it.

The new AI law is to lay out the rules regarding the collection and sharing of data by, among others, large American tech companies such as Facebook, Amazon and Google, whose internet platforms are used on a massive scale by European citizens. At the moment there is only a directive on e-privacy and one set of regulations for data protection (the GDPR). The new law must include rules that make the collectors and distributors of data liable for any abuse of this data.

Nightmare for the US

The greatest nightmare for the high-profile big tech companies in the US is her intention to adopt new tax regulations following on from the new AI law. These should apply to internet platforms all over the world that make money from consumers in European countries. In recent years, Vestager has already taken Apple to court over tax evasion, imposing a 13-billion-euro penalty on the company.

As far as she is concerned, the new tax regulations she has in mind should apply worldwide. If that proves impossible, for example because some countries refuse to cooperate, she said the European Commission will continue to impose fines on non-European companies on an individual basis whenever they pay insufficient tax in the EU.

Breaking up Google and Facebook

She may also impose fines if American big tech companies abuse their dominant market position, as she has done in recent years as European Commissioner for Competition. If these fines do not lead to an improvement in their behaviour on the European market, she wants to break up the American business conglomerates. That is what she said in response to questions from Paul Tang, a Dutch Member of the European Parliament who sits with the Progressive Alliance of Socialists and Democrats on behalf of the PvdA (the Dutch Labor Party). Vestager told Tang that she has the means to do this, though she did not specify what those means are.

Member of the European Parliament Paul Tang wants Commissioner Margrethe Vestager to break open American ‘big tech’ companies.

Gaining citizens’ trust

With the new European AI law, Vestager said she wants to allay the fears of European citizens, in particular those who currently lack faith in the digitization of society. She believes this is necessary because, in her view, there are two types of companies: the type that is digital, and the type that will soon become digital. In other words, sooner or later all citizens will have to participate in the digitization of everyday life, so she wants to make sure that the internet is not intimidating to them.

Secondly, she wants AI to be used to make citizens’ lives easier rather than more difficult. She wants to prevent digital platforms from collecting data via AI in order to influence the choices of consumers and businesses and earn money from them. It was precisely for this reason that, during her previous term as European Commissioner for Competition, she imposed a fine of 4.3 billion euros on Google.

More rules, less innovation?

The question is whether the new AI rules will stand in the way of innovation. Nicola Beer, an MEP from the Renew Europe group in the European Parliament, wanted to know whether Vestager had thought about how she intended to preserve Europe’s leading role in AI innovation. Vestager replied that she was looking for a balance: European citizens should benefit from the innovations that AI brings, yet at the same time be protected against their potential misuse.

Europarliamentarian Nicola Beer wants to know how Vestager will ensure that the EU will remain a leader in the AI field.

Meanwhile, the initial reactions from AI professionals to Vestager’s plans for new legislation have been quite reserved. “I find it a bit vague that Vestager says that AI sometimes makes life more difficult,” says Buster Franken, an AI entrepreneur and developer from TU/e. “It is true that AI influences your choices via Google. But that can also make your life a lot easier.”

‘Small-scale AI companies in the EU are the victims’

Franken believes there is a danger that a new law will burden smaller AI companies with far too many rules. “We already have a hard time finding capital to invest in our innovations. If new rules are added now, that will adversely affect us. It also means extra work in order to comply with them, and maybe we won’t have the money for that. And all the while, this new law is supposed to combat abuse by large companies such as Google and Facebook.”

Read also: ‘Europe must invest in a hub for collaborative robots in SMEs’

“The point is that companies like Google can abuse data because they have loads of money. If there is a new law, they will undoubtedly be able to comply with it, and then they will simply look for another route. They have enough money to hire an army of elite lawyers. Small AI companies don’t.”

Who owns AI-based works? The Law faces a challenge

Our copyright law is aimed at protecting works created by the human mind. But with the rise of artificial intelligence, products and ideas are increasingly the result (in part) of stand-alone algorithms. Lawyer Martin Hemmer wonders whether our legal system is fully prepared for this development.

What amount of human input is deemed necessary in order to be able to claim copyright? And who is the owner of a product that comes from an AI system? Hemmer, who works for the AKD law and notary firm, raises these questions in his lecture at the ‘Digital Dilemmas’ network event held by the IT company Atos in Eindhoven. As a lawyer specializing in intellectual property and IT, he sees signs that this will become a major issue in the coming years.


Can the rules for human beings also apply to machines? As an example, Hemmer mentions the project ‘The Next Rembrandt,’ in which artificial intelligence created a new painting by itself on the basis of existing Rembrandt paintings. “Who is the rightful owner here? Is it the person who came up with the self-learning system? Or the person who supplied the data – or the commissioning party? Is there a copyright holder at all, and do we really think it is necessary for there to be protections in place for this? We don’t have an answer to these dilemmas as yet.”

The result of the algorithm. But who owns the copyright? Photo: ING

Human intervention

The subject was also on the agenda at the annual conference of the International Association for the Protection of Intellectual Property (AIPPI) in September, where Hemmer represented the Netherlands in seeking international agreement on a course of action for this issue. It was unanimously decided that copyright protection is unwelcome for work that has been generated solely by AI. There are two important conditions for any protection, Hemmer explains. “There has to be some human intervention, and the originality of the work must be an obvious result of that.”

If you design software that can produce works of its own, then according to the AIPPI, that is still insufficient grounds for claiming copyright, the lawyer explains. “But if, for example, the results are actually selected by a human being, and these results lead to an original end result, copyright protections can be put in place.”


Hemmer is also anticipating a heated debate within patent law. “AI has long been used by pharmaceutical companies to find new applications for medicines. To put it bluntly, these findings are now being spit out by a computer. Is that still a matter of inventiveness, as traditional patent law requires?”

“The bottom line is that AI is playing an increasingly important role in the generation of works and inventions,” Hemmer concludes. “The discussion that will take place in the courtroom in the coming decades is whether there has been sufficient human input.”

What is artificial intelligence? What can you do with it and what are the opportunities and risks involved? Buster Franken and Vincent Müller answer the most important questions in this IO video:

Read more IO articles on AI in our digital category here.

Augmented Reality assists surgeons in the operating theater

Artificial intelligence is taking on more and more tasks in our modern world. For example, we use it every day when we use online search engines. Translation programs are unimaginable without AI, as are speech recognition, face recognition, computer games and, in the future, autonomous driving. In medicine, AI is also becoming more widespread and has already found its way into the operating theater. Just a few days ago, Innovation Origins wrote about operating with live 3D image navigation inside the body.

The Karlsruhe Institute of Technology (KIT) has now gone one step further and has even been awarded the NEO 2019 Innovation Prize (worth €20,000) by the Karlsruhe TechnologyRegion for their ‘HoloMed’ system. The new system assists surgeons in the operating room via Artificial Intelligence (AI) and Augmented Reality (AR). It does this by creating a model from computer tomographic images of the patient. These reveal the hidden structures deep inside the body.

GPS for the brain

HoloMed’s main focus is on cranial punctures: a procedure whereby accumulated fluid is removed from the brain in order to reduce pressure, frequently used for brain hemorrhages, craniocerebral trauma and strokes. In order to determine the optimal insertion point and alignment for the puncture, the surgeon must measure and glean data from “various anatomical landmarks” in computer tomography (CT) and/or magnetic resonance imaging (MRI) scans.

“The difficulty lies in the fact that determining the angle of insertion only allows for a very small margin of error and the doctor isn’t able to see the target straightaway,” notes Professor Björn Hein, who oversees the project together with Professor Franziska Mathis-Ullrich at KIT. Determining the exact point is complicated because the scan images are two-dimensional while the human head is three-dimensional. That’s why only about 60 percent of all free-hand punctures pinpoint the best position.

Surgeons use HoloMed’s augmented reality glasses to help determine this optimal insertion point and angle for the puncture needle. An AI system developed at KIT by research associate Christian Kunz uses the data from the patient’s digital file and their latest CT and/or MRI scans to create a model that accurately depicts the structures deep inside the body that cannot be seen externally. This information is superimposed onto the surgeon’s AR glasses and shows precisely where and how to guide the needle, much like a navigation system.

Easy to use and cost-efficient

Professor Hein states that machine learning methods are used in the automated generation of this information. “First of all, a segmented 3D model of the head is generated, which is used to determine the target position. However, the doctor is always able to make their own adjustments if appropriate,” Hein adds. The aim of the system is to provide an “innovative, novel and cost-effective solution that has a direct influence on the quality of these procedures”.
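The pipeline Hein outlines, from scan to segmented model to a proposed target that the doctor can overrule, can be sketched roughly as follows. This is an illustrative sketch only: HoloMed’s real segmentation uses machine learning on CT/MRI volumes, and the function names and the toy intensity-threshold “segmentation” here are assumptions, not the actual system.

```python
# Rough sketch of the planning pipeline described above.
# Hypothetical names; HoloMed's actual model is ML-based and not public.

def segment_head(voxels, threshold=0.5):
    """Toy 'segmentation': keep coordinates of voxels above an intensity threshold."""
    return [coord for coord, intensity in voxels.items() if intensity > threshold]

def propose_target(segmented):
    """Propose the centroid of the segmented region as the insertion target."""
    n = len(segmented)
    return tuple(sum(c[axis] for c in segmented) / n for axis in range(3))

def plan_puncture(voxels, manual_override=None):
    """Full pipeline; the surgeon can always overrule the computed target."""
    target = propose_target(segment_head(voxels))
    return manual_override if manual_override is not None else target
```

The `manual_override` parameter mirrors the point Hein stresses: the system proposes, but the doctor can always adjust the result.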

Once the puncture application has been successfully rolled out, HoloMed is also to be used for other operations. Since the system is both easy to use and cost-efficient, the inventors say it is ideal for lowering healthcare costs, and it would also benefit poorly financed hospitals in emerging countries.

Cover photo: Dr. Michal Hlavac from the University Clinic for Neurosurgery Ulm and Christian Kunz from the “Health Robotics and Automation” (HERA) KIT team evaluating the HoloMed system during the initial surgery simulation with a dummy. (Photo: KIT-HERA).

Tomorrow is good: Beware of the visionary

Research on artificial intelligence (AI) started in the years after the Second World War. John McCarthy, an American mathematician at Dartmouth College, coined the term in 1955 while working on a proposal for a summer school he was seeking funding for. A group of AI pioneers met at that summer workshop in 1956 – the Dartmouth Summer Research Project on Artificial Intelligence. The term AI may have been new, but academics such as the British mathematician Alan Turing had already been thinking for some time about ‘machine intelligence’ and a ‘thinking machine.’ The objective of the Dartmouth project was along the same lines: simulate intelligence in machines and have computers work out problems that until then had been the preserve of human beings. The summer project did not quite live up to expectations. The participants were not all present at the same time and were primarily focused on their own projects; moreover, there was no consensus on theories or methods. The only vision they shared was that computers might be able to perform intelligent tasks.

AI in 2056

The surviving pioneers of the Dartmouth summer project met again at a conference in the summer of 2006. During this three-day conference, they asked what AI would look like in 2056. According to John McCarthy, powerful AI was ‘likely’, but ‘not certain’, by 2056. Oliver Selfridge thought that computers would have emotions by then, but not at a level comparable to that of humans. Marvin Minsky emphasized that the future of AI depended first and foremost on a number of brilliant researchers carrying out their own ideas rather than those of others; he lamented that too few students come up with new ideas because they are too attracted to entrepreneurship. Trenchard More hoped that machines would always remain under human control and stated that it was highly unlikely they would ever match the capabilities of the human imagination. Ray Solomonoff predicted that truly intelligent machines were closer to reality than imagined; according to him, the greatest threat lies in political decision-making.

Who is right?

A wide range of opinions, it seems. Who among them will be right? Predicting technological breakthroughs is difficult. In 1968, the year Stanley Kubrick’s 2001: A Space Odyssey was released, Marvin Minsky stated that it would take only a generation before there would be intelligent computers like HAL. To date, they don’t exist. In 1950, Alan Turing thought that a computer could pass the Turing test by the year 2000, which turned out to be a miscalculation. Vernor Vinge predicted in 1993 that the technological means to create ‘superhuman intelligence’ would be in place within thirty years, and that shortly afterwards the human age would come to an end. There are still a few years left before 2023, but even this prediction looks excessively utopian.

Flip a coin

Making predictions about the future is problematic, as by definition the future is not determined, and the role of chance is often greatly underestimated. Even experts are scarcely able to “predict the future any better than if you were to flip a coin.” Therefore, we should all be a bit wary, not least when it comes to visionaries and tech gurus with their exaggerated dystopian or utopian worldviews. So don’t just believe anyone who claims that AI will definitely outstrip human intelligence within ten years.

Rules for Robots

The new book by Katleen Gabriels, Regels voor robots. Ethiek in tijden van AI (Rules for Robots; Ethics in times of AI) will be published next week. The English translation will be published in early 2020.



Tomorrow is good: By the time you get to the Moral Lab

I had no idea you could be anti time, but it is possible. In fact, that’s a reference to the Dutch title of a booklet about the Mastboomhuis museum. The Mastboomhuis is the only Dutch example of a historic house ‘suspended in time’: a form of preservation where everything, including the neglected repairs and leftover piles of mail, remains exactly as it was left. It is an enchanting experience in which you physically step back in time, into the life of Henri Mastboom. The rather ill-tempered Henri was quite averse to progress; he was literally anti time.

Whereas I am ahead of time as far as you can be ahead of time. As a sympathizer of the Design Thinking school of thought, my mission in life is to help design a brighter future. That’s why I’m so pleased that the Dutch Design Week is kicking off this weekend. A week in which the entire city of Eindhoven is dedicated to shaping the future. And this time the Dutch Design Week is all the more special for me because we are part of it ourselves.

The ethical conscience within my research group has joined forces with the designers collective We Are, and this has led to a veritable moral laboratory. In this moral laboratory we examine how artificial intelligence should be programmed when it comes to making ethical decisions. In a time when chatbots and robot assistants give us solicited and unsolicited advice, and when choices are increasingly made for us rather than by us, we have to make sure that the artificial intelligence advising us and choosing for us does so on the basis of our own ideology. Only then will we be able to fully embrace and trust artificial intelligence. That is how we design a future in which people can leave decisions up to technology without any misgivings.

Henri Mastboom would have found our exhibition ‘Moral Lab’ at the Dutch Design Week utterly appalling, and I think that’s the greatest compliment that you can give us.

About this column:

In a weekly column, written alternately by Bert Overlack, Mary Fiers, Peter de Kock, Eveline van Zeeland, Lucien Engelen, Tessie Hartjes, Jan Wouters, Katleen Gabriels and Auke Hoekstra, Innovation Origins tries to figure out what the future will look like. These columnists, occasionally joined by guest bloggers, are all working in their own way on solutions to the problems of our time. So that tomorrow is good. Here are all the previous articles.


TU Munich: Incarnation of the H-1 robot

Scientists working with Prof. Gordon Cheng at the Technical University of Munich (TUM) recently gave the robot H-1 a biologically inspired artificial skin. With this skin (in humans, the largest organ of the body), the digital being can now feel its body and its environment for the first time. However, while real human skin has around 5 million receptors, H-1 has just over 13,000 sensors in total, distributed across its upper body, arms, legs and even the soles of its feet. Their goal is to provide the humanoid with its own sense of a physical body. Thanks to the sensors on the soles of its feet, for example, H-1 is able to adapt to uneven ground and even balance on one leg.

But of far greater importance is the robot’s ability to safely embrace a human being. This is not as trivial as it sounds, since robots are capable of exerting a force that would seriously harm humans. During an embrace in particular, a robot comes into contact with a human being at several different points, and it must be able to quickly calculate the correct movements and the appropriate amount of force from this complex data.

“This may be less important for industrial applications, but in areas such as healthcare, robots have to be designed for very close contact with people,” Cheng explains.

Biological models as a basis

The artificial skin is based on biological models combined with algorithmic controls. The skin of H-1 is made up of hexagonal cells about the size of a €2 coin; the autonomous robot has a total of 1,260 of them, each equipped with sensors and a microprocessor that measure proximity, pressure, temperature and acceleration. Thanks to its artificial skin, H-1 perceives its environment in a much more detailed and responsive way. This not only helps it move around safely, it also makes its interaction with people safer, as it can actively avoid accidents.

Event-driven programming delivers more computing power

So far, the main obstacle in the development of robot skin has been computing power. Previous systems were already running at full capacity when evaluating data from just a few hundred sensors. Considering the millions of receptors in human skin, the limitations soon become clear.

To solve this problem, Gordon Cheng and his team chose a neuroengineering approach. They do not permanently monitor skin cells, but use event-driven programming. This allows the computational workload to be reduced by up to 90 percent. The key is that individual cells only pass on data from their sensors when measured values vary. Our nervous system works in a similar way. For example, we can feel a hat as soon as we put it on. Yet then we quickly get used to it and don’t need to give it any attention. We only tend to become aware of it again once we take it off or it gets blown away. Our nervous system is then able to concentrate on other, new impressions which the body has to react to.
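A minimal sketch of this event-driven idea, under assumed names (the actual KIT firmware is not public): each cell remembers the last value it reported per sensor modality and only emits an event when a new reading differs by more than a threshold, which is what cuts the load compared with continuously polling every sensor.

```python
# Sketch of event-driven sensor reporting, as described above.
# Hypothetical class/field names; thresholds are illustrative.

class SkinCell:
    def __init__(self, cell_id, threshold=0.05):
        self.cell_id = cell_id
        self.threshold = threshold
        self.last_reported = {}  # last value reported per modality

    def sample(self, modality, value):
        """Return an event only if the reading changed noticeably, else None."""
        last = self.last_reported.get(modality)
        if last is None or abs(value - last) > self.threshold:
            self.last_reported[modality] = value
            return (self.cell_id, modality, value)
        return None  # unchanged -> no traffic to the central controller


def collect_events(cells, readings):
    """Gather the events from all cells for one sampling tick."""
    events = []
    for cell, (modality, value) in zip(cells, readings):
        event = cell.sample(modality, value)
        if event is not None:
            events.append(event)
    return events
```

As with the hat example: a constant pressure produces one event and then silence until the value changes again, so steady-state cells cost the controller nothing.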

Prof. Gordon Cheng ©Astrid Eckert /TUM

Gordon Cheng, Professor of Cognitive Systems at TUM, designed the skin cells himself about ten years ago. However, the invention only reveals its full potential as part of a sophisticated system like this one, which was recently featured in the specialist journal ‘Proceedings of the IEEE.’

More IO articles on this topic can be found here:

Top 10 Emerging Technologies (2): social robots

Could you love a robot?

Start-up of the Month: towards the customer service of the future with AI

Every working day we select a European start-up of the day and each week we choose a weekly winner. At the beginning of the new month, readers can decide who will be awarded the Start-up of the Month. In recent months, the winners have come from all over Europe. In June from Italy, in July from Spain, in August from England and in September …. (drum roll) – we have a German winner!


Founders of e-bot7 (f.l.t.r.): Fabian Beringer, Xaver Lehmann, Maximilian Gerer @e-bot7

The team behind e-bot7 wants to help steer customer telephone services into the future by utilizing artificial intelligence to improve the speed and quality of customer service. As a result, queues of 45 minutes and frustrating repeat calls should in future be a thing of the past.

The demand for good customer service is greater than ever and this technology makes it much cheaper and more efficient than it has been in the past. And that’s how you’ll save on both personnel and office costs. Due to this innovative concept, most of the votes from our Innovation Origins readers for the monthly winner went to this Munich-based start-up.


All Start-ups of the Month are automatically in the running for the first Innovation Origins Start-up of the Year award to be presented next year.



Vote now! Who will be our Start-up of the Month?

Every working day we select a European start-up of the day and every week we choose a weekly winner. At the start of each new month, readers can decide who will be awarded the Start-up of the Month award. And next year (drum roll) …. but that will take a while.

The nominees for September come from Austria, Poland, Lithuania and Germany. The choice is yours. We will once again reacquaint you with the companies below the poll.


1. Herbi Clean: Clean your house with acorns

Cleaning products typically contain a lot of harmful chemicals. It is not without reason that those orange warning labels appear on the packaging. Yet Mother Nature also has a cleaning lady hiding in her, as the Polish company Herbi Clean has proven. They came up with cleaning agents made of acorns which do not need any ominous orange warning labels.

It is actually quite odd that more research is not being done into cleaning products made from plant-based materials. Why should we spray our homes with dangerous substances or artificial chemicals if there is a substance in nature that does exactly the same without the disadvantages? As there seems to be a lot more to be gained here, Innovation Origins awarded Herbi Clean the title of Start-up of the Week.

2. Parkbob digitizes mobility processes

Christian Adelsberger (c) Parkbob

Parkbob was launched four years ago with an app for motorists looking for a parking space in Vienna. Within a short period of time, the start-up expanded its services even further. Today, it is an expert in digital transport services and cooperates with shared-mobility providers worldwide.

Four years after its establishment, a parking assistant service has already been integrated into Amazon’s voice control system; now it is Alexa that provides drivers with information about available parking spaces and parking fees. Soon Parkbob will also be available for other navigation devices and in-car systems. The service is always free of charge for consumers; the real revenue lies in the B2B sector, specifically in the mobility and automotive industries.

Several factors led to Parkbob’s rapid growth; the decisive ones were venture capital financing, expansion into the USA and diversification. Today, Parkbob covers a total of sixty cities all over the world and counts Reach Now, of BMW/Daimler, among its collaborative partners.


3. Sketch AR: Transforming the world into your blank canvas

Drawing is a skill that usually requires a lot of practice. You can trace an image onto another piece of paper or cover it with translucent tracing paper, yet now this can all be done in a more modern and practical way. Meet SketchAR, the first app that combines augmented reality with actual drawing.

The Lithuanian initiators combine creativity and technology in a whole new way and make drawing, a skill that you either have or don’t have, more accessible to everyone. It is a great example of how the real world and augmented reality can enhance each other. Its simplicity and combination of something analog with something digital convinced IO to reward SketchAR with the title Start-up of the Week!

4. E-Bot7: Automated customer service

The team behind E-Bot7 wants to help telephone customer services enter into the future by using artificial intelligence to ensure that customers are served faster and more effectively. As a result, queues of up to 45 minutes and frustrating repeated calls (due to unsolved problems) may be a thing of the past.

The need for customer service is greater than ever, yet this technology makes it cheaper and more efficient than ever before. And that’s how you save on both personnel and office costs. It’s a pity though that this technology means that thousands of call center employees will have to look for new employment in the coming decade. Nevertheless, the innovative start-up from Munich was selected as Start-up of the Week.

Start-up of the day: Forget Siri and Alexa. Now meet Edward, a portable AI sales assistant.

Edward is the “child” of Tomasz Wesołowski and Bartłomiej Rozkruta. Previously, they both ran their own software development business and produced software on commission. Then they noticed that clients did not want complicated programs and difficult interfaces; ideally, they should simply be able to talk to a computer. That is why Tomasz and Bartek are working on a solution in their latest company: “This AI works fast, is pleasant and easy to use, and underneath it has algorithms that automate some of the most common tasks.”

Who is Edward?

Tomasz Wesołowski, CEO and co-founder of Edward AI: Edward is a portable smart sales assistant. It is a mobile phone application which, during the day, tells the retailer what to do next and also takes care of some of the typical chores no salesman likes to do, such as filling in data, filing reports, making notes and keeping an eye on contact with the client. For example, after a meeting with a client, Edward will ask for a memo to be dictated to it, record it in text form and extract the key information from it.

What is Edward all about?

People have less and less time to use traditional computers, and salespeople are particularly affected. They are constantly on the road, at meetings with clients. Plenty of things are going on around them, so they may easily forget something. The last thing they want to do at the end of the day is to open up their computer and type in everything that happened. That is why we make life easier for them with Edward. Our assistant tells them what to do and does some of it for them. At the end of the day, the salesperson automatically has more time for their customers and for themselves.


What are you better at than the competition?

We operate in a narrow market segment. Around the world, we have identified around ten other smart sales assistants. What distinguishes us is that we are not dependent on one language. At this point, Edward “speaks” Polish and English, but we can easily add other languages.

And furthermore, Edward is flexible. It is not a program that works the same way within every organization; we can quickly personalize it to the specific requirements of any given company. For example, in some companies there is a requirement that after each conversation with a customer, the salesperson should mark the categories of products they have discussed and make a note. Edward does that. In other companies, there may be no requirement to submit reports after each conversation, but salespeople may have to focus on meetings and fill in a special questionnaire during them. In that case, the questionnaire can be filled in by dictating it to Edward.

What are the biggest obstacles you are facing?

Educating the marketplace remains the biggest obstacle. Whoever creates an innovative solution must also create a market for it. Our work with customers therefore often consists of explaining what artificial intelligence in sales means, what its possibilities are, what value it will bring them, and why they should be interested in it at all. It is like working at the core of a client’s needs. For us, this is the biggest barrier, because before we ever get to the point of sale, we have to work very hard on educating people.

When did you feel proud of your achievements?

The feedback that we receive from our customers tells us that what we do makes sense. From time to time, Edward asks its users how they like working with it. That is how we know that more and more customers see value in this product. We’re very happy about that.

What are your plans for this year?

First of all, we want to increase the number of customers. For the time being, we focus mainly on the Polish and Indian markets. Maybe we’ll go into Britain. We are talking to a prospective representative right at this moment.

There is a lot of interest in Edward, especially among large clients such as banks and insurance companies. We are here to serve them.

What is your goal in the next five years?

We want our platform to become the standard for how salespeople work. We want to have a strong presence in Poland, because it is our main market, and to be present in markets such as Australia, Great Britain and the United States.



Even robots become more creative if you let them learn from their own experiences

At the Robotics: Science and Systems conference (RSS) in Freiburg, robotics researchers presented new ideas for making robots’ independent learning more efficient. Leaving out the right elements is the most important thing.

From the point of view of AI and robotics research, providing robots with precise motion sequences that they can repeat reliably and without any problems is comparable to a digital Stone Age. Today, what matters is equipping robots with algorithms that allow them to find their way around the unknown as independently, flexibly and quickly as possible. The company motto of DeepMind, an AI company of the Alphabet Group, sums up the associated promise nicely: Solve intelligence. Use it to make the world a better place. Anyone who manages to reproduce intelligence digitally doesn’t have to worry about the rest.


However, the actual problems researchers are currently dealing with seem rather mundane: a robot arm is supposed to learn a kind of wiping action that lets it throw various objects into a box and sort them in the process, a computer is supposed to steer a vehicle accident-free on a road, and so on.

There are at least two basic problems: The physical properties of the objects involved are usually not 100 percent known, and the solution sequences predefined by trainers are often only partial solutions and not exactly perfect either. Therefore, robots should learn to find their own solution in inadequately defined situations by trial and error and assemble the promising elements into a new and better solution. Reinforcement learning, meaning interactive experimentation with positive reinforcement of the most successful experiments, is usually the method of choice. The decisive factor is the choice of reward system. After all, one wants to prevent endless trial and error and not rule out successful trials from the beginning.
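As an illustration of the reinforcement-learning loop just described, here is a minimal, self-contained toy sketch (not the researchers’ code; the actions and reward values are invented): an agent repeatedly tries actions, receives noisy rewards, and reinforces the most successful ones while still exploring, so that promising trials are not ruled out from the beginning.

```python
import random

# Toy reinforcement-learning loop: an agent learns by trial and error
# which of several actions yields the highest (noisy) reward.
# The environment and reward values are invented for illustration.

TRUE_REWARDS = [0.2, 0.5, 0.9, 0.1]  # hidden quality of each action
EPSILON = 0.1                        # exploration rate
estimates = [0.0] * len(TRUE_REWARDS)
counts = [0] * len(TRUE_REWARDS)

random.seed(0)
for trial in range(2000):
    # Explore occasionally so successful actions are not ruled out early
    if random.random() < EPSILON:
        action = random.randrange(len(TRUE_REWARDS))
    else:
        action = max(range(len(TRUE_REWARDS)), key=lambda a: estimates[a])
    reward = TRUE_REWARDS[action] + random.gauss(0, 0.1)  # noisy feedback
    counts[action] += 1
    # Incremental average: positive reinforcement of successful trials
    estimates[action] += (reward - estimates[action]) / counts[action]

best = max(range(len(TRUE_REWARDS)), key=lambda a: estimates[a])
print("learned best action:", best)
```

The choice of reward signal is exactly the “decisive factor” mentioned above: with too little exploration, the agent would lock onto the first action that happens to score well.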


A robot from a team of researchers from Princeton, Google, Columbia and MIT has now succeeded in deriving control parameters for gripping and throwing different objects based on visual observations using the trial-and-error method. Many challenges are hidden in this process: The robot must recognize objects and their position in the messy pile, grab, accelerate, and release them at the right moment in the right place. Mass and weight distribution of the objects are just as unknown as their flight characteristics.

The possibilities of such a solution are tempting: Robots that perform similar tasks in industry could become significantly faster and multiply their maximum reach by throwing.

The newly developed TossingBot uses its own observations to correct the predictions suggested by a simple physical throwing model and independently optimizes the grip position, throwing speed and release point. In order to throw an average of 500 objects per hour into the correct boxes, 15,000 test attempts were necessary. After that, it tossed the objects without any further errors.
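The principle of correcting a simple physics model with one’s own observations can be sketched in a few lines. This is a deliberately simplified illustration under invented assumptions (ideal 45-degree throws, a fixed 15% unmodeled loss), not the actual TossingBot code:

```python
import math

# Residual-physics sketch: a simple ballistic model proposes a throwing
# velocity, and a learned correction factor is refined from observed
# landing errors. The "real world" loss of 15% is invented.

G = 9.81

def physics_velocity(target_range):
    # Ideal projectile at 45 degrees: range = v^2 / g
    return math.sqrt(target_range * G)

def simulate_landing(velocity):
    # The real throw falls 15% short of the ideal model's prediction
    return 0.85 * velocity ** 2 / G

correction = 1.0            # learned multiplier on the physics prediction
LEARNING_RATE = 0.3
target = 2.0                # desired landing distance in metres

for attempt in range(50):
    v = correction * physics_velocity(target)
    landed = simulate_landing(v)
    # Nudge the correction in proportion to the relative landing error
    correction *= 1 + LEARNING_RATE * (target - landed) / target

final = simulate_landing(correction * physics_velocity(target))
print("landing distance after training:", round(final, 3))
```

After a few dozen trial throws the learned correction compensates for the unmodeled loss, which mirrors the idea of starting from a crude physical prior instead of learning from scratch.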


Robots don’t have to start from scratch. Trainers can provide the device with different solutions. The typical issues with so-called imitation learning: the number of training sessions required should be kept low, the robot should develop solutions that are better than the initial input, and the system should not be sensitive to extremely bad input.

A research group at King Abdullah University in Thuwal, Saudi Arabia, proposes a method called OIL, Observational Imitation Learning. OIL delivers significantly more solid results than the mere imitation of more or less good training sessions and thus achieves results faster than learning systems that are primarily based on rewards. Although those might not require human training at all, they often do not reach their full potential since the reward system leaves too much leeway.

This is why OIL evaluates input and only adopts the most successful sequences, for example when driving a car over a test track. This way, the input of a large number of trainers can be processed, which also makes it possible to figure out different strategies. At the same time, OIL does not waste time exploring meaningless options, as the trainers, in contrast to autonomous learning systems, exclude them from the very beginning.
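The core selection idea can be illustrated with a toy sketch (trainer names, actions and scores are invented; the real OIL method evaluates action sequences during learning rather than filtering a static list):

```python
# Toy sketch of the selection idea behind observational imitation
# learning (OIL): evaluate trainer demonstrations and imitate only the
# most successful ones, so extremely bad input is excluded from the start.

demonstrations = [
    {"trainer": "A", "actions": [0.9, 0.8, 0.7], "score": 62.0},
    {"trainer": "B", "actions": [0.5, 0.4, 0.9], "score": 88.5},
    {"trainer": "C", "actions": [0.1, 0.2, 0.1], "score": 15.0},  # bad input
    {"trainer": "D", "actions": [0.6, 0.7, 0.8], "score": 80.0},
]

def select_best(demos, keep_fraction=0.5):
    """Keep only the top-scoring demonstrations as imitation targets."""
    ranked = sorted(demos, key=lambda d: d["score"], reverse=True)
    keep = max(1, int(len(ranked) * keep_fraction))
    return ranked[:keep]

training_set = select_best(demonstrations)
print([d["trainer"] for d in training_set])
```

Because only the best demonstrations become imitation targets, a single bad trainer cannot drag the learned behavior down, which is the robustness property described above.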

The team used the algorithm to control a drone and a simulated car. Compared to other algorithms and human pilots, OIL was extremely successful. For example, the expert human driver was able to steer the car a little faster along the route but made more than twice as many mistakes as OIL. Somewhat perplexing, though: OIL also made mistakes.

Study using AI: men’s and women’s brains are different

First of all: Every brain is different. Not only are there differences between the brains of men and women, but brains can also be distinguished based on their age, variability regarding their individual personality traits, performance differences or cognitive impairments. Additionally, there are other factors that influence individual variability in the brain. These include hormone fluctuations, temporal rhythms, changes in motivation and other internal and external factors. It is only when all these are taken into account that individual prognoses about variations in character traits, cognitive performance, human behavior as well as clinical pictures are possible.

As you can see, the brain is quite complex. Nevertheless, we have to start somewhere. Jülich researchers led by Dr. Susanne Weis have successfully used artificial intelligence (AI) to train self-learning software to detect whether an fMRI scan shows a female or a male brain. Although there have already been several studies on the differences between male and female brains, these are controversial due to the very small amounts of data involved. Factors that are not obvious can easily falsify the results.

“Our methodology using artificial intelligence, on the other hand, produces very trustworthy results”.

…, explains Dr. Susanne Weis from the Institute of Neuroscience and Medicine at the Research Center Jülich and the Institute of Systemic Neurosciences at the Heinrich-Heine-University Düsseldorf.


According to the current study, female and male brains differ particularly in the functional connectivity of certain areas of the cingulate gyrus (part of the limbic system), the precuneus (important for visual perception), and the medial frontal cortex (important for controlling situational actions and regulating emotional processes). These networks are essential to speech function, the processing of emotions, and social perception.

The brain researchers led by Susanne Weis and Prof. Simon Eickhoff initially used brain scans of 434 volunteers between the ages of 22 and 37 of the “Human Connectome Project” for their study. The brain scans – more precisely: the functional magnetic resonance imaging (fMRI) – reveal areas in the brain that are active at the moment of acquisition and interact with each other. The subjects let their thoughts wander during the recordings. Weis:

“Our results are based exclusively on the activity at rest, meaning the subjects did not have to work on any specific tasks. It is therefore about the intrinsic, task-independent functional organization of the brain”.

And not, as some might have liked to see confirmed, about differences in solving cognitive, mathematical or creative tasks.


The researchers trained their learning software to assign the sex of the test persons to the neuronal connection patterns. The scientists also gave the software continuous feedback on the extent to which its results were correct. This enabled the AI to gradually improve its mathematical model. The scientists then used the AI to predict, from fMRI images, the sex of 310 other Human Connectome Project volunteers and 941 of the 1,000-brain-study volunteers. Its predictions were accurate around 70 percent of the time.
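The training loop described here – predict, receive feedback, adjust the model – can be illustrated with a minimal toy classifier. The data below are synthetic and have nothing to do with real fMRI connectivity; the sketch only shows the feedback principle, not the Jülich software:

```python
import random

# Toy sketch of feedback-driven training: a linear classifier receives
# synthetic "connectivity" feature vectors, predicts a binary label,
# and is corrected whenever the feedback says it was wrong.

random.seed(1)

def make_sample():
    label = random.choice([0, 1])
    # Two invented connectivity features whose means differ by group
    base = [0.3, 0.7] if label else [0.6, 0.4]
    features = [b + random.gauss(0, 0.15) for b in base]
    return features, label

weights, bias = [0.0, 0.0], 0.0
for _ in range(5000):
    x, y = make_sample()
    pred = 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0
    error = y - pred                 # feedback: -1, 0 or +1
    for i in range(2):
        weights[i] += 0.01 * error * x[i]
    bias += 0.01 * error

# Evaluate on fresh synthetic samples
test_set = [make_sample() for _ in range(1000)]
correct = sum(
    (1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0) == y
    for x, y in test_set
)
print("accuracy:", correct / 1000)
```

As in the study, the model never sees an explicit rule; it only accumulates corrections until the connection patterns separate the two groups reasonably well.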

“The study results show that areas in the brain are linked differently in women than in men”

…says Susanne Weis. She stresses:

“The results, however, do not allow conclusions such as ‘Women are better at dealing with feelings’.”


The question of what the reasons are for these differences in the brain must also remain open: both biological and acquired causes are conceivable, for example through the way one was brought up. Therefore, she once again underlines that “the male” or “the female” brain does not exist, and

“…that the individual differences in the brain are influenced by many more factors than just biological gender, such as social influences, hormones,  and gender identity.”

In the current study, Weis’s research team initially considered gender to be purely binary. In principle, they could also use the applied methodology to investigate the extent to which inter- or transsexuality, for example, is reflected in the networks in the brain.

“However, this would require brain scans and data from a larger number of inter- or transsexuals. But extensive studies are not available yet”

…says Weis. The scientist also has another research goal in mind:

“We want to find out which factors modulate gender differences in brain organization. We are particularly interested in the influence of naturally fluctuating hormones – such as progesterone, estradiol and testosterone – and of self-perceived gender identity.”

The study by the scientists from Jülich, Düsseldorf and Singapore recently appeared in the journal “Cerebral Cortex”. The original publication can be read here.

Tomorrow is good: Genuine intelligence

This week the TU/e announced that it was taking a big step forward in the development of AI by investing €100 million in scientists and laboratories over the next five years. I was personally honored with the invitation to set up this “Eindhoven Artificial Intelligence Institute”, EAISI, together with a wonderful team of professors and specialists.

Due to my experience with smart cars and my contacts at Singularity University, I am often exposed to AI. I have written here many times about my vision of the division of roles between humans and machines: artificial intelligence for the fine work inside the lines, and human beings with their genuine intelligence for coloring outside the lines. If we allow robots, which will soon be ten times smarter and stronger than ourselves, to interpret rules flexibly, that seems to me a dangerous acceleration towards the end of humanity.

So for the time being, an autonomous car drives really well in cooperation with a chauffeur of flesh and blood, and the human/computer combination even outperforms a chess computer on its own. A self-steering company fares less well than a company with a human leader. Genuine intelligence ultimately makes all the difference.

As artificial intelligence moves from cyber-only into the real world, knowledge of high-tech systems, human-machine interaction and data is essential. This is when AI wanders into our Brainport garden, because that’s where our traditional strengths lie.

It will be interesting times, yet busy times too. That’s why I’m obliged to stop, at least temporarily, with my column at Innovation Origins. Having started out as a columnist at E52, I have seen IO grow into a flourishing platform for high-tech news – my compliments to the entire editorial team! It was a privilege for me to be able to make a minor contribution to this. Thanks to all the readers for all of their reactions, especially to those readers who disagreed with me. Because those who color outside my lines are the ones I learn from most.

100 million euros, 50 professors for new Artificial Intelligence institute in Eindhoven: EAISI

EAISI, Frank Baaijens, Carlo van de Weijer © Bart van Overbeeke

by Norbine Schalij, Cursor

TU Eindhoven is rapidly setting up an institute that is needed to create awareness of its education and research in the field of artificial intelligence among future students, the best researchers, the business sector and (European) financiers. This EAISI, led by Carlo van de Weijer (director of TU/e’s Strategic Area Smart Mobility), will be launched on September 2, 2019. His first tasks will be to attract fifty new full and associate professors and to find suitable accommodation.

As of next academic year, TU/e will have its own AI institute, so that everyone with an interest in artificial intelligence knows that they have to go to Eindhoven for education and research in this field. The institute will be known as the Eindhoven Artificial Intelligence Systems Institute. Its acronym EAISI sounds easy, according to dean Frank Baaijens.

“AI is relevant to every research field at TU/e, and it is crucial to all future high-tech systems.” Baaijens says that on further consideration, there seemed to be serious overlap between three of the envisaged CRTs (Cross Research Themes, described in the Strategy 2030, ed.), and that is why the university decided to quickly set up an AI institute. These three CRTs will be part of the institute: Complex High Tech Systems, Data-driven Intelligent Systems, and Human-centered Systems and Environments.

Rapid Pace

The process of setting up this institute takes place at a rapid pace, despite the busy schedules of the professors who are involved. Carlo Van de Weijer: “I was asked for this a few months ago. This is the first presentation to the TU/e community, we still need to fill out the details. A team was set up, but we are inviting everyone to share their ideas with us. I mostly see motivated people who are willing to clear their schedule for this. TU/e is making a strong commitment with this hundred-million-euro investment.”

EAISI’s scientific mission is to collect and analyze data and to make real-time decisions in safety-critical situations based on that data. Such advice could have a serious impact on people’s lives, as with self-driving cars and care robots. “That is exactly the specific role we see for TU/e. In the past, AI could be found primarily in the cyber world, at Google, Spotify and other consumer platforms. Now, AI is evolving in the direction of fields in which TU/e has always been successful, such as High Tech Systems and Human Technology Interaction. At EAISI, the machines become intelligent,” says Van de Weijer.

Wijnand IJsselstein is a member of the scientific team at EAISI. Photo © Bart van Overbeeke

The intention is to attract fifty full professors, in addition to the hundred scientists who are already working in this field at TU/e. New and current employees will sign their contracts with the departments. When asked whether all new professors will be women, Van de Weijer replies with a smile that it is entirely up to the departments to make that decision.


What Van de Weijer would like most is to see his student teams – AI is relevant to most of them – and the education and research in this field together in one location. The Laplace building, which is currently being renovated for educational purposes, will not be able to accommodate the Institute for another two years. That is why EAISI will start at the Gaslab on September 2.

“Our institute will be a major customer in terms of computing power, so there is a nice symbolism to a location in the university’s former computing centre. Our goal is to create quite a stir there, an atmosphere comparable to the one in TU/e innovation Space. The second floor of Laplace needs to become an AI playground, with students and robots on the monkey bars,” says EAISI’s director.

But it doesn’t stop just with play. In five years’ time, the institute will show results in several different fields. “Each year we want to raise thirty million euros from outside TU/e for research. We expect that the money will come from a collaboration between companies (plans have already been discussed with Philips, NXP and ASML), from the Dutch Research Council, and from European subsidy providers. A budget has been created for fifty full professors, who we will have to find within five years. That will be quite a challenge because AI specialists are in high demand, but we’re not afraid to take up that challenge.”

A hundred and fifty interested TU/e students and staff members heard the first plans last Monday afternoon. Next month, Van de Weijer and chairman of the Executive Board Robert-Jan Smits will join Prime Minister Mark Rutte, among others, on a trade mission. “We will do all we can to present our EAISI plans during that mission. The brochures aren’t finished yet, but I will be carrying new business cards.”

More precise weather forecast thanks to neural networks


Weather forecasts often change daily and even hourly. And depending on which channel you watch on TV or which app you use on your mobile phone, there are immense differences for the same day. However, it is not the inability of the meteorologists but the “chaotic system” atmosphere that is to blame for the deviations or even false predictions. “Physical properties such as temperature, humidity or cloudiness change constantly,” explain scientists from the Karlsruhe Institute of Technology (KIT). It can happen that thunderstorms, sun, cold and heat follow each other on one day – like the weather this May.

Meteorologists are therefore faced with the challenge of predicting this chaos and making reliable statements. They assume current measurements in the atmosphere and simulate alternative scenarios. For example, they calculate how changes in temperature or humidity could affect the weather and compare up to 50 scenarios for each parameter. “If the results are similar, this indicates that a forecast with these values is relatively reliable and that the state of the atmosphere in this area is stable and predictable,” says Dr. Peter Knippertz of the Institute for Meteorology and Climate Research at KIT.
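The comparison of scenarios can be illustrated with a tiny sketch: the spread of the ensemble serves as a reliability indicator, with the temperature values invented for illustration.

```python
import statistics

# Sketch of the reliability check described above: if the alternative
# scenarios agree (small spread), the forecast is considered stable and
# predictable; if they diverge, it is not. Values are invented.

stable_scenarios = [19.8, 20.1, 20.0, 19.9, 20.2]    # degrees C
chaotic_scenarios = [14.0, 22.5, 18.0, 25.0, 16.5]   # degrees C

spread_stable = statistics.stdev(stable_scenarios)
spread_chaotic = statistics.stdev(chaotic_scenarios)
print("stable spread:", round(spread_stable, 2))
print("chaotic spread:", round(spread_chaotic, 2))
```

A small spread, as in the first set, indicates that the atmosphere in this region is in a predictable state; a large spread warns the forecaster that any single scenario is unreliable.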

Nevertheless, this method does not guarantee truly reliable predictions. “The computer scenarios cannot map some physical relationships with the necessary depth of detail or spatial resolution,” explains Dr. Sebastian Lerch from the Institute of Stochastics at KIT, who works closely with Knippertz. For example, temperature predictions were always too low at certain locations and too high at others, because “local conditions, some of which are time-variable, cannot be given to the models”. Therefore, it is necessary to post-process the results of the simulations with elaborate statistical procedures and expert knowledge in order to obtain better forecasts and probabilities of occurrence for weather events.

© Sebastian Lerch, KIT


“Most weather forecasts today are based on physical models of the atmosphere calculated on supercomputers. In order to take into account uncertainties (e.g. in initial conditions or details of the physical models), ‘ensemble simulations’ are used, meaning several simulation runs, for example with varying initial conditions,” Lerch describes. “However, the predictions of these ensemble simulations usually show systematic errors that have to be corrected by statistical methods. Based on predictions and observations from the past, statistical models are used to correct the errors. In the past, only ensemble predictions of the target variable were used: if you want to correct the temperature predictions, only temperature predictions were used as input.”
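The classical correction of the target variable alone can be sketched as a simple linear post-processing step, with past forecast/observation pairs invented for illustration (real methods such as EMOS also correct the forecast uncertainty, not just the mean):

```python
# Minimal sketch of classical statistical post-processing: past
# temperature forecasts and observations fit a linear model that removes
# the systematic error of the forecast. All numbers are invented.

past_forecasts = [18.0, 21.0, 15.0, 24.0, 19.0]   # ensemble-mean temps (degrees C)
past_observed = [16.5, 19.2, 13.8, 22.0, 17.4]    # what actually happened

n = len(past_forecasts)
mean_f = sum(past_forecasts) / n
mean_o = sum(past_observed) / n

# Least-squares fit: corrected = a * forecast + b
a = sum((f - mean_f) * (o - mean_o)
        for f, o in zip(past_forecasts, past_observed)) \
    / sum((f - mean_f) ** 2 for f in past_forecasts)
b = mean_o - a * mean_f

new_forecast = 20.0
corrected = a * new_forecast + b
print("raw forecast:", new_forecast, "-> corrected:", round(corrected, 2))
```

Because this model sees only temperature, it cannot exploit pressure, cloud cover or wind, which is exactly the limitation the neural-network approach below is designed to overcome.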

Lerch and his colleagues from meteorology and mathematics at KIT have now developed a new method based on artificial intelligence to avoid errors in weather forecasts. “We have developed an approach that provides better predictions than established standard methods,” said Lerch. This approach involves neural networks that process information according to the model of the brain. The computer programs are trained to process certain data optimally. Like the human brain, the neural networks collect “experience” during training and can thus continuously improve – but much faster than a human brain. “The most complex network-based models take about 25 minutes to train (on a single Nvidia Tesla K20 GPU),” emphasizes Lerch. Ideally, the networks can then precisely determine, for example, the probability of local weather events such as thunderstorms occurring.

Weather models also produce predictions for a variety of other variables that influence temperature, such as pressure, cloud cover, solar radiation and wind, Lerch continues. “The concrete functional dependencies of the temperature prediction errors on these input variables are very complex and non-linear, and therefore cannot be described in a simple functional form even by meteorological experts. While standard methods require such a functional description of the dependencies for the formulation of the models, the neural network-based models we propose ‘learn’ these dependencies independently, without the specification of concrete functional correlations”.

© Pixabay


To train the network, the scientists used weather data from Germany, which 537 weather stations had recorded from 2007 to 2016. Input parameters for the neural network included cloud cover, soil moisture and temperature. The researchers then compared the forecasts made by the network with forecasts from established techniques. “Our approach has made much more accurate predictions for almost all weather stations and is much less mathematically complex,” summarizes Lerch. And can it be said how much more accurate the forecasts are than before? Yes, says the scientist.

The quality of the predictions is measured using the continuous ranked probability score (CRPS). “This is a mathematically grounded evaluation rule for predictions in the form of probability distributions that takes into account the uncertainty quantified in the prediction. Averaged over all days and stations, a CRPS improvement of about 3% is achieved compared to the best standard procedures,” he explains. “The network-based models provide the best predictions at about 80% of the 537 observation stations. In paired comparisons, the observed CRPS differences between network-based models and the best standard procedures are statistically significant at between 30% and 67% of the observation stations.”
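For a Gaussian predictive distribution, the CRPS has a standard closed form from the forecasting literature; here is a minimal sketch (the example values are invented):

```python
import math

# Closed-form CRPS of a Gaussian forecast N(mu, sigma^2) against an
# observation y; lower values are better, 0 would be a perfect forecast.

def crps_gaussian(mu, sigma, y):
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return sigma * (z * (2 * cdf - 1) + 2 * pdf - 1 / math.sqrt(math.pi))

# A sharp, well-centred forecast scores better than a vague one:
sharp = crps_gaussian(20.0, 1.0, 20.5)   # confident, nearly correct
vague = crps_gaussian(20.0, 5.0, 20.5)   # same centre, very uncertain
print("sharp:", round(sharp, 3), "vague:", round(vague, 3))
```

Because the score rewards both accuracy and sharpness, it is well suited for comparing the probabilistic forecasts produced by the network-based and standard post-processing models.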

In addition to their learning ability and the fact that they can recognize non-linear relationships independently, neural networks have another advantage: they can process large amounts of data faster than previous methods and human experts. “The two previous standard methods, which can process the same number of predictors as the most complex network models, took 48 minutes and 430 minutes to train (i.e. about two and 17 times as long),” says Lerch. A direct comparison is difficult due to the different programming environments (Python for the network models, R for the standard procedures) and hardware (GPU for the network models, CPU for the standard procedures), but: “For the first time, we were able to show that neural networks are ideally suited for improving weather forecasts and obtaining information about meteorological processes.”

You can find out more about Artificial Intelligence here.

Mobile robots help with rescue operations – thanks to their artificial intelligence

Rescue operations, for example after earthquakes, can be life-threatening for the rescuers. Again and again, courageous men and women put their lives in danger when they go in search of survivors lying somewhere in the rubble. Equally dangerous are operations in fires, where firefighters are repeatedly injured or even killed in the course of their work. And think of the cave diver who died in Thailand last summer during the rescue of a group of young people trapped in a cave.

In such dangerous situations, people can increasingly get help from robots. During rescue operations, firefighting operations or inspections in the deep sea, mobile, self-learning robots can relieve people of dangerous or unhealthy activities. Depending on the external circumstances, such operations – deep-sea research, for instance – would not only be more economical with robots; sometimes robots are the only way to carry them out at all.

© Karlsruher Institut für Technologie

Active support

“Learning systems work independently, alone or in hybrid teams with other learning elements or as autonomous systems with a human being. As assistants, they assess risks to people, but they are also able to act adequately and independently in a given situation”, says the report of the Learning Systems Platform, which was presented at the Karlsruher Institut für Technologie (KIT).

In their presentation, the researchers presented two possible application scenarios for such robots. “The use of artificial intelligence is associated with enormous opportunities for our society. Especially in disaster management, the decommissioning of nuclear power plants or in maritime areas, there are great opportunities to effectively support specialists with artificial intelligence,” says Professor Holger Hanselka, president of the Karlsruher Institut für Technologie and member of the steering committee of the Learning Systems platform.

The platform has set up an interdisciplinary working group to discuss how learning systems for life-threatening environments can be developed and used for the benefit of people. “IT security will be extremely important, especially for autonomous systems we use in crisis situations. Therefore, KIT’s research focuses not only on the protection of the external borders of a complex IT system, but also on the protection of each individual component, and KIT brings its IT security expertise to the Learning Systems platform.”

The Working Group on Life-threatening Environments assumes that in about five years’ time, Artificial Intelligence will be able to support people in disaster response and in reconnaissance and maintenance missions. In the “Rapid rescue assistance” application scenario, scientists have demonstrated how AI-supported robot systems can support firefighters on the ground and from the air in the event of a chemical factory fire.

GAMMABOT: Universal, mobile robot for multisensorial environmental detection. It captures precise 3D spatial geometries, thermal images and gamma spectrometry. © Karlsruher Institute of Technology

Using multi-sensor technology, the systems can “quickly create a detailed picture of the situation, establish a communication and logistics infrastructure for rescue work, search for injured people and identify sources of danger,” say KIT scientists. In the application scenario “autonomous underwater operation”, robotic underwater systems maintain the foundations of an offshore wind turbine. They can navigate the deep sea on their own and, if necessary, request support from divers or remote-controlled systems.

Technical obstacles

The researchers admit that there are still a number of obstacles to be overcome before such systems can actually be used. One of these obstacles is autonomous learning in unfamiliar environments, and the other is the collaboration of independent robots with humans.

“What if several people need help, but the robot can only take care of one person?”

“The demands placed on learning systems are particularly high in hostile environments: they must be intelligent and at the same time robust against extreme conditions and be able to function independently under unpredictable conditions,” says Jürgen Beyerer, head of the platform’s Working Group on Life-threatening Environments. “Until then, AI-based systems can be remotely operated by emergency services and the data collected can be used for the development of intelligent functions. Gradually, the systems will achieve a higher degree of autonomy and can further improve themselves through machine learning.”

Bureaucratic hurdles

In addition to the technical challenges, some tough questions must also be clarified. Who is responsible? What about liability and insurance? What happens if these systems cause damage? How do you protect yourself against theft? These are just some of the questions that arise before such systems are actually used. The issue of the regulation of property rights also arises in international application areas. “For example, under current international maritime law, unmanned systems may be possessed by the finder in international waters.”

MANOLA / MAFRO – a universal climbing robot with trolley and carrier that can thermally remove surfaces and “measure freely” using a 10 kW diode laser. The main field of application is nuclear installations. The climbing robot is transported to its destination in a transport trolley. © Karlsruher Institute of Technology

In addition, the processing of personal data may lead to privacy and data protection problems. “Such cases may occur when learning systems are used in disaster relief or firefighting operations and data of affected people are picked up and passed on.” In addition, a formal framework “with technical, legal and ethical levels” should also be found for the situation where several people need help, but the robot can only take care of one person.

Yet even if solutions are found to all these questions, one thing is certain despite all artificial intelligence: human intelligence will remain indispensable. KIT researchers are well aware of this. “There is no doubt that mankind, as an operational force and decision-maker, will continue to be irreplaceable in the future – especially in operations to save human lives.”


Towards improved earthquake analysis with artificial intelligence


Worldwide, hundreds of earthquakes occur every day, more than a million every year. Most of them, however, have a magnitude of 1 to 2 on the Richter scale and are so minor that they can only be detected by sensitive instruments. Only from magnitude 4 upwards do earthquakes cause noticeable vibrations, while quakes of magnitude 5 can damage buildings. Even earthquakes in this range occur more than 10,000 times each year.

Earthquakes with a magnitude of more than 7 happen monthly; those above magnitude 8 occur roughly once a year. The strongest earthquake ever measured was the Valdivia quake of May 22, 1960 in Chile: it had a magnitude of 9.5 and triggered a 25-metre-high tsunami. In Germany, too, the earth trembles almost daily. Between January 1 and May 30, 2019, 114 earthquakes with magnitudes between 1 and 2.8 were registered there.
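The magnitude scale is logarithmic, so the jump from one magnitude to the next is far larger than it looks. As a back-of-the-envelope illustration (using the standard Gutenberg-Richter energy relation, not a figure from the article), the released seismic energy grows by a factor of roughly 32 per whole magnitude step:

```python
def energy_ratio(m_big: float, m_small: float) -> float:
    """Ratio of seismic energy released, from the Gutenberg-Richter
    relation log10(E) ~ 1.5 * M + const (the constant cancels)."""
    return 10 ** (1.5 * (m_big - m_small))

# One whole magnitude step releases roughly 32 times more energy
step = energy_ratio(5.0, 4.0)

# The 1960 Valdivia quake (magnitude 9.5) compared with a
# building-damaging magnitude-5 quake
valdivia_vs_m5 = energy_ratio(9.5, 5.0)
print(round(step, 1), f"{valdivia_vs_m5:.2e}")
```

On this scale, the Valdivia quake released several million times the energy of a magnitude-5 event.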

While earthquakes in Germany do not cause any damage, in many countries of the world they cost people not only their homes and possessions but all too often their lives. And it is not always the main shock that is strongest and causes the most damage; often the relatively weak aftershocks have far more catastrophic effects. Much of the damage could certainly be minimised, and lives saved, if these quakes could be predicted.

The optimal analysis of earthquakes used to require a high degree of human know-how. But with KIT’s neural network, more data can be analysed in less time. © Manuel Balzer, KIT

Neural network to locate the epicentre

There are essentially two types of earthquakes: tectonic and volcanic. Tectonic earthquakes are far more frequent than volcanic ones and also much stronger. In such quakes, seismic waves travel through the earth, consisting mainly of compressional or primary waves (P-waves) and shear or secondary waves (S-waves). The faster P-waves arrive at a seismological station first, followed by the slower S-waves. Both are recorded in seismograms.
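The different speeds of P- and S-waves are exactly what makes a single-station distance estimate possible: the longer the gap between the two arrivals, the farther away the quake. A minimal sketch, with typical crustal velocities as assumed placeholder values (not figures from the article):

```python
def epicentral_distance_km(sp_delay_s: float,
                           vp_kms: float = 6.0,
                           vs_kms: float = 3.5) -> float:
    """Distance from the S-P arrival-time difference:
    dt = d/vs - d/vp  =>  d = dt * vp * vs / (vp - vs)."""
    return sp_delay_s * vp_kms * vs_kms / (vp_kms - vs_kms)

# A 10 s S-P delay with these velocities corresponds to 84 km
d = epicentral_distance_km(10.0)
print(d)
```

With three or more stations, such distances can be intersected to locate the epicentre, which is why precise arrival times matter so much.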

Despite constantly improving technology, earthquake prediction is still not completely reliable. But researchers at the Karlsruhe Institute of Technology (KIT) have now found a way to precisely locate earthquake centres. They use a neural network to determine the arrival time of seismic waves. In the journal Seismological Research Letters, they explain that artificial intelligence can evaluate data just as accurately as an experienced seismologist.

The scientists explain that, in order to localise seismic events precisely, it is important to determine exactly when the many earthquake waves arrive at the seismometer station – the so-called phase onset. Only then are further seismological evaluations possible. It could also become possible to predict aftershocks, which sometimes cause more serious damage than the first quake. Precisely locating earthquake centres would also make it easier to distinguish the physical processes underground, which in turn would allow scientists to draw conclusions about the structure of the earth’s interior.

So far, data from seismological stations (triangles) in Chile were used to reconstruct the locations of epicentres (circles). © J. Woollam et al

Evaluation by AI more accurate than by seismologists

The evaluation of the seismograms – the so-called picking – is traditionally done by hand. This is not only very time-consuming; the seismologist’s subjectivity can also affect the accuracy. The picking algorithms developed for automatic evaluation so far have not matched the accuracy of manual picking by an experienced seismologist, because various physical processes influence the seismic wave field.
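A common classical picker of this kind is the STA/LTA trigger, which compares a short-term average of the signal energy with a long-term average and fires when the ratio spikes. A minimal sketch (the parameters are illustrative, and this is not necessarily the exact baseline the study used):

```python
import numpy as np

def sta_lta_pick(trace, sta=10, lta=100, threshold=5.0):
    """Short-term / long-term average trigger: return the first sample
    where the STA/LTA energy ratio exceeds the threshold, else None."""
    energy = np.asarray(trace, dtype=float) ** 2
    for i in range(lta, len(energy) - sta):
        lta_avg = energy[i - lta:i].mean()
        sta_avg = energy[i:i + sta].mean()
        if lta_avg > 0 and sta_avg / lta_avg > threshold:
            return i
    return None

# Synthetic trace: weak noise, then a much stronger "phase" from sample 500
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 0.1, 1000)
trace[500:] += rng.normal(0.0, 1.0, 500)

pick = sta_lta_pick(trace)   # triggers close to the true onset at 500
```

Such triggers work well on clean onsets but degrade on emergent arrivals and overlapping events, which is where learned pickers gain their advantage.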

Scientists at KIT’s Geophysical Institute (GPI), the University of Liverpool and the University of Granada have now shown that artificial intelligence can evaluate this data as accurately as humans. They used a convolutional neural network (CNN) and trained it with a relatively small data set of 411 earthquake events in northern Chile. The CNN then determined the onset times of unknown P- and S-phases at least as accurately as an experienced seismologist picking manually – and far more accurately than a classical picking algorithm.
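Conceptually, such a CNN is a stack of convolutions that turns a raw waveform into a per-sample probability for each phase class. A toy forward pass with random, untrained weights – purely to show the data flow, not the authors’ architecture – looks like this:

```python
import numpy as np

rng = np.random.default_rng(42)

def conv1d_same(x, kernels):
    """'Same'-padded 1-D convolution: x of shape (T,), kernels (C, K) -> (C, T)."""
    _, k = kernels.shape
    xp = np.pad(x, k // 2)
    return np.stack([np.convolve(xp, kern, mode="valid") for kern in kernels])

T = 1200
trace = rng.normal(size=T)                      # stand-in for a seismogram

h = np.maximum(conv1d_same(trace, rng.normal(size=(8, 7))), 0.0)  # conv + ReLU
w = rng.normal(size=(3, 8))                     # "1x1" output layer
scores = w @ h                                  # (3, T): noise / P / S scores
probs = np.exp(scores - scores.max(axis=0))     # per-sample softmax
probs /= probs.sum(axis=0)

p_onset = probs[1].argmax()                     # most P-like sample (meaningless
                                                # here, since nothing was trained)
```

After training on labelled seismograms, the argmax of the P-class probability becomes the automatic pick.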

“Our results show that artificial intelligence can significantly improve earthquake analysis – not only with large amounts of data, but also with limited data,” explains Professor Andreas Rietbrock of the GPI.

Also interesting:
The human brain as an inspiration for AI developers: What to do when artificial intelligence fails?
Prague gets new European institute for artificial intelligence
Artificial intelligence for the investigation of swarm behaviour

AI: Smart Clothes as instructors

The robot imitates the human’s movements and programs itself. Photo: Anne Schwerin

Until a few years ago, clothing served only to protect people, with fashion as a bonus. Meanwhile, however, our second skin can do more and more. Measuring body data such as pulse or calorie consumption with integrated sensors is old hat by now. Now clothing is also taking on teaching functions through artificial intelligence: on the one hand as a trainer for humans, on the other as a programmer for robots.

The latest development comes from Turing Sense. For over three years, a team of 27 engineers and competitive athletes worked on their vision of replacing complex video analysis of movements with digital technologies such as AI. Their goal: to teach complicated sports exercises in a timely, precise and effective manner. The result, officially launched recently, is a yoga outfit with built-in sensors that connects to a virtual yoga studio through an app. The app offers yoga videos by renowned instructors such as Brett Larkin, Kim Sin and Molly Grace, who lead the selected yoga course almost as if they were personally on site. The i-Double scans the student’s execution of the asana, the yoga posture, via Wi-Fi and displays it as an avatar on the mobile or TV screen, so that users can watch their likeness alongside the teacher while practising warrior, dog and co. As an interactive app, the i-Yogini also reacts to voice commands such as “Freeze” or “Show me the camera”. But here comes the real highlight: if the user asks “How does this look?”, the app corrects the yoga position where necessary. The workout can thus be adapted individually to one’s personal performance.

Of course, the high-tech clothing also meets the highest demands in terms of comfort and functionality – it is even washable. The outfit, called Pivot Yoga and consisting of a shirt and pants for $99, is currently only available in the USA and Canada. The app currently only works with iOS 11 and an iPhone 7 or higher. An Android app, as well as delivery to Europe, is planned.

Possibly precisely because demanding yoga only has the desired effect on body and soul when executed correctly, another clothing manufacturer specialised in smart clothing for yoga as early as 2017: Wearable X. With its Nadi X Pants, which also connect to an app via Bluetooth, the yoga student receives haptic instead of visual feedback. Ten tiny, individually adjustable vibrations at the hips, knees and ankles signal an incorrect position – and provide peace of mind when the position is correct. The Wearable X smart pants are currently available in the USA, Canada, the EU (plus Switzerland and Norway) and Australia/New Zealand; they work under iOS and cost $249.

The Dresden-based start-up Wandelbots is currently working on the exact opposite application of artificial intelligence. The company has developed software that enables robots to program themselves by imitating human movements – transmitted to them, for example, by a technician wearing smart clothes. This new technology is said to be 20 times faster and 10 times cheaper than conventional programming. It can be seen in action, for example, in VW’s Transparent Factory in Dresden.

Wandelbots’ focus is currently still on industrial robots. But once these have proven themselves in the first practical tests, the technology could become a groundbreaking innovation: the application is so simple that in the future everyone could program their own robot, even without background knowledge. In addition to industrial assembly, conceivable areas of application include use at home and in nursing care.
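The core idea behind programming by demonstration can be reduced to a few lines: record the poses of the demonstrating human, keep only the waypoints that represent actual movement, and hand that compact trajectory to the robot to replay. A deliberately simplified sketch (all names and thresholds are hypothetical, not Wandelbots’ software):

```python
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    z: float

def record_to_program(samples, min_step=0.045):
    """Keep only poses that moved at least `min_step` metres from the
    last kept waypoint, yielding a compact trajectory to replay."""
    waypoints = [samples[0]]
    for p in samples[1:]:
        last = waypoints[-1]
        dist = ((p.x - last.x) ** 2 + (p.y - last.y) ** 2
                + (p.z - last.z) ** 2) ** 0.5
        if dist >= min_step:
            waypoints.append(p)
    return waypoints

# A noisy demonstration: the hand hovers, then moves ~10 cm along x
demo = [Pose(0.001 * i, 0.0, 0.0) for i in range(20)] + \
       [Pose(0.02 + 0.01 * i, 0.0, 0.0) for i in range(9)]
program = record_to_program(demo)
```

The hovering phase collapses to a single waypoint, so the robot receives only the motion that matters; a production system would additionally smooth and time-parameterize the trajectory.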

Work 4.0: Working More Efficiently and Having More Fun with Artificial Intelligence


Every employee knows this story all too well. You are concentrating on a project, it is going fantastically, the ideas are simply flowing … and then a colleague interrupts, the phone rings or an email arrives. The concentration is gone, and it feels like an eternity before you are back on track. All this not only costs time but also builds up additional stress – often more than is bearable or healthy in the long run, on top of open-plan offices, time and performance pressure or constant availability.

As the Ärzteblatt reports, every second German citizen feels threatened by burnout, and almost nine out of ten Germans feel stressed by their work. More than 50 percent of employees at least occasionally suffer from back pain, persistent fatigue, inner tension, listlessness or sleep disorders. 53 percent of those surveyed say they sleep poorly, and 54 percent brood over their work.

Moreover, people do not always perform at the same level; everyone has different creative phases, not just writers or artists. On the other hand, people can become so absorbed in their work that they enter a state of deep concentration – the “flow” – in which they feel more comfortable and satisfied and work more efficiently.

The Karlsruhe Institute of Technology (KIT) has therefore set itself the goal, in the Kern project it coordinates (short for “Developing Competences and Using them Correctly in the Age of Digitization”), of establishing and maintaining this flow with the help of artificial intelligence. It is developing an assistance system that recognizes flow via heart rate or skin conductance thanks to AI, in order to shield the employee from disturbances and to build up competencies that promote flow.

“Automation and the progressive digitalization of value chains are rapidly changing the world of work”, says Professor Alexander Mädche of KIT. “Modern competence and education management must continuously support employees in the targeted development and deployment of their competences in the workplace”. The project is based on the assumption that people are most satisfied and most productive when they can carry out their job undisturbed and their skills optimally match its demands.

© Pixabay

All Employees have Competences

“Ultimately, all employees have competencies and want to develop them further in the digital world”, emphasizes Mädche. “A key innovation of the project is that we try to recognize when employees have skills deficits in their daily work. The idea is: if I notice that employees are not flourishing in their work – not reaching the state known as flow – then they need support with their tasks or competencies. This is exactly what the competence assistance system does.”

Put simply, Kern is about keeping this flow alive: ideally it is not interrupted but supported, so that the time spent in flow during work is maximized. “If you imagine, for example, a project manager who receives a lot of messages, the idea of the competence assistance system is to regulate this”, says Mädche. The system would then process notifications in such a way that the project manager is not interrupted during his time in the flow.
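The notification-regulating part of such a system can be sketched in a few lines: hold messages back while flow is detected, deliver the backlog once it ends. A hypothetical sketch, not the Kern project’s actual implementation:

```python
from collections import deque
from typing import Callable

class FlowAwareNotifier:
    """Defers notifications while the wearer is detected to be in flow,
    then delivers the backlog once flow ends (illustrative sketch)."""

    def __init__(self, in_flow: Callable[[], bool],
                 deliver: Callable[[str], None]):
        self.in_flow = in_flow
        self.deliver = deliver
        self.backlog = deque()

    def notify(self, message: str) -> None:
        if self.in_flow():
            self.backlog.append(message)   # defer during flow
        else:
            self.flush()
            self.deliver(message)

    def flush(self) -> None:
        while self.backlog:
            self.deliver(self.backlog.popleft())

# Usage: simulate a flow phase followed by a break
state = {"flow": True}
delivered = []
n = FlowAwareNotifier(lambda: state["flow"], delivered.append)
n.notify("mail from client")        # deferred
n.notify("build finished")          # deferred
state["flow"] = False
n.notify("standup in 5 min")        # flow over: backlog plus new message arrive
```

The real system would additionally rank messages by urgency, since some interruptions (a fire alarm, an escalation) must get through regardless of flow.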

To achieve this, however, flow must first be detected reliably. For this purpose, volunteers wear wearables such as a wristband or chest strap that measure, for example, heart rate or skin conductance. Because these physiological signals form very complex patterns that can vary greatly from person to person, new approaches from the field of AI are required to recognize patterns of flow in real time. With a neuroevolutionary deep-learning approach, a machine-learning method, a research group at KIT recently succeeded in identifying flow on the basis of physiological data.
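Stripped to its simplest form, recognizing flow from physiological data means turning the raw sensor streams into per-window features and feeding them to a trained classifier. A deliberately naive sketch – windowed features plus a logistic score; the KIT group’s neuroevolutionary deep-learning model is far more sophisticated, and all weights here are made up:

```python
import numpy as np

def window_features(heart_rate, skin_cond, win=30):
    """Mean and variability of each signal per non-overlapping window."""
    feats = []
    for i in range(0, len(heart_rate) - win + 1, win):
        hr = heart_rate[i:i + win]
        sc = skin_cond[i:i + win]
        feats.append([hr.mean(), hr.std(), sc.mean(), sc.std()])
    return np.array(feats)

def flow_score(features, weights, bias):
    """Logistic score in (0, 1) per window; in a real system the
    weights would come from training on labelled flow episodes."""
    return 1.0 / (1.0 + np.exp(-(features @ weights + bias)))

# Two minutes of fake 1 Hz data: steady pulse, slowly rising skin conductance
t = np.arange(120)
hr = 70 + 2 * np.sin(t / 10)
sc = 5 + 0.01 * t

feats = window_features(hr, sc)           # 4 windows x 4 features
scores = flow_score(feats, np.array([0.01, -0.5, 0.1, -0.5]), -0.5)
```

The per-person variability the article mentions is exactly why fixed weights like these fail in practice and a learned, adaptive model is needed.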

On this basis, the Kern project is developing the prototype of an AI-based competence assistance system (KAS), which is to handle interruptions such as e-mails and notifications situationally so that the flow is not disturbed. The system would also recognize if productive work is disturbed over a longer period – for example because the tasks no longer match the employee’s competence profile – and would in this case make suggestions for “personal competence development”.

“The Kern project conceives educational formats for both task accomplishment and strategic personnel development”, as KIT puts it. “These can range from short reports with everyday tips to a digital assistant and personal advice from a human expert”. Like a navigation system in a car that suggests ways around a traffic jam, the AI-based KAS provides situation-dependent recommendations for action, for example by suggesting concrete learning or work units. It is then up to the employee to decide whether or not to follow these recommendations.

© Pixabay

Helping Employees to Develop Further in their Everyday Working Lives

Despite all the advantages these new systems offer – supporting and advising employees in real time on the basis of physiological data – they also interfere with the privacy of these individuals. Mädche and his colleagues therefore attach great importance to data protection. “On the one hand, we need to see what data is shared with whom and when, and how we can ensure privacy. On the other hand, we also need to know how we can use intelligent methods to evaluate data.” Mädche is nevertheless firmly convinced that “AI-based KAS have great potential, but we have to understand and design them as socio-technical systems”.

The scientists explain that the KAS developed in the Kern project is intended to help employees develop their skills further in their everyday work, ideally precisely and interactively. “In this way, individual needs and company goals are to be taken into account equally, and a framework is to be created in which employees can continue their education economically and with motivation and build up their skills correctly.”

The Kern project is coordinated by KIT and carried out in cooperation with the partners SAP SE, TÜV Rheinland Akademie GmbH, Campusjäger GmbH, and B. Braun Melsungen AG. It is funded with 1.36 million euros by the Federal Ministry of Labor and Social Affairs (BMAS) as part of the New Quality of Work Initiative (INQA).

Also interesting:

Prague gets new European Institute for Artificial Intelligence


The Human Brain as an Inspiration for AI developers: what to do when Artificial Intelligence fails?

Actually, the purpose of artificial intelligence (AI) is to help people cope with complex tasks. AI systems can quickly analyse and interpret the large amounts of data generated by modern big-data and sensor technologies. They control vehicles or complex production processes.

When artificial intelligence fails

© Ravirajbhat154 via Wikimedia Commons.

But artificial intelligence makes mistakes and can be duped by childish tricks. In the USA, for example, misinterpretation of sensor data led to fatal accidents with autonomous cars. Chinese teenagers who want to fool AI-supported surveillance systems in shopping malls or inner cities wear carnival masks; the surveillance AI then dismisses them as a sensor error. When Canadian researchers had an AI identify objects in a living room and showed it the image of an elephant, the AI ceased to function: it became blind to objects it had previously identified correctly. Scientists at the University of Frankfurt are now investigating how AI systems can be made more reliable and, above all, safer. The team around computer scientist Prof. Visvanathan Ramesh is particularly interested in the criteria by which the reliability of AI systems can be assessed.

How artificial intelligence becomes more reliable

“Absolute security is impossible,” says Professor Ramesh. “In the past, the security of complex systems was proven by formal model-based design processes according to strict security standards during development, and by extensive system tests. That is about to change.” Data-driven machine-learning techniques are now widely used in AI development, and the AIs based on them deliver unexpected and unpredictable results – or they fail when they see an elephant where none was before and which they cannot find in their database.

From Ramesh’s point of view, security comes from thorough and extensive simulation and modelling of application scenarios. The problem, Ramesh continues, is to anticipate as many changes as possible and translate them into simulations. If you want to send an AI-controlled spacecraft to Mars, for example, you have to anticipate as many scenarios as possible and compare them with real data. Simulations are then created from the tested scenarios and used to teach the AI how to react to each situation.
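Anticipating scenarios can be sketched as systematically enumerating a scenario space, running the system under test in each one, and recording where it fails. Everything below – the parameters and the stubbed test – is hypothetical and only illustrates the workflow:

```python
from itertools import product

# Hypothetical scenario grid for a perception stack (illustrative only)
weather   = ["clear", "rain", "fog"]
lighting  = ["day", "dusk", "night"]
obstacles = ["none", "pedestrian", "unknown_object"]   # the "elephant" case

def system_under_test(scenario):
    """Stub: a real test would run the AI in a simulator for this scenario.
    Here we simply flag the combinations a naive system might miss."""
    return not (scenario["obstacles"] == "unknown_object"
                and scenario["lighting"] == "night")

failures = []
for w, l, o in product(weather, lighting, obstacles):
    scenario = {"weather": w, "lighting": l, "obstacles": o}
    if not system_under_test(scenario):
        failures.append(scenario)

# 27 scenarios in total; the failures list points at the gaps
# that the next round of simulations should cover
```

The failure list then drives the next iteration: exactly the scenarios the system cannot yet handle are simulated in more detail and used for retraining.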

Model for artificial intelligence: the human brain

Ramesh’s approach is to combine knowledge from computer science, mathematics and statistics with knowledge from fields dealing with the analysis of human abilities. These are neurosciences, psychology and cognitive sciences. The human brain serves as a model. With its learning architecture, it can handle a wide range of tasks in different situations and environments.

Ramesh has been working for 25 years on suitable methods for the development, formal design, analysis and evaluation of intelligent vision systems. These are optical sensors controlled by intelligent programs.

Design Principles for Reliable Artificial Intelligence

An AI system needs to recognize its environment accurately and also understand the differences between contexts – such as the difference between driving on an almost empty highway and driving in dense city traffic. How safe AI systems really are depends on whether they can make decisions that are plausible to people and can assess their own reliability. Above all, the systems must be able to explain their decisions at any time. For developers, it is important to clearly separate different areas – user requirements, modelling, implementation and validation – and to define the interfaces between them precisely.

The result is systems that actually identify a child wearing a dragon mask as such and are not thrown off by an elephant in the living room.

AEROBI: AI in practice

Ramesh and his team at the University of Frankfurt have refined and applied these principles over the past seven years. They have developed platforms for rapid prototyping, simulation and testing of vision systems. This included security systems, but also applications for the detection of brake lights in automobile traffic. Recently, as part of the EU project AEROBI, they developed a vision system for an autonomous drone. The drone is to inspect large bridge structures for damage. The Frankfurt scientists developed AI technologies with which the drone can navigate the airspace around the bridge and then go on to detect and classify fine cracks and other irregularities. AEROBI is coordinated by Airbus. The drone has so far been tested on two bridges.
