Tomorrow is Good: The Professor and the Politician

The professor and the politician sat at a table in a lunchroom in The Hague with cups of fresh mint tea. The politician had invited the professor to talk about the transparency of algorithms. She wanted to set strict rules for the use of algorithms by the government, with emphasis on the word “strict,” she added.

The politician said, “I want a watchdog that will check all the government algorithms,” words the professor clearly found unsavory. He also noticed that she had a preference for the words “rules” and “watchdog”, and for the expression “with an emphasis on…”.

The usefulness of a watchdog

By the time they had finished their first cup of tea, they had found that there are roughly two types of algorithms: simple and complex. Simple algorithms, they thought, translate rules into a kind of decision tree. On a napkin the politician drew blocks and lines to represent this, and as an example she cited the application for a rent allowance. She noted that there are already thousands of simple algorithms in use by the government.

The professor suggested that such algorithms could be made transparent relatively easily, but that this transparency would actually bring you back to the regulations on which the algorithm is based. Sipping his tea, he added: “So you could rightfully ask what the use of an algorithm watchdog would be in this case.”
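A simple algorithm of the kind they sketched on the napkin is nothing more than rules made executable. In the following sketch the rent-allowance thresholds are invented for illustration and are not the actual Dutch regulations:

```python
# Hypothetical rent-allowance check. The thresholds below are made up
# for illustration; the real regulation is more elaborate.
def rent_allowance_eligible(age: int, annual_income: float, monthly_rent: float) -> bool:
    if age < 18:                  # rule 1: applicant must be an adult
        return False
    if annual_income > 31_000:    # rule 2: income ceiling (invented value)
        return False
    if monthly_rent > 880:        # rule 3: rent ceiling (invented value)
        return False
    return True

print(rent_allowance_eligible(30, 25_000.0, 700.0))  # prints True
print(rent_allowance_eligible(16, 10_000.0, 500.0))  # prints False
```

Reading such code comes down to reading the regulation itself, which is the professor’s point: making this kind of algorithm transparent brings you straight back to the rules it encodes.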

At this point, the conversation stopped for a moment, but then they decided they agreed on this after all.

“B-uu-uu-t,” said the politician, looking ominous again, “then there are the complex algorithms. Neural networks and all that.”

The professor looked thoughtfully out the window, since that seemed like the right thing to do, then replied that neural networks are about as transparent as the human brain. Even if you could lay a neural network open, you wouldn’t be able to derive anything meaningful from it.

The politician nodded slowly. She knew that, too.

Training the network

You can train such a network, you can test the outcome and you can also make it work better, but transparency, or the use of an algorithm watchdog, wouldn’t add any value here either, the professor concluded.

Once again, the conversation came to a standstill.

The professor had spoken and the politician couldn’t disagree with him. “That’s precisely why I want a ban on the use of far-reaching algorithms by the government,” added the politician, “emphasis on the word ban.”

“The effect would then be counterproductive,” the professor said. “By prohibiting the use of algorithms by the government, you create undesirable American conditions in which commercial parties develop ever-smarter algorithms and become more powerful as a result, while the democratically elected government becomes marginalized.”

The professor felt that the last part of his sentence had come out softer than he would have liked. He considered repeating it, but instead asked, “Why do you always use the word ‘watchdog’?”

“Because a watchdog conveys decisiveness,” the politician replied. “We want to make the public feel safe with the government, and a watchdog is a good representation of that.”

Curious bees

The professor was starting to feel miserable. The government as a strict watchdog? The image reminded him of countries like China. Or America.

“I don’t like that metaphor,” he said. “It has such an indiscriminate character. It’s powerful, but also a bit stupid and simplistic.”

“Then why don’t you come up with a better analogy!” the politician challenged him cheerfully.

The professor was reminded of an article he had recently read and replied: “I think the image of a bee population would fit better.” It was a somewhat frivolous answer, but in a bee colony, curious bees are sent out to look for opportunities that are of value to the entire colony.

The politician laughed a lame laugh.

“Nice image, professor, but an algorithm bee wouldn’t work in the political arena!”

The professor suspected that the politician had a good point there.

They had one final cup of tea together and then once again went their separate ways.

About this column:

In a weekly column, written alternately by Bert Overlack, Mary Fiers, Peter de Kock, Eveline van Zeeland, Lucien Engelen, Tessie Hartjes, Jan Wouters, Katleen Gabriels and Auke Hoekstra, Innovation Origins tries to figure out what the future will look like. These columnists, occasionally joined by guest bloggers, are all working in their own way on solutions to the problems of our time. So that tomorrow is good. Here are all the previous articles.

Prep for RoboCup 2019: scanning images instead of training sessions

Two teams of five machines on two legs running after one ball so that they can kick it into the opponents’ goal – that is robot football. Even detecting the ball is a challenge for these players. Tim Laue from the University of Bremen explains the important role deep learning plays and what a training camp for robotic footballers looks like.

What role does Deep Learning play in robot football?

Deep learning is a technique that is good at identifying and classifying objects. Some mobile phones use it: if you search for “bicycle” in your picture gallery, the program will find photos that show bicycles without you ever having tagged them. In robot football, it is essential to immediately and unambiguously identify your teammates, the playing field and, above all, the ball on the basis of the recorded video images. So that is what we use.

Don’t the ball and the other players have a tracking device that makes recognition easier?

No. The ball used to be orange, and it was usually the only orange spot on the field. You simply had to look for that color value while processing the images, which was comparatively easy. Today the ball has a black-and-white pattern and is easily confused with shadows on the ground or with parts of other robots. I soon reached the limit of a classic image-processing approach, such as scanning for dark spots on a large white area.

So, you are now teaching your robots the concept of the ‘ball’?

We are designing a so-called neural network. We show this network a large number of images that were previously categorized by humans: along with each image, the network receives the information whether that image depicts a ball or not. This way the network learns, image by image, what constitutes a ‘ball’. Normally you would define a list of properties that the software has to scan for.

With deep learning, we turn that process around and let the software derive from the images themselves the properties that make up a ball. There are two main influencing factors. On the one hand, there is the range of variation in the example material and the number of repetitions with which we feed these images to the network. On the other, we determine the depth of the network.
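The learning process described here can be sketched in miniature. The toy below substitutes synthetic 16×16 images for camera frames and a single-layer logistic model for the deep networks actually used in RoboCup, but the principle is the one Laue describes: show labeled ball / no-ball examples repeatedly and let the optimizer discover the distinguishing properties itself.

```python
import numpy as np

rng = np.random.default_rng(0)
SIZE = 16  # toy 16x16 grayscale images instead of real camera frames

def make_image(has_ball: bool) -> np.ndarray:
    """Bright, noisy background; optionally a dark disc standing in for the ball."""
    img = 0.9 + 0.05 * rng.standard_normal((SIZE, SIZE))
    if has_ball:
        cx, cy = rng.integers(4, SIZE - 4, size=2)  # keep the disc inside the frame
        yy, xx = np.mgrid[0:SIZE, 0:SIZE]
        img[(xx - cx) ** 2 + (yy - cy) ** 2 <= 9] = 0.1  # radius-3 disc
    return img.clip(0.0, 1.0)

# Labeled examples: every other image contains a "ball" (label 1.0)
X = np.array([make_image(i % 2 == 0).ravel() for i in range(400)])
y = np.array([1.0 if i % 2 == 0 else 0.0 for i in range(400)])

# Single-layer logistic model, trained by repeatedly showing the same examples
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted ball probability
    grad = p - y
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

# Evaluate on fresh images the model has never seen
X_test = np.array([make_image(i % 2 == 0).ravel() for i in range(100)])
y_test = np.array([1.0 if i % 2 == 0 else 0.0 for i in range(100)])
pred = (1.0 / (1.0 + np.exp(-(X_test @ w + b)))) > 0.5
accuracy = float((pred == y_test).mean())
print(f"held-out accuracy: {accuracy:.2f}")
```

On this toy task a few hundred passes are usually enough for near-perfect held-out accuracy; the real problem needs deep networks and tens of thousands of images precisely because genuine balls vary far more than these synthetic discs.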

How long does such training take?

In order to get reasonably satisfactory results, we need more than 20,000 images, of which only a fraction actually contain a ball, as well as hundreds or even thousands of training passes before the network has extracted the characteristics of a ball.

Nowadays, all teams in the RoboCup use this method because it produces pretty solid results, even when lighting conditions change and colors are different. However, you can still see robots running into the penalty area.

Why does it take so long to learn?

Computers are not as good at recognition as humans are. In fact, I can show a child a painting of a giraffe and it is highly probable that the next day when it visits the zoo, the child will recognize the unfamiliar creature with the long neck as a giraffe. A lot of processes happen when a child recognizes something.
The neural network has none of these processes. That means you have to show it the ball in all its possible variations: far away, close up, half hidden by a teammate, shaded, brightly lit, and so on. The more variations we are able to offer the system, the better, always in the hope that the images cover as many as possible of the places on the playing field where a ball can turn up. The network can then abstract and recognize ball appearances that lie somewhere between the examples shown.
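Part of the required variation can be manufactured rather than photographed. Data augmentation, a standard technique when training image classifiers (the interview does not say whether the Bremen team uses it), derives extra labeled variants from each existing image, for example:

```python
import numpy as np

def augment(img: np.ndarray) -> list[np.ndarray]:
    """Derive extra labeled training variants from one image:
    mirrored, dimmed, brightened, and slightly shifted copies."""
    return [
        np.fliplr(img),                    # ball on the other side of the view
        np.clip(img * 0.6, 0.0, 1.0),      # shaded / poorly lit
        np.clip(img * 1.4, 0.0, 1.0),      # brightly lit
        np.roll(img, shift=2, axis=1),     # small horizontal displacement
    ]

rng = np.random.default_rng(1)
image = rng.random((16, 16))               # stand-in for one camera frame
variants = augment(image)
print(f"1 labeled image -> {len(variants)} extra variants")
```

Because each transform preserves whether a ball is present, the human-made label carries over to every variant for free.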

Is the speed at which the image is processed during the game important as well?

The robot then has to calculate: is the ball rolling, how fast is it rolling, which direction is it heading, where am I right now, what do I do now? Only after all that does it determine its course of action. If I really want to evaluate every single image, I have only 16 milliseconds for all of those calculations per image. And we do want to process every single image: a crucial piece of information may be hidden in any one of them, so missing even one is not a good idea.
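The 16-millisecond figure corresponds to a camera delivering roughly 60 frames per second (1000 ms / 60 ≈ 16.7 ms). A minimal sketch of such a per-frame budget check follows; the frame rate and the placeholder `process_frame` are assumptions for illustration, not the team’s actual pipeline:

```python
import time

FPS = 60                       # assumed camera rate; gives 1000/60 ≈ 16.7 ms per frame
BUDGET_S = 1.0 / FPS

def process_frame(frame_id: int) -> dict:
    """Placeholder for ball detection plus state estimation on one frame."""
    return {"frame": frame_id, "ball_seen": frame_id % 3 == 0}

over_budget = 0
for frame_id in range(120):    # two simulated seconds of video
    start = time.perf_counter()
    process_frame(frame_id)
    elapsed = time.perf_counter() - start
    if elapsed > BUDGET_S:     # a real system would have to drop or simplify work here
        over_budget += 1

print(f"budget per frame: {BUDGET_S * 1000:.1f} ms, frames over budget: {over_budget}")
```

A real robot cannot simply count missed deadlines after the fact, of course; the point is that every stage of the vision pipeline must fit inside that fixed budget.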

A next step could be to link the optical information with other properties.

That would open a whole new can of worms. A computer is good at calculating things, but first you have to convert everything into quantifiable information. With images, you are actually able to do that quite well these days. At the moment there is no way around this when it comes to robot soccer.

How does the robot decide where to look? Do NAOs not have a 360-degree camera?

That is the eternal question. Should all robots always have the ball in sight or should the tasks be distributed across the team? There are a few software programs that try to answer this question. In fact, you frequently see on the video recordings that robots sometimes miss something important.

Mr. Laue, do you play football yourself?

No, I watch football, but I have no talent whatsoever myself. At present these two areas are still too far removed from each other. With real football knowledge, you’d be more likely to get in the way than be able to help.

Yet the long-term goal for the robot community is to one day really get robots to compete against a human team?

Somebody set a goal for 2050. I can’t say whether this is realistic.

Nevertheless, one thing is clear: compared to other challenges, football is still a very simple discipline. We have an entire Collaborative Research Center at the University of Bremen that is working on how to let robots act sensibly in a domestic environment. This is highly complex because this environment is far more unstructured than football. As a human being, I can also cook in an unfamiliar kitchen. When a robot enters these kinds of environments, it gets really complicated.

Photos: Tim Laue, University of Bremen

Facebook’s Yann LeCun: Europe should have more ambition in AI research


Increasingly, Facebook shapes our view of the world. We see the opinions of our friends in their responses to news that may or may not be fake. But which messages we see first and which messages remain hidden is all controlled by algorithms – algorithms based on artificial intelligence (AI). And these algorithms are the work of a large research team led by Professor Yann LeCun, making LeCun one of the world’s leading and most influential technologists.

Mark Zuckerberg asked LeCun in 2013 to build the AI capability within Facebook, at a time when the link between social media and AI was still unexplored. LeCun had by then built a formidable reputation in academia, and his algorithms were being used in industry – almost all handwritten cheques are processed using his image-recognition model.

Philips, Signify and the TU/e awarded Yann LeCun the prestigious Holst Memorial Medal 2018. Radio4Brainport’s Jean-Paul Linnartz spoke to LeCun on his visit to Eindhoven. Listen to the full interview here.

(Also: Listen to the Radio4Brainport podcast: The 2018 Holst Memorial Lecture with Yann LeCun).

On the importance of providing an environment for creative research, such as that which Dr Gilles Holst created at NatLab

“I started my career at Bell Labs, which was very much modelled on this idea that you do research in an open way and which is scientist-driven research – so, bottom up, with a lot of freedom to work on whatever topic seems relevant or interesting. And this is one of the things that I have tried to reproduce to some extent at Facebook AI Research (FAIR), to maximise the creativity and the way to go forward. Not just to advance technology, but to advance science, which I think is necessary for the domain of AI.”

(See also: Facebook’s head of AI delivers Holst Memorial lecture, says open innovation is a route to faster scientific progress).

Facebook wouldn’t work without deep learning

“It actually is almost exactly five years ago, on 9 December 2013, that it was announced that I would be joining Facebook. What had happened was that, over the preceding months, Mark Zuckerberg and the leadership at Facebook had identified that AI was going to be the key technology for the next decade, and so they decided to invest in that. And that turned out to be true. Facebook is entirely constructed around deep learning nowadays. If you take deep learning out of Facebook, it doesn’t work anymore.”

AI has significant implications for healthcare – and will save lives

“Probably one of the most exciting applications and developments these days is computer vision, which is the application of deep-learning convolutional networks in particular, to medical imaging. It is one of the hottest topics in radiology these days. One idea, for example, is that by using deep-learning-based reconstruction, we could accelerate the collection of data from an MRI machine, which means the test would be cheaper, simpler and faster, which means people can have more of it, essentially. And so the analysis can be done automatically. And so one can have a fast turnaround for diagnosis. Medical imaging I think is one of the biggest applications, and is going to save lives.”

On the view that machines learn from humans, but that humans don’t learn from computers

Yann LeCun and Jean-Paul Linnartz, 2018

“It is not entirely true that we don’t learn from machines. For example, people have gotten better at playing chess and Go, because they have played against machines, and with machines. If the machine is better than you at a particular task, you get better at it, because you use it to educate yourself. Generally, what is most powerful is the combination of a machine and a person – an expert in the field.

So, machines are there to complement and to empower us, but not to replace us. I am not one of those people who believe that radiologists are going to be replaced by an AI system. It is not the case. There are going to be just as many radiologists, except that their jobs are going to change. Instead of having to spend eight hours a day in a dark room looking at slices of MRIs, they might be able to actually talk to patients or spend more time on complicated cases.”

Preparing for a career in AI – math, math and more math!

“In AI, in fact, you have to study more math than you would otherwise have to if you work on regular computer science. Regular computer science, at least in North America, but it is partly true also in Europe, does not have a huge requirement for mathematics, and most of it is for discrete mathematics. But if you work on machine learning and AI and neural nets and deep learning and computer vision and robotics, that requires actually a lot more continuous math – the kind of math that we used to study forty years ago in the engineering programme. Interestingly, some of the methods that are useful to analyse what happens in a deep-learning system, many of these methods come from statistical physics, for example. What I tell young students who want to get into AI, if you are ambitious, take as many math courses as you can. Take multi-variate calculus, and partial differential equations, and things like that. And study physics, also; quantum mechanics, statistical physics.”

AI combined with domain knowledge, physical devices and hardware is a great opportunity for Brainport

“There are lots of opportunities in new kinds of hardware. Of course, NXP is right in that business. I think over the next five to ten years we are going to see neural-net accelerator chips popping up in just about everything we buy. Everything that has electronics in it will have a neural-net accelerator chip. Within a couple of years, it will be the case for mobile phones, cameras, vacuum cleaners, every toy. Every widget with electronics in it, if you want, will have some sort of neural net chip in it. So, there are a lot of opportunities for that kind of industry. Signify can place AI at the edge rather than in the cloud. We are going to see a move from the cloud to the periphery, to mobile devices and eventually to Internet of Things devices.”

China has a vast interest in AI

“China is interesting because it is investing massively in AI. The interesting thing in China is that the public itself is very interested in AI. China is one of the two countries where I am recognised on the streets. Not in the US [where I live]. Only in China and in France. In France because I am French, but in China because there is so much interest in AI, that it is everywhere, absolutely everywhere. The thing is, the Chinese have an advantage, in that they have a very large home market. And a disadvantage, in that they are a completely isolated ecosystem in terms of online services. That is going to make it difficult for them to export their services.”

Facial recognition technology: Both benign and nefarious uses

Yann LeCun in Eindhoven

“Facial recognition is one of the things that made Facebook interested in deep learning in the first place. In the spring of 2013, a small group of engineers at Facebook started experimenting with convolutional networks for image recognition and for face recognition, and they were getting really, really good results. Within a few months, they beat all the records and published a really nice paper, called DeepFace, at the Conference on Computer Vision and Pattern Recognition in 2014. That was deployed very quickly; you post a picture and your friends are in the picture and they get tagged automatically, and they can choose to tag themselves or not. At first, it was not turned on in Europe, but now it is turned on in Europe on a voluntary basis. Unfortunately, a very similar technology using convolutional nets, which is kind of my invention, has been deployed very widely in China, on a grand scale, and it is used to spy on people, essentially. So, there are nefarious uses of technology that, thankfully in many countries, the democratic institutions protect us against, but it is not the case everywhere. There is a very big difference between China, Europe and the US. The US and Europe are getting closer together. Facebook is now applying GDPR-like rules in the US as well. Those are good rules.”

No, Europe does not need its own Facebook in order to ensure it keeps up with AI technology

“Actually, no, it is not necessary for Europe to develop its own Facebook. The reason it is not necessary is that, firstly, there are several parts to developing AI. One part is developing new methods, new algorithms – new science – making the field go forward. For this, you don’t need a Facebook or a Google. You need funding for research, you need a good infrastructure for universities, large computational infrastructure that is accessible to researchers, you need industry support. All of that could exist in Europe.”

Myth: You need big data for AI

“There is this myth that somehow you cannot develop new AI techniques if you don’t have access to enormous amounts of data, like Facebook, Google and Microsoft do. It is not the case. At FAIR, for example, we almost exclusively use public data, because we want to be able to compare our algorithms to other people’s. So, we don’t use internal data. Once we have something that works, of course, we work with engineering groups, and they try it on internal data. But to actually make research go forward, you don’t need data that companies like Facebook have access to. You need the drive from the applications, of course, to be able to motivate enough people to work on this. What makes FAIR possible is that Facebook is a large company, is well-established in this market, and has enough profits or cash to finance long-term research. It used to be the case for Philips. Holst’s creation was a forward-looking, fundamental lab. I had friends working there 20 years ago. This is not the case anymore. Bell Labs is the same. It used to be a leading light, it is a shadow of its former self. It is true for a lot of industry research labs across the world, particularly in Europe. Today in Europe, if you want to find an advanced research lab in information technology, in industry, there just aren’t many that practice open research on a grand scale.”

(See also: Why Europe should have its own AI centre).

My advice to Brainport-based companies seeking advice on AI technology? Get ambitious and go big

“It is up to companies like Philips or NXP or others, that are sufficiently forward-looking and have enough resources to really get into this, to create ambitious research labs. If you are not ambitious enough about the goals of a research lab, it is going to be second-rate. And if you want to be ambitious about it, it has to be open. That means the culture is very different. If you are a company that builds widgets, you tend to be very secretive about your research and development.”

(See also: Tomorrow is good: The ten commandments of Holst).

“It is the case for Apple, for instance. Apple is nowhere to be seen on the research circuit for AI. They develop the technology around AI, but they don’t really push the science of AI forward, because they build widgets and they have a secretive culture. The companies that move the field forward are the ones that are not secretive and are not too possessive about intellectual property. And that puts them in a good position to hire, to innovate, to propose tools that other people use, so it makes it easier to make progress. Practice open research. That is my recommendation.”

Open source is essential for faster innovation. Facebook basically doesn’t believe in patents.

Yann LeCun (c) TUe

“There is no need for protection. What makes the value of a technology is how fast you can bring it to market. For a company, you have a choice between working with universities, which is relatively cheap. And then trying to get new innovations from them by either hiring students or by having interns or by having research contracts with universities. It creates a relatively slow process with a lot of friction to do technology transfer. The main issue with technology transfer is not whether you have the best technology, it’s whether you believe this good technology is something that you can do something with.

The situation we find ourselves in sometimes is that we think we have the best system for, say, classifying text, translating the language, recognising speech. We open source it, and we, of course, talk to the engineering group at the same time. And the engineering groups, you know, they are doing their thing, they don’t have a lot of bandwidth; they have to reallocate their resources in order to pick up on new technology and make progress. So, they have to believe that what you bring to them is really, very useful. And what we do is, we put it in open source and we can point to it and say, “Look, it has 5,000 stars on GitHub and it is used by 200 other companies besides us. Isn’t that embarrassing?!” This work of convincing product groups and engineering groups that your technology is good is the main obstacle to technology transfer.

If you have an in-house research group, even if you practice open research, even if you open source everything, you will get there first. And that is the only thing that matters. You don’t need to protect it. Facebook basically doesn’t believe in patents.”