Tomorrow is good: Human beings, machines with emotions?

Computers are good at abstract thinking; we are all too keen to delegate complex calculations to them in order to free ourselves from that chore. Yet there is also something threatening about the intelligence of machines. Robots and artificial intelligence (AI) force us to question our place in the world. What does it mean to be human? Where does the boundary lie between man and machine? ‘What is man?’, the Enlightenment philosopher Immanuel Kant pondered. Our moral views can shift along with technology. Take in vitro fertilization (IVF): our views on it have evolved considerably over the past decades, to the extent that many people would now find it unacceptable to refuse an eligible couple (within certain rules, such as age limits) an IVF procedure in the Netherlands or Belgium. This is known as techno-moral change: the modification of moral beliefs as a consequence of technology.

Die as a cyborg

Machine and body will become more and more intertwined. Philosopher James Moor asserts that we are born today as human beings, but that many of us will die as cyborgs. Cyborg stands for ‘cybernetic organism’: partly human, partly computer. Moor’s claim is justified, even though ‘cyborg’ may sound like science fiction. A good example is the pacemaker, which is in fact a miniature computer; there are even pacemakers that are connected to the internet. There are bionic limbs too, such as bionic arms for disabled veterans or people with congenital disabilities, as well as exoskeletons for patients with full paraplegia.

Some implants we have been familiar with for quite some time, such as knee and hip prostheses. These are not computerized technologies, yet they have not altered our human dignity and integrity either: over time, we have accepted these implants without any problems. Further developments, as yet unknown to us, may likewise come to fit within a broader sense of human dignity. Consequently, we should not be ‘automatically’ opposed to them.

Thanks to science and technology, human beings have been improving themselves for centuries, and the results are clearly apparent: we are living longer and healthier lives. The debate must now focus on ethical boundaries and problems. What is desirable? And what kind of cyborgs do we want to be? AI implants, for example, should not only be accessible to the happy few who can afford them and thus enjoy the benefits. The principle of justice is important for ensuring fair, democratic access to technology. Harm, or the risk of harm, to the patient and third parties obviously needs to be curtailed.

Are we expendable?

How unique is humankind? Are we replaceable by robots and AI systems? AI researcher Rodney Brooks thinks we should rid ourselves of the idea that we are special. We humans are ‘just’ machines with emotions. Not only are we able to build computers that recognize emotions, but eventually we could also build emotions into them. According to Brooks, it will at some point even be possible to design a computer with real emotions and a state of consciousness. But he remains cautious and avoids making statements about when that will happen. That is a wise decision, because the brain is extraordinarily complex. Not enough is known about its specific workings, or about the very long evolution that preceded it, let alone about how to replicate it just like that.


Tomorrow is good: Beware of the visionary

Research on artificial intelligence (AI) started in the years after the Second World War. John McCarthy, an American mathematician at Dartmouth College, coined the term in 1955 while working on a proposal for a summer school that he was seeking funding for. A group of AI pioneers met at that summer workshop in 1956: the Dartmouth Summer Research Project on Artificial Intelligence. The term AI may have been new, but academics such as the British mathematician Alan Turing had already been thinking for some time about ‘machine intelligence’ and a ‘thinking machine.’ The objective of the Dartmouth project was along these lines too: simulate intelligence in machines and have computers work out problems that until then had been the preserve of human beings. The summer project did not quite live up to expectations. The participants were not all present at the same time and were primarily focused on their own projects. Moreover, there was no consensus on theories or methods. The only vision they shared was that computers might be able to perform intelligent tasks.

AI in 2056

The surviving pioneers of the Dartmouth summer project met again for a conference in the summer of 2006. During this three-day conference, they asked what AI would look like in 2056. According to John McCarthy, powerful AI was ‘likely’, but ‘not certain’, by 2056. Oliver Selfridge thought that computers would have emotions by then, but not at a level comparable to that of humans. Marvin Minsky emphasized that the future of AI depended first and foremost on a number of brilliant researchers carrying out their own ideas rather than those of others; he lamented that too few students came up with new ideas because they were too attracted to entrepreneurship. Trenchard More hoped that machines would always remain under human control and stated that it was highly unlikely they would ever match the capabilities of the human imagination. Ray Solomonoff predicted that truly intelligent machines were not as far from reality as imagined; according to him, the greatest threat lies in political decision-making.

Who is right?

A wide range of opinions, it seems. Who among them will be right? Predicting technological breakthroughs is difficult. In 1968, the year Stanley Kubrick’s 2001: A Space Odyssey was released, Marvin Minsky stated that it would take only a generation before there were intelligent computers like HAL. To date, they don’t exist. In 1950, Alan Turing thought that a computer could pass the Turing test by the year 2000, which turned out to be a miscalculation. Vernor Vinge predicted in 1993 that the technological means to create ‘superhuman intelligence’ would be in place within thirty years, and that shortly after that the human era would come to an end. There are still a few years left before 2023, but even this prediction looks excessively utopian.

Flip a coin

Making predictions about the future is problematic, as by definition the future is not determined. The role of chance is often greatly underestimated as well. Even experts are scarcely able to “predict the future any better than if you were to flip a coin.” We should therefore all be a bit wary, not least when it comes to visionaries and tech gurus with their exaggerated dystopian or utopian worldviews. So don’t just believe anyone who claims that AI will definitely outstrip human intelligence within ten years.

Rules for Robots

Katleen Gabriels’ new book, Regels voor robots. Ethiek in tijden van AI (Rules for Robots: Ethics in Times of AI), will be published next week. The English translation will follow in early 2020.



Tomorrow is good: never say that ethics is just a trend

The fastest way to get rid of a philosopher? Simple. Say that ethics is just a ‘trend.’ They will charge out of the room with such haste that the hyperloop will pale in comparison.

A while ago I heard a compliance manager talk about #metoo as an ‘ethical trend’. The #metoo movement has made a culture of speaking out in the workplace more relevant, and as an employer you have to take this into account. Research and consultancy firm Gartner presented digital ethics and privacy as strategic ‘trends’ for 2019.

The benchmark Dutch dictionary Van Dale defines trend as, among other things, ‘fashion’, in the sense of ‘the latest fashion’ or ‘setting a trend.’ In this way, ethics is put on an equal footing with oversized shoulders, which incidentally will be completely on trend this autumn and winter.

At long last, widespread public concern

Who actually thought up the term ‘ethical trends’? Take #metoo: at long last there is widespread public concern for structurally flawed and problematic situations that have been tolerated for far too long. If #metoo is just a trend, it will probably blow over at some point, just as skinny jeans are gradually disappearing from the streets in favor of flared trousers. Last season, sexually inappropriate behaviour at work was out of fashion; come next autumn, sexism would simply be back in.

In the book How Much is Enough? Money and the Good Life, economist Robert Skidelsky and his son, philosopher Edward Skidelsky, advocate the reintroduction of the moral dimension into current Western market thinking. The loss of humanity is immense in a society that has an insatiable craving for profit at the expense of values, the common good and fundamental rights. Privacy is not a ‘strategic trend’ but a human right, as set out in Article 12 of the Universal Declaration of Human Rights. Respect for human rights is not a fad.

The yearning for friendship

Ethics and the notion of what a good life is have been central to philosophy for centuries. “Of all the ways to achieve absolute happiness in life that wisdom provides us with, by far the most important one is to find friendship,” said ‘trendsetter’ Epicurus. And he was right: anyone who feels connected to family, friends, loved ones and a social network feels happier. The opposite of belonging – feeling lonely and cut off from others – makes us deeply unhappy and leads to poorer health. Having good relationships paves the way to a good life, which runs contrary to individualism and self-interest.

“A life that does not look critically at itself is not worth living,” said that other trendsetter, Socrates. In other words: “vindica te tibi” – “spend time with yourself”, according to the wise sentiments of Seneca the trend watcher. “Take a look at yourself and examine yourself in various ways and keep an eye on yourself; consider whether you have made any progress in philosophy or in life itself.”

Whoever continues to inspire centuries later has launched not a trend, but an indispensable guide to a good life.

About this column:

In a weekly column, alternately written by Bert Overlack, Mary Fiers, Peter de Kock, Eveline van Zeeland, Lucien Engelen, Tessie Hartjes, Jan Wouters, Katleen Gabriels and Auke Hoekstra, Innovation Origins tries to find out what the future will look like. These columnists, occasionally supplemented with guest bloggers, are all working in their own way on solutions for the problems of our time. So tomorrow will be good. Here are all the previous columns.



Tomorrow is good: Made with morality

It was launched in the summer of 2016: the car for women. The SEAT Mii by Cosmopolitan was the result of a collaboration between the SEAT design team and the editors and readers of the women’s magazine Cosmopolitan. The car was marketed as “exclusive” and aimed at the modern, fashion-conscious woman. The SEAT Mii by Cosmopolitan is ‘trendy’, ‘fashionable’, ‘sporty’, ‘versatile’ and ‘bold’, and offers the modern woman the opportunity to ‘express her personal lifestyle,’ said the marketing director.

The marketing of the ‘women’s car’ is full of cringe-worthy clichés. The car is small, easy to drive and park (it comes with rear parking sensors), and is equipped with gadgets such as extra hooks to hang a handbag on. It is available in two colours: “the very feminine Violetto and the slightly more conventional Candy White.” Violetto is purple and Candy White is off-white, neither of which sounds all that exclusive.


“Dark tones predominate in the upholstery so as to add a sense of both security and glamour.” How colour can make you feel safer is not clear to me, but extra protection would be no superfluous luxury for women in cars. Women are less often involved in accidents than men, but when they are, they are more likely to be injured or killed.

Car safety is tested with the ‘standard man’ in mind, and that has fatal consequences for women. Crash test dummies are modelled on the ‘average’ man: 1.77 m tall, 76 kg in weight, with a masculine distribution of muscle mass. Because women have less muscle in the neck and upper body than men, they are more susceptible to whiplash in the event of a rear-end collision. The seat offers insufficient protection for women, who on average weigh less than men. Women are also more likely to be injured in a head-on collision, even when wearing a seat belt.

Female crash test dummies have existed since the late 1960s, but critics argue that they do not sufficiently take into account how women are actually built; women are not merely smaller, lighter versions of men. In Europe, it was not until 2014 that a test dummy was developed that was modelled on an ‘average’ woman. Pregnant test dummies did not receive attention until the 1990s, about forty years after the first test dummies were developed, which means that the safety of pregnant women was ignored for a very long time.

Moral impact

Anyone who designs continually makes choices: not just functional choices, but also choices with a moral impact. Morality and technology are consequently not separate domains at all, but strongly intertwined. Designers, engineers, computer scientists and programmers often see themselves as neutral practitioners of science. Without making it explicit, or even viewing it as such, they can take their own world, moral framework or gender as the norm, with the result that others are discriminated against or ignored. Only by paying explicit attention to the ethical aspects of a design do these blind spots become visible. Ethics plays a crucial role on various levels: that of the designer, that of the user, and that of the technology itself, which is the result of (moral) choices. For a long time, ethics was regarded as something that came later – after the technology had been developed. But ethical questions must be asked immediately prior to and during the design process, as every product made by humans is ‘made with morality.’


Tomorrow is good: Self-tracking for Stoics

At the end of the 19th century, weighing scales in France were advertised with the message that those who weigh themselves know themselves, and those who know themselves live well. Many decades later, Jawbone used the slogan ‘Know Yourself Live Better’ to convince potential customers to track their physical data, such as calorie consumption and sleep patterns, with its UP wearable.

The link between physical performance and moral evaluation is not new. Someone who cares for themselves and makes a physical effort is often praised for discipline and self-control. People who use self-tracking apps or devices, such as the Jawbone wearable, track mainly physical activity, diet and weight. The accent is thereby on managing the body in a quantitative and measurable way. Yet does this also benefit us in the moral sense of the word?

Keeping a diary, as Benjamin Franklin (1706-1790) did, was a precursor to modern self-tracking. When he was twenty, Franklin wrote down thirteen virtues that he wanted to cultivate. Every day he reflected on his successes and failures with the help of two questions. At the top of the page of his diary was written: “What good shall I do this day?”, which he pondered in the morning. At the bottom was the question: “What good have I done today?”, which he answered by going over the day in the evening. A Fitbit or smartwatch automatically tells you whether you have reached your daily 10,000 steps and thus whether you were ‘successful’. What Franklin did was also a form of self-tracking, albeit a much more complex one. He kept a record of his efforts, and his diaries provide insight into how he held himself responsible for becoming a better person. By keeping a close eye on his objectives and intentions, he took steps to accomplish good deeds. For Franklin, it was not about self-improvement in the contemporary terms of optimization, productivity or efficiency, but about moral self-improvement.

Deeper self-insight

A similar emphasis on moral self-improvement can be found amongst the Stoics, such as Epictetus, Seneca and Marcus Aurelius. In the many discussions about the ‘quantified self’, most attention is paid to quantitative measurements. How many calories did you consume? How many hours did you sleep? How many steps did you take? And so on. But there are also apps on the market which make the most of Stoic philosophy, such as ‘Stoic Meditations’ and ‘Stoic Self-Reflect Journaling’. Their goal is less ‘measurable’: a deeper, more complex self-insight through daily quotes, exercises and the mastering of Stoic techniques.

This also involves forms of self-tracking, such as keeping your thoughts in a digital diary in order to get a grip on them and figure them out. Self-examination is crucial for the Stoics. “Vindica te tibi”, Seneca wrote: “own yourself” or “spend time with yourself”. Seneca called for a fundamental investment in self-knowledge and the examination of one’s conscience. If you know what you stand for, you will not lose yourself when something happens to you. Those who know themselves are also less receptive to the judgment of others and do not let their lives depend on (the approval of) others.

The latter is crucial, because the Stoics encourage you to focus only on what is within your control. By definition, what others think about you lies outside of this and is therefore a waste of time. The Stoic apps help the user to focus their thoughts on what is within their control. In short: you do not always have control over what happens to you, but you do have control over your reaction to it, therefore pay proper attention to that. Contrary to popular belief, the Stoics have no aversion to emotions. It’s about getting a grip on things, in part through self-analysis and mental discipline.

Favorite quotes

Every morning the apps offer a quote from a well-known Stoic for the user to reflect on. Favorite quotes can be saved, so that you always have them to hand. One exercise you may be given is to go without something for a certain period – showering with cold water instead of hot for a week, for instance – in order to gain a better appreciation of what you have. Another technique for learning Stoic habits is the ‘premeditatio malorum’ or negative visualization: think quietly about the different ways in which life could disappoint you. Imagine that you lose something you consider valuable. This, too, helps you to appreciate what you have.

Physical self-care is vital. An assignment which the app could give you is to take a long walk in nature. Quantitative forms of self-tracking are not at odds with a Stoic lifestyle, as long as you keep in mind that a healthy body is at the service of a healthy mind and not vice versa.

The Stoic self-tracking apps can be a valuable complement to the works of Epictetus, Marcus Aurelius and Seneca. Of course, the apps cannot replace the Stoic books which are full of wisdom. Yet suppose on one rather miserable morning you get to see this quote from Marcus Aurelius on your smartphone: “Begin the morning by saying to thyself, I shall meet with the busy-body, the ungrateful, arrogant, deceitful, envious, unsocial. All these things happen to them by reason of their ignorance of what is good and evil.”

That’s when you receive a comforting thought to start the day with.


Tomorrow is Good: the importance of forgetting

Eternal Sunshine of the Spotless Mind

“Hans has been keeping a diary for more than thirty years and finds comfort in it, and also takes pleasure in it. The other day he found his diary from ’47, read it and, despite the idea for a story or even a novel, it made him very sad. There were things in that past that he would rather have forgotten forever.”

In 2018, Mensje van Keulen published her diaries from 1977-1979 as Neerslag van een huwelijk (‘Precipitation of a Marriage’). The fragment above comes from that diary. The last sentence in particular is recognizable: what is done cannot be undone, and many of us regret things we did or didn’t do. Either we regret how things turned out, or that they happened to us in the first place.

The burden of memory is a recurring theme in philosophy. Friedrich Nietzsche wrote how man “cannot learn to forget and is always attached to the past. No matter how far, no matter how fast he walks, the chain follows.” The older you get, the heavier the baggage can weigh. But if you don’t forget, “it’s impossible to live.”

In the poetic film Eternal Sunshine of the Spotless Mind, ex-lovers have their memories erased after a break-up, so that they can continue with a clean slate. In 2019, they would have to erase not only their own memories, but also the often ruthless memory of the internet, in order to break free from the past.

Hilarious, astounding or heartbreaking

At the end of the year, Facebook presents its users with their ‘Year in review’. A photo collage gives you an overview of the past year. In a much-discussed blog post from 2014, Eric Meyer talks about ‘inadvertent algorithmic cruelty’. In December 2014, he got to see his annual review. “Here’s what your year looked like!” – and then his smiling daughter Rebecca appeared. 2014 was the year she died. Meyer states that if a person would do this to the parents of a deceased child, we would morally condemn it. But “coming from code, it’s just unfortunate. These are hard, hard problems. It isn’t easy to programmatically figure out if a picture has a ton of Likes because it’s hilarious, astounding, or heartbreaking.” Facebook could give the user more control, for example by making it easier to disable annual overviews and push messages from memories.

The mobile internet and the Internet of Things make ‘coveillance’ – keeping an eye on each other – easier. With the cameras in our smartphones, we take photos and videos in public and in private, often without thinking about whether that is desirable. Online, we can drag others (uninvited) into the public domain, even when it is really not funny.

The American model Dani Mathers saw an older woman showering in the changing room of her gym, took a picture of the naked woman and posted it on Snapchat with the words: ‘If I can’t unsee this then you can’t either‘. The photo was shared en masse and appeared on international news websites. Mathers was convicted, but the woman in the picture has to live with the fact that the photo will never disappear from the internet.

A victim of the terrorist attacks of 22 March 2016 at Zaventem airport explained how, while he lay bleeding on the ground, bystanders took pictures of him: “I don’t understand why everyone takes pictures while people are suffering. I myself would never do that. ‘No pictures please’, I eventually yelled. That’s not normal, is it?” Sometimes looking seems to have taken the place of thinking.

The right to noise

In ‘Long Time Comin’’ Bruce Springsteen sings about his children: “Well if I had one wish in this god forsaken world, kids / It’d be that your mistakes would be your own.” In the past, only children of distinguished families grew up in front of cameras. Nowadays there are vast image archives of children and young people, which increases the chance that their stupidities end up on record.

“Once a dustbin, history becomes a freezer,” says Anita Allen. Moreover, you don’t know on which servers your data, photos or videos are kept, ready to be ‘defrosted’ later on. Nude photographs and revenge pornography, which may still surface online years later, are examples of this.

In 2007, researchers Martin Dodge and Rob Kitchin argued in favour of programming an ‘ethics of forgetting’ into design itself: for example, by deliberately making technology’s flawless memory less perfect, or at least by giving users that option. This can be done by erasing certain data, by blurring it, or by adding noise to it. Such a deliberate form of ‘amnesia’ can relieve people of a burden by loosening the grip of the past. Forgetting is an important value in an information society. The ‘right to erasure’ is already part of the GDPR; a right to noise could easily be added to it.
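What such designed forgetting could look like can be made concrete. The sketch below is a minimal, hypothetical illustration (the record format, parameter names and values are my own, not taken from Dodge and Kitchin): a record is either erased outright or stored with its exact timestamp blurred.

```python
import random

def forgetful_store(record, jitter_minutes=30, drop_prob=0.1):
    """Degrade a record before storage: a sketch of an 'ethics of forgetting'.

    With probability drop_prob the record is erased outright (deliberate
    'amnesia'); otherwise its exact timestamp is blurred with random noise.
    """
    if random.random() < drop_prob:
        return None  # the system simply 'forgets' this record
    noisy = dict(record)
    # Blur the precise time of day by up to +/- jitter_minutes
    noisy["minute_of_day"] += random.randint(-jitter_minutes, jitter_minutes)
    return noisy

# Example: a made-up location record loses its exact timestamp
stored = forgetful_store({"place": "gym", "minute_of_day": 600}, drop_prob=0.0)
print(stored)
```

The point of the sketch is that imprecision is a design parameter: a system can remember that you went to the gym without remembering exactly when, or whether it remembers at all.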

Of course, not all of the problems described above require a technological solution. We need better etiquette when it comes to (unsolicited) filming, monitoring and photographing. A better world also starts with a centuries-old golden rule: ‘Don’t do to someone else what you don’t want done to yourself’.


Tomorrow is Good: Tracking the Future


“We know where each student is anytime – which is virtually all the time – their mobile devices are connected to our WIFI network. When they enter their dorm, or dining court, or recreational facility, they swipe in, and a machine captures the time and place.”

At Purdue University, every student is tracked with a system called ‘Academic Forecast‘. In 2018, Purdue’s president Mitch Daniels wrote an op-ed about it in The Washington Post; the fragment above is taken from that piece.

Not only data from students’ files, such as grades and the number of log-ins to the course management system, are collected, but also their location on campus: in the gym, for instance, or in one of the dining facilities. Correlations are then searched for, and the data is compared with that of successful students from previous years. Daniels writes: “Does the data say that too many days away from campus, or too many absences from class, or too much in-class browsing of websites unrelated to the course, or too few visits to the gym, correlates with lower grades? Does eating meals with the same people day after day appear to help scholastic performance? If so, shouldn’t we bring this to the students’ attention, for their own good?”

Spurious Correlations

The Spurious Correlations website is full of such correlations. The more mozzarella is eaten in the US, the more civil engineering doctorates are awarded. And the more films Nicolas Cage appears in, the more people drown by falling into a swimming pool. That should not be a reason to stay away from swimming pools when a new Cage film premieres, because correlation is not causation. Still, students are given a ‘nudge’ based on correlations if their behaviour supposedly needs adjusting. ‘For their own good’, says Daniels. But is it? Control, and possibly even bad science, is presented as ‘care’.


‘Academic Forecast’ is sold under the heading of ‘success’. The system “uses that information to show you where you stand on each behaviour, so you can see whether you are on track to be a successful student.” But what does that mean, success? Graduating as soon as possible? Steve Jobs, Bill Gates and Mark Zuckerberg, whom many people consider very successful, were all drop-outs. And if the number of visits to the gym guarantees academic success, I personify academic failure. The system allows only a narrow view of success. Shouldn’t a successful student also think critically and autonomously?

The lost father

Even without a ‘nudge’, students have to learn how to overcome problems, and to bear the consequences of their behaviour. In Herman de Coninck’s poem ‘Parabel van de verloren vader‘ (Parable of the Lost Father), the father is saddened to see his son constantly voicing other people’s opinions. His paternal advice: “Your own life, start with your damned own life, / and come back in ten years’ time with sadness / instead of righteousness, for example after one or two marriages”. As Immanuel Kant put it, becoming an adult means learning “to use one’s own mind without another’s guidance”. The road to autonomy, resilience and adulthood is a winding road of trial and error that cannot simply be captured in Excel sheets. The current urge for measurement and efficiency is diametrically opposed to giving space and trust, as the father in the poem does.


What does Academic Forecast do to the autonomy of young adults, in a generation that is often already closely tracked by its parents? The system does not even allow an opt-out: the data is collected anyway. You can only choose not to see your own data, and therefore not to compare it with that of everyone else. The only thing you can opt out of is the competition with others.

Purdue is an engineering university. The fact that future engineers are already becoming accustomed to being monitored worries me. Unless they use their technological ingenuity and autonomy to find ways to circumvent the system. In that case, I would have faith in them.
