42 Magazine, Vol. 3: Digital Transformation

“Even the very best academic, the most critical journalist can be manipulated”

Image: “#2 Detail” © Max Dauven

How do we distinguish relevant from irrelevant information? In his interview with 42 Magazine, science journalist Ranga Yogeshwar talks about the overflow of information and why it is important to filter it. He calls for the disclosure of algorithms while simultaneously pointing out the positive sides of digital transformation and suggests that its benefits should be used beyond economic ends.

Mr Yogeshwar – as a physicist and science journalist, you must often find yourself at the source of research and innovation. When did you first notice how rapidly digital transformation was changing our lives?

I started programming very early on. I built my first computer in the late seventies. I think I was one of the first people in Germany who could send e-mails. The problem was that hardly anyone had an e-mail account back then. That was when I realised that the internet was going to change the grammar of communication.

So are we in the middle of a revolution?

We are at the beginning of a revolution and its consequences are not entirely predictable yet. When printing was invented, we were suddenly able to reproduce communication. Fundamentally, that was what enabled Martin Luther’s Reformation. It wouldn’t have been possible without the invention of printing. So when the internet came along, it was no surprise that this new communication technology had a huge potential to change how we interact as a society.

What sort of change are you thinking of in particular?

A simple example: right now, we are doing an interview. Later, this interview will be published as a text. It is not clear whether this will still be the case in fifteen years. Even today, there are new technologies, for example intelligent assistants, which use speech as direct input instead of written text. On the book market, audiobook sales rival those of traditional books. When people have a problem with a device and need a manual, they watch a YouTube video. There is a possibility, incredible as it may sound, that we are becoming a post-text society – that within 20 to 30 years, we may establish a culture in which the written word as we know it is replaced – by speech, by videos, maybe by something else entirely.

Isn’t the idea of a post-text society, of a culture without text or writing, a little far-fetched?

It does seem far-fetched. But thinking about it reveals something important: as the world becomes more and more complex, we are noticing that the man-machine interface is becoming increasingly simple. You needed a whole manual to use the first telephones. Today, interfaces are designed to be intuitive. In a way, they reveal their own virtual grammar. It is possible that this trend will continue.

What does that mean for the individual?

Of course there are intellectual consequences. The development of language, written and spoken, was the instrument of enlightenment. When you write, you distil your thoughts by arranging them in writing. Writing is a vehicle – not just of communication, but also of reflection. What happens when most people stop reading and instead watch videos or talk to devices is unpredictable.

What effects could this development have on society?

On the one hand, the articulation of the individual through language, through writing, is something phenomenally simple. On the other hand, it is phenomenally complicated. Reading means deciphering a code – letters – and that means a lot of work. It is much easier if a device does the job for you or if someone shows you a video. But language is part of a thinking society. In some areas, we may even see a renaissance of the written word. My concern is that society will be divided into a small group of people who will write, maybe program, maybe design – and a huge group of people who will simply consume, won’t write anymore, but will only listen.

… and won’t think anymore?

And will think differently, maybe. One hundred years ago, greater access to information enabled the expansion of democracy. Censorship meant that the flow of information was cut off and information was withheld from the public. Of course, that wasn’t good. But today, the situation has reversed and there is an overflow of information. This is a big disadvantage: as we see more information, it becomes a sea of information, an ocean even, and it becomes harder and harder to distinguish relevant information from irrelevant information.

“Social media profits from showing users wrong information”

What role does social media play in this overflow of information?

The business model of a social network is to keep users inside its ecosystem for as long as possible so that it can show them as many ads as possible. We know from research that the appeal of loud content, maybe even fake news, is very high. This means that social media profits from showing users wrong information. A specific example from an MIT (Massachusetts Institute of Technology) study: a fake post on Twitter reaches 1,500 people six times faster than a true story. The result is that we experience a peculiar agitation in the media that doesn’t always go hand in hand with the truth. In the past, the procurement of information was the problem – in the future, it will be the filtering of information, the distillation into relevance or significance.

Who will be responsible for developing these information filters?

That is an interesting and very current debate. If you followed Mark Zuckerberg’s Senate hearing in April 2018, you could sense the same question hanging in the air, although it wasn’t answered. But implicitly, Zuckerberg said that in the future, he would be able to detect and delete fake news automatically, to identify terrorist groups, to potentially carry out the whole process of filtering information – all with the help of artificial intelligence.

But this statement was never questioned. Could a social network that connects more than two billion people worldwide become a censor? What consequences would that have for different cultures? What you can say in the US, you aren’t always allowed to say in Germany. The situation is different again in countries like Egypt or China.

“In the past, the procurement of information was the problem. In the future, it will be the filtering of information”

What would such a regulation look like in Germany for example?

In Germany, you could establish a government authority. But that would be dangerous, because you would thereby create an apparatus of censorship. Maybe we would need an independent authority? The public-service broadcasters? What criteria should be used to filter? Who defines what is and isn’t fake news? What mechanisms can ensure that correct information will still be allowed in the future, even when it supports a position outside the mainstream? We need to have this debate.

Do you think that politicians and political scholars are aware of the relevance of the debate you demand?

No, politicians aren’t aware of the debate’s relevance at all. That much was obvious during Zuckerberg’s hearing. The total lack of awareness in politics was remarkable. There were obviously people in the room who didn’t even have a Facebook account and, as a result, had a peculiar view of social media. Unfortunately, this meant that they weren’t able to ask the really important questions. Nobody asked what criteria the algorithms are based on, or about the phenomenon of the echo chamber. If economic incentives can amplify fake news, that can have a tangible influence on society and could even decide an election.

So if politicians shouldn’t be responsible for the correct use of data, do we need an independent organisation for that?

The internet and social media confront us with the same sort of questions that the invention of printing did. At first, there was uncontrolled growth. Then Rome stepped in and censorship was established, with all its resulting consequences. Basically, this happened because Rome claimed that people were publishing lies. We have to learn from that and take a stand. There must be no institution that controls the flow of information based on political interest. On the other hand, it is a political task to establish an independent body that does just that.

What would you suggest?

There are a number of possibilities. A censor is one of them. You could approach the problem from the legal side and toughen the laws regarding fake news. If an institution published a piece of news that could have disastrous consequences and might be misused politically, it would be punished for that. It is slowly becoming clear that the uncontrolled growth – that unlimited, unfiltered information – isn’t viable anymore.

In your opinion, where should we draw the line?

There are already a lot of good laws. We have to be clear on the fact that the internet and social networks reverse the flow of the media. Mass media has become the media of the masses. Individuals can suddenly become mass media. We have established words like “influencer” for individuals who have so many “followers” that they, in the truest sense of the word, influence people. In traditional press law, there is a clear separation between advertising and content. Advertising has to be clearly marked. Many “influencers” are in fact more like infomercials. That has to be made absolutely clear.

How does the digital space change the relationship between media and consumer?

In the future, we will not only have to talk about media in the traditional sense. We will also have to talk about the traces that the use of digital content leaves behind. In the past, when we read a book, we did it on our own, out on a green lawn. Today, everything we read electronically is logged. In other words: the book reads us. Everyone who reads this article electronically reveals their own data while reading.

None of that has been sufficiently settled in legal terms yet. On top of that, we have noticed that, just as in advertising, political stances can be promoted through “targeting”. Just as it is possible to persuade someone to buy something, it is possible to persuade them to vote for someone. For the first time in history, we are experiencing the confrontation between the public and an enormously powerful machinery capable of manipulation. Some people might say: “That will never happen.” And I think that is a huge mistake. Even the best academic, the most critical journalist has to be aware that they can be manipulated.

How dangerous is our digital infrastructure when even critical minds can be manipulated?

There is a Facebook experiment that was only mentioned briefly during the Senate hearing. Facebook tested what would happen if it put predominantly positive or predominantly negative content in a user’s news feed. So we are not just talking about intellectual manipulation, we are talking about emotional manipulation. This experiment, which has only been discussed on a surface level, reveals an enormous potential. There is a huge danger that such manipulation will not remain limited to consumption. It can also be misused by nations that, thanks to their digital infrastructure, can evaluate their citizens in no time at all. China is already trying it. It is now possible to control citizens completely. It is vital that we understand that. Digital infrastructure offers great opportunities, but it is also an extremely dangerous breeding ground for dictators and authoritarian systems. This is a danger we have to be aware of.

We might soon be able to predict human behaviour with the help of technical tools. What would that mean in this context?

Yes, “predicting” is one of the next steps. Advertising tries to influence people, but at the moment there is no way of knowing whether such manipulation works. If they can anticipate the next step, advertisers will have a much stronger hold on the consumer. “Predicting” is already used extensively in other areas, because it’s not just about the ads you click on. When you write a message, your mobile phone suggests words you might use next. If you take a closer look, the suggestions aren’t half bad. In a couple of years, through machine learning, the software will have a pretty precise idea of what you will write next. Precise predictions of your behaviour are the next step. When that time comes, we will need a new term: not “I would like” but “I am made to like”.
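To make the word-suggestion example concrete: the sketch below is a minimal, hypothetical illustration of the principle, a bigram model that simply counts which word tends to follow which in a body of text. It is not how any real keyboard is implemented – modern systems use neural language models trained on vast corpora – but the underlying idea of predicting the next word from the words before it is the same.

    # Minimal sketch (hypothetical, for illustration only): a bigram model
    # that counts, for each word, which words follow it and how often.
    from collections import Counter, defaultdict

    def train(corpus):
        """Record the successors of every word in the corpus."""
        words = corpus.lower().split()
        follows = defaultdict(Counter)
        for current, nxt in zip(words, words[1:]):
            follows[current][nxt] += 1
        return follows

    def suggest(follows, word, k=3):
        """Return the k most frequent words seen after `word`."""
        return [w for w, _ in follows[word.lower()].most_common(k)]

    model = train("the flow of information was cut off and "
                  "the flow of information was reversed")
    print(suggest(model, "of"))    # -> ['information']
    print(suggest(model, "flow"))  # -> ['of']

The more text such a model sees, the better its guesses become; replacing the counting with machine learning on a user’s own messages is what makes the predictions Yogeshwar describes so precise.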

What consequences would that have?

Everyone can ask themselves today: why did I buy this jumper? Why did I book this holiday? Often, when we really think about it, we realise that an ad or an influencer led us to this decision. An artificial desire was created. That in itself is nothing new. The difference is that today we are able to carry out this process individually, thanks to gigantic machine support. Today, there is a form of control that isn’t as obvious as it used to be. I remember how it used to be in the Soviet Union: there were huge loudspeakers, and each morning a radio programme was broadcast across the streets. Everyone knew: this is propaganda. Unfortunately, today’s propaganda isn’t as obvious.

“Communication is more than a business model”

With that in mind, is it time to recognise the more positive sides of digital transformation?

Well, first of all, digital development in the early phases of the internet was largely driven by economic factors. If you talk to a company in Silicon Valley, the question is always the same: “What’s the business model?” Every action, each one of the countless apps, the social networks, the search engines, the video platforms – at their core, they are all economic models with the goal of earning money. It is slowly time to realise that communication is more than a business model.

If that becomes clear, we might see that there are more degrees of freedom. And we even know of some! Wikipedia is an example. Wikipedia is a model where the sense of community is central; where people try to make knowledge accessible together. There are many initiatives that serve the public good instead of the advancement of an individual. Maybe it is time to change tack and to say: “We have the chance to use the benefits of digital culture for something beyond economic ends.”

And second of all?

The cost of reproducing a product – the marginal cost – is reduced to a minimum in the digital space. If I write a text today and share it online, the cost is close to zero. That means we are developing into a culture in which traditional economic units dissolve. In some areas, we are approaching a world of zero marginal cost. That would be a kind of ideal, and the internet would really become an asset for many.

Nevertheless, there are many sceptics…

…who say, for example, that we should steer clear of social networks completely. But I think that’s the wrong approach, because they have enormous potential. As with many other technologies, there is a period of adjustment here. Right now, we don’t have many rules, and we are just starting to ask the question of who should establish them. If you ask me, it is the responsibility of the state. Just like over a hundred years ago, when cars started to appear on our roads and we realised: we can’t go on like this. We need traffic rules, we need road signs, we need traffic lights. It was the state’s responsibility then. It should be the same here.

In your opinion, what needs to be done?

We are starting to create the necessary conditions. The awareness that we need to do more than we did in the past is definitely there. Take the monopolisation of the digital space: today, the motto is “the winner takes all”. The consequence is that there is only one big search engine, namely Google – and just one big social network, Facebook. In business, we have antitrust law. It’s time to apply it to digital spaces as well. My personal opinion is that algorithms should be disclosed once they gain a certain level of relevance. In a democratic society, we cannot allow streams of information to be controlled by a private company in a non-transparent way.

Interview: Katharina Tesch

Translation: Elisabeth Lewerenz


Ranga Yogeshwar is a physicist, science journalist, author and presenter. In his book “Nächste Ausfahrt Zukunft” (“Next Exit: the Future”), published in October 2017, he deals with the digital revolution and the effects of artificial intelligence.
