(THIS ARTICLE IS MACHINE TRANSLATED by Google from Norwegian)
The classic model of how we perceive and respond to the world around us is that information comes to the brain from the body or the environment, is processed, and that information is then sent out again. The theory of embodied cognition says otherwise:
There is constant interaction with the outside world; the information is not located in the head alone. If you are talking to someone, for example, what you feel and think is closely tied to the other person and how they behave.
Our interaction with computers works the same way, says Edward Ashford Lee, professor emeritus of electrical engineering and computer science at the University of California. When we search for information through Google, the machine offers specific suggestions for what we should search for. These are often better than our own, so we go along with them.
It is common to regard computers as tools that do what we want them to do, and what they can do is determined by how the computer engineers made them. Ashford Lee calls this top-down model digital creationism. But we never start from zero; engineers always build by connecting pre-existing programs. They thus act as a kind of mutation operator, and what "survives" is impossible to say in advance.
Does this mean that we are losing control? No, we cannot lose something we never had. Both humans and machines are changing. That humans change machines, everyone can probably agree on, but the opposite idea is not new either. The philosopher and professor of literature Marshall McLuhan (author of Understanding Media) held that we humans should regard new technology as new organs; we simply have to learn how to use them.
For many, the online personality will often appear more real.
Humans have a handicap compared to machines: because a machine's knowledge is digital, everything it learns can be copied almost immediately to another similar machine. Humans, on the other hand, must start from scratch and go through decades of tedious and imperfect knowledge transfer (also called education). This handicap also brings opportunities, however. We must start early and get young people interested in the problems of digital humanism, for it is the next generation that will be responsible for innovation on the one hand and bear the blame for mistakes on the other.
Today's students should learn to problematize what it means to be private. For this, the tools they already know – Snapchat, Instagram and Facebook – can serve as illustrations. Privacy is a philosophical riddle and a whole new concept. By studying the technology around privacy, we can gain insight into what it actually means to people. We need young people to understand the dynamics of viral spread. This is something quite different from teaching them to write programs that sort numbers, which is how most computer education starts today.
Understanding how technology works
Computer programs affect the way we think, and we in turn change the computer programs. One driving force is the search for software that earns more money; the software is then constantly changed and adjusted to achieve that goal.
In contrast, we have, for example, open-source solutions, which rest on the idea of not making money from sharing important knowledge. The latter struggle to compete with commercial programs precisely because there is no financial incentive to change or improve the software.
Political regulation of software requires that we understand how it works (the digital-creationism view). But we do so only to a limited extent, and it is difficult to predict how it will develop. It is also unclear which figures we are relating to. Today, most people have an online personality in addition to their real one. Not only is there a difference of medium between the two – for many there is also a significant difference in character, and the online personality will often appear more real.
Most technology degrees in the United States require a course in ethics, but this does not work as well as intended, Ashford Lee writes. Technologists can make all the right ethical choices, and the results can still be bad. What is needed is a cultural understanding of how technology works.
For example, Google created a program that could call a restaurant, find out whether a table was available, and book it for you. To make it sound real, varying pronunciations, pauses and different intonation had been built in. And it worked. But the reaction was not as intended: the public was furious. Why did you create software that tries to fool us?
The first part of a university education in the US includes a compulsory component of humanities subjects. Many science students simply despise these and call them "Mickey Mouse courses". That arrogance must go, says Ashford Lee – it is precisely insights from philosophy, history and literature that can enable us to understand how technology works, together with us.