ULD-SH
Unabhängiges Landeszentrum für Datenschutz Schleswig-Holstein
www.datenschutzzentrum.de/


Literal translation of the video interview of Martin Rost with Andreas Pfitzmann

German language interview, recorded on Monday, June 21, 2010 (11:00 – 12:30): Martin Rost with Andreas Pfitzmann

https://www.datenschutzzentrum.de/interviews/pfitzmann/

Translation: ULD PrimeLife Team

Last change: 2011-07-27

Disclaimer:

This translation refers to a German language video interview, recorded on Monday, June 21, 2010, at the Technische Universität Dresden.

The recorded video interview has been transcribed into a German written version in February 2011 and has been updated by the interviewer Martin Rost. Transcription and translation have been done to the best of our knowledge and belief; however, we cannot warrant or guarantee the accuracy and completeness of the information contained herein.

Work in progress: In particular the text after 1 hour video time is still under revision to improve its quality. Please check this link again for a newer version.

Contact: martin.rost@datenschutzzentrum.de


Martin Rost: Prof. Pfitzmann, thank you very much for the opportunity to interview you! We are sitting here at Technische Universität Dresden, at the chair of data protection and data security, which you head. Before we start with informational privacy topics, which are exciting in their technical aspects, I am interested in how you got involved with data protection.

Andreas Pfitzmann: Well, that is a long story as well as a short one. The long story is that I had been thinking about my life since the end of high school: What do I like to do? Do I like to deal with humans? I considered studying psychology or theology at university. Should I utilize my mathematical and technical talents? Do I want to work with machines? Around graduation I was told that there is a subject of study dealing with both: computer science, which I knew nothing about because computer science was a new field at that time.

Then I decided: “Yes, that sounds good. It is rather technical-mathematical, but it certainly has more connections to people and society than physics or mathematics.” During all my studies, it was clear to me that I wanted to learn these technical things; in the end I wanted to do something useful for people or for this society. That’s my long story.

And now here comes the short story: It begins in spring 1983 – I had finished my studies in fall 1982 and had just become a research assistant at the chair of fault tolerance. In spring 1983 Dr. Ruth Leuze, Data Protection Commissioner of the federal state of Baden-Württemberg, gave a lecture course on data protection in Karlsruhe. Students had long been fighting for a lecture on data protection. The professors did not support such a lecture, because it had something to do with politics, with law, thus with things outside of computer science. The lectures were not advertised well; there were only 20 students attending the first session. And then the federal census was scheduled for spring 1983. First we were 20, then 40, and all of a sudden the lecture hall was crowded with students; we had to move to the largest available auditorium, which was half full the first time and overcrowded at the end of the semester. It was an enlightening moment for me to see what strong emotions the planned census triggered in Germany. A lot of people were afraid of the government spying on them and of things being done with their data that were adverse to their interests.

Perhaps this was a kind of feeling like: "Good grief, the year 1983 is the year 1984 minus 1, and we are getting very close!" It grew into a huge people's movement and I was right in the middle of it. I was amazed. At that time I regularly went to the faculty colloquium. There we met a lecturer who worked for a major telecommunications provider on a project called "BIGFON": an integrated broadband telecommunications network. The idea was: In the future we will provide all services, such as television or radio broadcasting, via one single network. This network will allow users to choose from an unlimited supply; the transmission path is no longer the limit that allows only a certain number of television channels to be carried. You could choose from the entire supply. And the price to pay is – of course – that the choice is observable and can be logged in the integrated telecommunications network. In the classical television broadcasting system, the network does not know how many people are watching TV or who watches which channel or program. But with BIGFON it would be possible to find out exactly which movies are being watched, when viewers are switching channels, and so on. You could accurately track which movies a person watches and whether he switches to another program in situations where little or much violence is shown. These possibilities bothered me much more than – begging your pardon – the slightly boring questions about the federal census.

So I asked the telecommunications engineer whether he knew about the federal census and its impact. I asked him whether he had thought about the relation of his work as an engineer on this project BIGFON to data protection. I asked him whether he believed that this network would eventually be accepted by anybody. And I was lucky: he was an honest engineer. He told me that he had never thought of data protection and did not know anyone who had. After this talk with the engineer and the lecture by Dr. Ruth Leuze, I started to think about this problem together with my colleagues during coffee breaks at the institute.

<00:05:54>

Martin Rost: … and the problem has exploded. The technical development has exploded as well.

Andreas Pfitzmann: Yes, we realized how many technical innovations in infrastructure would emerge within the next 10, 20, 30 years. I got the feeling that this was it: combining my technical expertise with my interest in the needs of people. I did not just want to earn money. These are the reasons why I got involved with data protection. Then I worked for another one and a half years on what we called data protection through technology.

<00:06:41>

Martin Rost: Already in the 1980s?

Andreas Pfitzmann: It was in 1983 – technical data protection.

Martin Rost: … data protection through technology. There was a huge influence coming from America: a theoretical, academic influence.

Andreas Pfitzmann: For the first months I worked completely from scratch. I started with nothing; I did not know anything about this topic. At the first conference where I had a paper, Professor Fiedler asked me whether I knew David Chaum. I said: "No, I don't. Could you spell the name, please?" I went to the library and found an article by David Chaum in the Communications of the ACM from 1981 – that is, a few years earlier David Chaum had developed the concept of mixes and digital pseudonyms. I was very interested and excited, but also a little bit disappointed because I was not the first one working in this area. I was, so to speak, an independent second. But this pushed us forward as a group, because David Chaum, as a cryptographer, chose a different approach to tackle the problems: he wanted to solve everything with cryptography. My background was network technology; I wanted to solve the problems in a different way, at best directly on the physical level. From where we stand now, we can state: Trying to solve the problems with cryptography leads to a clearer, or rather a more flexible, system design, and this approach has prevailed. So we got to know David Chaum's first groundbreaking work through the literature. I wrote him a letter and sent it, but it seems the letter never arrived. At first I had no chance to meet him and no current address, but in spring 1985 I met him at a conference and talked to him. He was surprised that someone had not only read his paper but also understood it, including the more difficult parts of the text. He spontaneously invited me and my group to visit him in Amsterdam – at that time he had moved from the US to Amsterdam in the Netherlands. So the main members of my team, Michael Waidner, Birgit Pfitzmann and some students, regularly went to Amsterdam. We could learn very much from David Chaum within a short time because he was years ahead in this field and a master of cryptography.

That's the way I got involved with this research field, and after a short time a lot of people became interested in our work. The first group I noticed that had become particularly interested in this field were lawyers. Their comments varied: "What? You want to encrypt data? With encryption we cannot monitor anymore who sends what data to whom!" At that time data protection lawyers believed that data protection should not take place through technology but through legal rights. They thought technology itself poses the risks, and that we all need the law for protection. Well, I hardly agreed with that. But after all it was negative feedback, and this feedback at least indicated some sort of interest in the field. Within the GI (Gesellschaft für Informatik e.V., a German non-profit organization for computer scientists), a special interest group started to deal with legal informatics and legally compliant design of information technology.

<00:10:43>

Martin Rost: Please, tell me names!

Andreas Pfitzmann: At that time, Mr Göbel, Mr Fiedler and Mr Redecker were interested in that topic, partly because it was new to them that we wanted to provide direct technical support for data protection or even wanted to build something. They invited us to a lot of small workshops; they discussed a lot with us and helped us to understand what makes lawyers tick. I would not say that I understood everything, but we started to understand. This was the first discipline beyond computer science with which we had very close contacts. In those first years the Data Protection Commissioners did not react much to our approach – I still remember sending my first papers to Dr. Leuze together with a letter in which I thanked her for bringing my attention to this topic. She sent me a kind letter in return, but nothing else happened at that time. From there it took quite a long time until the Data Protection Commissioners really got interested in our approach and engaged with our topics.

<00:12:02>

Martin Rost: … in what year?

Andreas Pfitzmann: That was in the years 1988/1989/1990/1991, when people like Hansjürgen Garstka or Helmut Bäumler in Kiel invited us and told us that we should train the people in their offices. Their offices, full of lawyers, should learn about computer science and technological possibilities. That was a big surprise for me. I liked training these people, and I learned a lot through the questions we received. They showed us what they were mainly interested in and how good our explanations were. There was loose contact – probably a few years earlier – with the research group "Provet" of Prof. Dr. Alexander Roßnagel, and, looking back, I think Alexander Roßnagel was the lawyer with whom I had the most regular contact and steady exchange over the long term.

<00:13:20>

Martin Rost: What about the papers from David Chaum? What did you do with them?

Andreas Pfitzmann: At first, we tried to write down some topics of the Chaum papers to make them more understandable from our point of view. We tried to add the perspective of engineers, also a bit of the legal perspective, and we tried to expand the extremely good ideas of David Chaum. We thought about how we could implement the ideas. And – years later – we thought about how we could improve them. And I think we succeeded. A lot of fundamental papers emerged at that time. Unfortunately, too many of these articles were written exclusively in German, which was – from an international scientific point of view – not a good decision at that time. Most scientific colleagues from the US and all over the world basically could not read the papers. But on the other hand it triggered a lot in Germany – many German lawyers and politicians would have been out of their depth dealing with English papers. The fact that we published a lot of papers in German, especially at the beginning of our research activities, helped us to advance the discussion about privacy and technology design in the German-speaking area more than in any other area I know of. From 1985 to 1990 the leading literature in this field was in German, not in English. Only later did English prevail as the common language of science, of course.

<00:15:16>

Martin Rost: The "mixes" concept by David Chaum – you really tried to implement it together with your colleagues right away, to show that these are not just thoughts that might be put into practice one day, but are feasible today.

Andreas Pfitzmann: That was many years later, of course. That was when "multilateral security" kicked off.

Martin Rost: What’s the meaning of multilateral security? What’s the main idea?

Andreas Pfitzmann: The main idea is to take into account the security requirements of all participants or, in an even broader sense, of all who are concerned. The idea is to put the people concerned in an active position, so that they are involved, able to act themselves and to express their preferences in the course of using a system. The idea is, when starting to conceptualize and design a system, first to ask: what are the different interests? By analyzing them and writing them down, the conflicting interests of the participants usually become apparent. In this case we have to elaborate: How to negotiate in the case of conflicting interests, how to solve the conflicts? As an engineer I cannot design a system that implements contradicting requirements – that is impossible. For building a system, I need a consistent result when analyzing the requirements. This also means: If these conflicts of interest have to be resolved, this process has to be faster than the time it usually takes to build the system, i.e., our infrastructure has to be developed in a way that supports different weightings of interests and resolutions of conflicts. At the end of this negotiation process it should be clear what has been agreed upon. And this result should be enforced.

Data security and privacy should not be mere declarations of intent or promises, which can be forgotten, broken, and ignored. We would like to see them enforced. And with multilateral security we hope that everybody will be able to enforce her/his interests.

If you want to summarize it, we could say that multilateral security is security with minimal assumptions about other parties. Of course, any assumption can be wrong. As a matter of fact, the fewer assumptions I have to make about others, the better the chances that there are no false assumptions at all and that in the end the system will deliver what it promises. So much for the concept of multilateral security, which was developed within a "Kolleg" of the Gottlieb Daimler and Karl Benz Foundation as a joint effort of Günter Müller, Kai Rannenberg, myself and several other people, people from different disciplines, too. We had a lot to do with psychologists, the second expert group after the lawyers. I got into close contact with psychologists and began to understand them. I was happy when I got the feeling that they understood me, too.

This concept of multilateral security is a kind of superstructure or generalization of classical security. Classical security meant: The party who designs the system decides how much security and which kind of security is incorporated. For example, when a bank designs a system, the bank is only interested in security for the bank, not in security for the bank customer. And typically it takes about a decade and a half until the bank notices that – because it built the system, and because judges little by little understand that the bank could have designed it differently – it now bears the burden of proof and so loses a lot of lawsuits. A system which in the beginning was very secure for the bank and very insecure for its customers may turn into a system that is very secure for the customers but very insecure for the bank – at least if the customers hire competent lawyers. As I said, multilateral security is a generalization of classical security. It is an overarching concept that also contains technical data protection, because the requirements for a system have to comprise the demanded data protection features: the confidentiality properties I would like to guarantee and, of course, data avoidance strategies. All these are parts of multilateral security.

<00:22:37>

Martin Rost: Anonymity?

Andreas Pfitzmann: "Multilateral security" as a concept was developed during the years 1995 to 1998. Anonymity is a protection goal that can be covered by multilateral security. Anonymity had been our primary objective in designing networks and infrastructure. Since 1983 we had been thinking that strong, strict confidentiality means that the contents may be known only by those who need to know. But in the case of communication, at least someone else will see the content – I mean, if I really want nobody to learn a certain content, I must not communicate at all. This other person who sees the content can pass the information on to yet others. In the case of anonymity, the big research question in the beginning was: Is it possible to design a network where no one can notice who is talking to whom? At first this seems absurd. Are we able to design such a network? Yes, it is possible, but it takes significant effort. And the next question is: who would want to use it? If I communicate with people, then I usually want to know with whom I communicate. But the main question is: Do I have to know who exactly I am communicating with? Or could it be sufficient to have a service description or a role description, so that I communicate with that service, that service address, instead? Different people can be behind it. By the way, this is not unknown: for example, if somebody calls a Samaritan telephone service or a crisis line, the caller does not know the person on the other end of the line, only the service description. Another aspect of anonymity for us was: If we had anonymity on the communication layer, strong accountability could be implemented where anonymity is not desired – by digital signatures, of course. We could set up a directory infrastructure with names or civil identities to determine with high certainty who sent a message. In short, we said at the time that ISDN, the Integrated Services Digital Network which was being developed, is a very bad compromise, because with ISDN we can neither verify the originator of a message nor check whether the message is unchanged. This means ISDN is not good enough for integrity and accountability. But it already destroys so much anonymity that in this respect it is not good either. So it is something that does not accomplish anything really well.

<00:24:00>

Martin Rost: Did the industry put pressure on you? Or didn’t they take you seriously?

Andreas Pfitzmann: They did not put pressure on me, but some of my academic teachers in Karlsruhe asked me: "Mr Pfitzmann, do you realize how much research funding we receive from Deutsche Telekom or from Siemens or from Alcatel? And a lot of the things you write, and the way you write them – so exaggerated (from my point of view: so clear) – do not always find favour." We wrote in papers that cryptographic systems should be public knowledge and that they should be standardized. It was the beginning of the cryptography debate.

<00:24:59>

Martin Rost: ... what year are we in?

Andreas Pfitzmann: Published in 1987, so the discussion perhaps started in 1986. At that time we wrote the first paper for the German-speaking area about the need for publicly known cryptography. I think we were – in comparison with the Americans – quite early, and in our papers we mentioned the so-called "Zentralstelle für das Chiffrierwesen (ZfCh)" (central office for the matter of ciphers). This office was, in essence, a cryptography authority, tasked with developing and assuring cryptography for the diplomatic service and, of course, with breaking the cryptography of other nations. I had heard of such a thing, but I had never spoken to anyone there. I think they had no interest in talking with people like me. The first time I saw the expression "Zentralstelle für das Chiffrierwesen" in print was in our own text. They did not want to be in the public eye at all. There was even an enquiry from the ZfCh at another chair in Karlsruhe as to whether it was possible to somehow silence these people. There was no more than this request. Luckily the answer was: "No." I don't know exactly whether the answer was "Thank God, no" or "Unfortunately, no". But in any case the answer was: "No." That was all I noticed at this point.

<00:26:42>

Martin Rost: ... free research. Your research isn’t restricted in any way, correct?

Andreas Pfitzmann: Yes, I believe so. As long as we did research in the usual way, everything was normal. Some people said: "The work you do is great, we agree with it!" And – as with any research – there were people who said: "No, we do not accept your assumptions, we do not accept your premises. You should do totally different things." But what happened was simply scientific dialogue, sometimes quarrels, but totally normal.

<00:27:14>

Martin Rost: ... but you also have engineers in your team, so you did implementations.

Andreas Pfitzmann: At that time, not at all.

Martin Rost: … and later on, has anything changed then?

Andreas Pfitzmann: … all right, up until about 1996/1997 we did mainly theoretical work.

This was something that I liked to do – taking ideas from David Chaum or creating my own ideas, then spelling out down to the last detail how we could build this technology. How much would it cost – not in DM or euros – but what are the costs in transmission volume, what are the delay times? The way engineers usually state costs. So it became clear in the first 12 to 13 years that anonymity is feasible, and that it also has its costs. And if anonymity is not implemented, then that is a political, legal or economic decision not to do it. As a human being, I was quite happy about that, because I think this is the task of a basic researcher: see what is possible, see how we could implement it. But as a scientist I am not the one who decides whether it is implemented. Democracy means that society, the community and their representatives decide what is to be done. I think that is how my generation dealt with it; that was our research approach. Then in 1994/95/96 there was a strong movement on the internet not just to write papers but to try things out. Basically, the internet by its architecture provides the big playground for trying things out – at least that is what it was originally planned for. Many vulnerabilities can only be explained this way: it was designed as a playground. And it still is one, so you should not be surprised if it is used as a playground. This came to the fore, and members of my research group, who are maybe 8, 10, 12 years younger than me and grew up in their studies with "Let's give it a try!", had the very strong feeling: we want to experiment with anonymity and anonymous communication. I found it interesting and said: "Yes, okay, please do so." Then they started in a way that made me inwardly shake my head in disbelief for quite a while, because they did not implement in full strength what we had come up with theoretically. Instead, they first implemented it in a way that made the performance halfway okay, and they made a compromise concerning the anonymity. That was very unusual for me, because the objective of my desk work had been: "Make it as secure as possible!" Quite early we had introduced a description of what the attacker working against the protection mechanisms could potentially do; we called it the attacker model. We said quite early: "Let's assume that all channels, all transmission lines, are intercepted by the attacker." That was fiction, of course, when we first said it around 1985. It was an estimation, the absolute horror scenario: "There is a huge safety margin here; it will never happen." Today we have to say: "Well, the secret services and police authorities of the world have made sure that all transmission lines are being intercepted." And of course, our desk-work concepts would still be secure against those things. But my team started with attacker models which were too weak in my opinion. Still, they had started, and this brought us into contact with many people, organizations and institutions that we had had no prior contact with. It started very innocently.
Our anonymity service was used in a way that some pupil – be it justified or a prank – wrote in his teacher's guest book: "The stupid cow is not able to give proper lessons." The teacher then turned to us and said: "I want to know who did this." We replied: "Sorry, we cannot find out, because we do not know it ourselves." I don't know whether we wrote this to her, but probably all of us thought that the teacher should rather make an effort to talk to her pupils or to change her lessons than try to find out who had written critical statements in her guest book.

<00:33:11>

Martin Rost: It just became apparent in your words: the specialty of this anonymity service was that it protects against the operator of the service, too. That was really the problem that needed to be solved, wasn’t it?

Andreas Pfitzmann: Yes, there were already so-called anon proxies at that time. They provided a single server that forwarded the data it received after having replaced the addresses. Only this one server knew exactly what data was sent from whom to whom. From the perspective of a secret service chief, it would have been obvious to provide such a service: there is no cheaper way to learn who considers what as confidential and who does not want to be observed communicating with whom. With our system, we did not go via only one intermediate node, but via several intermediate nodes with different operators. One of the early operators of such a "mix" was the ULD (Independent Centre for Privacy Protection) in Schleswig-Holstein. The operators would have had to cooperate with each other if it were to be found out who communicates with whom. Well, this was a property from the very beginning, but by now I could report many weaknesses of the first versions of our software through which this linkage could have been established at that time without the operators cooperating. This was the reason for my head-shaking in the beginning. But Hannes Federrath, Stefan Köpsell and others said: "Andreas, leave it to us." And it seems to have really worked in spite of the weaknesses existing at that time. Meanwhile not only the teacher, but also the police approached us every now and then with requests in their investigations: "We'd like to know ..." Okay. Our default answer was: "Sorry ..." We did not know, because we had no logfiles. Then the police often requested: "Please log the records for the future." This led to the question of what legal basis would oblige us, or at least entitle us, to log data from the communication, etc. So since we have been operating an anonymity service, we have had many contacts with the police – less with the secret services – but of course also with the media and the press.
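
A minimal sketch of the layered encryption behind such a mix cascade may help here. It assumes one symmetric key shared with each operator and uses Fernet from the third-party cryptography package as a stand-in for the hybrid public-key encryption real mixes use; the operator names are illustrative, and the batching, reordering and padding that real mixes perform are omitted:

# Sketch: layered ("onion") encryption over a cascade of mixes.
# The sender wraps the message in one layer per mix; each operator
# can remove only its own layer, so sender and recipient can be
# linked only if ALL operators cooperate.
from cryptography.fernet import Fernet

operators = ["mix A", "mix B", "mix C"]      # run by different parties
mix_keys = {op: Fernet(Fernet.generate_key()) for op in operators}

# Sender side: the innermost layer belongs to the last mix in the cascade.
packet = b"who talks to whom stays hidden"
for op in reversed(operators):
    packet = mix_keys[op].encrypt(packet)

# Cascade: each mix strips exactly one layer and forwards the rest.
for op in operators:
    packet = mix_keys[op].decrypt(packet)

print(packet)  # the plaintext emerges only after the last mix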

I believe that such a running system with its little scandals is much more tangible for the media and the press. And I believe that with this system we learned a lot about anonymity – and not only us, but many users. It has been – and still is – a big awareness campaign.

<00:36:25>

Martin Rost: The anonymity software was used by the police as well as representatives of industry if they wanted to ensure that their competitors do not know what they are interested in. This means that you actually have allies for anonymity in areas where it would not necessarily be expected, for example, in investigative authorities. However, it occurs to me that it still does not go without saying that an anonymity infrastructure for communication must exist, for example for political online elections.

Andreas Pfitzmann: People obviously have a need for anonymity, especially the police and secret services as well as industry. There was a time when – so we were told – not only was child pornography disseminated through our service, but a pedophile ring also used our software to make appointments for child abuse. At this point we offered, as researchers, to shut down our service. I have to say, freedom of research is a good thing, but I would not invoke my freedom when it comes to child abuse. And the answer I got was the following: "No, do not shut off the anonymity service. You have to continue with your service. Not only because otherwise these people would be warned that law enforcement is close on their heels. But also: We need your service ourselves, because we use it to search for illegal content on the internet. If we use an IP address of the Federal Criminal Police Office (Bundeskriminalamt, BKA), then websites will show only content that is legally compliant." – At this point let me remark: what the web server responds may depend on the IP address of the request. – So we were told very clearly: "We do not want you to shut down your service. We need your service."

<00:39:02>

Martin Rost: I remember the story with China: the need to do research on the internet even from within China. Major western companies have accessed AN.ON and JAP.

Andreas Pfitzmann: We can go further: I think this service was important for companies as well as for the freedom movement in Iran. This service brought us into contact with many actors. We got messages from China and Iran such as "Thanks for providing this service." So we did not only receive messages from teachers who felt criticized by their pupils. Of course we had to learn that our service was sometimes misused. I have to say for myself: it gets under your skin to be informed that something we operate, or in whose design we participated, is used to organize the abuse of children. Statistically, as far as we know, our service had no higher crime rate than the internet as a whole. In the early years we did not know whether our service would become a melting pot of people who would eventually come into conflict with the law or with the police. That is not the case. We have had a few spectacular cases where our service was used for criminal matters, too.

But compared with millions of downloads by probably hundreds of thousands of users – not simultaneously, but distributed over the decade of the service's existence – the requests by the police cannot be seen as evidence of a higher frequency of conflicts with the law. Our fears from the beginning, such as "Are we running a service which we will have to turn off after half a year because we mainly support criminals in their actions?", have not been confirmed as far as we know.

<00:41:28>

Martin Rost: My suggestion: let's expand the perspective. First the infrastructure was built, followed by a phase of identity management some layers higher.

Andreas Pfitzmann: We have been talking about – let's say – data avoidance. And not only avoidance in the sense of data storage, but also data avoidance in the sense of avoiding data collectability (the possibility to collect data; related: observability). That was the topic I started with in research, and it is still going on, still a topic under the perspective: How far can we go with it? How efficient can we be? But it is also clear that there are a lot of services where data avoidance cannot solve the problem completely, because some data have to be communicated, because you want to be recognized by your communication partner when progressing with transactions or continuing a dialogue. That leads us to, let's say, a second generation of research activities. In our research group it is mainly related to the keyword identities management. The idea is that a person does not have only one single identity, and does not always use the comprehensive identity, i.e., the identifier of the ID card, date of birth, place of residence, interests, diplomas, blood group and I do not know what else. To the contrary, let's say: "No, in different contexts we want to use different partial identities." That is what we call them. For example, if I participate in a forum where we exchange the latest jokes, my blood group and my educational achievements are completely irrelevant. What may be relevant is whether the last ten jokes somebody posted were funny or not. There, you could create a partial identity with very little personal data, which only has to ensure that no one else can post bad jokes under this identity. Essentially I need a digital pseudonym – a verification key of a digital signature system. Then I would post my jokes in an anonymous infrastructure and would sign them with the digital pseudonym to make clear that the jokes originate from me – only to make sure that no one else can ruin my reputation, because my jokes are kind of funny and entertaining.
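
A minimal sketch of such a digital pseudonym, assuming Ed25519 signatures from the third-party cryptography package; the forum scenario and the joke are illustrative, not a description of any actual system:

# Sketch: a digital pseudonym is just the verification key of a
# signature key pair. Posts travel over the anonymous infrastructure,
# but are signed, so nobody else can post under this partial identity.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

signing_key = Ed25519PrivateKey.generate()   # stays with the owner
pseudonym = signing_key.public_key()         # shown to the forum

joke = b"A SQL query walks into a bar and asks two tables: may I join you?"
signature = signing_key.sign(joke)           # posted alongside the joke

# The forum (or any reader) verifies the post against the pseudonym,
# learning nothing about the poster's civil identity.
try:
    pseudonym.verify(signature, joke)
    print("accepted under this pseudonym")
except InvalidSignature:
    print("rejected: reputation hijacking attempt")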

There is a world of difference between the one case of always doing everything under the full civil identity and the other case of a partial identity as a joke teller, where you need practically no personal data. In the digital world, identities can be kept apart in a way that does not work to the same degree in the physical world. It is important that this separation works in the digital world, because forgetting basically cannot be organized there. At the opera or in the football club, people will usually gradually forget my face, unless I have misbehaved extremely or people are particularly struck by me. But in the digital world, where you can link and combine all kinds of events, you cannot remove things anymore. In this respect, the first generation of anonymous communication is very well supplemented by the second generation, identities management. Identities management means giving the communication partner – or partners, if communicating with a group – only that information which is relevant in this particular situation, in this area of your life.

<00:47:57>

Martin Rost: Your work is clearly political. And you also want to have a political effect. Could you give examples where you have been successful at the political level?

Andreas Pfitzmann: The first question is: What is meant by successful political work? For a basic researcher it means: your findings in basic research create new possibilities. Whether people want to make these new possibilities happen or not is politically discussed and decided. To decide sometimes means "reacting by doing nothing" … but okay. With this said, I would say: "Yes, with great success." Not only us – there was a bunch of people: David Chaum, our group, and people in a lot of other places, be it in German-speaking areas or internationally, people who do good work. Yes, the developments in the field of technical data protection are perceived and are being politically discussed. That was the view of a basic researcher.

So if you ask me as a politically thinking citizen: “To what extent were political decisions made in a way that I would like?” And if success means: A decision really is taken, and even in a direction I would prefer? Then our success is very mixed.

There were some cases where we had success. In most cases nothing happened. And in a few cases things happened explicitly against our advice. Now you can say, or you have to say, as a politically thinking citizen: that is a healthy situation. It would be surprising, in a certain way, if an individual could stand up and state: "They have always taken my advice and everything turned out the way I wished." Somehow I have the feeling that when someone says that, he must be very stupid or very arrogant.

So when did we as researchers get the desired result from politics? The most prominent example we have is the topic of crypto regulation. From 1986 on there was an intense discussion about whether the use of cryptography – or perhaps only its export – should be regulated, because cryptography had reached the point where it could come to the mass market. There was the fear, of course, that cryptography would be used by rogue regimes, by foreign secret services or by terrorists. It was discussed especially in the US, but also in other industrial nations: "Couldn't we try to deposit the keys of all cryptographic systems so that someone is always able to read the clear text?"

Keywords coming from the US were the Clipper chip and key recovery, i.e., key escrow with the Clipper chip and later on key recovery. My group – at that time in Karlsruhe and later on in some other places like Hildesheim and Dresden – wrote some fundamental papers very early on why we think that cryptography and the support of cryptography, in particular public-key cryptography, are more useful for civil society than any attempt at regulation would be harmful for terrorists and criminals. Because terrorists and criminals usually do not need public-key cryptography; they can exchange their keys in a different way. And the one-time pad – a cryptographic method which cannot be broken by any supercomputer in the world – has long been invented. It is written in all textbooks; it is known by everyone who wants to know. (Mr Pfitzmann holds up a USB memory stick.) Such a USB memory stick could store all the data of the 1983 census five times over. And it can store so much key material for the one-time pad that I could speak with someone on the phone for a lifetime; even video telephony is possible for hours. Crypto regulation will – this is my belief, and there are very good arguments to support it – practically not harm organized crime and terrorists, but civil society. This argument is very old, it is already made in the first papers we wrote, and it has accomplished nothing, because I have the feeling that arguments have very little effect in political discussions. At that time – in 1992/1993, when the debate about key escrow/key recovery coming from the US escalated – we moved into a new area of research: steganography.
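
A minimal sketch of the one-time pad he holds up the USB stick for: the key must be truly random, at least as long as the message, exchanged beforehand (for instance on such a stick) and never reused; under these assumptions no amount of computing power can recover the plaintext:

# Sketch: one-time pad. Encryption and decryption are the same XOR.
import os

def otp(data: bytes, key: bytes) -> bytes:
    assert len(key) >= len(data), "pad must be at least as long as the message"
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at the usual place"
pad = os.urandom(len(message))   # pre-exchanged key material, used once

ciphertext = otp(message, pad)   # information-theoretically secure
assert otp(ciphertext, pad) == message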

Steganography is the old art of hiding secret data in large amounts of data that appear free from suspicion. That means you can embed encrypted secret data in pictures, and these pictures look the same without attracting attention. We developed some embedding algorithms and pushed the research in this field forward. There were years in which, I think, we had the best research group in the area of steganography in Europe. We could even embed data in video conferences, and we developed a demonstrator that showed all of this. We presented the demonstrator, for example, in the state parliament of Hessen, where a big congress of Data Protection Commissioners took place. They invited us – I think it was in 1998 – and we wanted to present it. But the plenary hall had no screen. We insisted on getting a screen installed in the plenary hall, and in the end we managed to have a dowel put into the wall of the Hessen state parliament so that the screen could be mounted and we could demonstrate how it works: by showing pictures. We were successful. What I would have liked is for somebody to come and say: "And what is behind the pictures you are showing us? What exactly are you doing?" I would have loved to explain it. But nobody came; nobody wanted to know. They believed us. We were not bluffing; what we presented was real. Somehow they should have asked critical questions, but nobody did. And the pictures were very impressive; people felt moved by them. A secretary of state from the Ministry of the Interior in Bavaria called us supporters of terrorism. But a Federal Minister of Justice said: "If this is the way it is, then cryptography regulation obviously does not make any sense." The pictures were really very impressive.
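
A minimal sketch of least-significant-bit embedding, the simplest member of the family of embedding algorithms referred to above (the group's actual algorithms, and anything surviving video compression, were considerably more sophisticated); the cover bytes here stand in for raw pixel data:

# Sketch: LSB steganography. Each cover byte donates its lowest bit;
# a brightness change of 1/255 per pixel is invisible to the eye.

def embed(cover: bytes, secret: bytes) -> bytes:
    bits = [(byte >> i) & 1 for byte in secret for i in range(8)]
    assert len(bits) <= len(cover), "cover too small for the secret"
    stego = bytearray(cover)
    for pos, bit in enumerate(bits):
        stego[pos] = (stego[pos] & 0xFE) | bit   # overwrite lowest bit
    return bytes(stego)

def extract(stego: bytes, n_bytes: int) -> bytes:
    bits = [b & 1 for b in stego[: n_bytes * 8]]
    return bytes(sum(bits[i * 8 + j] << j for j in range(8)) for i in range(n_bytes))

cover = bytes(range(256)) * 4      # stand-in for raw image pixels
secret = b"hidden"                 # ideally encrypted before embedding
assert extract(embed(cover, secret), len(secret)) == secret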

What are the lessons learned? There are some disciplines where it is rather easy to show pictures. Steganography is such a discipline: if you embed data into pictures, this can be presented nicely. Unfortunately, in many other areas it is not that straightforward, or it is not easy to convey the message for politics in the form of a picture.

Since then, one question has occupied me again and again: How can we compress and present our message in a way that makes it comprehensible within a few seconds? I have the impression that political attention, especially the attention of politicians, has to be measured in seconds – not in minutes, not in hours. Well. Here we had some success: The federal government decided: within the OECD we vote against the binding rule proposed by the USA that all cryptography designed and distributed in industrial states, i.e., the "club of the OECD", should have a back door, meaning key recovery. That is probably the greatest success we could achieve as politically active citizens with a background of work as scientists in basic research.

But I'd also like to talk about the biggest failure: our biggest failure was not being able to convince the politicians that the retention of communication data (data retention) is big nonsense. In terms of content there is some similarity to the cryptography debate. People in small groups who want to plan terror attacks or want to exchange certain pictures in the area of child pornography do not need a high-performance medium of communication. For this, a modest transmission volume and modest real-time functionality are quite sufficient. And that is something which can be achieved by everybody who wants to, even with data retention in place and enforceable in a lot of states: people can use structures and servers in countries where no data retention exists.

Currently we are developing, based on the "DC+"-net, an anonymity service of the next generation where data retention cannot be implemented, because there is nothing that can be usefully stored and retained. We have a lot of good scientific arguments for why data retention is ineffective. But we have not succeeded in condensing all this into one picture. And what happened was the same as with what the Americans wanted to do via the OECD: domestically, they could not have achieved key recovery, so they tried to achieve it through foreign policy, via the OECD. Probably no single nation would have achieved mandatory data retention either; for this reason a European Directive was used. This we could not prevent. Now we need to take a very close look at how this European Directive is to be legally evaluated; maybe not everything is lost. In my view this is the point where I would say as a citizen: "No, the way we argued – and we believed that we had good arguments – did not gain acceptance." And that may again have to do with pictures, pictures in a different sense. People who want to solve crimes will show political decision makers pictures of abducted children, corpses hacked to death, or abused children. Using these pictures, they build up emotional pressure behind the demand to do something. That is clear: if something can be done, it should be done, obviously. But the emotional pressure is so strong that – in my interpretation – people no longer think reasonably and lose their sense of proportion as to whether the proposed measures against child abuse or child kidnapping are suitable ... and whether they really can work.
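
A minimal sketch of one round of a DC-net (dining cryptographers network), the principle behind the "DC+"-net mentioned above, with three participants and illustrative parameters. It shows why there is nothing useful to retain: each individual broadcast is indistinguishable from random noise, and only the XOR of all broadcasts yields the message, without pointing to the sender:

# Sketch: one DC-net round. Every pair of participants shares a random
# key; each participant broadcasts the XOR of its shared keys, and the
# sender additionally XORs in the message. Each key appears in exactly
# two broadcasts, so all keys cancel in the global XOR.
import itertools
import os

N, MSG_LEN = 3, 16
xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))

keys = {pair: os.urandom(MSG_LEN)
        for pair in itertools.combinations(range(N), 2)}

def broadcast(i, message=None):
    out = message if message is not None else bytes(MSG_LEN)
    for pair, key in keys.items():
        if i in pair:
            out = xor(out, key)
    return out

# Participant 1 sends; 0 and 2 stay silent. Each broadcast looks random.
rounds = [broadcast(0), broadcast(1, b"anonymous hello!"), broadcast(2)]

result = bytes(MSG_LEN)
for r in rounds:
    result = xor(result, r)
print(result)  # b'anonymous hello!' recovered; the sender stays hidden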

Our greatest success was: we had pictures, and our pictures won. And our biggest failure was: other people had pictures too, and their pictures won. My conclusion is the feeling that arguments barely count in a political discussion; in the end, only pictures count for a decision.

And there may be a third class of pictures: the collapsing Twin Towers. All these pictures are stored in people's heads. They are there! We feel it is an enormous disaster. If you take a close look: an estimated 5,000 dead – no, fewer than 5,000 …

<01:00:19>

Martin Rost: I think 3,600 dead.

Andreas Pfitzmann: 3,600 – only a fraction of the traffic deaths every year. That means: if I want to protect my citizens, I do not have to fight terrorists; I should ask myself how we can organize four-wheeled traffic differently. If I take a look at the Twin Towers: why did they collapse? Because of the airplanes, I think … Obviously, the Twin Towers would not have collapsed without the airplanes; in this sense: yes. But after all we know today, the Twin Towers would have collapsed several hours later, if at all, had their steel frames been in accordance with fire regulation rules. This means: the terrorists did not cause solidly built towers to collapse; the buildings were not in the condition they should have been in. It is a form of criticism when we simply say: "Okay, it does not start with the fight against terrorism. We should take a look at the building substance, and we should arrange for architectural fire protection precautions to be carried out." I did not hear such things from the Americans. It is much cheaper, and for politicians much more prestigious, to tell people that they are waging a war on terror than to take care of fire protection rules in public or commercial buildings. The fact is: the power of images is enormous. At this point let me make a comment about pictures: a lot of the pictures we see in the new multimedia world are made by amateurs and published by them. On the one hand I think it is good that information cannot be blocked; on the other hand society needs to find a sensible way of handling pictures, so that the pictures that are taken do not take away wisdom and rationality. Pictures should not push us in directions which are irrational and not very helpful.

<01:02:38>

Martin Rost: How would you describe the difference between data security and data protection? Where do you draw the line? How would you determine their relationship to each other?

Andreas Pfitzmann: Well, the offhand answer is: as a first approximation, data protection means protection against data, and data security means protection of data. But okay, let's take a closer look. In the case of data protection I want to protect the individual, the human being. That is a strong motivation for me. I can also imagine that people in data security say: well, I do not want to protect individuals only; I want to protect groups as well, I want to protect group interests. But my motivation is the protection of individuals against others' excessive knowledge about them, with the possibility to pursue someone, to manipulate someone and so on … this motivation is very strong for me. Data protection refers to data related to a human being, to a human being's life, to their relationships. Data security is about all kinds of data: data about the love life of turtles, for example. There I do not see any data protection relevance, unless we grant turtles personal rights and a right to privacy – which is an interesting research question and legal matter, though there are probably more important questions to ask. It is obvious that, when I design a system, I can only implement data protection halfway reasonably if reasonable data security is available.

Because I will not build a stronger infrastructure for personal data than the one I use anyway, and I will not have other security mechanisms than the ones I have for sensitive or valuable data. To make it short: I need both.

<01:05:10>

Martin Rost: If you think about systems – in case you have to analyze existing systems or want to design new ones – and you want to do it right ... In which categories do you think when it comes to systems?

Andreas Pfitzmann: First of all I want to understand – and I want to be told – what the needs are. What is the benefit of the system, and for whom?

What will the system provide?

Martin Rost: Data Protection Commissioners would ask about the purpose: what is the system supposed to provide?

Andreas Pfitzmann: I would consider which data I do not need for this purpose. Often nobody asks you to design a new system; rather, an already designed system is presented to you, or a rough concept of a system. Then I ask myself: What can I leave out? It is not that I invent something; rather, there is a concrete system design in my head and I ask myself: What can I leave out? Once it is clear what can be left out, the next question is: Where can I prevent data capture? Where can I shorten data storage periods? Then I have a kind of concept of a system with a minimum quantity of data and minimum storage periods. The next question is: Is it acceptable that the system does less than what people want? Another question: Should the system we design be extendable? In which directions should it be extendable? How much room for manoeuvre do I have? If I can use this leeway: What data can I leave out? Do I have to add further data or interfaces? What do the user interfaces look like? And then a new perspective on protection goals comes up: okay, not only must the functionality be expressed at this interface, I must also be able to make a selection. What protection mechanisms are in use? Do I recognize people? How do I recognize people? All of this plays a big role in system design.

<01:08:35>

Martin Rost: We deal with "Ambient Assisted Living" (AAL), with ubiquitous computing. And there the idea is to capture everything about people, especially in AAL. If you come to AAL with proposals to start minimal and to leave out whatever is possible, they won't listen to you. What do you do in such situations?

Andreas Pfitzmann: Here we have old conflicts. When I started dealing with networks in 1983, this question came up: How can we minimize or even exclude the options for data collection? It was clear: once data can be collected, I cannot prove for sure that data were not captured, or that, if data were collected, all data and all copies of the data have been deleted. That was the starting point of my work. And the motivation was: why do we need this?

Exponential growth of storage, processing and communication capacity results in cheap storage, cheap processing and cheap communication. As a consequence, cost becomes irrelevant. There is always the temptation to store data in stock, because it is free, and because we do not know whether we can use the data later on. Moreover, the understanding of data and the options for evaluating it keep getting better, also for data we collect later.

So once we have accepted that we collect data, we normally do not throw data away – even when we do not know whether we can use them, when we do not know what information value those data might have at the time of collection or a couple of years later. One reason the information value is unknown is that data mining algorithms will keep improving. What we knew from the past is: confidentiality is a very hard goal to achieve, and it matters for topics such as power, the exercise of power and the control of power. If I am under pressure from a secret service with the allegation "We know something about you!", then I want to know: What do they really know about me? Or are they bluffing? If they threaten my family, I want to know: What do they know about my family? What do they know about their current place of residence? There is a kind of soft confidentiality, such as: "I hope they do not know all this." At this point I am not strong enough to say: "No." Where confidentiality is not assured, people become weak. This led me to believe that we have to avoid data collection whenever possible. That was a realistic approach in 1983 and the following years, because data entered computing systems and data networks only if somebody had deliberately collected them, or because they were produced as part of switching processes.

<01:13:01>

Martin Rost: And AAL is the perfect opposite!

Andreas Pfitzmann: Here comes the perfect opposite. In a way we are equipping our computers, which have become much smaller and more powerful, with eyes, ears and hands to grasp, understand and analyse the world. In this area of ubiquitous computing we are building an infrastructure whose purpose and types of usage are still unknown, so by installing sensors we try to be as universal as possible. We are implementing things, especially in the area of multimedia, whose informational content is still unknown to us. If you had recorded an e-mail in 1983, you would not only be able to find out what was communicated; you would also find the author's errors in punctuation and grammar. That is already a little more information than the textual content itself: it is information about the author's education. And if the style varies a lot, you can deduce something about importance, or whether the author was in a hurry, etc. But in rooms in which recording takes place, for example by video cameras, we get information about what is communicated, how it is communicated and how it is emphasized by gestures and facial expressions. Depending on the recording technology and image processing, we may be able to see the expression of the eyes and the temperature distribution on the skin of the face. By analysing this material, a doctor may be able to diagnose heart conditions or psychological disorders. We do not know (yet). Recording multimedia, or a large quantity of parameters, means that the possibilities for analysis are boundless. I find it absolutely silly when people are convinced that they can protect their data within the infrastructure over years and even decades. You just have to look at how often you have to patch your systems to know how many days or weeks it takes until somebody hacks into your system.

<01:16:29>

Martin Rost: Let me come back to the doctor example, in which, maybe by analyzing the temperatures on the face, a health risk or physical condition can be diagnosed, whereupon the health insurance adapts its policy. Thereby the right to informational self-determination is restricted through this analysis (and insurance costs), and the prevention problem occurs.

Andreas Pfitzmann: Not so fast. I prefer to hold back any definite assessment. Suppose I am sitting in front of a camera now and let this material be medically examined, so that my doctor calls me and says: "Mr Pfitzmann, we have to make an appointment for some tests – it looks like your health is at risk. I'd like to see you within the next 24 hours." Then I would be thankful for this assessment and would go right away, even if it is a false alarm. I prefer ten false alarms to waking up somewhere in some hospital.

<01:18:12>

Martin Rost: Health insurance?

Andreas Pfitzmann: Of course. If – and that is the problem with this kind of data – I am not the first one who receives this diagnosis. There is an inherent danger that health insurers or maybe secret services use this data to answer questions like: "Is he under stress at the moment? Could he be easily recruited because of stress or some personal crisis?" If I knew that in this ubiquitous computing the data would not be misused in such a way, I would accept that this analysis can be really helpful to me as a human being. But my experience – basically the whole of history – tells us that in society there have always been conflicts of interest, and that the ability to collect and analyse data means power. And therefore, from my point of view, something like this ubiquitous computing and the corresponding data gathering creates an immense stability problem for every society. What would I wish a society to be like? I would wish for a society in which, when something happens, there is no overreaction. There has to be a certain calmness – not ignorance, but no overreaction either. With ubiquitous computing we are able to observe people and do all kinds of analysis without these people knowing.

<01:20:27>

Martin Rost: Automated!

Andreas Pfitzmann: Automated, for everyone. Let us imagine 9/11 and an American president who has not shown himself to the public for 24 hours, but who then, all the more, wants to demonstrate his capacity to act, and who says: “We are evaluating everything in every possible way, and everyone will be preventively watched or locked away. And after that we will allow all the ordinary people to go on with their ordinary lives again.” To this I can only say: catastrophe! So what I wish for as a technician is that the technology I help create contributes to the stability and robustness of society. Therefore I am cautious about creating technology or infrastructure which carries an inherent potential for destabilizing society. History tells me that up to now no society managing such complex systems has had a sufficient amount of calmness, and I do not think this will change over the next years or decades. Therefore I would like us to be more cautious and sensitive here. And that is why, in terms of anonymity and data reduction connected to identity management, ubiquitous computing is a massive problem. And possibly, after thorough analysis, one can come to the conclusion: no, we will not do that!

<01:22:37>

Martin Rost: It is easy to say that one does not want that. But firstly, there is no authority to decide on that. And secondly, it is already happening. There are a lot of very well-funded studies running right now on how the public health system within this welfare state needs to be changed, and on how much insurance costs, and the costs of the medical sector as a whole, will have to rise. There is already a whole lot of research in progress on how far technology has to be pushed forward to allow a certain level of service. At the moment it looks as if exactly these systems will be realized. So what can be done?

Andreas Pfitzmann: Something small and maybe something big. The small thing: I can look, for example, at the healthcare sector, where the argument goes that it should be equipped with sensor technology in all areas, public and private, in such a way that everything that happens in this society can be observed. To take an extreme example: if I were in need of permanent nursing care and had to be looked after around the clock, I could share my space with a ‘care robot’ which observes my every move and looks after me when I fall down, for example. But please do not generalize this level of need to everyone. So, in the case of ubiquitous computing for everyone, what would be the motivation to build it? Apparently it is not about care, because care could be done with much less. Is it the secret services, which want fully automated observation by computer to save money, because manpower is expensive? In that case we will be presented with something which is called a healthcare system but which is first of all a surveillance system. Let me take a leap back to the year 1983. In that year viewdata was under discussion: an old, primitive system which allowed access to interactive data via the TV set. If I were a technician with the goal of surveilling people as efficiently as possible, I would build it exactly like that: no local storage and a small screen, so that everything the user does can be completely observed. Was viewdata conceived to allow the observation of people? Or was it a not-so-well-thought-out idea built with insufficient technology? Is ubiquitous computing a not-thought-out concept? Or is it offered to us with completely different goals than the ones stated? I do not know. I would like to work on it in the direction of mobile devices, mobile companions. But some of it is already in use: we have our mobile phones, which in principle can record our heartbeat when we carry them in our breast pocket and can inform a doctor when something unusual happens. Prototypes have already been built. This raises the question: if you cannot stop it, can it at least be used for other purposes? And then I have to go a long, long way back. In the first years I worked on technical data protection, around 1983, we naturally thought about a lot of systems in which data protection could be increased. And there were a lot. But it was obvious that there would not be a complete change overnight. It could not be expected that society would switch over to complete anonymity and data avoidance at once. Some people do not want that, and a few things become more expensive. So why did we work on it? Because we believed that, besides gaining knowledge, it was about the principle, about strengthening this approach within technology, and about buying time in this unbelievably dynamic process in which the performance of computing, data storage and communication doubles every 18 months. Sometimes even faster, sometimes a bit slower. But a doubling every 18 months is a reasonable value considering the development over the last 55 years.

<01:28:27>

Martin Rost: In storage capacity and speed?

Andreas Pfitzmann: Yes, at a constant price. You can put it the other way round as well: every 18 months you get the same performance at half the price. To visualize what a doubling of performance at a constant price means, imagine the automotive sector: every one and a half years cars would get twice as fast. After 20 months at most the legislator would intervene and say: no, no, no – stop! Or look at it the other way: a car costs full price today, and after 18 months it costs half of the original price. Maybe that could be tolerated for about 36 months, until the government would cry: alarm! How can German industry make its profits when things are getting so cheap? We have to intervene. Only in IT do people think there does not have to be any intervention at all. And it is not about a factor of 2 or 4 over these 55 years. To make it clear: 15 years mean 10 doublings, which is a factor of about 1000! 30 years mean a factor of a million! That means we have an ongoing process of technological growth which, from the 1983 point of view, had already been running for a few decades and would run for at least two decades more. From today’s point of view we know that it has definitely gone on for another two and a half decades. In my opinion it will go on for another one to one and a half decades. Such a technological development has never occurred before in the whole history of mankind. And a very naive thought of mine is: if something develops unbelievably fast, way too fast, then it would be a good thing for society to gain more time to adapt. If a growth process with factors of millions and billions is under way, you cannot expect to stop it. You cannot expect to have it completely under control. I think the idea that complete control is possible with data avoidance and identity management is very naive. Maybe, under good circumstances, we will be able to control some parts of it, or maybe we can cushion the process a bit as a whole. If society gains more time, that is a lot.
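[Editorial aside: as a minimal sketch of this arithmetic, assuming a fixed doubling period of 18 months, the growth factor over a given time span can be computed as follows; the function and its parameters are illustrative only, not part of the interview.]

    # Minimal sketch: growth factor of IT performance, assuming a
    # constant doubling period of 18 months, as discussed above.
    def growth_factor(years: float, months_per_doubling: float = 18) -> float:
        doublings = years * 12 / months_per_doubling
        return 2 ** doublings

    # 15 years -> 10 doublings -> factor ~1,024 (about a thousand);
    # 30 years -> 20 doublings -> factor ~1,048,576 (about a million).
    for years in (15, 30, 55):
        print(years, "years:", f"{growth_factor(years):,.0f}")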

<01:31:36>

Martin Rost: That means that, through your activities, society has gained time to think about data protection challenges?

Andreas Pfitzmann: Yes. And hopefully time to realize that, to provide data protection in the sense of confidentiality, some data should not be collected at all, or, if they are collected, that they have to stay confidential. Where this leaves the realm of technical feasibility, society will need a whole lot more tolerance. Data avoidance and identity management have bought us some time and can be used as an aid in some sectors. But I expect ubiquitous computing, with its negative aspect of sensory surveillance of all kinds, to emerge nearly everywhere. So now we come to the question: is there still something that can be done for data protection when confidentiality cannot be realized anymore, because recording, storage and communication are practically free? What can I do when deletion does not make sense anymore, because there are copies everywhere?

The only answer I have found so far is: maybe we can make a virtue of necessity. Let me give you an example of what could hurt me as a university teacher. Let us say I am a member of a debating club, where one is given the task of representing positions which are not one’s own. And now let us make it politically more interesting: I get the task of justifying the racial theory of the Nazis, and the relevant acts of the Nazis as well. All of this happens inside the club, as a debating exercise. How do I represent such positions? Maybe I find some good arguments for this ideology, or some poor arguments against it, and I present them so well that I convince people. If I succeed, the recording will show how convincing I am: 15 minutes of argumentation on screen, pleading that the racial ideology is fine, including its consequences. And then somebody cuts out the opening titles, which state that I am in a club presenting positions I did not choose myself. Furthermore, they cut out the closing credits, where everybody agrees that this was a game not meant seriously. Now all of that is put on the Internet, on YouTube. In addition, a forensic analysis of the film confirms that there is no cut in it: 15 minutes of uninterrupted film, really authentic, Andreas Pfitzmann speaking on this subject, and so on. That film would be the end of my reputation!

And now, in this ubiquitous world, we say: OK, there are the opening titles, there are the closing credits, and there is the data about where we met and what the purpose of the club meeting was; this meeting and the film exist in many copies. I have to make sure that the people who watch this film get the right information: I was in a debating society, and it is not Andreas Pfitzmann making a speech about what he really thinks and feels. This is contextual integrity. It means that the context is preserved and can neither be falsified nor eliminated nor suppressed. That is a feature which supports data protection. The idea that information can be misinterpreted through a change of context is not new. But maybe we can keep certain areas in which we can discuss, debate and communicate in confidence, and we have to keep areas for community and society safe in which the context of information is protected. That would be “technical privacy protection 3.0”: not only data minimization, which was “1.0”, but contextual integrity as “3.0”.
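[Editorial aside: one can sketch how such contextual integrity might be supported technically, for instance by cryptographically binding a recording to its context metadata, so that an excerpt with the opening titles and closing credits stripped away no longer verifies. The following sketch only illustrates this idea under simple assumptions: a shared secret key and a keyed hash stand in for real digital signatures and timestamping, and all names are hypothetical.]

    # Illustrative sketch: seal a recording together with its context
    # metadata, so that neither an excerpt nor a stripped context verifies.
    import hashlib
    import hmac
    import json

    SECRET_KEY = b"recording-authority-key"  # hypothetical signing key

    def seal_recording(video_bytes: bytes, context: dict) -> str:
        # The tag covers the video hash AND the context metadata together.
        payload = hashlib.sha256(video_bytes).hexdigest() + json.dumps(context, sort_keys=True)
        return hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()

    def verify_recording(video_bytes: bytes, context: dict, tag: str) -> bool:
        # Fails if the video was excerpted or the context was removed/altered.
        return hmac.compare_digest(seal_recording(video_bytes, context), tag)

    full_video = b"...opening titles...debate...closing credits..."
    context = {"event": "debating club", "assigned_position": True}
    tag = seal_recording(full_video, context)

    assert verify_recording(full_video, context, tag)        # full recording verifies
    assert not verify_recording(b"...debate...", context, tag)  # excerpt fails
    assert not verify_recording(full_video, {}, tag)            # stripped context fails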

<01:37:59>

Martin Rost: Prof. Pfitzmann, thank you very much for this interview.