




Jeff Ubois' blog on the FGB web site


Transparency, Privacy, and Responsibility: An Interview with Jeff Jonas

by Redazione FGB [1], 5 June 2007

More directly and obviously than many forms of innovation, software embodies the values of those who create it. This is particularly true of software used to monitor and direct human activity: in the extreme, it can be a tool that saves lives, or one that enables political repression. Software development therefore provides many examples of responsibility in innovation.

As a leading innovator in the field of data analysis, Jeff Jonas [2] has thought deeply about the social and political implications of technological advances in surveillance, the loss of privacy, and the use of computerized monitoring systems by governments and corporations.

Through engagement with others outside the usual ambit of software developers, Jonas has developed approaches to assessing the possible long-term consequences of his work, as well as new technical approaches to sharing and anonymizing data.

Now a Distinguished Engineer at IBM, Jonas founded Systems Research and Development (SRD) in 1983 as a custom software house. Over the course of his career his company built many different kinds of systems, including a marketing data warehouse that collects information daily from over 4,200 data sources and has resulted in a database tracking the transactional patterns of over 80 million people. Another system developed by SRD was designed to reveal relationships between individuals and organizations that would otherwise go unnoticed. Initially created to help the Las Vegas gaming industry better understand whom it was doing business with, the system was later applied to help companies and governments detect corruption from within, and was adapted for national security applications following the attacks of September 11. SRD was acquired by IBM in 2005.

Jeff Ubois:
Can you start by giving a bit of background on yourself and SRD? What was the path from Las Vegas to Washington?

Jeff Jonas:
A repeating theme in our development work was inventing ways to match identities. In the early 90's, we built all different kinds of systems for the gaming industry - for example, systems to track the fish in the aquarium at the Mirage Hotel, systems for doing payroll-related labor analysis, all kinds of things.

Then we were asked to help the casinos do a better job of understanding whom they were doing business with. They had lists of people they needed to know if they were transacting with. One of them - the worst of the worst - used to be called the Black List, but now it's called the Exclusionary List [3]. It's published by the gaming regulators. The people named on it have redress, an appeals process, so they know they're on it.

If a casino gets caught transacting with one of these people, the casino can lose its gaming license. So they're very keen to make sure they aren't actually transacting with these people. A casino can also say, "Hey, you can't come back, you trashed the hotel room, you beat up our staff," or something, so those are two lists, and there are others - one of the more interesting ones is a self-exclusionary list, where you can say, "Please don't let me come in and play."

Because they've got a gambling problem?

Yeah - "And if you let me come in and play, then you may be responsible for what I lose. You market to me and tease me to come back, then I might sue you." The even more odd thing about that list is once you ask to be on it, even though you put yourself on it, you can't take yourself off it, because they're afraid you might be drinking. You know, it'll be a Friday night, you'll call and say, "Can you take me off the list…?" It expires every year, so hopefully on the date it expires you still don't feel like gambling.

So the casinos became interested in having my company build a system to ensure they knew whom they were doing business with, and we built that system. And all the way up to this point in time, my notion of policy, privacy and related laws was zero. Just companies asking me to build things for them, so I built them. This system created for the gaming industry became known as NORA, Non-Obvious Relationship Awareness. The way we built it, it started finding insider threats - employees that have become corrupt; for example, your accounts payable manager is living with your largest vendor. Then the U.S. government became interested in NORA before September 11. Their interest was particularly in finding criminals within government. Then September 11 happened and we found ourselves in the middle of counterterrorism-related work.

Now, historically, this technology that I'd created was designed to help an organization make use of its own data. One of the things that's often misunderstood about it is it doesn't go out and comb the world, or crawl websites, it doesn't peer around the world. It doesn't go into public records - it only takes what an organization already has collected and tries to make more sense of it.

About this time in my awakening to the importance of privacy, I was chatting with Jim Simon, at the time a CIA executive, and he said to me, "You know, terrorists can blow up our buildings, they can kill our people, but we still don't lose. When we change the Constitution to respond, then we lose."

That just sent a chill up my spine. I wasn't raised on the policy side, I'm a programmer. The other thing that raised my awareness was that I was in Washington in a meeting with an organization that would care about the sniper attacks there in 2002. I said, "Here's some ideas," and I started throwing out a whole bunch of ideas. The room went quiet, and somebody looked at me and said, "Some of those ideas are against the law." That caused me to take a step back and say, "I've got to be careful. I've got to make sure I'm inventing things and have ideas that are legal - I'd better study up."

It also highlighted that I didn't know jack. And it highlighted that the federal organizations - I mean, I'm a born-in-California West Coast dude - it highlighted that they actually cared about that. That was comforting in a sense. I ended up starting to spend more and more time with the privacy community and educating myself. I now kind of summarize by saying that I spend about 40 percent of my time on privacy and civil liberty protections.

That's a great issue to think about, in terms of responsibility in innovation. ... Can you tell me more about the evolution of your thinking with respect to the relationship between technology and privacy?

I thought I was a privacy advocate until I met a real one (see "Delusions of Advocacy [4]"). In the middle of my conversation with David Sobel, I just went, "Oh, my God, I could never know this much." It was the beginning of real understanding. He talked about CALEA [5] and the notion that you could order the nation's phone infrastructure to meet a certain standard solely for surveillance purposes - instead of letting market forces and market demands define the products that reach the market.

Let me just tell you in a nutshell: Particularly on non-transparent systems that the government uses (for example, a watch list surveillance system), when an oversight or an accountability function comes in, how would you be sure there's a log of how they use this system? What if they put their ex-wife in the system? And what if they just deleted it later? What if they're searching the system for a family member or a buddy? What if they just erase the log they have? So an immutable log is this notion that you can create a tamper-resistant, non-erasable history of how a system has been used. It can't be altered.

The twist I put on it in the paper that attorney Peter Swire and I penned for the Markle Foundation [6] is my next concern. Suppose the log holds the data the system had - because you wanted to know what the system knew, and when - along with all the queries. And suppose you kept that data longer than the original system did - because if somebody committed murder, you might want to go back over the full statute of limitations, which could be a lifetime. Then the log itself becomes an uber-system of record.

In such a case, you would want to prevent secondary data mining against this immutable log while still allowing forensic work or narrow spot-check audits - intentionally making a carte blanche unlock very hard. In other words, it's tamper-resistant: you can find out when people have tried to alter it, which means you can have near certainty that no one has edited it. And at the same time, it's protected from carte blanche data mining, so if you didn't know the specific question you were asking, you couldn't just run big queries against it.
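The tamper-evident half of this idea can be sketched with a hash chain, where each log entry commits to the hash of the entry before it, so editing or deleting any record breaks every later hash. This is a minimal illustrative sketch of the concept, not a description of any deployed system; the record fields are invented for the example.

```python
import hashlib
import json

def entry_hash(prev_hash: str, record: dict) -> str:
    """Hash an entry together with the hash of its predecessor."""
    payload = prev_hash + json.dumps(record, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

class ChainedLog:
    """Append-only log: each entry commits to the previous entry's
    hash, so altering any past entry invalidates the whole chain."""
    def __init__(self):
        self.entries = []       # list of (record, stored_hash) pairs
        self.head = "0" * 64    # genesis value

    def append(self, record: dict) -> None:
        self.head = entry_hash(self.head, record)
        self.entries.append((record, self.head))

    def verify(self) -> bool:
        """Recompute the chain and compare against stored hashes."""
        h = "0" * 64
        for record, stored in self.entries:
            h = entry_hash(h, record)
            if h != stored:
                return False    # tampering detected
        return True

log = ChainedLog()
log.append({"user": "analyst7", "query": "watch-list lookup"})
log.append({"user": "analyst7", "query": "name search"})
assert log.verify()

# Silently rewriting an old entry is detectable:
log.entries[0] = ({"user": "analyst7", "query": "REDACTED"}, log.entries[0][1])
assert not log.verify()
```

Note this only makes tampering detectable; the separate goal of blocking bulk queries against the log would need access controls layered on top.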

Actually nothing exists like that yet today. There's a couple of projects I've run into where people are working in this direction, and I keep encouraging them to continue their work. I keep threatening that if they don't figure this out, I'm going to have to do it myself, but I'd just as soon somebody else do it; I'm busy.

Paul Rosenzweig and I wrote a piece on watchlist redress, and how to solve the watch list false-positive problem at airports. This was published by the Heritage Foundation [7].

This watch-list redress question is interesting, but we should really flag the immutable logs idea, because there's a lot of action around those issues in the library and museum and archive community.

I'm big on both of those - the one thing I'll tell you about the watch-list paper Rosenzweig and I wrote is that we presented a technical means to reduce false positives - without requiring the government to go after any public records - simply by giving the consumer the ability to opt in one or two data points that would prevent them from being continually wrongly matched. Some in the privacy community actually reached out and privately said, "Don't quote us on this, but that was really a good piece of work."

That issue is interesting because we're going to run into more and more of those things - maximizing efficacy to minimize intrusion on the innocent - not just the false-positive issue, but also how effective is data mining?

Another thing that struck me was the hopes some people have for applying data mining to solving some of these challenges - this hope that you can detect bad actors through data mining. Some people think you can look at historical data and use that to predict some future act - but in the case of terrorism, that just isn't so. Not to say all data mining for counterterrorism is bad; there are some useful forms. One example is predicate triage. Our government has a list of over 100,000 people that are in the U.S. on expired or illegal visas, and there's a legal right to go after every one of them. But the question is, which one would you go after first? It's a predicate list. If you wanted to apply your resources efficiently, you wouldn't go after the Japanese kid that works at Nintendo in San Jose, right? Your time might be better spent on the person from Yemen who's studying nanotechnology and biology, and has been traveling to Saudi Arabia twice a year. That would be predicate triage. You already have a list of predicates, but to construct a list of predicates only by looking at behavioral or transactional data is very problematic. And I've seen huge data sets, so I don't feel like I'm guessing - I don't feel like it's theory. I don't get many people who stand up to me and go, "I've seen a big data set, you're wrong." Just to be clear here, I am speaking about data mining in the context of counterterrorism, where there is little historical data. Using data mining to understand credit card theft, where there is a massive amount of related historical fraud data, is quite effective.

You've written a lot on the limits of data mining. Can you summarize some of the innovations and problems you have been thinking about?

Well, I'm a single parent, and I was getting ready to take my kids on a cruise. By this time, I've been paying attention to the privacy community; I kind of know the boundaries. I know about the intelligence agencies' rules under Executive Order 12333 [8] and USSID 18 [9], which are some of the authorities that control what they can collect and use. I understand that when the FBI opens a case, they have various thresholds, and as evidence mounts, more invasive surveillance can be carried out.

Anyway, so I already had an ear to the ground re: privacy, and I was getting ready to take my kids on a cruise. And I think it was a USA Today story that indicated there was a threat at Port Canaveral involving terrorist scuba divers, and I'm thinking, "Wow, this NORA-type technology I've invented is what the government uses to understand the data it has - it's their data, they've already collected it by legal means, and you're trying to make sense of what you know." But what if the government has this list of over 100,000 people that we would never let into the United States? These are people that we don't want here. The list is secret. The government does not publish it on a website to say, "These are the citizens of the world that we won't let come into the USA." So they don't give this secret list to the cruise line, and the cruise line does not, for many good reasons, want to give its reservation list to the government. That's an interesting problem.

You mean how to do a double-blind exchange of some kind?

Right. Twenty of these people could all just sneak across the border and get on the cruise and no one would notice. And I'm taking my kids on a cruise. So I'm like, "How do you solve that? NORA can't solve that because neither party really wants to share the full contents of their files." And literally in 20 seconds, I envisioned what became known as ANNA.

Congratulations, by the way.

Yeah, that was exciting. It's been two years, my retention deal's up... But they're so good to me, not only supporting my technical ambitions but also they're supportive of all my work on privacy, and let me speak my mind. For example, the paper entitled "Effective Counterterrorism and the Limited Role of Predictive Data Mining" that Jim Harper and I co-authored was published by the CATO Institute [10]. In the grand scheme of things, they're letting me make a difference. And I really love my job, and they're supportive of me playing this role, so that's really exciting.

But anyway, I'm realizing there's this problem with information-sharing that's going to completely fail, and I get this idea. It hit me - in 20 seconds I solved it. I figured out how to do double-blind fuzzy matching, in a manner that greatly reduces the risk of either party learning the other party's secrets.

And by the way, this is at least one reason why Section 215 of the US PATRIOT Act is not necessary [11]. You know, prior to Section 215, if the FBI wanted to go to a cruise line and get some records, they would get a subpoena, a national security letter or a FISA [12] order about THE person or persons they were interested in. The Fourth Amendment speaks to the notion of "reasonable" and "particular" with respect to searches, seizures, and warrants. So the FBI would go to the cruise line and say, "We want records on Billy the Kid, Mohammed Atta, and Jeffrey Dahmer." But now, because they have this long list and they don't want to do a subpoena for everybody on the list, Section 215 makes it possible for the government to go to the cruise line and say, "Give us ALL of your records - not just for the few people, give us your entire customer database."

The reason why a government would want this new law is because they don't want to reveal their secret list. So this anonymization technique I envisioned allows a government to shred or anonymize its list, and the commercial enterprise to similarly anonymize its data. And then the analysis is done while the data's anonymized. What's completely different about this from anything that I think ever existed before-I say that now, and some day, someone will say, "I invented that in 1912!" - but in the past, I would encrypt my data and send it to you, and you would decrypt it and use it. With this technique, I encrypt mine and you encrypt yours-and the analysis is done on the data while it remains in an encrypted form. With this anonymization technique, Section 215 - collecting whole databases to do matching with secret watch lists - may be a thing of the past.

I was at a conference at Cornell last week, and the social sciences people are wrestling with just this problem. They can't deal with some of the data-sets they're collecting. They're under certain constraints about the kind of questions they can ask, and the ways they can interrogate data-sets. So they're looking at ways to anonymize data in just this way - so there's another tiny market for you, and an interesting area to play in within the social sciences.

Well, I used to think that government was going to be the big customer for this. Now, I think that governments are going to be five percent of the business and the rest of the world will be 95 percent.

And what about people working in epidemiology - could they use this without violating HIPAA [13]?

Right - the problem today is that under HIPAA, patient data is de-identified first, and once it's de-identified it is no longer possible to accurately count people. You can't tell if it's five cases of Lupus [14] in one little town, or one case reported five times. This technique allows you to hide - de-identify - the people, but still count them uniquely. So I invented this technique that I have been calling "analytics in the anonymized data space." And a number of other companies are starting to come out with similar products, which I think is really exciting, because I think it's going to have a really big future.
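The count-uniquely-without-identifying idea can be sketched with a keyed one-way hash: identical patients map to identical tokens, so duplicates collapse, but the tokens can't be reversed back to names. This is only an illustrative sketch of the general approach, not Jonas's actual technique; the salt value and patient records are invented for the example.

```python
import hashlib
import hmac

# Hypothetical secret held by the de-identifying party; keying the hash
# prevents outsiders from dictionary-attacking the tokens.
SECRET_SALT = b"example-secret-held-by-de-identifier"

def pseudonym(name: str, dob: str) -> str:
    """One-way token: the same person yields the same token,
    but the token cannot be reversed to recover the identity."""
    msg = f"{name.strip().lower()}|{dob}".encode()
    return hmac.new(SECRET_SALT, msg, hashlib.sha256).hexdigest()

# Five lupus reports from one town -- five patients, or one reported five times?
reports = [
    ("Ann Lee", "1970-02-01"),
    ("ann lee", "1970-02-01"),   # same patient, different casing
    ("Ann Lee", "1970-02-01"),
    ("Bo Chan", "1981-07-12"),
    ("Di Park", "1965-11-30"),
]

unique_patients = {pseudonym(n, d) for n, d in reports}
print(len(unique_patients))  # 3 distinct patients behind 5 reports
```

A real system would need far more careful normalization and fuzzy matching of identity attributes than the lowercase-and-trim shown here.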

And this is just philosophical, but I think the reason companies are going to do this - the motivation for companies, governments, or banks, or whoever - is going to be that every time they make a copy of the data, their data-governance challenge is doubled. And they don't want to lose the data-the consequences of losing the data are too horrific. So now, to the regulatory group or the compliance people and the board, you can say, "Look, you can share your sensitive data-your customers, your employees, the same partners you share this data with today - but now you can anonymize it first and get a materially similar result." Then you'd say, "Well, why would you do it any other way?" - organizations wouldn't do it any other way.

So I've been working on how you do analytics on data that has been anonymized. Now, I could have invented ANNA with the kind of encryption that enables decryption. But - and this is an example of why, if I hadn't been in conversation with the privacy community, I wouldn't have invented it - I thought, "Yeah, but then the privacy community's going to ask, 'Who holds the key to decrypt it? What if the government changes the law and says, now we're going after overdue books?'" I could hear what the debate would be.

So ANNA uses encryption for which there is no key. I used only an existing element of cryptography known as the "one-way hash," which is non-reversible. If you share this hash value, all it can do is point you back to the original holder of the data, whom you then have to ask for it. I wouldn't have come up with that had I not been in the conversation - in fact, one of the biggest things I am out there trying to convey is that we have to get more technologists into conversations with the privacy community. There's not enough conversation. This is a really bad thing. There are not very many technologists who build stuff spending time with the privacy community. I worry we may wake up in 10 years in the bed that we've made, when it will be too late - you know, the toothpaste will already be out of the tube...
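The cruise-line scenario can be reduced to a toy sketch: both sides hash their identity data with the same one-way function and compare only the hashes, so a match can be followed up with the data holder without either side revealing its full list. This shows only the core keyless idea, not ANNA itself, which did fuzzy matching and much stronger protections; all names here are invented.

```python
import hashlib

def one_way(value: str) -> str:
    """Non-reversible token for an identity attribute (illustrative
    only; a real system would salt, normalize, and fuzzy-match)."""
    return hashlib.sha256(value.strip().lower().encode()).hexdigest()

# The government anonymizes its secret watch list...
watch_list = {one_way(n) for n in ["Mohammed Atta", "Billy the Kid"]}

# ...and the cruise line anonymizes its reservation list.
reservations = {one_way(n): n for n in ["Ada Byron", "Billy the Kid", "Tom Fox"]}

# Matching happens on the anonymized values. Neither side sees the
# other's full list; only hits are followed up with the data holder.
hits = [reservations[h] for h in watch_list & reservations.keys()]
print(hits)  # ['Billy the Kid']
```

An unsalted hash like this is vulnerable to dictionary attacks on guessable names, which is one reason production systems add secret keying and other safeguards.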

Let's talk about this issue of how technologists can engage with the privacy community, how they can bring that privacy perspective into their process of innovation. Maybe take it further upstream, rather than 10 years later saying, "Oops, we lost privacy." Getting involved earlier in the invention process sounds like a good way to go.

There's a couple of choices. One is that you can put privacy-conscious people at the beginning of a project. I'm trying to get even a little bit further ahead of that - reach the people that invent things, have them get some grasp of what the Fourth Amendment is, and have them consider how to prevent or limit erosions of it. And you know, it's only been in the last few years that I realized, for example, that one of the freedoms we've had - and it's not in the Constitution, but it's an expectation we have - is that in the old days, you would be able to reinvent yourself and Go West.

Right, today there's no character reform.

Yeah, your trail is your trail. These kids putting stuff on MySpace - they can delete their MySpace account, but if somebody else has made a copy of it and posted it in a blog about them...

A great financial opportunity for MySpace: shaking down the presidential candidates of 2036...

Another worry - I'm not going to call this evil, but I'm going to call it not best for humanity - is that organizations that want to sell things are going to look at your trail. So if somebody has just quit smoking, but their trail says they've smoked Marlboros their whole life, as much as they're trying to quit, the juggernaut advertising machine...

"Come back! Here's a free package!"

Yes, it's going to guide people down channels that are narrower than the fullest gamut of experiences they might have had, had their trail not been used to apply a "custom" lens.

I have ideas about that, like being able to select somebody else's lens - or somehow making sure that people know they're looking through a lens, and having them be able to see without it.

I think that idea is really interesting - bringing the perspective of "Here are some social effects that may or may not be anticipated," or "Here are some ways of thinking about the downstream social effects." A lot of people are trying to work on that across a lot of fields and it's very challenging - it turns out that foreseeing the effects of one's inventions is very difficult. One of the things we're trying to figure out is how do you even notice if there's that kind of responsible thinking going on inside a lab?

You know, I took a stab at this once on my blog - it was called "Designing for Human Rights [15]." Just as an exercise, I took the Universal Declaration of Human Rights, looked through it while thinking about innovation, and it turns out that four of the articles in the Declaration, like Article 9, roughly say, "Thou shalt not arbitrarily detain, arrest, exile, torture, etc." The operative word there is "arbitrary." So in that post I posited: what about a system that could possibly be used to arrest, torture, interrogate, or exile people - for law enforcement, intelligence, defense - or that could affect people's credit and their ability to pick their own school?

Whenever you're going to limit somebody's liberties or freedoms, if you have a piece of data and you don't know who said it or which system it came from, where would you turn when it comes time to validate such information? I can't be specific here, but I've bumped into a few systems in the world where they have a few records - a few bad guys - and you don't know where they came from. Well, if somebody told you, and then they took it off the list because they were wrong, you would never know. If you wanted some more clarification to make sure it wasn't So-and-so Junior versus So-and-so Senior, you'd have no records to go back to.

Right - the issue of errors and provenance both seem kind of unaddressed.

Yes. And it turns out that if you want to build a system that can uphold the Universal Declaration of Human Rights, you have to know who said it and when they said it. There are several properties: provenance, attribution, pedigree - essentially "source attribution." And if you don't have source attribution, you cannot have a system that's non-arbitrary - you cannot guarantee it will uphold the Universal Declaration of Human Rights.
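The source-attribution requirement amounts to a simple structural rule: every asserted fact carries who said it and when. A minimal sketch, with invented field names and records, might look like this:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class Assertion:
    """A fact that carries its own source attribution, so it can be
    re-validated -- or retracted -- when its source corrects itself."""
    subject: str
    claim: str
    source_system: str       # which system said it
    asserted_at: datetime    # when they said it

facts = [
    Assertion("J. Smith Sr.", "on exclusion list", "regulator-feed",
              datetime(2007, 1, 5, tzinfo=timezone.utc)),
    Assertion("J. Smith Jr.", "loyalty member", "casino-crm",
              datetime(2007, 3, 2, tzinfo=timezone.utc)),
]

# If "regulator-feed" withdraws a record, everything derived from it
# can be located and re-examined -- impossible without attribution.
retractable = [f for f in facts if f.source_system == "regulator-feed"]
print([f.subject for f in retractable])  # ['J. Smith Sr.']
```

The point of the sketch is the invariant, not the data model: a system that discards the `source_system` and `asserted_at` fields can no longer distinguish Junior from Senior, or propagate a retraction.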

That's a great exercise - to take a document like the U.N. Declaration of Human Rights and run through the projects you're working on: how you might try to ensure or maintain those rights, or where those rights might be affected.

Though some of them aren't very system-oriented, I think - there are some articles I couldn't tie to systems. But more than the four articles in the Declaration I called out could be. I wanted to start with those four because they leapt out at me so obviously. That would be an example of something to work on with you or somebody in your group, and co-publish a piece with more analysis.

To turn it around for a second, what about cultural policies inside the organization? For example, IBM is funding you to spend time on this - which is pretty remarkable. So what are the cultural policies that are happening inside the institutions where innovation is taking place, and that would foster this kind of inquiry?

Good question, although if I started focusing on institutional-cultural policies too much, somebody might notice what I'm doing and recognize I was way outside my field of knowledge! But clearly, I'm keen on trying to set a good example, and on having others play similar roles in my company and in others, too. One thing I talked about recently is that some inventions are best left uninvented. Have you heard of P300 Brain Fingerprinting [16]? People have been sending me links to more stories about that. It's moving a little faster towards 2.0 brain/mind-reading than I thought. I blogged about this [17] not long ago.

That would also be interesting to run down: What is the list of technologies with huge privacy, civil liberties, or other implications? What things are going to catch us, in a sense, by surprise?

The other day in Singapore I presented this new graph, I call it my "More Death Cheaper In Future" graph. While Moore's Law enables organizations to be more competitive, people of ill will unfortunately can leverage these same technologies ... For example, somebody dug up an Eskimo out of the permafrost and extracted the 1918 Spanish influenza virus. Someone coded the DNA and made it public. I couldn't believe that, so I made a request to them and said, "Hey, are you kidding me? Is this public? Can you send me a copy?" And they said, "Sure." Now I actually have the full DNA sequence of the 1918 Spanish influenza on my laptop!

So then this other team goes and brings the virus back to life. So Spanish Flu actually has been recreated. It's alive now. They've given Spanish Flu to mice that have human immune systems, and this bug kills the healthy first. It turns your immune system against you. And it killed something like 50 percent of the mice. Well, with a 10 kiloton nuclear weapon, the number of deaths you're going to get is maybe no more than 100,000 people.

Versus three billion on a good virus...

This is maybe two orders of magnitude easier to create and deliver, yet its devastation is three orders of magnitude greater, or something. And that's a more definite future - let's play this future out for a second. How dark does this look? Today, it takes an enormous team, significant dollars, and time to ready a nuclear device. But this DNA thing takes a very small team, possibly only months, and by comparison very little cash. What happens when - now that it's published - any one person can use their home nano-machine to actually make something that will kill all or half of mankind? It's that easy for any whacko to do. Would we not be required as a world, as a world of citizens, to make sure no one can do that? Would we not have to put a camera on everybody's head, a shock collar on everybody's neck, to ensure the survival of mankind?

And if the answer is yes, maybe the better solution would not involve surveillance done by just a powerful few. You would want to make sure everybody's watching everybody - more towards the transparent society. Anyway, these are the kind of conversations I'm engaged in, and as I invent things, I do my best. From time-to-time I realize something I have created in the past could have been built better or had better controls. All I can do is re-calibrate and try to make tomorrow's systems better. As a privacy aware technologist ... this keeps me up at night!

I did this other post, "Ubiquitous Sensors? You Have Seen Nothing Yet [18]". I got to thinking about why the hell this is happening, and how can we put the brakes on this? How can we slow this down? And it turns out it is not possible. The reason is that everyone's competing, businesses are competing, governments are competing. And when you compete, you want the best tools, the best data, and the best human capital. And it's insatiable. And so right around the time I did that blog, I realized that the journey towards more technology is a certainty.

And even if you've regulated in one place, it will move offshore to another.

Some other country will be doing it, that's right. And the fear about weapons systems is "They are playing with a weapons system we've never even seen, and it can genetically target just Americans." So, you have to go "Well, we have to understand the behavior of that." Suddenly, you're researching it too. These things make me crazy.

If you wanted to govern innovation of that type, are there approaches to it?

I think about that, but other than creating a matrix that aligns the Universal Declaration of Human Rights against innovation, which is one vehicle, I think there's a risk of creating something innovative here and having only five or 50 people listen. To get ahead of that, it's important to socialize more - these innovators and privacy people, the convergence of those two - that kind of cross-pollination, there needs to be a lot more of that. The conversation between technologists and privacy rights people needs to go up four orders of magnitude. I hardly find anybody else doing it. Quite frankly, that makes it very dangerous for me, because there are not many people doing it. And one day, someone's going to decide to shoot me.

You know, what I tell the privacy community is, "You don't quite trust me because I build things for governments." And the government doesn't quite trust me because I don't have a top-secret clearance. So I tell the privacy community, "Look, at least if I'm in the conversation with them - if I'm not there, it'll be built somewhere on the XY axis - and if I show up, I'm going to move it in the right direction." It's better than if nobody with any privacy thinking showed up.

Do you think it's possible to cause the process of innovation to be subordinated to some kind of political system or deliberative body? Is that ridiculous, or a possible aim?

One thought might be that, at the end of the day, a technology wouldn't go forward unless it had a "good seal of privacy" approval or a "civil liberties" approval - a "Universal Declaration of Human Rights" compliance stamp.

One thing we've wondered about is to what extent you can have other stakeholders. Is it possible for someone who's engaged in the process of innovation to think about the stakeholders who may be downstream, or somehow ensure that they interact with other stakeholders that might be affected by innovation?

You know what? I definitely believe you can create stuff more responsibly. Like me, when I thought about encryption - I thought, "Wait, it could be unlocked. Oh, I'll anonymize it. No one could unlock it ever. Ha ha!" Right there, that was an example of that - and I've had a couple of other inventions, I'm not sure what the intellectual property status is with IBM, but I think other inventions would not have come to me had I not been in the conversation. That's why I think one of the most important things to do is to improve the conversation. And again, an alliance, you and I and 500 other people and that'll be a great start!
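The contrast Jonas draws here - encrypted data can always be unlocked by whoever holds the key, while a one-way hash cannot be reversed - can be sketched in a few lines. The function name, the shared-salt scheme, and the normalization step below are illustrative assumptions for the sketch, not his actual anonymization design:

```python
import hashlib

def anonymize(value: str, salt: str) -> str:
    """One-way hash of a normalized value. Matching digests reveal
    that two records refer to the same value, but the value itself
    cannot be recovered from the digest."""
    normalized = value.lower().strip()
    return hashlib.sha256((salt + normalized).encode()).hexdigest()

# Two data sources hashing with the same salt can discover a match
# without either side disclosing the underlying value.
a = anonymize("John Doe", salt="shared-secret")
b = anonymize("john doe ", salt="shared-secret")
assert a == b
```

In a scheme like this, parties who share the salt can compare digests to detect overlap between their datasets, which is what makes anonymized matching different from simply encrypting the data: there is no key that unlocks the original.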

There's another process involving institutional review boards. There's a lot of research done on human subjects in university settings, and the rules around that have become more Byzantine. And in some ways, I think they are preventing good research from happening. On the other hand, here is a mechanism that was evolved to protect human subjects in medical and social science research. That's been going for a while, so there's a community of practice around that and established standards, and it's at every university that's getting federal money at this point. I think it's a little bit run amok and not always well-conceived, but the impetus is pretty cool.

That's interesting. You know, I spent some time with one technology person, who asked, "So what are you doing these days?" I said, "I'm spending a lot of time with the privacy community." They looked at me, horrified, and said, "Why would you possibly do that? They just attack stuff." I responded, "No, it's really quite insightful, I recommend it." They just kept going on and on about how horrible it is. I asked, "When's the last time you talked to anybody in the privacy community?" They said, "Ten years ago." That's horrible. Maybe this can't change, but the people in the technology field - and I remember this, this used to bug me, too - the privacy community doesn't want to certify anything.

Yes, I've kind of drifted in and out of the privacy community over the years, and in some ways it is change-resistant.

No one ever comes along and goes, "That's finally a good solution by the government. Thank you!" There isn't anything. I don't think there's one example where anybody in the privacy community - I'm talking ACLU, EFF, EPIC; I'm sure there are others, but those are the ones on the tip of my tongue - has done that. One of the reasons technologists don't want to spend time with the privacy community is that they only get attacked. As I don't see this changing, the technologists just have to get over it, because any conversation is better than no conversation.

Yeah. Positive proposals, I think this sort of precautionary principle is another example of that, you know? "Don't do any innovation unless you can absolutely prove that it's safe." But proving a negative is really, well, it's just not possible.

That's true. And almost anything you can invent can be misused. The question I like to ask myself - and a few inventions I've never even opened my mouth about - is "If somebody else I don't like gets a hold of this, will that be bad for my kids? Would it be a dark day for my family?"

If you think about innovation as this process that is partially but not entirely self-governing, what does it mean to be sustainable? I bring that up because you're thinking about it in terms of future generations. Is it possible?

With regard to sustainability, I think you probably can't stop people from doing R&D to see what's possible, until you know the framework of how it's going to be deployed. But then how is it going to be deployed, what are its controls, what's the oversight and accountability? And on and on, a series of checkpoints about how it's going to be used. Is it going to be used by a government or is it used by Amazon? Until you pair the technology up with the use, you can't even begin to assess its spectrum of good to evil. You have to go right to a use case and right to a governance model, and then you can start determining where it is going to sit on the spectrum of good versus evil.
