OurSQL Episode 200: Information Security and Privacy

For our 200th episode, we interview security expert Bruce Schneier. We cover plenty of topics, including airport security and the TSA, PRISM and the NSA, wholesale surveillance, surveillance backwards in time, finding people who use disposable cellphones, "about" searches and co-travelers, why Facebook does not offer the ability to pay for your account, a bit about Firefox and its propensity to act in the user's interest, and the future of our public information.

Some links:
Cryptogram
Schneier on Security Blog
Uncle Milton's Ant Farm
CSEC surveillance that looked at IP data from people's mobile devices, mapped that to logins, and had an experimental program to find people that they knew were off-line.
Disney's use of data tracking

And as an added bonus, here is a transcript of the audio:
Sheeri:
We're here with Bruce Schneier, who is an internationally renowned security technologist. His influential newsletter, "Cryptogram", and his blog, "Schneier on Security", are read by over 250,000 people, including yours truly. We can't list all of his accomplishments here, because we don't want to have a 3-hour podcast, but he's authored 12 books, testified before Congress, and has served on several government committees, as well as being a board member of the Electronic Frontier Foundation and a fellow at the Berkman Center for Internet and Society at Harvard University.

Hi Bruce, and thanks for taking the time out to be here.

Bruce: Thanks for having me.

Sheeri: So, how did you get interested in computer science, and then, particularly, security?

Bruce: Well, I've always been interested in security. My belief is that security people are born that way. There's a certain way of looking at the world that makes you think about security. A lot of people are hackers - they think about how to cobble together something and make it work. "Hacker" is a computer term, but it's really much older. You can be a car hacker, you can be a model railroad hacker - you build things.

When you're a security person, you look at how things fail. You look at how things don't work. You look at how things can be made not to work. Someone might come to me and say, "Look, I can put this here, and this there, and that there, and I got something working - isn't that cool?" And I'll say, "Well, turn that that way and it doesn't work any more." And he'll say, "Well, stop doing that." And I'll say, "Well no, that's not the way to think. I'm the adversary - I'm going to do that every single time until you fix it. Your system is flawed; I've broken it." And that's a mentality, a way of thinking. Once you've got that mentality, you can do security in any domain. Computers are just where the technology is these days.

Sheeri: Right, security is basically "what can fail and how can I take advantage of it?"

Bruce: Exactly. A number of years ago I remember someone's nephew got "Uncle Milton's Ant Farm". I don't know if you remember this, I had one as a kid. It's two pieces of clear plastic, and there's a frame and there's sand, and then you put the sand in the frame and you put ants in it and you can watch the ants make tunnels.

Well, the package doesn't come with ants, because that would be weird - they'd probably die in the store - but there's a card that comes in the package; you mail it in and get a tube full of ants. Now, the average person would look at that and say, "Oh look, I can get a tube full of ants." I look at that and say, "Wow, I can mail a tube full of ants to anybody I want."

Sheeri: Right - clearly it's legal to mail ants through the mail since this company is doing it.

Bruce: And I can get them to mail somebody ants. So, it's just a way of using a system in an unintended manner for some sort of personal gain. There, it was amusement, but often it's monetary gain. Systems today are computer systems. Every time you see a system, it involves a computer, it involves a network, it involves technology. So more and more, security becomes computer security. Even airplane security is computer security. Everything is computer security.

Sheeri: When they have to scan your badge to get on the plane, or scan your barcode, there's a computer there somewhere with a database that says "yes, this is okay. We haven't had somebody in seat 12C, so this person can go." I don't know what they're checking in the background when they scan it - maybe you do.

Bruce: Even more than that - when you go through airport security, that x-ray machine is a computer system. At a security conference just a couple weeks ago, someone got an old version of one - I think on eBay, oddly enough - took it apart, and found a lot of flaws in the computer system. The magnetometer, the full body scanner - these are all computer systems. The registered traveler program, TSA PreCheck - there's a computer system behind that that tells the TSA agent you can go through the less secure line. There are computers everywhere in the system, and subverting the system will often mean subverting the computers. It can also mean subverting the people.

But now the people are relying on computers for their training, for the rules - for everything.

Sheeri: This isn't just a problem in the US, it's a problem internationally. When I travel through Israel, because I have family there, it's a three-hour process to get through security. You're interviewed by three or four people; they have techniques - I'm sure they're relying on computers. Do you see the US as relying on computers more, because that way the rules are the rules, and people don't have to worry about the objectivity of, let's say, profiling?

Bruce: It's less the objectivity, and more the scale. I've been through Israeli security, and it is very intense, very serious, very human-centric, very psychological. They're relying on computers less. They're relying on well-trained people. The problem, of course, is that Tel Aviv airport is about the size of Sacramento airport - the number of passengers pushed through that country is minuscule compared to the United States. There are things Israel does that just don't scale to that type of system.

Sheeri: Right, it wouldn't work at JFK.

Bruce: JFK, Chicago, anywhere - so we are forced to rely on more algorithmic systems, more rules instead of judgement, more computerized systems, because that's the only way we can get it to work. There are a lot of exposés about TSA agents. Morale is terrible, training is mediocre. The goal really is to make them as irrelevant as possible. You can't get good people - you can't get enough of them - so you do what you can with the system to obviate the need for good people.

Sheeri: So as we're a database kind of community, we're seeing a lot about PRISM and the NSA. There is the problematic side of it, of course - people are watching you. But to me it's amazing to think there's all that data, and it's stored somewhere, and people are data mining it. Are they data mining it? Right now, are they just storing all this information so that, if at some point my name comes up, they'll have a huge, robust data source on me? Do they only look after the fact? Or are they starting to data mine, seeing what friends of friends of friends of suspicious people are doing? There are always morally gray areas, but we're starting to really get into that whole "Minority Report" scenario.

Bruce: There are a lot of different "theys" out there - I think that's important. A lot of this data is collected by corporations, and it's collected for the purpose of psychological manipulation. The goal of Facebook, the goal of Google, is to separate you from your money as efficiently as possible. So that data is being collected for the purpose of persuasion, for manipulation, for advertising. A lot of the data is collected and immediately used. And it's used to categorize you, both individually and as a member of a group, and to serve ads that you will find useful.

Another "they" is the government. The government largely seems not to collect its own information, but to get a copy of what corporations already collect. They do a lot of collecting on their own, and they do this for purposes of - so they say - security. Find the bad guys, stop the bad things from happening.

It's really interesting how the nature of this sort of surveillance has changed. On the corporate side, they've always collected data, but they've collected it only in samples - they've done surveys, they've done studies. What the corporations would try to do is generalize to the entire population from detailed analysis of random people within the population - they'd do focus groups. They were doing the same sorts of things, but they couldn't get the scale.

Sheeri: Like Nielsen ratings.

Bruce: Nielsen ratings are a great example. On the government side, they would do the opposite: they would collect very general information on everybody - think of a census, think of a driver's license application, any other kind of license. So you're collecting very little information per person - again, because it's so hard.

Collecting information has gotten easier, so now we're living in a world of what I think of as ubiquitous surveillance - we're being surveilled all the time. An interesting question to ask is, "What's different?" Certainly you can look at it as "there's just more of it," but there are things you can do now that you can't do with only point surveillance. One example is the notion of watching everybody - wholesale surveillance. Instead of "Follow that car," it's "Follow every car." There are license plate capture devices on street poles in various cities that are following every car.

Sheeri: Traffic light cameras, E-ZPass or other toll-collecting systems that have cameras capturing plates.

Bruce: That's exactly right. You can now follow everybody. You can follow everybody via their cell phones. There have been research projects that will figure out people's lifestyles - where they live, where they work. You could map relationships because you have all that data.

Here are a couple of things that are new: one is surveillance backwards in time. It used to be, when you wanted to surveil somebody, you'd say "follow that person." If you're saving the data you can say, "What'd he do last month? What'd he do last year?" That would've been impossible without the database.
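
For the database-minded, here's a minimal sketch of that idea in SQL. The plate_sightings table and the plate number are hypothetical, but once every capture is stored and indexed, "what did he do last month?" is just a range scan:

```sql
-- Hypothetical table of license-plate captures that is never purged.
CREATE TABLE plate_sightings (
  plate   VARCHAR(16) NOT NULL,
  camera  VARCHAR(32) NOT NULL,
  seen_at DATETIME    NOT NULL,
  KEY idx_plate_time (plate, seen_at)
);

-- Surveillance backwards in time: one indexed query over saved data.
SELECT camera, seen_at
FROM plate_sightings
WHERE plate = 'ABC1234'
  AND seen_at >= NOW() - INTERVAL 1 MONTH
ORDER BY seen_at;
```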

You can surveil everybody to find a single person. For example, you might be looking for an individual with certain characteristics - we know he was at this location. There was a research project, done by the Canadian NSA equivalent, that looked at IP data from people's mobile devices, mapped that to logins, and had an experimental program to find people that they knew were off-line. The scenario they gave in this document was a kidnapper: he lives in a rural area, he comes into the city to make the ransom call (if he did it in the rural area they'd identify him) - can we find him based on the signature of his mobile device? Turns out you might be able to.
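
Sketched as SQL, that kidnapper scenario is an intersection query. Everything here - the sightings and towers tables, the timestamps - is hypothetical, just to make the shape of the search concrete: devices near the ransom call, minus devices ever seen in the city before.

```sql
-- Hypothetical schema: device sightings tied to towers tagged city/rural.
CREATE TABLE towers (
  tower_id INT PRIMARY KEY,
  area     ENUM('city', 'rural') NOT NULL
);
CREATE TABLE sightings (
  device_id VARCHAR(32) NOT NULL,
  tower_id  INT         NOT NULL,
  seen_at   DATETIME    NOT NULL
);

-- Devices on city towers around the (made-up) time of the ransom call
-- that had never hit a city tower before: the rural visitor stands out.
SELECT DISTINCT s.device_id
FROM sightings s
JOIN towers t ON t.tower_id = s.tower_id
WHERE t.area = 'city'
  AND s.seen_at BETWEEN '2014-03-01 14:00:00' AND '2014-03-01 15:00:00'
  AND NOT EXISTS (
        SELECT 1
        FROM sightings s2
        JOIN towers t2 ON t2.tower_id = s2.tower_id
        WHERE s2.device_id = s.device_id
          AND t2.area = 'city'
          AND s2.seen_at < '2014-03-01 00:00:00'
      );
```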

Another thing is you can search everybody, looking for abnormal behavior. You don't know what abnormal is, but when you've got everybody you can say, "Hey, this person is doing something and nobody else is."

Sheeri: What's out of pattern.

Bruce: Maybe that's suspicious. You can do something called an "about search". This is something the NSA does - the FISA court signed off on it - searching everybody, looking for mentions of someone. There might be a person you're suspicious of - Alice. You can eavesdrop on Alice, you can eavesdrop on Alice's friends - the people Alice communicates with. But you can also search the entire database of conversations of everybody on the planet, looking for people who mention Alice's name.
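
In database terms, an "about search" is a full-text query over everyone's stored messages, rather than a lookup on sender or recipient. A minimal sketch, over a hypothetical messages table:

```sql
-- Hypothetical archive of everyone's messages, with a full-text index.
CREATE TABLE messages (
  sender    VARCHAR(64) NOT NULL,
  recipient VARCHAR(64) NOT NULL,
  sent_at   DATETIME    NOT NULL,
  body      TEXT        NOT NULL,
  FULLTEXT KEY ft_body (body)
);

-- Not "messages to or from Alice" but "messages that mention Alice",
-- no matter who is talking - and it works over the entire history.
SELECT sender, recipient, sent_at
FROM messages
WHERE MATCH(body) AGAINST ('Alice' IN NATURAL LANGUAGE MODE);
```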

That's something you couldn't do otherwise. The neatest thing -

Sheeri: You can do it historically.

Bruce: You do it historically. This is the last one I'll give you. Again, the NSA does this, in a program called co-traveler. This is based on their location database from cell phones - your cellphone is a tracking device; it knows where you are. There are people they're interested in, for whatever reason, and they look for other individuals who are physically near them more often than random chance would suggest. I'm interested in Alice, but look! Bob was in the same cafeteria as Alice on this day, and the same movie theater on that day, and they both went to the same event on this other day, and they were traveling together from city one to city two on this other day - they are co-travelers.

So they are now a person of interest. Again, you can't do that unless you can mine the entire database. Another way they use this co-traveler program is they have the cell phones of US agents flagged. They look for other phones that are following them. They look for tails - a really clever idea.
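
At bottom, co-traveler is a self-join on a location table. A minimal sketch, assuming a hypothetical phone_sightings table with timestamps bucketed by the hour (the threshold of 5 is arbitrary):

```sql
-- Hypothetical cell-tower sightings, one row per phone per cell per hour.
CREATE TABLE phone_sightings (
  phone_id  VARCHAR(32) NOT NULL,
  cell_id   INT         NOT NULL,
  seen_hour DATETIME    NOT NULL
);

-- Phones that show up in the same cell, in the same hour, as the target
-- at several distinct places and times. One coincidence is noise; many
-- is a co-traveler.
SELECT s2.phone_id,
       COUNT(DISTINCT s1.cell_id, s1.seen_hour) AS co_occurrences
FROM phone_sightings s1
JOIN phone_sightings s2
  ON  s2.cell_id   = s1.cell_id
  AND s2.seen_hour = s1.seen_hour
  AND s2.phone_id <> s1.phone_id
WHERE s1.phone_id = 'target-phone'
GROUP BY s2.phone_id
HAVING co_occurrences >= 5
ORDER BY co_occurrences DESC;
```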

This is sort of a long-winded way of saying that everybody's looking at your data. And the neat thing is the new things you can do because you have *everybody's* data.

Sheeri: Right, including people you don't know, like tracking disposable cell phones.

Bruce: That's right. One of the NSA documents talked about programs they have for linking burner phones. Which, I think, is very clever, because the whole purpose of using a burner phone is not to identify yourself. But if the NSA can chain - or the FBI can chain - burner phones, by:

- one gets turned off
- one gets turned on just after
- the physical location's the same
- or the phone numbers called are similar

they can now put together a pattern in ways you're trying to prevent.
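
Those heuristics map naturally onto a join. A hedged sketch - the activity and calls tables, the one-day window, and the two-shared-numbers threshold are all invented for illustration:

```sql
-- Hypothetical per-phone activity summaries and call records.
CREATE TABLE activity (
  phone_id      VARCHAR(32) PRIMARY KEY,
  first_seen    DATETIME NOT NULL,
  last_seen     DATETIME NOT NULL,
  first_cell_id INT      NOT NULL,
  last_cell_id  INT      NOT NULL
);
CREATE TABLE calls (
  phone_id      VARCHAR(32) NOT NULL,
  called_number VARCHAR(20) NOT NULL
);

-- Chain phone A to phone B: A goes dark, B appears shortly after in the
-- same cell, and they dial some of the same numbers.
SELECT a.phone_id AS old_phone,
       b.phone_id AS new_phone,
       COUNT(DISTINCT ca.called_number) AS shared_numbers
FROM activity a
JOIN activity b
  ON  b.first_seen BETWEEN a.last_seen
                       AND a.last_seen + INTERVAL 1 DAY
  AND b.first_cell_id = a.last_cell_id
JOIN calls ca ON ca.phone_id = a.phone_id
JOIN calls cb ON cb.phone_id = b.phone_id
            AND cb.called_number = ca.called_number
GROUP BY a.phone_id, b.phone_id
HAVING shared_numbers >= 2;
```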

Sheeri: So you can say, "these devices probably belong to the same person" - this device was used for a week, this device was used for a week and a half - but they probably all belong to the same person, and then you can start to pinpoint location.

Bruce: And "probably" is the key word here. The big issue with all of this is error rates. We've seen estimates that a third of Acxiom's data about you - they're a credit bureau - is wrong. Certainly, a lot of the Facebook and Google data is wrong. But the cost of an error isn't that great. If Google gets your profile wrong, they show you an ad for a Chevy you don't want to buy.

If the NSA gets your profile wrong, they might drop a drone strike on your family. When you look at all these programs, they all have an error rate. Depending on what we're doing with them - advertising can tolerate an extraordinarily high error rate; policing can tolerate a much lower error rate; a military action needs a much, much lower error rate; being profiled at the airport, we kind of want a low error rate, but unfortunately we live with a high one.

You think of all of these different activities that involve mining this data - the errors you have really determine whether they're effective or not.

Sheeri: And then you have the frustration - there's risk, but there's risk to the user and risk to the company. For example, you brought up the credit rating. I was calling a bank for something, and they wanted to verify my information, so they did the whole "going through random data about you" thing - where have you worked before, which of these streets have you lived on. Interesting to me was "Where have you worked before?" It was "choose one," and they gave four answers, or none of the above. One of the answers was McDonald's, and one of the answers was Online Buddies, which is a company I'd worked for, for two years. I was like, "Which one do I select?" They probably had wrong data on me. They probably didn't know that I worked for 3 months at McDonald's when I was in college.

Bruce: I actually always wonder, when I'm given those questions by the companies, how much of it is them verifying data and how much of it is them collecting data. It's true - you don't know. When they say "What's your social security number?" I think, "Don't you already know that?" And if they don't know it, why do I want to tell them? I always have that reaction whenever I'm asked those questions. You're right - what is it you want to admit to, this time? It's like playing Clue - which one of your facts do you give up?

Sheeri: I know for a fact how this particular system was developed - I knew the guys who developed it when I was doing consulting - so that didn't actually strike me as weird, because they told me, "Sometimes we'll put in wrong information - we know exactly the last 10 addresses they've lived at, and we don't give them those streets. We don't even give them a street next to that street." Which I thought was interesting - it's designed to trip people up.

If somebody knows that I live in Hyde Park, pick a random street in Hyde Park - Main Street, River Street, that kind of thing. Maybe it's close to where I live. The chances are decent, if it's a big street - River Street is a big, kind of main street, and Hyde Park Avenue is a long street - so you might be able to trick someone into saying "yes" when it's not true.

Bruce: I wonder if that system records when you get it wrong. Let's say there are 4 A/B questions, so you have a 1 in 16 chance of guessing correctly and getting in. If you guess wrong and hang up, do they record that you guessed wrong? Or can you call 16 times, each time coming in fresh, and just get in randomly? I don't know, but it would be an interesting way to hack that system.

Sheeri: The other question is, do they record it permanently on your profile? Or can you call next week? Maybe you can call back the next second and get a different operator… clearly if you get the same operator, human beings being what they are, they'll be like, "Didn't I just hear your voice?" Or maybe after 2 or 3 times, they'll recognize your voice. Is a human operator - someone who's on the phone, answering questions - going to mark down in the little comments section, "Called, got question 2 wrong"? Or does the system - because it's all computerized now...

Bruce: It's something we've learned recently, from some public hacks that have made the press: these sorts of backup ID systems fail pretty badly. They're very human. The human on the other end wants to help you. If you act like you need help, you can bypass a lot of these controls. You can use your credit card number as authentication - there were a couple of very high-profile hacks that failed that way. You can say "I don't remember this" and eventually the operator - the representative, whatever they're called - will let you do the thing, because you're obviously in distress, and they want to help.

Social engineering these backup security systems is way easier than it should be. And this is hard - we build all of these good security systems, but they all fail sooner or later. You forget your password, you move, you don't have your thing, your fingerprint isn't working, whatever it is - so there's a backup system: if you forget your password, answer this security question. And often that security question is way less secure than your password was. So we build these security systems with backup systems that are less secure. Again and again and again and again. And these backup systems are, more and more, what criminals are exploiting, because they are much easier to social engineer.

Sheeri: Certainly - you take something like Facebook or Twitter, and they say, "What's your father's middle name?" People are more and more on Facebook, and their kids are now on Facebook, so it's not going to be hard to figure out. Maybe my father's middle name you might not be able to figure out from social media, but my kids' father's middle name - my husband's on Facebook - wouldn't be hard to track back. It's one of those things where we freely give out that information. Things like "What's your favorite color?"

Bruce: But there's also another way: If I don't care who I hack, there's going to be some percentage of accounts where the father's middle name is Bob and their favorite color is green. So I'm just going to get the Bob/greens and that's good enough for me, because I just want to defraud 100 people and I don't care who they are. Some of this you just get probabilistically.

Sheeri: Right. Do you see the government or huge corporations doing proactive data mining on all that data, or is it really just looking back and saying, "OK now we suspect Bruce," or "Now we suspect Sheeri, let's look at their records"? Or are they going more proactive these days?

Bruce: A lot of what we're seeing is proactive. Co-traveler is 100% proactive. "About" searches are largely proactive. Some of what the corporations are doing - the Facebooks and Googles - they're looking for trends, looking for things they can monetize. There's a lot more proactive use of this data, because once you have it all, you start figuring out how you can monetize it.

The Centers for Disease Control are trying to proactively figure out where outbreaks are happening, and Google's helping them, and Twitter. Because you can sense this stuff based on what people are searching for, what they're talking about.

Sheeri: And Disney, too, these days. Disney has these wristbands that track who you are - you're not allowed to give them to somebody else - and they have things like the FastPass system, but it can also register when you enter a restaurant. What did you do after you went to the restaurant? Did you go to a restaurant, then go on a ride, and then go home? Did you go to a restaurant and then get sick and leave? They can track things like food poisoning.

Disney tracks everything, right? They're trying not only to make your experience the best, and blah blah blah, but also to use their technology to separate you from your money the most.

They don't care whether or not I get sick. They care whether or not I don't spend the rest of the day in the park, and buy ice cream, and buy this, and buy that.

Bruce: They care whether you tell people "I went to Disney and got sick." It's a really lousy tweet and they'd like to avoid that, right? They'd like you to say "I went to Disney, didn't feel good, and this medical team swooped out of nowhere, made me better and gave me a free Mickey Mouse," because that's just a much better experience.

It's sort of interesting, because corporations are on one side adversaries, on the other side allies. It's a weird relationship. They want us to have a good experience with their products or services, but they also want to maximize the amount of money they extract from us - while keeping us happy as they do so. It gets even weirder when you deal with the services where we are not the customers.

Google search, Gmail, Facebook, Twitter - where we are fundamentally products being sold to advertisers. There's a lot of that going on. And that really is the value of information. I saw a study recently talking about, "Why doesn't Facebook just charge?"

There are two basic models Facebook could have. One is the model they have: Facebook is free, and they monetize you with advertisers. The other is that Facebook charges you. Someone did the math, based on Facebook's quarterly earnings, and noticed that the average value of a Facebook user to the company is six dollars and change a year. Not very much. So an alternative business model is for Facebook to charge a dollar a month to every user - assuming they'd get the same number of users, which we can't assume, but there's going to be some business model that involves charging them. And the question to ask is, "Why doesn't Facebook offer that option?"

Sheeri: Right, why not have it be an option? They don't have to guarantee it, but if they offered a plan at twelve dollars a year, or even ten dollars a year, they'd be making double what they make on me now. If they make six dollars a year on me now, I'd pay twelve - I'd probably pay twenty dollars a year. Most people would.

Bruce: But they're not going to do that. And here's why: if they offer that option, the people who would take it will tend to be the wealthier people, the ones with more money. Facebook is making a bet that, while they're making six dollars a year on you now, in the future they can make ten, twenty, fifty, a hundred - that all of that money now going to television advertising will come to them once they get enough data and figure you out enough.

So by offering the option to pay - no advertising, no tracking - they would be admitting that this data-mining advertising business isn't going anywhere, and they don't believe that yet.

Sheeri: Or they don't know where it's going to go, but they know it's going somewhere. And they want to be on that boat.

Bruce: And they're going to ride it.

Sheeri: You said corporations exist to separate people from their money, and that's absolutely true. I am privileged to work for Mozilla, which is a not-for-profit at the top - there is a corporation underneath, but at the top it's a not-for-profit whose mission is to keep the web open and accessible. One of our values is that "individual security and privacy on the internet are fundamental and must not be treated as optional." And every so often, we introduce a program and people say, "oh, now you're evil, you're trying to monetize" - as we do need to do. I like to get a salary; I have to pay for food, I have to pay for my living space. We don't live in a world where that's optional. So I need to get paid, somebody needs to pay my salary, and there needs to be money involved somehow.

There's a lot of this in open source in general. You start to talk about open source and everyone gets happy, and it's "free as in beer" or "free as in speech" or "free as in love" - I call it "free as in puppies", because it's a free puppy, but then you have to take care of it. So there's a lot of freedom, and then you start to monetize and people say, "oh, now you're going down the evil route, you might as well be closed source" - because apparently money makes something closed.

Bruce: Money does change things. You do need it, I agree - rich or poor, it's good to have money. But there are a lot of people out there who are motivated by things other than money. And it's sort of neat to watch something like Wikipedia, or something like Linux, which at its core does have money involved - there are people being paid, and you do need that to make it work - but at its fringes are lots of people motivated by the desire to do good in the world, the desire to make something cool.

There are a lot of apps for my phone that are free - as in free. They're not big commercial things; they're just people wanting to do something neat. On the other hand, there are those free apps that charge you incrementally as you wander through them, which is a very sneaky way of separating people from their money. It's interesting to watch. The neat thing about the Internet, to me, is that it allows these other motivations to work. It used to be that the only motivation could be money: you couldn't produce a magazine, or an encyclopedia, or an operating system, or a browser unless it was a for-profit, financial venture.

Now you have all of these hybrid models. And it's not that money is evil, that's kind of ridiculous. But money is corrupting. Larry Lessig talks about this - that you add money to a system, and it makes things different. So yes, you have idealists everywhere on this spectrum. The solution is going to be some hybrid that works.

I love Firefox - I use it every day. It's kind of rare to talk to people who develop things you use every day, and it is kind of neat. The fact that they are not Chrome, not IE - not controlled by a giant for-profit corporation - I think does give them a security benefit. They are more likely to do something in the user's interest than, in Google's case, in their customers' interest.

Sheeri: I agree. And I see some of the internal workings. One of the great things about Mozilla is that almost everything is open - our weekly all-hands meetings are open and streamable on the web. It's really a great place to work. There's a hard line between openness and privacy, partly because people will be open about their own data. I said "I live in Hyde Park." Can you pinpoint where I live? Probably - there's probably a public record out there with my address. People can probably find where you work, if they wanted to. Not that hard. And people are giving out their own information. There's a consent there that we don't get with Facebook when they're selling ads against our data. Although there is an implied consent, because theoretically we know that we're the product of Facebook, we're the product of Google.

Bruce: But I think in a lot of cases we don't know, or it's not salient at the time. We give out data about ourselves all the time. We talk to strangers, we have friends, we have intimate acquaintances, we have family, we have lovers - we are people, we share - with our doctors, our investment counselors, our psychologists. We share a lot of data. But we think of those exchanges transactionally: I'm sharing data with you.

As a species, we have evolved all sorts of systems to deal with that. And they are extraordinarily complex, extraordinarily attuned, and extraordinarily social. You can walk into a party with fifty people - forty of them strangers, ten of them you know - and you immediately know how to behave: who you talk to, what you tell to whom, who's around you, who's listening. You can navigate that beautifully. Move that same interaction onto Facebook, and because of the technology - back to where we were at the beginning of this conversation - because the computers are intermediating everything, suddenly our intuition starts failing.

We don't think, "there's a for-profit corporation recording everything I say and trying to turn that into advertising." We don't think, "the US and maybe other governments are recording everything I say and trying to find terrorists, or criminals, or drug dealers, or whoever is the bad guy this month." That's not what's salient. What's salient is, "I'm at this virtual party, with my friends and acquaintances, and we're talking about this topic."

So what's different - and why I think we really can't say "people agree to put this stuff on the Internet, people are consenting to this" - is that they're really not. What they're consenting to is the real-world analog they have in their heads, and they don't fully understand the ramifications of moving that system into cyberspace.

Sheeri: That's a great point. I know when Google first came out with Gmail, I thought, "wow, this is great - I know that I'm helping their search, but they're giving me something free, so yeah, I'll help their search. I know they're not reading my e-mail; they're scanning it, looking for words and phrases that go together." The peanut butter and jelly / peanut butter and milk kind of thing, diapers and beer, whatever it is in the supermarket. They're looking for those same correlations in search, so that my search will be better. I saw it as a win on two sides: number one, I got free e-mail, and number two, I got better search - search more tailored to what I liked, because I know they're scanning my e-mail. Granted, I know that's a drop in the ocean of the scanning they're doing.
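
That diapers-and-beer hunt is the classic market-basket query: a self-join plus a GROUP BY. A minimal sketch over a hypothetical purchases table:

```sql
-- Hypothetical purchase records: one row per item per basket.
CREATE TABLE purchases (
  basket_id INT         NOT NULL,
  item      VARCHAR(64) NOT NULL
);

-- How often does each pair of items land in the same basket? The
-- frequent pairs are the correlations an advertiser acts on.
SELECT p1.item AS item_a,
       p2.item AS item_b,
       COUNT(*) AS baskets_together
FROM purchases p1
JOIN purchases p2
  ON  p2.basket_id = p1.basket_id
  AND p2.item > p1.item        -- each pair once, no self-pairs
GROUP BY p1.item, p2.item
ORDER BY baskets_together DESC
LIMIT 20;
```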

But nowadays, it's a little more insidious, right? It's not just why they're scanning - it's who they're going to give that to. Who are they going to give my e-mails to when called upon?

Bruce: Right. And we don't actually know. I'm not a Gmail user, I pointedly don't. I'm probably one of the few people left on the planet who use Eudora and keep my mail on my computer. I actually love Eudora search, by the way, and it's the reason I keep it and haven't moved to something like Thunderbird. Nothing is better than Eudora. Unfortunately, my wife's a Mac user, she had to drop Eudora with the latest OS release.

I'm sure sooner or later Microsoft is going to break Eudora on Windows - I'm terrified of the day I have to give that up, all that email. So I don't put my e-mail on Google; I don't give them my mail, I don't want them to search it, to correlate it. But because all of you do - last time I checked, about a third of my mail is archived on Google, because of the people I correspond with.

And this is an interesting point - a lot of surveillance happens inadvertently. That I am not being tracked by Google, but I am. I am not a Facebook user - I don't have a Facebook account - but I know that Facebook has a dossier on me and if I get an account they will immediately populate it with all my friends. They'll get it largely right. There are any number of photos of me, tagged, on Facebook, not by me. There's discussion about me on Facebook. So even though I don't have a relationship with the company, that company has a relationship with me.

Google Glass will make that even more stark. As soon as some percentage of the population - what was it, 15%, 20%? - wears Google Glass regularly, it doesn't matter if anyone else does, because all that video will be captured, just from another perspective. So this data being collected, there's a lot of it that is about us that we just have no control over.

Sheeri: That kind of co-traveler idea.

Bruce: That's right.

Sheeri: I was at a conference last year where one of the speakers was talking about what you can find out about people - what's out there and free. For example, if you search Twitter - my Twitter handle is @sheeri (because hey, it's easy) - you can search for my name on lists. Some people have put me on lists of people who live in Boston, some people have put me on lists of tech women, or MySQL database admins. You can find out a lot about me based on what people have categorized me as. Then you add in something like LinkedIn, where people are now endorsing you for different things all the time - I get endorsed for things that I'm not an expert on, and I'm kind of like, "hrm, I'm not going to accept that." But there are people who accept all their endorsements.

Bruce: Because why not?
A lot of work goes into our public face. The face we display on Facebook isn't necessarily our true self; it's a public self. That's a performance space - a space where we say some things and not others. When I trade e-mail with my wife, I'm a different person than when I trade e-mail with a co-worker. And that's also very human. A lot of these public, or semi-public, spaces turn into places where - especially among kids - they know they have to be careful. And that's really why there's this huge surge, out of nowhere, of ephemeral messaging platforms, where people - kids mostly - can say things with the belief that they will disappear in 10 minutes, an hour, a day.

We can argue whether that's true or not - is the NSA getting a copy? We know that Snapchat was sued last year, and there was an FTC investigation, because in fact, while they advertised that messages disappear, they actually didn't. But whether it's true or not is a separate issue. There is this desire for ephemera in this world of non-ephemera.

Sheeri: And I think it's great that kids especially want that, because they need a place to experiment without it coming back to them in 10 years that they once said this on Snapchat.

Bruce: We're sort of living in this transitional world, where you can imagine a politician in 10 years being asked to explain his 7th grade blog postings…

Sheeri: …YouTube movies…

Bruce: …YouTube movies - and let's face it, the correct explanation is, "I was an idiot when I was a kid." We all were. But unless that answer is societally acceptable, you're just going to get politicians who weren't idiots when they were kids - and they'll probably make lousy adults, because that's how you learn. In 10-20 years, this will have happened to everybody, and it will be normal. As a society, we will just get used to the fact that everybody said stupid things; that's just the way it works. Now we're in the transitional period. You saw that with drugs: it was Bill Clinton who first admitted, "I smoked but didn't inhale." Then it was Bush, who said, "I did stuff but I don't want to talk about it." And Obama said, "Well of course I inhaled - that was the point."

Sheeri: Right, and he's done cocaine, it's in his book, his autobiography.

Bruce: And now suddenly that's OK. I think in the future, every politician will be assumed to have had an adolescence: "you did dumb things, you grew out of it, that's OK." But the transition period is very awkward for society, and that's going to be true for a lot of things, as more of our digital history becomes accessible, becomes news, becomes used. And eventually it will become normal.

Sheeri: Well, I'm interested to see where it goes in the future. And thanks so much for taking the time to talk to me, talk to us. Thanks Bruce.

Bruce: Thank you, this was fun.

Feedback
Facebook group
Google+ page
e-mail: podcast at technocation.org
voicemail using phone/Skype: +1-617-674-2369
twitter: @oursqlcast
or Tweet about @oursqlcast
