echo: edge_online
to: All
from: Jeff Snyder
date: 2010-07-23 01:12:00
subject: Protect Your Online Privacy 03

As Stacy Snyder's "Drunken Pirate" photo suggests, however, many people
aren't worried about false information posted by others -- they're worried
about true information they've posted about themselves when it is taken out
of context or given undue weight. And defamation law doesn't apply to true
information or statements of opinion. Some legal scholars want to expand the
ability to sue over true but embarrassing violations of privacy -- although
it appears to be a quixotic goal.

Daniel Solove, a George Washington University law professor and author of
the book "The Future of Reputation," says that laws forbidding people to
breach confidences could be expanded to allow you to sue your Facebook
friends if they share your embarrassing photos or posts in violation of your
privacy settings. Expanding legal rights in this way, however, would run up
against the First Amendment rights of others. Invoking the right to free
speech, the U.S. Supreme Court has already held that the media can't be
prohibited from publishing the name of a rape victim that they obtained from
public records. Generally, American judges hold that if you disclose
something to a few people, you can't stop them from sharing the information
with the rest of the world.

That's one reason that the most promising solutions to the problem of
embarrassing but true information online may be not legal but technological
ones. Instead of suing after the damage is done (or hiring a firm to clean
up our messes), we need to explore ways of pre-emptively making the
offending words or pictures disappear.

EXPIRATION DATES

Jorge Luis Borges, in his short story "Funes, the Memorious," describes a
young man who, as a result of a riding accident, has lost his ability to
forget. Funes has a tremendous memory, but he is so lost in the details of
everything he knows that he is unable to convert the information into
knowledge and unable, as a result, to grow in wisdom. Viktor
Mayer-Schoenberger, in "Delete," uses the Borges story as an emblem for the
personal and social costs of being so shackled by our digital past that we
are unable to evolve and learn from our mistakes. After reviewing the
various possible legal solutions to this problem, Mayer-Schoenberger says he
is more convinced by a technological fix: namely, mimicking human forgetting
with built-in expiration dates for data. He imagines a world in which
digital-storage devices could be programmed to delete photos or blog posts
or other data that have reached their expiration dates, and he suggests that
users could be prompted to select an expiration date before saving any data.

This is not an entirely fanciful vision. Google not long ago decided to
render all search queries anonymous after nine months (by deleting part of
each Internet protocol address), and the upstart search engine Cuil has
announced that it won't keep any personally identifiable information at all,
a privacy feature that distinguishes it from Google. And there are already
small-scale privacy apps that offer disappearing data. An app called
TigerText allows text-message senders to set a time limit from one minute to
30 days after which the text disappears from the company's servers on which
it is stored and therefore from the senders' and recipients' phones. (The
founder of TigerText, Jeffrey Evans, has said he chose the name before the
scandal involving Tiger Woods's supposed texts to a mistress.)
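
To make "deleting part of each Internet protocol address" concrete, the toy
Python sketch below blanks the final octet of an IPv4 address before it is
stored -- one common way to loosen the tie between a logged query and a
single machine. It is purely illustrative, not Google's actual procedure.

    def anonymize_ip(ip):
        # Keep the network portion, blank the host-identifying final octet.
        octets = ip.split(".")
        octets[-1] = "0"
        return ".".join(octets)

    print(anonymize_ip("203.0.113.47"))   # prints 203.0.113.0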

Expiration dates could be implemented more broadly in various ways.
Researchers at the University of Washington, for example, are developing a
technology called Vanish that makes electronic data "self-destruct" after a
specified period of time. Instead of relying on Google, Facebook or Hotmail
to delete the data that is stored "in the cloud" -- in other
words, on their
distributed servers -- Vanish encrypts the data and then "shatters" the
encryption key. To read the data, your computer has to put the pieces of the
key back together, but they "erode" or "rust" as time
passes, and after a
certain point the document can no longer be read. Tadayoshi Kohno, a
designer of Vanish, told me that the system could provide expiration dates
not only for e-mail but also for any data stored in the cloud, including
photos or text or anything posted on Facebook, Google or blogs. The
technology doesn't promise perfect control -- you can't stop someone from
copying your photos or Facebook chats during the period in which they are
not encrypted. But as Vanish improves, it could bring us much closer to a
world where our data doesn't linger forever.
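
For readers who want a feel for the mechanism, here is a toy Python sketch
of the "shatter the key" idea: the data is encrypted, the key is split into
pieces, and once the pieces expire the ciphertext can no longer be read. The
sketch uses a simple XOR split and a wall-clock deadline, so it illustrates
the concept only; the researchers' actual system spreads threshold key
shares across a peer-to-peer network rather than holding them locally.

    import hashlib, os, time

    def keystream(key, length):
        # Stretch the key into a pseudorandom pad (a stand-in for a real cipher).
        out, counter = b"", 0
        while len(out) < length:
            out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:length]

    def encrypt(data, key):
        # XOR with the keystream; running it twice with the same key decrypts.
        return bytes(d ^ k for d, k in zip(data, keystream(key, len(data))))

    def split_key(key, n):
        # XOR-split the key into n shares; every share is needed to rebuild it.
        shares = [os.urandom(len(key)) for _ in range(n - 1)]
        last = key
        for s in shares:
            last = bytes(a ^ b for a, b in zip(last, s))
        return shares + [last]

    def reassemble(shares, expires_at):
        # The "erosion": once the deadline passes, refuse to rebuild the key.
        if time.time() > expires_at:
            raise RuntimeError("key shares have expired; the data is unreadable")
        key = shares[0]
        for s in shares[1:]:
            key = bytes(a ^ b for a, b in zip(key, s))
        return key

    key = os.urandom(32)
    secret = b"this post should not last forever"
    ciphertext = encrypt(secret, key)
    shares = split_key(key, 5)
    expires_at = time.time() + 8 * 60 * 60      # readable for eight hours
    assert encrypt(ciphertext, reassemble(shares, expires_at)) == secret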

Kohno told me that Facebook, if it wanted to, could implement expiration
dates on its own platform, making our data disappear after, say, three days
or three months unless a user specified that he wanted it to linger forever.
It might be a more welcome option for Facebook to encourage the development
of Vanish-style apps that would allow individual users who are concerned
about privacy to make their own data disappear without imposing the default
on all Facebook users.

So far, however, Mark Zuckerberg, Facebook's C.E.O., has been moving in the
opposite direction -- toward transparency rather than privacy. In defending
Facebook's recent decision to make profile information about friends and
relationship status public by default rather than private, Zuckerberg told
the founder of the publication TechCrunch in January that Facebook had an
obligation to reflect "current social norms" that favored exposure over
privacy. "People have really gotten comfortable not only sharing more
information and different kinds but more openly and with more people, and
that social norm is just something that has evolved over time," he said.

PRIVACY'S NEW NORMAL

But not all Facebook users agree with Zuckerberg. Plenty of anecdotal
evidence suggests that young people, having been burned by Facebook (and
frustrated by its privacy policy, which at more than 5,000 words is longer
than the U.S. Constitution), are savvier than older users about cleaning up
their tagged photos and being careful about what they post. And two recent
studies challenge the conventional wisdom that young people have no qualms
about having their entire lives shared and preserved online forever. A
University of California, Berkeley, study released in April found that large
majorities of people between 18 and 22 said there should be laws that
require Web sites to delete all stored information about individuals (88
percent) and that give people the right to know all the information Web
sites know about them (62 percent) -- percentages that mirrored the privacy
views of older adults. A recent Pew study found that 18-to-29-year-olds are
actually more concerned about their online profiles than older people are,
vigilantly deleting unwanted posts, removing their names from tagged photos
and censoring themselves as they share personal information, because they
are coming to understand the dangers of oversharing.

Still, Zuckerberg is on to something when he recognizes that the future of
our online identities and reputations will ultimately be shaped not just by
laws and technologies but also by changing social norms. And norms are
already developing to re-create off-the-record spaces in public, with no
photos, Twitter posts or blogging allowed. Milk and Honey, an exclusive bar
on Manhattan's Lower East Side, requires potential members to sign an
agreement promising not to blog about the bar's goings-on or to post photos
on social-networking sites, and other bars and nightclubs are adopting
similar policies. I've been at dinners recently where someone has requested,
in all seriousness, "Please don't tweet this" -- a custom that is likely to
spread.

But what happens when people transgress those norms, using Twitter or
tagging photos in ways that cause us serious embarrassment? Can we imagine a
world in which new norms develop that make it easier for people to forgive
and forget one another's digital sins?

That kind of social norm may be harder to develop. Alessandro Acquisti, a
scholar at Carnegie Mellon University, studies the behavioral economics of
privacy -- that is, the conscious and unconscious mental trade-offs we make
in deciding whether to reveal or conceal information, balancing the benefits
of sharing with the dangers of disclosure. He is conducting experiments
about the "decay time" and the relative weight of good and bad information
-- in other words, whether people discount positive information about you
more quickly and heavily than they discount negative information about you.
His research group's preliminary results suggest that if rumors spread about
something good you did 10 years ago, like winning a prize, they will be
discounted; but if rumors spread about something bad that you did 10 years
ago, like driving drunk, that information has staying power. Research in
behavioral psychology confirms that people pay more attention to bad
information than to good, and Acquisti says he fears that "20 years from now,
if all of us have a skeleton on Facebook, people may not discount it because
it was an error in our youth."

On the assumption that strangers may not make it easy for us to escape our
pasts, Acquisti is also studying technologies and strategies of "privacy
nudges" that might prompt people to think twice before sharing sensitive
photos or information in the first place. Gmail, for example, has introduced
a feature that forces you to think twice before sending drunken e-mail
messages. When you enable the feature, called Mail Goggles, it prompts you
to solve simple math problems before sending e-mail messages at times you're
likely to regret. (By default, Mail Goggles is active only late on weekend
nights.) Acquisti is investigating similar strategies of "soft paternalism"
that might nudge people to hesitate before posting, say, drunken photos from
Cancun. "We could easily think about a system, when you are uploading
certain photos, that immediately detects how sensitive the photo will be."
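
As a rough illustration of what such "soft paternalism" can look like in
code, the Python sketch below holds a message during late weekend hours
until the sender clears a few arithmetic problems, mimicking the behavior
described above. It is not Google's or Acquisti's implementation, just a
sketch of the kind of friction a nudge adds.

    import random
    from datetime import datetime

    def needs_nudge(now):
        # Late Friday or Saturday night, roughly the feature's default window.
        return now.weekday() in (4, 5) and (now.hour >= 22 or now.hour < 4)

    def passes_math_check(rounds=3):
        # The sender must answer a few simple sums correctly to proceed.
        for _ in range(rounds):
            a, b = random.randint(10, 99), random.randint(10, 99)
            if input("What is %d + %d? " % (a, b)).strip() != str(a + b):
                return False
        return True

    def send_message(text):
        if needs_nudge(datetime.now()) and not passes_math_check():
            print("Message held. Maybe sleep on it.")
            return
        print("Sent: " + text)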

A silly but surprisingly effective alternative might be to have an
anthropomorphic icon -- a stern version of Microsoft's Clippy -- that could
give you a reproachful look before you hit the send button. According to M.
Ryan Calo, who runs the consumer-privacy project at Stanford Law School,
experimenters studying strategies of "visceral notice" have found that when
people navigate a Web site in the presence of a human-looking online
character who seems to be actively following the cursor, they disclose less
personal information than people who browse with no character or one who
appears not to be paying attention. As people continue to experience the
drawbacks of living in a world that never forgets, they may well learn to
hesitate before posting information, with or without humanoid Clippys.



Jeff Snyder, SysOp - Armageddon BBS  Visit us at endtimeprophecy.org port 23
----------------------------------------------------------------------------
Your Download Center 4 Mac BBS Software & Christian Files.  We Use Hermes II


--- Hermes Web Tosser 1.1
* Origin: Armageddon BBS -- Guam, Mariana Islands (1:345/3777.0)
SEEN-BY: 3/0 633/267 640/954 712/0 313 550 620 848
@PATH: 345/3777 10/1 261/38 712/848 633/267
