Tuesday, August 21, 2012

Free Speech for Me, But Not for Thee, PC?


Free speech rights ‘for computers’—in all their glory and with all their limitations—are fundamentally derived from human activity, warts and all.
“I’m sorry, Dave,” Hal, the legendary talking computer, asserted in 1968’s 2001: A Space Odyssey. “I’m afraid I can’t do that.”
That breakthrough film raised the startling specter of sentient machines capable of speaking intelligently and outwitting their masters—at least until their plugs are pulled.
Even in 2012, we’re still quite distant from the world of 2001, but computer speech has emerged as a fascinating new issue at the intersection of law, technology, and politics. As more and more commercial functions and decisions become automated, a discussion has been taking place among legal and policy whizzes about whether and how to regulate and respect machine “speech.”
In a recent New York Times op-ed, Columbia Law School professor Tim Wu provocatively asked: “Do machines speak? If so, do they have a constitutional right to free speech?”
Wu’s questions seemingly answer themselves. Constitutional rights, as everyone knows, apply only to humans, not to animals, cyborgs, or computers—don’t they?
But Wu is getting at something a bit more subtle:

On the drive to work, a GPS device suggests the best route; at your desk, Microsoft Word guesses at your misspellings, and Facebook recommends new friends. In the past few years, the suggestion has been made that when computers make such choices, they are “speaking,” and enjoy the protections of the First Amendment.
What are the implications of computer speech? Wu contends:
Consider that Google has attracted attention from both antitrust and consumer protection officials after accusations that it has used its dominance in search to hinder competitors and in some instances has not made clear the line between advertisement and results. Consider that the “decisions” made by Facebook’s computers may involve widely sharing your private information; or that the recommendations made by online markets like Amazon could one day serve as a means for disadvantaging competing publishers. Ordinarily, such practices could violate laws meant to protect consumers. But if we call computerized decisions “speech,” the judiciary must consider these laws as potential censorship, making the First Amendment, for these companies, a formidable anti-regulatory tool.
Sounds frightening, right? In Wu’s telling, Hal has outsmarted the humans who programmed him. Or, to mix movie metaphors, it seems that our computers have, like Skynet, become “self-aware” and threaten our most basic freedoms. In Wu’s opinion, “the First Amendment has wandered far from its purposes when it is recruited to protect commercial automatons from regulatory scrutiny.”
He proposes, instead, that “the line can be easily drawn: As a general rule, nonhuman or automated choices should not be granted the full protection of the First Amendment, and often should not be considered ‘speech’ at all. (Where a human does make a specific choice about specific content, the question is different.)” (Emphasis added.)
So is Wu correct? Are we hurtling toward 2001, or even 1997?
Not really. As Eugene Volokh observes, “Google’s algorithms in fact reflect specific choices made by engineers about what type of content users will find most useful.”
A law professor at the University of California, Los Angeles, and a renowned First Amendment expert, Volokh curates the eponymous Volokh Conspiracy “blawg,” which skews center-right/libertarian on most legal and political issues and routinely offers fascinating insights into cyberlaw developments.
In a white paper he co-wrote for Google, Volokh characterizes search engines as “speakers” for three reasons. First, they “convey information that the search engine company has itself prepared or compiled.” Second, they orient users toward particular materials “that the search engines judge to be most responsive to the query.” Third, and most importantly, these engines “select and sort the results in a way that is aimed at giving users what the search engine companies see as the most helpful and useful information.”
In other words, whether they return search results, suggest new online friends, or recommend particular driving routes, computers merely reflect and amplify the ideas, motivations, and expressions—in short, the speech—of the humans who programmed them. The amplification, Volokh argues, is of particular benefit to society: “The process of automating output increases the value of the speech to readers beyond what purely manual decision-making can provide,” and therefore should remain off-limits to heavy-handed government regulators.
In response, Wu states that “defenders of Google’s position have argued that since humans programmed the computers that are ‘speaking,’ the computers have speech rights as if by digital inheritance. But the fact that a programmer has the First Amendment right to program pretty much anything he likes doesn’t mean his creation is thereby endowed with his constitutional rights. Doctor Frankenstein’s monster could walk and talk, but that didn’t qualify him to vote in the doctor’s place.”
(Volokh answers back, tongue firmly in cheek, with some questions of his own: “Would Frankenstein’s monster have his own First Amendment rights? Substantive due process rights to marry his bride? A right to keep and bear arms against the farmers’ pitchforks? Unsurprisingly, those questions have not been answered.”)
Yet if Dr. Frankenstein programmed his monster to spout certain pre-canned phrases like “How are you doing?” or “I love you”—think Teddy Ruxpin—those platitudes would indeed merit First Amendment protection, not because a monster vocalized them but because a human caused him to. Of course, if the monster were programmed to mouth obscenities to little children in public—think Ted—that speech would enjoy far less protection, in keeping with permissible “time, place, and manner” regulations of speech in the public square.
Most fundamentally, while the privacy concerns Wu raises deserve careful scrutiny, they are no more or less salient because of the automation involved in the process. Facebook, for instance, should be subject to the same speech-vs.-privacy calculus as credit card companies, telemarketers, and government agencies with respect to disseminating private information.
“If the law decides that Facebook may not reveal certain private information about you,” Volokh notes, “that decision should apply to direct leaks by individual Facebook employees as well as to computer algorithms that Facebook employees generate. There is no need for some special First Amendment rule for speech that is produced partly using computer algorithms.”
I’m inclined to agree with Volokh’s take. My bias generally falls on the side of more speech and less regulation.
But I can see a point, at least theoretically, where computerized decision-making could become so unmoored from a human originator that we can no longer plausibly associate that speech with a person. If Skynet truly did instigate a nuclear war of its own volition, I don’t think Miles Dyson would have a leg to stand on if he were to be so audacious as to claim that the network’s nuclear code instructions warrant First Amendment safeguards.
Thankfully, we’re a long way off from such doomsday scenarios. To return to an older future, it bears noting that during 2001, Hal, responding to a news interviewer, boasted: “Let me put it this way, Mr. Amer. The 9000 series is the most reliable computer ever made. No 9000 computer has ever made a mistake or distorted information. We are all, by any practical definition of the words, foolproof and incapable of error.”
Infallibility may be a defining feature of computers in the abstract, but in practice, they are heavily dependent on the rather fallible humans who program them. And, thus, free speech rights “for computers”—in all their glory and with all their limitations—are fundamentally derived from human activity, warts and all.
Michael M. Rosen, a contributor to THE AMERICAN, is an attorney and writer in San Diego.
