Lessig’s argument that “code is law” resonates with me personally because it parallels product design. When designing a product, it’s helpful to assume that users don’t make decisions; rather, product designers make decisions, and users follow them 99% of the time (consciously or not). For example, when Facebook launched the News Feed, it was poorly received. People thought it was creepy to see their friends’ conversations directly on their Facebook homepage. Despite the negative user feedback, Zuckerberg kept the News Feed in the product. After a couple of months, opinion reversed: users came to like being updated about their friends’ digital conversations and whereabouts. What happened in the meantime? The News Feed changed the social norms of digital eavesdropping. Once Facebook made eavesdropping the default, browsing a friend’s wall seemed less invasive, because Facebook implicitly created the norm that everyone "creeps" on everyone else. Facebook didn’t have to kill the News Feed and wait for norms to change; the News Feed itself changed the norms. The architecture -- the product design, the code -- of Facebook changed how people interact.
Product design is powerful. Through subtle changes in the design of a questionnaire, you can steer people’s answers to deeply personal questions. Consider organ donation -- you’d think that people would have strong opinions about whether or not their organs are harvested after they die. They do. However, by simply changing the default answer on a questionnaire, governments can shift people’s choices (and social norms) about donation. Germany makes non-donation the default, and its consent rate is 12%. By contrast, Austria makes donation the default, and its consent rate is 99.9%. The two countries are economically and culturally very similar -- the huge gap between them is largely caused by the design of each government’s consent form. Product defaults make normative statements that change people’s beliefs.
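To make the mechanism concrete, here is a minimal sketch of the default effect. Everything in it -- the types, the 90% keep-the-default rate -- is my own invention for illustration, not taken from any real government form:

```typescript
// Hypothetical sketch: the same consent question with two different defaults.
// Most respondents never change the pre-selected answer, so the default
// largely determines the aggregate outcome.

interface ConsentForm {
  question: string;
  donorByDefault: boolean; // the pre-selected answer
}

// Opt-out design (Austria-style): you are a donor unless you object.
const optOut: ConsentForm = {
  question: "Check this box if you do NOT want to be an organ donor.",
  donorByDefault: true,
};

// Opt-in design (Germany-style): you are not a donor unless you act.
const optIn: ConsentForm = {
  question: "Check this box if you DO want to be an organ donor.",
  donorByDefault: false,
};

// Projected donation rate if a given fraction of respondents simply keep
// the default answer and the rest actively flip it (0.9 is an assumption).
function projectedDonationRate(form: ConsentForm, keepDefault = 0.9): number {
  return form.donorByDefault ? keepDefault : 1 - keepDefault;
}

console.log(projectedDonationRate(optOut)); // 0.9 -- most people become donors
console.log(projectedDonationRate(optIn));  // 0.1 -- most people do not
```

The question is logically identical in both forms; only the default differs, and the default does nearly all the work.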
Take Snapchat as a final example. This novel messaging app has opt-in permanence. By default, all messages sent on the platform disappear -- but if you see a message that you really want to keep, you can save it. This simple product design decision changed how people on the platform communicate.
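That default can be expressed directly in a data model. Here is a hypothetical sketch -- the type names and the expiry window are my own invention, not Snapchat’s actual implementation -- showing ephemerality as the baseline and saving as the deliberate act:

```typescript
// Hypothetical sketch of opt-in permanence: messages disappear by default,
// and keeping one requires an explicit user action.

interface Message {
  id: string;
  body: string;
  viewedAt?: number; // set when the recipient opens the message
  saved: boolean;    // false by default -- permanence is opt-in
}

function newMessage(id: string, body: string): Message {
  return { id, body, saved: false }; // the default encodes the norm
}

// Purge messages that have been viewed and were not deliberately saved.
function purgeExpired(messages: Message[], now: number, ttlMs = 10_000): Message[] {
  return messages.filter(
    (m) => m.saved || m.viewedAt === undefined || now - m.viewedAt < ttlMs
  );
}
```

One boolean default, and the whole platform’s communication style follows from it.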
So how do we keep people from doing bad things on the Internet? Let’s look at the problem through the lens of product design. The Internet is a generative platform that hosts sub-platforms like Facebook, iTunes, and Google. If we look at how these sub-platforms stop people from doing bad things, we can apply the same strategies to the Internet as a whole. Web platforms shape user behavior largely through incentive alignment and surveillance.
Companies can prevent blackhat hacking by incentivizing people to be whitehat. Smart technical people can’t help but hack into systems; hackers’ natural curiosity makes discovering software vulnerabilities fun. People like Lord Flathead and the Blue Box creators will always exist -- why not let them hack for good? Google and Facebook have whitehat security portals that pay “security researchers” to find security holes in the companies' software. This gives hackers an economic incentive to disclose, rather than exploit, bugs in Google’s and Facebook’s systems. Perhaps if all major websites had whitehat programs, weev wouldn’t be in prison.
Similarly, iTunes greatly reduced piracy by making it easier to buy music legally -- some estimates put the drop in music piracy at 80% after iTunes launched. Netflix is doing the same thing with video. Incentive alignment can be effective, but other platforms choose to control behavior via forced surveillance.
The App Store is a walled garden of surveillance. Apple requires developers to submit their apps for manual review, after which Apple decides whether or not each app belongs in its store. In effect, developers are forced to expose their work to Apple’s scrutiny before they can enter the gates of the App Store. Although this idea makes some squeamish, many pundits would argue that this surveillance is a net good: the manual review process keeps the quality of apps in the App Store high and the number of spammy and malicious apps to a minimum. I would agree, given the App Store’s success relative to Google’s Play Store.
Facebook also uses surveillance for good. Its terms of service require that people use their real names. This code is meant to shape the social norms of the platform by keeping people personally accountable for their actions. As a result, Facebook is a tame platform compared to the anonymous 4chan: there’s less cyberbullying, so there’s less need to call the cyberpolice.
Can the Internet use incentive alignment or surveillance for good? Maybe -- a full answer is outside the scope of this essay. If an Internet-wide regulator incentivized whitehat hackers to disclose vulnerabilities in large corporate systems, that could be a net good. As for surveillance, I’m less convinced: I don’t think any one organization is a worthy arbiter of “good” and “bad” on the Internet.
I love Lessig’s holistic stance on tech and the law. Rather than taking the narrow view of fighting cybercrime using only the law, he broadens the scope of the issue to encompass all human behavior. Cybercrime isn’t simply a legal matter -- it is a problem that can be ameliorated through applied psychology and architectural design. Just as "code is law", product design is behavior.