Citation: Mitchell, Kimberly, et al. “The Exposure of Youth to Unwanted Sexual Material on the Internet: A National Survey of Risk, Impact and Prevention.” Youth and Society. Vol. 34 No. 3, March 2003: 330-358. Accessed 6 April 2009.
The authors of the study surveyed 1,501 Internet users between the ages of 10 and 17, asking them about their inadvertent exposure to sexually explicit content while online. The results showed that 25% of those polled unintentionally encountered sexually explicit material while on the Internet. Those who discovered sexual content tended to be heavy Internet users and older teens. About one-fifth of those who accidentally viewed the content were embarrassed and very or extremely upset by it. Minors whose parents had installed filtering software on their computers were 40% less likely to have been exposed to unwanted sexual material. However, most parents did not install filtering software on their computers. Other forms of parental control, such as restricting the amount of time their children could spend on the Internet, did not reduce the chance of exposure.
This study is significant to my paper for a few reasons. First, it established that children are inadvertently exposed to sexual content, and that this exposure can cause harm. Knowing that sexual material on the Internet is a problem establishes a greater need for remedies to the situation. Additionally, this study is important because it measures, in a relatively scientific manner, the effectiveness of different types of controls at preventing youth exposure to sexually explicit material. Since filtering was determined to be more effective than parental restrictions, yet was not perfect at preventing exposure, perhaps resources should be devoted to improving filtering software and persuading parents to install filtering programs on their children’s computers. The authors noted a possible problem with the study: adolescents who have filtering software on their computers may simply be more likely to use the web in ways that shield them from exposure to sexual content, rather than the software itself providing the protection. If this is the case, perhaps the best way to protect minors from harmful content is to better educate them about smart Internet use.
Citation: Reno v. ACLU 117 S.Ct. 2329. 1997. Cornell Law School. 4 April 2009. <http://www.law.cornell.edu/supct/html/96-511.ZO.html>.
This source is a Supreme Court decision that curtailed the federal government's ability to prohibit material that could be harmful to children. The laws in question were provisions of section 223 of the Communications Decency Act of 1996 that prohibited knowingly transmitting "indecent" and "patently offensive" material on the Web to minors. The Supreme Court ruled that these provisions were unconstitutional, upholding the ruling of a lower court, because they violated the first and fifth amendments of the Constitution. The court believed the terms "indecent" and "patently offensive" were too broad and could restrict content that is not actually harmful. Additionally, the provisions were struck down because the court felt there was no good way to specifically identify Internet users under the age of 18, making the law difficult both to comply with and to enforce. The portions of the law that prohibited knowingly transmitting obscene materials and child pornography to minors were upheld, because obscene content warrants less free-speech protection than indecent content. The decision, written by Justice Stevens and joined by a large majority of the justices, also included an overview of the history of the Internet and an explanation of why cases upholding government laws regulating commercial interests to protect children did not apply here.
Reno v. ACLU relates to my paper because it is a court case important to the ongoing battle over how best to protect children from harmful content online. If free speech bars the government from shielding children from certain types of potentially harmful content, then government regulation cannot be the only solution. However, since the government can pass laws regulating obscenity and child pornography, this case does demonstrate that there is a place where government regulation could be helpful and useful. Stevens' decision would support my thesis, because the difficulty he acknowledges in detecting the age of Internet users makes it difficult for any organization to properly filter content. In order for children to be protected from some content, there will need to be intrinsic motivation for indecent websites to self-regulate and to try not to reach children.
Citation: Thierer, Adam. "Parental Controls and Online Child Protection: A Survey of Tools and Methods." Version 3.1. Fall 2008. Progress & Freedom Foundation. 5 April 2009. <http://www.pff.org/parentalcontrols/Parental%20Controls%20&%20Online%20Child%20Protection%20[VERSION%203.1].pdf>.
Thierer’s document covers a variety of issues relating to parental control of children’s media consumption. Different methods of control are discussed, including informal rules implemented by parents, ratings systems, filtering and monitoring software, increased media literacy, self-regulation by companies, and governmental regulation. Much of the document relates to media other than the Internet, but the Internet is discussed, particularly in the descriptions of different types of filtering programs and of the Internet’s relationship to the problems with governmental regulation. Because no one method of parental control is completely effective, Thierer concludes that parents should take an interdisciplinary approach when regulating their children’s media content and employ a combination of strategies. Educational, empowerment, and informal strategies have the added bonus of being the least likely to restrict freedom of speech. There is also a discussion of how to protect children from sexual predators online. Age verification and extensive data monitoring are seen as poor remedies, while the right solution is determined to be “education, empowerment and enforcement.”
This article, much like some of the other documents, emphasizes education and empowerment as optimal ways to protect children from the dangers of the Internet. Its focus on the ineffectiveness of other types of controls relates to questions concerning those methods’ constitutionality, which supports my thesis. The document is a particularly good source because it is very detailed and thorough in its analyses of the types of controls. This article also helps to compare and contrast the views of Thierer and Palfrey, who co-authored another source. While they may have disagreed about reforming CDA 230, the two men both supported internal regulation by parents and community members and wanted non-governmental groups to come up with their own strategies for controlling content. Thierer is perhaps more skeptical of technology than Palfrey is, and he places more of an emphasis on educating and empowering parents and children about how to optimally use the Internet.
Citation: Majoras, Deborah Platt. “Rights and Responsibility: Protecting Children in a Web 2.0 World.” Keynote Address at Family Online Safety Institute. 6 December 2007. Federal Trade Commission. 6 April 2009. <http://ftc.gov/speeches/majoras/071206fosi.pdf>.
This document is the copy of a speech by the Chairman of the Federal Trade Commission describing methods used to protect children from dangers lurking online, including harmful content, cyber bullying, and privacy invasion. After describing the media use of children and some of the dangers they face online, Majoras summarizes the law enforcement efforts the FTC has undertaken to prevent exposure to harmful content. The laws the FTC works to enforce include provisions requiring adult content to be labeled as such in the e-mail tagline and preventing websites from asking children for too much personal information. Majoras then describes the FTC’s efforts to educate and empower parents and children to stay safe. The FTC views these efforts as important because first amendment restrictions will prevent the government from being able to completely restrict dangerous content itself. Majoras also said that it is important for companies to self-regulate content, and she concludes by stating that a multidisciplinary approach is needed to solve this problem.
This article is important in the broader context of regulating Internet content for children, because the FTC is a major governmental organization involved in the issue. That a governmental organization believes education and self-regulation need to supplement governmental regulation enhances the importance of education and self-regulation, which could otherwise be seen merely as alternatives to government action. This article gives good specifics about the role of the FTC in law enforcement and education, and describes different features of education programs and self-regulating devices; those details could be useful for determining the best way to protect children. Although this article was written by someone in the Bush administration, it is likely that the opinions of Obama’s FTC workers are not too different; protecting children from harmful content on the Internet is a bipartisan issue.
Citation: "Communications Decency Act of 1996: Section 230." 1996. Cornell Law School. 4 April 2009. <http://www4.law.cornell.edu/uscode/47/usc_sec_47_00000230----000-.html>.
This source is a section of Congressional legislation that plays an important role in regulating the filtering of online content, with some particulars relating to filtering content to protect children. Titled “Protection for Private Blocking and Screening of Offensive Material,” Section 230 of the Communications Decency Act (CDA) of 1996 guarantees Internet Service Providers (ISPs) a great deal of legal protection. The section begins by describing the increasingly large role that the Internet was playing in people’s lives in 1996. Congress then establishes the broad principles that guide its policy concerning the Internet. After that, Section 230 lays out protections for ISPs, saying they are not the speakers or publishers of content provided to them by another service and guaranteeing civil protection for efforts made “in good faith” to filter obscene material. The law also requires ISPs to notify parents of parental control filtering programs that they can use to protect their children. Section 230 concludes by describing the previously mentioned provisions’ relationships to other laws and by defining terminology used in the document.
This document relates to my project because it has a large effect on policy concerning the protection of children on the Internet. If ISPs are not considered the authors of any of the works people can access through them, they have less of an incentive to develop effective filtering software. Section 230 of the CDA wants ISPs to act “in good faith” and try to restrict children’s access to harmful material. However, the term “in good faith” is ambiguous and could be interpreted loosely. Despite the problems with the law when it comes to protecting children, it is understandable that Congress decided to side with the ISPs. In 1996, when the law was written, the Internet was a relatively new development, and many people still did not have access to it. As a result, the government wanted to prioritize helping ISPs expand so they could provide services to a greater number of people. Over a decade later, the online landscape has changed significantly, with the vast majority of people in the United States having Internet access. Perhaps Congress should now focus more on promoting the filtering of harmful content and less on supporting the legal and economic interests of the ISPs. This would likely be tricky, because the ISPs would continue to lobby for their position and fight back, and too much government regulation could be seen as violating the Constitution.
Citation: Ashcroft v. ACLU 542 U.S. 656. 2004. Cornell Law School. 4 April 2009. <http://www.law.cornell.edu/supct/html/03-218.ZS.html>.
This document is a Supreme Court decision that ruled the Child Online Protection Act (COPA) unconstitutional. COPA, a law passed by Congress, established a $50,000 fine and six months in prison for knowingly posting content online for commercial purposes that is harmful to minors. A person could avoid conviction for posting such content by making a concerted effort to prevent minors from accessing it. The justices ruled that COPA was unconstitutional because it restricted some speech protected by the first amendment of the US Constitution. The definition of content harmful to minors is broader than the definition of obscenity, which is the type of speech not protected by the first amendment. In the decision, Justice Kennedy also wrote that there were probably more effective alternatives to governmental regulation, such as encouraging parents to use filtering software. According to the majority opinion, the government may restrict free speech only as much as is absolutely necessary to achieve its desired goal, and there was no proof that free speech had to be curtailed as much as COPA required in order to protect children.
Ashcroft v. ACLU is important because it helped to define the legal restrictions on governmental regulation of Internet content for the purpose of protecting children. This case is similar to Reno v. ACLU in that laws were struck down on first amendment grounds because they restricted protected speech. Congress tried to fix the mistakes it made with the CDA by having COPA apply to material harmful to minors, rather than to indecent material. However, applying strict scrutiny, the Supreme Court still found "content harmful to minors" too broad a standard. The case is also relevant to my paper because it explains how the government could legally help regulate Internet content. By suggesting that Congress protect children from potentially threatening content by promoting the use of filtering software, Kennedy is essentially laying out what he believes to be the most constitutionally acceptable method of governmental online content regulation. Note that by promoting filtering, the government would be only indirectly involved in regulation, implying the government cannot fix the problem of youth exposure to harmful content alone.
The article extensively illustrates the development of Web 2.0 and the emergence of Youtube as one of the most popular websites on the Internet. The author then summarizes Youtube’s liability protection under the DMCA’s safe harbor provisions. My interest in this article, however, stems from its discussion of the filtering software used by Youtube. “Youtube recently unveiled a video identification service which would create digital fingerprints of material that content providers wish to have protected.” If a video is uploaded to Youtube that matches the fingerprint of a copyrighted work, the owner can request that it be removed. Extensive tests have already been conducted: in one case, the system caught 18 instances of infringement after a service uploaded over 4,400 hours of content to Youtube. After a copyright owner identifies infringing work, it can either have the material pulled or, more notably, have its own advertisements added to the video. This technology is very appealing to Youtube because adopting it will show courts that it is doing all it can to remove copyrighted material. However, several factors make this protection unappealing. First, the “fingerprints” rely on a library of original content against which to match infringing content. Thus, copyright owners will have to provide an extensive library of material to Youtube before being able to find their illegally uploaded material there. It is similarly unclear whether this technology will be able to identify slightly altered versions of original clips uploaded to the website. Fair Use advocates are equally concerned that the software will remove their own Fair Use works, mistaking them for infringing material.
This is an important article because it discusses Youtube as a company increasingly working for the copyright-holding companies rather than for its own users. Youtube is constantly in danger of copyright litigation: even the DMCA will not protect the company if plaintiffs can prove that Youtube is directly benefitting financially from copyrighted content. By signing deals with content owners that allow the owners to add advertisements to any of their content that was illegally uploaded, Youtube has cleverly created a way to profit from illegal content. Youtube has also signed agreements with content owners to provide studio shows and clips on its services. This mitigates the temptation for users to upload illegal videos, especially if they can watch the legal version on the exact same website. However, by blindly implementing filtering software that automatically flags seemingly copyrighted material, Youtube may be dooming Fair Use works. Instead, Youtube should alter the filtering software so that it only flags videos that are either entirely made up of one video clip or that contain part of a copyrighted video with the corresponding audio from that clip playing as well. Many Fair Use artists will take the video but not the audio portion of a clip and mix it with other clips. Youtube can thus appease the studios and courts while still emphasizing the importance of its community of users, for whom it built the website in the first place.