"The goal of the Kuali Open Library Environment Project is to define a next-generation technology environment based on a thoroughly re-examined model of library business operations. The model will then be used to develop specifications for a next generation community-sourced library management system, Kuali OLE (pronounced oh-LAY). This software system will be a part of the academic enterprise technology framework and will scale up to connect with other enterprise technology systems within the academic and administrative computing environment. The software system will also be capable of scaling down for stand-alone library use."
This book is a guide – as its title suggests – to all things digital when it comes to music. It serves not so much as an analysis of copyright in the music industry as a whole, but rather as a set of legal and technical guidelines so that one may participate in the consumption and production of digital music without infringing copyright. In other words, it describes for the reader all of the ins and outs of the digital music industry so that one may know where one's practices stand under the law.
Hill’s book has entire chapters devoted to assessing what is legal, what is not, and how to go about participating in sanctioned musical practices. He identifies a list of acceptable file-sharing websites, offering his own commentary on why these are acceptable and why others are forbidden. The book begins with a basic introduction to the technologies and methods used in the digital realm and then goes deeper, listing available services and commenting on the merits of various practices. His advice is clear and he condones no illegal activity, yet he makes clear why certain people might be motivated to circumvent copyright law when it comes to digital music. He further lists the specific file types and programs used in these practices and identifies useful software. He finishes the book with another broad chapter on the “Conscience of Digital Music” as a whole, along with his prediction of the future of the industry.
Hill’s technological knowledge is a key strength of this book, and it has allowed me to delve deeply into the details of digital music production and sharing. He explains these issues in simple terms while still conveying the complexity of their implications. In writing my final paper, the technological terms and details from this book will provide much-needed expertise in a field in which I am not especially well-versed. In my analysis of the acceptability of digital sampling, I must first know how the practice works and what techniques are involved; this book offers that knowledge, which is key to reaching a conclusion in my final paper on what sampling is acceptable within copyright law.
tagged: appropriation, bootleg, bootlegging, burning, copyright, copyright_infringement, digital_music, digital_sampling, downloading, file-sharing, grokster, kazaa, mix-cd, mp3, music, peer-to-peer, piracy, remixing, ripping, sampling, sharing, software, song (by minglet on 25-NOV-08)
This article addresses the common belief that software piracy harms both firms and consumers. Firms are hurt financially by lower profits as more people acquire copied products; paying customers are hurt by the higher prices firms charge to offset lost sales revenue. The model this article presents, however, suggests that even with significant piracy, firm profits can rise and consumer prices can fall. In addition, the article calls piracy an efficient "gift-giving" method: the product is made available to the public to increase its circulation, but it reaches only those who want it, so the software does not end up discarded by someone who has no use for it. The author compares piracy to a firm mailing free copies to all computer owners in an attempt to make its product better known. Not only would many of those copies be discarded by people who never wanted them in the first place, but the firm would also have had to pay for the copies to be made and distributed. With piracy, the firm receives free advertising.
Although this article deals directly with software and piracy, I found its argument relevant to my own. Just as pirates serve as free advertising for software firms, pirates in the fashion industry help circulate news of the most current and popular trends. The top designers do not have to pay to make their designs known to the public in this way, and they can be sure that those concerned about fashion are the ones buying the copies.
Scientists in a variety of disciplines (e.g., biology, ecology, astronomy) need access to scientific data and flexible means for executing complex analyses on those data. Such analyses can be captured as 'scientific workflows' in which the flow of data from one analytical step to another is captured in a formal workflow language. The Kepler project's overall goal is to produce an open-source scientific workflow system that allows scientists to design scientific workflows and execute them efficiently using emerging Grid-based approaches to distributed computation. Kepler is based on the Ptolemy II system for heterogeneous, concurrent modeling and design. Ptolemy II was developed by the members of the Ptolemy project at UC Berkeley. Although not originally intended for scientific workflows, it provides a mature platform for building and executing workflows, and supports multiple models of computation.
Metacat is a flexible metadata database. It utilizes XML as a common syntax for representing the large number of metadata content standards that are relevant to ecology. Thus, Metacat is a generic XML database that allows storage, query, and retrieval of arbitrary XML documents without prior knowledge of the XML schema.
The Metacat database models XML documents as a DOM tree, decomposing each document into its nodes and storing the node data as a series of records in a relational database via a JDBC connection. At this point, only Oracle and PostgreSQL have been tested as backend databases, but we have avoided RDBMS-specific features in order to maintain portability to other relational databases.
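The node-decomposition approach described above can be sketched in a few lines. The table name and columns below (`xml_nodes`, `nodeid`, `parentid`, `nodetype`, `nodename`, `nodedata`) are illustrative assumptions, not Metacat's actual schema, and SQLite stands in for Oracle/PostgreSQL:

```python
import sqlite3
import xml.etree.ElementTree as ET

def shred(xml_text, db):
    """Decompose an XML document into one relational row per DOM node."""
    db.execute("""CREATE TABLE IF NOT EXISTS xml_nodes (
                      nodeid   INTEGER PRIMARY KEY,
                      parentid INTEGER,
                      nodetype TEXT,
                      nodename TEXT,
                      nodedata TEXT)""")
    counter = [0]  # mutable counter shared with the nested walker

    def walk(elem, parent_id):
        counter[0] += 1
        node_id = counter[0]
        db.execute("INSERT INTO xml_nodes VALUES (?, ?, ?, ?, ?)",
                   (node_id, parent_id, "ELEMENT", elem.tag,
                    (elem.text or "").strip()))
        for name, value in elem.attrib.items():
            counter[0] += 1
            db.execute("INSERT INTO xml_nodes VALUES (?, ?, ?, ?, ?)",
                       (counter[0], node_id, "ATTRIBUTE", name, value))
        for child in elem:
            walk(child, node_id)

    walk(ET.fromstring(xml_text), None)
    return counter[0]

# Once shredded, queries become ordinary SQL over the node table, with no
# prior knowledge of the document's XML schema required:
db = sqlite3.connect(":memory:")
total = shred("<dataset id='d1'><title>Soil cores</title></dataset>", db)
title = db.execute(
    "SELECT nodedata FROM xml_nodes WHERE nodename = 'title'").fetchone()[0]
```

Storing the parent id with each node is what lets path-style queries be answered as self-joins on the one table, which is why no per-schema DDL is needed.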
Metacat is implemented as a Java Servlet and therefore communicates using basic HTTP semantics. The figure below shows the basic structure of the Metacat architecture. Metacat presents a well-defined interface for inserting, updating, deleting, querying, and transforming (using XSL) XML documents. We would like to add the DOM API as an alternative supported mechanism for interacting with Metacat, but have not yet implemented this functionality.
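Because every operation travels over plain HTTP, a client needs nothing more than a form-encoded POST. The sketch below builds such requests; the endpoint URL and the parameter names (`action`, `docid`, `doctext`) are illustrative assumptions, not a transcription of Metacat's documented servlet API:

```python
from urllib.parse import urlencode

# Hypothetical deployment URL; a real Metacat instance would live elsewhere.
BASE_URL = "http://example.org/metacat/servlet"

def insert_request(docid, xml_text):
    """Form-encoded body for storing an XML document under an identifier.
    Parameter names are assumed for illustration only."""
    return urlencode({"action": "insert", "docid": docid, "doctext": xml_text})

def read_request(docid):
    """Form-encoded body for retrieving a stored document by identifier."""
    return urlencode({"action": "read", "docid": docid})

body = insert_request("demo.1.1", "<dataset><title>Soil cores</title></dataset>")
# Sending it would then be a single call:
#   urllib.request.urlopen(BASE_URL, body.encode())
```

Keeping the interface at this level is what makes it language-neutral: any client that can POST a form can insert, query, or retrieve documents.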
Free 30-day trials of most Adobe software, including Photoshop, Dreamweaver, and Illustrator. Great for trying a piece of software before committing to buying it, and great if you're working on a project and don't need more than 30 days to complete it.
Van der Linden spends quite a bit of time railing against the inferiority of prevailing proprietary software standards, but also notes that Linux has a long way to go, especially in the areas of software availability and integration. When asked about the biggest barrier, he states that it is the fact that Linux is not already number one. While this is not a specific failing of the open source model, the fact that (at least on the desktop) open source came along fairly late in the game, and with substantially less marketing clout, suggests that there are perhaps markets where Linux is not destined to succeed.
The first two examples of reverse engineering that the article gives are open source projects. The ability of open source developers to reverse engineer the competing instant messaging clients developed by internet companies like Yahoo, AOL, and Microsoft has had a dual effect – firstly, the article points out, it has allowed innovation by letting third-party developers (open source or otherwise) create hybrid programs that bridge the inherent gaps between these incompatible protocols. Additionally, the presence of quality open source messaging software has helped to further the legitimacy of open source platforms such as Linux.
The second example is that of Samba, an open source program that allows Microsoft Windows-based file-sharing services to be hosted on, or accessed from, any number of platforms. Because of Samba, users of Apple’s Mac OS X or Linux can interoperate with Microsoft Windows networks. The article points out that this too has lent legitimacy to Linux as a platform and helped it compete in a world of proprietary standards.
Because of the decentralized nature of the open source movement, uses of technology that require strict licenses are necessarily limited, as there is no governing body to obtain licenses and regulate their use. This is especially true of licenses that prohibit disclosure of the underlying technology, as the license from the DVD Copy Control Association does. As a result, the extremely aggressive legal tactics of the content-owning industry pose a potential threat to the ability to choose what computer software to use, although it is interesting to note that it is not clear they have actually hindered the open source movement.
There is an established idea in the usability community that software developers do not make good usability designers. This proves problematic for the open source movement, since one of the central tenets is that the software is conceived and developed by individual software developers. There is neither outside perspective available to mandate the hiring of usability professionals, nor capital available to do so. Usability professionals, the paper states, are not prevalent in open source projects the way that developers are because there are fewer of them to begin with, and therefore fewer peers to recognize any individual contributions to usability – peer recognition being one of the most agreed upon incentives for open source development.
The paper outlines some of the other problems related to usability in open source, notably that usability design works best when done before any software development, anathema to the open source model of progressive improvement on rough development. Furthermore, many open source projects try to emulate commercial software, leaving little room for usability innovation. Finally, in a collaborative community with little central authority, it is logistically delicate to remove excessive functionality that may confound usability.
By way of introduction, the paper makes two points – first the obvious point that a complete abandonment of traditional property rights in favor of totally open licensing would have taken away the very thing that had made these companies successful in the first place – the proprietary differences between their software and their competitors. It points out as well that an initial hurdle for a potential alliance between corporation and open source is the latter’s lack of central management – with whom can a corporation negotiate without a central leader to definitively represent an open source project as large as, say, Linux?
The first case study is that of Apple, a company that faced increasing obsolescence of its core operating system (Mac OS) by the mid-1990s, and was unable to come up with a viable proprietary alternative. Apple’s strategy was to “embrace and enhance” existing open source technologies, and to this end it made headlines when it released the core of its new operating system, Mac OS X, as a fully open source project. It retained its competitive advantage, however, by releasing only material which was essentially already available, keeping proprietary the graphical interface which differentiated its product from competitors’ and other high-level components.
IBM embraced open source products in a similar way when it chose Apache, the open source web server, as the basis for its new line of server products. This adoption proved a boon for the Apache project, which gained the support of a major corporation. IBM’s adoption of Linux came later, but Linux’s portability (one of the foci of the open source movement) eventually allowed IBM to use it as the standard platform for a variety of products. In IBM’s commercial model, money is made not on the products themselves but on pairing software with hardware, support, consulting, and other services.
Sun, although initially hesitant to embrace open source, eventually opened up several of its projects under restrictive licenses that allowed people to view and modify the source, but not to redistribute it for profit without paying royalties. In this way, Sun protected its property rights and proprietary advantage while reaping the benefits of community involvement with and contribution to its products.
Two important points can be drawn from these cases and from the article itself: firstly it is interesting to note that in the first two cases, where companies adopted previously existing products, they adopted products whose licenses allowed commercial derivative works. The license governing Linux and many other open source projects does not allow this; this is an important distinction. The second point is the contrast between Apple and Sun’s strategy – open parts vs. partly open. While Apple retains competitive advantage by opening only parts of their product (open parts), Sun retains their advantage by opening their products with important limitations that preserve that advantage (partly open).
Krishnamurthy, Sandeep. Cave or Community? An Empirical Examination of 100 Mature Open Source Projects, May 2002.
The value of this paper is captured in its subtitle: “An empirical examination of 100 mature open source projects.” The author used as his source one of the premier open source project management websites, home to tens of thousands of projects, and picked a sample that had reached the highest development level – “mature.” As the paper notes, the projects sampled had been in existence for 18 months on average and had released several versions of their product, therefore having the best chance of representing the community development possible within open source projects.
The paper’s most dramatic finding is that most mature open source projects are fairly small – the median number of developers in the sampled projects was four, with a lone developer being the most common case.
Other findings included that most projects did not generate very much discussion, in contrast to the portrait painted by the media of a bustling, communicative group of developers. The study found that products with more developers were viewed and downloaded more often, and also that products with more developers had smaller leadership bases.
Although this study is straightforward and not accompanied by a wealth of discussion, its findings go a long way toward discrediting much of the prevailing image of the open source project. Even my own tone in compiling this bibliography suggests that most projects are large networks of disparate talent, collaborating to create products that are extensively peer-reviewed for quality. This study shows that this is not necessarily the case, although it is unclear what subjective level of success the surveyed projects had obtained – projects selected for other variables could yield different data, and the discussion in the study suggests that projects at other stages of life (earlier than “mature”) could exhibit different characteristics as well.
The paper draws an interesting comparison between the corporate sponsorship of the open source movement, which the literature suggests is related to scientific research both in its driving motivations (Bonaccorsi and Rossi, 2003) and its origins, and the employment of the scientific community by pharmaceutical companies. The benefits of the volunteer open source and academic scientific communities are similar, and companies find success in leveraging these benefits by sponsoring those communities.
An essential point is made when the article points out that open source projects have been most effective in communities where the users are technically-minded, presumably because these users are more willing and able to compensate for the open source community’s lack of progress in the areas of user friendliness and documentation (Bonaccorsi and Rossi, 2003). The paper describes the history and structure of four specific successful open source projects; all of them products meant for system administrators and programmers, rather than end users. This and the paper’s characterization of the open source community as “elitist” seem to support my contention that a community of technically-minded developers creates products suitable for technically-minded users, rather than everyday end users.
The paper discusses at length the contrast between the open source model of leadership, in which there is no “formal authority” that must be obeyed, but only the “real authority” of respected peers who have made leading contributions that are congruent with the developer’s goals. The paper cites evidence that there is little mirroring in commercial software development operations of the open source principles of community visibility of individual contributors and the general desire to make the project accessible to all potential contributors.
Three commercial strategies embracing open source are outlined: the symbiotic relationship, in which companies provide components of an open source project that are either missing from or complementary to the community’s development; the “code release” strategy, in which companies find it profitable to release internally-developed code as open source in order to stimulate other parts of their profit model; and the support model, in which companies provide products that assist the mass success of the open source model itself, rather than working within it.
In its examination of the first question (why do people work on open source projects?), this paper highlights a point essential to my thesis – that there is a substantial group of software users who are incapable of being software developers, i.e. that the “users as developers” model (von Hippel, 2001) is at best partially true. The actual development of open source software is done by the subset of users who are computer hobbyists or “hackers.” The article lists several potential motivations: an intellectual gratification similar to that found in scientific research, a passion for the art form of software development, a pleasure taken in an unrestricted creativity not found in today’s corporate world, and (as in Crowston, et al, 2003) visibility to potential employers.
This paper describes the genesis of an open source project as stemming from an “unfilled market” – an individual has a problem for which no commercial product exists, identifies others facing the same problem, and as progress is made in solving that problem, the community of people working to solve their common problem builds and is fostered by constant communication of progress. Leadership emerges naturally from this process – those most involved in the project and most willing/able to progress the project become natural leaders. Specific tasks are not delegated – the project relies on the willingness of its members to solve problems of their choosing as they arise. If this does not effectively solve the project’s problems, an impasse is created, and the project will fade.
This model relies on the assumption that all problems faced by a project will be interesting to and solvable by some member of that project. This paper points out that this is not always the case – certain “non-sexy” problems, including user-friendliness, documentation, and support, fall by the wayside. Their solution in the open source community has come from commercial ventures with a “hybrid” business model – that is, ventures that rely on volunteer efforts for the product itself, but profit by providing the elements that volunteers do not. This establishes the essential symbiosis between open source projects and commercial ventures, in which the benefits of the volunteer/community model are preserved while corporate sponsorship lends both security and profitable, requisite gap-filling.
This paper is less clear in its answering of the third question (how can open source projects challenge established commercial standards?) although it introduces two important points. The first is that there exists “the tendency for that which is ahead to get further ahead, for that which loses advantage to lose further advantage.” The second point the paper introduces is that choice of a product is influenced less by total popularity of that product and more by popularity within a social network.
Crowston, Kevin et al. Defining Open Source Software Success, Twenty-Fourth International Conference on Information Systems, 2003.
This is a paper in the scholarly tradition that examines previous attempts to define what makes an open source software project successful, and then “reexamines” the culture of open source projects to suggest new measures. The article closes by analyzing measures of success suggested by a primary source from the open source community – the forums on a popular site dedicated to the movement.
The traditional measures of success listed are fairly predictable – most notably “quality” in the general sense. A number of measures from computer science literature three decades old are listed – among them understandability, completeness, testability, and efficiency. Proposed measures of user satisfaction in the open source community are dismissed as non-representative, since measurable feedback requires involvement in the projects themselves – hardly a complete sample of users, and unlikely to encompass users with negative opinions. Amount of use is similarly difficult to measure due to the idiosyncrasies of open source distribution.
The paper’s own suggestions include measures much more specific to open source development: project output, level of activity on a project in a given timeframe, professional development as a result of contribution to projects, etc. While these measures certainly speak to contributors to open source projects, questions must be raised about the overarching goals of open source projects that these measures imply. If the goal is solely to stimulate the developers involved, then these measures can be seen as appropriate; however, this hobby-group mentality cannot be solely responsible for the fact that large parts of the global economy rely on open source. These measures say nothing about the need to create quality products that serve a real purpose, or the need to serve that purpose better than competing commercial products.
Tellingly, the article’s analysis of forum comments by those involved in open source projects reveals a similarly self-centered attitude. The top three measures of success found in this analysis were developer satisfaction, user involvement, and developer involvement, with user satisfaction – an important concern in the development of any product – ranking only fourth. It seems curious that open source has seen success as a model at all, given that the satisfaction of those creating the products is valued so much more highly than the satisfaction of those actually using them. While other literature specifically points out that developer and user are supposed to be one and the same (von Hippel, 2001), this is only partially true in software development (Bonaccorsi and Rossi, 2003), and I argue that open source will enjoy only niche success if a developer-centric attitude prevails.
The article acknowledges the prevailing wisdom that user innovation communities “shouldn’t exist,” and that product development has traditionally been the domain of manufacturers and commercial enterprises in general. These commercial enterprises benefit from economies of scale: a product can be developed once, sold to many users, and protected from competition, in a way that a lone user developing a product and seeking marketplace protection for it cannot match.
However, the article continues, these communities do exist, and can be even more successful than competing commercial ventures. It outlines three conditions for the existence of successful user innovation communities: a sufficient incentive to innovate, an incentive to reveal any innovations made, and a competitive distribution of such innovations relative to commercial products.
The article’s most salient emphasis, however, is on the sensitivity to specific user needs possible in user-innovated products. While this is repeatedly cast as a positive point – the users themselves naturally have better and more up-to-date information about their needs, whereas manufacturers with conflicting goals could create sub-optimal products, imposing an “agency cost” on users – it has negative aspects as well. This condition is only a positive one when communities are homogeneous, that is, when they all have the same needs. Given similar communities with slightly different needs, the tendency to create products that conform perfectly to those needs could produce a fragmented marketplace of many similar products with essentially superficial differences. Such fragmentation could hinder the ability of any one product to gain enough momentum to continually fund or stimulate development. In the software case specifically, a fragmented market also decreases compatibility, an issue of paramount importance in today’s networked world.
Blizzard Entertainment sued a group of volunteer gamers who created free, noncommercial, open-source software to allow Blizzard game owners to play the games over the Internet. Claiming that the gamers reverse engineered Blizzard’s own Battle.net server software to make their own BnetD server software, Blizzard cited anti-circumvention violations of the Digital Millennium Copyright Act. Both Battle.net servers and BnetD servers were available for free online to enable online game play. However, BnetD was created as an alternative to Battle.net to fix some connection difficulties that some users encountered while using Battle.net.
Blizzard attempted to stop distribution of BnetD, alleging that the software had been used to permit play of pirated Blizzard games. However, the volunteer developers did not design BnetD for this purpose, nor were they using it for this purpose. The free software was a legitimate use and could not be bluntly labeled a piracy device. Blizzard argued that the developers reverse engineered sections of the game, thus violating Blizzard’s End User License Agreement (EULA). The Electronic Frontier Foundation (EFF) represented the programmers and argued that BnetD was a legal, free product that worked with the original product to benefit game owners. The court ruled in favor of Blizzard, ultimately holding that the reverse engineering and emulation of Blizzard software in this case were illegal.
The consequences of the ruling are detrimental to game upgrades and user enhancements. If this decision sets the precedent, user-developed programs that work with original products would be banned, and consumer choice would be limited to the products companies make available. Since users would be authorized to use a certain company’s products only with that same company’s accessories, the impact on software and game products would be profound. By analogy, imagine if Brand A’s eraser had to be used with Brand A’s pencil. What would happen if computer users were forced to run only Microsoft products on Microsoft Windows? What if gamers could play certain games only with specific designated programs and accessories? Inevitably, such a precedent would drastically reduce competition in the marketplace and stifle both innovation and user-generated creativity.
Call#: GA108.7 .C53 1992
tagged: bibliographical, bibliography, books, cartography, citations, database, ethics, geographic, grammar, guides, information, library, management, organization, papers, plagiarism, references, research, scholarly, software, spatial, statistics, tools, writing (by nmperez on 27-OCT-06)