Ever since the World Wide Web went wide in 1994, film studios, music labels and publishers have tried to neuter this unparalleled communications commons.  Much of the Web’s power stems from its open technical standards, most notably hypertext markup language, or HTML, the language used to build webpages.  HTML has always put users, not “content-makers,” in control of content; as a result, people could (for example) copy and save the “source code” of any webpage.  Bottom-up innovation could emerge and prevail.

The truly dismaying news is that the official steward of technical standards for the Web – the World Wide Web Consortium, or W3C – plans to adopt a new set of standards, HTML5, that will let content owners add digital rights management, or DRM, to their web content.  As Cory Doctorow writes on BoingBoing, “the decision to go forward with the project of standardizing DRM for the Web came from Tim Berners-Lee himself [who invented the Web in the early 1990s], who seems to have bought into the lie that Hollywood will abandon the Web and move somewhere else (AOL?) if they don’t get to redesign the open Internet to suit their latest profit-maximization scheme.”

What makes the new HTML5 standards so alarming is that they kick open the door to still other new forms of proprietary control over Web-based video, images, fonts and more.  Danny O'Brien, International Director at the Electronic Frontier Foundation, has a good account of the struggles at the W3C to prevent this outcome, which could lead to the piecemeal privatization of the Web’s infrastructure.

Book publishers love that libraries can act as free marketing venues, introducing readers to new authors and keeping them focused on books.  But publishers don’t like it when libraries act as commons – that is, when they promote easy access and sharing of knowledge.  A successful commons may modestly limit a publisher’s absolute copyright control – and even minor incursions on this authority must be stoutly resisted, publishers believe.     

One of the more egregious such battles now underway is a lawsuit filed by Harvard Business School Publishing, John Wiley and the University of Chicago Press against the Institute for the Study of Coherence and Emergence.  ISCE  is a small, nonprofit membership group that “facilitates the conversation between academics and business people regarding social complexity theory, particularly the implications for the management of organizations.” 

The focus of the publishers’ lawsuit is ISCE’s virtual library of 1,200 books.  May ISCE digitize its own books and lend the digital copies to its members, one user at a time, for private, educational, non-commercial purposes?

The publishers say no, and are seeking to establish their legal authority to shut down such unauthorized “reproduction, display and distribution” of the books.  But ISCE counter-claims that the fair use and first-sale doctrines of copyright law give it the legal right to lend its virtual books.  (Fair use is the doctrine of copyright law that allows limited uses of a work, such as excerpts, without the copyright holder’s permission.  The first-sale doctrine prevents a seller from controlling what a consumer does with a book or DVD after it is purchased, such as renting it, lending it or giving it away.)  ISCE claims, in addition, that libraries are entitled to special-use privileges under copyright law, which apply in this instance.

Here’s a development that could have enormous global implications for the search for a new commons-based economic paradigm.  Working with an academic partner, the Government of Ecuador has launched a major strategic research project to “fundamentally re-imagine Ecuador” based on the principles of open networks, peer production and commoning.   

I am thrilled to learn that my dear friend Michel Bauwens, founder of the P2P Foundation and my colleague in the Commons Strategies Group, will be leading the research team for the next ten months.  The project seeks to “remake the roots of Ecuador’s economy, setting off a transition into a society of free and open knowledge.” 

The announcement of the project and Bauwens’ appointment was made on Wednesday by the Free/Libre Open Knowledge Society, or FLOK Society, a project at the IAEN national university that has the support of the Ministry of Human Resource and Knowledge in Ecuador.  The FLOK Society bills its mission as “designing a world for the commons.” 

The research project will focus on many interrelated themes, including open education; open innovation and science; “arts and meaning-making activities”; open design commons; distributed manufacturing; sustainable agriculture; and open machining.  The research will also explore enabling legal and institutional frameworks to support open productive capacities; new sorts of open technical infrastructures and systems for privacy, security, data ownership and digital rights; and ways to mutualize the physical infrastructures of collective life and promote collaborative consumption.

My friend Silke Helfrich recently wrote a great blog post about the importance of infrastructure to the commons, drawing upon the keynote talk on infrastructure by Miguel Said Vieira at the Economics and the Commons Conference in Berlin in May 2013.  Silke reviewed Miguel's talk, prepared in collaboration with Stefan Meretz, and then added some of her own ideas and examples.  Here is her post from the Commons Blog:

Infrastructure is, IMHO, one of THE issues we have to deal with if we want to expand the commons….Let’s start with a few quotes from the (pretty compelling) framing of the respective stream at ECC, which was called, “New Infrastructures for Commoning by Design.”

"Commons, whether small or large, can benefit a lot from dependable communication, energy and transportation, for instance. Frequently, the issue is not even that a commons can benefit from those services, but that its daily survival badly depends on them. … When we look at commoning initiatives as a loose network, it does not make sense that multiple commons in different fields or locations should have to repeat and overlap their efforts in obtaining those services (infrastructures) independently…“

We need to sensitize commoners about the urgent need for Commons-Enabling Infrastructures (CEI).  That is, we need infrastructures that “by design”:

- foster and protect new practices of commoning;
- help challenge power concentration and individualistic behavior;
- are based on distributed networks (as extensively as possible); and
- provide platforms that enable non-discriminatory access and use rights (for instance, a “ticket-free public transport system” is not cost-free, but it is designed in such a way that the funding of maintenance is not tied to the traveller’s individual budget).

Welcome, the Commons Atlas!

Ellen Friedman and the good folks at the CommonSpark website (“a collective of commons activators”) are in the early stages of assembling a new sort of resource guide for the commons, “The Commons Atlas.”  This innovative project is a collection of online maps, “threat maps,” datasets and tools for creating data visualizations (geospatial maps, timelines, network maps, mindmaps, infographics, etc.) related to the commons.

The diversity of visual systems to locate various commons is wonderful!  If you want to find out where you can locate fruit trees and other edibles for personal gleaning, go to Falling Fruit, Forage Berkeley and Mundraub (Germany).

The atlas includes a map of Maker projects in the US, and a map, “Vivir Bien” (“good living”), that shows where to locate “resources for a solidarity economy.”  Can’t find a place to sit in a city?  Check out Street Seats, which identifies seats and benches where you can sit down in public spaces.

On the Commons Atlas, you can also find the “Bike-sharing World Map” and the Great Lakes Commons Map, which plots people’s stories – along with harms to the lakes – on a map of the Great Lakes.

What makes digital commons work is terribly under-theorized.  Yes, there are famous works by Lawrence Lessig and Yochai Benkler, and there are lots of projects and websites that are based on commoning, such as Wikipedia, free software, Arduino and open access journals, among countless others.  But can we identify core principles for organizing digital commons?  Can we use that knowledge to engineer the evolution of new commons?  Identifying such principles just might let us move beyond “openness” as the ultimate goal of online life, to a more sustainable goal, the self-governed commons.

It has been a pleasure to discover that some computer scientists are actively exploring how Elinor Ostrom’s principles for successful commons might be applied to the design of software.  Consider this intriguing essay title: “Axiomatization of Socio-Economic Principles for Self-Organizing Institutions: Concepts, Experiments and Challenges,” which appeared in ACM Transactions on Autonomous and Adaptive Systems in December 2012.

The piece is by British electrical and electronic engineer Jeremy Pitt and two co-authors, Julia Schaumeier and Alexander Artikis. The abstract is here.  Unfortunately, the full article is behind a paywall, consigning it to a narrow readership.  I shall quote from the abstract here because it hints at the general thinking of tech experts who realize that the social and the technical must be artfully blended:

We address the problem of engineering self-organising electronic institutions for resource allocation in open, embedded and resource-constrained systems.  In such systems, there is decentralised control, competition for resources and an expectation of both intentional and unintentional errors.  The ‘optimal’ distribution of resources is then less important than the sustainability of the distribution mechanism, in terms of endurance and fairness, based on collective decision-making and tolerance of unintentional errors.  In these circumstances, we propose to model resource allocation as a common-pool resource management problem, and develop a formal characterization of Elinor Ostrom’s socio-economic principles for enduring institutions. 
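To make that abstract a bit more concrete, here is a toy simulation of two of Ostrom’s principles rendered as mechanism design – collective choice of the rules, and graduated sanctions that tolerate unintentional errors.  This is purely my own illustrative sketch in Python, not code from the Pitt, Schaumeier and Artikis paper:

```python
import random

# Toy sketch of two Ostrom principles as mechanism design. My own
# illustration only; the paper's formal axiomatization is far richer.

AGENTS, ROUNDS = 10, 50

def simulate(seed: int = 1) -> dict:
    rng = random.Random(seed)
    strikes = {a: 0 for a in range(AGENTS)}
    penalties = {a: 0 for a in range(AGENTS)}
    for _ in range(ROUNDS):
        # Collective choice (Ostrom principle 3): the per-round quota is
        # the median of all agents' proposals, not a central edict.
        proposals = sorted(rng.randint(5, 15) for _ in range(AGENTS))
        quota = proposals[AGENTS // 2]
        for a in range(AGENTS):
            # Unintentional error: ~10% of the time an agent takes too much.
            take = quota + (3 if rng.random() < 0.1 else 0)
            if take > quota:
                strikes[a] += 1
            # Graduated sanctions (principle 5): first offences are
            # tolerated; only repeat offenders forfeit a round's allocation.
            if strikes[a] >= 3:
                penalties[a] += 1
                strikes[a] = 0
    return penalties

print(simulate())
```

In runs of this toy, agents accumulate occasional strikes but are almost never penalized and never expelled – the “tolerance of unintentional errors” that the abstract emphasizes.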

I recently wrote the following essay with John H. Clippinger as part of the ongoing work of ID3, the Institute for Data-Driven Design, which is building a new open source platform for secure digital identity, user-centric control over personal information and data-driven institutions.

As the Internet and digital technologies have proliferated over the past twenty years, incumbent enterprises have nearly always resisted open network dynamics with fierce determination and a narrow ingenuity.  It arguably started with AOL (vs. the Web and browsers), Lotus Notes (vs. the Web and browsers) and Microsoft MSN (vs. the Web and browsers, and Amazon in books and eventually everything) before moving on to the newspaper industry (Craigslist, blogs, news aggregators, podcasts), the music industry (MP3s, streaming, digital sales, video through streaming and YouTube), and telecommunications (VoIP, WiFi).  But these rearguard actions to defend old forms are invariably overwhelmed by the new, network-based ones.  The old business models, organizational structures, professional sinecures, cultural norms, etc., ultimately yield to open platforms.

When we look back on the past twenty years of Internet history, we can more fully appreciate the prescience of David P. Reed’s seminal 1999 paper on “Group Forming Networks” (GFNs).[1]  “Reed’s Law” posits that value in networks increases exponentially as interactions move from a broadcasting model that offers “best content” (in which value is described by n, the number of consumers) to a network of peer-to-peer transactions (where the network’s value is based on “most members” and mathematically described by n²).  But by far the most valuable networks are those that facilitate group affiliations, Reed concluded.  When users have tools for “free and responsible association for common purposes,” he found, the value of the network soars to 2ⁿ – a fantastically large number.  This is the Group Forming Network.  Reed predicted that “the dominant value in a typical network tends to shift from one category to another as the scale of the network increases.…”
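The arithmetic behind Reed’s claim is easy to check.  A few lines of Python (my own illustration, not Reed’s) show how the dominant term shifts as n grows:

```python
# The three network-value regimes in Reed's paper: broadcast value grows
# as n, peer-to-peer (Metcalfe) value as n^2, and group-forming value as
# 2^n, the number of possible subgroups among n members.

def broadcast_value(n: int) -> int:
    """Sarnoff-style value: one broadcaster, n consumers."""
    return n

def p2p_value(n: int) -> int:
    """Metcalfe-style value: pairwise connections scale as n^2."""
    return n * n

def gfn_value(n: int) -> int:
    """Reed's Law: every subset of members can form a group, so ~2^n."""
    return 2 ** n

for n in (10, 20, 30):
    print(f"n={n:>2}  broadcast={broadcast_value(n):>3}  "
          f"p2p={p2p_value(n):>6}  groups={gfn_value(n):>13,}")
```

Even at n = 30, the number of possible groups is already past a billion, dwarfing the n² pairwise term – which is why Reed expected group-forming value to dominate as networks scale.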

Gavin Andresen, the lead scientist for the Bitcoin Foundation (and one of its only two staff members) sat down with a few of us at the UMass Amherst Knowledge Commons meeting on Wednesday.  Having read so much hype and misinformation about Bitcoin over the past few months, I was excited to have a chance to talk to someone directly connected with this brilliant experiment in algorithmic institution-building.  Bitcoin is, of course, the digital currency that has been in the news a lot recently because of its surging value among traders – and its dramatic crash.  

For months the dollar value of a Bitcoin fluctuated between $20 and $50… but in mid-March the exchange rate soared to around $250 before crashing last week to $140, and then to $40 yesterday.  (Today it was back up to $95.)  This kind of stuff is catnip to the mainstream press, which otherwise doesn’t know much or care much about Bitcoin.

Andresen, a self-described geek in his forties with a pleasant manner and trim haircut, strolled into the small conference room in his black rugby shirt and jeans.  Six of us proceeded to have a wide-ranging, fascinating chat about the functional aspects of Bitcoin, the political and social values embedded in its design, and some of the operational challenges of making Bitcoin a new kind of universal currency. 

For those of you who want a quick primer on Bitcoin, I suggest the New Yorker profile by Joshua Davis in the October 10, 2011, issue; a terrific recent critique by Denis Roio (aka Jaromil), a Dutch hacker who is working to code new sorts of digital money; or the Wikipedia entry on Bitcoin.

Bitcoin is of special interest to me for its remarkable success at solving a serious collective action problem – how to create a digital money so secure and well-authenticated that no one can steal its value and ruin it as a stable, trusted currency?

The problem that Bitcoin solves as a matter of algorithmic and cryptographic design is the “Byzantine Generals problem,” which has been described as “the problem of reaching a consensus among distributed units if some of them give misleading answers.”  As one reference describes it, the problem has been compared to that of various generals deciding on a common plan of attack:  “Some traitorous generals may lie about whether they will support a particular plan and what other generals told them. Exchanging only messages, what decisionmaking algorithm should the generals use to reach a consensus?  What percentage of liars can the algorithm tolerate and still correctly determine a consensus?”
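A toy, one-round version of the generals’ dilemma is easy to simulate.  In this sketch (entirely my own, in Python), loyal generals broadcast the same order to everyone while traitors tell each listener a different story; each loyal general then takes a majority vote over what it heard.  A single all-to-all round like this works as long as the loyal generals start in agreement and outnumber the liars; the celebrated requirement that n > 3f (n generals, f traitors) arises in the harder setting where orders must be relayed through possibly lying intermediaries, which this sketch deliberately omits:

```python
import random

def one_round(n: int, traitors: set, rng: random.Random) -> dict:
    """One all-to-all broadcast round; returns each loyal general's decision."""
    received = [[] for _ in range(n)]
    for sender in range(n):
        for listener in range(n):
            if sender in traitors:
                # Traitors lie inconsistently: a different story per listener.
                received[listener].append(rng.choice(["attack", "retreat"]))
            else:
                # Loyal generals tell everyone the same thing.
                received[listener].append("attack")
    # Each loyal general decides by simple majority of the messages it heard.
    return {g: max(set(received[g]), key=received[g].count)
            for g in range(n) if g not in traitors}

rng = random.Random(42)
print(one_round(n=7, traitors={0, 5}, rng=rng))  # the loyal five still agree
```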

Bitcoin solves this classic problem of achieving coordinated action when communication is unreliable and some participants may defect.  Much of this success stems from the startlingly solid cryptography of the system.  The other safeguard, Andresen explained, has been Bitcoin’s “get big quick” strategy.  If enough Bitcoins can be put into circulation quickly, then it becomes much harder for any faction to corner the market in Bitcoins or to compromise their integrity.  This is important because the viability of any currency depends upon the ability of the issuer to prevent counterfeiting or theft – a kind of free riding on the social trust that any community invests in its currency.
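The cryptographic heart of that solution is proof-of-work: to add a block of transactions to the shared ledger, a miner must find a nonce that makes the block’s hash meet a difficulty target, so rewriting history means redoing an enormous amount of computation.  Here is a minimal hashcash-style sketch of the idea (my own illustration; real Bitcoin double-hashes an 80-byte binary block header with SHA-256 against a dynamically adjusted target):

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> tuple:
    """Search for a nonce whose hash has `difficulty` leading zero hex digits."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}|{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("prev-hash|transactions|timestamp")
print(nonce, digest)
# Finding the nonce takes thousands of hash attempts; checking it takes one.
# A counterfeiter would have to redo this search for every subsequent block.
```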

When I was in Berlin, Matthias Spielkamp of iRights.info interviewed me about the commons, especially the fate of various digital commons such as free software, and the future of the Internet itself.  iRights.info is a German website that covers digital and intellectual property issues.  The video of that interview is now online – a short version (6:56) and a long version (24:37).

Our conversation started with “What is the commons?” and moved on to such questions as “Free software is often a niche product. Has it been a success?”.... “Can there be regulation for the benefit of the commons?”.... and “Has governing the Internet become a public issue, or is it limited to specialized circles?”



The Spanish P2P Wikisprint on March 20

Next Wednesday, March 20, a fascinating new stage in transnational cooperation will arrive when scores of commoners in twenty countries take part in a Spanish P2P Wikisprint, a coordinated effort to document and map the myriad peer-to-peer initiatives that exist in Latin America and Spain.

The effort, hosted by the P2P Foundation, was originally going to be held in Spain only, but word got around in the Hispanic world, and presto, an inter-continental P2P collaboration was declared!  (A Spanish-language version of the event can be found here.)

As described by Bernardo Gutiérrez on the P2P Foundation blog, the Wikisprint will bring together an indigenous collective in Chiapas with a co-working space in Quito; a crowdfunding platform in Barcelona with the open data movement of Montevideo; a hacktivist group in Madrid with permaculturists in Rio de Janeiro’s favelas; and a community of free software developers in Buenos Aires with Lima-based city planners; among many others.

The Wikisprint will map the Spanish-speaking world’s various experiences with the commons, open innovation, co-creation, transparency, co-design, 3D printing, free licenses and P2P politics, among other things.  It will also feature debates, lectures, screenings, speeches, self-media coverage, workshops, network visualizations and videos.

Here is a list of the 20 participating cities.  Anyone can add a new node from a new city.  If you’d like to participate in the Wikisprint, check out this document on the P2P Foundation wiki to see the criteria for inclusion.  There is a special website created for the occasion – and a Twitter hashtag, #P2PWikisprint.

The entire event will be peer-to-peer, meaning communication will take place through an open network topology in which each node is connected to the others without passing through any center.  As Gutiérrez notes, “P2P – with its openness, decentralization and collective empowerment – is no longer something marginal.  P2P is a philosophy, a working trend and a solid reality.  P2P is the nervous system of the new world.”
