Internet

Our knowledge about what makes digital commons work is terribly under-theorized.  Yes, there are famous works by Lawrence Lessig and Yochai Benkler, and there are lots of projects and websites based on commoning, such as Wikipedia, free software, Arduino and open access journals, among countless others.  But can we identify core principles for organizing digital commons?  Can we use that knowledge to engineer the evolution of new commons?  Identifying such principles just might let us move beyond “openness” as the ultimate goal of online life, to a more sustainable goal, the self-governed commons.

It has been a pleasure to discover that some computer scientists are actively exploring how Elinor Ostrom’s principles for successful commons might be applied to the design of software.  Consider this intriguing essay title: “Axiomatization of Socio-Economic Principles for Self-Organizing Institutions: Concepts, Experiments and Challenges,” which appeared in the ACM Transactions on Autonomous and Adaptive Systems in December 2012.

The piece is by British electrical and electronic engineer Jeremy Pitt and two co-authors, Julia Schaumeier and Alexander Artikis. The abstract is here.  Unfortunately, the full article is behind a paywall, consigning it to a narrow readership.  I shall quote from the abstract here because it hints at the general thinking of tech experts who realize that the social and the technical must be artfully blended:

We address the problem of engineering self-organising electronic institutions for resource allocation in open, embedded and resource-constrained systems.  In such systems, there is decentralised control, competition for resources and an expectation of both intentional and unintentional errors.  The ‘optimal’ distribution of resources is then less important than the sustainability of the distribution mechanism, in terms of endurance and fairness, based on collective decision-making and tolerance of unintentional errors.  In these circumstances, we propose to model resource allocation as a common-pool resource management problem, and develop a formal characterization of Elinor Ostrom’s socio-economic principles for enduring institutions. 
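Here is a crude sketch of my own – emphatically not the authors’ formal axiomatization – of what one piece of this might look like in code: agents request units from a common pool, and when demand exceeds supply, a collectively chosen rule (here, simple proportional rationing) divides what exists.  The agent names and quantities are invented for illustration.

```python
# A crude sketch, NOT the authors' formal model: ration a common pool
# among competing requests by a simple proportional rule, so that the
# mechanism endures even when some agents over-claim.

def allocate(requests: dict, pool: float) -> dict:
    """Ration a common pool among competing requests, proportionally."""
    total = sum(requests.values())
    if total <= pool:
        return dict(requests)              # enough for everyone
    scale = pool / total                   # fair-share scaling factor
    return {agent: amount * scale for agent, amount in requests.items()}

requests = {"a": 40.0, "b": 25.0, "c": 60.0}   # agent "c" over-claims
print(allocate(requests, pool=100.0))
# -> {'a': 32.0, 'b': 20.0, 'c': 48.0}: sustainable, if not "optimal"
```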

I recently wrote the following essay with John H. Clippinger as part of the ongoing work of ID3, the Institute for Data-Driven Design, which is building a new open source platform for secure digital identity, user-centric control over personal information and data-driven institutions.

As the Internet and digital technologies have proliferated over the past twenty years, incumbent enterprises have nearly always resisted open network dynamics with fierce determination and a narrow ingenuity.  It arguably started with AOL, Lotus Notes and Microsoft MSN (each against the Web and browsers, and in Microsoft’s case against Amazon in books and eventually everything) before moving on to the newspaper industry (Craigslist, blogs, news aggregators, podcasts), the music industry (MP3s, streaming, digital sales, video through YouTube), and telecommunications (VoIP, WiFi).  But these rearguard actions to defend old forms are invariably overwhelmed by new, network-based ones.  The old business models, organizational structures, professional sinecures, cultural norms, etc., ultimately yield to open platforms.

When we look back on the past twenty years of Internet history, we can more fully appreciate the prescience of David P. Reed’s seminal 1999 paper on “Group Forming Networks” (GFNs).[1] “Reed’s Law” posits that value in networks increases dramatically as interactions move from a broadcasting model that offers “best content” (in which value is described by n, the number of consumers) to a network of peer-to-peer transactions (where the network’s value is based on “most members” and mathematically described by n²).  But by far the most valuable networks are those that facilitate group affiliations, Reed concluded.  When users have tools for “free and responsible association for common purposes,” he found, the value of the network soars exponentially, as 2ⁿ – a fantastically large number.   This is the Group Forming Network.  Reed predicted that “the dominant value in a typical network tends to shift from one category to another as the scale of the network increases.…”
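To see how dramatically the three value curves diverge, here is a small illustrative calculation – mine, not Reed’s – in Python; the function names are just labels for the Sarnoff, Metcalfe and Reed scaling laws:

```python
# An illustrative calculation: broadcast value grows as n (Sarnoff),
# peer-to-peer value as n^2 (Metcalfe), and Group Forming Network value
# as 2^n (Reed), since n members can form 2^n possible subgroups.

def sarnoff(n: int) -> int:
    return n            # broadcast: value tracks the number of consumers

def metcalfe(n: int) -> int:
    return n * n        # peer-to-peer: value tracks possible pairings

def reed(n: int) -> int:
    return 2 ** n       # group-forming: value tracks possible subgroups

for n in (10, 20, 30):
    print(f"n={n:2d}  broadcast={sarnoff(n):3d}  p2p={metcalfe(n):4d}  gfn={reed(n):,}")
```

Even at a modest n of 30, the group-forming term is already over a billion, which is the heart of Reed’s argument about where network value migrates.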

Gavin Andresen, the lead scientist for the Bitcoin Foundation (and one of its only two staff members), sat down with a few of us at the UMass Amherst Knowledge Commons meeting on Wednesday.  Having read so much hype and misinformation about Bitcoin over the past few months, I was excited to have a chance to talk with someone directly connected to this brilliant experiment in algorithmic institution-building.  Bitcoin is, of course, the digital currency that has been in the news a lot recently because of its surging value among traders – and its dramatic crash.

For months the dollar value of a Bitcoin fluctuated between $20 and $50, but in mid-March the exchange rate soared to around $250 before crashing last week to $140 and then to $40 yesterday.  (Today it was back up to $95.)  This kind of stuff is catnip to the mainstream press, which otherwise doesn’t know or care much about Bitcoin.

Andresen, a self-described geek in his forties with a pleasant manner and trim haircut, strolled into the small conference room in his black rugby shirt and jeans.  Six of us proceeded to have a wide-ranging, fascinating chat about the functional aspects of Bitcoin, the political and social values embedded in its design, and some of the operational challenges of making Bitcoin a new kind of universal currency. 

For those of you who want a quick primer on Bitcoin, I suggest the New Yorker profile by Joshua Davis in the October 10, 2011, issue; a terrific recent critique by Denis Roio (aka Jaromil), a Dutch hacker who is working to code new sorts of digital money; or the Wikipedia entry on Bitcoin.

Bitcoin is of special interest to me for its remarkable success at solving a serious collective action problem – how to create a digital money so secure and authenticated that no one can steal its value or ruin it as a stable, trusted currency?

The problem that Bitcoin solves as a matter of algorithmic and cryptographic design is the “Byzantine Generals’ problem,” which has been described as “the problem of reaching a consensus among distributed units if some of them give misleading answers.”  As one reference puts it, the problem resembles that of various generals deciding on a common plan of attack:  “Some traitorous generals may lie about whether they will support a particular plan and what other generals told them. Exchanging only messages, what decision-making algorithm should the generals use to reach a consensus?  What percentage of liars can the algorithm tolerate and still correctly determine a consensus?”
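Here is a toy sketch of my own – not the actual Lamport/Shostak/Pease protocol – that conveys the intuition: loyal generals vote for the true plan, traitors vote against it, and a simple majority decides.  The classical result is that genuine Byzantine agreement over unreliable messages requires n ≥ 3f + 1 generals to tolerate f traitors.

```python
# A toy vote, NOT a real Byzantine agreement protocol: loyal generals
# vote the true plan, traitors give misleading answers, and a simple
# majority decides.  Real Byzantine fault tolerance needs n >= 3f + 1
# generals to survive f traitors under unreliable messaging.
from collections import Counter

def majority_vote(n_loyal: int, n_traitors: int, true_plan: str = "attack") -> str:
    votes = [true_plan] * n_loyal
    votes += ["retreat"] * n_traitors      # traitors lie about the plan
    return Counter(votes).most_common(1)[0][0]

print(majority_vote(7, 2))   # 'attack'  – a few liars are outvoted
print(majority_vote(2, 5))   # 'retreat' – too many liars corrupt the consensus
```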

Bitcoin solves this classic problem of achieving coordinated action despite unreliable communication and the ever-present possibility of defections.  Much of this success stems from the startlingly solid cryptography of the system.  The other safeguard, Andresen explained, has been Bitcoin’s “get big quick” strategy.  If enough Bitcoins can be put into circulation quickly, then it becomes much harder for any faction to corner the market in Bitcoins or to compromise their integrity.  This is important because the viability of any currency depends upon the ability of the issuer to prevent counterfeiting or theft – a kind of free riding on the social trust that any community invests in its currency.
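To give a flavor of that cryptography, here is a radically simplified proof-of-work sketch of my own; real Bitcoin hashes 80-byte block headers with double SHA-256 against a far harder difficulty target, and the helper names below are my inventions.  The asymmetry is the point: finding a valid nonce is expensive, while verifying one is nearly free – which is what makes rewriting the shared ledger prohibitively costly for any one faction.

```python
# A radically simplified proof-of-work sketch (illustrative only; this is
# not Bitcoin's actual block format or difficulty scheme).
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Search for a nonce whose SHA-256 digest starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce                   # expensive to find...
        nonce += 1

def verify(block_data: str, nonce: int, difficulty: int = 4) -> bool:
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)   # ...trivial to check

nonce = mine("Alice pays Bob 5 BTC")
print(nonce, verify("Alice pays Bob 5 BTC", nonce))
```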

When I was in Berlin, Matthias Spielkamp of iRights.info interviewed me about the commons, especially the fate of various digital commons such as free software and the future of the Internet itself.  iRights.info is a German website that covers digital and intellectual property issues.  The video of that interview is now online – a short version (6:56) and a long version (24:37).

Our conversation started with “What is the commons?” and moved on to such questions as “Free software is often a niche product.  Has it been a success?”.... “Can there be regulation for the benefit of the commons?”.... and “Has governing the Internet become a public issue, or is it limited to specialized circles?”

The Spanish P2P Wikisprint on March 20

Next Wednesday, March 20, a fascinating new stage in transnational cooperation will arrive when scores of commoners in twenty countries take part in a Spanish P2P Wikisprint, a coordinated effort to document and map the myriad peer-to-peer initiatives in Latin America and Spain.

The effort, hosted by the P2P Foundation, was originally going to be held in Spain only, but word got around in the Hispanic world, and presto, an inter-continental P2P collaboration was declared!  (A Spanish-language version of the event can be found here.)

As described by Bernardo Gutiérrez on the P2P Foundation blog, the Wikisprint will bring together an indigenous collective in Chiapas with a co-working space in Quito; a crowdfunding platform in Barcelona with the open data movement of Montevideo; a hacktivist group in Madrid with permaculturists in Rio de Janeiro’s favelas; and a community of free software developers in Buenos Aires with Lima-based city planners, among many others.

The Wikisprint will map the Spanish-speaking world’s many experiences with the commons, open innovation, co-creation, transparency, co-design, 3D printing, free licensing and p2p politics, among other things.  It will also feature debates, lectures, screenings, speeches, self-media coverage, workshops, network visualizations and videos.

Here is a list of the 20 participating cities.  Anyone can add a new node from a new city.  If you’d like to participate in the Wikisprint, check out this document on the P2P Foundation wiki to see the criteria for inclusion.  There is a special website created for the occasion – Wikisprint.p2pf.org – and a Twitter hashtag, #P2PWikisprint.

The entire event will be peer-to-peer, meaning that communication will take place through an open network topology in which each node connects to the others without passing through any center.  As Gutiérrez notes, “P2P – with its openness, decentralization and collective empowerment – is no longer something marginal.  P2P is a philosophy, working trend and a solid reality.  P2P is the nervous system of the new world.”

For years I have been the rapporteur for the Aspen Institute’s Information Technology Roundtable conference, which every year brings together about 25 technologists, venture capitalists, policy wonks, management gurus and others to discuss topics of pressing concern.  The most recent topic was the “power curve” distributions that tend to result on open network platforms.

This is extensively discussed in my just-released report on the conference, Power-Curve Society:  The Future of Innovation, Opportunity and Social Equity in the Emerging Networked Economy.  The report notes how a globally networked economy allows greater ease of transactions but also requires fewer workers at lower pay, which tends to aggravate wealth and income inequality.  As I write in the introduction to the report:

Although the new technologies are clearly driving economic growth and higher productivity, the distribution of these benefits is skewed in worrisome ways. Wealth and income distribution no longer resembles a familiar “bell curve” in which the bulk of the wealth accrues to a large middle class. Instead, the networked economy seems to be producing a “power-curve” distribution, sometimes known as a “winner-take-all” economy. A relative few players tend to excel and reap disproportionate benefits while the great mass of the population scrambles for lower-paid, lower-skilled jobs, if they can be found at all. Economic and social insecurity is widespread.
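As a back-of-the-envelope illustration of the difference – assumed distributions of my own choosing, not data from the report – compare how much of total income the top 1% captures under a bell curve versus a power curve:

```python
# Assumed distributions, not data from the report: the share of total
# income held by the top 1% under a normal "bell curve" versus a Pareto
# "power curve" of roughly similar scale.
import random

random.seed(1)
bell  = [max(0.0, random.gauss(50_000, 12_000)) for _ in range(10_000)]
power = [random.paretovariate(1.5) * 20_000 for _ in range(10_000)]

def top_1pct_share(incomes):
    ranked = sorted(incomes, reverse=True)
    return sum(ranked[: len(ranked) // 100]) / sum(ranked)

print(f"bell curve:  top 1% holds {top_1pct_share(bell):.1%} of all income")
print(f"power curve: top 1% holds {top_1pct_share(power):.1%} of all income")
```

Under the bell curve the top 1% holds only a sliver more than its population share; under the power curve it captures a hugely disproportionate slice, which is the “winner-take-all” dynamic the report describes.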

The report also looks at Big Data and the coming personal data revolution beneath it that seeks to put individuals, and not companies or governments, at the forefront. Companies in the power-curve economy rely heavily on big databases of personal information to improve their marketing, product design, and corporate strategies. The unanswered question is whether the multiplying reservoirs of personal data will be used to benefit individuals as consumers and citizens, or whether large Internet companies will control and monetize Big Data for their private gain.

The Death of a Hacktivist

Aaron Swartz’s death is a sobering story about the collision of free culture activism with vindictive prosecutorial powers.  It’s also about an amazing tech wizard and the personal costs of his idealism.  Here’s hoping that Swartz’s tragic suicide at age 26 prompts some serious reflection on the grotesque penalties for a victimless computer crime and the unchecked power of federal prosecutors to intimidate defendants.  Perhaps MIT, too, should reflect deeply on its core mission as an academic institution – to help share more knowledge, not fence it off.

Swartz was a hacker-wunderkind, a boy genius who played a significant role in many tech innovations affecting the Internet:  RDF tags for Creative Commons licenses; a version of the RSS specification for syndicating web content; an early version of the platform that became Reddit, the user-driven news website.  In 2006, when I interviewed Swartz for my book Viral Spiral, I was astonished to encounter a 19-year-old kid who had already done the path-breaking technical work I just mentioned.

Swartz had been a junior high school student when he was doing mind-bending coding and design work for the Creative Commons licenses and their technical protocols.  “I remember these moments when I was, like, sitting in the locker room, typing on my laptop, in these debates, and having to close it because the bell rang and I had to get back to class….” 

When a windfall of cash came Swartz’s way following the sale of Reddit to Condé Nast, Swartz did not launch a new startup to make still more money.  He intensified his activism and coding on behalf of free culture.  He sought out new projects that would make information on the Internet more accessible to everyone.

In 2006, he worked with Brewster Kahle of the Internet Archive to post complete bibliographic data for every book held by the Library of Congress – information for which the Library charged fees.  A few years later, working with guerrilla public-information activist Carl Malamud, Swartz legally downloaded a large fraction of the court decisions hosted by PACER (Public Access to Court Electronic Records), the repository of US federal court decisions.  Swartz’s idea was to reclaim documents that taxpayers had already paid for.  Why should we have to pay 10 cents per page to access them?  (Those documents can now be found at Malamud’s site, www.public.resource.org.)

Cloud Computing as Enclosure

As more and more computing moves off our PCs and into “the Cloud,” Internet users are gaining access to a wealth of new software-based services that can exploit vast computing capacity and memory storage.  That’s wonderful.  But what about our freedom to create and share things as we wish, free from corporate or government surveillance or over-reaching copyright enforcement?  The real danger of the Cloud is its potential to limit how we may create and share what we want, on our terms.

There are already signs that large corporations like Google, Facebook, Twitter and all the rest will quietly warp the design architecture of the Internet to serve their business interests first.  A terrific overview of the troubling issues raised by the Cloud can be found in the essay “The Cloud:  Boundless Digital Potential or Enclosure 3.0,” by David Lametti, a law professor at McGill University, published in the Virginia Journal of Law & Technology.  An earlier version is available at the SSRN website.

Lametti states his thesis simply:  “I argue that the Cloud, unless monitored and possibly directed, has the potential to go beyond undermining copyright and the public domain – Enclosure 2.0 – and to go beyond weakening privacy. This round, which I call “Enclosure 3.0”, has the potential to disempower Internet users and conversely empower a very small group of gatekeepers. Put bluntly, it has the potential to relegate Internet users to the status of digital sheep.”

Josh Wallaert, writing at Places Journal (part of the Design Observer Group), “the online journal of architecture, landscape and urbanism,” has a wonderful post about nominally public spaces on the Internet.  The post, called “State of the Commons,” notes:

….Flickr has become a ghost town in recent years, conservatively managed by its corporate parent Yahoo, which has ceded ground to photo-sharing alternatives like Facebook (and its subsidiary Instagram), Google Plus (and Picasa and Panoramio), and Twitter services (TwitPic and Yfrog).  An increasing share of the Internet’s visual resources are now locked away in private cabinets, untagged and unsearchable, shared with a public no wider than the photographer’s personal sphere. Google’s Picasa and Panoramio support creative commons licenses, but finding the settings is not easy. And Facebook, the most social place to share photos, is the least public. Hundreds of millions of people who have photographed culturally significant events, people, buildings and landscapes, and who would happily give their work to the commons if they were prompted, are locked into sites that don’t even provide the option. The Internet (and the mobile appverse) is becoming a chain of walled gardens that trap even the most civic-minded person behind the hedges, with no view of the outside world…..

[Image: Canton Public Library, 1903, Canton, Ohio; entry in the Wiki Loves Monuments USA contest. Photo by Bgottsab, via DesignObserver.com]

For better and worse, public-making in the early 21st century has been consigned to private actors: to activists, urban interventionists, community organizations and — here’s the really strange thing — online corporations. The body politic has retreated to nominally public spaces controlled by Google, Facebook, Twitter and Tumblr, which now constitute a vital but imperfect substitute for the town square. Jonathan Massey and Brett Snyder draw an analogy between these online spaces and the privately-owned public space of Zuccotti Park, the nerve center for Occupy Wall Street, and indeed online tools have been used effectively to support direct actions and participatory democracies around the world.  Still, the closest most Americans get to the messy social activity of cooperative farm planning is the exchange of digital carrots in Farmville.

For anyone scratching their head about how to understand the deeper social and economic dynamics of online networks, a terrific new report has been released by Michel Bauwens called Synthetic Overview of the Collaborative Economy.  Michel, who directs the Foundation for Peer to Peer Alternatives and works with me at the Commons Strategies Group, is a leading thinker and curator of developments in the emerging P2P economy. 

The report was prepared for Orange Labs, a division of the French telecom company, as a comprehensive survey and analysis of new forms of collaborative production on the Internet.  The report is a massive 346 pages (downloadable as a PDF under a Creative Commons BY-NC-SA license) and contains 543 footnotes.  But it is entirely clear and accessible to non-techies.  Unlike so many popular books on this subject, which are either larded with colorful hyperbole and overly long anecdotes or bogged down in arcane technical detail, the Bauwens report cuts to the chase, giving tightly focused analyses of the key principles of online cooperation.  The report is meaty, informative, comprehensive and well-documented.

Two paragraphs from the Introduction give a nice overview:

Two main agents of transformation guide this work. One is the emergence of community dynamics as an essential ingredient of doing business. It is no longer a matter of autonomous and separated corporations marketing to essentially isolated consumers, it is now a matter of deeply inter-networked economic actors involved in vocal and productive communities. The second is that the combined effect of digital reproduction and the increasingly 'socialized' production of value, makes the individual and corporate privatization of 'intellectual' property if not untenable, then certainly more difficult, and in all likelihood, ultimately unproductive. Hence the combined development of community-oriented and 'open' business models, which rely on more 'social' forms of intellectual property.

In this work, we therefore look at community dynamics that are mobilized by traditional actors (open innovation, crowdsourcing), and new models where the community's value creation is at its core (the free software, shared design and open hardware models). We then look at monetization in the absence of private IP. Linked to these developments are the emergence of distributed physical infrastructures, where the evolution of the networked computer is mirrored in the development of networked production and even financing. Indeed the mutualization of knowledge goes hand in hand with the mutualization of physical infrastructures, such as collaborative consumption and peer to peer marketplaces, used to mobilize idle resources and assets more effectively.
