Category: Technology

Bootstrapping complexity

So, last week I posted my remix of Kevin Kelly’s book “Out of Control”. And soon after putting the remix online, I sent a note with a link to Kevin Kelly to make him aware of the remix, hoping that he would approve.

He did approve. Much more than I expected. And it didn’t take him long to reply:

I LOVE the remix! I wish you had been my editor. There is only one thing missing from this fantastic remix – a better title. I was never happy with the book’s title and now that it is more focused, the need is even greater. What would you call it?

Whoa! Initially, I hadn’t considered changing the title as I wanted to make it as clear as possible where the material came from. Good titles are notoriously difficult to find, and I’m sure that Kevin has thought quite a bit about this one.

Considering the remix as a whole new work, I found that it was the notion of bootstrapping and self-organization that had kept me reading the book in the first place: the recurring patterns of self-sustaining systems, which I knew would be summed up at the end of the book. What appealed to me was the fact that the book not only describes self-organization but also invites further experimentation.

So I picked my title with that in mind: “Bootstrapping Complexity” plays on the fact that the book describes not only how complexity comes about but also how complex a venture self-organization really is. In this way, the title is meant to signal a positive empowerment to explore self-organization – both by reading the book and by experimenting on the basis of it.

I’ve updated the remix with the new title. The new PDF version is here.

Out of control – remixed

This summer, I read Kevin Kelly’s book Out of Control. It is a fascinating book full of fascinating ideas, ranging across artificial intelligence, evolution, biology, ecology, robotics and more to explore complexity, cybernetics and self-organising systems in an accessible and engaging way.

But as I read the book, I also found it suffering from a number of frustrating flaws: not only is it way too long-winded, it is also almost completely devoid of meta-text to help the reader understand what Kelly is trying to do with his book (having read the book, I’m still wondering).

Indeed, reading the book I got the feeling that Kelly was trying to combine several different books into one: there is a fascinating study of self-sustaining systems, but there is also a sort of business-book take on the network economy, and an extended meditation on evolution and post-Darwinism.

I’m sure that to Kelly, all of these things are tightly interconnected. But he doesn’t explain these interrelations very well to the reader. His central argument is that as technology becomes ever more complex, it becomes more akin to biological systems (eco-systems, vivisystems, interdependent and co-evolving organisms). But because the individual chapters are set up as essays on their own, there is often little to tie these wildly different ideas together.

I would have preferred a much shorter book, more narrowly focused on the idea of self-organising systems. The whole text of the original book is easily available online at Kelly’s own website, so I thought: Why not remix the online text to make such a book?

So I did.

I’ve put my remix up here. The PDF version is available here. Comments are most welcome.

Making sense of Twitter

Following my last post, where I likened Twitter to shouting out the window of a moving truck, I’ve been giving the matter some more thought and have dug up some different perspectives on Twitter. Web 2.0 entrepreneur Ross Mayfield even asked his Twitter followers how they would describe Twitter to newcomers.

It’s public but focused on individuals. It’s both asynchronous and real-time. It’s searchable and cumulative. It’s not necessarily shouting.

As this presentation by Twitter co-founder Evan Williams illustrates, Twitter is also quite a lot like passing notes or whispering in a classroom. The difference being that the presenter can check out all the comments afterwards:

Williams’ main point is that Twitter has proven to be much more versatile than they expected, and they’ve been working hard to keep up with the cognitive surplus being invested in defining the etiquette and uses of Twitter. David Pogue makes a similar point in his insightful write-up of Twitter in the New York Times: Twitter can be whatever you want it to be – an ego boost, a discussion tool, a research tool, a waste of time, a running dialogue during a presentation. It is the openness of the tool that creates the magic. It is still a complete mess, fragmented and incoherent, precisely because all of the users are still in the process of figuring out how best to use it.

I can’t help comparing it to IRC, which I used a lot as part of my fieldwork. IRC consists of real-time chat channels focused on topics rather than on individuals. It requires you to be online through an IRC client in order to follow the conversation (though some IRC channels do log the conversations), but it can also be used asynchronously. People can direct comments to specific individuals or just ask an open question to everybody present. It has many of the same features as Twitter – and allows for much better conversation. But it is limited to channels: you need to get all of your friends together in the same few channels in order to be able to talk with them.

Twitter has a much, much lower barrier to entry: It’s on the web. Sign-up is easy. You don’t have to decide which topics you’re interested in, or try to get your friends involved – you immediately connect with your friends already on Twitter. You can use it on your mobile phone. And perhaps most importantly: You’re limited to 140 characters.

But all of this comes at a high price: a massive loss of context. It is much more difficult to make sense of the conversation once you’re there. People try to mitigate this by using acronyms.

I find Twitter to be a fascinating example of a technology that has been shaped by use rather than by design. Its greatest advantage is the fact that so many people are using it – not any inherent quality of the design itself. The result is a fairly unaesthetic mess, but it makes clear just how much potential there is for such easy discussion and access to expert knowledge. Twitter has begun to tap this potential in the form of a Web 2.0 service. But there is still a long way to go.

Why Free Software is important

Mako Hill, one of the founding members of Ubuntu whom I interviewed as part of my thesis fieldwork, posted a brilliant explanation of the importance of free software:

Suppose I see a beautiful sunset and I want to describe it to a loved one on the other side of the world. Today’s communication technology makes this possible. In the process, however, the technology in question puts constraints on the message communicated. For example, if I pick up my cellphone, my description of the sunset will be limited to words and sounds that can be transmitted by phone. If I happen to have a camera phone and the ability to send a picture message, I will be able to communicate a very different type of description. If I’m limited to 150 characters in an SMS message, my message will be constrained differently again.

The point of the example is this: the technology I use to communicate puts limits and constraints on my communication. Technology defines what I can say, how I can say it, when I can say it, and even who I can say it to.

This is neither good nor bad. It is simply the nature of technology. But it means that those who control our technology control us, to some degree. As information technology becomes increasingly central to our lives, the way we experience, understand, and act in the world is increasingly controlled by technology and, by extension, by those who control technology.

I believe that the single most important struggle for freedom in the twenty first century is over the question of who will set these terms. Who will control the technology that controls our lives?

Free software can be understood as an answer to this question: An answer in the form of an unambiguous statement that technology must be under the control of its users. When free software triumphs, we will live in a world where users control their technological destiny. We simply cannot afford to fail.

Far too many of us fail to acknowledge the importance of controlling the technology we use. We don’t realise how much we depend on these tools and services, and how many unconscious compromises we make every day by using non-free software. Sure, you and I may not be able to appreciate the openness of free software that allows hackers to develop and extend the software according to their needs. But I would much rather depend on people whom I know and trust than on corporations whose leadership might change from one day to the next.

So, to show my support for Free Software, I’ve joined the Free Software Foundation. Richard Stallman may be an uncompromising zealot – but when it comes to keeping technology free, that’s actually kind of reassuring. 🙂

Dunbar’s number revisited

A while back, I made a brief reference to the so-called Dunbar number in relation to my list of friends on Facebook.

Since then, I’ve spent some time reading up on Dunbar’s number and the concept of friends on social networking sites, and feel the need to delve deeper into this discussion. danah boyd, one of the leading researchers on Social Networking Sites, has made the point that

Friends lists are not an accurate portrayal of who people know now, who they could ask favors of, who they would feel comfortable introducing at the moment. They’re a weird product of people from the past, people from the present, people unknown, people once met.

Based on my own anecdotal evidence, I find this to be exactly right. I have loads of contacts on Facebook whom I haven’t seen or kept in touch with in ages, only now I have a sort of ambient awareness of what is happening in their lives. It’s like having an auto-updating version of the various social spheres I happen to be in. I guess the most apt metaphor would be a college yearbook – the original facebook – that updates itself every day.

So, how does this relate to Dunbar’s number? Well, Robin Dunbar is an anthropologist who hypothesized that “there is a cognitive limit to the number of individuals with whom any one person can maintain stable relationships, that this limit is a direct function of relative neocortex size, and that this in turn limits group size … the limit imposed by neocortical processing capacity is simply on the number of individuals with whom a stable inter-personal relationship can be maintained.”

Dunbar sought to prove this hypothesis by correlating a number of studies measuring the group sizes of a variety of different primates with the brain sizes of those primates. He used these correlations to produce a mathematical formula for how the two correspond. Using his formula, which is based on 36 primates, he arrived at 147.8 as the “mean group size” for humans – a figure he found to match census data on various village and tribe sizes in many cultures.
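For the curious, here is a minimal sketch of what such a prediction looks like. It assumes the commonly cited form of Dunbar’s regression; the coefficients and the human neocortex ratio of roughly 4.1 are taken from the secondary literature rather than from the studies themselves, so treat them as illustrative:

    import math

    def predicted_group_size(neocortex_ratio):
        # Commonly cited regression attributed to Dunbar (1992):
        # log10(group size) = 0.093 + 3.389 * log10(neocortex ratio)
        return 10 ** (0.093 + 3.389 * math.log10(neocortex_ratio))

    # A neocortex ratio of roughly 4.1 is usually quoted for humans,
    # which yields the familiar prediction of about 148.
    print(round(predicted_group_size(4.1), 1))  # -> 147.8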

So that’s the basis of the Dunbar’s number of 150 relationships. But as Christopher Allen has done well to point out, reducing Dunbar’s research to just one number would be misleading. As he concludes: The “Dunbar’s group threshold of 150 applies more to groups that are highly incentivized and relatively exclusive and whose goal is survival.”

Similarly, boyd sums up Dunbar’s point quite well:

Just as monkeys groomed to maintain their networks, humans gossiped to maintain theirs! He found that the MAXIMUM number of people that a person could keep up with socially at any given time, gossip maintenance, was 150. This doesn’t mean that people don’t have 150 people in their social network, but that they only keep tabs on 150 people max at any given point.

But how many active social relationships we can have – i.e. how many people we can keep up with socially in a reciprocal fashion – is one thing. How we know these people, and how well we know them, is another. Our social relationships come with both a context and a strength of the shared bond, and that context and strength are crucial for how we distribute information, support, and trust among our friends.

Typically, we can sort our relations into groups based on the context of the relation: people we know from work, from school, from hockey practice, people we know through our significant other, people we’ve been introduced to by another relation. Until social networks like Facebook came along, these groups rarely overlapped or got a chance to meet. But these social networks suddenly expose more about our contextual relationships to different groups of people than we ever would in real life, and we end up having to reconcile the bar-hopping facet of our identity with the work facet.

Clay Shirky does well to analyse the consequences of this new social situation. As he argues, it’s not information overload, it’s filter failure: all of a sudden, people are able to discover new social contexts their friends are part of, because the filters people used to have in place no longer work.

Bit by bit – a review of “Two Bits”

I finally found the time to read Christopher Kelty’s book Two Bits – The Cultural Significance of Free Software. Kelty is one of the few other anthropologists studying Free Software in general, and his work has been a huge inspiration for my thesis work on Ubuntu, so naturally, my expectations were high.

As Kelty argues, we’ve been drowning in explanations of why Free Software has come about, while starving for explanations of how it works. Thus, Kelty’s focus is on the actual practices of Free Software and the cultural significance of these practices in relation to other aspects of our lives.

Kelty’s main argument is that Free Software communities are a recursive public. He defines a recursive public as a public “whose existence (which consists solely in address through discourse) is possible only through discursive and technical reference to the means of creating this public.”

It is recursive in that it not only contains a discourse about technology, but that this discourse is made possible through and with the technology being discussed – and that this technology itself consists of many recursively dependent layers of technical infrastructure: the entire free software stack, operating systems, Internet protocols. As Kelty concludes:

The depth of recursion is determined by the openness necessary for the project itself.

This is a brilliant observation, and I agree that the notion of a recursive public goes far to explain how the everyday practices and the dogmatic concern for software freedom are so closely intertwined in this public.

The book is divided into three parts, each part using a different methodological perspective to examine the cultural significance of Free Software.

The first part is based on Kelty’s ethnographic fieldwork among geeks and their shared interest in the Internet. I found this to be the weakest part of the book. His ethnography does not cover the actual practices of Free Software hackers, but rather the common traits among Internet geeks, which certainly supports his argument (that they’re all part of a shared recursive public) but doesn’t add much depth to understanding their motives.

The second part is based on archive research of the many available sources within the various open source communities. In my opinion, this is the best part of the book, with both deep and thorough analyses of the actual practices within free software communities, as well as a vivid telling of the pivotal stories of “figuring out” the practices of Free Software.

The final part is based on Kelty’s own participation (anthropologist as collaborator) in two modulations of the practices of Free Software in other fields: the Connexions project at Rice University, and Creative Commons. These are stories of his own work “figuring out” how to adapt Free Software practices in other realms. These practices are still in the process of being developed, experimented with, and re-shaped – like all Free Software practices. And this part gives a good idea of what it feels like to be in the middle of such a process, though it offers few answers.

Being a completely biased reviewer, I’ll stop pretending to do a proper review now and instead focus on how Kelty’s analysis fits with my own study of the Ubuntu Linux community. Kelty argues that there are five core practices that define the recursive public of Free Software. He traces the histories of “figuring out” these practices very well, and I’ll go through each in turn:

Fomenting Movements
This is the fuzziest item on Kelty’s list of five core practices. I understand it as placing the software developed within a greater narrative that offers a sense of purpose and direction within the community – “fomenting a movement”, as it were. Kelty has this delicious notion of “usable pasts” – the narratives that hackers build to make sense of these acts of “figuring out” after the fact.

In my research, I found it very difficult to separate these usable pasts from the actual history within the Free Software movement, and my thesis chapter on the cultural history of Ubuntu bears witness to that. So I am very happy to see that Chris Kelty has gone through the monumental task of examining these stories in detail. I find that this detective work in the archives is among the most important findings in the book.

Sharing Source Code
A basic premise of collaboration is shared and open access to the work done – the source code itself. The crux of the matter is giving access to the software that actually works. Kelty tells the story of Netscape’s failure after it went open source, with a telling quote from project lead Jamie Zawinski:

We never distributed the source code to a working web browser, more importantly, to the web browser that people were actually using.

People could contribute, but they couldn’t see the immediate result of their contribution in the browser that they used. The closer the shared source code is tied to the everyday computing practices of the developers, the better. As Ken Thompson describes in his reflections on UNIX development at AT&T:

The first thing to realize is that the outside world ran on releases of UNIX (V4, V5, V6, V7) but we did not. Our view was a continuum. V5 was simply what we had at some point in time and was probably put out of date simply by the activity required to put it in shape to export.

They were continually developing the system for their own use, trying out new programs on the system as they went along. Back then, they distributed their work on diff tapes. Now the Internet allows that continuum to be shared by all the developers involved, with the diffs easily downloaded and installed from online repositories.
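To give a rough idea of what actually travels between developers, here is a minimal sketch using Python’s standard difflib module. The file name and contents are made up purely for illustration; the point is only the shape of a unified diff, the compact patch that gets passed around – on tape back then, via online repositories today:

    import difflib

    # Hypothetical before-and-after versions of a shared source file.
    old = ["def greet():\n", "    print('hello')\n"]
    new = ["def greet(name):\n", "    print('hello, ' + name)\n"]

    # A unified diff records only the lines that changed, plus a little
    # surrounding context, so other developers can fetch and apply it.
    patch = difflib.unified_diff(old, new,
                                 fromfile="greet.py (before)",
                                 tofile="greet.py (after)")
    print("".join(patch))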

As I point out in my thesis, this is exactly the case with the development of the Ubuntu system, which can be described as a sort of stigmergy where each change to the system is also a way of communicating activity and interest to the other developers.

Conceptualizing Open Systems
Another basic premise of Free Software is having open standards for implementation, such as TCP/IP, ODF, and the world wide web standards developed by the W3C – all of which allow for reimplementation and reconfiguration as needed. This is a central aspect of building a recursive public, and one I encountered in the Ubuntu community through the discussions and inherent scepticism regarding the proprietary Launchpad infrastructure developed by Canonical, the company financing the core parts of the development of both the Ubuntu system and its community.

Writing Licenses
Kelty argues that the way in which a given software license is written and framed shapes the contributions, collaboration and the structure of distribution of that software, and is thus a core practice of Free Software. Kelty illustrates this by telling the intriguing story of the initial “figuring out” of the GPL, and how Richard Stallman slowly codified his attitude towards sharing source code. This “figuring out” is not some platonic reflection of ethics. Rather, it is the codifying of everyday practice:

The hacker ethic does not descend from the heights of philosophy like the categorical imperative – hackers have no Kant, nor do they want one. Rather, as Manuel DeLanda has suggested, the philosophy of Free Software is the fact of Free Software itself, its practices and its things. If there is a hacker ethic, it is Free Software itself, it is the recursive public itself, which is much more than a list of norms.

Again, almost too neatly, the hackers’ work of “figuring out” their practices refers back to the core of those practices – the software itself. But the main point – that the licenses shape the collaboration – still stands. As I witnessed in the Ubuntu community, when hackers chose a license for their own projects, it invariably reflected their own practices and preferred form of collaboration.

Coordinating Collaborations
The final core practice within Free Software is collaboration – the tying together of the open code directly with the software that people are actually using. Kelty writes:

Coordination in Free Software privileges adaptability over planning. This involves more than simply allowing any kind of modification; the structure of Free Software coordination actually gives precedence to a generalized openness to change, rather than to the following of shared plans, goals, or ideals dictated or controlled by a hierarchy of individuals.

I love this notion of “adaptability over planning”. It describes quite precisely something that I’ve been trying to describe in my work on Ubuntu. I used Lévi-Strauss’ rather worn duality between the engineer and the bricoleur to describe part of this, but I find that Kelty’s terms better describe the practice of collaboration at a higher level:

Linux and Apache should be understood as the results of this kind of coordination: experiments with adaptability that have worked, to the surprise of many who have insisted that complexity requires planning and hierarchy. Goals and planning are the province of governance – the practice of goal-setting, orientation, and definition of control – but adaptability is the province of critique, and this is why Free Software is a recursive public: It stands outside power and offers a powerful criticism in the form of working alternatives.

As Kelty points out, the initial goal of these experiments wasn’t to offer up powerful criticism. Rather, the initial goal was just to learn and to adapt software to their own needs:

What drove his [Torvalds’] progress was a commitment to fun and a largely inarticulate notion of what interested him and others, defined at the outset almost entirely against Minix.

What Linus Torvalds and his fellow hackers sought to do was not to produce “a powerful criticism” – those almost always come after the fact, in the form of usable pasts to rally around. Rather, their goal was to build something that would work for their needs and allow them to have fun doing so.

I find that this corresponds very well to the conclusion of my thesis: that the driving goal of the Ubuntu hackers continues to be to build “a system that works for me” – a system that matches their personal practices with the computer. A system that is continually and cumulatively improved through the shared effort of the Ubuntu hackers, each adapting the default system to his or her own needs, extending and developing it as needed along the way. As Kelty writes in his conclusion:

The ability to see development of software as a spectrum implies more than just continuous work on a product; it means seeing the product itself as something fluid, built out of previous ideas and products and transforming, differentiating into new ones. Debugging, in this perspective is not separate from design. Both are part of a spectrum of changes and improvements whose goals and direction are governed by the users and the developers themselves, and the patterns of coordination they adopt. It is in the space between debugging and design that Free Software finds its niche.
(…)
Free software is an experimental system, a practice that changes with the results of new experiments. The privileging of adaptability makes it a peculiar kind of experiment, however, one not directed by goals, plans, or hierarchical control, but more like what John Dewey suggested throughout his work: the experimental praxis of science extended to the social organization of governance in the service of improving the conditions of freedom.

In this way, Free Software is a continuing praxis of “figuring out” – giving up an understanding of finality in order to continually adapt and redesign the system. It is this practice of figuring out that is the core of the cultural significance of Free Software, as we continue to figure out how to apply these learnings to other aspects of life. Kelty does well to describe his own efforts at “figuring out” in relation to non-software projects inspired by Free Software practices in the final part of the book, though these reflections do not come across as entirely figured out yet.

All in all, it is a brilliant book. But given its Creative Commons license, it poses an interesting challenge to me: Remixing – or modulating, as Kelty calls it – the book with my own work (and that of others – like Biella) to create a new hybrid, less tied up in the academic prestige game.

(Maybe then I can change the title, because that continues to annoy me: why is it called Two Bits? Apart from the obvious reference to computing in general, it doesn’t seem to have any particular relevance to Free Software.)

Dunbar’s number and Facebook

Recently, I made a brief reference to the so-called Dunbar number in relation to my list of friends on Facebook.

Since then, I’ve spent some time reading up on Dunbar’s number and the concept of friends on social networking sites, and feel the need to delve deeper into this discussion. danah boyd, one of the leading researchers on Social Networking Sites, has made the point that

Friends lists are not an accurate portrayal of who people know now, who they could ask favors of, who they would feel comfortable introducing at the moment. They’re a weird product of people from the past, people from the present, people unknown, people once met.

Based on my own anecdotal evidence, I find this to be exactly right. I have loads of contacts on Facebook whom I haven’t seen or kept in touch with in ages, only now I have a sort of ambient awareness of what is happening in their lives. It’s like having an auto-updating version of the various social spheres I happen to be in. I guess the most apt metaphor would be a college yearbook – the original facebook – that updates itself every day.

So, how does this relate to Dunbar’s number? Well, Robin Dunbar is an anthropologist who hypothesized that “there is a cognitive limit to the number of individuals with whom any one person can maintain stable relationships, that this limit is a direct function of relative neocortex size, and that this in turn limits group size … the limit imposed by neocortical processing capacity is simply on the number of individuals with whom a stable inter-personal relationship can be maintained.”

Dunbar sought to prove this hypothesis by correlating a number of studies measuring the group sizes of a variety of different primates with the brain sizes of those primates. He used these correlations to produce a mathematical formula for how the two correspond. Using his formula, which is based on 36 primates, he arrived at 147.8 as the “mean group size” for humans – a figure he found to match census data on various village and tribe sizes in many cultures.

So that’s the basis of the Dunbar’s number of 150 relationships. But as Christopher Allen has done well to point out, reducing Dunbar’s research to just one number would be misleading. As he concludes: The “Dunbar’s group threshold of 150 applies more to groups that are highly incentivized and relatively exclusive and whose goal is survival.”

Similarly, boyd sums up Dunbar’s point quite well:

Just as monkeys groomed to maintain their networks, humans gossiped to maintain theirs! He found that the MAXIMUM number of people that a person could keep up with socially at any given time, gossip maintenance, was 150. This doesn’t mean that people don’t have 150 people in their social network, but that they only keep tabs on 150 people max at any given point.

So even if I’m casually surfing through loads of status updates and photos on Facebook, oftentimes I’m not actually maintaining my relationships with these people, since I lack the relevant social context to make sense of the information offered to me. To use a phrase of Clay Shirky’s, I am eavesdropping on a public conversation that I have little intention of participating in.

In this way, Facebook relays gossip that otherwise would be unavailable to me. As a social tool, it allows my relations to pass on information that otherwise wouldn’t reach me directly. But the problem is that while it allows people to pass on information, it is often very bad at letting people control which information is available to whom. As boyd puts it:

Our relationships have a context to them, not just a strength. That context is crucial for many distributions of information, support and trust. (…) [Social networking sites] expose more about us to different groups of people than we would ever do in real life. All of a sudden, we have to reconcile the bar-hopping facet of our identity with the proper work facet.

Basically, Facebook is offering more social information about us than we would otherwise give out. (Yes, it’s technically possible to stop this by using the privacy settings – but nobody can figure those out anyway, partly because it is an unnatural thing to consciously set up such filters, and partly because you can’t get an easy overview of who can access a given piece of content on your profile.)

And that really puts a lot of basic social relations in flux.

As Clay Shirky concludes in this brilliant presentation: It is not the fact that we’re presented with too much information – it’s the fact that our old social filters no longer work. Fundamentally, social tools like Facebook are challenging age-old social norms about who told what to whom. And the challenge seems to be to find new ways – both technical and social – to filter the vast amounts of social information suddenly made available to us.

UPDATE: Many of these issues have been discussed very poignantly in this New York Times article. The conclusion hits these themes very well:

Young people today are already developing an attitude toward their privacy that is simultaneously vigilant and laissez-faire. They curate their online personas as carefully as possible, knowing that everyone is watching – but they have also learned to shrug and accept the limits of what they can control.

It is easy to become unsettled by privacy-eroding aspects of awareness tools. But there is another – quite different – result of all this incessant updating: a culture of people who know much more about themselves. Many of the avid Twitterers, Flickrers and Facebook users I interviewed described an unexpected side-effect of constant self-disclosure. The act of stopping several times a day to observe what you’re feeling or thinking can become, after weeks and weeks, a sort of philosophical act. It’s like the Greek dictum to “know thyself,” or the therapeutic concept of mindfulness. (Indeed, the question that floats eternally at the top of Twitter’s Web site – “What are you doing?” – can come to seem existentially freighted. What are you doing?) Having an audience can make the self-reflection even more acute, since, as my interviewees noted, they’re trying to describe their activities in a way that is not only accurate but also interesting to others: the status update as a literary form.

This notion of the status update as a literary form has also been explored recently by Nadja, whom I share office space with at Socialsquare, in this longish article (in Danish).

On technological progress

I bought a new phone recently.

Buying a new phone is a big thing for me, since I’ve been holding on to my old phone for ages, and it has served me well. But the reason behind this sudden purchase was not that my old phone had stopped working, but rather that I finally decided that I needed a new camera.

My digital camera is a very old one, which I inherited from my father when he bought a new camera in 2004. This is the camera I brought with me to Manchester and on all of my field trips since then, and it has served me well. But for the past two years I haven’t really taken any photos at all due to its immense clunkiness.

I remember reading in a PC magazine back in the mid-1990s that any piece of computer software or hardware more than 5 years old is to be considered an antique. So in their wording, my camera is most certainly a modern antique.

So I began looking at cameras and thought I’d give these new camera phones a look-over as well. And it was at that point that I realized the extent of technological progress (as one might be tempted to call it) in the field of gadgets in the past few years, which I’ve sought to illustrate below with a picture of my collection of assorted electronic gadgetry:


  • Pictured at the right is my old phone, a Nokia 3510i from 2002, hyped at the time for its colour screen.
  • Pictured at the centre is my old portable music player, an iAudio U3 from 2005 with 1 gigabyte of memory. Its display was broken in an unfortunate incident, rendering menu navigation more or less random.
  • Pictured at the bottom is my old digital camera, the sturdy but inefficient Olympus C-700, with its 2.1-megapixel sensor and 32 megabytes of memory in a neat package the size of a fist.
  • Pictured at the top is my new phone, a Nokia N73 from 2006. It has a high-resolution colour screen, 2 gigabytes of memory, a built-in music player and a 3.2-megapixel camera.

Now, this may not be news to a lot of tech-savvy people, but it is a very new feeling for me to have such a multi-functional tool in my pocket. Yet despite all of its qualities, I cannot help but wonder whether it will prove to be as durable as the three gadgets that it retires…

Tapping into the cognitive surplus

Last week I began a 4-week internship at Socialsquare, one of the leading Danish developers of social software. “Leading” can be somewhat misleading, since there are almost no dedicated developers of social software in Denmark. Actually, the founders have spent the last two years giving talks and writing a book about social software, making the pedagogical effort to show potential clients how they might use social software to their advantage, both internally in their organisation and externally in communicating and relating to their users, customers, and clients.

So what is social software really about? Well, as the very clever Internet theorist Clay Shirky argues, it’s about tapping into the massive cognitive surplus that has been created by all the free time people have nowadays in the industrialised world. He argues that traditionally, this surplus has been soaked up by gin and television. Now it’s possible to use that surplus in more creative and constructive ways (think Wikipedia, free software etc.). Shirky is a very entertaining speaker, and I recommend hearing the word from the horse’s mouth:

UPDATE: Oh, just to counterpoint Shirky’s rather exuberant optimism, I just saw this video with Jonathan Zittrain, another extremely clever Internet theorist who has spent a bit more time worrying about how the Internet might be corrupted. As an Ubuntu veteran, I love his analogy between the development of the US constitution and the development of operating systems:

Online communities work like parties

Recently, I’ve come across several blog posts using the metaphor of a good party to describe well-functioning online communities. Paraphrasing Matt Mullenweg, founder of the WordPress project, Service Untitled sums up the metaphor thus:

Parties that are successful bring the right number of people together. Those people end up having a good time and having fun. They will hopefully come for whatever their purpose is and achieve that sort of goal (having fun, learning, meeting people, etc.). When people achieve their particular goals and have fun, they leave feeling happy.

Good parties almost always have good hosts. It is their job to keep the size of the space appropriate for the number of guests, plan the party, get people involved, and keep things rolling. The host not only needs to be the organizer of many things, but sometimes the life of the party and cheerleader. Sometimes this is necessary, but not always.

One or two bad guests can ruin a party and make it miserable for almost everyone. A space that is too large or too small for the number of guests can make for a bad party. A party with a terrible host will likely be bad. Sometimes parties are really great or really bad for no apparent reason.

Now replace every use of the word party with community, every use of the word guest with member, and host with community leader.

Lee LeFever, who probably first came up with the metaphor, lists all the ingredients that a good party and an active online community have in common. Unsurprisingly, his conclusion is simple:

In the end, if you’re truly interested in online communities, the most important ingredient is you. Without people who care about the community and are willing and excited about making it work, it will not succeed.

This sounds deceptively obvious, but in my experience, it’s true. The open source projects that I’ve taken part in all work hard to maintain a solid focus on what they have in common and how to have fun doing it. Ubuntu uses a Code of Conduct to ensure the good intentions of its participants, while two Subversion developers have given a very successful talk on “How Open Source Projects Survive Poisonous People (And You Can Too).”

These concerns are very similar to those of discotheque managers and bouncers. And I suppose the tools of kicking and banning aren’t really that dissimilar…