Monthly Archives: November 2008

Bit by bit – a review of “Two Bits”

I finally found the time to read Christopher Kelty’s book Two Bits – The Cultural Significance of Free Software. Kelty is one of the few other anthropologists studying Free Software in general, and his work has been a huge inspiration for my thesis work on Ubuntu, so naturally, my expectations were high.

As Kelty argues, we’ve been drowning in explanations of why Free Software has come about, while starving for explanations of how it works. Thus, Kelty’s focus is on the actual practices of Free Software and the cultural significance of these practices in relation to other aspects of our lives.

Kelty’s main argument is that Free Software communities are a recursive public. He defines a recursive public as a public “whose existence (which consists solely in address through discourse) is possible only through discursive and technical reference to the means of creating this public.”

It is recursive in that it contains not only a discourse about technology, but a discourse made possible through and with the very technology being discussed. And this technology consists of many recursively dependent layers of technical infrastructure: the entire free software stack, operating systems, Internet protocols. As Kelty concludes:

The depth of recursion is determined by the openness necessary for the project itself.

This is a brilliant observation, and I agree that the notion of a recursive public goes far to explain how the everyday practices and the dogmatic concern for software freedom are so closely intertwined in this public.

The book is divided into three parts, each part using a different methodological perspective to examine the cultural significance of Free Software.

The first part is based on Kelty’s ethnographic fieldwork among geeks and their shared interest in the Internet. I found this to be the weakest part of the book. His ethnography does not cover the actual practices of Free Software hackers, but rather the common traits among Internet geeks, which certainly supports his argument (that they’re all part of a shared recursive public), but doesn’t add much depth to understanding their motives.

The second part is based on archival research of the many available sources within the various open source communities. In my opinion, this is the best part of the book, with both deep and thorough analyses of the actual practices within free software communities, as well as a vivid telling of the pivotal stories of “figuring out” the practices of Free Software.

The final part is based on Kelty’s own participation (anthropologist as collaborator) in two modulations of the practices of Free Software in other fields: the Connexions project at Rice University, and the Creative Commons. These are stories of his own work “figuring out” how to adapt Free Software practices to other realms. These practices are still in the process of being developed, experimented with, and re-shaped – like all Free Software practices. This part gives a good idea of what it feels like to be in the middle of such a process, though it offers few answers.

Being a completely biased reviewer, I’ll stop pretending to do a proper review now, and instead focus on how Kelty’s analysis fits with my own study of the Ubuntu Linux community. Kelty argues that there are five core practices that define the recursive public of Free Software. He traces the histories of “figuring out” these practices very well, and I’ll go through each in turn:

Fomenting Movements
This is the fuzziest of Kelty’s five core practices. I understand it as placing the software being developed within a greater narrative that offers a sense of purpose and direction within the community – “fomenting a movement”, as it were. Kelty has this delicious notion of “usable pasts” – the narratives that hackers build to make sense of these acts of “figuring out” after the fact.

In my research, I found it very difficult to separate these usable pasts from the actual history of the Free Software movement, and my thesis chapter on the cultural history of Ubuntu bears witness to that. So I am very happy to see that Chris Kelty has taken on the monumental task of examining these stories in detail. I find that this detective work in the archives yields some of the most important findings in the book.

Sharing Source Code
A basic premise of collaboration is shared and open access to the work being done – the source code itself. The crux of the matter is giving access to the software that actually works. Kelty tells the story of Netscape’s failure following its move to open source with a telling quote from project lead Jamie Zawinski:

We never distributed the source code to a working web browser, more importantly, to the web browser that people were actually using.

People could contribute, but they couldn’t see the immediate result of their contribution in the browser that they used. The closer the shared source code is tied to the everyday computing practices of the developers, the better. As Ken Thompson describes in his reflections on UNIX development at AT&T:

The first thing to realize is that the outside world ran on releases of UNIX (V4, V5, V6, V7) but we did not. Our view was a continuum. V5 was simply what we had at some point in time and was probably put out of date simply by the activity required to put it in shape to export.

They were continually developing the system for their own use, trying out new programs on the system as they went along. Back then, they distributed their work on diff tapes. Now, the Internet allows that continuum to be shared by all the developers involved, with diffs easily downloaded and installed from online repositories.

As I point out in my thesis, this is exactly the case with the development of the Ubuntu system, which can be described as a sort of stigmergy where each change to the system is also a way of communicating activity and interest to the other developers.

Conceptualizing Open Systems
Another basic premise of Free Software is having open standards for implementation, such as TCP/IP, ODF, and the World Wide Web standards developed by the W3C – all of which allow for reimplementation and reconfiguration as needed. This is a central aspect of building a recursive public, and one I encountered in the Ubuntu community through the discussions and inherent scepticism regarding the proprietary Launchpad infrastructure developed by Canonical, the company financing the core parts of the development of both the Ubuntu system and community.

Writing Licenses
Kelty argues that the way in which a given software license is written and framed shapes the contributions, collaboration and the structure of distribution of that software, and is thus a core practice of Free Software. Kelty illustrates this by telling the intriguing story of the initial “figuring out” of the GPL, and how Richard Stallman slowly codified his attitude towards sharing source code. This “figuring out” is not some platonic reflection of ethics. Rather, it is the codifying of everyday practice:

The hacker ethic does not descend from the heights of philosophy like the categorical imperative – hackers have no Kant, nor do they want one. Rather, as Manuel Delanda has suggested, the philosophy of Free Software is the fact of Free Software itself, its practices and its things. If there is a hacker ethic, it is Free Software itself, it is the recursive public itself, which is much more than a list of norms.

Again, almost too smartly, the hackers’ work of “figuring out” their practices refers back to the core of those practices – the software itself. But the main point – that licenses shape collaboration – remains very salient. As I witnessed in the Ubuntu community, when hackers chose a license for their own projects, it invariably reflected their own practices and preferred form of collaboration.

Coordinating Collaborations
The final core practice within Free Software is collaboration – the tying together of the open code directly with the software that people are actually using. Kelty writes:

Coordination in Free Software privileges adaptability over planning. This involves more than simply allowing any kind of modification; the structure of Free Software coordination actually gives precedence to a generalized openness to change, rather than to the following of shared plans, goals, or ideals dictated or controlled by a hierarchy of individuals.

I love this notion of “adaptability over planning”. It captures quite precisely something that I’ve been trying to describe in my work on Ubuntu. I used Lévi-Strauss’ rather worn duality between the engineer and the bricoleur to describe part of this, but I find Kelty’s terms better suited to describing the practice of collaboration at a higher level:

Linux and Apache should be understood as the results of this kind of coordination: experiments with adaptability that have worked, to the surprise of many who have insisted that complexity requires planning and hierarchy. Goals and planning are the province of governance – the practice of goal-setting, orientation, and definition of control – but adaptability is the province of critique, and this is why Free Software is a recursive public: It stands outside power and offers a powerful criticism in the form of working alternatives.

As Kelty points out, the initial goal of these experiments wasn’t to offer up powerful criticism. Rather, the initial goal was simply to learn and to adapt the software to their own needs:

What drove his [Torvalds’] progress was a commitment to fun and a largely inarticulate notion of what interested him and others, defined at the outset almost entirely against Minix.

What Linus Torvalds and his fellow hackers sought to do was not to produce “a powerful criticism” – those almost always come after the fact in the form of usable pasts to rally around – rather, their goal was to build something that would work for their needs and allow them to have fun doing so.

I find that this corresponds very well to the conclusion of my thesis: that the driving goal of the Ubuntu hackers continues to be to build “a system that works for me” – a system that matches their personal practices with the computer. A system that is continually and cumulatively improved through the shared effort of the Ubuntu hackers, each adapting the default system to his or her own needs, extending and developing it as needed along the way. As Kelty writes in his conclusion:

The ability to see development of software as a spectrum implies more than just continuous work on a product; it means seeing the product itself as something fluid, built out of previous ideas and products and transforming, differentiating into new ones. Debugging, in this perspective, is not separate from design. Both are part of a spectrum of changes and improvements whose goals and direction are governed by the users and the developers themselves, and the patterns of coordination they adopt. It is in the space between debugging and design that Free Software finds its niche.
(…)
Free software is an experimental system, a practice that changes with the results of new experiments. The privileging of adaptability makes it a peculiar kind of experiment, however, one not directed by goals, plans, or hierarchical control, but more like what John Dewey suggested throughout his work: the experimental praxis of science extended to the social organization of governance in the service of improving the conditions of freedom.

In this way, Free Software is a continuing praxis of “figuring out” – giving up an understanding of finality in order to continually adapt and redesign the system. It is this practice of figuring out that is the core of the cultural significance of Free Software, as we continue to figure out how to apply these learnings to other aspects of life. Kelty does well to describe his own efforts at “figuring out” in relation to non-software projects inspired by Free Software practices in the final part of the book, though these reflections do not come across as entirely figured out yet.

All in all, it is a brilliant book. But given its Creative Commons license, it poses an interesting challenge to me: Remixing – or modulating, as Kelty calls it – the book with my own work (and that of others – like Biella) to create a new hybrid, less tied up in the academic prestige game.

(Maybe then I can change the title, because that continues to annoy me: Why is it called Two Bits? Apart from the obvious reference to computing in general, it doesn’t seem to have any relevance particular to Free Software.)

What we lose growing up (and how to regain it)

Reading the Presentation Zen blog recently, I came across several good things:

1. A reference to the TED conference – a great conference where various brilliant people get 20 minutes each to present their big idea. Loads of good stuff there.

2. A list of great presentations with some striking insights.

A recurring theme in many of these presentations is creativity – and how we find it so hard to play and be creative. The importance of play in creativity is fairly obvious: Playing means reconfiguring, reworking, and reframing. It’s the opposite of taking things for granted. It is building the new, exploring the old, and trying on new roles, as Tim Brown, the CEO of the very playful design company IDEO, illustrates.

Brown argues that we lose this playfulness as we grow up. Play requires trust. It requires that we feel secure enough to risk breaking something through our play. Brown argues that as adults we become overly sensitive to the opinion of others. We fear their criticism, and we conform.

Creativity expert Sir Ken Robinson talks along similar lines in his presentation. He argues that our public school system is forcing us all to think alike, rather than cultivating creativity by acknowledging multiple types of intelligence:

We don’t grow into creativity, we grow out of it… we get educated out of it.

Robinson says that if the school system had its way, we would all end up as university professors, living only in our heads. But that’s not how most of us think or live.

As Garr at Presentation Zen does well to point out, what we lose growing up is our beginner’s mind. Beginner’s mind is a Zen Buddhist expression perhaps best described by the Zen master Shunryu Suzuki:

In the beginner’s mind there are many possibilities, but in the expert’s there are few.

As children, we find everything new, and we play with it to figure it out, to understand how to use it, and how not to use it. We learn from our mistakes as long as we’re willing to make them. As we grow up and become experts, we don’t see the potential for play anymore – even in things that are new to us. We seek to relate to them according to our habits.

Another aspect of losing our playfulness is that as we grow up, we have to do what we’re told: Go to school, do your homework, sit still, stop fidgeting, write the essay, pass the exam, get a job, go to work – and so on. But, as Aaron Swartz has explored in his essay on procrastination (I suspect that you can guess what I was doing when I happened upon it), we have a deep aversion to doing what we’re told:

Numerous psychology experiments have found that when you try to “incentivize” people to do something, they’re less likely to do it and do a worse job. External incentives, like rewards and punishments, kills what psychologists call your “intrinsic motivation” – your natural interest in the problem. (This is one of the most thoroughly replicated findings of social psychology – over 70 studies have found that rewards undermine interest in the task.) People’s heads seem to have a deep avoidance of being told what to do.

As Swartz goes on to note, the really weird thing is that the same applies when we try to force ourselves to do something! We kill our intrinsic motivation – our playfulness – when thinking of the goal or the consequences rather than the process of figuring it out. And yet we can’t allow ourselves to play because we might risk being wrong – and getting criticized.

So how do we regain our creativity, our ability to play?

One part of it is certainly to stop paying attention to what everybody else thinks. Blogger Hugh McLeod has written extensively on how to be creative, and his first point is:

Ignore everybody

But that’s only half-true. Remember when you were a child, what was the most fun you had? Wasn’t it when you had a friend over and you shared that playfulness together? That vision or dream that you made real through your play? A better way of putting it is: Ignore everybody who’s just trying to bring you down. In most cases that may well be everybody, but it certainly doesn’t have to be. At IDEO, they’ve managed to create a genuinely playful atmosphere where people dare to try out new ideas.

You don’t have to start all over to regain a beginner’s mind. Just take the time to play with whatever you’re doing for a bit. Explore it, take it apart and rearrange it. As Douglas Hofstadter has argued, “variations on a theme is the crux of creativity.” Make some unexpected variations.

The difference between right and left

Today, I found an interesting presentation delving into a matter I have touched upon before: the basic differences between the political right and left.

The presentation was given by Jonathan Haidt, a professor of psychology at the University of Virginia. Based on his research into moral psychology, he claims to have found evidence of five moral foundations that conservatives and liberals value differently.

He concludes that we need both: The sense of realism and stability of the right wing, as well as the compassion and open-mindedness of the left wing. Apparently, it’s a bit like Yin and Yang. I must admit that I like his ideas, even though I find his research bordering on the unscientific.

Let the user finish the design

At EPIC, I took part in a very interesting workshop discussion led by Jeanette Blomberg and Elin Rønby, two of the leading figures within the field of ethnography-supported design.

The theme of the workshop was making visible the object of design in the design process, and centred on this diagram describing the generalized design process:

design process

This diagram indicates four generalized phases in a design process, placed along two overlapping dichotomies: between reflecting and acting, and between using and designing:

Study – “Reflecting using” – The ethnographic examination and abstract reflection on the context and given circumstances under which a design is being used or may be used at some point.

Design – “Reflecting designing” – The abstract composing of concepts, ideas, and solutions based on the research and analysis of the existing tools and context of use.

Technology/intervention – “Acting designing” – The concrete building, implementing, and configuring of concepts in the form of real technological design to improve the existing tools and use.

Live/Work – “Acting using” – The concrete and actual use of the implemented design. The un-reflected day-to-day practices taking place in the given context.

But, as the workshop organizers noted, this is not (only) meant to be seen as a flow from “research to design to implementation to use”, but rather as a continuum – allowing for “back-and-forthing” between the four activities. Their argument was that we need more integration between these, and that the diagram wasn’t intended to maintain the boundaries between these activities, but rather to break them down.

That was the focus of the workshop: How can we best integrate these diverse elements of the design process to make the best possible solution?

It was at this point in the discussion that it became apparent that the phrase “object of design” isn’t quite transparent: The organizers had meant the object being designed: How can it be made visible throughout the process – including the ethnographic study? But I had understood it as the object for design: The context, the potential users, the social relations in which the designed object will take part.

I argued that the main challenge in integrating the four elements above is to maintain a focus on the context, the actual situations where a given tool would be used. That is my main concern in the ethnographic work I do in relation to the design and development work at Socialsquare: Connecting the site of use with the designers and developers who build the new social tools for our clients.

Design in itself can only offer affordances for use; it cannot tell the users how to use it. When we design and build tools, especially social tools online, we seek to build the tools people want to use, but we can only do that by letting them use them. One of the other workshop participants said it best when he referred to a phrase one of his older engineer colleagues often used: ‘Let the user finish the design.’

Internet tribes

Recently, I read Seth Godin’s new book Tribes. It is a short, clever book full of insights on what it means to build and lead a tribe. Godin’s main argument is borrowed from one of Hugh McLeod’s one-liners:

The market for something to believe in is infinite.

Or, as Woody Guthrie put it: “Basically, man is a hoping machine.”

As a marketing guru, Godin’s spin on this is a bit more basic, and goes as follows: Rather than “building a brand” or “marketing your product” or “staying on message” in order to win supporters, customers, members, fellow travellers – or whatever you call the people that you want to interact with you – you have to build a tribe.

A tribe in Godin’s understanding is a group of people with a shared interest and a shared faith that what you do together matters – that is it: A tribe is something to believe in – a group of hoping machines working in unison. And with the proliferation of the Internet, the costs of organizing, building, and leading a tribe have been lowered immensely.

Thus, the book focuses on the non-technical (that is: social) barriers that still hinder people from building or leading a tribe. Reading the book, I underlined a few passages, which I’ve turned into a short one-page remix of the book’s main points:

It takes only two things to turn a group of people into a tribe:
– A shared interest
– A way to communicate

A tribe has three elements:
– A narrative that tells a story about who we are and the future we’re trying to build
– A connection between and among the leader and the tribe
– Something to do – the fewer limits the better.

If no one cares, then you have no tribe. If you don’t care – really and deeply care – then you can’t possibly lead.

The art of leadership is understanding what you can’t compromise on.

The secret of leadership is simple: Do what you believe in. Paint a picture of the future. Go there. People will follow.

Leadership is uncomfortable:
It’s uncomfortable to stand up in front of strangers.
It’s uncomfortable to propose an idea that might fail.
It’s uncomfortable to challenge the status quo.
It’s uncomfortable to resist the urge to settle.
When you identify the discomfort, you’ve found the place where a leader is needed.

So what’s holding you back?

Fear.

But what you are afraid of isn’t failure. It’s blame. Criticism.

What you have to ask yourself is this: “If I get criticized for this, will I suffer any measurable impact? Will I lose my job, get hit upside the head with a softball bat, or lose important friendships?”

If the only side effect of the criticism is that you will feel bad about yourself, then you have to compare that bad feeling with the benefits you’ll get from actually doing something worth doing.

Consider this: If someone gave you two weeks to give that speech or to write that manifesto or make the decision that would get you started making an impact, would that be enough time? How much time do you think you’d need? What’s preventing you from starting right now?

The Community of Practice on Communities of Practice

Some time ago, I was invited by John D Smith to present my thesis work on Ubuntu as a Community of Practice at the CP Square autumn dissertation fest. CP Square is an online community of researchers and consultants working with Communities of Practice – a term coined by Etienne Wenger and Jean Lave, and a central part of the theoretical framework for my thesis.

I gave the online presentation this evening, and if I hadn’t been so darned busy lately with work and moving to a different commune (more on that in a separate blog post), I would have blogged about the presentation earlier so that you all could have had the opportunity to listen in.

Online in this case means via a Skype teleconference and a community chat channel, which meant visualizing my audience while talking, and linking to images related to the presentation in the online chat (NB: they’re not sorted. It’s a mess. I’ll add my notes to the images soon to give some sense of a sequence). It’s not the easiest of formats – a lot of energy and rapport is lost in the ether. But I thought it worked out well. The participants were attentive and inquisitive while remaining constructive and supportive – a real treat.

Actually, I was surprised to get the invitation. But I’ve really relished the chance to revisit my thesis work. As I reread it, I realised that writing the thesis is only the beginning.

Since joining Socialsquare, I’ve been working with all sorts of aspects relating to communities online, and it’s been great to return to my work on the Ubuntu community and see new ways to extend my old analyses and apply them in new contexts. But most of all, I’ve come back and found just what a good framing the Community of Practice is for understanding online communities, and I hope to learn a lot more about how to apply it from the CP Square community.

Dunbar’s number and Facebook

Recently, I made a brief reference to the so-called Dunbar number in relation to my list of friends on Facebook.

Since then, I’ve spent some time reading up on Dunbar’s number and the concept of friends on social networking sites, and feel the need to delve deeper into this discussion. danah boyd, one of the leading researchers on Social Networking Sites, has made the point that

Friends lists are not an accurate portrayal of who people know now, who they could ask favors of, who they would feel comfortable introducing at the moment. They’re a weird product of people from the past, people from the present, people unknown, people once met.

Based on my own anecdotal evidence, I find this to be exactly right. I have loads of contacts on Facebook that I haven’t seen, nor kept in touch with, in ages, only now I have a sort of ambient awareness of what is happening in their lives. It’s like having an auto-updating version of the various social spheres I happen to be in. I guess the most apt metaphor would be a college yearbook – the original facebook – that updates itself every day.

So, how does this relate to Dunbar’s number? Well, Robin Dunbar is an anthropologist who hypothesized that “there is a cognitive limit to the number of individuals with whom any one person can maintain stable relationships, that this limit is a direct function of relative neocortex size, and that this in turn limits group size … the limit imposed by neocortical processing capacity is simply on the number of individuals with whom a stable inter-personal relationship can be maintained.”

Dunbar sought to prove this hypothesis by correlating studies measuring the group sizes of a variety of primates with the brain sizes of those primates. He used these correlations to produce a mathematical formula for how the two correspond. Using his formula, which is based on 36 primates, he found a “mean group size” of 147.8 for humans, a figure he found to match census data on village and tribe sizes in many cultures.
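
Out of curiosity, here is a minimal sketch of that calculation in Python. The regression coefficients below (and the human neocortex ratio of roughly 4.1) are the figures commonly cited for Dunbar’s analysis rather than numbers taken from the sources quoted here, so treat it as an illustration, not a reproduction of his method:

    import math

    # Commonly cited regression fitted on Dunbar's primate data:
    # log10(group size) = 0.093 + 3.389 * log10(neocortex ratio)
    def predicted_group_size(neocortex_ratio):
        return 10 ** (0.093 + 3.389 * math.log10(neocortex_ratio))

    # A human neocortex ratio of about 4.1 yields roughly 147.8,
    # which is where the rounded "Dunbar's number" of 150 comes from.
    print(round(predicted_group_size(4.1), 1))  # -> 147.8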

So that’s the basis of the Dunbar’s number of 150 relationships. But as Christopher Allen has done well to point out, reducing Dunbar’s research to just one number would be misleading. As he concludes: The “Dunbar’s group threshold of 150 applies more to groups that are highly incentivized and relatively exclusive and whose goal is survival.”

Similarly, boyd sums up Dunbar’s point quite well:

Just as monkeys groomed to maintain their networks, humans gossiped to maintain theirs! He found that the MAXIMUM number of people that a person could keep up with socially at any given time, gossip maintenance, was 150. This doesn’t mean that people don’t have 150 people in their social network, but that they only keep tabs on 150 people max at any given point.

So even if I’m casually surfing through loads of status updates and photos on Facebook, oftentimes I’m not actually maintaining my relationships with these people, since I’m lacking the relevant social context to make sense of the information offered to me. To use a phrase of Clay Shirky’s, I am eavesdropping on a public conversation that I have little intention of participating in.

In this way, Facebook relays gossip that otherwise wouldn’t be available to me directly. As a social tool, it allows my relations to pass on information that otherwise wouldn’t reach me. But the problem is that while it allows people to pass on information, it is often very bad at letting them control which information is available to whom. As boyd puts it:

Our relationships have a context to them, not just a strength. That context is crucial for many distributions of information, support and trust. (…) [Social networking sites] expose more about us to different groups of people than we would ever do in real life. All of a sudden, we have to reconcile the bar-hopping facet of our identity with the proper work facet.

Basically, Facebook is offering more social information about us than we would otherwise give out. (Yes, it’s technically possible to stop this by using the privacy settings – but nobody can figure those out anyway, partly because it is an unnatural thing to consciously set up such filters, and partly because you can’t get an easy overview of who can access a given piece of content on your profile.)

And that really puts a lot of basic social relations in flux.

As Clay Shirky concludes in this brilliant presentation: It is not the fact that we’re presented with too much information – it’s the fact that our old social filters no longer work. Fundamentally, social tools like Facebook are challenging age-old social norms about who told what to whom. And the challenge seems to be to find new ways – both technical and social – to filter the vast amounts of social information suddenly made available to us.

UPDATE: Many of these issues have been discussed very poignantly in this New York Times article. The conclusion hits these themes very well:

Young people today are already developing an attitude toward their privacy that is simultaneously vigilant and laissez-faire. They curate their online personas as carefully as possible, knowing that everyone is watching – but they have also learned to shrug and accept the limits of what they can control.

It is easy to become unsettled by privacy-eroding aspects of awareness tools. But there is another – quite different – result of all this incessant updating: a culture of people who know much more about themselves. Many of the avid Twitterers, Flickrers and Facebook users I interviewed described an unexpected side-effect of constant self-disclosure. The act of stopping several times a day to observe what you’re feeling or thinking can become, after weeks and weeks, a sort of philosophical act. It’s like the Greek dictum to “know thyself,” or the therapeutic concept of mindfulness. (Indeed, the question that floats eternally at the top of Twitter’s Web site – “What are you doing?” – can come to seem existentially freighted. What are you doing?) Having an audience can make the self-reflection even more acute, since, as my interviewees noted, they’re trying to describe their activities in a way that is not only accurate but also interesting to others: the status update as a literary form.

This notion of the status update as a literary form has also been explored recently by Nadja, whom I share office space with at Socialsquare, in this longish article (in Danish).