sleeping alone and starting out early

an occasional blog on culture, education, new media, and the social revolution. soon to be moved from

Archive for the ‘collective intelligence’ Category

‘blogging is not serious writing’: Oh, re-he-he-he-heallllly?

Posted by Jenna McWilliams on September 27, 2009

file under: you can’t be serious.

Blogging, writes Jose Quesada over at the Academic Productivity blog, is not serious writing. Quesada references Jaron Lanier’s essay, “Digital Maoism: The Hazards of the New Online Collectivism,” in which Lanier argues that

writing professionally and well takes time and that most authors need to be paid to take that time. In this regard, blogging is not writing. For example, it’s easy to be loved as a blogger. All you have to do is play to the crowd. Or you can flame the crowd to get attention. Nothing is wrong with either of those activities. What I think of as real writing, however, writing meant to last, is something else. It involves articulating a perspective that is not just reactive to yesterday’s moves in a conversation.

Far from challenging either the notion that “writing meant to last” is never “just reactive” or the notion that blogposts are merely reactive and not meant to last, Quesada agrees with Lanier’s stance and adds that

[a]ll academics are painfully aware that writing well takes time, and some know that writing well is not a prerequisite for having a successful blog.

So, basically, it doesn’t pay off to painfully slowly distill ideas for a blog post. In a sense, consuming blog posts–let alone microblogging 140-character blurbs–warrants you a so-so level of refinement…. Playing to the crowd–what bloggers must do, according to Lanier–does not require incredibly solid thinking; it’s a completely different skill.

Truly, I’ve had enough of this outdated stance with respect to blogs. It’s worth pointing out that Lanier’s essay dates back to 2006–eons ago, from the perspective of the social revolution. Here in 2009, blogs have come into their own as spaces for serious engagement with serious ideas. (Author update 9/27/09, 11:18 PM: Not to press too hard on this issue, but Lanier’s essay is so outdated that it refers to Wikipedia as “the Wikipedia”–not once, not twice, but twenty-one times. Just imagine the alternate universe where we talk about looking up information on the Wikipedia–akin to tweeting on the Twitter or posting a new status update on the Facebook. That would make for a very different the America, that’s for sure.)

Academics have embraced the platform in a variety of ways. Media scholar Henry Jenkins uses his blog for the presentation and exchange of serious ideas. Over at Tiger Beatdown, Sady Doyle takes on the outrages of a deeply sexist society with a playful tone (she explains her blog is about “ladybusiness”) that only heightens her deeply effective expression of rage. HASTAC co-founder and Duke University professor Cathy Davidson uses her blog to work through key issues (social media, literacy practices, academia) in an informal, inviting, colloquial tone. Though I’ve only offered three examples, academics are in fact embracing the weblog in their own interesting ways by the dozens–by the hundreds, perhaps by the thousands.

Quesada argues that “blogging will do nothing in an academic CV.” I couldn’t disagree more. While it may be true that blogposts don’t yet count as “serious” academic discourse on par with publication in peer-reviewed journals, not having a blog is increasingly a glaring omission, especially for academics who are or should be focused on the role of social media within their discipline (which is to say just about every academic).

Career advancement issues aside, Quesada seems to be arguing that producing thoughtful, intellectually challenging blogposts is not a productive enterprise for academics–that if they choose to blog, they should use it to reach a popular audience instead of using it to present deeper intellectual work. “What I think could work,” he writes,

is a hybrid between a focused paper (that nobody would read other than a close circle of scientists) and a blog post that ‘plays to the masses’ and tries hard to capture attention at the cost of rigor and polish.

(Shut up! the blogger in me wants to holler. At the cost of rigor and polish? Do you even read any academic blogs? *cough* *sputter* ::regains composure::)

One of the most significant obstacles to intellectual progress is the difficulty of getting interesting but new or untested ideas circulated among other thinkers–academics and non-academics alike. This is especially true for young academics (like me!) who have an awful lot to say but neither the credentials nor the years of research to back up their ideas. My work in maintaining a blog–and using it to present ideas that I think are both rigorous and fairly well polished–allows me not only to offer up my thoughts for examination by thinkers whose opinions matter to me, but also to refine, build on, or dismiss ideas based on input from others. (I got Ted Castronova to comment on my blog!) Further, when other academics whose work I admire keep a blog, I have the opportunity to weigh in on and perhaps contribute to their ideas. (I get to comment on Henry Jenkins’ blog!)

In short, academic blogs drop the barriers to participation in productive, valuable and meaningful ways–and the more seriously academics take this platform, the more likely it is that blogs will increase in significance (incidentally upping the odds that blogging will come to mean something on an academic CV).

We would do well to remember that academic productivity is about much more than finding ways to get your work done efficiently. It’s also about being a productive member of a larger community of thinkers and researchers, all of whom benefit from the wider circulation of more ideas, from more people, in more participatory ways.

Posted in academia, academics, blogging, collective intelligence, distributed cognition, Henry Jenkins, participatory culture, writing | 25 Comments »

why I am a technological determinist

Posted by Jenna McWilliams on August 26, 2009

I’m fascinated by danah boyd’s recent post intended for the New Media Consortium’s upcoming Symposium for the Future. In her post, she cautions new media theorists to avoid what she labels “technological determinism.” She explains:

Rejecting technological determinism should be a mantra in our professional conversations. It’s really easy to get in the habit of seeing a new shiny piece of technology and just assume that we can dump it into an educational setting and !voila! miracles will happen. Yet, we also know that the field of dreams is merely that, a dream. Dumping laptops into a classroom does no good if a teacher doesn’t know how to leverage the technology for educational purposes. Building virtual worlds serves no educational purpose without curricula that connects a lesson plan with the affordances of the technology. Without educators, technology in the classroom is useless.

boyd’s point is well taken, though I’d be hard pressed to find a single new media scholar who embraces the kind of technological determinism she describes in the above passage. There may have been a time when the “if we build it, they will come” mindset was commonplace, but virtually no serious thinker I have encountered, either in person or in text, actually believes that new media technologies can or should offer quick fixes to society’s ills.

The problem, as I see it, is a two-part one. The first issue is one of terminology: Increasingly, we talk about “technology” as the set of tools, platforms, and communication devices that have emerged from the rise of the internet. This is useful insofar as it allows new media thinkers to converge as members of a field (typically labeled something like digital media and learning or the like), but it does so at the expense of the long, complicated, and deeply intertwined history of technologies and what we call “human progress.” In truth, social media platforms are an extension of communications technologies that reach back to the beginning of human development–before computers, television, motion pictures, radio, before word processing equipment, to telegraphs, typewriters, Morse code, pencils, paper, the printing press…all the way back to the very first communication technology, language itself.

“Technology” is not a monolith, and there is a distinct danger in presenting it as such, as boyd does in her final paragraph:

As we talk about the wonderfulness of technology, please keep in mind the complexities involved. Technology is a wonderful tool but it is not a panacea. It cannot solve all societal ills just by its mere existence. To have relevance and power, it must be leveraged by people to meet needs. This requires all of us to push past what we hope might happen and focus on introducing technology in a context that makes sense.

The second problem is a rhetorical one. New media theorists have found themselves engaged in a mutually antagonistic dance with those who prefer to focus on what they see as the negative cultural effects of digital technologies. For better or worse, people engaged directly in this dance find themselves coming down more firmly than they might otherwise in one of these camps and, because the best defense is a good offense, staking out a more strident position than they might take in private or among more like-minded thinkers. Thus, those who dislike Twitter feign disdain, repulsion, or fear and are labeled (or label themselves) luddites; and those who like Twitter find themselves arguing for its astronomical revolutionary potential and are labeled (or label themselves) uncritical utopianists.

In fact, media theorists have been targets of the “technological determinism” accusation for so long that they refuse to acknowledge that technologies actually can and often do determine practice. Homeric verse took the structure it did because the cadences were easy for pre-literate poets and orators to remember. The sentences of Hemingway, Faulkner, and many of their literary contemporaries shortened up because they needed to be sent by telegraph–leading to a key characteristic of the Modernist movement. The emergence of wikis (especially, let’s face it, Wikipedia) has led to a change in how we think about information, encyclopedias, knowledge, and expertise.

A more accurate–but more complex and therefore more fraught–way to think about the relationship between humans and their technologies is that each acts on the other: We design technologies that help us to communicate, which in turn impact how we communicate, and when, and why, and with whom. Then we design new technologies to meet our changing communications needs.

Again, virtually no media theorist that I know of would really disagree with this characterization of our relationship to technologies–yet say it too loudly in mixed company, and you’re likely to get slapped with the technological determinism label. I say this as someone who has been accused more than once, and in my view wrongly, of technological determinism.

Overly deterministic or not, however, I agree with boyd that technologies do not offer a panacea. More importantly, she argues against the use of terms like “digital natives” and, presumably, its complement, “digital immigrants.” These are easy terms that let us off the hook: people under 30 get something that people over 30 will never understand, and there’s nothing you can do about this divide. As boyd explains,

Just because many of today’s youth are growing up in a society dripping with technology does not mean that they inherently know how to use it. They don’t. Most of you have a better sense of how to get information from Google than the average youth. Most of you know how to navigate privacy settings of a social media tool better than the average teen. Understanding technology requires learning. Sure, there are countless youth engaged in informal learning every day when they go online. But what about all of the youth who lack access? Or who live in a community where learning how to use technology is not valued? Or who tries to engage alone? There’s an ever-increasing participation gap emerging between the haves and the have-nots. What distinguishes the groups is not just a question of access, although that is an issue; it’s also a question of community and education and opportunities for exploration. Youth learn through active participation, but phrases like “digital natives” obscure the considerable learning that occurs to enable some youth to be technologically fluent while others fail to engage.

The key question on the minds of researchers in digital media and learning is not (or should not be) how we can get computers in the hands of every student but how we can support participation in the valued practices, mindsets, and skillsets that go along with a networked, digital society. Answering this question well requires an ability to engage in the complex, thorny, and socially charged issues that boyd and others have identified in their research and writings. It requires development of a common language within the broad digital media and learning community and an ability to communicate that language to the vast range of stakeholders who are paying attention to what we say and how we say it.

Related posts by other writers:

danah boyd: Some thoughts on technophilia
Kevin Kelly: Technophilia

Posted in academics, collective intelligence, danah boyd, education, new media, participatory culture, public schools, schools, social media, social revolution, Twitter | 1 Comment »

stop saying ‘ATM machine,’ and other exhortations of a participatory culture theorist

Posted by Jenna McWilliams on August 21, 2009

I hate grammatical redundancy. Some of the best examples of this are:

  • ATM Machine (Automated Teller Machine Machine)
  • PIN Number (Personal Identification Number Number)
  • ISBN Number (International Standard Book Number Number)

There’s actually a term for this: RAS syndrome, or Redundant Acronym Syndrome syndrome.

“But,” said my buddy Dan, with a look of pure glee, “you say ATM Machine like everyone else, right?”

“I do not,” I answered. And I don’t.

“That’s a dilemma,” Dan said, still gleeful. “The English major part of you conflicting with the participatory culture theorist, who says that whatever the people decide is right.”

He was ribbing me, but in truth it’s a fair enough critique. After all, some of the most influential books on participatory culture and the social revolution include the following titles, all of which intentionally fly in the face of common attitudes toward morality, ethics, and human progress:

Here Comes Everybody (Clay Shirky)
The World is Flat (Thomas Friedman)
Wikinomics: How Mass Collaboration Changes Everything (Don Tapscott and Anthony D. Williams)
Tribes: We Need You to Lead Us (Seth Godin)

And, I’ll just admit it, my blog is absolutely peppered with sweeping declarations: Print journalism isn’t viable. Young people are leading the social revolution. The question isn’t ‘is it moral?’, but ‘is it popular?’

Why, after all, isn’t the question ‘is it moral?’ Simply put, most of the time when people ask that question about aspects of the social revolution, what they’re actually asking is more along the lines of ‘is this better or worse than the experiences and culture I’m used to?’ This is a matter of personal preference, and there’s no accounting for taste.

Some of my friends think wearing a wristwatch makes it easier for them to make it to their meetings on time; some of my friends think watches just make them more time-conscious and anxious. If suddenly a critical mass of people started wearing watches and pressuring the rest of their culture to wear watches too, some of my friends would be thrilled (everybody will have to be on time now!), some would be upset (we’re all going to start caring more about the time than about each other!), and some wouldn’t care at all (*shrug* it’s just another tool to help me get through my day.).

Some people think online social networks signal the decline of community. Some people think new, valuable community structures have emerged around these networks. And some people just think online Scrabble is a fun way to spice up a boring work day. All of these people are right, but arguing about whether we’re better or worse off (or the same) is pointless. Is a wristwatch-bearing culture better than one that uses sundials? Your answer depends on a lot of things: whether you make your living off of sundial manufacturing; whether you can personally afford a watch; whether you were someone who cared a lot about keeping track of the time in the first place; and whether you think a watch looks good on your wrist.

Please don’t accuse me of absolute moral relativism, though; even participatory culture theorists have their limits. It’s wrong to force everyone to wear wristwatches, for example, just as it’s wrong to ban sundials. If democracy, freedom of the press, or free speech falter when print journalism hits its death throes, I will be among the throngs calling for social change. Participatory media platforms tend, as all previous platforms have, to silence certain groups (nonwhites, nonstraights, older participants, less educated [formally or informally] participants); this is painful and wrong.

And RAS syndrome will always be wrong, no matter what percentage of the population adopts the phrase “ATM machine.”

Footnote upon the construction of the masses:
some people are young and nothing
else and
some people are old and nothing
and some people are in between and
just in between.

and if the flies wore clothes on their
and all the buildings burned in
golden fire,
if heaven shook like a belly
and all the atom bombs began to
some people would be young and nothing
else and
some people old and nothing
and the rest would be the same
the rest would be the same.

the few who are different
are eliminated quickly enough
by the police, by their mothers, their
brothers, others; by

all that’s left is what you


(Charles Bukowski, The Days Run Away Like Wild Horses over the Hills, 1969)

Posted in Clay Shirky, collective intelligence, cults, culture, journalism, language, participatory culture, social media, social revolution | Leave a Comment »

where to move if you want to survive the zombie apocalypse

Posted by Jenna McWilliams on August 18, 2009

the first in a two-part series on how to survive in case of zombie invasion

Though theories on this vary, it seems safe to assume that when the zombie apocalypse comes (as come it certainly will), it will start slow and pick up steam quickly in a fairly predictable pattern. We’ve seen this pattern before in the emergence of previous epidemics, including the bubonic plague, smallpox, HIV, and swine flu.

The difference, of course, is that traditional precautions–handwashing, safe sex, and face masks–won’t protect you in the event of a zombie epidemic. Here’s what you’ll need to survive the zombie apocalypse: Guns, food and water, and access to new media. Through a complex triangulation system that accounts for these key factors, I have pinpointed the geographic location that offers the highest chance of survival: Mobile, Alabama.

Guns: Priority Number One
Because of their effectiveness in destroying brains from a safe distance, guns are by far the most effective weapon against zombies. This means, of course, that your best bet for survival is to reside in the United States. With 90 guns per 100 people, America leads the entire world in small arms ownership–which is a steaming hot pile of insanity during civilized times but a cache of awesomeness when the zombies invade. Through an accompanying world-record firearm-related death rate, America has also proven its collective ability to aim for the whites (or, as the case may be, the sickly yellows) of their eyes.

The deadliest states also, coincidentally enough, happen to be those with the highest gun ownership rates: Louisiana (45.6%), Alabama (57.2%), Alaska (60.6%), Mississippi (54.3%) and Nevada (31.5%). This makes it easy to narrow the field of competitors for Safest City in Case of Zombie Apocalypse.

Food and New Media: More Closely Linked than Previously Thought
All the guns in the world won’t save you if you don’t know how to deploy them. Even though the majority of U.S. residents are at least passingly familiar with what a zombie is and how to kill it, it seems fairly likely that the first wave of human casualties will stem from surprise-induced paralysis. Early survivors will be those among us who are naturally attuned to running from danger.

That’s right, the geeks will survive us all.

And what do you think they’ll do first? Why, head to their technology, of course. It’s likely that the first reports of the zombie apocalypse will spread via Twitter, Facebook, or user forums on free software sites. Alert social media users will be able to stay informed about the invasion, learn from the early failures and successes of human resistance, and prepare themselves for the onslaught. Preparations will include gathering the abovementioned weaponry, along with sufficient supplies to allow survivors to outlast the epidemic. As zombies, contrary to some reports, don’t die off because they are already dead, the epidemic is likely to last a long time.

We have lost our ability to grow or hunt for our own food, and this is especially true of the geeks among us. In general, however, geeks are highly adept at foraging, given the right circumstances. For geeks, the right circumstances include: a supermarket. Additionally, while it’s possible that internet access will not outlast the zombies, survival odds increase for those who have prolonged access to networked technologies. That, in all likelihood, means a major metropolitan area. That rules out Alaska (unless–and this seems unlikely–zombies prove vulnerable to cold).

The Finalists: Louisiana, Alabama, Mississippi, Nevada
Of the remaining states with large weapons stores, we can rule out Nevada’s major cities, Vegas and Reno, for the obvious reason that zombies have already invaded them. This leaves just three states: Louisiana, Alabama, and Mississippi. While the survival odds for residents of any of these states are approximately even, one last factor serves as a tie-breaker: the relative health of their citizens. All three states rank near the bottom of the life expectancy scale and at or near the top in obesity rates.

The Winner: Mobile, Alabama

With a population of just under 200,000 and an ideal seaside location in one of the most gun-friendly states in the U.S., Mobile offers food, shelter, a temperate climate, access to cubic tons of water that’s just begging for desalination, and enough firepower to blow the heads off of as many zombies as can find their way to this southern city. As an added bonus, Mobile boasts a subtropical climate that’s ideal for producing small, year-round rooftop gardens, just in case the Wal-Marts, Save-A-Lots, Winn-Dixies, and Circle K’s run out before the zombies do.

Tomorrow: a mathematical approach to surviving the zombie apocalypse.

Posted in collective intelligence, new media, zombies | Leave a Comment »

luddites hate jetskis

Posted by Jenna McWilliams on July 18, 2009

Today my sister and I almost missed the opening scene of Harry Potter and the Half Blood Prince because she misread her watch. I don’t wear a watch, see, and she wears an old-fashioned analog wristwatch, so it was her job to keep track of time.

As our timekeepers get increasingly digital, it appears, we have a tendency toward being less capable of quickly interpreting analog time markers. So at 1:00, she thought her watch said noon. She caught her error five minutes before the show was scheduled to start and thanks to our ability to bustle when required and theaters’ tendency to start movies much later than scheduled, we got there with enough spare time for me to get my popcorn and for my sister to settle her smuggled-in candy on her lap before the previews started rolling.

The argument that relying on technologies makes us dumber is not a new one; Plato kinda started it by opposing writing because he believed that it would

introduce forgetfulness into the soul of those who learn it: they will not practice using their memory because they will put their trust in writing, which is external and depends on signs that belong to others, instead of trying to remember from the inside, completely on their own. You have not discovered a potion for remembering, but for reminding; you provide your students with the appearance of wisdom, not with its reality. Your invention will enable them to hear many things without being properly taught, and they will imagine that they have come to know much while for the most part they will know nothing. And they will be difficult to get along with, since they will merely appear to be wise instead of really being so.

It was downhill from there, of course; and it may be that we hit bottom, at least in terms of networked technologies, with Nicholas Carr’s June/July 2008 Atlantic piece, “Is Google Making Us Stupid?”

In considering the changes to his own orientation toward text (he’s less able to read lengthy articles or books; he gets fidgety when he tries to focus on one text for an extended period of time), he writes:

The advantages of having immediate access to such an incredibly rich store of information are many, and they’ve been widely described and duly applauded. “The perfect recall of silicon memory,” Wired’s Clive Thompson has written, “can be an enormous boon to thinking.” But that boon comes at a price. As the media theorist Marshall McLuhan pointed out in the 1960s, media are not just passive channels of information. They supply the stuff of thought, but they also shape the process of thought. And what the Net seems to be doing is chipping away my capacity for concentration and contemplation. My mind now expects to take in information the way the Net distributes it: in a swiftly moving stream of particles. Once I was a scuba diver in the sea of words. Now I zip along the surface like a guy on a Jet Ski.

In fact, in drafting this post I zipped along the surface of multiple different texts, from Plato’s Phaedrus to Carr’s piece on Google to Jamais Cascio’s response piece in this month’s Atlantic, “Get Smarter.” (It argues that technologies and pharmacology can help boost our intelligence.) I may not know what swims beneath the surface of any of these pieces, but I am familiar enough with all of them to use my spare cognitive energy and time to craft a blogpost that links the three. And I did it by typing (without watching the keys) at a rate of approximately 100 words per minute. I employed some basic html code, some of which I know by heart and some of which I keep on an electronic clipboard. I was able to publish it immediately, to the delight or dismay or general apathy of my intended reading public. I could (and, if you’re reading this, probably did) direct traffic to this post via Twitter, Facebook, or any number of other blogs.

God knows I could have spent the time reading Plato’s Phaedrus in its entirety, and I’m not disputing that I would have been enriched by the experience. But you can’t argue that what I did with my time instead (synthesizing, devising an argument, increasing familiarity with html basics, crafting the argument with an intended public in mind, then circulating it among that intended audience) was not an enriching experience.

Back to the jet ski metaphor: Comedian and philosopher Daniel Tosh argues that it’s impossible to be unhappy on a jetski. “You ever seen a sad person on a waverunner? Have you? Seriously, have you?…Try to frown on a waverunner.”

Watch the clip till the end. He talks about how people smile as they hit the pier–and they hit the pier because you’re supposed to hit the gas to turn–“it goes against natural instinct,” he says. Well, maybe at first, but once you get the hang of it, I imagine you learn how to use the gas in ways that keep you from hitting the pier. It’s just that most of us hit the pier once and once is enough: we stick to dry land, which is safer but far less fun.

Okay, I’ll confess: This entire post is really just a plug for Daniel Tosh’s amazing new show, Tosh.0. It airs Thursdays at 10:00 P.M. ET (9:00 Pacific) on Comedy Central, and it may be the funniest half-hour show I’ve ever seen in my entire life. Even so, it might get canceled because of low viewership. Please just give it a try. I guarantee you’ll laugh out loud at least once or your money back.


Posted in collective intelligence, creativity, culture, distributed cognition, humor, joy, movies, television, Twitter | Leave a Comment »

opening up scholarship: generosity among grinches

Posted by Jenna McWilliams on July 5, 2009

why academic research and open exchange of ideas are like that bottle of raspberry vinaigrette salad dressing you’ve had in the back of your fridge since last summer

The folks over at Good Magazine are tossing up a series of blogposts under the heading “We Like to Share.”

The articles are actually a series of interviews with creative types in a variety of fields who share one characteristic: they believe that sharing of ideas and content is valuable and important. The edited interviews are being posted by Eric Steuer, the Creative Director of Creative Commons–a project which, though I admittedly don’t fully understand it, I find deeply ethical and innovative with respect to offering new approaches to sharing and community.

So far, two posts have gone up, the first with Chris Hughes, a co-founder of Facebook and the former online strategist for the Obama presidential campaign, and the second with Flickr founder Caterina Fake. Talking about how much we’ve changed in our attitudes toward sharing, Fake explains that

[i]f you go online today you will see stories about Obama sharing his private Flickr photos. So this is how far the world has come: our president is sharing photos of his life and experiences with the rest of the world, online. Our acceptance of public sharing has evolved a lot over the course of the past 15 years. And as people became increasingly comfortable sharing with each other—and the world—that lead to things that we didn’t even anticipate: the smart mob phenomenon, people cracking crimes, participatory media, subverting oppressive governments. We didn’t know these things were going to happen when we created the website, but that one decision—to make things public and sharable—had significant consequences.

Hughes’ interview is less overtly about sharing as we typically think of the term, but he points out that the Obama campaign was successful because it focused on offering useful communications tools that lowered barriers to access and then

getting out of the way of the grassroots supporters and organizers who were already out there making technology the most efficient vehicle possible for them to be able to organize. That was a huge emphasis of our program: with people all over the place online—Facebook, MySpace, and a lot of other different networks—we worked hard to make sure anyone who was energized by the campaign and inspired by Barack Obama could share that enthusiasm with their friends, get involved, and do tangible things to help us get closer to victory. The Obama campaign was in many ways a good end to the grassroots energy that was out there.

Both interviews, as far as they go, offer interesting insights into how sharing is approached by innovators within their respective spheres. But though these posts present their subjects as bold in their embrace of sharing and community, their ideas about what sharing means and how it matters are woefully…limited. Fake uses the Obama example to point out how far we’ve come; but really, does Obama’s decision to make public photos of his adorable family mean much more than that he knows how to maintain his image as the handsome, open President who loves his family almost to a fault? I don’t imagine we’d be very surprised to learn that Obama’s advisors counseled him to make these photos widely available.

Indeed, the Flickr approach, in general, is this: These photos are mine and I will let you see them, but you have to give them back when you’re done. It’s a version of sharing, yes, but only along the lines of the sharing we learned to do as children.

The same is true of the picture Hughes paints of a campaign that successfully leveraged social networking technologies. The Obama campaign’s decision to use participatory technologies was a calculated move: Everybody knows that a.) more young, wired, and tech-savvy people supported Obama than McCain; and b.) those supporters required a little extra outreach in order to line up at the polls on election day. You can bet that if Republicans had outnumbered Democrats on Facebook, Obama’s managers would have been a little less quick to embrace these barrier-dropping communication tools.

What we’re not seeing so far among these innovators is an innovative approach to sharing–one that opens up copyrightable, patentable, and therefore economically valuable ideas and content to the larger community.

I’ve been thinking about this lately because of my obsession with open education and open access. In particular, educational researchers–even those who embrace open educational resources–struggle with the prospect of making their work available to other interested researchers.

This makes sense to anyone who’s undertaken ed research–prestige, funding, and plum faculty positions (what little there is of any of these things) are secured through the generation of innovative, unique scholarship and ideas, and ideas made readily available are ideas made readily stealable. As a fairly new addition to the field, even I have been a victim of intellectual property theft. It’s enough to give a person pause, even if, like me, you’re on open education like Joss Whedon on strong, feminist-type leading ladies.

But, come on, we all know there’s no point to hiding good research from the public. As Kevin Smith writes in a recent blogpost on a San Jose State University professor who accused a student of copyright violation for posting assigned work online,

[t]here are many reasons to share scholarship, and very few reasons to keep it secret. Scholarship that is not shared has very little value, and the default position for scholars at all levels ought to be as much openness as is possible. There are a few situations in which it is appropriate to withhold scholarship from public view, but they should be carefully defined and circumscribed. After all, the point of our institutions is to increase public knowledge and to put learning at the service of society. And there are several ways in which scholars benefit personally by sharing their work widely.

Smith is right, of course, and the only real issue is figuring out strategies for getting everybody on board with the pro-sharing approach to scholarship. The “I made this and you can see it but you have to give it back when you’re done” model is nice in theory but, in practice, limits innovation and progress in educational research. A more useful approach might be along the lines of: “I made this and you can feel free to appropriate the parts that are valuable to you, but please make sure you credit my work as your source material.” This is a key principle at the core of the open education approach and of what media scholar Henry Jenkins calls “spreadability.”

The problem is that there are enough academics who subscribe to the “share your toys but take them back when you’re done playing” approach to research that anybody who embraces the free-appropriation model of scholarship ends up getting every toy stolen and has to go home with an empty bag. This is why the open education movement holds so much promise for all of academia: Adherents to the core values of open education agree that while we may not have a common vocabulary for the practice of sharing scholarship, we absolutely need to work to develop one. For all my criticisms of the OpenCourseWare projects at MIT and elsewhere, one essential aspect of this work is that it opens up a space to talk about how to share materials, and why, and when, and in what context. The content of these projects may be conservative, but the approach is wildly radical.

Posted in academia, academics, collective intelligence, Henry Jenkins, intellectual property, MIT, open education, open source, President Obama, spreadability | 2 Comments »

why you should invite me to your next party

Posted by Jenna McWilliams on May 28, 2009

(hint: because I will entertain your guests with talk of the social revolution)

I was at a party last week when someone asked me what I do for a living. I used the opportunity to engage in what, in retrospect, may have been an ill-timed impromptu pronouncement about the status of the social revolution.

It turns out I’ll need to rethink how I use that phrase “social revolution,” at least in mixed company, because a tubby drunk man wearing a confusing hat walked up to me and tried to steer the conversation toward war atrocities.

“You can’t tell me,” he bellowed, “that the atrocities that are happening during the Iraq War are any different from the ones that happened during World War II. It’s just that we have more media coverage now.”

As I wrote in an earlier post, this is what I’ve decided to call the Space Odyssey mistake. This particular kind of error is explained by Clay Shirky, who describes a scene from 2001 in which

space stewardesses in pink miniskirts welcome the arriving passenger. This is the perfect, media-ready version of the future–the technology changes, hemlines remain the same, and life goes on much as today, except faster, higher, and shinier.

Lately I’ve been finding Christopher Kelty’s notion of a “recursive public” useful in thinking about what, other than hemlines, has changed. As Kelty describes it in Two Bits (available for download, online browsing, and modulation for free online),

A recursive public is a public that is vitally concerned with the material and practical maintenance and modification of the technical, legal, practical, and conceptual means of its own existence as a public; it is a collective independent of other forms of constituted power and is capable of speaking to existing forms of power through the production of actually existing alternatives.

More to the point, a recursive public is a group of people who exist outside of traditional institutions (governments, churches, schools, corporations) and, when necessary, use this outsider status to hold these entities in check. The engagement of these publics goes far beyond simply protesting decisions or stating their opinions. Kelty, writing about geek culture as a recursive public, explains it thus:

Recursive publics seek to create what might be understood, enigmatically, as a constantly “self-leveling” level playing field. And it is in the attempt to make the playing field self-leveling that they confront and resist forms of power and control that seek to level it to the advantage of one or another large constituency: state, government, corporation, profession. It is important to understand that geeks do not simply want to level the playing field to their advantage—they have no affinity or identity as such. Instead, they wish to devise ways to give the playing field a certain kind of agency, effected through the agency of many different humans, but checked by its technical and legal structure and openness. Geeks do not wish to compete qua capitalists or entrepreneurs unless they can assure themselves that (qua public actors) they can compete fairly. It is an ethic of justice shot through with an aesthetic of technical elegance and legal cleverness.

This is precisely the difference between 1945 and 2009. It’s not just that we have more media coverage but that, as Shirky proclaims, everybody is a potential media outlet–everyone has the potential to join a recursive public, whether impromptu or planned.

In fact, the notion that we can all engage in reportage is perhaps a bit too simplistic, at least until we can adjust what we mean by “journalism.” When Facebook users joined up in opposition to a change in Facebook’s terms of service and successfully pressed administrators to rethink and reword the terms of service agreement, that was the work of a recursive public, loosely banded and easily disbanded once its purpose had been achieved (if necessary, it will quickly gather again in its virtual space and just as quickly disband). We don’t recognize this as journalism, and often don’t even recognize it as civic engagement–but for those who joined this Facebook knotwork, it’s certainly some kind of engagement. And what could be more civic-minded than fighting to define the uses of a public space?

The atrocities of war are approximately the same (though, as always, new technologies mean new modes of torture and murder). What’s different is the following:

All in all, it was a good party. Near the end, someone produced a Donald Rumsfeld piñata. We were going to hoist it up and smash it, but it seemed kind of…irrelevant.

Posted in Clay Shirky, collective intelligence, Facebook, journalism, new media, open source, participatory culture, President Obama, Project New Media Literacies, social justice, social revolution | Leave a Comment »

if I can’t be free of bullies, I don’t want to be part of your revolution

Posted by Jenna McWilliams on May 10, 2009

In an interesting show of poor timing, the New York Times celebrates Mother’s Day by considering why female executives are such obnoxious bullies.

It turns out female bosses are perceived as bullies almost as commonly as male bosses are. A full 40 percent of workplace bullies are women, and 70 percent of the time, they choose women as their targets.

This, of course, comes as no surprise to most working stiffs out there. Bullying from bosses knows no gender. But when it comes to examining why women are viewed as bullies, and how their “bullying” behavior compares to the behavior of male bosses, things get a little complicated.

The Times first considers this phenomenon from a pure numbers standpoint. One reason women bully may be because it’s still excruciatingly difficult for them to break into the upper echelons of the country’s top corporations:

After five decades of striving for equality, women make up more than 50 percent of management, professional and related occupations, says Catalyst, the nonprofit research group. And yet, its 2008 census found, only 15.7 percent of Fortune 500 officers and 15.2 percent of directors were women.

The article also suggests, though, that gender stereotypes make us more likely to see a female boss as “overly aggressive” than a male boss engaging in the same kind of behavior.

Research on gender stereotyping from Catalyst suggests that no matter how women choose to lead, they are perceived as “never just right.” What’s more, the group found, women must work twice as hard as men to achieve the same level of recognition and prove they can lead.

Yes, okay, fair enough. But let’s look at it another way: If assertive or aggressive female bosses are more likely to be perceived as bullies, then we can assume that the female employees who largely perceive themselves as targets are also victims of stereotypes. If female bosses are perceived as bitchy or pushy for asserting themselves too strongly, then female employees are likely to be perceived as whiny or gossipy for complaining about behavior that feels inappropriate or excessively spiteful or unjust. This is why lots of woman-on-woman bullying, I believe, never gets reported. Or, if it does, it gets reported only when the working relationship is so bad that one or both women are probably on their way out.

Then there’s the issue of the male heads of organizations, the people who often adjudicate bullying complaints. Many of these men self-identify as either feminist or sympathetic to the feminist movement. Many have done what they could to help their female employees advance. They know how hard it is to be a female boss, and because of this they’re likely to support an embattled woman even in the face of multiple accusations of bullying. First, they may carry around that father complex, the one that makes them want to take care of the ladies who need them; second, their politics require them to defend the woman from charges of bullying because it’s just so hard to be a female boss. This, we might say, is the soft bigotry of kneejerk feminism.

Fortunately, a change is on the horizon. The emergence of participatory cultures and new valued practices means we can and must develop new models for formal and informal organizations. Increasingly, effective collaboration, collective meaning-making, and the ability to tap into expertise that’s distributed across networks of people and tools are far more important than being the single visionary of a company. The old, single-genius model is less and less relevant, and bosses–male or female–who adhere to this model will bully themselves right out of a job as the social revolution takes hold.

Posted in bigotry, collective intelligence, distributed cognition, feminism, participatory culture, politics, social justice, social revolution | Leave a Comment »