sleeping alone and starting out early

an occasional blog on culture, education, new media, and the social revolution. soon to be moved from http://jennamcwilliams.blogspot.com.

Archive for the ‘social media’ Category

short-sighted and socially destructive: Ning to cut free services

Posted by Jenna McWilliams on April 15, 2010

Lord knows I’m not a huge fan of Ning, the social networking tool that allows users to create and manage online networks. I find the design bulky and fairly counterintuitive; modifying a network to meet your group’s needs is a serious challenge; and Ning has made it difficult or impossible for users to control, modify, or move network content. And despite the popularity of Ning’s free, ad-supported social networks among K-16 educators, the ads that come with the free service have tended toward the racy or age-inappropriate.

But given the Ning trifecta–it’s free, getting students signed up is fast and fairly easy, and lots of teachers are using it–I’ve been using Ning with researchers and teachers for the last two years. So the recent news that Ning will be switching to paid-only membership is obnoxious for two reasons.

The first reason is the obvious one: I don’t want to pay–and I don’t want the teachers who use Ning to have to pay, either. One of the neat things about Ning is the ability to build multiple social networks–maybe a separate one for each class, or a new one each semester, or even multiple networks for a single group of students. In the future, each network will require a monthly payment, which means that most teachers who do decide to pay will stick to a much smaller number of networks. This means they’ll probably erase content and delete members, starting fresh each time. The enormous professional development potential of having persistent networks filled with content, conversations, and student work suddenly disappears.

Which brings me to my second point: That anyone who’s currently using Ning’s free services will be forced to either pay for an upgrade or move all of their material off of Ning. This is tough for teachers who have layers upon layers of material posted on various Ning sites, and it’s incredibly problematic for any researcher who’s working with Ning’s free resources. If we decide to leave Ning for another free network, we’ll have to figure out some systematic way of capturing every single thing that currently lives on Ning, lest it disappear forever.
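If you’re facing the same capture problem, here’s a rough sketch of what “systematic” might look like: a minimal example, assuming the pages you care about are publicly viewable, with placeholder URLs standing in for a real network’s addresses. All it does is save each page as a local HTML file so the content survives somewhere, whatever Ning decides to do.

# A hypothetical, minimal capture script: the network addresses below are
# placeholders, not real Ning URLs, and this assumes the pages are public.
import pathlib
import urllib.request

PAGES = [
    "http://example-network.ning.com/forum",
    "http://example-network.ning.com/profiles/blog/list",
    "http://example-network.ning.com/photo",
]

archive_dir = pathlib.Path("ning-archive")
archive_dir.mkdir(exist_ok=True)

for url in PAGES:
    # Derive a filename from the last part of the URL (e.g. "forum.html").
    name = url.rstrip("/").split("/")[-1] or "index"
    target = archive_dir / (name + ".html")
    try:
        with urllib.request.urlopen(url) as response:
            target.write_bytes(response.read())
        print("saved", url, "->", str(target))
    except OSError as error:
        print("could not fetch", url, ":", error)

It won’t capture members-only pages or preserve comment threads in any structured way, but it beats watching two years of student work vanish.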

Ning’s decision to phase out free services amounts to a paywall, pure and simple. Instead of putting limits on information, as paywalls for news services do, this paywall puts limits on participation. In many ways, this is potentially far worse, far more disruptive and destructive, far more short-sighted than any information paywall could be.

If Ning were smart, it would think a little more creatively about payment structures. What about offering unlimited access to all members of a school district, for a set fee paid at the district level? What about offering an educator account that provides unlimited network creation for a set (and much lower) fee? What about improving the services Ning provides to make it feel like you’d be getting what you paid for?

More information on Ning’s decision to go paid-only will be released tomorrow. For now, I’m working up a list of free social networking tools for use by educators. If you have any suggestions, I’d love to hear them.

Update, 4/15/10, 6:48 p.m.: Never one to sit on the sidelines in the first place, Alec Couros has spearheaded a gigantic, collaborative googledoc called “Alternatives to Ning.” As of this update, the doc keeps crashing because of the number of collaborators trying to help build this thing (the last time I got into it, I was one of 303 collaborators), so if it doesn’t load right away, keep trying.


Posted in education, lame, schools, social media, teaching, technologies | Leave a Comment »

help me collect information on Twitter lurkers

Posted by Jenna McWilliams on April 1, 2010

I’ve gotten interested lately in the role of lurkers within the Twitter social network. I recently posted this tweet:

Though I wasn’t specifically soliciting feedback on this issue, I received lots of responses from Twitter users who wanted to talk about how and why they use Twitter. These are, keep in mind, people who self-identify as lurkers–yet they responded to me through Twitter.

Clearly, this is something people want to talk about.

So I’m interested in finding out more.

I’ve created a short survey, intended to gather some basic information about the use of Twitter by people who consider themselves lurkers or light users of Twitter, and I’d also be thrilled to hear any thoughts you have on the phenomenon of lurking in Twitter or other online social networks, either through the survey or in comments to this post. I’ll post the results of the survey to this blog. The survey is available here.

Thanks!

Posted in participatory culture, social media, Twitter | 1 Comment »

I’m a little bit ridiculous.

Posted by Jenna McWilliams on March 26, 2010

I am, if you didn’t already know, a little bit ridiculous about certain things. For example: When I was in my early 20s, a friend referred to me as a “kneejerk reactionary” and I immediately brought the friendship to a dead stop. That it didn’t even occur to me what a caricature of myself I was being only enhances the ridiculousness.

And in the video below you can see me being ridiculous about Twitter. This clip comes from a brainstorm session populated by members of SociaLens, a new organization I’m part of whose focus is on the role of social media, communication, and community in business enterprises. The SociaLens team is a terribly smart crew, and I’m incredibly lucky to have the chance to work with these guys. The rest of the team, incidentally, is made up of Christian Briggs, Kevin Makice, Jay Steele, and Matt Snyder.

I’m including the clip here because a.) I really enjoy how ridiculously serious I am about why my colleague Matt is using Twitter wrong; b.) I’m really happy about the amount of agony Kevin put himself through in deciding whether to post the video on YouTube; c.) I think the conversation that emerged below Kevin’s post in response to his decision to put the video online is valuable and interesting. For example, Kevin writes:

As someone who is quite open online with myself and even my family, I found it interesting how much trepidation I felt over sharing this video. I edited down the clip I had to a smaller segment, mainly to shield the name of a participant organization mentioned later. The rest I chose to share without prior approval and only my own instincts to follow. It is possible that one of my colleagues might take issue with any aspect of this decision, from specific content to an absence of formality in posting it to YouTube. In some organizations, there is a policy-first approach to transparency, setting codes of conduct and other criteria for employees to follow. In other organizations, the understanding employees have about shared goals and risks will help inform individual decisions. Most importantly, failure is embraced as a chance to learn. I trust my peers, and I believe they trust me. Even if one of them requests for me to take down the clip, that trust will guard against relational catastrophe as we reflect together.

Kevin also writes about the importance of transparency and reflection within organizations, large and small. You could maybe take a look if you wanted.

Posted in awesome, business, convergence culture, humor, social media, Twitter | Leave a Comment »

devising a model for technology in education: my version of writer’s block

Posted by Jenna McWilliams on February 2, 2010



I believe the following principles to hold true:

  • Human goals are mediated by, and thenceforth only achieved through, the widespread adoption and use of new technologies.*
  • Human purposes for adopting and making use of new technologies are often highly individualized (though nearly always aligned with an affinity group, even if that group is not explicitly named and even if that group is not comprised of other members of the learning community).
  • While no educational researcher is qualified to articulate achievable goals for another human, the researcher is ethically obligated to support learners in articulating, and achieving, ethical educational goals.
  • The efficacy and success of new technologies can be measured through multiple lenses, among which only one is the achievement of mainstream educational goals as articulated and assessed through traditional, often standardized, measurement tools.

If you (a) know me, (b) follow me on Twitter or a similar social network, or (c) read my blog, you know that being at a loss for something to say just doesn’t happen to me. (On the one hand, this makes me perfectly suited to social media, blogging, and academia; on the other hand, it means I’ll mouth off about the social revolution in nearly any social situation.)

But for weeks now, I’ve been trying to devise a model to represent the role of computational technologies in education. And for weeks, I’ve been failing miserably. Here’s the closest I’ve come:

As you can see, this model is incomplete. I was in the middle of drawing an arrow from that word “technology” to something else when I realized that this model would never, ever do. So I tried to approach modelling from other perspectives. I tried backing my way in by thinking of technologies metaphorically; I tried presenting technology integration in the form of a decision tree. Which is fine, except that these don’t really work as models.

And I have to come up with a model. I do. Though I don’t often mention this, I’m not actually only a blogger. In real life, I’m a graduate student in Indiana University’s Learning Sciences Program. Because I believe in the value of public intellectual discourse, I’ve chosen to present as much of my coursework as possible on my blog or through other public, persistent and searchable communications platforms.

I will, at some future point, discuss the challenges and benefits of living up to this decision. For now, you guys, I just need to come up with a goddam model that I can live with.

I tried thinking of technologies as sleeping policemen; or, in other words, as objects that mediate our thoughts and actions and that have both intended and unintended consequences. This was a reaction to a set of readings including a chunk of Bonnie Nardi’s and Vicki O’Day’s 1999 book, Information Ecology: Using Technology with Heart; a Burbules & Callister piece from the same year, “The Risky Promises and Promising Risks of New Information Technologies for Education”; and Stahl & Hesse’s 2009 piece, “Practice perspectives in CSCL.” The theme of these writings was: We need to problematize dominant narratives about the role of technologies in education. Burbules & Callister categorize these narratives as follows:

  • computer as panacea (“New technologies will solve everything!”)
  • computer as [neutral] tool (“Technologies have no purpose built into them, and can be used for good or evil!”)
  • computer as [nonneutral] tool (the authors call this “(a) slightly more sophisticated variant” on the “computer as tool perspective”)
  • balanced approach to computer technologies (neither panacea nor tool, but resources with intended and unintended social consequences)

Nardi & O’Day, who basically agree with the categories identified above, argue for the more nuanced approach that they believe emerges when we think of technologies as ecologies, a term which they explain is

intended to evoke an image of biological ecologies with their complex dynamics and diverse species and opportunistic niches for growth. Our purpose in using the ecology metaphor is to foster thought and discussion, to stimulate conversations for action…. [T]he ecology metaphor provides a distinctive, powerful set of organizing properties around which to have conversations. The ecological metaphor suggests several key properties of many environments in which technology is used.

Which is all fine and dandy, except the argument that precedes and follows the above quote is so tainted by mistrust and despair over the effects of new technologies that it’s hard to imagine that even Nardi and O’Day themselves can believe they’ve presented a balanced analysis. Reading their description of techno-ecologies is kind of like reading a book about prairie dog ecologies prefaced by a sentence like “Jesus Christ I hate those freaking prairie dogs.”

So the description of technologies as sleeping policemen was an effort to step back and describe, with as much detachment as possible for an admitted technorevolutionary like me, the role of technologies in mediating human activity.

But the metaphor doesn’t really have much by way of practical use. What am I going to do, take that model into the classroom and say, well, here’s why your kids aren’t using blogs–as you can see (::points to picture of speed bump::), kids are just driving around the speed bump instead of slowing down….?

This became clear as I jumped into a consideration of so-called “intelligent tutors,” which I described briefly in a previous post. Or, well, the speed bump metaphor might work, but only if we can come up with some agreed-upon end point and also set agreed-upon rules like speed limits and driving routes. But the problem is that even though we might think we all agree on the goals of education, there’s actually tons of discord, both spoken and unspoken. We can’t even all agree that what’s sitting in the middle of that road is actually a speedbump and not, for example, a stop sign. Or a launch ramp.

The Cognitive Tutors described by Kenneth Koedinger and Albert Corbett are a nice example of this. Researchers who embrace these types of learning tools see them as gateways to content mastery. But if you believe, as I do, that the content students are required to master is too often slanted in favor of members of dominant groups and against the typically underprivileged, underserved, and underheard members of our society, then Cognitive Tutors start to look less like gateways and more like gatekeepers. Even the tutoring tools that lead to demonstrable gains on standard assessments, well…ya gotta believe in the tests in order to believe in the gains, right?

So I’m back to this:

A “model,” explains Wikipedia,

is a simplified abstract view of the complex reality. A scientific model represents empirical objects, phenomena, and physical processes in a logical way. Attempts to formalize the principles of the empirical sciences, use an interpretation to model reality, in the same way logicians axiomatize the principles of logic. The aim of these attempts is to construct a formal system for which reality is the only interpretation. The world is an interpretation (or model) of these sciences, only insofar as these sciences are true….

Modelling refers to the process of generating a model as a conceptual representation of some phenomenon. Typically a model will refer only to some aspects of the phenomenon in question, and two models of the same phenomenon may be essentially different, that is in which the difference is more than just a simple renaming. This may be due to differing requirements of the model’s end users or to conceptual or aesthetic differences by the modellers and decisions made during the modelling process. Aesthetic considerations that may influence the structure of a model might be the modeller’s preference for a reduced ontology, preferences regarding probabilistic models vis-a-vis deterministic ones, discrete vs continuous time etc. For this reason users of a model need to understand the model’s original purpose and the assumptions of its validity.

I’m back at the original, simple, incomplete model because I’m not ready to stand in defense of any truth claims that a more complete model might make. Even this incomplete version, though, helps me to start articulating the characteristics of any model representing the role of computational technologies in education. I believe the following principles to hold true:

  • Human goals are mediated by, and thenceforth only achieved through, the widespread adoption and use of new technologies.
  • Human purposes for adopting and making use of new technologies are often highly individualized (though nearly always aligned with an affinity group, even if that group is not explicitly named and even if that group is not comprised of other members of the learning community).
  • While no educational researcher is qualified to articulate achievable goals for another human, the researcher is ethically obligated to support learners in articulating, and achieving, ethical educational goals.
  • The efficacy and success of new technologies can be measured through multiple lenses, among which only one is the achievement of mainstream educational goals as articulated and assessed through traditional, often standardized, measurement tools.

Ok, so what do you think?

*Note: I’m kinda rethinking this one. It reads a little too deterministic to me now, a mere hour or so after I wrote it.

Posted in academia, education, graduate school, lame, obnoxious, patent pending, public schools, schools, social media, social revolution, teaching, technologies | Leave a Comment »

technologies as sleeping policemen: or, how I learned to stop worrying and…

Posted by Jenna McWilliams on January 18, 2010

Nicholas Burbules and Thomas Callister worry for us. Or, at least, they were worried, over 10 years ago when they offered up their take on new technologies in a paper called “The Risky Promises and Promising Risks of New Information Technologies for Education.” Among their concerns: that too many people adopt a “computer as panacea” approach to new technologies. This is uniquely problematic in education, they argue, where

(r)ather than acknowledge the inherent difficulty and imperfectability of the teaching-learning endeavor, rather than accept a sloppy pluralism that admits that different approaches work in different situations—and that no approach works perfectly all the time—educational theorists and policy makers seize upon one fashion after another and then try to find new arguments, or new mandates, that will promote widespread acceptance and conformity under the latest revolution.

As problematic as the “computer as panacea” approach is, it pales in comparison to the relativistic “computer as neutral tool” approach, the one that has people saying that any technology can be used for good or for evil. Burbules and Callister explain that:

this technocratic dream simply errs in the opposite direction from the first. Where the panacea perspective places too much faith in the technology itself, the tool perspective places too much faith in people’s abilities to exercise foresight and restraint in how new technologies are put to use; it ignores the possibilities of unintended consequences or the ways in which technologies bring with them inherent limits to how and for what purposes they can be used. A computer is not just an electronic typewriter; the World Wide Web is not just an on-line encyclopedia. Any tool changes the user, especially, in this instance, in the way in which tools shape the conception of the purposes to which they can be put. As the old joke goes, if you give a kid a hammer they’ll see everything as needing hammering.

They prefer a middle approach, which assumes that a simple cost-benefit analysis fails to account for the possibility that benefits and costs are highly dependent on perspective. They offer as proof the history of antibiotics, which through widespread use greatly decreased humanity’s likelihood of dying from bacterial infection but in the process led to the emergence of drug-resistant forms of bacteria. (“That is a very bad thing,” they write.)

Though it’s fairly simplistic to compare new information technologies to antibiotics, I’ll go with the analogy for now, mainly because I agree with the authors’ effort to problematize attitudes toward new technologies. It’s perhaps more accurate to consider the social effects of antibiotics: they have led to a general increase in life expectancy, but in the process have enabled imperialistic societies (cf. the United States) to effectively colonize cultures, communities, and countries worldwide. In the same way, new technologies offer unprecedented access to information, communities, and tools for mobilization, but they simultaneously support new forms of colonization, both across and regardless of national borders.

Which brings me to the metaphor of technologies as sleeping policemen.

The sleeping policeman: In America, we call it a “speedbump.” It looks like this:

The speedbump’s intended effect is to get drivers to slow the hell down, and it’s commonly used in neighborhoods and suburban areas with lots of kids. And it does get people to slow the hell down, primarily because they have no choice. There are also tons of unintended effects: Parents feel more comfortable letting their kids play outside. And, as this post points out, kids playing outside tend to get to know each other better. They–and, by extension, their parents–connect with other neighborhood residents, and everybody feels more connected: “Parents come to know the nearby children. And, inevitably, they come to know those children’s parents. They begin trading favors like driving children around. They become neighborly.”

There are potential negative effects, too. Using sleeping policemen to slow drivers down changes driving practices in unintended ways. When a driver hits the last speedbump, she hits the gas and jets on down the road. This might increase the risk of an accident just beyond the range of the speedbumps. Drivers may choose to avoid areas with speedbumps, thereby increasing traffic through other areas–even, potentially, nearby neighborhoods whose streets lack speedbumps. And because the speedbump does the slowing for her, the driver never has to decide on her own to drive more slowly in neighborhoods, which makes it less likely that she’ll adopt slower driving as a general practice.

Still, I think we can all agree that the benefits outweigh the costs. Nobody sees the speedbump as a panacea, and I don’t imagine many people see the speedbump as a neutral technology.

So why do we worry so much more about the emergence and increasing ubiquity of new media technologies than we do about sleeping policemen or antibiotics?

One reason is that it’s easier to see new media technologies as actors that shape our practices than it is to see how speed bumps and antibiotics have shaped us.

Actors: Any person or tool that exerts force upon any other person or tool, thereby shaping its use or practice. In Actor-Network Theory, everything is a potential actor, everything a potential actant.

Speed bumps act upon cars, drivers, kids, parents, neighborhood dynamics. Antibiotics have acted upon people, policies, government spending, and attitudes. We live longer now. We therefore reshape our lives, our goals, and our relationships to others. It’s all very chaotic and complicated, because our reshaped attitudes in turn act upon our use of antibiotics. Everything mediates everything.

Because new media technologies have emerged and been adopted so quickly, their role in reshaping thought and action–and even, it’s becoming clear, physiology–is easy to see, even if the outline of how this reshaping is shaking out remains quite fuzzy. New technologies as sleeping policemen: They shape not only how we drive, but how we think about driving. We move them, we reshape them, we add more or take a few away, we develop cars with better suspension…and it goes on down the rabbit hole.

Posted in academia, education, learning sciences, new media, participatory culture, pedagogy, philosophy, public schools, schools, social media, social revolution | 3 Comments »

on Cory Doctorow on how to say stupid things about social media

Posted by Jenna McWilliams on January 5, 2010

Originally posted at http://jennamcwilliams.blogspot.com.

“There are plenty of things to worry about when it comes to social media,” says writer Cory Doctorow in his fantastic Guardian piece, “How to say stupid things about social media.” Social media environments, he continues,

are Skinner boxes designed to condition us to undervalue our privacy and to disclose personal information. They have opaque governance structures. They are walled gardens that violate the innovative spirit of the internet. But to deride them for being social, experimental and personal is to sound like a total fool.

Yet plenty of perfectly smart people who should know better say exactly the foolish kinds of things Doctorow rightly decries in his post. Mainly, lately, the stupid things have been leveled at Twitter: It’s trivial. It’s banal. It’s too voyeuristic, or it’s a weak imitation of real relationships, or–and this is the one that really gets me–I try to use it in smart, deliberate, consequential ways, even though lots of my followers don’t.

Part of the problem is that people who take stances like the above fail to see that the majority of the communication on sites like Twitter falls into the category of what Doctorow calls “social grooming.” He writes:

The meaning of the messages isn’t “u look h4wt dude” or “wat up wiv you dawg?” That’s merely the form. The meaning is: “I am thinking of you, I care about you, I hope you are well.”

Doctorow compares the “banality” of conversations on Twitter and Facebook to the conversations we have with coworkers. We ask a coworker if she had a good weekend, he writes, not because we care about how her weekend went but because we care about developing bonds with the people around us.

Yes, though that’s only part of the answer. In choosing to communicate via Twitter, I’m not only saying “I am thinking of you, I care about you, I hope you are well,” but I am also publicly announcing: “I am thinking of him, I care about her, I hope he is well.” These announcements are interspersed with my Twitter interactions with people who are not close friends or even necessarily acquaintances–people I care about only in the most abstract sense. I follow just under 350 people, after all, and am followed by around the same number–a far higher number than I am equipped to develop deep relationships with. And lots of people follow and are followed by far greater numbers than I.

The creaming together of the personal and the professional, the public and the private, means that ‘trivial’ social interactions in online social networks, however much they seem to replicate those that pepper our physical interactions, actually represent a new social animal whose form we have yet to fully sketch. We’re all kind of blindly feeling our way around the elephant here. We who embrace social media technologies can scoff at the person who says an elephant is like a water spout after feeling only its trunk, or the person who has felt a little more and argues it’s like a moving pillar topped off by a shithole, but we would do well to remember that in this parable, everyone who tries to describe the elephant, no matter how much of it he has touched, can only describe it by comparing it to objects he has previously encountered. Twitter is similar to a lot of things, but in the end it’s its own elephant, identical to nothing else we’ve seen before.

This is why, as Doctorow points out, people rely on personal experience and therefore read Twitter and similar networks as trivial and banal instead of deeply socially meaningful. But it’s also why we need to take care to treat the social meaning as different from that which emerges through other types of (digital or physical) social interactions.

Posted in Facebook, participatory culture, social media, social revolution, Twitter | 2 Comments »

why I don’t return your phone calls

Posted by Jenna McWilliams on December 19, 2009

First off, I don’t know if this will make you feel any better, but it’s not personal: I don’t return anybody’s phone calls.

I hate talking on the phone. Hate it. Hate it. I like you tons, and I wish we lived closer so I could see you more often. And even though I know that my unwillingness to answer the phone when you call or to return your phone calls in any reasonable space of time is a constant strain on our relationship, I can’t make myself get any better at it.

Please understand that it’s not personal: I don’t answer anybody’s phone calls. I don’t return anybody’s calls in a reasonable space of time.

Teh social phobia: I haz it.

I’ve worked hard on tackling my anxieties, and I like to think I’ve done fairly well for myself in this respect. If you’ve wondered why I’m so obsessed with social media technologies, part of the answer is that I’ve used them to cobble together a series of workarounds: I’ve developed strategies for engaging in the types of conversations I like to have while avoiding the tools and encounters that cause me the most anxiety. Among which the phone conversation is numero uno.

It was bad enough when you had a land line, and I had a land line, and everybody had a land line. But then we all got cellphones, and every aspect of voice communication got that much harder for poor little rich girls like me. I can’t tell when I’m interrupting you. I can’t hear or rely upon the subtle cues: variation in the tone of your voice, pauses, or breath. The social connection, so essential and so difficult for someone like me to establish in the first place, becomes even more elusive.

There are new technologies whose designs make remote social connections easier to establish (cf. Skype, Google Video). I hope that some day these technologies will become the norm for all of us, overtaking the cellphone (my guardian, my executioner). I also harbor a secret hope that if cellphones really are here to stay, I’ll eventually cultivate the type of persona that makes people say, Oh, well, that’s just Jenna–brilliant but eccentric. She refuses to talk on the phone! So we use other technologies to communicate with her. (It hasn’t happened yet, but here’s hoping for success in the new decade.) Until then, I hope you can understand that I love you but hate the technology.

Oh, and I sent you a package. It should arrive in the next day or two. You can text or email or tweet me when you get it.

Posted in lame, social media | 5 Comments »