sleeping alone and starting out early

an occasional blog on culture, education, new media, and the social revolution. soon to be moved from http://jennamcwilliams.blogspot.com.

Archive for the ‘Twitter’ Category

message to twitter community: be cool, you guys.

Posted by Jenna McWilliams on July 6, 2010

I’ve noticed an increase in meanness and vituperation lately among the people I follow on twitter. I’m not completely sure why this is–certainly it’s due in part to the steady increase in the number of people I follow, but I also suspect the tenor of twitter has changed as it has increased in general popularity and ease of use.

The behavior I’m talking about breaks down into two loose categories:

Personal attacks. Twitter is not a tool that affords deep, substantive conversation, but it turns out 140 characters is just about the perfect length for slinging fallacies back and forth. And people leverage this affordance to build up a catalog of fallacies that would have made your high school logic teacher proud:

  • ad hominems (“stop being such a dickhead, @twitteruser. anyone who paid attention past 3rd grade knows Glenn Beck is a p.o.s.”)
  • poisoning the well (“where’s the intelligent debate about affirmative action? God knows we can’t ask the feminists to weigh in–all they do is bitch.”)
  • spotlight fallacy (“gay people seem incapable of arguing for gay marriage without eventually getting hysterical & irrational. http://bit.ly/buSY0y“)
  • hasty generalizations (“law students are more ignorant about the law than any group I know.”)

Bigotry. I don’t know exactly why people feel comfortable making disgusting generalizations about entire groups of people on twitter. I just know it happens an awful lot. Most typically it appears to come from members of some dominant group complaining about ethnic, political, or cultural minorities (though I’m also willing to consider the possibility that I only think this is true because it pisses me off so much more than when it comes from someone who’s part of a minority group).

I’m tired of it. I want twitter to be the space of coolness that it used to be for me. This is not, though certain lawyers may disagree, a desire for a “happysphere”; it’s a desire to surround myself with the most civil discourse possible, in as many of the communities I frequent as I can.

Srsly: be cool, you guys. Try being exactly as nice on twitter as you would be in person. That way, when members of the twitter community decide which users to follow, they can choose what level of kindness or pettiness they’re willing to put up with, on twitter just as in real life.

Being both a witness to and target of meanness and pettiness has made me reflect on my own behavior, too. I will grant that I have been known to vituperate, from time to time, on twitter and in other social networking spaces (primarily in the form of so-called “vaguebooking”). I’m sorry, and I’m going to try to do better, so that you can fill your life with as much intelligent, civil discourse as you want. I ask that you do the same for me.

Posted in obnoxious, Twitter | 2 Comments »

notes on being the chainsaw you wish to see in the world: Closing remarks for the AERA 2010 annual meeting

Posted by Jenna McWilliams on May 6, 2010

I just got back from my first trip to the annual meeting of AERA, the American Educational Research Association. AERA is apparently the biggest educational research conference in America. I had a fantastic time (highlight: I got to have dinner with Jim Gee!) and my presentation went well (highlight: I argued with the panel’s discussant over why thinking about gender inequity isn’t enough if you’re not also thinking about class inequity!), and I don’t think I made too much of a fool out of myself.

I really enjoyed my first trip to this conference, though when I got home I learned from others that there are significant criticisms to be made of the structure, format, and ethos of AERA. I am coming around to that way of thinking and will post my thoughts on this soon.

For now, though, I want to share with you the paper I had to writereallyfast when I got back from the conference. It’s a final paper for a course on computational technologies, and because I was thinking about AERA, social justice, and why the conference’s biggest events mostly featured staid, mainstream thinkers, I decided to write the paper as closing remarks for the conference. I am sure that once the AERA organizers read my closing remarks, they will invite me to deliver next year’s closing remarks in person. I am also available to deliver opening remarks and keynote addresses.

Notes on being the chainsaw you wish to see in the world: On a critical computational literacy agenda for a time of great urgency
Closing Remarks for the AERA Annual Meeting
Jenna McWilliams, Indiana University
May 4, 2010

I want to thank you for giving me the opportunity to speak this evening, at the close of this year’s annual meeting of the American Educational Research Association.

I want to talk to you tonight about the nature of urgency.

Because urgency characterizes the work we do, doesn’t it? The education of our children—our efforts to prepare them to join in on this beautiful and necessary project of naming and claiming the world—it is certainly a matter of the deepest urgency. Even more so because of the war being waged over the bodies and minds of our children.

It’s a war whose contours are deeply familiar to many of us—more so the longer we have been a part of this struggle over education. Certainly the issues we’re fighting over have limned the edges of our educational imagination for generations: How do we know what kids know? How can we prepare them for success in their academic, vocational, and life pursuits? What should schools look like, and how can we fill our schools up with qualified teachers who can do their jobs well? No matter what else, then, we’re at least continuing to ask some of the right questions.

Yet a deeper than normal sense of urgency has characterized this year’s annual meeting. It was a “hark ye yet again” sort of urgency: We stood, once again, on a knife’s edge, waiting for word of legislative decisions to be passed down from the policymakers—among whom there are very few educational researchers—to the researchers—among whom there are very few policymakers.

And what sorts of decisions were we waiting to hear on? The same sorts we’ve been wringing our hands over for a decade or more: Decisions over the standardization of education. Development of a proposed set of Common Core Standards whose content seemed painfully anemic to many of us. Word on whether teacher pay will be linked to student performance on standardized tests. Massive budget cuts leading to the termination of teachers and programs—these certainly feel familiar to us, though the scope of these cuts and the potential consequences of these decisions seem to loom larger than ever before. The decision by the Texas State Board of Education to pervert and politicize its K-12 curriculum by removing references to historical events and even terminology that might offend members of the political Right—the specifics are new, but the story feels familiar.

A call to action was paired with the clanging of the alarm bells. Ernest Morrell told us that he had counseled his kids to prepare presentations that not only described their work and achievements but that also included a call to action. “I told them, ‘Don’t let them leave this room without marching orders,’” he said. “We need to do better. AERA needs to do better.”

He’s right, of course. And I plan to heed Ernest’s advice and not let you leave this room without your marching orders. But first I want to explore the edges of this new urgency, explain why critical computational literacy is part and parcel of the urgency of this moment, and explain exactly what I mean by the term.

There are at least two reasons for the acuteness of the urgency that has characterized this year’s AERA conference. The first is that many of us had hoped for something more, something better, something more honorable from the Obama administration. After eight years living in a political wasteland, many of us felt a glee all out of proportion with reality upon hearing Barack Obama’s position on educational issues. We felt hope. Even a warm half cup of water can feel like a long, tall drink when you’ve just walked out of a desert.

It’s a long revolution, you know. And if Obama authorizes something that looks very much like No Child Left Behind, and if he mandates merit pay based on student performance on standardized tests, and if the recent changes made by the religious right to the Texas state history curriculum stand, and if school boards nationwide continue to make terrible, terrible decisions about how to cut costs, and if we see the largest teacher layoff in our history and class sizes creep up to 40 students per room and if computers get taken over by test prep programs and remedial tutoring systems, well, we’ll do our best to live to fight another day. The other day, I listened to Jim Gee talking about his deep anger at the people who run our education system. But he also said something we should all take to heart: “I’ll fight them until I’m dead,” he said. Let’s embrace this position. If they want to claim the hearts and minds of our children, let’s make it so they do it over our cold, dead bodies.

Let’s not let ourselves begin to believe that the stakes are any lower than they actually are. This is the second reason for the urgency this year: There is the very real prospect that the decisions we make within our educational system will get taken up by education departments across the globe. Around 30 of us attended an early-morning session called “Perspectives From the Margins: Globalization, Decolonization, and Liberation.” The discussants, Michael Apple and Dave Stovall, spoke with great eloquence about the nature of this urgency. You’ll forgive me for secretly recording and then transcribing a piece of each of their talks here.

Michael Apple, responding to a powerful presentation on rural science education by researcher Jeong-Hee Kim and teacher-researcher Deb Abernathy, spoke of the far-reaching implications of the local decisions we make:

As we sit here, I have people visiting me from China. They are here to study No Child Left Behind, and they are here to study performance pay. All of the decisions we make that that principal and Deb and you are struggling against are not just struggles in the United States, they are truly global—so that the decisions we make impact not just the kids in the rural areas of the United States, but the rural areas of the people who are invisible, the same people who deconstruct our computers.

Dave Stovall, from the University of Illinois at Chicago, underscored the need to think of the global implications of the policy decisions that intersect within the realm of education:

Arizona is Texas is Greece is Palestine is where we are. This day and time is serious. When a person in Texas cannot say the word capitalism in a public school, we live in serious times. When a person in Arizona can be taken out of a classroom at five years old, to never return, we live in serious times. When we can rationalize in the state of Illinois and city of Chicago that having 5 grams of heroin on a person accounts for attempted murder, we’re living in different times. When we can talk about in Palestine that young folks have now been deemed the most violent threat to the Israeli state, we’re living in different times. And now, how do we engage and interrupt those narratives based again on the work we do?

These times are different and serious, and talking about critical computational literacy may make me look a little like Nero with his fiddle. But critical computational literacy, or rather its paucity in our education system, is the dry kindling that keeps Rome burning.

I’ll explain why. Let’s talk for a minute about another Apple, the electronics company Apple Inc. The year 2010 marked the release of Apple’s iPad, a tablet computer designed as a multipurpose information and communication tool. Despite mixed reviews of its usability and features, records show an estimated 500,000 units sold between pre-orders and purchases in the first week after the iPad’s release.

This has been accompanied by a push for consideration of the iPad’s utility for education, especially higher education, with schools working to develop technical support for iPad use on campus and at least one university, Seton Hall, promising to provide all incoming freshmen with iPads along with MacBooks. One question—how might the iPad transform education?—has been the topic of conversation for months.

“The iPad,” crowed Neil Offen in the Herald-Sun (2010), “could be more than just another way to check your e-mail or play video games. It has the potential to change the way teachers teach and students learn.”

Certainly, these conversations reflect a positive shift in attitudes about what constitutes literacy in the 21st century. If you attended the fantastic symposium on Sunday called “Leveraging What We Know: A Literacy Agenda for the 21st Century,” you heard from the panelists a powerfully persuasive argument that “literacy” is no longer simple facility with print media. Indeed, facility with print media may still be necessary, but it’s no longer sufficient. As the emergence of the iPad, the Kindle, and similar literacy tools makes evident, the notion of “text” has become more aligned with Jay Lemke’s (2006) description of “multimedia constellations”—loose groupings of hypermediated, multimodal texts that exist “not just in the imagination of literary theorists, but in simple everyday fact” (p. 4). Add to this the ongoing contestation of the tools we use to access and navigate those constellations of social information, and the urgency of a need to shift how we approach literacy becomes increasingly obvious.

As anyone who works in the literacy classroom knows, this is by no means a simple task. This task is complicated even further by the dark side of this new rhetoric about literacy: There’s a technological determinism hiding in there, an attitude that suggests an educational edition of Brave New Worldism. Offen’s celebration of the iPad aligns with the approach of Jeremy Roschelle and his colleagues (2000), who a decade ago trumpeted the transformative potential of a range of new technologies. In explaining that “certain computer-based applications can enhance learning for students at various achievement levels,” they offer descriptions of

promising applications for improving how and what children learn. The ‘how’ and the ‘what’ are separated because not only can technology help children learn things better, it also can help them learn better things. (p. 78, emphasis mine)

More recently, the media scholar Henry Jenkins (2006) described the increasingly multimodal nature of narratives and texts as “convergence culture.” As corporate and private interests, beliefs, and values increasingly interact through cheaper, more powerful and more ubiquitous new technologies, Jenkins argues, our culture is increasingly defined by the collision of media platforms, political ideologies, and personal affinities. Jenkins celebrates the emergence of this media convergence, arguing that “(i)n the world of media convergence, every important story gets told, every brand gets sold, and every consumer gets courted across multiple media platforms” (p. 3).

Brave new world, indeed. But there is reason to wear a raincoat to this pool party, as a cursory examination of the developing “Apple culture” of electronics confirms. The iPad, celebrated as a revolution in personal computing, communication, and productivity—and marketed as an essential educational tool—is a tool with an agenda. The agenda is evident in Apple’s decision to block the educational visual programming software Scratch: Though Apple executives have claimed that applications like Scratch may cause the iPad to crash, others argue that the true motivation behind this decision is to block a tool that supports media production. The Scratch application allows users to create and run programs of their own on the iPad, a capacity that Bruckman (2010) suggests runs counter to Apple’s unstated interest in designing its products primarily for media consumption.

The iPad has no close competitor, so users who want to leverage the convenience, coolness, and computing power of this product must resign themselves to the tool Apple provides. Similarly, as Apple extends its growing monopoly in entertainment (iPods), communications (iPhone), and portable computing (MacBook), Apple increasingly has the power to decide what stories to tell, and why, and how.

Now let’s go back to the other Apple, Michael Apple, who argues quite convincingly about the colonization of the space of the media by the political right wing (2006). We have, he argues, politicians deciding what we pay attention to, and we have corporations deciding how we pay attention to it. This makes the need for critical computational literacy more urgent than ever before. Perhaps it’s more important than anything else, though I’ll leave that to the historians to decide.

What is this thing I’m calling “critical computational literacy”? Since I’m almost the only person using this term, I want to start by defining it. It has its roots in computational literacy, which in itself bears defining. Andy diSessa (2000) cautions us against confusing computational literacy with “computer literacy,” which he describes as being able to do things like turning on your computer and operating many of its programs. His definition of computational literacy, he explains, makes computer literacy look “microscopic” in comparison (p. 5). For him, computational literacy is a “material intelligence” that is “achieved cooperatively with external materials” (p. 6).

This is a good start in defining computational literacy but probably still not enough. And please do remember that I will not let you leave this room without marching orders; and if I want you to know what to do, I have to finish up the definition. Let’s add to diSessa’s definition a bit of the abstraction angle given to us by Jeannette Wing (2008), who shifts the focus slightly to what she labels “computational thinking.” She describes this as

a kind of analytical thinking. It shares with mathematical thinking in the general ways in which we might approach solving a problem. It shares with engineering thinking in the general ways in which we might approach designing and evaluating a large, complex system that operates within the constraints of the real world. It shares with scientific thinking in the general ways in which we might approach understanding computability, intelligence, the mind and human behaviour. (p. 3717)

For Wing, the essential component of computational thinking is working with abstraction, and she argues that an education in computational thinking integrates the “mental tool” (capacity for working with multiple layers of abstraction) with the “metal tool” (the technologies that support engagement with complex, abstract systems).
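To make the idea of layered abstraction a little more concrete, here is a toy sketch of my own in Python (not an example drawn from Wing or diSessa): the same small computation expressed at three levels, each one hiding the machinery of the level below it.

import statistics

scores = [72, 88, 95, 61]

# Level 1: spell out the iteration and the arithmetic by hand.
total = 0
for s in scores:
    total += s
mean_by_hand = total / len(scores)

# Level 2: lean on built-in abstractions for summing and counting.
mean_with_builtins = sum(scores) / len(scores)

# Level 3: name the concept itself and let a library carry the details.
mean_by_name = statistics.mean(scores)

assert mean_by_hand == mean_with_builtins == mean_by_name  # all three agree that the mean is 79

The computation never changes; what changes is how much of its machinery the writer has to hold in mind at once, which is the pairing of mental tool and metal tool that Wing is after.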

So. We have diSessa’s “material intelligence” paired with Wing’s “computational thinking”—a fair enough definition for my purposes. But what does it look like? That is, how do we know computational literacy when we see it?

The answer is: it depends. We do, though, have some nice examples that can help make visible what this version of computational literacy might look like. Kylie Peppler and Yasmin Kafai (2007), who by the way have a new book out on their work with the Computer Clubhouse project (you can buy a copy up at the book fair), offer instructive examples of children working with Scratch. Jorge and Kaylee, their two case studies, are learners who make creative use of a range of tools to build projects that extend, as far as their energy and time will allow, the boundaries of what is possible to make with a simple visual programming language. Bruce Sherin, Andy diSessa, and David Hammer (1993) give an example of their work with Dynaturtle to advance a theory of “design as a learning activity”; in their example, learners work with the Boxer programming language to concretize abstract thought.

Certainly, these are excellent examples of computational literacy in action. But I would like to humbly suggest that we broaden our understanding of this term far beyond the edges of programming. Computational literacy might also be a form of textual or visual literacy, as learners develop facility with basic html code and web design. It might be the ability to tinker—to actually, physically, tinker with the hardware of their electronics equipment. This is something that’s typically frowned upon, you know. Open up your MacBook or your iPhone and your warranty is automatically null and void. This is not an accident; this is part of the black-box approach of electronics design that I described earlier.

Which leads me to the “critical” component of computational literacy. This is no time for mindless tinkering; we are faced with a war whose terms have been defined for us by members of the political Right, and whose battles take place through tools and technologies whose uses have been defined for us by corporate interests. Resistance is essential. In the past, those who resisted the agendas of software designers and developers were considered geeks and freaks; they were labeled “hackers” and relegated to the cultural fringes (Kelty 2008). Since then, we have seen an explosion in access to and affordability of new technologies, and the migration to digitally mediated communication is near-absolute. The penetration of these technologies among young people is most striking: (include statistics). Suddenly, the principles that make up the “hacker ethos” (Levy, 1984) take on new significance for all. Suddenly, principles that drove a small subset of our culture seem more like universal principles that might guide cultural uptake of new technologies:

  • Access to computers—and anything which might teach you something about the way the world works—should be unlimited and total.
  • All information should be free.
  • Mistrust authority—promote decentralization.
  • Hackers should be judged by their hacking, not criteria such as degrees, age, race, sex, or position.
  • You can create art and beauty on a computer.
  • Computers can change your life for the better. (Levy 1984)

If these principles seem overtly ideological, overtly libertarian, that’s because they are. And I’m aware that in embracing these principles I run the risk of alienating a fairly significant swath of my audience. But there’s no time for gentleness. This is no time to hedge. I believe, as Michael Apple and Dave Stovall and Rich Ayers and others have argued persuasively and enthusiastically, that we are fighting to retrieve the rhetoric of education from the very brink. It’s impossible to fight a political agenda with an apolitical approach. We must fight now for our very future.

That’s the why. Now I’d like to tackle the how. If we want our kids to emerge from their schooling experience with the mindset of critical computational literacy, we need to first focus on supporting development of critical computational literacy in our teachers. They, too, are subject to all of the pressures I listed earlier, and we can add to the mix at least one more: They are subject to the kind of rhetoric that Larry Cuban (1986) reminds us has characterized talk of bringing new technologies into the classroom since at least the middle of the 20th century. As he researched the role of technologies like radio, film, and television in schools, he described how those technologies were imposed on teachers from the outside:

Television was hurled at teachers. The technology and its initial applications to the classroom were conceived, planned, and adopted by nonteachers, just as radio and film had captured the imaginations of an earlier generation of reformers interested in improving instructional productivity…. Reformers had an itch and they got teachers to scratch it for them. (p. 36)

This certainly hearkens back, does it not, to the exhortation of Jeremy Roschelle and his colleagues? I repeat:

promising applications for improving how and what children learn. The ‘how’ and the ‘what’ are separated because not only can technology help children learn things better, it also can help them learn better things.

Teachers are also faced with administrators who say things like the following, taken from various online conversations about the possible role of the iPad in education.

I absolutely feel the iPad will revolutionize education. I am speaking as an educator here. All it needs are a few good apps to accomplish this feat.

Tablets will change education this year and in the future because they align neatly with the goals and purposes of education in a digital age.

And finally, the incredibly problematic:

As an educational administrator for the last eleven years, and principal of an elementary school for the past seven…after spending three clock hours on the iPad, it is clearly a game changer for education.

Three hours. Three hours, and this administrator is certain that this, more than any previous technology, will transform learning as we know it. Pity the teachers working at his school, and let’s hope that when the iPad gets hurled at them they know how to duck.

We must prepare teachers to resist. We must prepare them to make smart, sound decisions about how to use technologies in the classroom and stand tall in the face of outside pressures not only from political and corporate interests but from well-meaning administrators and policymakers as well. There is a growing body of evidence that familiarity with new tools is—just like print literacy—necessary but not sufficient for teachers in this respect.

There is evidence, however, that experience with new technologies, when paired with work in pedagogical applications of those technologies, can lead to better decision-making in the classroom. I recommend the following three-part battle plan:

First, we need to start building a background course in new media theory and computational thinking into our teacher education programs. My home institution, Indiana University, requires exactly one technology course, and you can see from the description that it does its best to train pre-service teachers in the use of PowerPoint in the classroom:

W 200 Using Computers in Education (1-3 cr.) Develops proficiency in computer applications and classroom software; teaches principles and specific ideas about appropriate, responsible, and ethical use to make teaching and learning more effective; promotes critical abilities, skills, and self-confidence for ongoing professional development.

Fortunately, we can easily swap this course out for one that focuses on critical computational literacy, since the course as designed has little practical use for new teachers.

Second, we need to construct pedagogy workshops that stretch from pre-service to early in-service teachers. These would be designed to support lesson development within a specific domain, so that all English teachers would work together, all Math teachers, all Science teachers, and so on. This could stretch into the early years of a teacher’s service and support the development of a robust working theory of learning and instruction.

Finally, we might consider instituting ongoing collaborative lesson study so that newer teachers can collaborate with veteran teachers across disciplines. I offer this suggestion based on my experience working in exactly this environment over the last year. In this project, teachers meet monthly to discuss their curricula and to share ideas and plan for future collaborative projects. They find it intensely powerful and incredibly useful as they work to integrate computational technologies into their classrooms.

I’m near the end of my talk and would like to finish with a final set of marching orders. If we want to see true transformation, we need first to tend our own gardens. Too often—far, far too often—we educational researchers treat teachers as incidental to our interventions. At the risk of seeming like an Apple fanboy, I return once again to the words of Michael Apple, who argued brilliantly this week that it’s time to rethink how we position teachers in our work. We say we want theory to filter down to the “level” of practice; the language of levels, Apple says, is both disingenuous and dangerous. Let’s tip that ladder sideways, he urges us, and he is absolutely correct. We live and work in the service of students first, and teachers second. We should not forget this. We should take care to speak accordingly.

These are your marching orders: To bring the message of critical computational literacy and collaboration during this time of great urgency back to your home institutions, to the sites where you do your research, to the places where you work shoulder to shoulder with other researchers, practitioners, and students. I urge you to stand and to speak, loudly, and with as much eloquence as you can muster, about the issues of greatest urgency to you. This is no time to speak softly. This is no time to avoid offense. In times of great urgency, it’s not enough to be the change we wish to see in the world; we need to be the chainsaws that we wish to see in the world. That is what I hope you will do when you leave this convention center. Thank you.

References
Apple, M.W. (2006). Educating the “right” way: Markets, standards, God, and inequality. New York: Routledge.
Bruckman, A. (2010, April 15). iPhone application censorship (blog post). The next bison: Social computing and culture. Retrieved from http://nextbison.wordpress.com/2010/04/15/iphone-application-censorship/.
Carnoy, M. (2008, August 1). McCain and Obama’s educational policies: Nine things you need to know. The Huffington Post. Retrieved from http://www.huffingtonpost.com/martin-carnoy/mccain-and-obamas-educati_b_116246.html.
Carter, D. (2010, April 5). Developers seek to link iPad with education. eSchool News. Retrieved from http://www.eschoolnews.com/2010/04/05/ipad-app-store-has-wide-selection-of-education-options/.
Cuban, L. (1986). Teachers and machines. New York: Teachers College Press.
diSessa, A. A. (2000). Changing minds: Computers, learning, and literacy. Cambridge, MA: MIT Press.
Jenkins, H. (2006). Convergence culture: Where old and new media collide. Cambridge, MA: MIT Press.
Kelty, C. (2008). Two bits: The cultural significance of free software. Durham, NC: Duke University Press.
Kolakowski, N. (2010, April 18). Apple iPad, iPhone expected to boost quarterly numbers. eWeek. Retrieved from http://www.eweek.com/c/a/Desktops-and-Notebooks/Apple-iPad-iPhone-Expected-to-Boost-Quarterly-Numbers-825932/.
Korn, M. (2010, April 19). iPad struggles at some colleges. Wall Street Journal. Retrieved from http://online.wsj.com/article/SB10001424052748703594404575192330930646778.html?mod=WSJ_Tech_LEFTTopNews.
Lemke, J. (2006). Toward Critical Multimedia Literacy: Technology, research, and politics. In M.C. McKenna et al. (Eds.), International handbook of literacy and technology: Volume II. Mahwah, NJ: Lawrence Erlbaum Associates Inc. (3-14).
Levy, S. (1984). Hackers: Heroes of the computer revolution. New York: Anchor Press/Doubleday.
McCrae, B. (2010, Jan. 27). Measuring the iPad’s potential for education. T|H|E Journal. Retrieved from http://thejournal.com/articles/2010/01/27/measuring-the-ipads-potential-for-education.aspx.
New York Times (2010, March 17). Editorial: Mr. Obama and No Child Left Behind. New York Times Editorial Page. Retrieved from http://www.nytimes.com/2010/03/18/opinion/18thu1.html.
Offen, N. (2010, Jan. 28). The iPad and education. The Herald-Sun. Retrieved from http://www.heraldsun.com/view/full_story/5680899/article-The-iPad—education?instance=main_article.
PBS (2010, Jan. 7). How will the iPad change education? PBS TeacherLine Blog. Retrieved from http://www.pbs.org/teacherline/blog/2010/01/how-will-the-ipad-change-education/.
Peppler, K. A., & Kafai, Y. B. (2007). From SuperGoo to Scratch: exploring creative digital media production in informal learning. Learning, Media and Technology, 32(2), 149-166.
Roschelle, J. M., Pea, R. D., Hoadley, C. M., Gordin, D. N., & Means, B. M. (2000). Changing how and what children learn in school with computer-based technologies. The future of children, 10(2), 76–101.
Sherin, B., DiSessa, A. A., & Hammer, D. M. (1993). Dynaturtle revisited: Learning physics through collaborative design of a computer model. Interactive Learning Environments, 3(2), 91-118.
Smith, E. (2010, April 16). The Texas curriculum massacre. Newsweek. Retrieved from http://www.newsweek.com/id/236585.
Wing, J. M. (2008). Computational thinking and thinking about computing. Philosophical Transactions of the Royal Society A, 366(1881), 3717-3725.

 

**Update, 5/6/10, 1:09 p.m.: I have changed this post slightly to remove an unfair attack against a presenter at this year’s AERA Annual Meeting. He points out in the comments section below that my attack was unfair, and I agree and have adjusted the post accordingly.

Posted in academia, computational literacy, conferences, convergence culture, education, graduate school, Henry Jenkins, Jim Gee, Joshua Danish, President Obama, public schools, schools, teaching, Twitter | 7 Comments »

help me collect information on Twitter lurkers

Posted by Jenna McWilliams on April 1, 2010

I’ve gotten interested lately in the role of lurkers within the Twitter social network. I recently posted this tweet:

Though I wasn’t specifically soliciting feedback on this issue, I received lots of responses from Twitter users who wanted to talk about how and why they use Twitter. These are, keep in mind, people who self-identify as lurkers–yet they responded to me through Twitter.

Clearly, this is something people want to talk about.

So I’m interested in finding out more.

I’ve created a short survey, intended to gather some basic information about the use of Twitter by people who consider themselves lurkers or light users of Twitter, and I’d also be thrilled to hear any thoughts you have on the phenomenon of lurking in Twitter or other online social networks, either through the survey or in comments to this post. I’ll post the results of the survey to this blog. The survey is available here.

Thanks!

Posted in participatory culture, social media, Twitter | 1 Comment »

I’m a little bit ridiculous.

Posted by Jenna McWilliams on March 26, 2010

I am, if you didn’t already know, a little bit ridiculous about certain things. For example: When I was in my early 20s, a friend referred to me as a “kneejerk reactionary” and I immediately brought the friendship to a dead stop. That it didn’t even occur to me what a caricature of myself I was being only enhances the ridiculousness.

And in the video below you can see me being ridiculous about Twitter. This clip comes from a brainstorm session populated by members of SociaLens, a new organization I’m part of whose focus is on the role of social media, communication, and community in business enterprises. The SociaLens team is a terribly smart crew, and I’m incredibly lucky to have the chance to work with these guys. The rest of the team, incidentally, is made up of Christian Briggs, Kevin Makice, Jay Steele, and Matt Snyder.

I’m including the clip here because a.) I really enjoy how ridiculously serious I am about why my colleague Matt is using Twitter wrong; b.) I’m really happy about the amount of agony Kevin put himself through in deciding whether to post the video on YouTube; c.) I think the conversation that emerged below Kevin’s post in response to his decision to put the video online is valuable and interesting. For example, Kevin writes:

As someone who is quite open online with myself and even my family, I found it interesting how much trepidation I felt over sharing this video. I edited down the clip I had to a smaller segment, mainly to shield the name of a participant organization mentioned later. The rest I chose to share without prior approval and only my own instincts to follow. It is possible that one of my colleagues might take issue with any aspect of this decision, from specific content to an absence of formality in posting it to YouTube. In some organizations, there is a policy-first approach to transparency, setting codes of conduct and other criteria for employees to follow. In other organizations, the understanding employees have about shared goals and risks will help inform individual decisions. Most importantly, failure is embraced as a chance to learn. I trust my peers, and I believe they trust me. Even if one of them requests for me to take down the clip, that trust will guard against relational catastrophe as we reflect together.

Kevin also writes about the importance of transparency and reflection within organizations, large and small. You could maybe take a look if you wanted.

Posted in awesome, business, convergence culture, humor, social media, Twitter | Leave a Comment »

on Cory Doctorow on how to say stupid things about social media

Posted by Jenna McWilliams on January 5, 2010

Originally posted at http://jennamcwilliams.blogspot.com.

“There are plenty of things to worry about when it comes to social media,” says writer Cory Doctorow in his fantastic Guardian piece, “How to say stupid things about social media.” Social media environments, he continues,

are Skinner boxes designed to condition us to undervalue our privacy and to disclose personal information. They have opaque governance structures. They are walled gardens that violate the innovative spirit of the internet. But to deride them for being social, experimental and personal is to sound like a total fool.

Yet plenty of perfectly smart people who should know better say exactly the foolish kinds of things Doctorow rightly decries in his post. Mainly, lately, the stupid things have been leveled at Twitter: It’s trivial. It’s banal. It’s too voyeuristic, or it’s a weak imitation of real relationships, or–and this is the one that really gets me–I try to use it in smart, deliberate, consequential ways, even though lots of my followers don’t.

In part, people who take stances like these fail to see that the majority of the communication on sites like Twitter falls into the category of what Doctorow calls “social grooming.” He writes:

The meaning of the messages isn’t “u look h4wt dude” or “wat up wiv you dawg?” That’s merely the form. The meaning is: “I am thinking of you, I care about you, I hope you are well.”

Doctorow compares the “banality” of conversations on Twitter and Facebook to the conversations we have with coworkers. We ask a coworker if she had a good weekend, he writes, not because we care about how her weekend went but because we care about developing bonds with the people around us.

Yes, though that’s only part of the answer. In choosing to communicate via Twitter, I’m not only saying “I am thinking of you, I care about you, I hope you are well,” but I am also publicly announcing: “I am thinking of him, I care about her, I hope he is well.” These announcements are interspersed with my Twitter interactions with people who are not close friends or even necessarily acquaintances–people I care about only in the most abstract sense. I follow just under 350 people, after all, and am followed by around the same number–a far higher number than I am equipped to develop deep relationships with. And lots of people follow and are followed by far greater numbers than I.

The creaming together of the personal and the professional, the public and the private, means that ‘trivial’ social interactions in online social networks, however much they seem to replicate those that pepper our physical interactions, actually represent a new social animal whose form we have yet to fully sketch. We’re all kind of blindly feeling our way around the elephant here. We who embrace social media technologies can scoff at the person who says an elephant is like a water spout after feeling only its trunk, or the person who has felt a little more and argues it’s like a moving pillar topped off by a shithole, but we would do well to remember that in this parable, everyone who tries to describe the elephant, no matter how much of it he has touched, can only describe it by comparing it to objects he has previously encountered. Twitter is similar to a lot of things, but in the end it’s its own elephant, identical to nothing else we’ve seen before.

This is why, as Doctorow points out, people rely on personal experience and therefore read Twitter and similar networks as trivial and banal instead of deeply socially meaningful. But it’s also why we need to take care to treat the social meaning as different from that which emerges through other types of (digital or physical) social interactions.

Posted in Facebook, participatory culture, social media, social revolution, Twitter | 2 Comments »

may I suggest a new hashtag?

Posted by Jenna McWilliams on October 15, 2009

In my search for interesting new blogs to follow, I recently realized I could easily crowdsource this search to the Twitter community, assuming I could get enough users behind it. Twitter users have leveraged the #followfriday hashtag for recommending follow-worthy users and #musicmonday to offer musical suggestions, to roaring success.

I’m going to try starting a #blogroll hashtag, intended to share interesting blogs with other users. I think this will work best if the blogs are grouped by category, so that people who search #blogroll will be able to sort by their interests. So a #blogroll tweet might look like this:

If this works for you guys, then perhaps we should choose a day. I nominate Wednesdays, since for some reason that’s the day I most often find myself looking for new blogs to read.

Posted in blogging, Twitter | Leave a Comment »

time to smack down the Wall Street Journal

Posted by Jenna McWilliams on October 12, 2009

(Don’t worry; I snuck around the pay wall.)

As my sister Laura put it when she sent me this article on why the Wall Street Journal is five years behind the times (that is, on why email is no longer the communication tool of choice), “It’s trying so hard to be ‘with it’ and in the flow of the times… But it seems stuffy and like a 40-year-old’s take on new media.”

The piece is called “Why Email No Longer Rules… And what that means for the way we communicate,” and it reiterates points that were interesting to tech folks a handful of years ago. (Here’s a 2007 piece on the decline of email from Gawker; here’s a 2007 piece on the same topic from Slate; here’s a 2006 blogpost on the issue; and so on.)

Among the “insights” of the WSJ article is that email seems painfully slow to us now:

Why wait for a response to an email when you get a quicker answer over instant messaging? Thanks to Facebook, some questions can be answered without asking them. You don’t need to ask a friend whether she has left work, if she has updated her public “status” on the site telling the world so. Email, stuck in the era of attachments, seems boring compared to services like Google Wave, currently in test phase, which allows users to share photos by dragging and dropping them from a desktop into a Wave, and to enter comments in near real time.

There is, reporter Jessica E. Vascellaro explains, a new phenomenon wherein we receive a constant stream of information, both personal and professional. There is a downside, as she points out:

That can make it harder to determine the importance of various messages. When people can more easily fire off all sorts of messages—from updates about their breakfast to questions about the evening’s plans—being able to figure out which messages are truly important, or even which warrant a response, can be difficult. Information overload can lead some people to tune out messages altogether.

Additionally, the speed of communication presents problems:

While making communication more frequent, they can also make it less personal and intimate. Communicating is becoming so easy that the recipient knows how little time and thought was required of the sender. Yes, your half-dozen closest friends can read your vacation updates. But so can your 500 other “friends.” And if you know all these people are reading your updates, you might say a lot less than you would otherwise.

Good lord! she exclaims. We’re surrounded by this constant stream of information! How will we manage it?

It makes sense that we would compare new forms of communication–Twitter, Facebook, text messages–to older forms of written communication like email and, going back more than ten years (!), letters, memos, and personal notes. If we compare newer communication technologies to those previous modes of written communication, then Vascellaro’s points ring true.

But here’s where Laura got it right: Thinking of Twitter as a faster, shorter, and less consequential version of email is an old-school paradigm that ignores the fact that, apart from working primarily with printed text, Twitter (and, more broadly, microblogging in general) is not like email at all. Anyone who approaches these new platforms with an attempt to figure out what’s ‘important’ and what’s ‘trivial,’ what needs to be acted on and what can be ignored, is missing out entirely on the spirit of these spaces.

In fact, Twitter, Facebook, and similar participatory platforms support a convergence of multiple types of communication. Twitter, just as one example, supports a type of identity work that was not previously seen in other communication environments. Through the careful combination of tweets about personal information, ‘trivial’ details, and professional interests, people are painstakingly (sometimes, especially at first, accidentally) crafting and presenting a coherent if fluid and flexible identity, which then informs the identities they present in other spaces, online and off.

What makes Twitter new is the particular combination of people and affordances. Facebook and similar social networks require people to send out friend requests that must then be accepted; this means people can control, fairly strictly, the size of their community. Twitter requires no such permission. Because I can follow almost anyone I want to, and because almost anyone who wants to can follow me, we’re seeing a fascinating intermixture of near and distant connections between people. I follow my best friend, who follows me; I also follow my idol Clay Shirky, who doesn’t follow me (yet); and I follow colleagues who fall all along the friendship continuum. Some of them follow me and some of them don’t (yet).
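For anyone who likes to see the structure spelled out, here is a minimal sketch, in Python, of the two relationship models I’m describing. It is purely illustrative (the class names and data structures are my own invention, not any real Twitter or Facebook API), but it captures the asymmetry.

class SymmetricNetwork:
    """Facebook-style: a relationship exists only once a friend request is accepted."""
    def __init__(self):
        self.pending = set()   # (requester, recipient) pairs awaiting approval
        self.friends = set()   # frozensets, because the relationship is mutual

    def request(self, a, b):
        self.pending.add((a, b))

    def accept(self, a, b):
        if (a, b) in self.pending:
            self.pending.discard((a, b))
            self.friends.add(frozenset((a, b)))


class AsymmetricNetwork:
    """Twitter-style: anyone may follow anyone; no permission is required."""
    def __init__(self):
        self.follows = set()   # ordered (follower, followed) pairs

    def follow(self, follower, followed):
        self.follows.add((follower, followed))


# I can follow Clay Shirky without his approval, and he need not follow back.
twitter = AsymmetricNetwork()
twitter.follow("jenna", "clayshirky")
print(("jenna", "clayshirky") in twitter.follows)      # True
print(("clayshirky", "jenna") in twitter.follows)      # False: not reciprocal

The point is just the shape of the data: a Facebook friendship is an unordered pair that both parties have to ratify, while a Twitter follow is an ordered pair that one party creates unilaterally, and that asymmetry is exactly what produces the mix of near and distant connections I’m describing.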

Vascellaro largely focuses on the professional implications of new communication tools, and agrees that one nice feature of these tools is that information is often available instantaneously–if you need to know whether a colleague has left work yet, you might check her Facebook status. The downside, she explains, is that

a dump of personal data can also turn off the people you are trying to communicate with. If I really just want to know what time the meeting is, I may not care that you have updated your status message to point people to photos of your kids.

In fact, if you’re using Facebook and Twitter just to find out the kinds of information you used to get through email and phone conversations, then the volume of information may feel overwhelming and prohibitive. But if you’re only focusing on how to use these tools to do the work that email used to do, then you’re kind of missing the point: Social media communication tools provide new avenues for doing deep identity work in communities that mix professional and personal relationships.

To be clear, Twitter is not email on steroids. Facebook is not like coffee circles for 500 people at a time. And blogs are not diaries with 100 or 1000 readers. Twitter is Twitter. Facebook is Facebook. And trying to parse these spaces by comparing them to previous one-to-many types of communications (like email) limits one’s ability to see the full range of the affordances of these platforms.

I don’t mean to hammer too hard on Vascellaro, who has written numerous interesting tech-related articles for the WSJ. But a quick look at her homepage reveals her biases (this, by the way, is another interesting aspect of an increasingly participatory culture: a public figure’s digital footprint becomes a matter of public interest). Her site points to her Facebook page, the details of which are locked to the public; the ability to lock down a Facebook page, in my experience, is a feature largely leveraged by people who struggle with the notion of mixing the personal and the professional. But increasingly, the ability to engage with this mixture–even by getting it wrong sometimes–is more valued and valuable than the ability to carefully separate the two.

A look at Vascellaro’s Twitter feed is even more telling. Her first several hundred tweets mimicked the style of Facebook status updates:

Eventually, she switched to a broadcast approach, mainly tweeting about interesting articles or linking to her own writing at the WSJ. These forms of participation are, just to be clear, perfectly legitimate. But other people do it better, and I imagine they get more out of the Twitter experience.

“Better,” in this case, means “with deeper engagement in the collective meaning-making process supported by the affordances of Twitter.”

[insert contemplative pause]

Truly, I’ve been sitting here considering whether an examination of Vascellaro’s social networking practices is germane to a critique of her article. She links to her Facebook page (locked to outsiders) and her Twitter feed on her home page, and I believe that there is much to be learned about a technology reporter’s biases through an examination of her use of those technologies. But I wonder how I would feel if someone picked apart my use of Twitter, Facebook, and other social media platforms. At the very least, I might be engaging in an ad hominem attack. But then I think, if a school reporter critiques public schools, we should try to find out where she sends her kids. If a tech reporter smacks down Apple products, we should find out what kind of products she uses at work and at home.

Am I being a crudwad for examining Vascellaro’s digital footprint? Is it relevant to the issues she identifies in her piece? I would love for people to weigh in on this. In fact, I think I’ll try to get Vascellaro herself to weigh in.

Posted in Facebook, journalism, new media, Twitter | 5 Comments »

…and yet I don’t use Facebook much lately

Posted by Jenna McWilliams on September 15, 2009

This blog has been an unofficial Don’t Say Facebook is Over zone. I’m not quite willing to let go of that stance, especially since statistics suggest that Facebook activity continues to increase worldwide.

But you guys, I really don’t use Facebook that much anymore. And I’m not alone: Lots of my friends have drifted away too. Most of us prefer Twitter now, which means that one of the more interesting features of Facebook–the friend newsfeed–is clogged up by lame quiz results and remediated tweets that I’ve already read. All of the interesting stuff is going on over at twitter now, and Facebook is starting to feel like the social networking version of a print newspaper: I already got all the important news elsewhere, and the rest of what’s there feels like filler.

More significantly, gaining a new Twitter follower feels like a bigger win to me than adding a Facebook friend does.

Now, I don’t want to open myself up to accusations of Virginia Heffernanism. I’m not going to argue that my experience is symptomatic of any larger social networking trends. As far as I can tell, Facebook is far from an “online ghost town.” In fact–and this seems important–as Facebook increasingly becomes the domain of an older and generally less social networking-savvy demographic, it’s shifting to accommodate its new users’ needs and interests. Though it has certainly tried, Facebook just can’t keep up with the dynamic, socially complex Twitterspace; and the more it embraces this fact, the more it attempts to fortify the features it can uniquely offer, the more likely its continued success becomes.

Posted in Facebook, social media, Twitter | 3 Comments »

why I am a technological determinist

Posted by Jenna McWilliams on August 26, 2009

I’m fascinated by danah boyd’s recent post intended for the New Media Consortium’s upcoming Symposium for the Future. In her post, she cautions new media theorists to avoid what she labels “technological determinism.” She explains:

Rejecting technological determinism should be a mantra in our professional conversations. It’s really easy to get in the habit of seeing a new shiny piece of technology and just assume that we can dump it into an educational setting and !voila! miracles will happen. Yet, we also know that the field of dreams is merely that, a dream. Dumping laptops into a classroom does no good if a teacher doesn’t know how to leverage the technology for educational purposes. Building virtual worlds serves no educational purpose without curricula that connects a lesson plan with the affordances of the technology. Without educators, technology in the classroom is useless.

boyd’s point is well taken, though I’d be hard pressed to find a single new media scholar who embraces the kind of technological determinism she describes in the above passage. There may have been a time when the “if we build it, they will come” mindset was commonplace, but virtually no serious thinker I have encountered, either in person or in text, actually believes that new media technologies can or should offer quick fixes to society’s ills.

The problem, as I see it, is a two-part one. The first issue is one of terminology: Increasingly, we talk about “technology” as this set of tools, platforms, and communication devices that have emerged from the rise of the internet. This is useful insofar as it allows new media thinkers to converge as members of a field (typically labeled something like digital media and learning or the like), but it does so at the expense of the long, complicated, and deeply intertwined history of technologies and what we call “human progress.” In truth, social media platforms are an extension of communications technologies that reach back to the beginning of human development–before computers, television, motion pictures, radio, before word processing equipment, to telegraphs, typewriters, Morse code, pencils, paper, the printing press…all the way back to the very first communication technology, language itself.

“Technology” is not a monolith, and there is a distinct danger in presenting it as such, as boyd does in her final paragraph:

As we talk about the wonderfulness of technology, please keep in mind the complexities involved. Technology is a wonderful tool but it is not a panacea. It cannot solve all societal ills just by its mere existence. To have relevance and power, it must be leveraged by people to meet needs. This requires all of us to push past what we hope might happen and focus on introducing technology in a context that makes sense.

The second problem is a rhetorical one. New media theorists have found themselves engaged in a mutually antagonistic dance with those who prefer to focus on what they see as the negative cultural effects of digital technologies. For better or worse, people engaged directly in this dance find themselves coming down more firmly than they might otherwise in one of these camps and, because the best defense is a good offense, staking out a more strident position than they might take in private or among more like-minded thinkers. Thus, those who dislike Twitter feign disdain, repulsion, or fear and are labeled (or label themselves) luddites; and those who like Twitter find themselves arguing for its astronomical revolutionary potential and are labeled (or label themselves) uncritical utopianists.

In fact, media theorists have been targets of the “technological determinism” accusation for so long that they refuse to acknowledge that technologies actually can and often do determine practice. Homeric verse took the structure it did because the cadences were easy for pre-literate poets and orators to remember. The sentences of Hemingway, Faulkner, and many of their literary contemporaries shortened up because they needed to be sent by telegraph–leading to a key characteristic of the Modernist movement. The emergence of wikis (especially, let’s face it, Wikipedia) has led to a change in how we think about information, encyclopedias, knowledge, and expertise.

A more accurate–but more complex and therefore more fraught–way to think about the relationship between humans and their technologies is that each acts on the other: We design technologies that help us to communicate, which in turn impact how we communicate, and when, and why, and with whom. Then we design new technologies to meet our changing communications needs.

Again, virtually no media theorist that I know of would really disagree with this characterization of our relationship to technologies–yet say it too loudly in mixed company, and you’re likely to get slapped with the technological determinism label. I say this as someone who has been accused more than once, and in my view wrongly, of technological determinism.

Overly deterministic or not, however, I agree with boyd that technologies do not offer a panacea. More importantly, she argues against the use of terms like “digital natives” and, presumably, its complement, “digital immigrants.” These are easy terms that let us off the hook: people under 30 get something that people over 30 will never understand, and there’s nothing you can do about this divide. As boyd explains,

Just because many of today’s youth are growing up in a society dripping with technology does not mean that they inherently know how to use it. They don’t. Most of you have a better sense of how to get information from Google than the average youth. Most of you know how to navigate privacy settings of a social media tool better than the average teen. Understanding technology requires learning. Sure, there are countless youth engaged in informal learning every day when they go online. But what about all of the youth who lack access? Or who live in a community where learning how to use technology is not valued? Or who tries to engage alone? There’s an ever-increasing participation gap emerging between the haves and the have-nots. What distinguishes the groups is not just a question of access, although that is an issue; it’s also a question of community and education and opportunities for exploration. Youth learn through active participation, but phrases like “digital natives” obscure the considerable learning that occurs to enable some youth to be technologically fluent while others fail to engage.

The key question on the minds of researchers in digital media and learning is not (or should not be) how we can get computers in the hands of every student but how we can support participation in the valued practices, mindsets, and skillsets that go along with a networked, digital society. Answering this question well requires an ability to engage with the complex, thorny, and socially charged issues that boyd and others have identified in their research and writings. It requires development of a common language within the broad digital media and learning community and an ability to communicate that language to the vast range of stakeholders who are paying attention to what we say and how we say it.


Related posts by other writers:

danah boyd: Some thoughts on technophilia
Kevin Kelly: Technophilia

Posted in academics, collective intelligence, danah boyd, education, new media, participatory culture, public schools, schools, social media, social revolution, Twitter | 1 Comment »