sleeping alone and starting out early

an occasional blog on culture, education, new media, and the social revolution. soon to be moved from http://jennamcwilliams.blogspot.com.

Archive for the ‘new media’ Category

a model for designing the ELA classroom in support of "literacy science"

Posted by Jenna McWilliams on February 7, 2010

You guys, I think I have a model to show you.

This makes me extremely happy, because as I’ve explained (more than once), I’ve struggled mightily with the very concept of modeling. I’ve also struggled with representation. The purpose of designing this model is to show my take on the role of new technologies in educational environments. But articulating a theory, even a working theory, about the role of technologies has been such an insurmountable challenge for me–which technologies? for which students? and for what purpose?

But the elements for building this rudimentary model have been around me for some time. It just took time and reflection for me to be able to put the elements together.

(image description: this is a pen-and-ink drawing of a classroom. In the center of the room, the class is seated, facing each other, around a square of tables; on the table in front of them are combinations of books, notebooks, and electronic equipment. Around the edges of the room are, clockwise from the upper lefthand corner: an easel labeled “representational literacy;” a table with extra pens and extra notebooks; a chalkboard with a variety of marks on it, labeled “design thinking”; book shelves; a workbench labeled “computational literacy”; open space lining most of one wall; a laptop labeled “new media literacy”; a safe filled with bundles of cash; and a laptop cart. Below the picture is the phrase, “If you can’t build it, then you don’t understand it.”)

Inspiration for this model
Design of the periphery: Multiple intelligences schools. A few years ago, I read the 25-anniversary edition of Howard Gardner’s Multiple Intelligences. Throughout the book, Gardner describes a variety of approaches to integrating his theory of multiple intelligences into learning environments, and one description–of the Key Learning Community in Indianapolis–has stuck with me. In this school, students work in “pods” that represent each type of intelligence outlined by Gardner; a founding principle of this school, he explains, “is the conviction that each child should have his or her multiple intelligences stimulated each day. Thus, every student at the school participates regularly in the activities of computing, music, and bodily-kinesthetics, in addition to mastering theme-centered curricula that embody standard literacies and subject matter.”

You don’t have to agree with this approach to appreciate its effort at offering a range of avenues for learning to happen. From time to time I think about those multiple intelligences schools and wonder what aspects might be applied to my current area of focus, the English / Language Arts classroom. Clearly, more avenues toward literacy are better than fewer; and since we know that traditional literacy practices taught through traditional means are insufficient preparation for the types of literacy practices people are called upon to demonstrate in real life, we might think of “pods” for different groupings or categories of literacy learning.

Design of the center and periphery: A real life ELA classroom. I’ve had the unBELIEVABLE good luck to sit in on Becky Rupert’s ELA classroom at Aurora Alternative High School here in Bloomington, IN. Much of the design of this model is based on how she has arranged her class. To begin with, the main focus of the room is a square of tables where students meet at the beginning of each class. My model does not identify the teacher’s location; that’s because in Becky’s classroom, she sits at the table right alongside her students. She does this on purpose, and it works in service of developing a learning community.

Becky’s classroom is absolutely stuffed with books–you have to move books in order to get to other books. A new addition this year is a laptop cart, which sits against the far wall of the room.


Inclusion of design thinking: my work with SociaLens. For the last several months, I’ve been working with a new organization called SociaLens. The purpose of this organization is to consult with businesses and offer strategies for integrating new types of communications tools and ways of thinking into their organizational plans, with a particular eye toward social media technologies. Two key categories that we think make for highly adaptive, potentially highly successful organizations are new media literacies and design thinking.

Until I started working with SociaLens, I had not thought to consider the connection between these categories. I also hadn’t thought about what educational researchers can learn from corporate innovators and vice versa. But what has been seen cannot now be unseen. I’ve come to see design thinking as an essential element of literacy learning, especially if you believe (as I do) that computational flexibility (which I’ll describe briefly below) is key to preparation for success in a new media age.


Inclusion of new media literacy, representational literacy, design thinking, & computational literacy “pods”: Some stuff I’ve read. I’ve been immersed in new media literacy research for a good chunk of years, and I drank that kool-aid long ago. If you believe in the value of teaching new media literacy practices in schools, then computational literacy kind of comes with the territory. These categories of literacy are similar in lots of respects: Both are better described as a set of proficiencies and attitudes–what Lankshear and Knobel call a combination of “technical stuff” and “ethos stuff”–than as concrete, teachable skills. Both require a kind of openness–a flexibility–to meet the quickly changing demands of emerging technologies. But new media literacies are the skills required to engage in collaborative knowledge-building or collective meaning-making or problem-solving activities, while computational literacy is, in my mind, linked to a kind of “hacker’s mentality.” It’s the act of simultaneously making use of and resisting the affordances of any technology; of knowing when and how to say “no” if a technology doesn’t meet your purposes; and of finding (or developing) a new technology that better meets your needs and interests.

Design thinking, as I mention above, comes out of my work with SociaLens and the (admittedly very surface-level) reading I’ve done about this approach to problem-solving. This type of thinking has also made an appearance in the recent work I’ve been reading about research in science and math instruction. Many researchers whose work focuses on supporting inquiry-based approaches to science instruction, in particular, emphasize the value of embracing the epistemological basis of science-as-inquiry. As William Sandoval and Brian Reiser explain in their 2004 piece, “Explanation-Driven Inquiry: Integrating Conceptual and Epistemic Scaffolds for Scientific Inquiry,” the epistemic elements of this approach include

knowledge of the kinds of questions that can be answered through inquiry, the kinds of methods that are accepted within disciplines for generating data, and standards for what count as legitimate interpretations of data, including explanations, models, and theories. Placing these epistemic aspects of scientific practice in the foreground of inquiry may help students to understand and better conduct inquiry, as well as provide a context to overtly examine the epistemological commitments underlying it.

Wilensky & Reisman, in their work with computer-based modeling, argue in support of what they call “the engineer’s dictum”: “If you can’t build it, then you don’t understand it.” They work with a modeling language called NetLogo, which is a loose descendant of Seymour Papert’s Logo program. The program requires students to solve problems by developing models of real-world processes like population fluctuation within predator-prey (wolf-sheep) communities and the phenomenon of fireflies synchronizing their flashes. The authors make a strong case that model-based thinking–or what we might also call “design thinking”–is key to students’ ability to engage in deep learning about a specific phenomenon and about scientific inquiry more broadly.
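
To give a concrete flavor of the kind of model-building Wilensky & Reisman describe, here is a minimal sketch in plain Python–not NetLogo, and not the authors’ actual model; every rule and parameter below is invented for illustration–of agents following simple local rules from which population-level fluctuation emerges:

```python
import random

# A hypothetical toy wolf-sheep model (plain Python, illustration only).
# Parameter values are invented; they are not from Wilensky & Reisman's NetLogo model.
SHEEP_REPRODUCE = 0.04   # chance each sheep reproduces in a tick
WOLF_REPRODUCE = 0.05    # chance each surviving wolf reproduces in a tick
WOLF_GAIN = 20           # energy a wolf gains by eating a sheep
WOLF_COST = 1            # energy a wolf spends each tick just staying alive

def step(sheep, wolves):
    """Advance the model one tick; wolves is a list of per-wolf energy levels."""
    # Each sheep independently has a chance to reproduce.
    sheep += sum(1 for _ in range(sheep) if random.random() < SHEEP_REPRODUCE)

    survivors = []
    for energy in wolves:
        energy -= WOLF_COST
        # A wolf's chance of catching a sheep grows with sheep density.
        if sheep > 0 and random.random() < min(1.0, sheep / 1000):
            sheep -= 1
            energy += WOLF_GAIN
        if energy > 0:                       # wolves with no energy starve
            survivors.append(energy)
            if random.random() < WOLF_REPRODUCE:
                survivors.append(energy // 2)
    return sheep, survivors

sheep, wolves = 500, [10] * 50               # 500 sheep; 50 wolves with 10 energy each
for tick in range(201):
    if tick % 40 == 0:
        print(f"tick {tick:3d}: {sheep:4d} sheep, {len(wolves):3d} wolves")
    sheep, wolves = step(sheep, wolves)
```

Even a toy like this makes the “engineer’s dictum” tangible: deciding how a wolf finds a sheep, or what it costs a wolf to survive a tick, forces you to articulate a theory of the phenomenon you’re modeling.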

I included a pod for “representational literacy” in this model because of my own recent experience grappling with model-building. The ability to design, critique, and modify representational models is a set of skills with relevance across content areas, and we don’t typically think of it as extremely valuable in the literacy classroom. But it should be news to nobody that “literacy” is becoming an increasingly visual category of proficiencies, and that representational literacy is quickly becoming even more tightly bound up with traditional literacies than it ever was before.

What I haven’t yet noted is that these categories of literacy practices make up what we might call “literacy science.” I mean this term to hold the same place in the literacy classroom that “mathematics” or “science” or “history” or “music” hold in their respective classroom-based environments. As a culture, we haven’t spent enough time yet thinking about the purpose we hope the new literacy classroom to serve. Science class is supposed, ideally, to get students thinking like scientists; in math class you (ideally) learn to think like a mathematician; in history class you think like a historian; but English class, in general, has been designed as a sort of catch-all, a place where students can learn the basic reading and writing skills that enable them to think like historians, mathematicians, and so on.

What if we shifted the focus of the ELA classroom to more explicitly broach the notion of “literacy science”: A way of being in the (literate) world characterized by an ethos, a set of skills, and a set of norms and behaviors? What would it mean to turn the ELA classroom into a place where we support the growth of literacy scientists?


Inclusion of open space: a nod to the future work of literacy science. Howard Gardner’s list of multiple intelligences has grown over the years, and my model is designed to accommodate new categories of literacy practices. Filling up the entire classroom does nobody any good, especially since we know–we absolutely know–that new valued practices are emerging alongside new technologies at breakneck speed.

I should mention, too, that my model includes a safe filled with bundles of cash. This is a nod not only to the future work of literacy science but also to the current conditions of the typical public school. On top of the training required, every one of the pods in my model costs money, and it’s money that schools simply don’t have.

So that’s it: That’s my current model for the role of technologies in the literacy classroom. I would love to know your thoughts. Comments, questions, and suggestions are most welcome and will be read with great joy, thoughtfulness, and enthusiasm.

References: In case you’re interested in reading the work I identified above, here are the citations.

Wilensky, U., & Reisman, K. (2006). Thinking like a wolf, a sheep, or a firefly: Learning biology through constructing and testing computational theories–an embodied modeling approach. Cognition and Instruction, 24(2), 171-209. http://ccl.northwestern.edu/papers/wolfsheep.pdf.

Sandoval, W. A., & Reiser, B. J. (2004). Explanation-driven inquiry: Integrating conceptual and epistemic scaffolds for scientific inquiry. Science Education, 88(3), 345-372.

Posted in creativity, education, Joshua Danish, learning sciences, literacy, new media, participatory culture, pedagogy, schools, teaching | Leave a Comment »

technologies as sleeping policemen: or, how I learned to stop worrying and…

Posted by Jenna McWilliams on January 18, 2010

Nicholas Burbules and Thomas Callister worry for us. Or, at least, they were worried, over 10 years ago when they offered up their take on new technologies in a paper called The Risky Promises and Promising Risks of New Information Technologies for Education. Among their concerns: that too many people adopt a “computer as panacea approach” to new technologies. This is uniquely problematic in education, they argue, where

(r)ather than acknowledge the inherent difficulty and imperfectability of the teaching-learning endeavor, rather than accept a sloppy pluralism that admits that different approaches work in different situations—and that no approach works perfectly all the time—educational theorists and policy makers seize upon one fashion after another and then try to find new arguments, or new mandates, that will promote widespread acceptance and conformity under the latest revolution.

As problematic as the “computer as panacea” approach is, it pales in comparison to the relativistic “computer as neutral tool” approach, the one that has people saying that any technology can be used for good or for evil. Burbules and Callister explain that:

this technocratic dream simply errs in the opposite direction from the first. Where the panacea perspective places too much faith in the technology itself, the tool perspective places too much faith in people’s abilities to exercise foresight and restraint in how new technologies are put to use; it ignores the possibilities of unintended consequences or the ways in which technologies bring with them inherent limits to how and for what purposes they can be used. A computer is not just an electronic typewriter; the World Wide Web is not just an on-line encyclopedia. Any tool changes the user, especially, in this instance, in the way in which tools shape the conception of the purposes to which they can be put. As the old joke goes, if you give a kid a hammer they’ll see everything as needing hammering.

They prefer a middle approach, which assumes that a simple cost-benefit analysis fails to account for the possibility that benefits and costs are highly dependent on perspective. They offer as proof the history of antibiotics, which through widespread use greatly decreased humanity’s likelihood of dying from bacterial infection but in the process led to the emergence of drug-resistant forms of bacteria. (“That is a very bad thing,” they write.)

Though it’s fairly simplistic to compare new information technologies to antibiotics, I’ll go with the analogy for now, mainly because I agree with the authors’ effort to problematize attitudes toward new technologies. It’s perhaps more accurate to consider the social effects of antibiotics: they have led to a general increase in life expectancy, but in the process have enabled imperialistic societies (cf. the United States) to effectively colonize cultures, communities, and countries worldwide. In the same way, new technologies offer unprecedented access to information, communities, and tools for mobilization, but they simultaneously support new forms of colonization, both across and regardless of national borders.

Which brings me to the metaphor of technologies as sleeping policemen.

The sleeping policeman: In America, we call it a “speedbump.”

The speedbump’s intended effect is to get drivers to slow the hell down, and it’s commonly used in neighborhoods and suburban areas with lots of kids. And it does get people to slow the hell down, primarily because they have no choice. There are also tons of unintended effects: Parents feel more comfortable letting their kids play outside. And, as this post points out, kids playing outside tend to get to know each other better. They–and, by extension, their parents–connect with other neighborhood residents, and everybody feels more connected: “Parents come to know the nearby children. And, inevitably, they come to know those childrens’ parents. They begin trading favors like driving children around. They become neighborly.”

There are potential negative effects, too. Using sleeping policemen to slow drivers down changes driving practices in unintended ways. When a driver hits the last speedbump, she hits the gas and jets on down the road. This might increase the risk of an accident just beyond the range of the speedbumps. Drivers may choose to avoid areas with speedbumps, thereby increasing traffic through other areas–even, potentially, nearby neighborhoods whose streets lack speedbumps. And because the speedbump does the slowing for her, the driver never has to monitor her own driving practices; the decision to simply drive more slowly in neighborhoods is taken away from her, making it less likely that she will adopt slower driving as a general practice.

Still, I think we can all agree that the benefits outweigh the costs. Nobody sees the speedbump as a panacea, and I don’t imagine many people see the speedbump as a neutral technology.

So why do we worry so much more about the emergence and increasing ubiquity of new media technologies than we do about sleeping policemen or antibiotics?

One reason is that it’s easier to see new media technologies as actors that shape our practices than it is to see how speed bumps and antibiotics have shaped us.

Actors: Any person or tool that exerts force upon any other person or tool, thereby shaping its use or practice. In Actor-Network Theory, everything is a potential actor, everything a potential actant.

Speed bumps act upon cars, drivers, kids, parents, neighborhood dynamics. Antibiotics have acted upon people, policies, government spending, and attitudes. We live longer now. We therefore reshape our lives, our goals, and our relationships to others. It’s all very chaotic and complicated, because our reshaped attitudes in turn act upon our use of antibiotics. Everything mediates everything.

Because new media technologies have emerged and been adopted so quickly, their role in reshaping thought and action–and even, it’s becoming clear, physiology–is easy to see, even if how this reshaping will ultimately shake out remains quite fuzzy. New technologies as sleeping policemen: They shape not only how we drive, but how we think about driving. We move them, we reshape them, we add more or take a few away, we develop cars with better suspension…and it goes on down the rabbit hole.

Posted in academia, education, learning sciences, new media, participatory culture, pedagogy, philosophy, public schools, schools, social media, social revolution | 3 Comments »

response from Mark Bauerlein: on The Dumbest Generation

Posted by Jenna McWilliams on December 10, 2009

I recently received an email from Mark Bauerlein in response to my critique of his book, The Dumbest Generation.

I asked for, and Bauerlein gave me, permission to post his response to my blog. Here it is, in its entirety:

Astonishing, Jenna, that you quote Liz Losh, who actually takes one disgruntled student’s comments on RateMyProfessors as evidence from which to generalize about my teaching.

If you have found any factual or logical errors in Dumbest Generation, I’ll be happy to concede them. After all, we want every harsh judgment in the book to be proven wrong.

Mark

Posted in awesome, education, new media, participatory culture | 2 Comments »

liveblogging the Home Inc Conference: keynote speaker Alan November

Posted by Jenna McWilliams on October 24, 2009

From Alan November’s website:

Alan November is an international leader in education technology. He began his career as an oceanography teacher and dorm counselor at an island reform school for boys in Boston Harbor. He has been director of an alternative high school, computer coordinator, technology consultant, and university lecturer. He has helped schools, governments and industry leaders improve the quality of education through technology.

His opener:
“I used to think I knew the truth. I don’t know it anymore. So whatever I say is only good enough to criticize.”

Here’s why, according to Alan November, we’ve been able to spend over $10 billion on putting technology into schools over the last decade without making any gains in learning. He pulls much of his argument from Shoshana Zuboff’s 1988 book, In the Age of the Smart Machine: The Future of Work and Power.


1. The real solution isn’t bolting technology on top of what we used to do.
November pointed to Zuboff’s notion of “automating,” which is the process of using technology to automatically transfer information. “When you automate,” November said, “at best, you only get incremental improvement. Not surprisingly to me, you often get a decline in quality.”

According to November, connecting our classrooms to the Internet has lowered the quality of education in the U.S. Plagiarism has skyrocketed. “Everywhere I go,” he said, “teachers complain about how students are taking the easiest route to learning” through copying and pasting and other plagiaristic approaches.

2. The real issue isn’t technology; the real issue is control. We have teachers and administrators controlling learning and we need to ask how well (or poorly) that serves the needs of the learners.

Here are the solutions November offers:


Zuboff’s notion of informating:
Giving people access to information they’ve never had before. “I’ve been to schools that are technology-rich and information-poor. Teachers don’t have the right information at the right time to do the right job. Students don’t have the right information at the right time to do the right job. Parents do not have the right information–ever, hardly.”

Identify new opportunities for collaboration. This is, according to November, a mark that you’re beginning to use technology well.
“The one-room schoolhouse was a great idea. We need to go back to that. The very structure of the school system is what’s in the way. That structure is a control model.”

If you do those two things well, November argued, then more and more people become self-directed. They don’t need an organization to tell them what to do. That’s the ultimate skill, according to November.

“One of the most important questions we need to ask is: Who should own the learning?” Since technology is typically used to reinforce teacher control, we need to think of new strategies for using technology to shift control over learning toward learners and, November argues, parents. He argued that the best thing schools can do is to “build capacity in every family as centers of learning.”

“But I can say this until I’m blue. I don’t think anybody’s going to do this–because it falls outside of the boundaries of the current collaboration people have.”

Time? Money? Energy? “It’s all red herrings,” said November. “It’s all about control!”

From November’s perspective, the single technology with the greatest potential to help lead to a shift in control is Skype.

my thoughts on November’s keynote:

It’s refreshing to see his energy and enthusiasm about rethinking the use of technology in the classroom. I worry, though, that his stance on transferring agency to the family could just shift the control issues from the schools to the family structure. In brief, it’s not just control that makes schools worrisome institutions; it’s the colonizing effect of middle class values on members of non-dominant classes and ethnicities. Collaborate with families and you get the same old divide we’ve been seeing for much more than the last decade. Middle class kids will get inculcated with middle class values, which we know lead to success; lower class kids will learn a different set of values, thereby reifying the divide between the haves and the have-nots.

Add to this the increasing influence of new media technologies–and the participation gap that Henry Jenkins has pointed to–and this concern becomes even more vital.

Control, after all, is much less simple (and simplistic) than we try to make it appear. Add to that the fact that institutional control has nuances that aren’t easy to talk about within the structure of a keynote.

“If you don’t have the right mission,” November said, “it doesn’t matter what technology you have.” Yes, and we need to consider the broader (if tacit and unexplored) mission of the American education system.

Posted in literacy, liveblogging, MIT, new media | 1 Comment »

time to smack down the Wall Street Journal

Posted by Jenna McWilliams on October 12, 2009

(Don’t worry; I snuck around the pay wall.)

As my sister Laura put it when she sent me this article on why the Wall Street Journal is five years behind the times–er, on why email is no longer the communication tool of choice, “It’s trying so hard to be ‘with it’ and in the flow of the times… But it seems stuffy and like a 40-year-old’s take on new media.”

The piece is called “Why Email No Longer Rules… And what that means for the way we communicate,” and it reiterates points that were interesting to tech folks a handful of years ago. (Here’s a 2007 piece on the decline of email from Gawker; here’s a 2007 piece on the same topic from Slate; here’s a 2006 blogpost on the issue; and so on.)

Among the “insights” of the WSJ article is that email now seems painfully slow to us:

Why wait for a response to an email when you get a quicker answer over instant messaging? Thanks to Facebook, some questions can be answered without asking them. You don’t need to ask a friend whether she has left work, if she has updated her public “status” on the site telling the world so. Email, stuck in the era of attachments, seems boring compared to services like Google Wave, currently in test phase, which allows users to share photos by dragging and dropping them from a desktop into a Wave, and to enter comments in near real time.

There is, reporter Jessica E. Vascellaro explains, a new phenomenon wherein we receive a constant stream of information, both personal and professional. There is a downside, as she points out:

That can make it harder to determine the importance of various messages. When people can more easily fire off all sorts of messages—from updates about their breakfast to questions about the evening’s plans—being able to figure out which messages are truly important, or even which warrant a response, can be difficult. Information overload can lead some people to tune out messages altogether.

Additionally, the speed of communication presents problems:

While making communication more frequent, they can also make it less personal and intimate. Communicating is becoming so easy that the recipient knows how little time and thought was required of the sender. Yes, your half-dozen closest friends can read your vacation updates. But so can your 500 other “friends.” And if you know all these people are reading your updates, you might say a lot less than you would otherwise.

Good lord! she exclaims. We’re surrounded by this constant stream of information! How will we manage it?

It makes sense that we would compare new forms of communication–Twitter, Facebook, text messages–to older forms of written communication like email and, going back more than ten years (!), letters, memos, and personal notes. If we compare newer communication technologies to those previous modes of written communication, then Vascellaro’s points ring true.

But here’s where Laura got it right: Thinking of Twitter as a faster, shorter, and less consequential version of email is an old-school paradigm that ignores the fact that, aside from working primarily with printed text, Twitter (and, more broadly, microblogging in general) is not like email at all. Anyone who approaches these new platforms with an attempt to figure out what’s ‘important’ and what’s ‘trivial,’ what needs to be acted on and what can be ignored, is missing out entirely on the spirit of these spaces.

In fact, Twitter, Facebook, and similar participatory platforms support a convergence of multiple types of communication. Twitter, just as one example, supports a type of identity work that was not previously seen in other communication environments. Through the careful combination of tweets about personal information, ‘trivial’ details, and professional interests, people are painstakingly (sometimes, especially at first, accidentally) crafting and presenting a coherent if fluid and flexible identity, which then informs the identities they present in other spaces, online and off.

What makes Twitter new is the particular combination of people and affordances. Facebook and similar social networks require people to send out friend requests that must then be accepted; it means people can control, fairly strictly, the size of their community. Twitter requires no such permission. Because I can follow almost anyone I want to, and because almost anyone who wants to can follow me, we’re seeing a fascinating intermixture of near and distant connections between people. I follow my best friend, who follows me; I also follow my idol Clay Shirky, who doesn’t follow me (yet); and I follow colleagues who fall all along the friendship continuum. Some of them follow me and some of them don’t (yet).
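
To make that structural difference concrete, here is a hypothetical sketch in Python–not Facebook’s or Twitter’s actual data model, just an illustration of the asymmetry described above: a follow is a one-way edge that needs no consent, while a friendship exists only once both sides agree.

```python
# A hypothetical sketch of the structural difference described above --
# not Facebook's or Twitter's actual data model.

class FollowGraph:
    """Twitter-style: a follow is a one-way edge; no permission required."""
    def __init__(self):
        self.follows = set()            # (follower, followed) pairs

    def follow(self, follower, followed):
        self.follows.add((follower, followed))

    def is_mutual(self, a, b):
        return (a, b) in self.follows and (b, a) in self.follows

class FriendGraph:
    """Facebook-style: a friendship exists only after a request is accepted."""
    def __init__(self):
        self.pending = set()            # (requester, recipient) pairs
        self.friends = set()            # frozenset({a, b}) once accepted

    def request(self, requester, recipient):
        self.pending.add((requester, recipient))

    def accept(self, recipient, requester):
        if (requester, recipient) in self.pending:
            self.pending.remove((requester, recipient))
            self.friends.add(frozenset({requester, recipient}))

twitter = FollowGraph()
twitter.follow("jenna", "clayshirky")            # no consent needed; may never be returned
facebook = FriendGraph()
facebook.request("jenna", "bestfriend")
facebook.accept("bestfriend", "jenna")           # the tie exists only once accepted
print(twitter.is_mutual("jenna", "clayshirky"))                 # False
print(frozenset({"jenna", "bestfriend"}) in facebook.friends)   # True
```

The difference is tiny at the level of data, but it is exactly what produces the mix of near and distant ties Twitter supports.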

Vascellaro largely focuses on the professional implications of new communication tools, and acknowledges that one nice feature of these tools is that information is often available instantaneously–if you need to know whether a colleague has left work yet, you might check her Facebook status. The downside, she explains, is that

a dump of personal data can also turn off the people you are trying to communicate with. If I really just want to know what time the meeting is, I may not care that you have updated your status message to point people to photos of your kids.

In fact, if you’re using Facebook and Twitter just to find out the kinds of information you used to get through email and phone conversations, then the volume of information may feel overwhelming and prohibitive. But if you’re only focusing on how to use these tools to do the work that email used to do, then you’re kind of missing the point: Social media communication tools provide new avenues for doing deep identity work in communities that mix professional and personal relationships.

To be clear, Twitter is not email on steroids. Facebook is not like coffee circles for 500 people at a time. And blogs are not diaries with 100 or 1000 readers. Twitter is Twitter. Facebook is Facebook. And trying to parse these spaces by comparing them to previous one-to-many types of communications (like email) limits one’s ability to see the full range of the affordances of these platforms.

I don’t mean to hammer too hard on Vascellaro, who has written numerous interesting tech-related articles for the WSJ. But a quick look at her homepage reveals her biases (this, by the way, is another interesting aspect of an increasingly participatory culture: a public figure’s digital footprint becomes a matter of public interest). Her site points to her Facebook page, the details of which are locked to the public; the ability to lock down a Facebook page, in my experience, is a feature largely leveraged by people who struggle with the notion of mixing the personal and the professional. But increasingly, the ability to engage with this mixture–even by getting it wrong sometimes–is more valued and valuable than the ability to carefully separate the two.

A look at Vascellaro’s Twitter feed is even more telling. Her first several hundred tweets mimicked the style of Facebook status updates.

Eventually, she switched to a broadcast approach, mainly tweeting about interesting articles or linking to her own writing at the WSJ. These forms of participation are, just to be clear, perfectly legitimate. But other people do it better, and I imagine they get more out of the Twitter experience.

“Better,” in this case, means “with deeper engagement in the collective meaning-making process supported by the affordances of Twitter.”

[insert contemplative pause]

Truly, I’ve been sitting here considering whether an examination of Vascellaro’s social networking practices is germane to a critique of her article. She links to her Facebook page (locked to outsiders) and her Twitter feed on her home page, and I believe that there is much to be learned about a technology reporter’s biases through an examination of her use of those technologies. But I wonder how I would feel if someone picked apart my use of Twitter, Facebook, and other social media platforms. At worst, I might be engaging in an ad hominem attack. But then I think, if a school reporter critiques public schools, we should try to find out where she sends her kids. If a tech reporter smacks down Apple products, we should find out what kind of products she uses at work and at home.

Am I being a crudwad for examining Vascellaro’s digital footprint? Is it relevant to the issues she identifies in her piece? I would love for people to weigh in on this. In fact, I think I’ll try to get Vascellaro herself to weigh in.

Posted in Facebook, journalism, new media, Twitter | 5 Comments »

putting the "new" in "new media literacies": a helpful visual aid

Posted by Jenna McWilliams on October 11, 2009

Here’s a Prezi project I’ve been working on to visualize some key features of what Kress, Lankshear & Knobel, Scribner & Cole, and others find salient in emerging new media literacy practices.

This visualization emphasizes Lankshear & Knobel’s characterization of two distinct approaches to the “new” in “new media literacy.” As they explain, the use of ‘new’ in the paradigmatic sense signals a sociocultural approach to literacy practices:

the New Literacy Studies comprise a new paradigm for looking at literacy as opposed to the paradigm that already existed that was based on psychology. The use of ‘new’ here parallels that which is involved in names for initiatives or movements such as the New School of Social Research, the New Science, the New Criticism (and New Critics) and so on. In all such cases, the proponents think of their project as comprising a new and different paradigm relative to an existing orthodoxy or dominant approach.

Okay, that’s the paradigmatic sense of ‘new.’ Here’s the ontological sense, which Lankshear & Knobel explain

refers to the idea that changes have occurred in the character and substance of literacies associated with changes in technology, institutions, media, the economy, and the rapid movement toward global scale in manufacture, finance, communications and so on. These changes have impacted on social practices in all the main areas of everyday life within modern societies…. Established social practices have been transformed, and new forms of social practice have emerged and continue to emerge at a rapid rate.

Gunther Kress, in “Literacy and Multimodality,” emphasizes design, perhaps above all else. He explains:

Design does not ask, ‘what was done before, how, for whom, with what?’ Design asks, ‘what is needed now, in this one situation, with this configuration of purposes, aims, audience, and with these resources, and given my interests in this situation?’

Design matters; and my choice to work with Prezi and a publicly accessible blog shaped my engagement with this project. It would have looked very different had I sketched it out on a sheet of paper–even had I sketched it first with the intention of then designing it in Prezi and posting it to my blog.

This visualization is only my first attempt to point to two broad groups of shifts in new media literacy studies. I hope to gather feedback (you can comment below!); return to, revise, and refine this project; and resubmit it for your approval at a later date. Thank you in advance for your input.

To take a look at the visualization, click the arrow below the design. It’s interactive–you can zoom in on any part of the graphic.

Posted in literacy, new media | 7 Comments »

blogging as a pedagogical tool: some initial ideas and a request

Posted by Jenna McWilliams on September 30, 2009

I’m hoping to crowdsource some brainstorming about the pedagogical potential of blogging. Lately, in my work with Dan Hickey’s 21st Century Assessment Project, I’ve been thinking tons about how integrating blogging into the formal English / Language Arts classroom might build a rich new media environment for ELA students. I’ve started a provisional list below but am hoping that others (most importantly for me, people who have worked with blogs in their classrooms) can offer ideas for additions to this list.

First of all, it’s worth noting that my approach to the value of blogging for teaching and learning in Language Arts is deeply informed by the work of a number of teacher-researchers from several fields. Most notable among these are Paul Allison, whose chapter “Be a Blogger: Social Networking in the Classroom” (in Teaching the New Writing: Technology, Change, and Assessment in the 21st-Century Classroom, edited by Anne Herrington, Kevin Hodgson, and Charles Moran) offers a glimpse into the day-to-day workings of a blogging-focused ELA curriculum; and Sam Rose and Howard Rheingold, who have devised (and made publicly available) an enormous set of resources for teaching in and through new media platforms.

My approach is also informed by my personal experience as a blogger–really, to be fair, as someone who is willing to squeeze out nearly anything in order to make time for posting. By even my most generous estimate, I spend far too much of my time blogging–unless you account for the formative value of blogging for someone like me. I am convinced that the intellectual and identity work required for me to maintain this space has led directly to my growing prowess as a researcher, reader, and writer. You cannot convince me otherwise; so do not even bother trying.

My experiences and the reading I’ve done about the value of blogging for learning inform everything that comes next.

Characteristics of blogging that support new media literacy

Reaching a wide(r) reader base
It’s important to note that blogs differ in purpose from many seemingly similar writing platforms. It’s obvious to most that a blog is different from a personal journal, in that while many of us may hope to have our journals read by a larger public some day, blogs are actually intended to support wider readership. The majority of blogs are public (meaning anybody can view them) and taggable, and they come up as legitimate sites in web searches.

Blogs also differ from forums, chat rooms, instant message programs, and social networking sites like Twitter and Facebook. Of all of these spaces, blogs are generally the most polished, the most text-based, and the most supportive of extended engagement with a single idea.

Shifting from intended audience to intended public

This idea is ripped from Howard Rheingold, who (tapping into some Habermas) writes that

[m]oving from a private to a public voice can help students turn their self-expression into a form of public participation. Public voice is learnable, a matter of consciously engaging with an active public rather than broadcasting to a passive audience.

The move here is away from the “please read what I wrote” approach to “please act on the ideas I’ve written down here.” The regular practice required for building and maintaining a blog’s readership helps to crystallize this shift and helps writers to see there is a broad, if constantly shifting, group of people whose interests align with the broad, if constantly shifting, ideas of a blog. Though the intended public is largely invisible (we have generally only met a fraction of our blog’s readers), consistent practice in finding, drawing in, and engaging this target public makes them feel more concrete.

Blogs as (genuine) conversations
When I taught college composition lo these many years ago, I always tried to argue to my students that all writing is a conversation–that when we write, we take up ideas that were presented by other writers before us and try to present something new that might be of interest to people who care about the kinds of things we write about.

The argument always felt hollow to me. After all, college students are typically only eavesdroppers. Only a handful of people will ever read what they’ve written, and often the students don’t really care all that much about the assigned writing topics anyway. Add to that the artificial motivator of the ever-elusive ‘A’ and you have a recipe for calamity.

But blogs–now blogs are authentic communication spaces. They really are. Anybody can get almost anybody to read a blogpost and, if the post is engaging enough, to comment on the post for all eternity to see. This very fact ups the ante some: Getting the spelling of someone’s name right suddenly matters an awful lot. Making a concise, well supported argument has real potential consequences: A strong enough argument gets people to sit up and notice. A strong enough argument gets people to act.

A move toward increasingly public spheres of participation

An increasingly participatory culture calls for participation that’s ethical, reasoned, and publicly accessible. After all, the widespread takeup of the spirit of participatory culture requires that we all act in ways that keep the barriers to participation low, the potential for contribution high, and the mentorship possibilities readily available to most or all participants. This can only happen to the extent that all or most of us are willing to operate, to express and circulate our ideas and creative works, in public online and offline spaces. Since so much discourse will increasingly happen in public spaces, it only makes sense that we use the ELA domain to prepare students for engagement in those public spaces.


Blogs as spaces for fostering both traditional and new media literacies

For language arts teachers, blogging presents a fairly obvious avenue for preparing learners for engagement in public spheres of communication, since blogs align nicely with the traditional purposes of the ELA classroom. As a group of readers engage in deep analysis of their own and others’ blogs, they have to think about issues like tone, style, genre, punctuation, word choice, and organization.

The extra toy prize is that students also get to learn about the characteristics of online writing, including what danah boyd identifies as the four properties of online communication (persistence, searchability, replicability, and scalability) and three dynamics (invisible audiences, collapsed contexts, and the blurring of public and private). As my colleague Michelle Honeyford put it, “they hit all the standards and get to learn about online participation for free.”

Confronting the ethics challenge
Nobody’s arguing that we should sign every sixth grader up for a Blogger account. That would just be silly. Media scholar Henry Jenkins is fond of saying that the role of educators and parents is not to look over kids’ shoulders but to watch their backs, and scaffolding learners toward participation in increasingly public spheres allows us to do just that. Lots of teachers (including the famously brilliant Becky Rupert at Bloomington’s Aurora Alternative High School) start their students out by having them post to a private space (she uses Ning) while having them analyze writing from more public spaces. This way, students have a kind of new media sandbox in which to try out and engage with the norms of online communication before actually being held to the higher ethical standard of public participation, with its deeper potential repercussions (both positive and negative).

That’s all I have for now, though I would love to hear from you on the list above. What have I missed? What am I ignoring? What struggles are linked to bringing blogs into the classroom, and what challenges have you encountered if you’ve tried to do so?

I hope for this to be a multipart post that will include thoughts on the following categories:

  • Affordances of blogging as a new media writing technology
  • Challenges to integrating blogs into the ELA classroom
  • Resources (including lesson plans, other writing on this topic, etc.)
  • Assessment guidelines for working with blogs

If you have thoughts on any of the above, I’d love to hear from you. If you have any trouble posting comments (I don’t know why, but some of you have) please email me at jennamcjenna(at)gmail(dot)com.

Posted in assessment, blogging, creativity, Dan Hickey, education, Henry Jenkins, Howard Rheingold, literacy, new media, participatory culture, schools, social media, writing | 10 Comments »