sleeping alone and starting out early

an occasional blog on culture, education, new media, and the social revolution. soon to be moved from

Archive for the ‘patent pending’ Category

update: model for integrating technology into the literacy classroom

Posted by Jenna McWilliams on February 14, 2010

I’ve upgraded.

As part of an ongoing assignment for a course I’m taking called Computational Technologies in Educational Ecosystems, I’ve been designing and modifying a model for the role of technologies in the classroom. A previous version, a cellphone picture of a drawing on a sheet of notebook paper, looked like this:

Well. This is for a class on computational technologies, so a hand-drawn model would never do. Besides, one of the more useful affordances of new design technologies is the ease with which designs can be modified–not the case with hand-drawn designs.

So I upgraded. The upgrade looks like this:

(You can click the image to enlarge it; if it’s still too small, you can open a PowerPoint version here.)

As I mentioned in my previous post, I’m focusing on the English / Language Arts classroom–what I’ve begun to call the “literacy sciences” classroom. I’m calling it this to reflect my vision for the kind of learning that can happen in the ideal ELA classroom. It’s a community where class activities reflect the real-world practices of people engaging in authentic, valuable and valued reading and writing practices. In the real world, reading and writing practices cross multiple media and platforms; and they’re all bound up in the context for which they’re necessary and useful.

Which is why this version includes one tiny but important addition: The open door leading to other content areas. This addition was inspired by reading I’ve done this week on participatory simulations and wearable computing. Vanessa Colella’s 2000 piece, “Participatory Simulations: Building Collaborative Understanding through Immersive Dynamic Modeling,” describes one aspect of these types of simulations: That they treat the classroom as what she labels a “cognitive system.” Colella describes the cognitive system as one comprising all the people, tools, data, and discourse that are both part of and a product of class activities.

What Colella doesn’t point out is that the simulations she describes call for a cognitive system not bound by any specific content domain. Her simulation is of a fast-spreading virus similar to HIV or influenza, and though students’ primary goal is to solve the problem of how the virus spread and to whom, related social and cultural implications are hinted at and have educational potential.

Indeed, the real-world literacy practices of literacy science are not bound to any domain. It’s hard to imagine what “pure” literacy science would look like: A solitary reader, engaging in literary analysis in a room by herself, without any tools other than her eyes and her mind and her memory? Though the cognitive systems that surround literacy performances are not always clear and not always stable, one thing we can say is that they extend far beyond the domain of English / Language Arts.

We must, therefore, prepare learners for this reality by opening up the doors and letting content bleed across boundaries, and letting readers move between contexts. The problems learners must be prepared to address–the deep, thorny problems of our time–call for a breaking down of content silos.

One other addition here is the citations around the borders. These are linked, to varying extents, to course readings; I’ve added a few other names where relevant. Upon completion of this project, I’ll post a list of all relevant resources, in case you’re interested in perusing them.

Posted in academia, education, graduate school, Henry Jenkins, Joshua Danish, literacy, patent pending, reading, schools, teaching, writing

devising a model for technology in education: my version of writer’s block

Posted by Jenna McWilliams on February 2, 2010

I believe the following principles to hold true:

  • Human goals are mediated by, and thenceforth only achieved through, the widespread adoption and use of new technologies.*
  • Human purposes for adopting and making use of new technologies are often highly individualized (though nearly always aligned with an affinity group, even if that group is not explicitly named and even if that group is not comprised of other members of the learning community).
  • While no educational researcher is qualified to articulate achievable goals for another human, the researcher is ethically obligated to support learners in articulating, and achieving, ethical educational goals.
  • The efficacy and success of new technologies can be measured through multiple lenses, among which only one is the achievement of mainstream educational goals as articulated and assessed through traditional, often standardized, measurement tools.

If you (a) know me, (b) follow me on Twitter or a similar social network, or (c) read my blog, you know that being at a loss for something to say just doesn’t happen to me. (On the one hand, this makes me perfectly suited to social media, blogging, and academia; on the other hand, it means I’ll mouth off about the social revolution in nearly any social situation.)

But for weeks now, I’ve been trying to devise a model to represent the role of computational technologies in education. And for weeks, I’ve been failing miserably. Here’s the closest I’ve come:

As you can see, this model is incomplete. I was in the middle of drawing an arrow from that word “technology” to something else when I realized that this model would never, ever do. So I tried to approach modelling from other perspectives. I tried backing my way in, by thinking of technologies metaphorically; I tried presenting technology integration in the form of a decision tree. Which is fine, except that these don’t really work as models.

And I have to come up with a model. I do. Though I don’t often mention this, I’m not actually only a blogger. In real life, I’m a graduate student in Indiana University’s Learning Sciences Program. Because I believe in the value of public intellectual discourse, I’ve chosen to present as much of my coursework as possible on my blog or through other public, persistent and searchable communications platforms.

I will, at some future point, discuss the challenges and benefits of living up to this decision. For now, you guys, I just need to come up with a goddam model that I can live with.

I tried thinking of technologies as sleeping policemen; or, in other words, as objects that mediate our thoughts and actions and that have both intended and unintended consequences. This was a reaction to a set of readings including a chunk of Bonnie Nardi and Vicki O’Day’s 1999 book, Information Ecologies: Using Technology with Heart; a Burbules & Callister piece from the same year, “The Risky Promises and Promising Risks of New Information Technologies for Education”; and Stahl & Hesse’s 2009 piece, “Practice perspectives in CSCL.” The theme of these writings was: We need to problematize dominant narratives about the role of technologies in education. Burbules & Callister categorize these narratives as follows:

  • computer as panacea (“New technologies will solve everything!”)
  • computer as [neutral] tool (“Technologies have no purpose built into them, and can be used for good or evil!”)
  • computer as [nonneutral] tool (the authors call this “(a) slightly more sophisticated variant” on the “computer as tool perspective”)
  • balanced approach to computer technologies (neither panacea nor tool, but resources with intended and unintended social consequences)

Nardi & O’Day, who basically agree with the categories identified above, argue for the more nuanced approach that they believe emerges when we think of technologies as ecologies, a term which they explain is

intended to evoke an image of biological ecologies with their complex dynamics and diverse species and opportunistic niches for growth. Our purpose in using the ecology metaphor is to foster thought and discussion, to stimulate conversations for action…. [T]he ecology metaphor provides a distinctive, powerful set of organizing properties around which to have conversations. The ecological metaphor suggests several key properties of many environments in which technology is used.

Which is all fine and dandy, except the argument that precedes and follows the above quote is so tainted by mistrust and despair over the effects of new technologies that it’s hard to imagine that even Nardi and O’Day themselves can believe they’ve presented a balanced analysis. Reading their description of techno-ecologies is kind of like reading a book about prairie dog ecologies prefaced by a sentence like “Jesus Christ I hate those freaking prairie dogs.”

So the description of technologies as sleeping policemen was an effort to step back and describe, with as much detachment as possible for an admitted technorevolutionary like me, the role of technologies in mediating human activity.

But the metaphor doesn’t really have much by way of practical use. What am I going to do, take that model into the classroom and say, well, here’s why your kids aren’t using blogs–as you can see (::points to picture of speed bump::), kids are just driving around the speed bump instead of slowing down….?

This became clear as I jumped into a consideration of so-called “intelligent tutors,” which I described briefly in a previous post. Or, well, the speed bump metaphor might work, but only if we can come up with some agreed-upon end point and also set agreed-upon rules like speed limits and driving routes. But the problem is that even though we might think we all agree on the goals of education, there’s actually tons of discord, both spoken and unspoken. We can’t even all agree that what’s sitting in the middle of that road is actually a speed bump and not, for example, a stop sign. Or a launch ramp.

The Cognitive Tutors described by Kenneth Koedinger and Albert Corbett are a nice example of this. Researchers who embrace these types of learning tools see them as gateways to content mastery. But if you believe, as I do, that the content students are required to master is too often slanted in favor of members of dominant groups and against the typically underprivileged, underserved, and underheard members of our society, then Cognitive Tutors start to look less like gateways and more like gatekeepers. Even the tutoring tools that lead to demonstrable gains on standard assessments, well…ya gotta believe in the tests in order to believe in the gains, right?

So I’m back to this:

A “model,” explains Wikipedia,

is a simplified abstract view of the complex reality. A scientific model represents empirical objects, phenomena, and physical processes in a logical way. Attempts to formalize the principles of the empirical sciences, use an interpretation to model reality, in the same way logicians axiomatize the principles of logic. The aim of these attempts is to construct a formal system for which reality is the only interpretation. The world is an interpretation (or model) of these sciences, only insofar as these sciences are true….

Modelling refers to the process of generating a model as a conceptual representation of some phenomenon. Typically a model will refer only to some aspects of the phenomenon in question, and two models of the same phenomenon may be essentially different, that is in which the difference is more than just a simple renaming. This may be due to differing requirements of the model’s end users or to conceptual or aesthetic differences by the modellers and decisions made during the modelling process. Aesthetic considerations that may influence the structure of a model might be the modeller’s preference for a reduced ontology, preferences regarding probabilistic models vis-a-vis deterministic ones, discrete vs continuous time etc. For this reason users of a model need to understand the model’s original purpose and the assumptions of its validity.

I’m back at the original, simple, incomplete model because I’m not ready to stand in defense of any truth claims that a more complete model might make. Even this incomplete version, though, helps me to start articulating the characteristics of any model representing the role of computational technologies in education. I believe the following principles to hold true:

  • Human goals are mediated by, and thenceforth only achieved through, the widespread adoption and use of new technologies.
  • Human purposes for adopting and making use of new technologies are often highly individualized (though nearly always aligned with an affinity group, even if that group is not explicitly named and even if that group is not comprised of other members of the learning community).
  • While no educational researcher is qualified to articulate achievable goals for another human, the researcher is ethically obligated to support learners in articulating, and achieving, ethical educational goals.
  • The efficacy and success of new technologies can be measured through multiple lenses, among which only one is the achievement of mainstream educational goals as articulated and assessed through traditional, often standardized, measurement tools.

Ok, so what do you think?

*Note: I’m kinda rethinking this one. It reads a little too deterministic to me now, a mere hour or so after I wrote it.

Posted in academia, education, graduate school, lame, obnoxious, patent pending, public schools, schools, social media, social revolution, teaching, technologies

new technologies bore the crap out of me.

Posted by Jenna McWilliams on January 24, 2010

Despite what you may have heard, I’m not really all that into new technologies.

Typically, I find out about new technologies long after they’re already old news. This is a constant source of shame for me. (‘Hey,’ I said in late 2009, ‘this cloud computing thing sounds interesting. What is it?’) As much as I would like to join the ranks of early adopters, I simply lack the constitution. (‘Now, what’s this DROID thing I’ve been hearing so much about this year? Oh, it’s been around since 2007? Well, who makes the Droid? Oh, it’s a mobile operating system and not actually a phone? Can you tell me again what a mobile operating system is?’) My buddy Steve, who likes to find out about new technologies when they’re still in prototype form, regularly subjects me to conversations I don’t really understand about technologies that don’t make sense to me. (Here I would insert a sample conversation if any single representative discussion had made enough sense to me to stick in my memory.)

Technologies bore me. I don’t care about 3-D printers or 3-D TVs. I’m not interested in transparent computer screens. I don’t want to know how my analog watch was made, and I don’t care how light bulbs–even the new, energy-efficient ones–are manufactured.

Though I don’t care about how things are made, I am interested in finding out how things work. This is a subtle but important distinction. I want to learn how lean, mean websites are built, and I want to build some of my own, even though I have absolutely no idea how my computer is able to translate hypertext markup language (HTML) into artifacts that humans can interpret. I don’t know what a “3G network” is or how my new Droid phone uses it to give me constant access to its features, but I do want to know how to set up my phone so I can quickly access contact information for the people I communicate with the most. I would also like to know how to set up my calendar for weekly views instead of daily views.

It’s not the technology that interests me, but its uses. And as long as I’m thinking about uses for a technology, I might as well think about how to manipulate its features to support practices that meet my needs.

Clay Shirky, despite his recent unfortunate foray into gender politics, is actually pretty smart when he’s talking about things he’s qualified to discuss. In his 2008 book Here Comes Everybody, he wrote that “communications tools don’t get socially interesting until they get technologically boring.” And he’s absolutely correct: For socially interesting things to happen, widespread adoption of a technology is the first requirement, complemented by widespread use of the technology. The automobile has led to a reshaping of our roads, our communities, our attitudes toward travel–has, in short, become socially interesting–because its novelty has long since worn off as car ownership has inched toward universal. Cellphones have supported uprisings, protests, revolutions, because we’ve had them around long enough to know how to leverage them for purposes for which they were not originally intended.

In general, I’ve made uneasy peace with my apathy toward new technologies, with one caveat: It’s early adopters who get the earliest, if not the most, say in how technologies are taken up as they become more widespread. And early adopters tend to be young, white, male, college-educated, and affluent. Which is fantastic for them but not so great for people whose needs and interests don’t align with the needs and interests of the rich-young-educated-white-guy demographic.

Still, you just can’t do things your body wasn’t meant to. I don’t guess I’ll ever be able to force myself to care about 3G networks, but it’s easy enough to start thinking about the social implications of a tool that’s 3G-network enabled and pocketsized. Now we’re talking about the possibility of near-limitless access to information and communication: the building blocks for fostering and supporting civic engagement, community participation, and the chance to dance up alongside those early adopters, join them for a while, and modify the music to make a tune that’s easier to dance to.

Now we just need to figure out a way to get everyone dancing. We start by lowering the actual cost of admission (this is one way that early adopters help support the social revolution: They pay $499 so you don’t have to!), then we start pounding down the perceived cost of admission:

  • technological Woody Allen-ism, the reluctance to try a new tool for fear of looking stupid and / or causing irreparable harm to oneself or others;
  • technological Pangloss Syndrome, the perception that the uses built into the tool are the best possible uses for that tool; and 
  • technological Morpheus Syndrome, the sense that uses for a tool have already been predetermined anyway, so even if there might be better uses we might as well just stick with destiny.

And–hey!–I think I just gave myself fodder for my next three blog posts.

Posted in Clay Shirky, patent pending, social revolution, technologies