sleeping alone and starting out early

an occasional blog on culture, education, new media, and the social revolution. soon to be moved from http://jennamcwilliams.blogspot.com.

Archive for the ‘technologies’ Category

how Jim Gee and I soothe our guilty consciences

Posted by Jenna McWilliams on June 8, 2010

In the video below of a presentation to the Education Writers Association 2010 Annual Conference, Jim Gee says this about how to introduce innovative ideas into education:

There’s a choice of strategies here…. One strategy is: Let’s take our innovations to the center of the school system and spread them as fast and quickly as we can. People believe that this current school system as it is will just co-opt those innovations and make them … just better ways to do the old thing. Another strategy is: Let’s make these innovative learning and assessment tools and put them at the margins, in places that will tolerate innovation, and then show it works. Now if you think about it, in technology outside of schools, going to the margins first and then to the center–that’s always been the way innovation happens. The only place we’ve ever tried to keep putting the new thing right in the center at once is in schooling, and it’s never worked. What I would love to see is that we hive off some of the (Race to the Top) money for a national center that would trial these new assessments, show they work in places that tolerate innovation, and then spread them there, just the way you would want if we have to keep coal and oil–let’s at least have something trying out new forms of energy, so that we’re ready for these markets but also we can prove they work. If we don’t do that, we’re just gonna get a better mousetrap.

I absolutely agree with the sentiments in the quote above, except for the BP oil spill. Let’s say there’s some innovative energy research going on in the margins, ready to prove it works and to take over where coal and oil left off. That’s fantastic, and it doesn’t do a single goddamned thing to help the birds, the fish, the sea mammals, the tourist industry, the ecosystem, the fisheries, and the human residents of the Gulf Coast. Those are simply casualties, not a single thing we can do to help them now no matter what awesome innovative fuel source we finally embrace, no matter how much more quickly we may embrace a cleaner fuel source as a result. Even if tomorrow’s birds are safe from Big Oil, today’s birds are drowning right in front of us.

Working at the margins of education is a fantastic way to innovate and offer useful evidence that innovations work. I fully support this approach–but not at the expense of the kids who exist at the center of our education system today. Yes, the school system can and does and maybe always will co-opt any innovation we try to introduce. But that doesn’t excuse us from trying anyway. That doesn’t give us license to give up on today’s children, even if it keeps tomorrow’s children safe.

And of course this isn’t what Jim Gee wants to do, anyway. But the Jim Gees of the world who urge us to work at the margins live in symbiosis with the Jenna McWilliamses of the world who believe we must also work from the center, where–ironically–the most marginalized kids in education commonly reside. If I can’t innovate as much as I’d like from the center, maybe I can’t help tomorrow’s marginalized kids as much as I’d like either. And Jim Gee can’t help today’s marginalized kids as much as he’d probably like from the edges. So we need each other, if for nothing else than to assuage our guilty consciences for being unable to do more of what we know must be done.

I should probably also note that Jim Gee is one of my absolute all-time heroes, so I hope he’s not mad at me for this post.

This video also stars Daniel Schwartz, who I believe is one of the smartest guys thinking about assessment and learning these days. I had the great luck to attend an assessment working group with him and a big crew of assessment-focused researchers, and I was amazed and blown away by just about everything he said.

In a recent publication, Choice-Based Assessments in a Digital Age (.pdf), Schwartz and his co-author Dylan Arena make this argument:

Educational assessment is a normative endeavor: The ideal assessment both reflects and reinforces educational goals that society deems valuable. A fundamental goal of education is to prepare students to act independently in the world—which is to say, to make good choices. It follows that an ideal assessment would measure how well we are preparing students to do so.

I can’t remember when I’ve agreed more emphatically with the introductory sentence of a scholarly article about education.

Here’s the video, which is well worth a watch.


Posted in academia, assessment, education, Jim Gee, journalism, learning sciences, public schools, schools, teaching, technologies, video games | Leave a Comment »

short-sighted and socially destructive: Ning to cut free services

Posted by Jenna McWilliams on April 15, 2010

Lord knows I’m not a huge fan of Ning, the social networking tool that allows users to create and manage online networks. I find the design bulky and fairly counterintuitive; modifying a network to meet your group’s needs is extremely challenging; and Ning has made it difficult or impossible for users to control, modify, or move network content. Despite the popularity of Ning’s free, ad-supported social networks among K-16 educators, the ads that go along with the free service have tended toward the racy or age-inappropriate.

But given the Ning trifecta–it’s free, getting students signed up is fast and fairly easy, and lots of teachers are using it–I’ve been working with Ning alongside researchers and teachers for the last two years. So the recent news that Ning will be switching to paid-only membership is obnoxious for two reasons.

The first reason is the obvious: I don’t want to pay–and I don’t want the teachers who use Ning to have to pay, either. One of the neat things about Ning is the ability to build multiple social networks–maybe a separate one for each class, or a new one each semester, or even multiple networks for a single group of students. In the future, each network will require a monthly payment, which means that most teachers who do decide to pay will stick to a much smaller number of networks. This means they’ll probably erase content and delete members, starting fresh each time. The enormous professional development potential of having persistent networks filled with content, conversations, and student work suddenly disappears.

Which brings me to my second point: anyone who’s currently using Ning’s free services will be forced either to pay for an upgrade or to move all of their material off of Ning. This is tough for teachers who have layers upon layers of material posted on various Ning sites, and it’s incredibly problematic for any researcher who’s working with Ning’s free resources. If we decide to leave Ning for another free network, we’ll have to figure out some systematic way of capturing every single thing that currently lives on Ning, lest it disappear forever.

Ning’s decision to phase out free services amounts to a paywall, pure and simple. Instead of putting limits on information, as paywalls for news services do, this paywall puts limits on participation. In many ways, this is potentially far worse, far more disruptive and destructive, far more short-sighted than any information paywall could be.

If Ning were smart, it would think a little more creatively about payment structures. What about offering unlimited access to all members of a school district, for a set fee paid at the district level? What about offering an educator account that provides unlimited network creation for a set (and much lower) fee? What about improving the services Ning provides to make it feel like you’d be getting what you paid for?

More information on Ning’s decision to go paid-only will be released tomorrow. For now, I’m working up a list of free social networking tools for use by educators. If you have any suggestions, I’d love to hear them.

Update, 4/15/10, 6:48 p.m.: Never one to sit on the sidelines in the first place, Alec Couros has spearheaded a gigantic, collaborative googledoc called “Alternatives to Ning.” As of this update, the doc keeps crashing because of the number of collaborators trying to help build this thing (the last time I got into it, I was one of 303 collaborators), so if it doesn’t load right away, keep trying.

Posted in education, lame, schools, social media, teaching, technologies | Leave a Comment »

why I am not a constructionist

Posted by Jenna McWilliams on April 6, 2010

and why you should expect more from my model for integrating technologies into the classroom

I recently showed some colleagues my developing model for integrating computational technologies into the classroom. “This is,” one person said, “a really nice constructionist model for classroom instruction.”

Which is great, except that I’m not a constructionist.

Now, don’t be offended. I’ll tell you what I told my colleague when she asked, appalled, “What’s wrong with constructionists?”

Nothing’s wrong with constructionists. I just don’t happen to be one.

a brief history lesson
Let’s start with some history. Constructionism came into being because two of the greatest minds we’ve had so far converged when Jean Piaget, known far and wide as the father of constructivism, invited Seymour Papert to come work in his lab. Papert later took a faculty position at MIT, where he developed the Logo programming language, wrote Mindstorms, one of his canonical books, and laid the groundwork for the development of constructionism.

Here’s a key distinction to memorize: While constructivism is a theory of learning, constructionism is both a learning theory and an approach to instruction. Here’s how the kickass constructionist researcher Yasmin Kafai describes the relationship between these terms:

Constructionism is not constructivism, as Piaget never intended his theory of knowledge development to be a theory of learning and teaching…. Constructionism always has acknowledged its allegiance to Piagetian theory but it is not identical to it. Where constructivism places a primacy on the development of individual and isolated knowledge structures, constructionism focuses on the connected nature of knowledge with its personal and social dimensions.

Papert himself said this:

Constructionism–the N Word as opposed to the V word–shares constructivism’s connotation of learning as building knowledge structures irrespective of the circumstances of learning. It then adds the idea that this happens especially felicitously in a context where the learner is consciously engaged in constructing a public entity, whether it’s a sand castle on the beach or a theory of the universe.

Examples of constructionist learning environments include the well known and widespread Computer Clubhouse program, One Laptop Per Child, and learning environments built around visual programming tools like Scratch and NetLogo.

why I am not a constructionist
Constructionism is really neat, and some of the academics I respect most–Kafai, Kylie Peppler, Mitch Resnick, Idit Harel, for example–conduct their work from a constructionist perspective. A couple of things I like about the constructionist approach are its emphasis on “objects to think with” and some theorists’ work differentiating between wonderful ideas and powerful ideas.

Constructionist instruction is a highly effective approach for lots of kids, most notably for kids who haven’t experienced success in traditional classroom settings. But as Melissa Gresalfi has said more than once, people gravitate to various learning theories when they decide that other theories can’t explain what they’re seeing. Constructionism focuses on how a learning community can support individual learners’ development, which places the community secondary to the individual. I tend to wonder more about how contexts support knowledge production and how contexts lead to judgments about what counts as knowledge and success. If it’s true, for example, that marginalized kids are more likely to find success with tools like Scratch, then what matters to me is not what Scratch offers those kids that traditional schooling doesn’t, but what types of knowledge production the constructionist context offers that aren’t offered by the other learning contexts that fill up those kids’ days. I don’t care so much about what kids know about programming; I’m far more interested in the sorts of participation structures made possible by Scratch and other constructionist tools.

If you were wondering, I’m into situativity theory and its creepy younger cousin, Actor-Network Theory. So what I’m thinking about now is what sorts of participation structures might be developed around a context that looks very much like the diagram below. Specifically, I’m wondering: What sorts of participation structures can support increasingly knowledgeable participation in a range of contexts that integrate computation as a key area of expertise?


why I’m mentioning this now
My thinking about this is informed of late by what I consider to be some highly problematic thinking about equity issues in technology in education. A 2001 literature review by Volman & van Eck focuses on how we might just rearrange the classroom some to make girls feel more comfortable with computers. For example, they write that

to date, research has not produced unequivocal recommendations for classroom practice. Some researchers found that girls do better in small groups of girls; some researchers argue in favor of such groups on theoretical grounds (Siann & MacLeod, 1986, Scotland; Kirkup, 1992, United Kingdom). Others show that girls perform better in mixed groups (Kutnick, 1997, United Kingdom) or that girls benefit more than boys do from working together (Littleton et al., 1992, United Kingdom). Other student characteristics such as competence and experience in performing the task seem in any case to be equally important, both in primary and secondary education. An explanation for girls’ achieving better results in mixed pairs is that they have more opportunity to spend time with the often-more-experienced boys. The question, however, is whether this solution has negative side effects. It may all too easily confirm the image that girls are less competent when it comes to computers. Another solution may be that working in segregated groups compensates for the differences in experience. Tolmie and Howe (1993, Scotland, secondary education) argue strongly for working in small mixed groups because of the differences they identified between the approaches taken by groups of girls and groups of boys in solving a problem.

For the love of pete, the issue is not whether girls feel more comfortable working in small groups or mixed groups or pairs or individually; the issue is why in the hell we have learning environments that allow for these permutations to matter to girls’ access to learning with technologies.

Also, just for the record, the gender-equity issue in video gaming cannot be resolved just by building “girl versions” of video games, no matter what Volman and van Eck believe. They write:

Littleton, Light, Joiner, Messer, and Barnes (1992, United Kingdom, primary education) found that gender differences in performance in a computer game disappeared when the masculine stereotyping in that game was reduced. In a follow-up study they investigated the performance of girls and boys in two variations of an adventure game (Joiner, Messer, Littleton, & Light, 1996). Two versions of the game were developed, a “male” version with pirates and a “female” version with princesses. The structure of both versions of the game was identical. Girls scored lower than boys in both versions of the game, even when computer experience was taken into account; but girls scored higher in the version they preferred, usually that with the princesses.

I don’t think that the researchers cited by Volman and van Eck intended their work to be interpreted this way, but this is exactly the trouble you get into when you start talking about computational technologies in education: People think the tool, or a slight modification of it, is the breakthrough, when the breakthrough is in how we shift instructional approaches through integration of the tool–along with a set of technical skills and practices–into classroom instruction.

Looking at my developing model, I can see that I’m in danger of leading people to the same interpretation: Just put this stuff in your classroom and everything else will work itself out. This is what happens when you frontload the tool when you really mean to frontload the practices surrounding that tool that matter to you.

This is the next step in the process for me: Thinking about which practices I hope to foster and support through my classroom model and deploying various technologies for that purpose. I’ll keep you posted on what develops.

One last note
I’ve included here a discussion about why I’m not a constructionist along with a discussion of gender equity issues in education, but I don’t at all want anybody to take this as a critique of constructionism. I declare again: Nothing’s wrong with constructionism. I just don’t happen to be a constructionist. Also, I think a lot of really good constructionist researchers have done some really, really good work on gender equity issues in computing, and I’m just thrilled up the wazoo about that and hope they can find ways to convince people to stop misinterpreting constructionism in problematic ways.

References, in case you’re a nerd

Joiner, R., Messer, D., Littleton, K., & Light, P. (1996). Gender, computer experience and computer-based problem solving. Computers and Education, 26(1/2), 179–187.
Kafai, Y. B. (2006). Constructionism. In R. K. Sawyer (Ed.), The Cambridge Handbook of the Learning Sciences (pp. 35-46). New York: Cambridge University Press.
Kirkup, G. (1992). The social construction of computers. In G. Kirkup and L. Keller (Eds.), Inventing women: Science, gender and technology (pp. 267–281). Oxford: Polity Press.
Kutnick, P. (1997). Computer-based problem-solving: The effects of group composition and social skills on a cognitive, joint action task. Educational Research, 39(2), 135–147.
Littleton, K., Light, P., Joiner, R., Messer, D., & Barnes, P. (1992). Pairing and gender effects in computer based learning. European Journal of Psychology of Education, 7(4), 1–14.
Papert, S., & Harel, I. (1991). Situating Constructionism. In Papert & Harel, Constructionism. Ablex Publishing Corporation. Available online at http://www.papert.org/articles/SituatingConstructionism.html.
Siann, G., & MacLeod, H. (1986). Computers and children of primary school age: Issues and questions. British Journal of Educational Technology, 2, 133–144.
Tolmie, A., & Howe, C. (1993). Gender and dialogue in secondary school physics. Gender and Education, 5(2), 191–210.
Volman, M., & van Eck, E. (2001). Gender equity and information technology in education: The second decade. Review of Educational Research, 71(4), 613–634. doi:10.3102/00346543071004613

my model, in case you were wondering

Posted in computational literacy, education, feminism, gender politics, graduate school, Joshua Danish, schools, teaching, technologies | 15 Comments »

SparkCBC takes on the issue of computational literacy

Posted by Jenna McWilliams on March 23, 2010

As I’ve explained in previous blog posts, I’m a fan of incorporating computational literacy education into the formal classroom–across curricula and content areas. So I was thrilled to see Spark Radio will be tackling the issue of computational literacy in an upcoming broadcast. Spark co-producer Dan Misener explains, using the user-friendly iPad as an example:

(T)he iPad (and its little brothers, the iPhone and iPod touch) abstract much of the computer away. Apple watcher and former Spark guest John Gruber says it’s a bit like the automatic transmission in a car:

Used to be that to drive a car, you, the driver, needed to operate a clutch pedal and gear shifter and manually change gears for the transmission as you accelerated and decelerated. Then came the automatic transmission. With an automatic, the transmission is entirely abstracted away. The clutch is gone. To go faster, you just press harder on the gas pedal.

That’s where Apple is taking computing. A car with an automatic transmission still shifts gears; the driver just doesn’t need to know about it. A computer running iPhone OS still has a hierarchical file system; the user just never sees it.

And from the standpoint of the vast majority of computer users, this abstraction can be a good thing. It makes computing simpler, easier, friendlier. Why should I need to understand what’s going on under the hood of my computer if all I want to do is send email to my friends?…

But I wonder, is the same attitude towards computers dangerous? Does oversimplifying technology–removing necessary complexity–have a downside? By making technology simple, easy, and convenient, do we risk a generation of people who can’t tell the difference between this blog post and the Facebook login page?

As I ponder this, I’m a bit torn. The technology populist in me wants to say, “Of course, make computers easy! What’s wrong with making computers as simple and friendly as possible?”

But another (geekier, snobbier) part of me wants to say, “Yes, computers are hard, and that can be a good thing. I don’t want to use technology designed for the lowest common denominator.”

The question this Spark show hopes to tackle is this:

If I don’t understand how to use my computer, whose fault is it? Is it my fault for not wanting to read manuals or spend time learning a new technology? Or is it the fault of the designers and engineers who build the technology we use?

You can weigh in on the discussion at the Spark blog, then listen in live or download the podcast of the show; information on broadcast times and podcast download is available here.

Here’s my take on this issue, which I’ve also posted as a comment on the Spark blog:

This is a thorny issue, because easier interfaces help to drop the barriers to participation, but on the other hand this shift means we give up some degree of empowerment to make decisions about which sorts of interfaces, and by extension which sorts of technologies, work best for our specific needs. Indeed, the crafting and marketing of products like the iPad is deeply, deeply political, and the embedded politics that lead to the tools we use are not readily evident to those without a degree of computational literacy. And enormous swaths of the computer-using public are lacking in this area.

On the other hand, computational literacy is very much like other forms of literacy: reading, writing, mathematical literacy, and so on. We don’t blame the math-illiterate learner who has never been exposed to mathematics education, or whose math education was lacking in significant ways. That’s exactly the situation with computational literacy education: It’s nearly nonexistent in formal classrooms, and it has become the nearly exclusive domain of those with the luxury of access to computational technologies outside of school. In some ways, then, perhaps we get the technologies we deserve.

Posted in Apple, computational literacy, education, literacy, participatory culture, politics, reading, technologies, writing | 3 Comments »

devising a model for technology in education: my version of writer’s block

Posted by Jenna McWilliams on February 2, 2010



I believe the following principles to hold true:

  • Human goals are mediated by, and thenceforth only achieved through, the widespread adoption and use of new technologies.*
  • Human purposes for adopting and making use of new technologies are often highly individualized (though nearly always aligned with an affinity group, even if that group is not explicitly named and even if that group is not comprised of other members of the learning community).
  • While no educational researcher is qualified to articulate achievable goals for another human, the researcher is ethically obligated to support learners in articulating, and achieving, ethical educational goals.
  • The efficacy and success of new technologies can be measured through multiple lenses, among which only one is the achievement of mainstream educational goals as articulated and assessed through traditional, often standardized, measurement tools.

If you (a) know me, (b) follow me on Twitter or a similar social network, or (c) read my blog, you know that being at a loss for something to say just doesn’t happen to me. (On the one hand, this makes me perfectly suited to social media, blogging, and academia; on the other hand, it means I’ll mouth off about the social revolution in nearly any social situation.)

But for weeks now, I’ve been trying to devise a model to represent the role of computational technologies in education. And for weeks, I’ve been failing miserably. Here’s the closest I’ve come:

As you can see, this model is incomplete. I was in the middle of drawing an arrow from that word “technology” to something else when I realized that this model would never, ever do. So I tried to approach modelling from other perspectives. I tried backing my way in by thinking of technologies metaphorically; I tried presenting technology integration in the form of a decision tree. Which is fine, except that these don’t really work as models.

And I have to come up with a model. I do. Though I don’t often mention this, I’m not actually only a blogger. In real life, I’m a graduate student in Indiana University’s Learning Sciences Program. Because I believe in the value of public intellectual discourse, I’ve chosen to present as much of my coursework as possible on my blog or through other public, persistent and searchable communications platforms.

I will, at some future point, discuss the challenges and benefits of living up to this decision. For now, you guys, I just need to come up with a goddam model that I can live with.

I tried thinking of technologies as sleeping policemen; or, in other words, as objects that mediate our thoughts and actions and that have both intended and unintended consequences. This was a reaction to a set of readings including a chunk of Bonnie Nardi’s and Vicki O’Day’s 1999 book, Information Ecology: Using Technology with Heart; a Burbules & Callister piece from the same year, “The Risky Promises and Promising Risks of New Information Technologies for Education”; and Stahl & Hesse’s 2009 piece, “Practice perspectives in CSCL.” The theme of these writings was: We need to problematize dominant narratives about the role of technologies in education. Burbules & Callister categorize these narratives as follows:

  • computer as panacea (“New technologies will solve everything!”)
  • computer as [neutral] tool (“Technologies have no purpose built into them, and can be used for good or evil!”)
  • computer as [nonneutral] tool (the authors call this “(a) slightly more sophisticated variant” on the “computer as tool perspective”)
  • balanced approach to computer technologies (neither panacea nor tool, but resources with intended and unintended social consequences)

Nardi & O’Day, who basically agree with the categories identified above, argue for the more nuanced approach that they believe emerges when we think of technologies as ecologies, a term which they explain is

intended to evoke an image of biological ecologies with their complex dynamics and diverse species and opportunistic niches for growth. Our purpose in using the ecology metaphor is to foster thought and discussion, to stimulate conversations for action…. [T]he ecology metaphor provides a distinctive, powerful set of organizing properties around which to have conversations. The ecological metaphor suggests several key properties of many environments in which technology is used.

Which is all fine and dandy, except the argument that precedes and follows the above quote is so tainted by mistrust and despair over the effects of new technologies that it’s hard to imagine that even Nardi and O’Day themselves can believe they’ve presented a balanced analysis. Reading their description of techno-ecologies is kind of like reading a book about prairie dog ecologies prefaced by a sentence like “Jesus Christ I hate those freaking prairie dogs.”

So the description of technologies as sleeping policemen was an effort to step back and describe, with as much detachment as possible for an admitted technorevolutionary like me, the role of technologies in mediating human activity.

But the metaphor doesn’t really have much by way of practical use. What am I going to do, take that model into the classroom and say, well, here’s why your kids aren’t using blogs–as you can see (::points to picture of speed bump::), kids are just driving around the speed bump instead of slowing down….?

This became clear as I jumped into a consideration of so-called “intelligent tutors,” which I described briefly in a previous post. Or, well, the speed bump metaphor might work, but only if we can come up with some agreed-upon end point and also set agreed-upon rules like speed limits and driving routes. But the problem is that even though we might think we all agree on the goals of education, there’s actually tons of discord, both spoken and unspoken. We can’t even all agree that what’s sitting in the middle of that road is actually a speedbump and not, for example, a stop sign. Or a launch ramp.

The Cognitive Tutors described by Kenneth Koedinger and Albert Corbett are a nice example of this. Researchers who embrace these types of learning tools see them as gateways to content mastery. But if you believe, as I do, that the content students are required to master is too often slanted in favor of members of dominant groups and against the typically underprivileged, underserved, and underheard members of our society, then Cognitive Tutors start to look less like gateways and more like gatekeepers. Even the tutoring tools that lead to demonstrable gains on standard assessments, well…ya gotta believe in the tests in order to believe in the gains, right?

So I’m back to this:

A “model,” explains Wikipedia,

is a simplified abstract view of the complex reality. A scientific model represents empirical objects, phenomena, and physical processes in a logical way. Attempts to formalize the principles of the empirical sciences use an interpretation to model reality, in the same way logicians axiomatize the principles of logic. The aim of these attempts is to construct a formal system for which reality is the only interpretation. The world is an interpretation (or model) of these sciences, only insofar as these sciences are true….

Modelling refers to the process of generating a model as a conceptual representation of some phenomenon. Typically a model will refer only to some aspects of the phenomenon in question, and two models of the same phenomenon may be essentially different, that is in which the difference is more than just a simple renaming. This may be due to differing requirements of the model’s end users or to conceptual or aesthetic differences by the modellers and decisions made during the modelling process. Aesthetic considerations that may influence the structure of a model might be the modeller’s preference for a reduced ontology, preferences regarding probabilistic models vis-a-vis deterministic ones, discrete vs continuous time etc. For this reason users of a model need to understand the model’s original purpose and the assumptions of its validity.

I’m back at the original, simple, incomplete model because I’m not ready to stand in defense of any truth claims that a more complete model might make. Even this incomplete version, though, helps me to start articulating the characteristics of any model representing the role of computational technologies in education. I believe the following principles to hold true:

  • Human goals are mediated by, and thenceforth only achieved through, the widespread adoption and use of new technologies.
  • Human purposes for adopting and making use of new technologies are often highly individualized (though nearly always aligned with an affinity group, even if that group is not explicitly named and even if that group is not comprised of other members of the learning community).
  • While no educational researcher is qualified to articulate achievable goals for another human, the researcher is ethically obligated to support learners in articulating, and achieving, ethical educational goals.
  • The efficacy and success of new technologies can be measured through multiple lenses, among which only one is the achievement of mainstream educational goals as articulated and assessed through traditional, often standardized, measurement tools.

Ok, so what do you think?

*Note: I’m kinda rethinking this one. It reads a little too deterministic to me now, a mere hour or so after I wrote it.

Posted in academia, education, graduate school, lame, obnoxious, patent pending, public schools, schools, social media, social revolution, teaching, technologies | Leave a Comment »

using educational technology in support of the status quo

Posted by Jenna McWilliams on February 1, 2010

or, don’t mind me, I’m having a pessimistic day

Schools, god love ’em, are abysmally bad at embracing new technologies. But they’re not, you know, equally bad at embracing all technologies. Some technologies get taken up right away.

The mechanical pencil: Immediate integration. (And Algebra teachers everywhere gave a holler of joy.) The word processor: once it got cheap enough, it got glommed onto by administrators. The increasing popularity of its successor, the desktop computer, was inversely proportional to its cost. Suddenly, there emerged a need for teachers to provide students with basic computing skills (and Mr. Towers, my 6th grade Computer teacher, gave a holler of joy). This included basic proficiency with word processing programs, and some computer teachers squeaked in some instruction in Logo or similar programming languages. Dry erase boards. Printers and copiers.

These are, of course, technologies that do not challenge the established norms and practices of the educational system–they are, as Joshua Danish recently put it, technologies that help us to do more efficiently what we were already doing.

Which explains, in large part, how new technologies are so often twisted all out of context, the meaning wrung out of them, when they’re brought into schools. In their natural habitats, discussion forums can be some of the most rollicking, crazy, intellectually challenging, capricious and unpredictable spaces for intelligent discourse, places where people get so excited about discussion topics that they’re willing to fight dirty if that’s what it takes to win an argument. In schools, discussion forums are often used as IRE (initiate-respond-evaluate) spaces, where students respond to simple questions and, to fulfill class participation requirements, post three comments that amount to “I agree.” YouTube operates on the premise that open conversation, even at its most inane or vicious, is an essential component of an engaged, broad community; SchoolTube, its educational doppelganger, offers a limited number of canned “comments” in a dropdown menu, with no apparent option for adding a personal note of any sort. (There is value to this approach. In the wild, when you encounter a flame war in a discussion forum, you can close a tab and go elsewhere. If we require student use of an online network, then we’re also responsible for protecting learners from forced exposure to trolls.)

I struggle over how to feel about new educational technologies that demonstrate gains in learning. Teachable agents, intelligent tutors–some of these technologies have proven to be quite effective in helping kids master difficult content. But to do this, these tools work within the established constructs of the institution. Here’s how Kenneth Koedinger and Albert Corbett describe the premise behind “cognitive tutors,” computer programs designed to aid instruction:

Cognitive Tutors support learning by doing, an essential aspect of human tutoring…. Cognitive Tutors accomplish two of the principal tasks characteristic of human tutoring: (1) monitoring the student’s performance and providing context-specific instruction just when the individual student needs it, and (2) monitoring the student’s learning and selecting problem-solving activities involving knowledge goals just within the individual student’s reach.

Which is a fine and laudable set of goals, except for the fact that these Cognitive Tutors monitor performance and learning on school-based, decontextualized activities, offering tutoring on math problems like, for example, how to solve the equation 3(2x+5)=9.

This sort of technology works beautifully in support of overloaded teachers who can’t provide individual instruction for students. In this respect, it’s a useful technology, and one that I’m sure leads to gains on, for example, standardized tests. Maybe this sort of technology even works for the kid who’s a fantastic scorekeeper at the bowling alley but flounders in math class. But it seems to me that what this technology teaches, more than anything else, is how to “do school”–how to perform well on bizarre, decontextualized math problems–without actually making explicit why doing well on bizarre, decontextualized math problems is valued.
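To make the monitoring idea concrete, here’s a toy sketch of step-by-step checking for the equation 3(2x+5)=9. This is my own illustration of the general “model tracing” concept, not Carnegie Learning’s actual implementation: it simply tests whether each equation a student writes still has the same solution as the original, and flags the first step where it doesn’t, which is the moment a tutor would intervene with context-specific instruction.

```python
from fractions import Fraction

# Toy model-tracing sketch; real Cognitive Tutors use production-rule
# models of student reasoning, not just solution checking.
# Represent each side of a linear equation as (coefficient, constant),
# i.e. coeff*x + const. 3(2x+5)=9 distributes to (6, 15) = (0, 9).

def solve(lhs, rhs):
    """Return the root of lhs == rhs, where each side is (coeff, const)."""
    (a, b), (c, d) = lhs, rhs
    # a*x + b = c*x + d  =>  x = (d - b) / (a - c)
    return Fraction(d - b, a - c)

def trace_steps(start_lhs, start_rhs, steps):
    """Flag the first student step whose equation no longer has the
    original root -- the point where a tutor would offer a hint."""
    target = solve(start_lhs, start_rhs)
    for i, (lhs, rhs) in enumerate(steps, 1):
        if solve(lhs, rhs) != target:
            return f"step {i} diverges (hint needed)"
    return f"all steps consistent; x = {target}"

# Correct path: 3(2x+5)=9 -> 6x+15=9 -> 6x=-6 -> x=-1
good = [((6, 15), (0, 9)), ((6, 0), (0, -6)), ((1, 0), (0, -1))]
# Classic distribution bug: the student writes 6x+5=9
buggy = [((6, 5), (0, 9))]

print(trace_steps((6, 15), (0, 9), good))   # all steps consistent; x = -1
print(trace_steps((6, 15), (0, 9), buggy))  # step 1 diverges (hint needed)
```

Even this trivial checker shows why the approach appeals to overloaded teachers: the feedback arrives exactly at the step where the student goes wrong, without a human watching over the shoulder.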

Cognitive Tutors, in other words, don’t really extend much of a challenge to the status quo; they just help schools do what they were already doing, a little more effectively. In their recent book, Rethinking Education in the Age of Technology: The Digital Revolution and Schooling in America, Allan Collins and Richard Halverson write about schools’ three-pronged approach to tamping down innovative technologies: condemn the technology, co-opt the technology, and marginalize the technology. According to Collins and Halverson, any innovative technology that gets taken up in schools must first have the innovation squeezed out of it. I wonder if intelligent tutors aren’t just another example in support of their skepticism.

Sigh. Don’t mind me. I’m just having a pessimistic day. Here’s a diagram. Click on it to see a larger version. You can also see a .pdf of the diagram here.

Posted in education, public schools, schools, teaching, technologies | Leave a Comment »

new technologies bore the crap out of me.

Posted by Jenna McWilliams on January 24, 2010

Despite what you may have heard, I’m not really all that into new technologies.

Typically, I find out about new technologies long after they’re already old news. This is a constant source of shame for me. (‘Hey,’ I said in late 2009, ‘this cloud computing thing sounds interesting. What is it?‘) As much as I would like to join the ranks of early adopters, I simply lack the constitution. (‘Now, what’s this DROID thing I’ve been hearing so much about this year? Oh, it’s been around since 2007? Well, who makes the Droid? Oh, it’s a mobile operating system and not actually a phone? Can you tell me again what a mobile operating system is?’) My buddy Steve, who likes to find out about new technologies when they’re still in prototype form, regularly subjects me to conversations I don’t really understand about technologies that don’t make sense to me. (Here I would insert a sample conversation if any single representative discussion had made enough sense to me to stick in my memory.)

Technologies bore me. I don’t care about 3-D printers or 3-D TVs. I’m not interested in transparent computer screens. I don’t want to know how my analog watch was made, and I don’t care how light bulbs–even the new, energy-efficient ones–are manufactured.

Though I don’t care about how things are made, I am interested in finding out how things work. This is a subtle but important distinction. I want to learn how lean, mean websites are built, and I want to build some of my own, even though I have absolutely no idea how my computer is able to translate hypertext markup language (HTML) into artifacts that humans can interpret. I don’t know what a “3G network” is or how my new Droid phone uses it to give me constant access to its features, but I do want to know how to set up my phone so I can quickly access contact information for the people I communicate with the most. I would also like to know how to set up my calendar for weekly views instead of daily views.

It’s not the technology that interests me, but its uses. And as long as I’m thinking about uses for a technology, I might as well think about how to manipulate its features to support practices that meet my needs.

Clay Shirky, despite his recent unfortunate foray into gender politics, is actually pretty smart when he’s talking about things he’s qualified to discuss. In his 2008 book Here Comes Everybody, he wrote that “communications tools don’t get socially interesting until they get technologically boring.” And he’s absolutely correct: For socially interesting things to happen, widespread adoption of a technology is the first requirement, complemented by widespread use of the technology. The automobile has led to a reshaping of our roads, our communities, our attitudes toward travel–has, in short, become socially interesting–because its novelty has long since worn off as car ownership has inched toward universal. Cellphones have supported uprisings, protests, revolutions, because we’ve had them around long enough to know how to leverage them for purposes for which they were not originally intended.

In general, I’ve made uneasy peace with my apathy toward new technologies, with one caveat: It’s early adopters who get the earliest, if not the most, say in how technologies are taken up as they become more widespread. And early adopters tend to be young, white, male, college-educated, and affluent. Which is fantastic for them but not so great for people whose needs and interests don’t align with the needs and interests of the rich-young-educated-white-guy demographic.

Still, you just can’t do things your body wasn’t meant to. I don’t guess I’ll ever be able to force myself to care about 3G networks, but it’s easy enough to start thinking about the social implications of a tool that’s 3G-network enabled and pocket-sized. Now we’re talking about the possibility of near-limitless access to information and communication: the building blocks for fostering and supporting civic engagement, community participation, and the chance to dance up alongside those early adopters, join them for a while, and modify the music to make a tune that’s easier to dance to.

Now we just need to figure out a way to get everyone dancing. We start by lowering the actual cost of admission (this is one way that early adopters help support the social revolution: They pay $499 so you don’t have to!), then we start pounding down the perceived cost of admission:

  • technological Woody Allen-ism, the reluctance to try a new tool for fear of looking stupid and / or causing irreparable harm to oneself or others;
  • technological Pangloss Syndrome, the perception that the uses built into the tool are the best possible uses for that tool; and 
  • technological Morpheus Syndrome, the sense that uses for a tool have already been predetermined anyway, so even if there might be better uses we might as well just stick with destiny.

And–hey!–I think I just gave myself fodder for my next three blog posts.

Posted in Clay Shirky, patent pending, social revolution, technologies | Leave a Comment »