sleeping alone and starting out early

an occasional blog on culture, education, new media, and the social revolution. soon to be moved from

Archive for the ‘conspiracy theories’ Category

(self-)sabotaged by my email program

Posted by Jenna McWilliams on May 19, 2010

file under: goddammit, everything’s ruined.

I discovered yesterday that my email program’s settings were misconfigured, leading to this result: A subset of the email messages I’ve been sending out were never received by their intended recipients. They were never received by anyone at all. Worse, the emails that I know I sent simply no longer exist anywhere in my email archives, even though I double-archive everything through multiple email accounts.

I don’t know how to even begin to deal with this mess.

Because god knows how these dropped emails have shaped my personal and professional relationships. How many people think I’ve ignored them completely, because they never received the email response to their single request? How many people think of me as basically dependable, except for the handful of times that they were waiting for something that never came? How many people think of me as the kind of friend who usually responds to email?

And this doesn’t even touch on how my misconfigured email program has undermined my work at crafting my email identity. Like most people, I make decisions regularly about when and how to send email based on how I hope to be perceived by others. This is an important aspect of building a professional identity these days, and if you don’t spend time thinking about how your email use colors your colleagues’ perceptions of you, you damn well better start thinking about it.

So that’s down the toilet for me too. I had to reconfigure my settings, which meant that every email I was holding in my inbox as part of my ongoing to-do list has also been sent to the archives. Which means that the hundreds of smaller things I’ve been saving to follow up on when the time’s right–those have disappeared on me as well.

I can hear you techno-skeptics now: That’s what happens when you rely too much on technology. That’s where blind faith leads you. That’s why nothing beats good old face-to-face communication.

Which would be fine, if digital communications tools hadn’t led to an explosion in sheer numbers of personal and professional relationships that need maintaining. There’s simply no way to keep up with those relationships without tools like email. I’ve had days characterized by dozens of email conversations maintained over hundreds of emails. Say what you will about the “richness” of in-person communications as compared to email conversations, but there are times when rich conversations are unnecessary. There are times when shit just needs to get done.

And email can be a fantastic tool for getting shit done, especially when the tool is working as we’ve come to expect it to work. When emails get dropped, though, the tool turns into the exact opposite of a shit-getting-done tool. It becomes a tool that complicates things exponentially.

For me, the lesson here is not that I need to rely less on digital communication tools, and it’s not that I need to approach these tools with a consistent attitude of skepticism. The lesson is that effective use of digital communication tools must be supported with a critical computational literacy approach to those tools.

Because I’m the one who misconfigured my email program in the first place. I trusted the program to configure itself automatically instead of using the manual setup feature. Then, when it first became clear several weeks ago that some of my emails were not being received, I assumed the fault lay with others’ programs. I even wondered if someone was hacking into their email accounts, because I trusted my email program.

Even now, I think but am not positive that I’ve resolved the issue. There are a couple of reasons for this. The first is that I’ve never spent a lot of time learning about the language of these sorts of things. IMAP, POP, SMTP–none of those letter groupings mean very much to me (though they certainly mean more to me now than they did before I spent a day repairing my broken email program). But the email programs we use don’t really bother trying to explain those terms to us. They figure it’s information we don’t need to know, since we can trust the programs to know how to set themselves up.
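For anyone else staring down these settings for the first time: IMAP and POP are the two protocols a mail client can use to fetch messages from a server (IMAP leaves mail on the server; POP typically downloads and removes it), and SMTP is the protocol used to send mail out. A manual setup screen generally asks for something like the following–the server names and port numbers here are illustrative placeholders, not settings from any real provider:

```
Incoming mail (IMAP):  imap.example.com   port 993   SSL/TLS
    or       (POP3):   pop.example.com    port 995   SSL/TLS
Outgoing mail (SMTP):  smtp.example.com   port 587   STARTTLS, authentication required
```

These are exactly the values an auto-configure feature is guessing at on your behalf; when its guess is wrong, you can end up with silently dropped mail like mine.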

Trusting auto-configuration is one of our biggest mistakes. I can’t do much to repair the damage I did to myself by allowing auto-configure to misconfigure my email program, but I can commit to never again allowing auto-configure tools to override me. From here on out, I’m committing to always choosing the manual setup option for every new tool or program I use–not because I believe this will lead to smooth sailing (it won’t), but because I need to learn how to manage the tools I use in order to maintain control over how, when, where, and why I use these tools to interact with others.

Twenty-six years ago, Apple told us it would help us stand up against an Orwellian future. Somehow, in the intervening years, Apple stopped being the solution and started being part of the problem. In fact, if we’ve learned anything at all, it’s that no major technology-based corporation exists to help us think more critically about the tools we use. This is why it’s up to us to make smart decisions. It’s up to us to be the chainsaw–or, if you wish, the flying hammer–we wish to see in the world.

Posted in Apple, computational literacy, conspiracy theories, obnoxious | 2 Comments »

on conceptual models, native competence, and (not) learning to play rugby

Posted by Jenna McWilliams on February 5, 2010

I had the deeply unsettling experience recently of feeling like the stupidest person in the room. This type of experience is (both fortunately and unfortunately) fairly rare for the typical educational researcher, though it’s far more common for members of the learning communities researchers study. For this reason, I believe it’s incredibly important for researchers to examine the contexts that make them feel stupid, if only so they can better understand the groups they’re studying.

The context was a graduate-level class. I’m one of just under a dozen students; the class, “Computational Technologies in Educational Ecosystems,” draws students from my university’s school of education and from the Informatics Department. A key assignment in the course is the design of, reflection on, and revision of a model that represents our take on the role of technologies in learning environments.

I have previously noted my despair over my apparent inability to complete this assignment in a meaningful way. The most progress I’ve been able to make was in presenting an unfinished model that draws the vaguest possible connection between humans and technology:

Then in class this week we spent a large chunk of time working with a representation developed by the instructor, the fanTASTIC Joshua Danish. His representation, which is also available on his website, is intended to point to key features of the week’s readings on cognitive tutors, Teachable Agents, and computer-aided instruction. Here’s the representation:

This representation literally carries no meaning for me. I mean, I get the basic idea behind it, but only because I did the assigned reading and get the basic themes and goals of computer-aided instruction. I get that research in this area focuses on domain-oriented issues, learning theories, and the role of these tools in classroom environments; but I do not understand how the above representation articulates this focus.

Yet I sat there in class and listened to my classmates interpreting the representation. They understood it; they could ‘read’ it; they could point to areas of weakness and suggest corrections to improve it.

The experience reminded me of the time I tried to learn rugby by joining an intramural team. After 20 minutes of basic instruction, we all got thrown into a game and the first time I got the ball, I apparently did something wrong and the team captain tackled me hard, hollering at me as she pulled me down. I never did find out what I’d done wrong. And actually, I didn’t much care. That was the last time I tried rugby.

Of course, Joshua’s never tackled anybody. He’s a fantastic teacher–one of the best I’ve ever had–who’s deeply invested in fostering an authentic learning community and supporting his students in their growth. But I sat there, watching my classmates speak a language I didn’t understand, getting more and more frustrated, and I absolutely felt like walking right off the field and never coming back.

At least two important lessons are nested in this experience, and one is linked to the other.

1. There are kids who feel this way all the time, every day. It’s easy for educational researchers to forget this point, mainly because most (though certainly not all) of us have experienced raging success in our own educational experiences. We got A’s in everything. Or we found a niche within a certain content area and pursued it with a fair amount of success. Or we figured out how to game the system, so that even if we didn’t get A’s in everything, we still felt somehow smarter than everyone else. Or if we had bad experiences with school early on, we still came to think of ourselves as smart, or at least smart enough to deserve advanced study in education.

So maybe we know in theory that schools are stacked against some kids, that the entire education system is designed on the premise that some kids will always be labeled the failures, the losers, the learning disabled, the stupid. (If it weren’t for the stupid kids, after all, how would we know what an A student is worth?) We know in theory that some kids feel frustrated and lost in school, and that some kids end up feeling like it’s hopeless to even bother trying.

But the fact is that we don’t know how it feels in practice. We can’t know how it feels. And we should never be allowed to forget this.

Even as I was feeling like the stupidest person in the room, I also felt an absolute certainty that this wasn’t my fault. Here, too, my experience diverges from that of many learners in the classrooms we study. I knew that my experience was neither right, nor fair, nor my fault; because of this, I knew to curb my strong initial impulse, which was to throw things, to disrupt the class, to walk out and never return. Instead of following my gut, I saved up all that frustration and spent it on a short burst of research. Which is how I got to my second point:

2. Modeling ability is a disposition, one that is (or is not) cultivated through sustained educational focus. Andrea diSessa calls this disposition “metarepresentational competence”; by this, he means a learner’s ability to:

  • Invent or design new representations.
  • Critique and compare the adequacy of representations and judge their suitability for various tasks.
  • Understand the purposes of representations generally and in particular contexts and understand how representations do the work they do for us.
  • Explain representations (i.e., the ability to articulate their competence with the preceding items).
  • Learn new representations quickly and with minimal instruction.

As Richard Lehrer and Leona Schauble point out, model-based reasoning is not only essential to the established practices within many varied domains, but it’s also a set of proficiencies that can and must be cultivated through focused instruction. In offering their own discussion of metarepresentational competence, they write:

Modeling is much more likely to take root and flourish in students who are building on a history of pressing toward meta-representational competence (diSessa, 2004). Developing, revising, and manipulating representations and inscriptions to figure things out, explain, or persuade others are key to modeling but are not typically nurtured in schooling. Instead, students are often taught conventional representational devices as stand-alone topics at a prescribed point in the curriculum, and may be given little or no sense of the kind of problems that these conventions were invented to address. For example, students might be taught in a formulaic manner how to construct pie graphs, but with no problem or question at hand to motivate the utility of that design over any other, students are unlikely to consider the communicational or persuasive trade-offs of that or any alternative representational form.

Though modeling has its application in most, if not all, content areas, it’s typically emphasized in science and math classes and de-emphasized or ignored in the social sciences and reading and writing instruction. At best, students are told to make a timeline to represent the events of the Civil War (without being shown the affordances and constraints of this sort of representation); or they’re required to make a diorama (or, now, a digital version of a diorama) to prove they understand a key scene in a literary text.

Representations don’t always take the shape of graphs or pictures; in fact, we might say that a musical score or a piece of descriptive writing is a representation in its own right. But as Lehrer and Schauble point out, a thing is only a model insofar as it is treated as such. “One might suggest,” they write, “that a pendulum is a model system for periodic motion. Yet, for most, the pendulum simply swings back and forth and does not stand in for anything other than itself.”

Some disciplines, in fact, actively resist the notion of representation, of language as representational. In a previous iteration, I was a poet and even spent several years’ worth of sustained study in an undergraduate, then a graduate, creative writing program. In the MFA program especially, I was immersed in a sustained discipline-wide effort to divorce language from its representative nature. There was an effort to fight against narrative, against what many writer-types believed was “easy” poetry. This is, as poets are wont to remind us, the basis of Postmodernism.

Though I’m in a Learning Sciences graduate program, I am by no means a scientist, at least in the more general sense of the term. This is even more true if we think of modeling as a key element of scientific practice. For multiple reasons, I do not have what diSessa calls “native competence,” which he explains is a proficiency that develops over time both in and out of school. I could point, for example, to the shame I felt in 6th grade when I was required to build a model of the solar system using styrofoam and coat hangers; my final product, the absolute best work I could have done, was pitiful and humiliating. I remember thinking: everyone else can do this; what’s wrong with me?

Now I know it’s not a problem with me but with a system of schooling, which helps me direct my rage outward but still doesn’t really solve the problem of how I’ll ever build a goddam model that makes any sort of sense to anybody at all.

In case you’re interested in reading the work I reference above, here are the citations:

diSessa, A. A. (2004). Metarepresentation: Native competence and targets for instruction. Cognition and Instruction, 22, 293-331.
Lehrer, R., & Schauble, L. (2006). Cultivating model-based reasoning in science education. In R. K. Sawyer (Ed.), The Cambridge Handbook of the Learning Sciences. Cambridge: Cambridge University Press.

Posted in academia, conspiracy theories, graduate school, Joshua Danish, learning sciences, Ph.D. | Leave a Comment »

rerun: educational philosophies, in up to 20 words

Posted by Jenna McWilliams on September 17, 2009

Because this blog has recently attracted a new learning-leaning public, I thought it might be time to retread an older post I tossed up about 6 months ago about educational philosophies.

Here’s mine:

Schools are not benign. Kids learn to be what they’re labeled relative to other students. Then they bear that out.

What’s yours? The only rule is this: You only get up to 20 words.

Posted in conspiracy theories, education, human rights, public schools, schools, social justice | 4 Comments »

how to think like a good {fill in the blank}

Posted by Jenna McWilliams on August 21, 2009

“The message of Wikipedia,” writes Michael Wesch, “is not ‘trust authority’ but ‘explore authority.’ Authorized information is not beyond discussion on Wikipedia, information is authorized through discussion, and this discussion is available for the world to see and even participate in.”

This comes from Wesch’s January 2009 Academic Commons article, “From Knowledgable to Knowledge-able: Learning in New Media Environments.” The piece is part of an issue dedicated to exactly this problem: How do we teach and learn in a cultural moment where the very definitions of “knowledge,” “teaching,” “learning,” and even “information” are being called into question?

Wesch focuses on the brick-and-mortar university, arguing that despite growing recognition among higher-ed faculty and administration that university teaching and learning desperately need to shift away from their authoritarian roots, a series of physical, social, and cognitive structures stymie this effort at nearly every turn. The physical deterrents are, Wesch argues, the easiest to recognize, and they

are on prominent display in any large “state of the art” classroom. Rows of fixed chairs often face a stage or podium housing a computer from which the professor controls at least 786,432 points of light on a massive screen. Stadium seating, sound-absorbing panels and other acoustic technologies are designed to draw maximum attention to the professor at the front of the room. The “message” of this environment is that to learn is to acquire information, that information is scarce and hard to find (that’s why you have to come to this room to get it), that you should trust authority for good information, and that good information is beyond discussion (that’s why the chairs don’t move or turn toward one another). In short, it tells students to trust authority and follow along.

This is a message that very few faculty could agree with, and in fact some may use the room to launch spirited attacks against it. But the content of such talks are overshadowed by the ongoing hour-to-hour and day-to-day practice of sitting and listening to authority for information and then regurgitating that information on exams.

These are a key feature of the social structures that work against change in higher education: The ongoing pressure to standardize curriculum and use (easily quantified) standardized assessments for accountability purposes. Wesch writes:

When I speak frankly with professors all over the world, I find that, like me, they often find themselves jury-rigging old assessment tools to serve the new needs brought into focus by a world of infinite information. Content is no longer king, but many of our tools have been habitually used to measure content recall. For example, I have often found myself writing content-based multiple-choice questions in a way that I hope will indicate that the student has mastered a new subjectivity or perspective. Of course, the results are not satisfactory. More importantly, these questions ask students to waste great amounts of mental energy memorizing content instead of exercising a new perspective in the pursuit of real and relevant questions.

This is, perhaps, one of the most significant dangers inherent in re-mediating assessment: The risk of re-mediating the wrong aspects of current assessment strategies. Rewriting a multiple-choice test is surely not the answer, but it’s often, and understandably, what innovative and new media-friendly educators do. The results of this effort may not be satisfactory, after all, but they’re better than nothing. And short of overhauling an entire course, it’s often a useful stopgap measure.

And what of overhauling an entire course? Wesch, recognizing that “our courses have to be about something,” argues for a shift away from “subjects” (English, History, Science) and toward “subjectivities”–ways of approaching and thinking about content. One simple way of thinking about this shift is to consider the difference between learning the steps of the scientific method and developing the mindsets of a profession that embraces the scientific method as a useful approach to experimentation.

The “subjectivities” approach is, in fact, the favored approach of many graduate programs. My sister, who is beginning law school this fall, is immersed in a cognitive apprenticeship designed to make her think, act, and speak like a lawyer. As a new doctoral student in Indiana University’s Learning Sciences program, I’m undertaking the same apprenticeship. A series of courses, including IU’s Professional Seminar in the Learning Sciences and Theory and Method in the Learning Sciences, are intended to equip new grad students with the Learning Sciences mindset.

This approach, however, gives rise to a key question: If the “subjectivities” approach is intended to help learners think, act, and speak like a {fill in the blank}, then who decides how a {fill in the blank} is supposed to think, act, and speak?

Jim Gee offers a fascinating critique of “learning to think like a lawyer” in his book Social Linguistics and Literacies. He argues that success in law school is slanted toward people who think, act, and speak like white, middle-class men, explaining that:

[t]o write a competent brief the student has to be able to read the text being briefed in much the same way as the professor does…. Students are not taught these reading skills—the ones necessary to be able to write briefs—directly. Briefs are not, for instance, turned in to the professor; they are written for the students’ own use in class…. One of the basic assumptions of law school is that if students are not told overtly what to do and how to proceed, this will spur them on essentially to teach themselves. Minnis argues that this assumption does not, however, work equally well for everyone. Many students from minority or otherwise non-mainstream backgrounds fail in law school.

(A female friend who recently completed law school agrees with this argument, and struggled mightily with the inequities inherent in her program and inside the field of law in general. I’ve written about her experience here.)

This issue is certainly not limited to law school; it’s a thorny problem in every program designed to help students think like a {fill in the blank}. I understand that this is an issue that IU’s Learning Sciences program has grappled with recently, and I imagine this is the reason that the Professional Seminar in the Learning Sciences, previously a required course, has now been made optional.

What do I know, right? I haven’t even started my first semester in the program yet. But it seems to me that if this issue is worth grappling with (and I believe it is), it’s worth grappling with alongside the program’s apprentices. I’m for making the course mandatory and then using it to expose, discuss, and clarify the very issues that led to the faculty’s decision.

Here we can take a page out of the Wikipedia lesson book. There’s no point in simply trusting authority when the social revolution supports not just questioning, not just opposing, but actually exploring authority. After all, thinking like a good {Learning Scientist} is about much more than embracing a set of approaches to teaching, learning, and knowledge; it’s also about questioning, contesting and exploring the very foundation of the field itself.

Posted in academia, assessment, conspiracy theories, graduate school, Jim Gee, pedagogy, Ph.D., social revolution | 1 Comment »

the harrison bergeron approach to education: how university rankings stunt the social revolution

Posted by Jenna McWilliams on June 14, 2009

I’ve been thinking some lately about the odd and confusing practice of comparing undergraduate and graduate programs at American colleges and universities and producing a set of rankings that show how the programs stack up against each other.

One of the most widely cited set of rankings comes from U.S. News and World Report, which offers rankings in dozens of categories, for both undergraduate and graduate-level programs. Here, the magazine offers its altruistic rationale behind producing these rankings:

A college education is one of the most important—and one of the most costly—investments that prospective students will ever make. For this reason, the editors of U.S. News believe that students and their families should have as much information as possible about the comparative merits of the educational programs at America’s colleges and universities. The data we gather on America’s colleges—and the rankings of the schools that arise from these data—serve as an objective guide by which students and their parents can compare the academic quality of schools. When consumers purchase a car or a computer, this sort of information is readily available. We think it’s even more important that comparative data help people make informed decisions about an education that at some private universities is now approaching a total cost of more than $200,000 including tuition, room, board, required fees, books, transportation, and other personal expenses.

(To access the entire rankings, developed and produced selflessly by U.S. News and World Report, you need to pay. Click here to purchase the Premium Online Edition, which is the only way to get complete rankings, for $14.95.)

The 2009 rankings, released in April, are in the news lately because of questions related to how the magazine gathers data from colleges. As Carl Bialik points out in a recent post at the Wall Street Journal, concerns over how Clemson University set about increasing its rank point to deeper questions about the influence of rankings numbers on university operations. Clemson President James F. Barker reportedly shot for cracking the top 20 (it was ranked 38th nationally in 2001) by targeting all of the ranking indicators used by U.S. News. Bialik writes:

While the truth about Clemson’s approach to the rankings remains elusive, the episode does call into question the utility of a ranking that schools can seek to manipulate. “Colleges have been ‘rank-steering,’ — driving under the influence of the rankings,” Lloyd Thacker, executive director of the Education Conservancy and a critic of rankings, told the Associated Press. “We’ve seen over the years a shifting of resources to influence ranks.”

Setting aside questions of the rankings’ influence on university operations and on recruiting (both for prospective students and prospective faculty), and setting aside too the question of how accurate any numbers collected from university officials themselves could possibly be when the stakes are so high, one wonders how these rankings limit schools’ ability to embrace what appear to be key tenets emerging out of the social revolution. A key feature of some of the most vibrant, energetic, and active online communities is what Clay Shirky labels the “failure for free” model. As I explained in a previous post on the open source movement, the open source software (OSS) movement embraces this tenet:

It’s not, after all, that most open source projects present a legitimate threat to the corporate status quo; that’s not what scares companies like Microsoft. What scares Microsoft is the fact that OSS can afford a thousand GNOME Bulgarias on the way to its Linux. Microsoft certainly can’t afford that rate of failure, but the OSS movement can, because, as Shirky explains,

open systems lower the cost of failure, they do not create biases in favor of predictable but substandard outcomes, and they make it simpler to integrate the contributions of people who contribute only a single idea.

Anyone who’s worked for a company of reasonable size understands the push to keep the risk of failure low. “More people,” Shirky writes, “will remember you saying yes to a failure than saying no to a radical but promising idea.” The higher up the organizational chart you go, the harder the push will be for safe choices. Innovation, it seems, is both a product of and oppositional to the social contract.

The U.S. News rankings, and the methodology behind them, are anathema to the notion of innovation. Indeed, a full 25 percent of the ranking system is based on what U.S. News calls “peer assessment,” which comes from “the top academics we consult–presidents, provosts, and deans of admissions” and, ostensibly at least, allows these consultants

to account for intangibles such as faculty dedication to teaching. Each individual is asked to rate peer schools’ academic programs on a scale from 1 (marginal) to 5 (distinguished). Those who don’t know enough about a school to evaluate it fairly are asked to mark “don’t know.” Synovate, an opinion-research firm based near Chicago, in spring 2008 collected the data; of the 4,272 people who were sent questionnaires, 46 percent responded.

Who becomes “distinguished” in the ivory-tower world of academia? Those who play by the long-established rules of tradition, polity, and networking, of course. The people who most want to effect change at the institutional level are often the most outraged, the most unwilling to play by the rules established by administrators and rankings systems, and therefore the least likely to make it into the top echelons of academia. Indeed, failure is rarely free in the high-stakes world of academics; it’s safer to say no to “a radical but promising idea” and yes to any number of boring but safe ideas.

So what do you do if you are, say, a prospective doctoral student who wants to tear wide the gates of academic institutions? What do you do if you want to go as far in your chosen field as your little legs will carry you, leaving a swath of destruction in your wake? What do you do if you want to bring the social revolution to the ivory tower, instead of waiting for the ivory tower to come to the social revolution?

You rely on the U.S. News rankings, of course. It’s what I did when I made decisions about which schools to apply to (the University of Wisconsin-Madison [ranked 7th overall in graduate education programs, first in Curriculum & Instruction, first in Educational Psychology], the University of Texas-Austin [tied at 7th overall, 10th in Curriculum & Instruction], the University of Washington [12th overall, 9th in Curriculum & Instruction], the University of Michigan [14th overall, 7th in Curriculum & Instruction, and 3rd in Educational Psychology], Indiana University [19th overall, out of the top 10 in individual categories], and Arizona State University [24th overall, out of the top 10 in individual categories]). Interestingly, though, the decision to turn down offers from schools ranked higher than Indiana (go hoosiers) wasn’t all that difficult. I knew that I belonged at IU (go hoosiers) almost before I visited, and a recruitment weekend sealed the deal.

But I had an inside track to information about IU (go hoosiers) via my work with Dan Hickey and Michelle Honeyford. I also happen to be a highly resourceful learner with a relatively clear sense of what I want to study, and with whom, and why. Other learners–especially undergraduates–aren’t necessarily in such a cushy position. They are likely to rely heavily on rankings in making decisions about where to apply and which offer to accept. This not only serves to reify the arbitrary and esoteric rankings system (highest ranked schools get highest ranked students), but also serves to stunt the social revolution in an institution that needs revolution, and desperately.

In this matter, it’s turtles all the way down. High-stakes standardized testing practices and teacher evaluations based on achievement on these tests limit innovation–from teachers as well as from students–at the secondary and, increasingly, the elementary level. But the world that surrounds schools is increasingly ruled by those who know how to innovate, how to say yes to a radical but promising idea, how to work within a “failure for free” model. If schools can’t learn how to embrace the increasingly valued and valuable mindsets afforded by participatory practices, they’re failing to prepare their students for the world at large. The rankings system is just another set of hobbles added to a system of clamps, tethers, and chains already set up to fail the very people it purports to serve.

Posted in assessment, Clay Shirky, conspiracy theories, Dan Hickey, education, graduate school, open source, participatory culture, Ph.D., public schools, social revolution | 1 Comment »