sleeping alone and starting out early

an occasional blog on culture, education, new media, and the social revolution. soon to be moved from http://jennamcwilliams.blogspot.com.

Archive for the ‘pedagogy’ Category

a model for designing the ELA classroom in support of "literacy science"

Posted by Jenna McWilliams on February 7, 2010

You guys, I think I have a model to show you.

This makes me extremely happy, because as I’ve explained (more than once), I’ve struggled mightily with the very concept of modeling. I’ve also struggled with representation. The purpose of designing this model is to show my take on the role of new technologies in educational environments. But articulating a theory, even a working theory, about the role of technologies has been such an insurmountable challenge for me–which technologies? for which students? and for what purpose?

But the elements for building this rudimentary model have been around me for some time. It just took time and reflection for me to be able to put the elements together.

(image description: this is a pen-and-ink drawing of a classroom. In the center of the room, the class is seated, facing each other, around a square of tables; on the table in front of them are combinations of books, notebooks, and electronic equipment. Around the edges of the room are, clockwise from the upper lefthand corner: an easel labeled “representational literacy;” a table with extra pens and extra notebooks; a chalkboard with a variety of marks on it, labeled “design thinking”; book shelves; a workbench labeled “computational literacy”; open space lining most of one wall; a laptop labeled “new media literacy”; a safe filled with bundles of cash; and a laptop cart. Below the picture is the phrase, “If you can’t build it, then you don’t understand it.”)

Inspiration for this model
Design of the periphery: Multiple intelligences schools. A few years ago, I read the 25th-anniversary edition of Howard Gardner’s Multiple Intelligences. Throughout the book, Gardner describes a variety of approaches to integrating his theory of multiple intelligences into learning environments, and one description–of the Key Learning Community in Indianapolis–has stuck with me. In this school, students work in “pods” that represent each type of intelligence outlined by Gardner; a founding principle of this school, he explains, “is the conviction that each child should have his or her multiple intelligences stimulated each day. Thus, every student at the school participates regularly in the activities of computing, music, and bodily-kinesthetics, in addition to mastering theme-centered curricula that embody standard literacies and subject matter.”

You don’t have to agree with this approach to appreciate its effort at offering a range of avenues for learning to happen. From time to time I think about those multiple intelligences schools and wonder what aspects might be applied to my current area of focus, the English / Language Arts classroom. Clearly, more avenues toward literacy are better than fewer; and since we know that traditional literacy practices taught through traditional means are insufficient preparation for the types of literacy practices people are called upon to demonstrate in real life, we might think of “pods” for different groupings or categories of literacy learning.

Design of the center and periphery: A real life ELA classroom. I’ve had the unBELIEVABLE good luck to sit in on Becky Rupert’s ELA classroom at Aurora Alternative High School here in Bloomington, IN. Much of the design of this model is based on how she has arranged her class. To begin with, the main focus of the room is a square of tables where students meet at the beginning of each class. My model does not identify the teacher’s location; that’s because in Becky’s classroom, she sits at the table right alongside her students. She does this on purpose, and it works in service of developing a learning community.

Becky’s classroom is absolutely stuffed with books–you have to move books in order to get to other books. A new addition this year is a laptop cart, which sits against the far wall of the room.


Inclusion of design thinking: my work with SociaLens. For the last several months, I’ve been working with a new organization called SociaLens. The purpose of this organization is to consult with businesses and offer strategies for integrating new types of communications tools and ways of thinking into their organizational plans, with a particular eye toward social media technologies. Two key categories that we think make for highly adaptive, potentially highly successful organizations are new media literacies and design thinking.

Until I started working with SociaLens, I had not thought to consider the connection between these categories. I also hadn’t thought about what educational researchers can learn from corporate innovators and vice versa. But what has been seen cannot now be unseen. I’ve come to see design thinking as an essential element of literacy learning–especially if you believe (as I do) that computational flexibility (which I’ll describe briefly below) is key to preparation for success in a new media age.


Inclusion of new media literacy, representational literacy, design thinking, & computational literacy “pods”: Some stuff I’ve read. I’ve been immersed in new media literacy research for a good chunk of years, and I drank that kool-aid long ago. If you believe in the value of teaching new media literacy practices in schools, then computational literacy kind of comes with the territory. These categories of literacy are similar in lots of respects: Both are better described as a set of proficiencies and attitudes–what Lankshear and Knobel call a combination of “technical stuff” and “ethos stuff”–than as concrete, teachable skills. Both require a kind of openness–a flexibility–to meet the quickly changing demands of emerging technologies. But new media literacies are the skills required to engage in collaborative knowledge-building, collective meaning-making, or problem-solving activities, while computational literacy is, in my mind, linked to a kind of “hacker’s mentality.” It’s the act of simultaneously making use of and resisting the affordances of any technology; of knowing when and how to say “no” if a technology doesn’t meet your purposes; and of finding (or developing) a new technology that better meets your needs and interests.

Design thinking, as I mention above, comes out of my work with SociaLens and the (admittedly very surface-level) reading I’ve done about this approach to problem-solving. This type of thinking has also made an appearance in the recent work I’ve been reading about research in science and math instruction. Many researchers whose work focuses on supporting an inquiry-based focus in science instruction, in particular, emphasize the value of embracing the epistemological basis of science-as-inquiry. As William Sandoval and Brian Reiser explain in their 2004 piece, “Explanation-Driven Inquiry: Integrating Conceptual and Epistemic Scaffolds for Scientific Inquiry,” the epistemic elements of this approach include

knowledge of the kinds of questions that can be answered through inquiry, the kinds of methods that are accepted within disciplines for generating data, and standards for what count as legitimate interpretations of data, including explanations, models, and theories. Placing these epistemic aspects of scientific practice in the foreground of inquiry may help students to understand and better conduct inquiry, as well as provide a context to overtly examine the epistemological commitments underlying it.

Wilensky & Reisman, in their work with computer-based modeling, argue in support of what they call “the engineer’s dictum”: “If you can’t build it, then you don’t understand it.” They work with a modeling language called NetLogo, a loose descendant of Seymour Papert’s Logo language. Their approach has students solve problems by developing models of real-world processes, like population fluctuation within predator-prey (wolf-sheep) communities and the phenomenon of fireflies synchronizing their flashes. The authors make a strong case that model-based thinking–or what we might also call “design thinking”–is key to students’ ability to engage in deep learning about a specific phenomenon and about scientific inquiry more broadly.
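Wilensky and Reisman’s wolf-sheep model is written in NetLogo; as a rough illustration of the kind of model-building they describe, here is a minimal discrete-time predator-prey sketch in Python. The update rules and parameter values are my own Lotka-Volterra-style simplification for illustration, not the NetLogo model itself:

```python
# A toy discrete-time predator-prey model (Lotka-Volterra style).
# Parameters and update rules are illustrative simplifications,
# not the rules of NetLogo's agent-based Wolf Sheep Predation model.

def step(sheep, wolves, birth=0.1, predation=0.002, efficiency=0.001, death=0.05):
    """Advance both populations by one time step."""
    new_sheep = sheep + birth * sheep - predation * sheep * wolves
    new_wolves = wolves + efficiency * sheep * wolves - death * wolves
    # Populations can't go negative.
    return max(new_sheep, 0.0), max(new_wolves, 0.0)

def simulate(sheep=100.0, wolves=20.0, steps=200):
    """Run the model and return the population history."""
    history = [(sheep, wolves)]
    for _ in range(steps):
        sheep, wolves = step(sheep, wolves)
        history.append((sheep, wolves))
    return history

if __name__ == "__main__":
    for t, (s, w) in enumerate(simulate()[:5]):
        print(f"t={t}: sheep={s:.1f}, wolves={w:.1f}")
```

Even a toy like this makes the dictum concrete: to write the `step` function at all, you have to commit to a theory of how predation and reproduction interact.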

I included a pod for “representational literacy” in this model because of my own recent experience grappling with model-building. The ability to design, critique, and modify representational models is a set of skills with relevance across content areas, and we don’t typically think of it as extremely valuable in the literacy classroom. But it should be news to nobody that “literacy” is becoming an increasingly visual category of proficiencies, and that representational literacy is quickly becoming even more tightly bound up with traditional literacies than it ever was before.

What I haven’t yet noted is that these categories of literacy practices make up what we might call “literacy science.” I mean this term to hold the same place in the literacy classroom that “mathematics,” “science,” “history,” and “music” hold in their respective classroom-based environments. As a culture, we haven’t spent enough time yet thinking about the purpose we hope the new literacy classroom will serve. Science class is supposed, ideally, to get students thinking like scientists; in math class you (ideally) learn to think like a mathematician; in history class you think like a historian; but English class has generally been designed as a sort of catch-all, a place where students can learn the basic reading and writing skills that enable them to think like historians, mathematicians, and so on.

What if we shifted the focus of the ELA classroom to more explicitly broach the notion of “literacy science”: A way of being in the (literate) world characterized by an ethos, a set of skills, and a set of norms and behaviors? What would it mean to turn the ELA classroom into a place where we support the growth of literacy scientists?


Inclusion of open space: a nod to the future work of literacy science. Howard Gardner’s list of multiple intelligences has grown over the years, and my model is designed to accommodate new categories of literacy practices. Filling up the entire classroom does nobody any good, especially since we know–we absolutely know–that new valued practices are emerging along with the breakneck speed of emergent technologies.

I should mention, too, that my model includes a safe filled with bundles of cash. This is a nod not only to the future work of literacy science but also to the current conditions of the typical public school. On top of the training required, every one of the pods in my model costs money, and it’s money that schools simply don’t have.

So that’s it: That’s my current model for the role of technologies in the literacy classroom. I would love to know your thoughts. Comments, questions, and suggestions are most welcome and will be read with great joy, thoughtfulness, and enthusiasm.

References: In case you’re interested in reading the work I identified above, here are the citations.

Wilensky, U., & Reisman, K. (2006). Thinking like a wolf, a sheep, or a firefly: Learning biology through constructing and testing computational theories–an embodied modeling approach. Cognition and Instruction, 24(2), 171-209. http://ccl.northwestern.edu/papers/wolfsheep.pdf.

Sandoval, W. A., & Reiser, B. J. (2004). Explanation-driven inquiry: Integrating conceptual and epistemic scaffolds for scientific inquiry. Science Education, 88(3), 345-372.

Posted in creativity, education, Joshua Danish, learning sciences, literacy, new media, participatory culture, pedagogy, schools, teaching | Leave a Comment »

technologies as sleeping policemen: or, how I learned to stop worrying and…

Posted by Jenna McWilliams on January 18, 2010

Nicholas Burbules and Thomas Callister worry for us. Or, at least, they were worried, over 10 years ago when they offered up their take on new technologies in a paper called The Risky Promises and Promising Risks of New Information Technologies for Education. Among their concerns: that too many people adopt a “computer as panacea approach” to new technologies. This is uniquely problematic in education, they argue, where

(r)ather than acknowledge the inherent difficulty and imperfectability of the teaching-learning endeavor, rather than accept a sloppy pluralism that admits that different approaches work in different situations—and that no approach works perfectly all the time—educational theorists and policy makers seize upon one fashion after another and then try to find new arguments, or new mandates, that will promote widespread acceptance and conformity under the latest revolution.

As problematic as the “computer as panacea” approach is, it pales in comparison to the relativistic “computer as neutral tool” approach, the one that has people saying that any technology can be used for good or for evil. Burbules and Callister explain that:

this technocratic dream simply errs in the opposite direction from the first. Where the panacea perspective places too much faith in the technology itself, the tool perspective places too much faith in people’s abilities to exercise foresight and restraint in how new technologies are put to use; it ignores the possibilities of unintended consequences or the ways in which technologies bring with them inherent limits to how and for what purposes they can be used. A computer is not just an electronic typewriter; the World Wide Web is not just an on-line encyclopedia. Any tool changes the user, especially, in this instance, in the way in which tools shape the conception of the purposes to which they can be put. As the old joke goes, if you give a kid a hammer they’ll see everything as needing hammering.

They prefer a middle approach, which assumes that a simple cost-benefit analysis fails to account for the possibility that benefits and costs are highly dependent on perspective. They offer as proof the history of antibiotics, which through widespread use greatly decreased humanity’s likelihood of dying from bacterial infection but in the process led to the emergence of drug-resistant forms of bacteria. (“That is a very bad thing,” they write.)

Though it’s fairly simplistic to compare new information technologies to antibiotics, I’ll go with the analogy for now, mainly because I agree with the authors’ effort to problematize attitudes toward new technologies. It’s perhaps more accurate to consider the social effects of antibiotics: they have led to a general increase in life expectancy, but in the process have enabled imperialistic societies (cf. the United States) to effectively colonize cultures, communities, and countries worldwide. In the same way, new technologies offer unprecedented access to information, communities, and tools for mobilization, but they simultaneously support new forms of colonization, both across and regardless of national borders.

Which brings me to the metaphor of technologies as sleeping policemen.

The sleeping policeman: In America, we call it a “speedbump.” It looks like this:

The speedbump’s intended effect is to get drivers to slow the hell down, and it’s commonly used in neighborhoods and suburban areas with lots of kids. And it does get people to slow the hell down, primarily because they have no choice. There are also tons of unintended effects: Parents feel more comfortable letting their kids play outside. And, as this post points out, kids playing outside tend to get to know each other better. They–and, by extension, their parents–connect with other neighborhood residents, and everybody feels more connected: “Parents come to know the nearby children. And, inevitably, they come to know those childrens’ parents. They begin trading favors like driving children around. They become neighborly.”

There are potential negative effects, too. Using sleeping policemen to slow drivers down changes driving practices in unintended ways. When a driver hits the last speedbump, she hits the gas and jets on down the road. This might increase the risk of an accident just beyond the range of the speedbumps. Drivers may choose to avoid areas with speedbumps, thereby increasing traffic through other areas–even, potentially, nearby neighborhoods whose streets lack speedbumps. And when a driver is not forced to monitor her own driving practices, the decision to simply drive more slowly in neighborhoods is taken away from her, thereby increasing the possibility that she will not adopt slower driving as a general practice.

Still, I think we can all agree that the benefits outweigh the costs. Nobody sees the speedbump as a panacea, and I don’t imagine many people see the speedbump as a neutral technology.

So why do we worry so much more about the emergence and increasing ubiquity of new media technologies than we do about sleeping policemen or antibiotics?

One reason is that it’s easier to see new media technologies as actors that shape our practices than it is to see how speed bumps and antibiotics have shaped us.

Actors: Any person or tool that exerts force upon any other person or tool, thereby shaping its use or practice. In Actor-Network Theory, everything is a potential actor, everything a potential actant.

Speed bumps act upon cars, drivers, kids, parents, neighborhood dynamics. Antibiotics have acted upon people, policies, government spending, and attitudes. We live longer now. We therefore reshape our lives, our goals, and our relationships to others. It’s all very chaotic and complicated, because our reshaped attitudes in turn act upon our use of antibiotics. Everything mediates everything.

Because new media technologies have emerged and been adopted so quickly, their role in reshaping thought and action–and even, it’s becoming clear, physiology–is clear, even if the outline of how this reshaping is shaking out remains quite fuzzy. New technologies as sleeping policemen: They shape not only how we drive, but how we think about driving. We move them, we reshape them, we add more or take a few away, we develop cars with better suspension…and it goes on down the rabbit hole.

Posted in academia, education, learning sciences, new media, participatory culture, pedagogy, philosophy, public schools, schools, social media, social revolution | 3 Comments »

the pedagogy of the oppressed in action, via Dr. Who & the BBC

Posted by Jenna McWilliams on November 14, 2009

“you don’t just give up / you don’t just let things happen / you make a stand / you say no//”

Here’s what a pedagogy of the oppressed can do for us all, starring Rose Tyler as my new hero:

Posted in human rights, Paulo Freire, pedagogy, social revolution | Leave a Comment »

putting some trust in "those little bastards"

Posted by Jenna McWilliams on September 21, 2009

Over at the Chronicle of Higher Education, H. William Rice has posted a thoughtful opinion piece titled “Don’t Shrug Off Student Evaluations.” (The piece is locked to nonsubscribers; because I’m all about open access, I will helpfully link you to a free version here.)

Rice, a longtime higher-education faculty member, describes a pair of colleagues who took distinctly negative approaches to the notion of students evaluating their professors: One, whom Rice describes as “an elderly faculty member,” explained to Rice that he saw student evaluations as

“an absolute violation of academic freedom,” while jabbing a trembling, crooked finger in my face with a swordlike flourish. “No one has the right to come in my classroom,” he said. (I assume he allowed the students in.)

The other colleague, whom Rice calls “Professor X,” confided in Rice that he read his students’ evaluations before submitting final grades. Professor X had received nearly universally negative reviews and wanted Rice’s advice on whether he should lower students’ grades “to show ‘those little bastards’.”

Rice, of course, takes the more contemplative path by arguing that student evaluations have an important place in academia because they offer educators insight into how well they’re doing their job, where they can improve, and in what areas they continue to succeed. He writes:

Sure, student evaluations have their limits. They should never be the only means of evaluating faculty members, and they should never be used to snoop on professors who deal with controversial subjects in their classes. Yes, administrators have been guilty of misusing them. But the benefits far outweigh the risks, and faculty members who actually want to become better teachers—and who believe that good teaching skills are not bequeathed to them in perpetuity with the awarding of a Ph.D.—should read them over and over again.

Professor X’s great objection to student evaluations was one I frequently hear: “The student does not know the subject, so how can he or she judge my teaching?”

True, students’ perspectives are limited. But so are professors’. A professor cannot know what it is like to be 20 in an age of text messages, Facebook, and YouTube, and to be forced to endure lectures from someone who does not inhabit their socially networked world. I’m not suggesting that faculty members necessarily use that technology in their teaching, only that the point of view of those who do use it might be valuable.

As a former college instructor, I can attest to the deep value of student evaluations, though the danger of misinterpretation is always present. Often, we think about student ratings as a kind of popularity contest for educators–in some ways, I think, rightly so. After all, it’s fairly easy to get high marks from lots of students: Just be friendly, funny, and a soft grader. It helps to make interesting use of new media resources.

Because so much of the student evaluation process hinges on faculty popularity, it’s easy to overlook the much more important questions that only students can answer: Did the professor change the way you thought about the subject? Did you leave the class a better thinker than when you went in? Can you apply what you’ve learned to real-world contexts?

Here I draw from Ken Bain’s excellent text, “What the Best College Teachers Do.” He writes about an experiment conducted by Arizona State University physicists in the early 1980s. They examined whether introductory physics courses changed the way students thought about motion. Most students came in with an intuitive set of theories about how the world works; most of these theories aligned with what the physicists called “a cross between Aristotelian and 14th-century impetus ideas.” The goal of the course was to introduce students to Newtonian physics, which was in many ways directly oppositional to the Aristotelian approach. Given that most undergraduates went in “thinking like Aristotle,” did they leave “thinking like Newton”?

Bain writes:

Did the course change student thinking? Not really. After the term was over, the two physicists…discovered that the course had made comparatively small changes in the way students thought. Even many “A” students continued to think like Aristotle rather than like Newton. They had memorized formulae and learned to plug the right numbers into them, but they did not change their basic conceptions. Instead, they interpreted everything they heard about motion in terms of the intuitive framework they had brought with them to the course.

….Researchers have found that…some people make A’s by learning to “plug and chug”–memorizing formulae, sticking numbers in the right equation or the right vocabulary into a paper, but understanding little. When the class is over, they quickly forget much of what they have “learned.”…Even when learners have acquired some conceptual understanding of a discipline or field, they are often unable to link that knowledge to real-world situations or problem-solving contexts.

Of course, there’s no way to use end-of-semester student evaluations to gauge what kind of long-term impact on learning an instructor has had. Aside from the too-short time scale, there are the real pressures on students to perform, achieve, succeed–and, strange as it may seem, the only way they can definitively prove they’ve done this is through their grade point average. This means that evaluations are nearly inextricably linked to students’ perceived achievement in the class; linked, that is, to what they think will be their final grade.

This isn’t to say that student evaluations don’t have a place in higher education: I firmly believe that they do, if for no other reason than to boot the universally bad instructors who either don’t care about or aren’t capable of teaching effectively and to toss the best instructors a little closer to the tenure finish line.

Most of us fall somewhere in the middle of the good-teacher continuum, which means that if we want to find out whether we’ve had an impact on students’ thinking, we may need to supplement student evaluations with some evaluations of our own.

Here’s one thing we might try: A set of surveys, administered at the beginning of the class and again at the end, that zero in on the key conceptual frameworks of the course’s domain. While in introductory physics the key issue may be “how students think about motion,” in geometry it may be “how students think about shapes.” In English, my field of choice, it may be something like “how students think about effective written communications.” You start there, think about the key issues that shape your conceptual framework, and design a set of questions that can gauge students’ intuitive answers (at the beginning of the course) and informed answers (at the end of the course). The nice added benefit of doing this sort of thing is that it forces you to think about and articulate your foundational approach to the subject matter–useful for any educator, no matter how expert.
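One common way to score such pre/post surveys–borrowed from the physics education research tradition Bain describes–is Hake’s “normalized gain,” the fraction of the available improvement a student actually achieved. Here is a minimal sketch; the student scores are invented for illustration:

```python
# Sketch: scoring a pre/post concept survey with the normalized gain,
# g = (post - pre) / (max - pre), a metric common in physics education
# research. All student data below is hypothetical.

def normalized_gain(pre, post, max_score=100.0):
    """Fraction of the room-for-improvement a student actually gained."""
    if pre >= max_score:
        return 0.0  # no room left to improve
    return (post - pre) / (max_score - pre)

def class_average_gain(scores):
    """scores: list of (pre, post) pairs, one pair per student."""
    gains = [normalized_gain(pre, post) for pre, post in scores]
    return sum(gains) / len(gains)

if __name__ == "__main__":
    hypothetical_scores = [(40, 70), (55, 80), (30, 45)]
    print(f"average normalized gain: {class_average_gain(hypothetical_scores):.2f}")
```

The appeal of the normalized gain over a raw difference is that it doesn’t penalize a class that happened to start with strong intuitions: a student who moves from 80 to 90 used up half her available improvement, same as one who moves from 40 to 70.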

Indeed, the goal for all educators, no matter what discipline, no matter what the age of their students, should be to help all learners move, even a little, toward how real practitioners in the subject area engage with the world.

And let’s try to put a little more faith in our students: “Those little bastards” may care more about grades than we’d like, but they also tend to recognize real, effective teaching when they encounter it. They may not, as one of Rice’s straw men explained, be expert enough about the subject area to teach the class, but they’re certainly experts in learning–they’ve been doing it their whole lives. Let’s trust that, given the right questions, they’ll offer up the answers we need in order to improve our teaching practices.

Posted in academia, academics, education, pedagogy, teaching | 3 Comments »

why I chose openness: David Wiley, I’ve completed my homework assignment!

Posted by Jenna McWilliams on August 27, 2009

In a recent post on his blog iterating toward openness, David Wiley makes a request of all adherents to the “openness” movement who read his blog:

Without any special authority to do so, may I please give you a homework assignment? Would you please blog about why you choose to be open? What is the fundamental, underlying goal or goals you hope to accomplish by being open? What keeps you motivated? Why do you spend your precious little free time on my blog, reading this post and this question? If each of us put some thought and some public reflective writing into this question, the field would likely be greatly served. The more honest and open you are in your response, the more useful the exercise will be for you and for us.

The assignment is the result of a previous post in which Wiley wrote:

While I think everyone in the field of “open education” is dedicated to increasing access to educational opportunity, there is an increasingly radical element within the field – good old-fashioned guillotine and molotov type revolutionaries. At the conference I heard a number of people say that things would be greatly improved if we could just get rid of all the institutions of formal education. I once heard a follow up comment, “and governments, too.” I turned to laugh at his joke, but saw that he was serious. This “burn it all down” attitude really scares me.

As you can imagine if you know even a small chunk of the history of projects like the Free / Libre / Open Source Software movement (it keeps getting more words tacked onto it for a reason), Wiley’s post generated some fierce responses. So the request, and Wiley’s decision to back away from his initial stance, appears to be an effort to consider the broad range of issues that attract people to the openness movement in general, and open education in particular.

Back in 1984, Seymour Papert said this:

There won’t be schools in the future…. I think the computer will blow up the school. That is, the school defined as something where there are classes, teachers running exams, people structured in groups by age, following a curriculum– all of that. The whole system is based on a set of structural concepts that are incompatible with the presence of the computer… But this will happen only in communities of children who have access to computers on a sufficient scale.

Twenty-five years later, we are forced to conclude that one of the following is probably true:

  • The computer didn’t actually blow up anything at all; schools are basically the same as they always were, with the same curricula, approaches, and values; or
  • The computer did blow up the school, but nobody noticed that the school had been blown to bits, and everybody kept operating as if education continues to serve the purpose it served 50, 100, or more years ago.

Either way, the results are the same: schools equip kids with a set of mindsets and skillsets that prepare them increasingly less well for the culture into which they will emerge.

Perhaps Papert’s mistake was in attributing intention to the computer; if, to further extend the metaphor, the computer really was an explosive device, then it had no ability to decide how, when, and where to detonate.

I was drawn to the open education movement because it attempts to do on purpose what we thought computers would do by default: blow wide open the walls, and therefore the constraints, surrounding education. In arguing against the binary nature of the notion of “openness,” Wiley argues that “[i]n the eyes of the defenders of the ‘open source’ brand, if you’re not open enough you’re not open at all…. It is just as inappropriate for you to try to force your goals on others as it is for others to try to force their goals on you.”

Of course he’s right, but on the other hand, things aren’t always quite so clearcut in the field of education. I shouldn’t be able to force my values on educators, researchers, and administrators who disagree with my approach to teaching and learning; but if I leave them be, then they are free to inculcate young people with exactly the wrong set of skills, ideals, and values–the kind that reify outdated, unfair, and wrongheaded assumptions about how the world can, does, and should work.

I am, as I hope I have made clear, an increasingly radical element within the field. I am a revolutionary. And this is precisely what drew me to openness as a movement. In fact, I wish the open education movement would embrace a more inclusive name, perhaps something like Free / Libre / Open Education, or FLOE. In fact, I think I’ll start calling it exactly that.

I agree with Wiley that the term “open” is problematic, but for the exact opposite reason that Wiley gives. I think people are too likely to call almost anything open, even if the door is only open a centimeter. If it’s open exactly that far, and there’s a doorstop behind it preventing it from opening any further, then that door is effectively closed.

Related posts by other writers:
David Wiley: A few notes about openness (and a request)
Jeremy Brown: Bard Quest 2: Wiley’s motivation, Tomaševski’s motivation, and the real reason people get into Open Education
Jared Spurbeck: Why my creative work is “open”
davidp: Optimal, not ideal

Posted in education, open education, open source, participatory culture, pedagogy, politics, public schools | 3 Comments »

how to think like a good {fill in the blank}

Posted by Jenna McWilliams on August 21, 2009

“The message of Wikipedia,” writes Michael Wesch, “is not ‘trust authority’ but ‘explore authority.’ Authorized information is not beyond discussion on Wikipedia, information is authorized through discussion, and this discussion is available for the world to see and even participate in.”

This comes from Wesch’s January 2009 Academic Commons article, “From Knowledgable to Knowledge-able: Learning in New Media Environments.” The piece is part of an issue dedicated to exactly this problem: How do we teach and learn in a cultural moment when the very definitions of “knowledge,” “teaching,” “learning,” and even “information” are being called into question?

Wesch focuses on the brick-and-mortar university, arguing that despite growing recognition among higher-ed faculty and administrators that university teaching and learning desperately need to shift away from their authoritarian roots, a series of physical, social, and cognitive structures stymies this effort at nearly every turn. The physical deterrents are, Wesch argues, the easiest to recognize, and they

are on prominent display in any large “state of the art” classroom. Rows of fixed chairs often face a stage or podium housing a computer from which the professor controls at least 786,432 points of light on a massive screen. Stadium seating, sound-absorbing panels and other acoustic technologies are designed to draw maximum attention to the professor at the front of the room. The “message” of this environment is that to learn is to acquire information, that information is scarce and hard to find (that’s why you have to come to this room to get it), that you should trust authority for good information, and that good information is beyond discussion (that’s why the chairs don’t move or turn toward one another). In short, it tells students to trust authority and follow along.

This is a message that very few faculty could agree with, and in fact some may use the room to launch spirited attacks against it. But the content of such talks are overshadowed by the ongoing hour-to-hour and day-to-day practice of sitting and listening to authority for information and then regurgitating that information on exams.

This points to a key feature of the social structures that work against change in higher education: the ongoing pressure to standardize curricula and to use (easily quantified) standardized assessments for accountability purposes. Wesch writes:

When I speak frankly with professors all over the world, I find that, like me, they often find themselves jury-rigging old assessment tools to serve the new needs brought into focus by a world of infinite information. Content is no longer king, but many of our tools have been habitually used to measure content recall. For example, I have often found myself writing content-based multiple-choice questions in a way that I hope will indicate that the student has mastered a new subjectivity or perspective. Of course, the results are not satisfactory. More importantly, these questions ask students to waste great amounts of mental energy memorizing content instead of exercising a new perspective in the pursuit of real and relevant questions.

This is, perhaps, one of the most significant dangers inherent in re-mediating assessment: The risk of re-mediating the wrong aspects of current assessment strategies. Rewriting a multiple-choice test is surely not the answer, but it’s often, and understandably, what innovative and new media-friendly educators do. The results of this effort may not be satisfactory, after all, but they’re better than nothing. And short of overhauling an entire course, it’s often a useful stopgap measure.

And what of overhauling an entire course? Wesch, recognizing that “our courses have to be about something,” argues for a shift away from “subjects” (English, History, Science) and toward “subjectivities”–ways of approaching and thinking about content. One simple way to understand this shift is to consider the difference between learning the steps of the scientific method and developing the mindsets of a profession that embraces the scientific method as a useful approach to experimentation.

The “subjectivities” approach is, in fact, the favored approach of many graduate programs. My sister, who begins law school this fall, is about to be immersed in a cognitive apprenticeship designed to make her think, act, and speak like a lawyer. As a new doctoral student in Indiana University’s Learning Sciences program, I’m undertaking the same apprenticeship. A series of courses, including IU’s Professional Seminar in the Learning Sciences and Theory and Method in the Learning Sciences, are intended to equip new grad students with the Learning Sciences mindset.

This approach, however, gives rise to a key question: If the “subjectivities” approach is intended to help learners think, act, and speak like a {fill in the blank}, then who decides how a {fill in the blank} is supposed to think, act, and speak?

Jim Gee offers a fascinating critique of “learning to think like a lawyer” in his book Social Linguistics and Literacies. He argues that success in law school is slanted toward people who think, act, and speak like white, middle-class men, explaining that:

[t]o write a competent brief the student has to be able to read the text being briefed in much the same way as the professor does…. Students are not taught these reading skills—the ones necessary to be able to write briefs—directly. Briefs are not, for instance, turned in to the professor; they are written for the students’ own use in class…. One of the basic assumptions of law school is that if students are not told overtly what to do and how to proceed, this will spur them on essentially to teach themselves. Minnis argues that this assumption does not, however, work equally well for everyone. Many students from minority or otherwise non-mainstream backgrounds fail in law school.

(A female friend who recently completed law school agrees with this argument, and struggled mightily with the inequities inherent in her program and inside the field of law in general. I’ve written about her experience here.)

This issue is certainly not limited to law school; it’s a thorny problem in every program designed to help students think like a {fill in the blank}. I understand that this is an issue that IU’s Learning Sciences program has grappled with recently, and I imagine this is the reason that the Professional Seminar in the Learning Sciences, previously a required course, has now been made optional.

What do I know, right? I haven’t even started my first semester in the program yet. But it seems to me that if this issue is worth grappling with (and I believe it is), it’s worth grappling with alongside the program’s apprentices. I’m for making the course mandatory and then using it to expose, discuss, and clarify the very issues that led to the faculty’s decision.

Here we can take a page out of the Wikipedia lesson book. There’s no point in simply trusting authority when the social revolution supports not just questioning, not just opposing, but actually exploring authority. After all, thinking like a good {Learning Scientist} is about much more than embracing a set of approaches to teaching, learning, and knowledge; it’s also about questioning, contesting, and exploring the very foundation of the field itself.

Posted in academia, assessment, conspiracy theories, graduate school, Jim Gee, pedagogy, Ph.D., social revolution | 1 Comment »

putting the "our" in "open source": on the dearth of women in the open source programming movement

Posted by Jenna McWilliams on August 4, 2009

In case you haven’t seen it yet, I wanted to link you to Kirrily Robert’s keynote at this year’s O’Reilly Open Source Convention. Robert’s keynote, “Standing Out in the Crowd,” focused on the dearth of female developers in the open source movement. She offers this image from the 2008 Linux Kernel Summit:


Image credit: Jonathan Corbet, lwn.net

Robert writes:

This is a normal sort of open source project. I’ll give you a minute to spot the women in the picture. Sorry, make that woman. She’s on the right. Can you see her?

While women are a minority in most tech communities, Robert explains, the gender disparity in open source development is more pronounced than in other technology disciplines. While women make up between 10-30% of the tech community in general, they comprise about 5% of Perl developers, about 10% of Drupal developers, and (according to an EU-funded survey of open source usage and development, called FLOSSPOLS) about 1.5% of open source contributors in general.

Robert surveyed female developers to find out why women seem to be so reluctant to contribute to open source projects; the most common reason was some variation of “I didn’t feel welcome.” She points to a pair of innovative projects whose members have actively worked to recruit women. One is the Organization for Transformative Works’ (OTW) Archive of Our Own (or AO3); the other is Dreamwidth, a blogging and community platform forked from the LiveJournal codebase. Both projects focused on recruiting women not merely to be inclusive, but because they felt it was essential to the projects’ success.

The entire talk is worth a read-through or a listen, but I want to highlight one key point from the set of strategies she offers for recruiting diverse candidates: Find potential users of the application and teach them programming, instead of recruiting good programmers and teaching them about the value of the application. She says:

If you’re working on a desktop app, recruit desktop users. If you’re writing a music sharing toolkit, recruit music lovers. Don’t worry about their programming skills. You can teach programming; you can’t teach passion or diversity.

I’ve been thinking about this very aspect of the open education movement since the Sakai 2009 Conference I attended last month. Sakai offers an open source collaborative learning environment for secondary and higher education institutions, emphasizing openness of knowledge, content, and technology. This embrace of openness was evident in every aspect of the conference, except for one: The notable lack of educators in the panels and audience.

If you want a good open education resource, you need to start by recruiting open source-friendly educators. Otherwise, you run the risk of developing a highly robust, highly functional tool that’s limited only in its ability to offer the features educators actually want.

Posted in distributed cognition, feminism, open education, open source, pedagogy, sakai | Leave a Comment »

some thoughts on Sakai 09: here’s the church, here’s the steeple, open the doors…

Posted by Jenna McWilliams on July 8, 2009

I’ve been liveblogging day one of the Sakai 2009 Conference in Boston, MA. One key theme in all of the panels I’ve attended so far is this:

We (designers, programmers, educators, faculty, administrators, students) need a shared language for talking about open education, open technologies, and Sakai.

Which makes me wonder: Why is there such a dearth of faculty and, specifically, education faculty at this conference? My sense so far is that the majority of the 500-odd attendees of this conference are in administration, technical support, and Information Technology. A few presenters are also faculty members, but this is generally a split role–IT folks also teach part time.

If it’s true that we need a common language, if it’s true that we need to think about how to support adoption of and deep engagement with the tools made possible through Sakai, then we need a wider variety of people at the table. A common language cannot be devised until everybody starts talking.

Sakai is an ambitious and admirable project, and its designers and programmers have developed a robust system that supports a wide range of interaction types. What’s missing seems to be the conversation about pedagogy. Presenters have pointed to the tool’s affordances, but I wonder how much work has been done exploring what kinds of learning experiences are supported through Sakai, and how the results of Sakai-based courses compare to those of offline courses or of courses that work through other types of collaborative learning environments.

Where are the education folks? Were they not invited?

Posted in conferences, education, pedagogy, sakai | 5 Comments »