Michael Griffin in Philosophy investigates pedagogies that support the growth of citizenship skills
We narrowed down “citizenship skill” into its constructs of perspective taking, cognitive empathy, intercultural understanding, and the ability to integrate and relate opposing ideas on polarizing questions.
– Dr. Michael Griffin, Assistant Professor
Policymakers and students both describe “citizenship skills” as desirable learning outcomes and graduate attributes in higher education (UBC 2009, Banks 2007, Sax 2004). This project aims to identify methods of teaching and learning within the humanities that are correlated with positive change in citizenship skills, using validated psychological measures of perspective-taking, empathy, interpersonal and intercultural fluency, and tolerance of ambiguity (outlined below). We aim to test the hypothesis that the rigorous and charitable study of literature and philosophy drawn from diverse cultural traditions positively influences traits perceived to be conducive to good citizenship (cf. Kidd & Castano 2013); if true, we aim to identify content and pedagogical perspectives and practices that are correlated with this influence, to modify the pilot courses (enrolling approximately 560 students) in year 2, and to disseminate these results within and beyond the university community.
This project began when I was reading some of the work my colleagues were doing in my two departments, Philosophy and Classics, to understand the graduate attributes for our undergraduates. What are the learning outcomes a good undergrad should be taking out of the degree? We knew we were trying to cultivate communication skills and critical thinking, and more specifically certain kinds of analytical reasoning or intercultural understanding. The social sciences seemed to have constructs for defining these kinds of traits, but there’s not a lot of dialogue between those empirical constructs and the language that we use for learning outcomes. That’s how I got interested in the intersection. We narrowed down “citizenship skill” into its constructs of perspective taking, cognitive empathy, intercultural understanding, and the ability to integrate and relate opposing ideas on polarizing questions. Those traits seemed to be measurable socio-psychological variables that fit within the language we use to define Humanities outcomes.
At the same time, a personal motivation as a teacher of the Humanities is that I wanted to be part of a process that would cultivate students’ sensitivity to looking at the world in a way different from their own, their family’s, or their culture’s. I’d like to think that after a four-year philosophy degree, or a four-year classical or eastern studies degree, they are a little more able both to articulate their own views with that sort of sensibility and to relate to other people’s or other cultures’ views on different questions. My interest was in finding out whether that could be true and what styles of teaching and learning could connect those social-psychological constructs with learning outcomes. The purpose of applying to the TLEF was for support to conduct pilot pre- and post-surveys in a few different classes in different disciplines, to see whether there was a positive or negative change at the year level (1st to 4th year) or at the course level (over a 13-week course) in any of these measurable traits.
Are there specific changes you made in the course? Any strategies that you implemented in order to encourage the citizenship skills?
We haven’t made any intentional interventions yet; instead, we’ve been looking at whether there is a positive or negative change in any course or discipline, to determine what signature pedagogies or kinds of subject matter in those disciplines might account for the changes. Since we’re just coming to the end of year 1, we don’t yet know what the outcomes will be, but preliminary results suggest that in some Humanities courses, such as History of Philosophy or Philosophy of Literature, there is a statistically significant positive change in the targeted measurable traits even over a short 13-week period that we’re not seeing in other courses. In fact, we’re seeing negative change in some control courses. The mechanism behind this is still unknown, but we’ve attempted to control for other variables, like the instructor. If we think about the factors that differ between courses, we would hypothesize that a course that’s very heavy on interactive oral debate in the classroom and on intentional, detailed feedback on written assignments might be a course that cultivates these traits, or citizenship skills, more strongly. So we can ask: “What if, in a class that doesn’t have much of this component – that isn’t taught with a lot of oral debate offering different perspectives on a polarizing question – we just introduced that feature and left everything else the same? Would we see any difference in that class in that year?”
You proposed a paper in which students have to represent a viewpoint opposing their own: is that one of the interventions happening in year two?
We’ve modified the proposal slightly because we’re no longer requiring that students take the opposite position from their intuitive one. Instead, we are using an integrative complexity tool developed in Peter Suedfeld’s lab here at UBC, which is basically qualitative coding that allows us to analyze any sample of argumentative writing on a question, regardless of the writer’s position. What we are looking for in the qualitative coding part of this, separate from the self-report part, is the student’s ability to differentiate between one position and another when given a question on which several stances can exist. Imagine the complexity of the discourse in the Republican presidential race right now: you often hear candidates offer just one or two black-and-white proposals on an issue, and while that is very common in political discourse, we’re looking for the ability to pick out shades of grey. Are students able to integrate the different viewpoints into one common framework to explain why such different positions exist on a difficult question? This intervention is more of a summative tool used to check which courses show positive or negative change, so that we can then start to think about whether the writing practice, the oral debate, or maybe hearing feedback from their peers is leading students to see the merit of different points of view by the end of the term.
The year 2 interventions have evolved into something like introducing a wide-open classroom environment, where time would be set aside each week for instructors to hear different points of view from students, backed up with good arguments… or, again, trying for more granular feedback on written assignments. Another change is that we were able to get some of our batteries of questions into the UES, the university-wide survey, this year, meaning that in a couple of weeks we will have access to 5000 responses across majors. This will give us much more resolution on the change in these traits from year 1 to year 4 than we can get across 13 weeks. By observing the development of these perspective-taking skills, we can start to think about what signature pedagogies are present in one major and maybe not happening in another. At this stage, it’s about starting to guess at what the mechanisms might be and starting to look at rates of change.
What approaches are you taking to evaluate the impact of different pedagogies?
There are two parts to answering this question: one is our methodological approach and the second is how we are thinking about evaluating the project at the end – how successful it has been. On the methodology side, this is mostly self-report surveys run through Qualtrics, using batteries of validated constructs from social psychology. For example, to observe their attitudes to punishment, students would be presented with a scenario and then questioned on the likelihood of their responding in a certain manner – these are the kinds of things we can use to get quantitative scores. The qualitative scoring for integrative complexity – the scoring of essays on polarizing questions that I mentioned before – is a model which comes out of Peter Suedfeld’s lab in Psychology. Trained coders in the lab extract scores from students’ writing and answers to prompts, and the coding is checked for reliability. We then put all the data together in SPSS and look for any correlation with either the discipline or the pedagogies used in a given class. The priority is getting the data, first and foremost, so we can set up a workshop, maybe at CTLT, and talk to people about what to do with it. Secondly, if we are able to identify two or three signature pedagogies that seem to correlate with change in these traits and make a publishable case for it by the end of the second year, I think that would be a good measure of success for the project. Our subject pool would be in the thousands, and I think that generates a broad enough range of meaningful data across a lot of majors to allow us to at least make some educated hypotheses about what pedagogies might be associated with the targeted outcomes.
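To make the pre/post comparison concrete: the project’s actual analysis runs in SPSS on Qualtrics exports, but a minimal sketch of the underlying statistic – a paired t-test on the same students’ scores at the start and end of a term – might look like this. All the numbers below are made up for illustration; they are not study data.

```python
# Illustrative sketch only: a paired pre/post comparison of self-report
# scores, of the kind the project computes in SPSS. Data are hypothetical.
from math import sqrt
from statistics import mean


def paired_t(pre, post):
    """Paired t statistic for pre/post scores from the same students."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    d_bar = mean(diffs)
    # Sample standard deviation of the differences (n - 1 denominator).
    sd = sqrt(sum((d - d_bar) ** 2 for d in diffs) / (n - 1))
    return d_bar / (sd / sqrt(n))


# Hypothetical perspective-taking scores (1-5 scale) for ten students,
# measured in week 1 and week 13 of a course.
pre = [3.1, 2.8, 3.4, 3.0, 2.9, 3.2, 3.3, 2.7, 3.0, 3.1]
post = [3.4, 3.0, 3.6, 3.3, 3.1, 3.5, 3.4, 3.0, 3.2, 3.3]

t = paired_t(pre, post)
print(f"mean change = {mean(post) - mean(pre):.2f}, t = {t:.1f}")
```

A large positive t on fabricated data like this would suggest the gain is unlikely to be noise; in practice one would compute the p-value (e.g. with `scipy.stats.ttest_rel`) and, as the project does, compare the change against control courses rather than rely on a single within-course test.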
How can the work you’re doing be applied to inform teaching practice?
The TLEF is not really a research grant but rather a grant to support meaningful pedagogical change, so there is a challenge just in the timeline. Coming from an open-ended kind of exploration in year 1, we wanted to be able to make some good educated guesses, by the end of year 2, at what pedagogies are correlated with, if not directly causing, positive change in the attributes we are interested in. This looked doable at the front of the two years, but halfway through it starts to look more challenging. It certainly is a challenge to get to the point of offering meaningful, evidence-based recommendations at the end of the grant, but it’s the right kind of challenge.
The second challenge is recruiting classes to participate in pedagogical research that is open-ended and without a specific target intervention. Recruitment in general is hard because it’s not part of the disciplinary culture in a lot of majors. It’s been really easy to get Psychology students and Sauder students to participate in research because they have subject pools and it’s all part of the expectations of undergraduates. However, a lot of this project comes out of my interest in the Humanities, and it’s not part of the disciplinary culture of the Humanities to do quantitative or qualitative research on students in class for pedagogy. So just getting instructors on board with the idea of, say, offering a bonus mark for participating in a survey is tricky in its own right. It’s been a learning experience for me in getting used to different disciplinary cultures around the university too.
Do you have any advice for instructors who might want to engage in a similar type of project?
On the recruitment side of it, starting early would have been helpful. I found that there are people interested in similar sorts of things all over campus, but this being UBC, as with any large public research university, everybody’s naturally isolated in their own areas. It’s hard to find out who’s doing work that might be really instructive. Leveraging the cross-faculty networks that already exist on campus, especially through Arts ISIT or CTLT, helps a lot. Personally, just being able to talk to people there to get recommendations of who else to talk to and bounce ideas off of is great, so my advice is to make those connections happen early on in the project. It can happen in a less structured way, not necessarily in a workshop, but defining the framework, the goals, and the criteria for success from the beginning is probably a good recommendation.
What are your next steps, in terms of how you’re going to share this project? How can people learn more about it?
Well, we will get the results from the UES hopefully in the next couple of weeks, after which there will be more qualitative coding to do, which is going to take a while. Out of that, hopefully in the next year, we can experiment with some interventions if we can see some mechanisms or signature pedagogies that seem relevant. For knowledge mobilization, I think my end goal would be to offer a community of practice workshop to stimulate conversation between people – anybody who’s interested to hear about what we’ve been doing and make suggestions. That would be one way of sharing the project, and the second would be going for a publication. It would be a good criterion of success for sure if we get something published. Even if in the initial stages it is revised and resubmitted, the process of publishing in the scholarship of teaching and learning would be very educational for me, and it would be great to get reviewers’ feedback.