Report response: Sabrina of Schell Games

This guest post by Sabrina Culyba is the first in a series of responses to our first report.

  – Editor

—–
I’m Sabrina Culyba, a game designer at Schell Games. I was excited to see the first Impact with Games report and the conversation it represents. The report team put out a call for feedback, and I’d like to share some from my own perspective to keep the conversation going.

Defining Impact
In parallel to the report’s first claim, “Impact is defined too narrowly,” I was intrigued by its discussion of games that create collective social change.

First some background: In my work at Schell Games, I’ve mostly approached impact from a player-centric perspective. At our studio, we call these games “Transformational Games” and approach our design as a means to transform the player, with the view that larger social change spreads from individual transformation. This isn’t to say that we don’t design for social or group interactions, but our focus has typically been how these interactions contribute to, or stem from, individual players.

So this report and further conversation with Ben Stokes challenged the “individual first” view a bit, opening up a different perspective for me about group transformation as its own thing separate from individual transformation. For me, group transformation is a new layer to creating change that I am eager to dig into more in the future.

Surfacing Forms of Evaluation
Claim #3, “Evaluation Methods are Inflexible,” really came to life for me recently during a thread on an educational games mailing list of which I am a member. The conversation started as an attempt to collectively list the entities publishing assessments of learning games, whether based on formal research, teacher experience, factual stats, subjective review, or other means. Almost immediately the discussion evolved into a debate about what constituted valid evaluation of a game, including some pointed statements from individuals about what other people are doing wrong in this space. I wish I could say that we came to an insightful conclusion, but the reality was a lot of collective cross-talk rather than collaborative problem solving.

It’s clear there’s a lot of frustration out there, and I think this is one of the most important conversations to push forward from this report. Personally, I agree with the sentiment in the report that we should have multiple lenses through which to evaluate the efficacy of a game. I also think that one of those lenses should be rigorous research. Two of the most interesting points to come out of that mailing list conversation were:

  1. Context of use matters: A great teacher can create an amazing learning experience out of a game whether or not it was designed well or with learning in mind. Similarly, a proven, effective game for impact used in the completely wrong context may fail to change anything. We need to talk about context when we talk about a game’s efficacy.
  2. Problems with access and timing: Without a clear way to find out what research or evaluation has been done on which games, teachers and others looking to use games for change are forging their own path or relying on word of mouth. They aren’t going to simply wait for a 3-5 year study (assuming there is one on a game that fits their need). What tools can we provide to collect and surface useful short-term signals such as anecdotes, expert review, or informal data?

A Missing Piece: Higher Costs (for transformation & evaluation)
Under “Anticipated Project Benefits for Game Designers & Makers,” the report says:

If the lack of evaluated games is any indication, a common scenario is to focus on creating the game and worry about evaluation once it is done (if at all).

The implication here is that game makers do not see the importance of considering assessment early — that their approach is to focus on the game first.

I’ve also heard this echoed from other sources, sometimes even with a statement that game makers don’t want their games to be evaluated, lest their ineffectiveness be revealed. While this could be true in some cases, I think this understates the role of funding in what is produced.

Developers actually have little recourse when funding falls short: too often there isn’t enough money for a proper pre-production phase that meaningfully integrates prior research and plans for assessment, and too often there is no funding at all for post-ship follow-up to evaluate the game’s effectiveness. In fact, the lack of funding to create games that are both of high entertainment quality and also incorporate research, teacher tools, assessment, and so on is, I think, an elephant in the report. There’s really no way around the fact that for two games of roughly the same size and production value, a transformational game has more moving parts, more stakeholders, and more metrics for success to hit than a game developed purely for entertainment.

And yet I still see RFPs for these kinds of games that would barely cover the costs of cloning an existing game. How do we talk about that? Can we talk about it?