Risky Assumption 4: To Scale Impact, Our Games Must Be MASS MEDIA

What if much of the fragmentation discussed in our first report comes from a few hidden assumptions held by project leaders and funders in our field? Below we identify one such “risky assumption” that may impact several areas, as well as an idea for reframing.

Look for a revised report in the coming weeks. The ideas for these “assumption” posts come in large part from the feedback and ideas we have received in recent months from the community. Thank you all.


Risky Assumption #4: “To Scale Impact, Our Games Must Be MASS MEDIA.”

Who doesn’t want scale? Surprisingly strong emotions often swirl around the topic of scaling. The problem is that assumptions about scaling can obscure alternative views of how change happens in the world.

The most common assumptions are true… sometimes.  Consider:

  • “We want impact… as mass media” (e.g., we need a massive audience — so without a million downloads, why bother?)
  • “We want scale… just like commercial videogames” (e.g., unless we can compete with commercial titles, how can a game have impact?)
  • “We want scale… by changing policy” (e.g., unless the game changes a law, who cares if it affected public opinion — because we need structural change, right?)

…none of these is “wrong” per se (though the policy emphasis may seem strange to many artists), but all three can obscure other possibilities.

Consider these alternative scaling approaches:

  1. Games can be used in a campaign that seeks to “shift the culture” of a community by triangulating several local interventions (e.g., to establish a “college-going culture” in a particular high school, see this FutureBound study). Such triangulation is hard to achieve nationally, and so is more often pursued in cities, states, or even within a particular school.
  2. Some game projects embrace local customization as an approach to achieving scale, despite the costs. These projects resist the idea that a single international implementation would be effective for local communities. Much like local parks and economic planning, these games approach scale as the “mass localization” of an approach, rather than its replication.

Both emphasize a level of granularity between individual players and mass media. Instead of starting at the individual level (the player) and scaling directly to the “mass audience” level, they insist on the importance of establishing a coherent context, such as a local culture.

Even traditional games can benefit from multiple models of scaling. Most simply, one game may actually have impact on multiple levels. For example, a game might set out to shift individual behavior, but discover it has shifted cultural norms as well. Simply to be good observers of our own games, we may need to stay actively open to secondary and unintended impact models.

More proactively, a team with enough capacity and care might deliberately combine several kinds of scale. For example, after launching a mass media game in the Android store, the team might also launch a series of community-based discussion groups. In fact, this may be the best strategy for ambitious goals like policy and social reform, which are never unidirectional and instead transform when society reaches a tipping point. Ultimately, our best games may be appropriated for additional goals and secondary campaigns, gathering momentum for reform like a snowball.

Overall, we try to stay agnostic and resist picking one “best” model for scale. Our recommendation is to beware the assumptions that come with singular notions of scale — especially seeking scale via a mass media approach.  Better games will come from making decisions about scale, rather than defaulting into an assumption.  As a field, we can help each other identify secondary scaling opportunities and listen more deeply when we make room for multiple pathways to societal change.

…positive reframing: “There are multiple ways to reach scale (not just as mass media) for many games, and definitely for the field as a whole.”

Sound useful? Let us know what you think!

—–
Other assumption posts include: #1 (design as separate from research), #2 (delay the ‘research design’), #3 (the logic model is obvious).

(This post was written by Benjamin Stokes, Gerad O’Shea, and Aubrey Hill.)

Featured on Packard Foundation site

Neat!  Our report was just featured on the “What We’re Learning” section of the David and Lucile Packard Foundation’s website.  We’re in their focus area on “Impact, Theory and Practice.”  We’re proud to have Packard funding this work!

Risky Assumption 3: The Logic Model is Obvious

What if much of the fragmentation discussed in our first report comes from a few hidden assumptions held by project leaders and funders in our field? Below we identify one such “risky assumption” that may impact several areas, as well as an idea for reframing.

Look for a revised report in the coming weeks. The ideas for these “assumption” posts come in large part from the feedback and ideas we have received in recent months from the community. Thank you all.


Risky Assumption #3: “The Logic Model is Obvious.”

It is not uncommon for game projects to launch without publicly declaring how they expect impact to come about. That’s understandable — it is pretty easy to describe a vision for the outcome, but much harder to explain the causal logic that leads to success. We can describe the gap as a missing or underdeveloped logic model.

(For those new to the nonprofit sector, logic models are used by organizations to plan and account for their impact, and are often spelled out when organizations dive into strategic planning.)

Particular danger comes if design teams consider their model “obvious.” What that often means in practice is that the “logic” is only descriptive — without causal claims. For example, “the players will learn math through Dominoes” is a start, because it implies a causal factor (Dominoes). However, it does not specify how playing dominoes actually leads to math skills. To do that, you might say that “math is deeply learned through practice, and Dominoes forces players to practice basic math (especially dividing by five).” More radically, you might also say that “playing Dominoes in teams can create a ‘need to know’ that catalyzes much faster acquisition of math skills like division — including by showing players the social benefits of being skilled at dividing by five.”
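To make the contrast concrete, here is a minimal sketch (ours, purely illustrative, not a standard evaluation template) of the dominoes example as structured data. The field names, like “mechanism” and “observable_evidence,” are hypothetical:

```python
# A hypothetical sketch of a logic model as structured data.
# Field names are illustrative, not from any standard evaluation framework.

descriptive_model = {
    "claim": "Players will learn math through dominoes.",
    "mechanism": None,  # descriptive only: no causal explanation yet
}

causal_model = {
    "claim": "Players will learn math through dominoes.",
    "mechanism": (
        "Math is deeply learned through practice, and dominoes forces "
        "players to practice basic math (especially dividing by five)."
    ),
    "secondary_mechanism": (
        "Playing in teams creates a 'need to know' that catalyzes faster "
        "skill acquisition, including by showing players the social "
        "benefits of being skilled at dividing by five."
    ),
    "observable_evidence": [
        "faster division-by-five over repeated play sessions",
        "players coaching teammates on scoring arithmetic",
    ],
}

def is_causal(model: dict) -> bool:
    """A logic model goes beyond description once it names a mechanism."""
    return bool(model.get("mechanism"))

assert not is_causal(descriptive_model)
assert is_causal(causal_model)
```

Even at this crude level, writing the model down forces a team to say which mechanism it is betting on, and what evidence would show that mechanism working.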

What are the benefits?

  • Unexpectedly, articulating your logic can be wildly generative.  Even simple models lead to new ideas — including new ideas about how to optimize the design, add wrap-around services, and track impact.
  • For the field, there will be fewer misunderstandings between stakeholders.  That’s because all games have multiple pathways to impact; in other words, they’re complex!  (In terms of the report’s main claims, we can reduce fragmentation in claims #1 and #3 with better logic models.)
  • Finally, by specifying the logic of a game, the whole field will understand the game better.  Looking across games, the logic model is what allows us to “generalize” success and try to improve a whole set of games… categorically!

Fortunately, anyone can articulate their logic model with a bit of effort. Simply state “what caused what” (or take your best guess!). Be brave. Making your logic public can feel exposed, like going out on a limb — but it also shows a deeper kind of confidence. When a game is just being released it is tempting to keep your cards close, but there are deep benefits to the field (and the game!) in proactive transparency.

…positive reframing: Articulate HOW your impact is happening (be transparent, be brave, reveal your logic model!)

Sound useful? Let us know what you think!

—–
Other assumption posts include: #1 (design as separate from research), #2 (delay the ‘research design’), #4 (innovation is about game types – forthcoming), and #5 (there is one way to scale – forthcoming).

(This post was written by Benjamin Stokes, Gerad O’Shea, and Aubrey Hill.)

Risky Assumption 2: When Funding is Scarce, Delay the ‘Research Design’

What if much of the fragmentation discussed in our first report comes from a few hidden assumptions held by project leaders and funders in our field? Below we identify one such “risky assumption” that may impact several areas, as well as an idea for reframing.

Look for a revised report in the coming weeks. The ideas for these “assumption” posts come in large part from the feedback and ideas we have received in recent months from the community. Thank you all.


Risky Assumption #2: “When Funding is Scarce, Delay the ‘Research Design’ — and the Research.”

In times of funding scarcity (i.e., always), difficult decisions about priorities have to be made. Scarcity raises questions about what can be separated, and what can be sequenced. While it may be appropriate to delay the execution of third-party research, we warn that it is dangerous to defer the “research design.”

Research design (aka the “blueprint” of the study) can be just as important and difficult as game design. But don’t confuse the research with the research design. The research design is a planning phase, and is part of the design process — it happens before any data is collected. We can think of the research design as the kind of “creative problem solving” required to convince ourselves — and others — that there was impact, what kind of impact, and based on what evidence and logic.
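As an illustration of keeping the blueprint while deferring the data, here is a minimal sketch of a research design declared up front as plain data. Everything in it (the field names, the example study, the status values) is a hypothetical illustration, not a prescribed template:

```python
# A hypothetical "research design on paper": the blueprint is written from
# day one, while actual data collection waits until funding allows.

research_design = {
    "question": "Does the game improve players' division skills?",
    "impact_logic": "practice during play builds division fluency",
    "measures": ["pre/post arithmetic quiz", "in-game scoring speed"],
    "comparison": "classrooms that receive the game one term later",
    "status": "designed",  # later: "funded", then "executed"
}

def ready_to_collect_data(design: dict) -> bool:
    # Executing the research is a separate, later decision from designing it.
    return design["status"] in ("funded", "executed")

print(ready_to_collect_data(research_design))  # False: designed, not yet funded
```

The point is not the format but the sequencing: the thinking is done early and cheaply, so that when funds arrive the study can begin without retrofitting the game around it.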

Difficult decisions about the sequence of design and research still need to be made, even assuming the research design is determined early.  One way to empower designers and producers is to make the strategy more visible, so that all stakeholders can understand how research is sequenced strategically. For example, consider these diverging viewpoints (we are not endorsing any of these as right, but do think all should be on the table):

  1. Delay all research. Only fund research when the product shows promise.
  2. Always allocate 5% to research. Such rigid formulas are not unusual for “program evaluation.”
  3. Either 0% or 500%. The cost of some research designs goes far beyond the development resources, leading some to take the attitude that anything less than full funding is a waste of resources.
  4. Scaling is the only question worth investing in for research.
  5. Quality is the only question worth investing in for research, since the market should handle everything else.

The greatest danger may come from repeatedly picking the same option without thought. To counterbalance, our field might push each game project to declare how it sequences and frames design and research, thus necessitating some (public) reflection about which combination is best for its situation. Similarly, funders with a wide portfolio of games should be pushed to reflect on how they approach research across a set of games; for example, some projects might be primarily about answering a research question, while others extend established research and so might need fewer resources to establish that they are indeed aligning with a proven impact model.

…positive reframing: Always have a research design, but decide case-by-case on the investment to collect specific data.


Sound useful? Let us know what you think!
—–
Other assumption posts include: #1 (design as separate from research), #3 (the logic model is obvious – forthcoming), #4 (innovation is about game types – forthcoming), and #5 (there is one way to scale – forthcoming).

(This post was written by Benjamin Stokes and Gerad O’Shea.)

Risky Assumption 1: Research is Separate from Design (and is Conducted Externally)

What if much of the fragmentation discussed in our first report comes from a few hidden assumptions held by project leaders and funders in our field? Below we identify one such “risky assumption” that may impact several areas, as well as an idea for reframing.

Look for a few additional thoughts as well as a revised report in the coming weeks. (These additional thoughts are partly due to some great feedback and ideas we have received in recent months from the community. Thank you all!)


Risky Assumption #1: “Research is Separate from Design (and is Conducted Externally).”

In this provocation, we caution against framing design and research separately. In our view, a frame of “mutual iteration” will yield better impact for many projects, and simultaneously reduce fragmentation. In part, this requires a broader notion of “research” as overlapping with standard design practice.

With that in mind, we urge more respect for user testing as a kind of essential research, and thus more respect for designers as applied researchers, since all good games require play testing. This reality is surprisingly overlooked, both by designers and by researchers. Ultimately, although there are understandable reasons for emphasizing and scrutinizing robust research design, we argue that placing research on a pedestal also comes with risks. Most importantly, impact can be lessened if research is delegated to external parties at the expense of deeper integration with design iteration.

Game designers may not realize their options — let alone their own role in “research.” In particular, when designers see game testing and usability as separate from “research,” they may fail to capture valuable data on impact. For example, if they only ask whether their players are “engaged” in a narrow sense, they may miss deeper engagement with the issues that brought the player to the game in the first place (e.g., to connect with others, to engage with a social issue, to have an excuse to make a difference). Of course, some research is impractical for making short-term decisions. But we argue that there is great value in empowering designers to optimize the game with the “research” model — i.e., the model for observing impact that might be used in a formal evaluation after the game has launched.
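As a sketch of what that might look like in practice, a playtest log could tag each observation against the same impact dimensions a formal evaluation might later use, making the untested dimensions visible. The dimension names below are hypothetical illustrations, not a validated framework:

```python
# A hypothetical playtest log that records observations against the impact
# dimensions a later formal evaluation might use.

from dataclasses import dataclass

IMPACT_DIMENSIONS = {
    "usability",          # can players operate the game at all?
    "narrow_engagement",  # fun, session length, retention
    "issue_engagement",   # connecting with the underlying social issue
    "social_connection",  # reaching out to or coordinating with others
}

@dataclass
class Observation:
    player_id: str
    dimension: str
    note: str

    def __post_init__(self) -> None:
        if self.dimension not in IMPACT_DIMENSIONS:
            raise ValueError(f"unknown dimension: {self.dimension}")

session = [
    Observation("p1", "narrow_engagement", "played three rounds back to back"),
    Observation("p1", "issue_engagement",
                "asked how the game's issue shows up in her own neighborhood"),
]

# Which impact dimensions has this playtest produced no evidence for?
covered = {obs.dimension for obs in session}
print(sorted(IMPACT_DIMENSIONS - covered))  # the blind spots of this session
```

Even informal notes, once tagged this way, let a design team notice when they have been asking only the “fun” questions and never the impact ones.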

Additionally, we suspect there is particular tactical value in mutual advice between designers and researchers. Specifically, designers can be asked to recommend how they might evaluate the game (summative); simultaneously, evaluators can be asked to recommend how they might improve the game (formative). Improving the linkage between formative and summative research (and between formative and summative researchers) seems likely to reduce fragmentation and improve our field-level conversation. Along the way, we help take the word “research” down a notch from its pedestal, making it more accessible to all.

…positive reframing: Iterative Design Should Include “Mutual Iteration” with the Research Approach and “Paper Prototype” Evidence (they should co-evolve; good designers must think like researchers and vice-versa)

Sound useful? Let us know what you think!
—–
(This post was written by Benjamin Stokes and Gerad O’Shea.)

Report response: Sabrina of Schell Games

This guest post by Sabrina Culyba is the first in a series of responses to our first report.

  – Editor

—–
I’m Sabrina Culyba, a game designer at Schell Games. I was excited to see the first Impact with Games report and the conversation it represents. The report team put out a call for feedback, and I’d like to share some from my own perspective to keep the conversation going.

Defining Impact
In a parallel to the report’s first claim, “Impact is defined too narrowly,” I was intrigued by the report’s discussion of games that create collective social change.

First some background: In my work at Schell Games, I’ve mostly approached impact from a player-centric perspective. At our studio, we call these games “Transformational Games” and approach our design as a means to transform the player, with the view that larger social change spreads from individual transformation. This isn’t to say that we don’t design for social or group interactions, but our focus has typically been how these interactions contribute to, or stem from, individual players.

So this report and further conversation with Ben Stokes challenged the “individual first” view a bit, opening up a different perspective for me about group transformation as its own thing separate from individual transformation. For me, group transformation is a new layer to creating change that I am eager to dig into more in the future.

Surfacing Forms of Evaluation
Claim #3, “Evaluation Methods are Inflexible,” really came to life for me recently during a thread on an educational games mailing list of which I am a member. The conversation started out as an attempt to collectively list the entities out there who are publishing assessments of learning games, whether based on formal research, teacher experience, factual stats, subjective review, or other means. Almost immediately the discussion evolved into a debate about what constituted valid evaluation of a game, including some pointed statements from individuals about what other people are doing wrong in this space. I wish I could say that we came to an insightful conclusion, but the reality was that there was a lot of cross-talk rather than collaborative problem solving. It’s clear there’s a lot of frustration out there, and I think this is one of the most important conversations to continue to push forward from this report. Personally, I agree with the sentiment in the report that we should have multiple lenses through which to evaluate the efficacy of a game. And I also think that one of those lenses should be rigorous research. Two of the most interesting relevant points to come out of that mailing list conversation were:

  1. Context of use matters: A great teacher can create an amazing learning experience out of a game whether or not it was designed well or with learning in mind. Similarly, a proven, effective game for impact used in the wrong context may fail to change anything. We need to talk about context when we talk about a game’s efficacy.
  2. Problems with Access & Timing: Without a clear way to find out what research or evaluation has been done on which games, teachers and others looking to use games for change are forging their own path or relying on word of mouth. They aren’t going to simply wait for a 3-5 year study (assuming there even is one on a game that fits their need). What tools can we provide to collect and surface useful short-term metrics such as anecdotes, expert review, or informal data?

A Missing Piece: Higher Costs (for transformation & evaluation)
Under “Anticipated Project Benefits for Game Designers & Makers,” the report says:

If the lack of evaluated games is any indication, a common scenario is to focus on creating the game and worry about evaluation once it is done (if at all).

The assumption implied here is that game makers do not see the importance of considering assessment early — that their approach is to focus on the game first.

I’ve also heard this echoed from other sources, sometimes even with a statement that game makers don’t want their games to be evaluated, lest their ineffectiveness be revealed. While this could be true in some cases, I think this understates the role of funding in what is produced.

Developers actually have little recourse when funding is insufficient: too often there are not enough funds for a proper pre-production that includes significant integration with prior research and planning for assessment, and too often there is no funding for post-ship followup to evaluate the game’s effectiveness. In fact, the lack of funding to create games that are both of high entertainment quality and also incorporate research, teacher tools, assessment, and so on is, I think, an elephant in the report. There’s really no way around the fact that for two games of around the same size and production value, a transformational game has more moving parts, more stakeholders, and more metrics for success to hit than a game developed purely for entertainment.

And yet I still see RFPs for these kinds of games that would barely cover the costs of cloning an existing game. How do we talk about that? Can we talk about it?

Others spreading the report

A few sample posts from other websites about our first report:

Soft-launch for Report #1

Highlights from our soft-launch of the report at the 12th annual Tribeca/G4C Festival are below.  They include:

  • a VIP breakfast to launch the report
  • our panel talk
  • table discussions to gather feedback

At a VIP breakfast on Day 1, we launched the report to a packed room of industry experts. In the picture below, Entertainment Software Association President Michael Gallagher talks with HH Prince Fahad Al Saud of Saudi Arabia (who also presented on his game Saudi Girls Revolution). Our presentation came from team members Nicole Walden and Benjamin Stokes.

Second, our big panel was at the Games & Media summit, at the intersection of film and games. We featured speakers who are “impact designers” from the worlds of documentary film, museum games, and game design education. The session particularly focused on a tricky balancing act: “Optimizing for Impact AND Creativity.” Here are two pictures:


(On the left: Benjamin Stokes from our advisory group; Katherine Isbister, Game Innovation Lab at NYU Polytechnic School of Engineering; Caty Borum Chattoo, Center for Media and Social Impact (CMSi) at American University; and Colleen Macklin, PETLab at Parsons, The New School for Design.)

Panel description:

“Can we reclaim evaluation to better empower artists, our audience, and marginalized voices? What tricks of impact design can filmmakers borrow from games and vice versa? This session taps experts in ‘impact design’ who are trying new ways to maximize impact. A key focus is on shifting the hidden power relations inherent in assessment, to develop approaches that increase creativity (not stifle it). Seeking to democratize assessment and optimize it as a tool for quality rather than judgment, the panel will highlight several ambitious assessments and provide tips for teams and the field.”

We also shared pieces of our report with attendees during two practitioner lunch discussions. We received some really interesting feedback from game designers, funders and researchers — but also art historians, game distributors, and educators. Much of this feedback will end up in future reports and on our blog (we’d share a picture, but were too busy eating to take any!).

We also announced a slew of collaborators (see the list on the report page) — and more are coming!  If your organization would also like to help spread the discussion of “games + impact” — including how our field is fragmented, and what can be done — let us know!


Draft report for Festival

Our first report on fragmentation will be in “open draft” mode, beginning April 22nd, 2015.  This is just in time for the Games for Change Festival.  Take a look!

Seeking your ideas in April and May, especially on ‘fragmentation’

Until mid-May, we are especially eager for ideas on what should be added to our “Fragmentation” report.  Are there types of fragmentation we’re missing?  What do you think of the categories?  Should we be more sympathetic to the inherent need for academic typologies to exclude some game types?

If you have ideas, please contact us!

Sincerely, the editorial team