GDC talk: Training Designers to Collaborate with Researchers

How can we empower designers to increase “impact,” especially in collaborating with researchers?

Next week Benjamin Stokes will be presenting at the Game Developers Conference (GDC) in San Francisco. This talk extends our research on #GameImpact with G4C, and amazing conversations with pioneers in training designers for research collaborations, including Heather Desurvire, Mary Flanagan, and Jessica Hammer.

Training Designers to Collaborate with Researchers: Reframing, Scaffolding, and Roles

Feb. 27th, 2:10pm — See details.

Graduates in design are under increasing pressure to collaborate with social scientists, both to measure impact and to improve the product itself. How should game educators prepare them?

Reports show growing fragmentation between designers and researchers; silos are deepening as language is politicized. This session will analyze several models for training students to collaborate with researchers on “impact.”

Are entirely new courses needed on “game research methods,” beyond usability? How can students be empowered to stand up for good design, even as they share power with outside experts?

Takeaways: Attendees will take away several distinct strategies (for the classroom and beyond) for training designers to work with external researchers. Learn what several universities are doing, including different approaches to usability training, managing up, and reframing creativity for impact. Each strategy builds the capacity of students to collaborate with outsiders.

Repost: most typologies are “deep but not connected”

Since we are introducing a new typology, below we repost our brief analysis of others’ typologies from our first report.

CLAIM: Typologies are deep but not connected

What about overviews of the field by experts? Overviews gain their power by drawing boundaries, often using typologies. To achieve clarity and depth, typologies have to leave things out (usually for good reasons). Field leaders and academics create typologies to fill specific gaps in conceptualizing the field, declaring what counts, and elevating the most important categories. The value is often greatest for a specific target audience — such as a particular sector or discipline.

Yet there is a downside to growth. As overviews proliferate by sub-sector, the ordinary consumers of these resources find it hard to see the big picture. Assumptions are often hidden in the sector or discipline of origin. How do the various typologies relate? First we show how each typology must exclude things simply to accomplish its overview.

Consider a few different ways games have traditionally been organized:

Continue Reading >>

Proposing “impact types” across fields (typology part B)

We are quietly circulating a new way to visualize our field, emphasizing the range of social impact that is possible with games. We have been testing our visuals and concept at several conferences recently.

In brief: these “six types” of impact define a perimeter for what games can do. Big picture perspective on impact is an art, as we explained in our last post on the range of impact. Our approach is unusual, in part because it is:

  • visual,
  • useful (not just truthful), and
  • more inclusive.

Others have tried, but we found that they all seemed to be preaching to a disciplinary or academic audience. As we said in our first report, most typologies are deep but not connected. So our method begins with practitioner communities that have distinct tools for measuring impact.

Here is a visual explanation…

Feature #1: Professional communities are distinct


…did any of these surprise you? Our goal is to show the breadth of impact. For each type, there are distinct professionals trying to make sense of games — with their own tools. Look how different the impact indicators are too…

Feature #2: Indicators are very different


Feature #3: Simple language to hint at layers

Our titles are crazily short — typically just two words for each category. At best, they hint at more layers. Such minimalist language risks being simplistic. Of course things are more complex. But that’s the point: rather than maximum detail, we want to prioritize useful framing.

Framing is a delicate art. The language must be simple, yet valid and inviting to newcomers. Our goal is to orient new funders and producers, facilitate comparisons between disciplines, and reduce friction in the design process.

It is essential to hit the right level of granularity. We would rather err on the side of simplicity and accessibility. As our prior post argued, we need perspective across the range of impact. Our bias is toward accessibility, especially since academic publications of typologies tend to favor discipline-specific language.

Feature #4: Tested across groups (in the wild)

We refine our framing “in the wild” by trying it. Validity comes from observing the typology in action. Most prominently, Asi Burak (see our advisory) has been testing it in keynote presentations — especially to non-game audiences. He also deserves credit for much of the visual look and the case study work below.

Success is measured in translation across very different stakeholders — not just disciplines, but gaps in practice between designers, funders, and researchers.  (The dangers of fragmentation are spelled out in our first report.)

We are bravely trying this all in public, including with this blog post. Interviews at conferences, often following presentations, are a primary way to gather data (e.g., see our talk at DiGRA-FDG). Our methodology may shift to more controlled laboratory conditions soon, but we resist the temptation to dive into the laboratory prematurely. The pressing challenge is to develop a valid portfolio of impact categories. Once we establish first-order legitimacy with the design, there will be a variety of ways to test and refine the components and language.

Feature #5: Horizontal format — no “super theories”

It is tempting to create a sequence with these types, and declare one as “most important.” We resist this approach, based on the research in our first report, as divisive and generally arbitrary.

There is a bit of truth in the temptation to sequence. For example, broadcast media strategists often talk of a “funnel”: first raise awareness with a mass audience, after which a subset actually learns something, and hopefully some of them change their behavior or vote. That sequence is legitimate. But it is also legitimate to build some habits (like reading the news) that lead to learning later.

We similarly reject blanket statements (e.g., “policy change is the only real way to have an impact”) as exclusionary. Our goal is to have an open conversation about impact, and avoid the tendency to shut some people out prematurely.

So… how did we actually pick the categories?

  • visible success stories — with recognizable names that serve as shortcuts, decreasing the amount of explaining we need to do
  • research basis — with published evidence of impact
  • portfolio contrast — so that each is distinct

Beware some assumptions:

  • overlap is inevitable — every game does more than one of these, including the examples above; in fact, one of our goals is to help projects be brave enough to admit to multiple types of impact
  • this will evolve — first as new games take off, and as our understanding of what games can do continues to broaden; this typology is therefore a work in progress, and your feedback is welcome!

Feature #6: Framing with examples

Our goal is to frame the categories, and to stay concrete. Actual games are shown immediately (and discussed below). In contrast to most academic norms, we chose to be illustrative rather than verbosely precise.

Specific games include:



MORE ON THESE GAMES (preliminary images for now… more details to come as we get feedback):






(Slides based on originals from Asi Burak.)


Can you use this typology or image? A: YES! It is licensed for free re-use; we just ask that you give attribution to the Game Impact Project and link back to this site in some way.

Our next iteration will be coming soon. If you have ideas or reactions, let us know!

Posted on behalf of Benjamin Stokes, Aubrey Hill and Asi Burak

Gathering input at conferences

We have been having a great time getting feedback at events. Last month we were in NYC for the 13th annual Festival of Games for Change. We had some great conversations, including about the engagement-model visualizations from Dot Connector Studio.

Here is a snapshot of the “crowd-sourced” discussion we facilitated about barriers to impact:


The crowd presented a fascinating sample: about 10-20% were highly experienced, including designers and academics who had been attending similar events for years; another 30% were newcomers, hungry for perspective; and perhaps 50% were somewhere in between, including funders with deep experience in a content domain but eager for ways to be smarter and learn from other disciplines.

Asking the right questions is one of our primary goals. We found particular traction from these questions:

  • What barriers? Newcomers especially wanted a glimpse of what’s hard, and how to get started.
  • What language? Experts immediately wanted to debate the right language, and conflicting views surfaced at once.
  • Is this our field? Critical mass is necessary for engagement, and the right identity frames helped build a broad tent.

More analysis to come soon, after our talk with folks at DiGRA-FDG next week…

The range of impact (typology part A)

What types of impact are possible? If you can imagine a kind of social change, games are there. Thousands of possibilities exist — from tiny to massive, global to local, economic to sociological.

It is tempting to map them all:

But then we lose perspective. What we need is the big picture, especially to address a few of the most important needs in our field:

  1. To orient new funders and producers.  What options are possible?  Where to begin? The goal is to articulate the forest, not just trees with deep roots.
  2. To compare similar projects, i.e., based on similarities in logic models and theories of change, not content areas
  3. To reduce friction in the field, e.g., talking past each other

The dream is more like:


What’s the difference?

  1. Organizes with big categories
  2. Less overwhelming
  3. Hints at layers of depth

But what should the umbrella nodes and categories be? If we’re not proactive, one group may define impact for the field, leaving out a crucial breadth of perspectives and practices:

Any one discipline in isolation has blinders, leading to some classic problems:

  • Not inclusive of people (splinters the movement)
  • Not inclusive of impact types — overlooks key forms (e.g., sociology rarely invoked by psychologists)
  • Confuses truth with being useful; in fact, narrow truths often inhibit broader understanding, and broad understanding may require some deliberate and strategic ambiguity

Wanting to create a useful understanding of impact means this is a design challenge, with obstacles to overcome (like getting the framing right). Solutions to this design challenge will be successful if they:

  1. change how we design
  2. bring people together (cohesion)
  3. foster more useful debate (the right kind of disagreements)

What does it look like?  Watch for our next post…

Visualization ideas from Dot Connector Studio

We are collecting “tools to think with” for strategy with games and social impact.

Last week at G4C we met with Jessica Clark and Katie Donnelly and discovered some neat visualizations compiled by their Dot Connector Studio team, including some based on prior work from AIR, CMSI, TFI Interactive, etc., and reinterpreted (see their full overview).

Below are a few that we think might be useful for the #GameImpact project:

(1) Engagement Models — 10 different models with visuals.  See their full list (PDF).


(2) Partner Types — useful to resist simply “build it and they will come”!


(3) Roadmap for creating new projects — a great way to represent some of the “hooks and triggers” for strategic questions.  The focus here is on film, but much applies to games.  See also their full PDF.



Thanks again to Dot Connector Studio for sharing these!

Session at DiGRA-FDG: Increasing coherence in ‘impact’

Join us in Scotland on August 4th for the first joint convening of the Digital Games Research Association (DiGRA) and the Foundations of Digital Games conference (FDG):


Our session is: “Increasing coherence in ‘impact’: crossing disciplines and framing.” We’ll be presenting alongside talks by Jesper Juul and Hanli Geyser at 4pm.


In the past decade game design for “impact” has proliferated. Yet fragmentation is also growing between researchers, designers and funders in their ability to compare game proposals and communicate effectively about impact. Success in this endeavor may require new umbrella language to guide meaningful comparison and improve efficacy — especially across stakeholders. Fortunately, strategies for reducing friction and aligning design with research are surfacing.

In a report published last year by Games for Change and ETC Press (2015), we first revealed some of the hidden barriers in language and framing around “game impact.” Based on dozens of interviews with sector leaders (primarily in the United States), the report identified five areas of concern that increase confusion and undermine impact.

Findings to be discussed (and explored outside the United States) include:

  • the gulf between research and practice is growing as silos begin to deepen; some types of impact are persistently marginalized by disciplinary divides;
  • we need common language and new frames to compare impact across domains, especially with diverse stakeholders
  • for research to affect practice, special care is needed to avoid framing research in opposition to creativity.

In response to the report, more than 30 individuals submitted formal suggestions, including some leading game studios and academics. The feedback opened new areas of inquiry. In the past several months, we identified several “risky assumptions” that may drive fragmentation. Diagnosing assumptions is more delicate and subjective than documenting fragmentation; yet it yields more actionable insights.

Continue Reading >>

Panel at G4C Festival: Increasing Social Impact with Tools to Design Across Sectors

Join our session on June 24th at 4pm with Colleen Macklin, Asi Burak and Benjamin Stokes.

Title: Increasing Social Impact: Tools to Design Across Sectors


For two years, the GameImpact project investigated sources of failure in articulating game impact. Our first results (published last year) showed the fault lines — especially across sub-fields. Now we introduce and debate several “thinking tools” for executive producers, lead designers and funders. Here are strategies to avoid the holes between research and design, between impact and intention.

New chapter: Countering 4 Risky Assumptions

We are thrilled to announce that chapter two is now available for download. With some neat infographics, “Countering Four Risky Assumptions” (PDF, 1.2mb) describes concrete steps to reduce the fragmentation in our field. The full report on Impact with Games now includes this new chapter, alongside updates to our initial research based on ideas received over the past year.

Chapter 2: Countering Four Risky Assumptions Info Graphic

The launch for the new chapter officially takes place on April 18th at the G4C-Tribeca Games and Media Summit in NYC.


Each of the “risky assumptions” in this new chapter cuts across disciplines and design practices. They are sneaky, and seem to aggravate the field fragmentation that is described in the main report. But they can be countered. The evidence for these deep assumptions, though well attested by leaders in the field, is often indirect; therefore, this chapter offers careful provocations rather than definitive conclusions.

The publication also highlights some great ideas that emerged from our community, as part of revising the first report (see summary of changes).

We look forward to hearing your thoughts!

Revisions to Report #1 based on feedback

Based on your feedback, we are pleased to release an updated report for download. (There have been more than 5,000 downloads in the first year of our report, “Impact with Games: A Fragmented Field,” according to ETC Press, our publisher.)  This includes our new chapter 2: countering four risky assumptions (also published separately).

Many meaningful comments were received after our initial launch. (Thank you!) Some responses (both ours and others) we weighed and chose not to incorporate into the final report. These include the temptation to pick sides. For example, several people asked us to weigh in on “which genres have more impact.” While this may be a fascinating debate, there are benefits to deliberately stepping back, not picking a side — seeking to frame that conversation rather than join in specific debates.

We also tabled some suggested solutions that turned out to concern other problems (not fragmentation). These include:

  • Why the market is tough — this is true, and limited resources do aggravate fragmentation, but this report is more concerned with how we talk past each other about impact (the feedback we heard in this area was mostly about how hard it is to get paid doing good work, which is a significant but separate concern)
  • More games need evaluation — this may be true, but our focus is on research that advances the whole field, rather than pushing for evaluation for every single game
  • Several wanted to “measure engagement” as a proxy for multiple kinds of impact — an important strategy, but less a clear sign of fragmentation (and so less relevant for this report). We hope to investigate such opportunities in future reports.

We did take action to make several SUBSTANTIVE CHANGES in response to feedback. First, we introduced a new chapter on “Countering Four Risky Assumptions.” The idea is to identify some hidden causes, related to practical development processes, that might contribute to fragmentation across the board, and to propose counter approaches. The specific assumptions we challenged are detailed in the new chapter.

Second, we made our language more consistent throughout. For example, around “social impact games” vs. “impact”: we now refer to the set of possible games as “social impact games” (a broad umbrella, with the main criterion being that they were designed for an impact, or else are being studied as having an impact), and secondarily discuss “impact” in multiple forms.


Benjamin, Aubrey and Gerad (on behalf of the editorial team)