Neat! Our report was just featured on the “What We’re Learning” section of the David and Lucile Packard Foundation’s website. We’re in their focus area on “Impact, Theory and Practice.” We’re proud to have Packard funding this work!
What if much of the fragmentation discussed in our first report comes from a few hidden assumptions held by project leaders and funders in our field? Below we identify one such “risky assumption” that may impact several areas, as well as an idea for reframing.
Look for a revised report in the coming weeks. The ideas for these “assumption” posts come in large part from the feedback and ideas we have received in recent months from the community. Thank you all.
Risky Assumption #3: “The Logic Model is Obvious.”
It is not uncommon for game projects to launch without publicly declaring how they expect impact to come about. That’s understandable — it is pretty easy to describe a vision for the outcome, but much harder to explain the causal logic that leads to success. We can describe the gap as a missing or underdeveloped logic model.
(For those new to the nonprofit sector, logic models are used by organizations to plan and account for their impact, and are often spelled out when organizations dive into strategic planning.)
Particular danger comes when design teams consider their model “obvious.” In practice, that often means the “logic” is only descriptive, without causal claims. For example, “the players will learn math through dominoes” is a start, because it implies a causal factor (dominoes). However, it does not specify how playing dominoes actually leads to math skills. To do that, you might say that “math is deeply learned through practice, and dominoes forces players to practice basic math (especially dividing by five).” More radically, you might also say that “playing dominoes in teams can create a ‘need to know’ that catalyzes much faster acquisition of math skills like division, including by showing players the social benefits of being skilled at dividing by five.”
What are the benefits?
- Unexpectedly, articulating your logic can be wildly generative. Even simple models lead to new ideas, including new ideas for optimizing the design, adding wraparound services, and tracking impact.
- For the field, there will be fewer misunderstandings among stakeholders. That’s because all games have multiple pathways to impact; in other words, they’re complex! Unless the logic is explicit, each stakeholder may assume a different pathway. (In terms of the report’s main claims, better logic models can reduce the fragmentation described in claims #1 and #3.)
- Finally, when you specify the logic of your game, the whole field can understand the game better. Looking across games, the logic model is what allows us to “generalize” success and try to improve a whole set of games… categorically!
Fortunately, anyone can articulate the logic model with a bit of effort. Simply state “what caused what” (or take your best guess!). Be brave. Making your logic public can feel exposed, like going out on a limb, but it also shows a deeper kind of confidence. When a game is just being released, it is tempting to keep your cards close, but there are deep benefits to the field (and the game!) in proactive transparency.
…positive reframing: Articulate HOW your impact is happening (be transparent, be brave, reveal your logic model!)
Sound useful? Let us know what you think!
—–
Other assumption posts include: #1 (design as separate from research), #2 (delay the ‘research design’), #4 (innovation is about game types – forthcoming), and #5 (there is one way to scale – forthcoming).
(This post was written by Benjamin Stokes, Gerad O’Shea, and Aubrey Hill.)
What if much of the fragmentation discussed in our first report comes from a few hidden assumptions held by project leaders and funders in our field? Below we identify one such “risky assumption” that may impact several areas, as well as an idea for reframing.
Look for a revised report in the coming weeks. The ideas for these “assumption” posts come in large part from the feedback and ideas we have received in recent months from the community. Thank you all.
Risky Assumption #2: “When Funding is Scarce, Delay the ‘Research Design’ (and the Research).” In times of funding scarcity (i.e., always), difficult decisions about priorities have to be made. Scarcity raises questions about what can be separated, and what can be sequenced. While it may be appropriate to delay the execution of third-party research, we warn that it is dangerous to defer the “research design.”
Research design (aka the “blueprint” of the study) can be just as important and difficult as game design. But don’t confuse the research with the research design. The research design is a planning phase, part of the design process, and it requires no data. We can think of the research design as a kind of “creative problem solving” required to convince ourselves, and others, that there was impact, what kind of impact it was, and what evidence and logic support that claim.
Difficult decisions about the sequence of design and research still need to be made, even assuming the research design is determined early. One way to empower designers and producers is to make the strategy more visible, so that all stakeholders can understand how and why research is sequenced. For example, consider these diverging viewpoints (we are not endorsing any of these as right, but we do think all should be on the table):
- Delay all research. Only fund research when the product shows promise.
- Always allocate 5% to research. Such rigid formulas are not unusual for “program evaluation.”
- Either 0% or 500%. The cost of some research designs goes far beyond the development resources, leading some to take the attitude that anything less than full funding is a waste of resources.
- Scaling is the only question worth investing in for research.
- Quality is the only question worth investing in for research, since the market should handle everything else.
The greatest danger may come from repeatedly picking the same option without thought. As a counterbalance, our field might push each game project to declare how it sequences and frames design and research, thus necessitating some (public) reflection about which combination is best for its situation. Similarly, funders with a wide portfolio of games should be pushed to reflect on how they approach research across a set of games; for example, some projects might be primarily about answering a research question, while others extend established research and so might need fewer resources to establish that they are indeed aligning with a proven impact model.
…positive reframing: Always have a research design, but decide case-by-case on the investment to collect specific data.
Sound useful? Let us know what you think!
—–
Other assumption posts include: #1 (design as separate from research), #3 (the logic model is obvious – forthcoming), #4 (innovation is about game types – forthcoming), and #5 (there is one way to scale – forthcoming).
(This post was written by Benjamin Stokes and Gerad O’Shea.)
What if much of the fragmentation discussed in our first report comes from a few hidden assumptions held by project leaders and funders in our field? Below we identify one such “risky assumption” that may impact several areas, as well as an idea for reframing.
Look for a few additional thoughts as well as a revised report in the coming weeks. (These additional thoughts are partly due to some great feedback and ideas we have received in recent months from the community. Thank you all!)
Risky Assumption #1: “Research is Separate from Design (and is Conducted Externally).” In this provocation, we caution against separately framing design and research. In our view, a frame of “mutual iteration” will yield better impact for many projects, and simultaneously reduce fragmentation. In part, this requires a broader notion of “research” as overlapping with standard design practice.
With that in mind, we urge more respect for user testing as a kind of essential research, and thus more respect for designers as applied researchers, since all good games require play testing. This reality is surprisingly overlooked by designers and researchers alike. Ultimately, although there are understandable reasons for emphasizing and scrutinizing robust research design, we argue that placing research on a pedestal also comes with risks. Most importantly, impact could be lessened if research is delegated to external parties at the expense of deeper integration with design iteration.
Game designers may not realize their options — let alone their own role in “research.” In particular, when designers see game testing and usability as separate from “research,” they may fail to capture valuable data on impact. For example, if they only ask whether their players are “engaged” in a narrow sense, they may miss deeper engagement with the issues that brought the player to the game in the first place (e.g., to connect with others, to engage with a social issue, to have an excuse to make a difference). Of course, some research is impractical for making short-term decisions. But we argue that there is great value in empowering designers to optimize the game with the “research” model — i.e., the model for observing impact that might be used in a formal evaluation after the game has launched.
Additionally, we suspect that there is particular tactical value in mutual advice between designers and researchers. Specifically, designers can be asked to recommend how they might evaluate the game (summative); simultaneously, evaluators can be asked to recommend how they might improve the game (formative). Improving the linkage between formative and summative research (and formative and summative researchers) seems likely to reduce fragmentation and improve our field-level conversation. Along the way, we are helping to take the word “research” a notch down from its pedestal to be more accessible to all.
…positive reframing: Iterative Design Should Include “Mutual Iteration” with the Research Approach and “Paper Prototype” Evidence (they should co-evolve; good designers must think like researchers and vice versa)
Sound useful? Let us know what you think!
—–
(This post was written by Benjamin Stokes and Gerad O’Shea.)