How can you measure behaviour?

And therein lies the rub. No matter how well-intentioned the indicators we've set up are, the people on the shop floor will find a way to circumvent them. In fact, we have found the clients we've worked with to be quite ingenious when it comes to finding ways to "beat the system." The key to achieving results and sustaining the process is to combine process indicators with behavioral indicators.

We have found it extremely beneficial to focus on behaviors as part of any initiative. Where in the past we talked about "Best Practices," we now talk specifically about "Best Behaviors." If we're really on the ball, we'll develop not only results (lagging) indicators but also process (leading) indicators.

Tied into other systems, these provide quantitative evidence of success or the lack thereof. But what happens when the pressure is reduced? Often organizations revert to the old and comfortable ways, or we find that the quantitative evidence has been creatively dealt with and results aren't what we think or wish.

We have discovered that it is not enough to just manage the numbers. What is of the most value is to develop, along with the process itself, a list of behaviors we want the organization to exhibit.

Then we develop behavioral metrics that are aligned with the desired behaviors. After process installation, or hard-wiring, we program the organization by coaching and facilitating toward those desired behaviors, and then provide qualitative measures.

For the purposes of change management, the human psyche can be broken down into three main elements. Changing beliefs, knowledge, and vision is the intellectual or cognitive component.

Changing what is done, how it is done, and what is gained is the action component. How we respond to the success, failure, or stress of the endeavor is the emotional aspect, and it is not one to be ignored.

All three elements are interdependent. It is important to assess current behaviors and beliefs when establishing baseline "as is" metrics and indicators.

An example of this is the typical belief that "we are heroes if we drop everything to correct breakdowns." There is a rush, a sense of pride and accomplishment. In most Work Management process improvements, our desire is to change this reactive belief to one that stresses zero breakdowns and planned maintenance.

This leads to more profitability for the company and pays off for the individual by maintaining employment, providing a different level of satisfaction, and removing the chaos from the day.

The new belief states that responding to breakdowns means the process has failed, and that, if not corrected, this could lead to the demise of the company. Interviewing all levels of the workforce to find out what they believe and how they go about their jobs is important for establishing baselines.

This is used to identify how far the organization has moved once the process improvement begins. It can also be used to establish scorecard "red light" behaviors. It is extremely important that, prior to commencing any installation or implementation activities, the new desired behaviors are identified. Can you think of some?

Moreover, few studies conduct follow-up measurements, so the durability of any immediate spillover effects is unknown. There has also been a reliance on correlational or longitudinal designs, which are unable to shed light on causal processes; and within longitudinal designs, approaches differ in how to detect spillover (Capstick et al.).

Finally, there have been few attempts to bring together quantitative and qualitative approaches, which would provide complementary insights and address the respective weaknesses of each (Creswell). In the following section, we describe how spillover should be measured in experimental and non-experimental approaches that seek to build on this literature and address limitations in the methods used to date.

We now turn from our observations of previous spillover research to a discussion of how we propose spillover research should ideally be conducted in order to reliably detect any spillover effects and expose the mechanisms through which they may operate, drawing on best practice in research design and reflecting principles of transparency and validity. Rigorously designing and implementing randomized controlled experiments allows researchers to obtain an unbiased estimate of the average treatment effect of a behavioral intervention.

Because of sample selection bias, it is only by randomly assigning subjects to a treatment or to a control group that researchers can identify the causal effect of a behavioral intervention on an observed outcome (Heckman; Burtless; Angrist and Pischke; List; Gerber and Green). In practice, a variety of different randomized controlled experiments is available to researchers interested in testing behavioral spillovers. It is useful to refer here to the influential taxonomy of experiments in the social sciences originally proposed by Harrison and List: conventional lab experiments involve student subjects, abstract framing, a lab context, and a set of imposed rules; artefactual field experiments depart from conventional lab experiments in that they involve non-student samples; framed field experiments add to artefactual field experiments a field context in the commodity, stakes, task, or information; and, finally, natural field experiments depart from framed field experiments in that subjects undertake the tasks in their natural environment and do not know that they are taking part in an experiment.

The main idea behind natural field experiments is that the mere act of observation and measurement can alter what is being observed and measured. In key areas of interest for behavioral spillovers, such as health, the environment, or pro-social behavior, for instance, there are potential experimenter demand effects (i.e., subjects behaving in the way they believe the experimenter expects). Other, more recent, typologies of randomized controlled experiments are online experiments (Horton et al.) and lab-field experiments; the latter have been used to look at the unintended spillover effects of behavioral interventions in health (Dolan and Galizzi; Dolan et al.).

Investigating experimentally the occurrence of behavioral spillover requires a mixed, longitudinal experimental design combining elements of between- and within-subjects designs. Participants in an experiment are randomly allocated by the researcher either to a control group or to at least one behavioral intervention group.

In the control group (C), subjects are observed while they engage in a first behavior (behavior 1) and then in a different, subsequent behavior (behavior 2). Each of the two behaviors is operationally captured and reflected in at least one corresponding outcome variable: B1 and B2.

In practice, the choice of behavior 1 and behavior 2, as well as the choice of the corresponding outcome variables B1 and B2, is often based on theoretical expectations, previous literature, or qualitative evidence. It is also based on other, more pragmatic, considerations related, for example, to the ease of observing some specific positive or negative spillovers in the lab or the field, and to the ethical and logistical acceptability of changing some behaviors in an experimental setting.

In what follows, we illustrate the measurement of behavioral spillovers in the simplest possible case of one single behavioral intervention group, and one single outcome variable for both B1 and B2. The extension to more complex cases is straightforward.

In the treatment group (T), a behavioral intervention targeting behavior 1 is administered. The between-subjects design naturally allows the researcher to test the effects of the behavioral intervention on the targeted behavior 1 by directly comparing B1 across the control and treatment groups, that is, by comparing B1C versus B1T. The between-subjects design, together with the longitudinal dimension of the experiment, also allows the researcher to check whether the behavioral intervention has a ramification effect on the non-targeted behavior 2, thus affecting the outcome variable B2.

In particular, the outcome of behavior 2 in the control group (B2C) serves as the baseline level for the extent to which behavior 2 is affected by behavior 1 in the absence of any behavioral intervention targeting behavior 1 (B1C; see Table 3). Such an experimental design allows researchers to estimate not only the sign and the statistical significance of the behavioral spillover effects, but also their size. This, in turn, allows the researcher to conclude whether a behavioral intervention causes behavioral ramifications that are small or large compared to the directly targeted change in behavior.
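To make this concrete, here is a minimal sketch in Python of the two between-subjects comparisons described above. It uses simulated data; the group sizes, means, and standard deviations are illustrative assumptions, not values from any actual study.

```python
# Minimal sketch: treatment effect on B1 and spillover effect on B2.
# All numbers are illustrative assumptions for simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 100  # participants per group (illustrative)

# Simulated outcomes: the intervention shifts the targeted behavior 1
# and, via spillover, also shifts the non-targeted behavior 2.
B1_C = rng.normal(10.0, 2.0, n)   # behavior 1, control group (B1C)
B1_T = rng.normal(11.5, 2.0, n)   # behavior 1, treatment group (B1T)
B2_C = rng.normal(5.0, 1.5, n)    # behavior 2, control group (B2C)
B2_T = rng.normal(5.6, 1.5, n)    # behavior 2, treatment group (B2T)

def cohens_d(a, b):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (b.mean() - a.mean()) / pooled_sd

# Treatment effect on the targeted behavior: B1C versus B1T.
t1, p1 = stats.ttest_ind(B1_T, B1_C)
# Spillover effect on the non-targeted behavior: B2C versus B2T.
t2, p2 = stats.ttest_ind(B2_T, B2_C)

print(f"B1 (targeted):  t = {t1:.2f}, p = {p1:.4f}, d = {cohens_d(B1_C, B1_T):.2f}")
print(f"B2 (spillover): t = {t2:.2f}, p = {p2:.4f}, d = {cohens_d(B2_C, B2_T):.2f}")
```

The second comparison mirrors the spillover test: a significant difference on the non-targeted outcome B2 across groups, together with its effect size, is what the design is built to detect.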

Analogous reasoning applies in the case of permitting or purging behavioral spillovers. Two further considerations are in order here. First, the above-described definition and framework for measuring behavioral spillovers in an experimental setting is sufficiently general and comprehensive to nest, as a special case, the situation where the behavioral intervention consists of behavior 1 itself.

In such a case, in fact, the behavioral intervention in the treatment group merely consists of exposing subjects to behavior 1.

In the control group, on the other hand, subjects go through behavior 2 without having previously been exposed to behavior 1. Second, the decision about the timeframe is crucial for the measurement of behavioral spillovers.

Following subjects over longer timeframes naturally implies that spillover effects are more likely to be detected (Poortinga et al.). Considering a substantially long timeframe (ideally a few weeks or even months after the end of the intervention) is desirable in order to be able to assess the durability of spillover effects.

Considering even longer timeframes (ideally over 3 or 6 months after the end of the intervention) is particularly important to be able to detect the formation of new habits sustained over time (Lally et al.). In any case, in order to favor transparency and replicability of experimental results, it is crucial that researchers pre-specify the timeframe over which subjects are followed up. The timeframe, in fact, is a key point of the checklist that we propose below.

An analogous strategy can be used in non-experimental settings, along the lines of the difference-in-differences empirical approach. In the standard difference-in-differences approach, two comparable areas are observed before and after a natural event or policy intervention that occurs in only one of them, and the effect on behavior 1 is identified by comparing the change in B1 over time across the two areas. In principle, an analogous comparison can be made considering the outcome variable of behavior 2 (B2, instead of B1) to see whether the natural event also has ramifications for a different, subsequent behavior, above and beyond the initial change in behavior 1.

Analogous considerations to the ones described above can be made here concerning the sign, significance, and size of the behavioral spillovers in a non-experimental setting. In principle, given two such locations, the researcher can compare not only the change over time of the outcome variable for the domain directly involved in the phenomenon or originally targeted by the intervention (B1), but also the change over time of the outcome variable for the non-targeted domain (B2).
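As an illustration, the sketch below estimates such a spillover effect with a standard difference-in-differences regression in Python. The data are simulated, and the column names, effect sizes, and sample sizes are illustrative assumptions rather than an actual dataset.

```python
# Minimal difference-in-differences sketch for the non-experimental case.
# The layout (columns area, post, B2) and all effects are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 200  # observations per area and period (illustrative)

frames = []
for area in (0, 1):        # area 1 experiences the natural event/intervention
    for post in (0, 1):    # before (0) versus after (1) the event
        base = 5.0 + 0.5 * area + 0.3 * post               # area and time effects
        spill = 0.8 if (area == 1 and post == 1) else 0.0  # spillover on B2
        frames.append(pd.DataFrame({
            "area": area,
            "post": post,
            "B2": rng.normal(base + spill, 1.0, n),
        }))
df = pd.concat(frames, ignore_index=True)

# The coefficient on the interaction term area:post is the
# difference-in-differences estimate of the effect on behavior 2.
model = smf.ols("B2 ~ area + post + area:post", data=df).fit()
print(model.params)
```

Running the same regression with B1 as the dependent variable gives the directly targeted effect, so the two coefficients can be compared in sign and size, just as in the experimental design.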

A different, but potentially complementary, approach to studying spillover involves using qualitative methods, such as interviews analyzed thematically (e.g., Verfuerth et al.). As such, qualitative methods provide valuable insight in their own right into spillover phenomena, but they can also be combined with quantitative approaches in mixed-methods designs to address quantitative limitations (Verfuerth and Gregory-Smith). Various approaches can be used to ensure the quality of qualitative data, such as member validation (i.e., checking interpretations with the participants themselves).

Others have noted that the diversity of qualitative methods requires a range of criteria for assessing quality and validity (Reicher), but most agree at least that transparency and consistency are key (Braun and Clarke). The importance of being systematic is therefore a criterion of quality shared by both quantitative and qualitative methods.

A growing literature advocates the use of mixed-methods approaches in order to triangulate and provide complementary insights. Despite associations of qualitative and quantitative methods with divergent epistemological and ontological paradigms (Blaikie), this should not imply that qualitative and quantitative methods are essentially incommensurate (Bryman). Rather, the distinction between particular qualitative and quantitative methods can be understood as primarily technical, and not necessarily philosophical.

Qualitative and quantitative methods offer different insights into spillover, and each is better suited to answering different types of research question (e.g., How is the development of identity and practices experienced over time and across contexts? What causes and mediates spillover?). Furthermore, using multiple methods allows interesting lines of inquiry exposed through one method to be explored further through another (Whitmarsh). The distinct challenges of researching spillover imply that both qualitative and quantitative approaches are warranted to address different facets of the problem.

Mixed-methods designs may be sequential or concurrent, or both (Creswell). This might take the form of interviews with a sub-sample of experimental participants, or one or more open-ended questions in a post-intervention survey. Where spillover is detected through quantitative experimental methods, qualitative data may help explain why this effect has occurred, and how it has been subjectively perceived and experienced. In the event that spillover is not detected via the experimental methods outlined above, qualitative methods may explain why not, or they may expose other, unquantified spillover effects.

Qualitative, quantitative, and experimental methods should thus be seen as complementary, rather than substitute, empirical methods to explore and assess behavioral spillovers.

So far, there exist few mixed-methods studies of spillover, but those that have been undertaken appear to demonstrate that a mixed methodology can elucidate multiple aspects of spillover processes and experiences (Barr et al.). Exploring and detecting behavioral spillovers is a research and policy task that should be undertaken using a systematic and transparent approach, in the same spirit as, and closely in line with, the recent best practices favoring and advocating systematization and transparency in the psychological and behavioral sciences (Ioannidis; Higgins and Green; Simmons et al.).

In the previous section, we outlined how this might be achieved using different research designs. Abstracting from these exemplar designs, here we propose a checklist of points that should be explicitly stated and addressed by the researcher prior to undertaking the experimental or empirical analysis. The 20-item checklist is in line with, and in the same spirit as, other checklists designed to systematically assess the methodological quality of prospective studies, for example by the Cochrane Collaboration (Higgins and Green). It is also in line with other, more general checklists guiding researchers through the pre-registration of studies and pre-analysis plans.

The website will also include a data template where data from deposited studies could be shared, collated, and combined in order to conduct collaborative systematic reviews and meta-analyses of the literature. The 20 questions of the checklist are below.

In what follows, we briefly illustrate each question with a real case study, the recent study by Xu et al. What are the setting and population of interest? Is this an experimental or a non-experimental study? If this is a non-experimental quantitative study, what is the empirical identification strategy (e.g., difference-in-differences)? If this is a quantitative study, what is the control group? How have the behaviors been selected (e.g., on the basis of theory, previous literature, or qualitative evidence)? What is the targeted behavior 1? What are the outcome variables for behavior 1 (i.e., B1)?

Please list them and briefly describe each outcome variable, indicating whether this is directly observed or self-reported behavior.

How many intervention groups are there? What are the behavioral interventions targeting behavior 1? Please list them and briefly describe each of them. What is the non-targeted behavior 2? What are the outcome variables for behavior 2 (i.e., B2)? If there are multiple outcome variables for behavior 2, does the study correct for multiple hypothesis testing? Please describe which correction is used. (In the Xu et al. case study, there is no explicit correction for multiple hypothesis testing.) What is the expected underlying motive linking behavior 1 and behavior 2?
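Where behavior 2 is captured by several outcome variables, a standard family-wise correction can be applied. A minimal sketch in Python follows; the raw p-values are purely illustrative.

```python
# Sketch: correcting for multiple hypothesis testing when behavior 2
# is captured by several outcome variables. P-values are illustrative.
from statsmodels.stats.multitest import multipletests

p_values = [0.012, 0.034, 0.049, 0.210]  # one test per B2 outcome variable

# Holm's step-down procedure controls the family-wise error rate and is
# uniformly more powerful than the plain Bonferroni correction.
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="holm")

for p_raw, p_adj, r in zip(p_values, p_adjusted, reject):
    print(f"raw p = {p_raw:.3f}, adjusted p = {p_adj:.3f}, reject H0: {r}")
```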

What is the expected time frame during which behavioral spillovers will be tested, and during which the durability of spillover and habit formation will be assessed? What is the expected participant attrition between behavior 1 and behavior 2?

However, attrition was not only high but also asymmetric across the different conditions. At the end of the experiment (3 months later), only a fraction of the participants originally recruited remained in the study: 80 in the EA group, 36 in the MI group, and 79 in the control group (all the participants in the mixed-condition group were excluded).

What is the expected direction of the changes in the outcome variables for behaviors 1 and 2 between the intervention groups and the control group (i.e., the expected sign of the treatment and spillover effects)? What are the expected sizes and standard errors of the changes in the outcome variables for behaviors 1 and 2 between the intervention groups and the control group?

What is the minimum expected sample size to test and detect the occurrence of behavioral spillover? If collecting qualitative data, how will the quality of these data be ensured and assessed (e.g., through member validation)? If using mixed-methods approaches, how will insights from different methods be combined?
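The minimum sample size question is typically answered with an a priori power analysis. Here is a minimal sketch in Python, where the expected spillover effect size (Cohen's d = 0.3), the significance level, and the desired power are illustrative assumptions that would come from the checklist answers above.

```python
# A priori power analysis for the minimum sample size per group.
# Effect size, alpha, and power are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.3,          # expected standardized spillover effect
    alpha=0.05,               # significance level
    power=0.8,                # desired statistical power
    alternative="two-sided",
)
print(f"Minimum sample size per group: {n_per_group:.0f}")
```

Given the attrition patterns discussed above, the number obtained this way would normally be inflated by the expected drop-out rate.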

We have critically reviewed the main methods used to measure behavioral spillovers to date, and discussed their methodological strengths and weaknesses. We have proposed a consensus mixed-methods approach which uses a longitudinal between-subjects design together with qualitative self-reports: participants are randomly assigned either to a treatment group, where a behavioral intervention targets behavior 1, or to a control group, where behavior 1 takes place absent any behavioral intervention.

In the spirit of the pre-analysis plan, we have also proposed a systematic checklist to guide researchers and policy-makers through the main stages and features of the study design in order to rigorously test and identify behavioral spillovers, and to ensure transparency, reproducibility, and meta-analysis of studies.

While ours is arguably the first methodological note on how to measure behavioral spillovers, it naturally has limitations. The main limitation is that our experimental and empirical identification strategy relies on our specific definition of behavioral spillover, i.e., the effect of an intervention targeting behavior 1 on a subsequent, non-targeted behavior 2. While we have suggested here that a similar approach to ours could, in principle, be extended to related notions of spillover, alternative definitions may call for different designs. Even applying our more specific definition of behavioral spillover, it would be possible to define alternative methodological checklists that, for example, apply solely quantitative or solely qualitative methods.

However, as we have argued, we believe there is benefit in combining methods as they can offer different insights or address different research questions relating to spillover. We would like to conclude by briefly mentioning a few other directions where we envisage promising methodological developments in the years to come. First, the current technological landscape naturally lends itself to a systematic measurement of behavioral spillovers in a variety of research and policy domains.

Today an unprecedented richness of longitudinal data is routinely collected at the individual level in the form of online surveys, apps, smartphones, internet of things (IoT) and mobile devices, smart cards and scan data, electronic administrative records, biomarkers, and other longitudinal panels.

This is creating, for the first time in history, an immense potential for following up individuals across different contexts and domains, and over time, for months, years, and even decades. On the one hand, the scope for systematically testing the occurrence of behavioral spillovers using rigorous empirical and experimental methods is therefore enormous. On the other hand, the endless wealth of research hypotheses, outcome variables, and data points makes it even more important for researchers to embrace the best practices discussed above in order to ensure transparency, openness, and reproducibility of science.

Second, a promising methodological line of research on behavioral spillover concerns the rigorous investigation of the factors mediating and moderating the occurrence of behavioral spillover, for example in terms of accessibility (Sintov et al.). Further work in this direction is likely to develop thanks also to the triangulation of different sources of data enabled by the above-described shift in the technological landscape.

All these future developments underline the importance of developing a collective discussion about clear and transparent methodological guidelines for measuring behavioral spillovers.

We hope that with the present article we have at least helped to start such a discussion. The time is ripe to foster a collaborative endeavor to systematically test behavioral spillovers across all research and policy domains, contexts, and settings.

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

Adair, G. The Hawthorne effect: a reconsideration of the methodological artefact.

Alpizar, F. Behavioral spillovers from targeted incentives: losses from excluded individuals can counter gains from those selected.

Effects of exclusion from a conservation policy: negative behavioral spillovers from targeted incentives.

Angelovski, A. Behavioral spillovers in local public good provision: an experimental study.

Angrist, J.

Austin, A.

Baca-Motes, K. Commitment and behavior change: evidence from the field.

Banerjee, R.

Common ways to measure behavior include the following.

Frequency: This measurement refers to the number of times a behavior occurs. Example: Diego hit Cecile 5 times.

Rate: Same as frequency, but within a specified time limit. Example: Diego hit Cecile 5 times in 2 minutes.

Duration: This measurement refers to the amount of time that someone engaged in a behavior.

Example: Evan had a tantrum for 42 minutes.

Fluency: This measurement refers to how quickly a learner can give responses within a period of time. Example: Randy read 15 sight words in 60 seconds.

Response latency: Latency refers to the amount of time after a specific stimulus has been given before the target behavior occurs.
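As a small illustration, the sketch below computes several of these measurements in Python from a hypothetical event log; the event names, timestamps, and time units are invented for the example.

```python
# Sketch: computing behavioral measurements from a hypothetical event log
# of (timestamp_in_seconds, event) tuples. All values are invented.
session = [
    (3.0, "prompt"),                    # specific stimulus is given
    (9.5, "response"),                  # target behavior occurs
    (15.0, "hit"), (22.0, "hit"), (48.0, "hit"),
    (60.0, "tantrum_start"), (102.0, "tantrum_end"),
]
observation_minutes = 2.0               # length of the observation window

def first_time(events, name):
    """Timestamp of the first occurrence of an event."""
    return next(t for t, e in events if e == name)

frequency = sum(1 for _, e in session if e == "hit")   # simple count
rate = frequency / observation_minutes                 # count per unit time
duration = first_time(session, "tantrum_end") - first_time(session, "tantrum_start")
latency = first_time(session, "response") - first_time(session, "prompt")

print(f"Frequency: {frequency} hits")
print(f"Rate: {rate:.1f} hits per minute")
print(f"Duration: {duration:.0f} seconds of tantrum")
print(f"Response latency: {latency:.1f} seconds")
```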

After two keynotes and multiple symposia, we were all looking forward to the reception. In the Manchester Museum, which honestly is a marvelous place to discover both the collection and the building itself, drinks, dinner, and discussions took up the rest of the evening.
