Democracy 360: Methodology

What is Samara’s Democracy 360?

The Democracy 360 is Samara Canada’s made-in-Canada report card on the relationship between citizens and political leadership. It combines 23 quantifiable indicators across three areas: communication, participation and political leadership. The Democracy 360 allows Canadians to compare and assess their democracy over time. Samara plans to revisit the Democracy 360 every two years to measure improvement or decline; the second edition of the report is due out in 2017.


Where Did the Idea for the Democracy 360 Come From?

During Samara’s exit interviews in 2009, a former MP identified a gap: Canadians need a more systematic understanding of how well our democracy is working, one that does not rely on anecdote. The Democracy 360’s conceptual design, data collection and analysis were carried out over the following years.

Samara has three main objectives with the Democracy 360:

  1. Engage: Frame and provoke, in ever wider circles, a debate about how Canada’s politics can be made more relevant and responsive.
  2. Educate: Prompt Canadians to compare and assess progress on Canada’s democratic vitality with evidence in hand, looking beyond voter turnout.
  3. Enroll: Identify precise areas where change can be advanced, both by Samara and by others.


Why the Name “the Democracy 360”?

The circle echoed by the title’s “360” draws on several themes: to circle back and take stock, to provide a 360-degree scan, and to highlight a useful metaphor for the interdependence between citizens and their elected leaders (the “vicious circle” and the “virtuous circle”) when it comes to building a responsive democracy.

How Did the Research Unfold?

The design, data collection, analysis and scoring process has involved advice from academics (see Samara’s advisory committee), practitioners in the civic and political engagement space, and Samara’s community. A few of the methodological components and findings have also been presented at academic conferences by Samara researchers, and many elements of the Democracy 360 were tested through the research and release of Samara’s Democracy Reports from 2012 to 2014.

Though the relationship between citizens, MPs and political parties has always been at the core of the Democracy 360, how to organize this analysis generated a great deal of discussion. Initially, the Canadian Democratic Audit series inspired a focus on measuring democratic values like representativeness, inclusiveness and participation. Though these values underpin many of the 360 indicators, we decided to organize the Democracy 360 by three areas of activity: communication, participation and leadership.

What Does the Democracy 360 Measure and Not Measure?

Quality: The benefit of the Democracy 360’s evaluation lies in its breadth, not the depth of its indicators. For many indicators (“householders,” for example), measuring quality presents a challenge. While that level of analysis is important, it is beyond the scope of this broader project. The Democracy 360 therefore focuses on whether an activity did, or did not, happen, a choice made in order to establish a benchmark. We encourage future research to probe deeper within each measure (such as quality or frequency).

All democratic players: By focusing on elected political leadership, the Democracy 360 misses the full complexity of democracy, including the work of senators, public servants, political staff, journalists and the judiciary, a trade-off made to keep the scope of the project feasible. Samara also believes that the relationship between citizens and their representatives is at the heart of democracy, which is why other projects, such as the Samara MP Exit Interviews, have also probed the nature of this representative relationship in Canada.

Municipal and Provincial Political Leadership: The Democracy 360 does not systematically evaluate provincial and municipal political leadership in the way it does federal leadership. However, some measures of political participation and of Canadians’ views on politics are not specific to the federal arena. Admittedly, this is conceptually less tidy, but it also reflects how citizens think about politics, as observed in Samara’s focus group research: for many Canadians, politics as an activity is not neatly demarcated by three levels of government.

How Were the Indicators In the Democracy 360 Selected?

From a long list of potential indicators, five criteria were used to select those that measure communication, participation and leadership in Canada:

  1. Accuracy: Is the measure precise?
  2. Reliability: Is the measure a consistent capture of the activity?
  3. Feasibility: Given finite time and resources, can the data be collected and analyzed?
  4. Replicability: Can the measure be captured again in a similar fashion?
  5. Dynamism: Is the indicator’s change (improvement or decline) measurable?

Where Did the Democracy 360 Data Come From?

The Democracy 360 brings together a number of data sources, including Samara’s public opinion research and website content analyses, as well as data external to Samara, such as from the House of Commons and Elections Canada. The data in the 2014 Democracy 360 dates from 2011 to 2014. There are four general sources of data in the report card:

  1. 2014 Samara Citizens’ Survey

    Public opinion data in the Democracy 360 was drawn from the Samara Citizens’ Survey, which was conducted in English and French using an online sample of 2,406 Canadian residents over 18 years of age living in the ten provinces. Data was collected between December 12 and December 31, 2014. The survey has a credibility interval of ±1.99 percentage points, 19 times out of 20.
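As a rough check on the sample size, the reported credibility interval is close to what the conventional margin-of-error formula gives for 2,406 respondents. This is only an approximation, since a credibility interval for a non-probability online panel is not a classical margin of error:

```python
import math

# Approximate 95% interval for a proportion of 0.5 with n = 2,406 respondents.
# This mirrors the classical margin-of-error formula; Samara reports a
# "credibility interval" because the online panel is not a probability sample.
n = 2406
interval = 1.96 * math.sqrt(0.5 * 0.5 / n)
print(f"±{interval * 100:.1f} percentage points")  # ≈ ±2.0, consistent with the reported ±1.99
```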

    Responses were weighted to ensure they reflect a nationally representative sample of Canadians, taking into account gender, region, age, whether respondents were born inside or outside of Canada, whether respondents spoke English, French or another language at home, and self-reported voter turnout. Questions that asked about Canadians’ participation were limited to the previous 12 months. Data missing at random were imputed using the mi commands in Stata 12.

    Provincial breakdowns include only statistically significant information (p ≤ 0.10).

    Samara worked with Professors Peter Loewen (University of Toronto) and Daniel Rubenson (Ryerson University) to complete the data collection, cleaning, weighting and imputation.  The survey was conducted by Qualtrics.

    Please request the Samara 360 Survey Appendix for precise data manipulation, survey question wording, and unweighted frequencies [[email protected]].

  2. House of Commons Records

    Members’ Expenditures Report (April 1, 2013 – March 31, 2014)

    The 2013–2014 Members’ Expenditures Reports were used to determine the value of the householder indicator. The reported value reflects the percentage of MPs who spent money on householders during the reporting period, combining the funds reported under the Member’s Office Budget and Resources categories published by the House of Commons to determine the total amount spent on householders. The Government of Canada makes MPs’ expenditure data publicly available in XML; a Ruby script was used to transform the data to CSV format. The script can be found here. The total number of MPs included in the analysis was 312.
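The Ruby script itself is linked above. As a rough illustration of the same transformation, the Python sketch below converts an expenditures XML file to CSV; the element names used here ("member", "name", "householders") are placeholders, not the actual schema of the House of Commons file:

```python
import csv
import xml.etree.ElementTree as ET

# Illustrative only: "member", "name" and "householders" are placeholder tag
# names, not the real structure of the Members' Expenditures XML.
tree = ET.parse("members_expenditures.xml")
rows = [
    {
        "name": member.findtext("name", default=""),
        "householders": member.findtext("householders", default="0"),
    }
    for member in tree.getroot().iter("member")
]

with open("members_expenditures.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "householders"])
    writer.writeheader()
    writer.writerows(rows)
```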

    Diversity Data on Parliament (December 2014)

    Demographic information about MPs, including age, gender, place of birth and Indigenous status, was compiled from parl.gc.ca. Visible minority status is not formally reported by the House of Commons, so it was determined using MPs’ biographical pages on parl.gc.ca. The data was updated in December 2014.

    Using the demographic data, Samara created a Proportionality Index score for each group. This score compares a group’s representation in the House of Commons with its proportion of Canada’s population. Canadian population figures were drawn from Statistics Canada’s 2011 Census. A score of 100 means perfect parity between a group’s presence among MPs in the House and its share of the Canadian population.
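A minimal sketch of that calculation, using made-up shares purely for illustration:

```python
def proportionality_index(share_of_mps: float, share_of_population: float) -> float:
    """Return 100 when a group's share of MPs matches its share of the population."""
    return share_of_mps / share_of_population * 100

# Hypothetical group: 15% of the population but only 9% of MPs.
print(round(proportionality_index(0.09, 0.15), 1))  # 60.0
```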

  3. Publicly Accessible Data

    2014 Member of Parliament Websites

    The Member of Parliament website project analyzed 299 MPs’ websites against a 16-point checklist. Data was collected by Samara volunteers and staff from May to July 2014. Excluded from the analysis were the websites of five vacant ridings (by-elections were pending at the time of data collection) and the four party leaders.

    2013 Electoral District Association Websites

    The EDA website project analyzed the websites of riding associations, the local chapters of the national parties, against a 15-point checklist. In total, Samara researchers searched for 1,307 sites (308 for each national party and 75 for the Bloc Québécois). Data was collected by Samara staff and volunteers in August and September 2013.

    The analysis was redone in 2014. However, the summer of 2014 marked a time of transition for many EDAs as several riding boundaries shifted to reflect the introduction of 30 new ridings. As a result, Samara’s findings did not present a complete picture of EDA websites in Canada and the Democracy 360 uses the last complete year (2013).

    Members of Parliament on Social Media 2013

    Full Duplex’s analysis in the 2013 report “Peace, Order and Google Government” reports the number of MPs with social media accounts on Twitter, Facebook and YouTube. The data was collected using Sysomos Heartbeat (a media monitoring tool) in November and December 2013. Subsequent analysis was conducted using Heartbeat and Compass. The full report can be found here.

  4. Elections Canada

Elections Canada’s Estimation of Voter Turnout by Age Group and Gender at the 2011 Federal General Election reports turnout by age, gender and province. Elections Canada relies on its administrative data to generate these figures. To improve the accuracy of turnout figures, Elections Canada uses the number of Canadian citizens of voting age as the denominator in its calculations, rather than the number of registered voters, recognizing that the voter registration list may be incomplete.
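The effect of that choice is easiest to see with made-up figures: dividing votes cast by all voting-age citizens yields a lower, but more accurate, turnout estimate than dividing by registered voters when the register is incomplete.

```python
# Hypothetical figures, for illustration only.
votes_cast = 14_800_000
registered_voters = 24_000_000      # the register may miss eligible citizens
voting_age_citizens = 25_500_000    # Elections Canada's preferred denominator

print(f"Turnout vs. register: {votes_cast / registered_voters:.1%}")   # 61.7%
print(f"Turnout vs. citizens: {votes_cast / voting_age_citizens:.1%}") # 58.0%
```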

Why Create a Composite Score?

Bringing indicators together into an overall composite value, index or “score” is an inherently subjective activity (read Malcolm Gladwell on this point). Inevitably, there are a number of choices to be made: whether certain indicators should be given greater weight because they matter more, or whether all indicators should be treated the same. Seldom is there an obvious or “correct” way to combine such values that leads to complete agreement.

Despite the challenges associated with composite scores, they can still be worthwhile constructs because of the other advantages they offer, such as ease of comprehension and comparison. 

Comprehension: With over twenty indicators, each indicator on its own may mean little; in other words, it is difficult to see the overall picture with so many data points in play. Bringing the values together helps with this challenge. That said, combining them can also obscure differences among indicator values (especially large ones), which is why Samara published “The Democracy 360: The Numbers,” which shows the values for each of the 23 indicators.

Comparison: A composite score is a powerful tool for comparison. Some indicators may improve or decline in any given year, but these changes provide little insight into the overall picture. A composite score can reveal whether, on the whole, there is improvement or decline over time. As well, when the three areas (communication, participation and political leadership) each have a composite score, it is easier to see which areas are faring better than others.

How Was a Composite Score Generated?

Each of the indicators in the Democracy 360 is scored out of 100; the lower the score, the worse the result. While the scale allows for extremes (a perfect score of 100 or of zero), it is unlikely that some indicators will ever reach the upper or lower limits. Nevertheless, using a 100-point scale across all indicators provides consistency and reduces subjectivity, compared to creating a separate point scale adjusted to the particular variation in each indicator.

The structure of the Democracy 360 is nested and hierarchical. The three areas (communication, participation and leadership), as well as the indicators within each area, were weighted equally, so the weight applied to each indicator depends on the number of indicators in its area. For indicators with sub-indicators, the indicator’s value is the average of all its sub-indicators.

The Democracy 360’s weighting scheme is grounded in conceptual rather than statistical theory; in other words, it focuses on the three key areas of democracy that Samara has identified. Researchers at Samara did conduct a factor analysis of the indicators that rely on survey data alone, but since this required omitting the three other data sources, the results were treated as informative rather than directive.

Given the Democracy 360’s conceptual focus, Samara determined that each category would be treated as equally important. This meant the composite score applies an equal weighting scheme to each of the three areas and an equal weighting scheme within each area to each of its indicators. 

Scores for each of the three areas (communication, participation and political leadership) are calculated by averaging the scores of the area’s indicators. The three area scores (each weighted equally) are then combined to create a total score out of 100.

Communication + Participation + Leadership = Democracy 360

Score out of 33.33 + Score out of 33.33 + Score out of 33.33 = Democracy 360 (out of 100)

| Category | Number of Indicators | Sub-Indicators | Weight per Indicator | Sum (out of 33.33) | Percentage |
|---|---|---|---|---|---|
| Communication | 7 | 8 | 4.76 | 18.66 | 55.99% |
| Participation | 5 | 13 | 6.66 | 15.53 | 46.59% |
| Leadership | 9 | 17 | 3.70 | 14.10 | 42.29% |
| Total | 21 | 38 | -- | 48.29 | 48.29% |
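As a minimal sketch of the arithmetic described above: each indicator is the average of its sub-indicators, each area score is the equally weighted average of its indicators, and the overall score is the average of the three area scores (the area values below are taken from the table above):

```python
from statistics import mean

def area_score(indicators):
    """An indicator is the average of its sub-indicators; an area score is the
    equally weighted average of its indicators (all on a 0-100 scale)."""
    return mean(mean(sub_values) for sub_values in indicators)

# Hypothetical area with two indicators, one of which has two sub-indicators.
print(round(area_score([[40.0, 60.0], [70.0]]), 1))  # 60.0

# 2014 area scores from the table above; the overall score is their average.
print(round(mean([55.99, 46.59, 42.29]), 2))  # 48.29
```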

Why Give a Letter Grade In the Democracy 360?

Letter grades were not part of the original conception of the Democracy 360, but advice from communications experts suggested that a report card component would be a valuable tool to help Canadians understand the data. This reflects a trade-off: letter grades add another layer of subjectivity to the data, but they make the research more accessible and useful to Canadians.

How Was a Letter Grade Assigned?   

Deciding on a letter grade scale was more complicated than simply applying the grading system familiar from most school settings (e.g. A above 80%, F below 50%). Communications advice suggested that an overly negative report card (with several Fs), year over year, would be discouraging and would risk further alienating Canadians from the political arena and political participation if they see no hope for improvement. The advice also suggested the report card would be less effective as a communications tool if the grades were unlikely to change year over year. Given the mix of indicators in the Democracy 360, it is reasonable to anticipate small shifts of improvement or decline, but not large shifts (that is, 10 percentage points or more) in two years’ time.

This meant Samara needed a letter grade scale in which (1) an F would fall below the commonly used 50% threshold and (2) each letter grade would cover a fairly narrow numerical range, so that the scale would be more sensitive to change.

To design this grading system, Samara’s researchers first sought outside input from a small group of experts, practitioners and engaged citizens in the democratic space to help determine where the Democracy 360 scores are now in relation to where they could be if Canada’s democracy were stronger in 10 years’ time. Specifically, each provided their opinion of what a “great” score for each indicator would look like in roughly 10 years. The group did not have access to the current Democracy 360 values, though they were provided some historical and comparative data as reference points.

What Is a “Good” Grade?

Indicator values provided by the outside consultants were combined using the same weighting scheme as the composite scores. Overall, their input suggested a goalpost for the Democracy 360 of 60%, which is 12 percentage points higher than the 2014 score of 48%.

The goalposts for the three areas (communication, participation and leadership) helped to establish the upper limit for the letter grade scale. For example, the 70% goalpost for communication would correspond to the highest grade on this scale (A+).

Determining the rest of the scale values was inherently arbitrary, but was driven by a desire to have each letter grade cover about the same numerical range (that is, two to three points).

2015 Grading Scale

| Letter grade | Scale |
|---|---|
| A+ | 65 or higher |
| A | 64–63 |
| A- | 62–60 |
| B+ | 59–58 |
| B | 57–55 |
| B- | 54–53 |
| C+ | 52–50 |
| C | 49–48 |
| C- | 47–45 |
| D+ | 44–43 |
| D | 42–40 |
| D- | 39–38 |
| F | 37 or lower |
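A sketch of how a numerical score maps onto this scale, with the grade floors taken from the table above (the published ranges are whole numbers, so fractional scores are graded by the highest floor they clear):

```python
# (grade, minimum score) pairs taken from the 2015 grading scale above.
GRADE_FLOORS = [
    ("A+", 65), ("A", 63), ("A-", 60), ("B+", 58), ("B", 55), ("B-", 53),
    ("C+", 50), ("C", 48), ("C-", 45), ("D+", 43), ("D", 40), ("D-", 38),
]

def letter_grade(score: float) -> str:
    """Map a 0-100 score to the 2015 Democracy 360 letter grade scale."""
    for grade, floor in GRADE_FLOORS:
        if score >= floor:
            return grade
    return "F"  # "37 or lower" in the published scale

print(letter_grade(46.59))  # C-: the 2014 participation score
print(letter_grade(48.28))  # C:  the 2014 overall score
```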

An alternative grading system, using a unique grading scale for each category, also held merit. This approach would have allowed each category to be graded against a different “great.” In practice, it would mean that an A+ in leadership equalled 47% whereas an A+ in communication equalled 70%. However, as the report card is primarily a communications tool, the desire to keep the design simple led to a single letter grade system for the three areas and the overall score. This means that a “great” report card does not necessarily include all A+s. But perhaps that is realistic: perfection need not be the enemy of improvement.

 

| Category | 2014 Score | 2014 Letter Grade | Goalpost | “Great” Letter Grade |
|---|---|---|---|---|
| Communication | 55.99% | B | 70.00% | A+ |
| Participation | 46.59% | C- | 62.00% | A- |
| Leadership | 42.29% | D | 47.00% | C- |
| Total | 48.28% | C | 60.00% | A- |