A group of New Zealand’s leading political pollsters, in consultation with other interested parties, has developed draft NZ Political Polling Guidelines.
The purpose is to ensure that AMRO and MRSNZ members conducting political polls, and media organisations publishing poll results, adhere to the highest “NZ appropriate” standards. The guidelines are a draft, and comments, questions and recommendations back to the working group are welcome.
This code seeks to document best practice guidelines for the conducting and reporting of political polls in New Zealand. It is proposed that the guidelines, once approved and accepted, will be binding on companies that are members of the Association of Market Research Organisations (AMRO) and on researchers that are members of the Market Research Society of New Zealand (MRSNZ).
The code only covers “political polls”, which for the purpose of the code are polls that relate to public votes such as national elections, local body elections and parliamentary referenda. This is in recognition of the fact that the reporting of polls may have an impact on how people vote.
For each issue, the code details, where appropriate:
- Best practice for the market researcher conducting the poll
- Best practice for the market researcher in reporting results
- Best practice for the media in publishing and commenting on results
It is intended that the code assist politicians, political scientists, journalists and members of the public to be confident that political polls do in fact largely represent the opinions of the wider public and are a guide as to likely voting behaviours if an election were to be held at that time.
The development of the guidelines is in recognition of the fact that reporting of polls can have an impact on how people vote. Inaccurate polls, or polls that are interpreted and reported inaccurately, can impact on voting attitudes and behaviours and thus influence the democratic process.
It behoves all members of the polling and media communities to treat polling responsibly. Reliable polls (rather than informal surveys) require a high degree of rigour. These guidelines are designed to ensure that rigour is understood and applied.
The guidelines have been developed in the light of the international guidelines of ESOMAR, the European Society for Opinion and Marketing Research. ESOMAR is the global authority on research best practice, and both AMRO and MRSNZ are ESOMAR members. It is recommended that those interested also read the ESOMAR/WAPOR guide to opinion polls and published surveys.
NB: The term “must” indicates a requirement, while the term “should” indicates recommended best practice.
| Issue | Conducting the poll | Reporting results | Publishing the story |
|---|---|---|---|
| Size | A minimum sample size of 500 is required for nationwide polls. | The report must include the sample size, and the sample size of “decided” voters. | The story should include the sample size. |
| Sampling Method | Should be either “random” or “quota”. | The report must disclose the sampling method. | The story should include the sampling method. |
| Response Rates | Researchers should aim to maximise response rates by conducting multiple call-backs. | The report should disclose that multiple call-backs occurred. | |
| Representativeness | The sample should represent either those self-identified as eligible to vote or those likely to vote. | The report should disclose the population the sample represents. | The story should include the population the sample represents. |
| Likely Voters | Those who say they are unlikely to vote should be excluded from the analysis. | The report must exclude those unlikely to vote from the analysis of voting behaviour. | |
| Phone | When employing random probability sampling, both the household dialled and the respondent selected within the household should be random. When employing quota sampling, the household dialled should be randomly selected, but the person in each household may be selected to achieve specific quota requirements. | The report must disclose how a respondent is selected. | |
| Online | A panel member must not be asked to complete the same poll question more than once every six months. The final panel sample should reflect a true cross-section of eligible New Zealand voters, which may be achieved by screening or weighting. The poll should remain open for at least 72 hours. Researchers should try to minimise people signing up to a panel just to participate in political polls, as such self-selection can bias the result. The panel should be managed in line with the ESOMAR guideline for online research. | The report should disclose panel recruitment and makeup, and that it complies with the ESOMAR guideline for online research. | |
| Omnibus | If the political questions are part of a longer omnibus poll, they should be asked early on. | The report must disclose if the questions were part of an omnibus survey. | The story should disclose if the questions were part of an omnibus survey. |
| Question Order | It is recommended the principal voting behaviour question be asked before all other questions. | The report must disclose the order of questions asked and any political questions asked before the principal voting behaviour question. | The story should disclose any other questions which may have impacted the responses to the principal voting behaviour question. |
| Weighting Method | A random sample poll should ideally be weighted using an industry-accepted weighting methodology to correct for the probability of selection and/or non-response. | The report should confirm the sample was weighted. | |
| Weighting Variables | The sample should be weighted on gender and age at a minimum. | The report should disclose the variables the poll is weighted on. | The story should include that the sample is weighted. |
| Variables not to weight on | When weighting to correct for demographic non-response, the calculated sample weights should be based on known or estimated population characteristics (for example, from Statistics New Zealand or the Electoral Commission). Weighting should not be based on previous voting behaviour, which is subject to memory accuracy. | | |
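The weighting approach described above, correcting a sample to known population characteristics such as gender and age, can be sketched as simple cell weighting. The cells and target shares below are hypothetical; in practice the targets would come from Statistics New Zealand or Electoral Commission figures.

```python
# Sketch of cell weighting on gender and age bands. All figures are
# hypothetical; real targets come from official population statistics.
from collections import Counter

# Each respondent is recorded as a (gender, age_band) cell.
sample = (
    [("female", "18-39")] * 180 + [("female", "40+")] * 140 +
    [("male", "18-39")] * 80 + [("male", "40+")] * 100
)

# Target population shares for the same cells (must sum to 1.0).
targets = {
    ("female", "18-39"): 0.24, ("female", "40+"): 0.27,
    ("male", "18-39"): 0.23, ("male", "40+"): 0.26,
}

n = len(sample)
counts = Counter(sample)

# Weight for a cell = population share / sample share, so that the
# weighted sample matches the population on these variables.
weights = {cell: targets[cell] / (counts[cell] / n) for cell in targets}

# After weighting, each cell's weighted share equals its target.
weighted_share = {cell: counts[cell] * weights[cell] / n for cell in targets}
```

Respondents in under-represented cells get a weight above 1 and those in over-represented cells a weight below 1, which is the usual correction for demographic non-response.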
**Margin of Error**

| Issue | Conducting the poll | Reporting results | Publishing the story |
|---|---|---|---|
| Maximum Error | The maximum sampling margin of error must be 4.5 percentage points for national polls, at the 95% confidence level. | The report must disclose the maximum margin of error. | The story should include the maximum margin of error. |
| Maximum errors for breakdowns | | The report should disclose the sample size and maximum margin of error for demographic breakdowns. | Stories should avoid reporting breakdown results from very small samples. |
| Significance | | The report should highlight results that are statistically significant, including trend changes over several polls, not just changes from the previous poll. | Stories should focus on changes that are statistically significant. |
| Errors for results below 50% | | The report should include the maximum margin of error for results below 50%, such as for a party polling at 10%. | Media should be careful not to assign the maximum margin of error to low-polling parties. |
| Other Errors | Care should be taken to eliminate sources of error not associated with the sampling process. | It is acceptable to state the margin of error for a simple random sample, at the 95% confidence level, but other sampling errors should be reported on if deemed significant. | |
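For a simple random sample, the figures above follow from the standard margin-of-error formula. A short sketch (sample sizes hypothetical) shows why the maximum occurs at 50% support and why low-polling parties have a smaller margin:

```python
# 95% sampling margin of error for a proportion p with sample size n,
# assuming a simple random sample (z = 1.96 at the 95% level).
import math

def margin_of_error(p, n, z=1.96):
    """Margin of error in percentage points."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

# The margin is largest at p = 0.5: about 4.4 points for n = 500,
# which is why a minimum sample of 500 keeps national polls within
# the 4.5-point cap.
max_moe = margin_of_error(0.5, 500)

# A party polling at 10% has a noticeably smaller margin, so saying
# it is "below the margin of error" overstates its uncertainty.
small_party_moe = margin_of_error(0.10, 500)
```

The same function also shows why breakdowns are unreliable: a subgroup of 100 respondents has a maximum margin of roughly 10 points.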
| Issue | Conducting the poll | Reporting results | Publishing the story |
|---|---|---|---|
| Data Collection Dates | The final poll before an election should be conducted as close as possible to the reporting date. | The report must disclose the dates on which data collection occurred. | The story should disclose the dates on which data collection occurred. |
| Median Date | As more responses often occur earlier in the poll, the date the median response was collected should be calculated. | The median date of collection should be included in the report. | |
| Undecideds | The poll script should probe initially undecided voters as to a lean or preference. | The report must state the number and percentage of “undecided” and “refused”. | The story should include the percentage that was undecided. |
| Trends | | Reports should highlight significant trends. | Stories should focus on significant trends, which may be not just between the current and last poll, but over a number of polls. |
| Seats | | Reports should include seat projections, and any assumptions used for electorate seats. | Stories should include analysis of not just individual party results, but also likely “bloc” results, as the highest-polling party may not be the most likely to form the Government. |
| Reports | | The agency should prepare a report suitable for publication with full results and methodology. | The online version of stories should link to the full report as quickly as practical. |
| Terminology | | The term “poll” should only be used for scientific polls conducted in accordance with international and national industry codes of practice. The term “survey” should be used for self-selecting samples such as text-in or website surveys. | |
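Seat projections for New Zealand are based on the Sainte-Laguë allocation formula used for Parliament. The sketch below is illustrative only: it ignores the 5% threshold, the electorate-seat exemption and overhang seats, and the party shares are hypothetical.

```python
# Simplified Sainte-Laguë seat allocation, as used for the New Zealand
# Parliament. This sketch ignores the 5% threshold, electorate-seat
# exemptions and overhang seats; the party shares are hypothetical.
import heapq

def sainte_lague(votes, seats=120):
    """Allocate seats by highest quotient with divisors 1, 3, 5, ..."""
    alloc = {party: 0 for party in votes}
    # Max-heap of quotients (heapq is a min-heap, so negate).
    heap = [(-share, party) for party, share in votes.items()]
    heapq.heapify(heap)
    for _ in range(seats):
        _, party = heapq.heappop(heap)
        alloc[party] += 1
        next_divisor = 2 * alloc[party] + 1
        heapq.heappush(heap, (-votes[party] / next_divisor, party))
    return alloc

# Hypothetical decided-vote shares for four parties.
projection = sainte_lague({"A": 45.0, "B": 40.0, "C": 8.0, "D": 7.0})
```

On these hypothetical shares the highest-polling party falls short of the 61 seats needed for a majority on its own, which is exactly why the guidelines recommend analysing likely “bloc” results rather than individual party totals.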
The guidelines also include a proposed easy reference guide for media:
- If possible, get a copy of the full poll report and do not rely on a media release.
- The story should include the name of the company which conducted the poll, and the client the poll was done for, and the dates it was done.
- The story should include, or make available, the sample size, sampling method, population sampled, if the sample is weighted, the maximum margin of error and the level of undecided voters.
- If you think any questions may have impacted the answers to the principal voting behaviour question, mention this in the story.
- Avoid reporting breakdown results from very small samples as they are unreliable.
- Try to focus on statistically significant changes, which may not just be from the last poll, but over a number of polls.
- Avoid the phrase “This party is below the margin of error” as results for low polling parties have a smaller margin of error than for higher polling parties.
- It can be useful to report on what the electoral results of a poll would be, in terms of likely parliamentary blocs, as the highest polling party will not necessarily be the Government.
- In your online story, include a link to the full poll results provided by the polling company or state when, and where, the report and methodology will be made available.
- Only use the term “poll” for scientific polls done in accordance with market research industry approved guidelines, and use “survey” for self-selecting surveys such as text or website surveys.
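The advice above on statistically significant changes can be made concrete with a rough two-proportion check between two independent polls. All poll figures here are hypothetical.

```python
# Rough two-proportion test of whether movement between two
# independent polls exceeds sampling noise at the 95% level.
# All poll figures are hypothetical.
import math

def change_is_significant(p1, n1, p2, n2, z=1.96):
    """True if the change from p1 to p2 is significant at 95%."""
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return abs(p2 - p1) > z * se

# A 2-point move between two n = 500 polls is within sampling noise,
# while a 7-point move is not.
noise = change_is_significant(0.43, 500, 0.45, 500)       # False
real_shift = change_is_significant(0.40, 500, 0.47, 500)  # True
```

Because the uncertainty of a difference combines the error of both polls, a change needs to be larger than either poll's individual margin of error before it is worth reporting as a genuine shift.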
It is important to stress that these guidelines are a draft only, and the pollsters are keen to get feedback from interested people: specifically, suggestions for other possible requirements, amendments to the proposed guidelines, or even the removal of a specific point.
Please send any comments, questions and submissions on the guidelines to firstname.lastname@example.org.
The closing date of 1 March listed in the document is incorrect; feedback is still welcome, and public consultation meetings are planned for May in Auckland and Wellington.
Note that I was a member of the working group that developed the guidelines, as were representatives from UMR, Ipsos, Digipoll, Reid Research and Colmar Brunton, and the MRSNZ President and AMRO Executive Director.