GOOD-PRACTICE GUIDELINES for health and social care surveys

EXECUTIVE SUMMARY: THE GUIDELINES

      1. Surveys are research tools: employ trained and experienced people to design and administer survey questionnaires.
      2. Be clear about the purpose of the survey: is it to be used for monitoring, to aid problem-solving, or to inform a planning process?
      3. Acknowledge ‘political’ motivations, and set them aside.
      4. Be clear about what you want to find out from the survey, and how you might use the findings.
      5. Allow for the possibility that respondents may want to use the questionnaire to tell you things that you haven’t thought of.
      6. Your survey should be focused and concise, and not used for a ‘fishing expedition’.
      7. Identify your target population.
      8. Take expert advice on the size and composition of the sample that you need in order to get reliable data, and on how to reach your sample.
      9. Take precautions when commissioning ‘easy-read’ and other versions of questionnaires.
      10. Decide what information you are going to present.
      11. Don’t formulate questions in ‘management-speak’, and do ask respondents to draw on their own experience.
      12. When asking a question that invites a judgment, offer an appropriate scale.
      13. Avoid complex questions.
      14. Avoid leading questions.
      15. Run a pilot version of your questionnaire.
      16. Give trained and experienced researchers the task of collating, analysing and reporting findings and drawing conclusions.

Preface
In today’s National Health Service, clinical commissioning groups (CCGs) are required, under their constitutions, to involve the public in planning, developing and considering proposals for changes. They are expected to encourage patients and the public to examine and give feedback on plans for commissioning services and contribute towards decision-making on any proposals for changes in commissioning arrangements that would have an impact on service delivery or the range of services provided. CCGs are enjoined to work in partnership with patients and the local community to secure the best care for them. The same expectations can be found in the contracts held by the frontline bodies that actually provide health care services. Local authorities too are under duties to consult and ‘engage’ with ‘stakeholders’, duties given extra force by the expectation that policies should be ‘evidence-based’.

In Cornwall, in 2016/17, the Kernow Clinical Commissioning Group, the Cornwall Partnership Foundation Trust and Cornwall Council (the local authority with responsibility for social services) alighted on the questionnaire-based survey as an appropriate means of securing such ‘engagement’. But these bodies seem not to have fully appreciated that designing a questionnaire to be completed by members of the public, and then analysing the responses received, require specialist knowledge, skills and understanding. In short, running a questionnaire-based survey is a social research project.

Social research is the process of systematically gathering, analysing and interpreting information about the behaviour, knowledge, beliefs, attitudes and values of human populations. Such research is intrinsically difficult: people are highly complex, and language is imprecise; human beliefs, values, attitudes and motivations are hard to pin down; there are invariably ethical issues to be faced; memory is fallible; research respondents are not always able or willing to report their feelings or behaviour accurately or honestly; and there are considerable statistical problems in drawing valid inferences about large and shifting human populations.[1]

Researchers have to grapple with these problems afresh in every project. So good research needs craft skills (of observation and reasoning) and intelligent creativity and ingenuity in the way those skills are applied, together with the ability to understand the way people think, feel and behave.

This note is intended as an initial step towards a set of guidelines that NHS managers – the managers of healthcare providers and commissioning bodies – and local authority managers can use to draw up a clear brief for researchers, and to assure themselves that, at the very least, basic errors in questionnaire design will be avoided.


Guideline 1. Surveys are research tools: employ trained and experienced people to design and administer survey questionnaires.

The task of designing a survey questionnaire should be undertaken by people who have the requisite training and experience, so they can bring social research skills and an appreciation of ethical issues to bear. It follows that designing a questionnaire – including writing the ‘preamble’, the information that accompanies it – is not a task to be assigned to a public relations specialist or back-office administrator. It requires trained and experienced researchers.

Bear in mind that recommendations based on the findings from a questionnaire that has been rigorously designed, carefully deployed and written up on the basis of wide experience are likely to be given more careful attention, carry more conviction and be given more weight than others that lack these qualities.

Guideline 2. Be clear about the purpose of the survey: is it to be used for monitoring, to aid problem-solving, or to inform a planning process?

Tasks of different kinds require different approaches. Is there an on-going situation, such as the day-to-day delivery of a service, that you want to monitor? Or is there a problem, a ‘what shall we do about X?’ issue, that needs to be resolved? Or are you in the process of creating a plan, e.g. to reorganize a service or create a new one? In that case you may want to use one questionnaire early in the exercise to discover opportunities and possible ‘elephant traps’, and a different one later in the process to help choose between the alternatives that you have identified. Because monitoring, problem-solving and planning (early and late in the process) require different approaches, it is essential not to muddle them.

Guideline 3. Acknowledge ‘political’ motivations, and set them aside.

In the fields of health and social care there are always many interests in play. Senior managers have an interest in gaining a reputation for decisiveness and meeting their targets and ‘running a tight ship’; politicians have an interest in getting their constituents to re-elect them; professionals have an interest in doing a job successfully and having their success recognised, and in defending their autonomy; providers, who are always under pressure to limit their spending, have an interest in successfully bidding for funds; treasurers and finance officers have an interest in showing that they are tough in scrutinizing those bids; and so on.

Hence there is the danger that people may propose a survey because they see it as a means of advancing their interests, especially if they anticipate being able to control the analysis and publication of the findings and conclusions. They may also, in the present-day era of ‘engagement’, and especially in the case of planning major changes, see a survey as a means of showing that there is ‘community support’ for their proposals. (The use of leading questions is one familiar technique for doing this: see Guideline 14 below.) Having control of completed questionnaires allows selective ‘cherry-picking’ of supportive comments, while those not supporting the desired proposals can be ignored. This behaviour is unethical. It may also be counterproductive, since outside observers, in campaign groups and the media, may be able to reveal and publicize that strategy, with consequent reputational damage.

The only ethical use for a survey is to elicit information – facts and opinions – in a way that is unbiased, that does not favour or penalize any particular interest. 

Guideline 4. Be clear about what you want to find out from the survey, and how you might use the findings.

Monitoring: Here you might want to collect facts about the use of a service that are not collected as a matter of routine through the service’s immediate providers, e.g. about the journeys that service-users make. And if you want the service-users’ opinions of the service, these can be gathered by asking them to rate its qualities, e.g. on a scale from 1 (Very Poor) to 10 (Excellent), or by asking them to set out their opinions in a comment box.

An important potential use of a monitoring survey is to establish correlations between categories of service-user and their use and opinion of the service. Researchers should be tasked with identifying significant (relevant) categories: these might typically include age, gender, marital or partnership status, household type, and access to transport, as well as types and levels of need (e.g. disability) for the service. A search for correlations can show whether there exist certain groups of service-user who are receiving a lower standard of service than others, a situation of which you will want to be aware.
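
By way of illustration, here is a minimal sketch (in Python, with hypothetical file and column names) of how a researcher might begin such a search, cross-tabulating a service-user category against ratings and applying a chi-square test of independence:

    # A minimal sketch, assuming responses have been coded into a CSV
    # file with hypothetical columns "age_band" and "rating".
    import pandas as pd
    from scipy.stats import chi2_contingency

    responses = pd.read_csv("survey_responses.csv")

    # Cross-tabulate respondent category against rating of the service
    table = pd.crosstab(responses["age_band"], responses["rating"])

    # A small p-value suggests that ratings are not distributed in the
    # same way across age bands, i.e. some groups rate the service
    # differently from others
    chi2, p_value, dof, expected = chi2_contingency(table)
    print(table)
    print(f"chi-square = {chi2:.1f}, p = {p_value:.3f}")

A significant result is only a prompt for closer inspection, not proof in itself that one group is being under-served.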

A monitoring survey that is repeated annually, with little or no change from year to year, will enable a comparison to be made of performance over time, so managers can see whether there is a trend, or indeed a significant departure from a trend, that demands attention.
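
As a sketch of the kind of year-on-year comparison involved (again with hypothetical file and column names), assuming each year’s survey yields a mean rating:

    # A minimal sketch: flag any year whose mean rating departs sharply
    # (here, by more than two standard deviations) from the long-run mean.
    import pandas as pd

    yearly = pd.read_csv("yearly_ratings.csv")  # columns: "year", "mean_rating"

    baseline = yearly["mean_rating"].mean()
    spread = yearly["mean_rating"].std()
    flagged = yearly[(yearly["mean_rating"] - baseline).abs() > 2 * spread]

    print(flagged)  # years that demand managers' attention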

Problem-solving: A ‘what shall we do about X?’ issue faced by a large and complex organization is liable to be centred in one or two departments but have ramifications throughout the organization and the world beyond. Questionnaire-based surveys can help you to explore these ramifications and use them to build a picture of the situation, understand the mechanisms in play, and go on to formulate possible courses of action and sketch out the likely impact, the consequences, of each. Such a questionnaire should be tailored to each of the groups involved – e.g. actual and potential service-users, staff in the department or departments most affected, members of different professional groups, staff in other departments. It should ask respondents to describe the situation from their own experience and give their opinions (on causes and effects and possible ways forward) based on that experience, rather than asking them to put themselves in the place of managers.

Planning: Early in the process, the task will be to identify and clarify issues. Typically, there will be one or more ‘imperatives’, already existing or foreseen, such as political pressures, spending heading for an overrun, staff shortages or an impending review by an outside body. Some of these issues may become apparent from monitoring exercises, but a survey that is focused on an issue and invites views on ways of responding can be better suited to the situation. Again, a questionnaire should be tailored to the different groups involved.

Later in the process, when possible alternative courses of action have been identified, the objective will be to evaluate and compare these possibilities. A survey can be used to discover what respondents anticipate will be the likely impact of each one on themselves, i.e. what they see as the likely real-life consequences (outcomes) of implementing each alternative, and whether there is agreement that the criteria being applied to evaluate them are appropriate. Respondents should always be asked to make judgments on the basis of their own experience, not as if they were managers having to allocate resources among competing services.

Guideline 5. Allow for the possibility that respondents may want to use the questionnaire to tell you things that you haven’t thought of.

While you do want to be clear about what you want from the survey (see Guideline 4 above), the very fact of issuing a questionnaire implies that you accept an obligation to take note of the responses that you get and, if you don’t accept some requests or suggestions, to explain why not. So you need to recognise that to some extent you are surrendering control of the situation. You need to keep an open mind and be prepared for this, rather than go in with a determination to get what you want; otherwise the situation may become confrontational, with unpleasant results.

A practical step that you can take is to include at the end of your questionnaire the question: ‘Is there anything we haven’t covered so far in this questionnaire that you would like to tell us about?’ This would be particularly appropriate in a pilot (trial) questionnaire – see Guideline 15 below – but should not be out of place in the version that you actually use.

Guideline 6. Your survey should be focused and concise, and not used for a ‘fishing expedition’.

The guidelines presented here consistently follow the principle that surveys need to be focused. You may be tempted to ‘include a question about X’ in a questionnaire on the basis that it might attract some interesting answers. Respondents may judge such a question to be irrelevant and distracting, and possibly intrusive too. Indeed respondents may be antagonized, which could affect how they respond to the whole of the questionnaire. By all means invite comments, e.g. by providing boxes for them, but questions that amount to ‘fishing expeditions’ will detract from the validity and effectiveness of the survey as a whole, so the temptation to include such questions in a questionnaire should be firmly resisted.

As a general rule, questionnaires should be kept as concise as possible. A good test that can be applied is to ask of every potential question: ‘What shall I be able to do with the answer?’ If there will be nothing you can do except say ‘Fancy that!’, the question should not be included.

Guideline 7. Identify your target population.

Who are the people whose experiences are of concern to you? Are they past or present service-users (patients or clients), or potential future ones (the public at large)? Of service-users, are they children? Or adults who depend on carers to speak for them? Or people who need help with completing questionnaires? (The report on a questionnaire-based survey carried out in Cornwall in early 2016 stated that 0.3% of the 2,450 respondents were aged under 11: one has to wonder how able these seven or so children were to answer questions such as ‘What are the three most important things to you when you experience health and social care services and support in Cornwall?’[2]) Do you want – or should you want – to hear from the people who actually provide services face-to-face – in GP surgeries, acute hospitals, community hospitals, urgent care centres, minor injury units and other clinical settings, or in patients’ and clients’ own homes? Your questionnaire will need to be tailored to each target group.

Guideline 8. Take expert advice on the size and composition of the sample that you need in order to get reliable data, and on how to reach your sample.

‘Reliable data’ is data on which you can safely base your organization’s policies for future action, data in which you have confidence. For example, if you want to gauge support for proposed changes to a service, you will want many more than a handful of replies to a survey of opinion, because the mere handful might be completely unrepresentative of the population likely to be affected. You will want to be confident that you have replies from a representative sample of your target population – e.g. one which includes all age groups, all the different localities within the geographical area, patients from all the GP surgeries within that area, and people from across the whole range of health and social care ‘need’ groups – and a sufficiently large number within each group. (You may also want to ensure that responses from individual patients and clients are not outweighed by organized responses from groups with special interests.)  Precisely what constitutes a sufficiently large number is a matter for a statistician to determine: lay people should not take it upon themselves to make such judgments.
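
To give a feel for the arithmetic involved, here is a minimal sketch of one standard calculation: the sample size needed to estimate a proportion to within a given margin of error, with a finite-population correction. It illustrates why a handful of replies is not enough; it is no substitute for a statistician’s advice on stratification, coverage and non-response:

    import math

    def sample_size(population, margin=0.05, z=1.96, p=0.5):
        # Sample size for estimating a proportion at 95% confidence
        # (z = 1.96), using the most cautious assumption p = 0.5.
        n0 = (z ** 2) * p * (1 - p) / margin ** 2           # infinite population
        return math.ceil(n0 / (1 + (n0 - 1) / population))  # finite-population correction

    # For a population of about 500,000 and a margin of error of +/-3%:
    print(sample_size(500_000, margin=0.03))  # about 1,065 respondents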

There is a great deal of accumulated experience of distributing survey questionnaires, among official bodies, market research organizations, etc. Since not everybody is online and not everybody has a landline telephone connection, the only way to ensure blanket coverage of a geographical area is by door-to-door distribution of printed questionnaires, although even this has the possible disadvantage that a multi-person household receives only a single questionnaire. Coverage of a group of patients or clients can of course be achieved by sending questionnaires through the post to members of that group only.

Lately we have seen questionnaires made available by placing them on a website (such as that of Cornwall Council), leaving copies at public libraries and in the waiting rooms of doctors’ surgeries, and handing them out at public meetings. The coverage attained in this way may be poor: the health and social care survey carried out in Cornwall in early 2016 achieved 2,450 responses from the public – not a tiny number in total, but a mere 0.5% of the Duchy’s half a million population. In effect, this survey amounted to nothing more than a brain-storming exercise. But unfortunately even this minimal response rate yielded enough comments for the organizers to cherry-pick ones that enabled them to say: ‘You told us that you want to see …’ No compilation of all the comments received has ever been published. This is not acceptable.

Guideline 9. Take precautions when commissioning ‘easy-read’ and other versions of questionnaires.

Nowadays it is possible to have a draft questionnaire ‘translated’ into an easy-read version, to be published alongside the original one. Typically the easy-read one will be profusely illustrated with sketches, cartoons and pictograms, and the language will be simpler. The easy-read version should not be adopted and published without careful scrutiny. The search for simpler wording may have led to changes in meaning, possibly ones that are not visible to the translators, resulting in inconsistencies between the two versions. As a consequence the responses received to the two versions might not be comparable in some respects, and this might not be apparent at first sight. So the two versions should be rigorously checked for consistency.[3]

Of course, if one version of a questionnaire is easy to read, it may be that the other is in some respects difficult to read. As a matter of good practice, you should always refer back to the original to see if its wording could be simplified, or indeed if the wording of the easy-read version could be substituted.

It is now common for a questionnaire to be published in paper-based form (including a pdf format for printing out and completing by hand) and on-line, where the responses can be typed in on the screen and submitted electronically. Again, paper-based and on-line versions should be closely checked for consistency.

Guideline 10. Decide what information you are going to present.

The preamble, or introduction, to a questionnaire gives you an opportunity to present background information, such as your reasons for carrying out the survey. At the forefront should be a description of the situation that respondents can recognise: ‘At present we provide …’ and/or ‘You may have noticed …’. This immediately conveys to respondents that they have a stake in the process that is under way. Likewise, if you are engaged in a comprehensive review of health services in a locality and seeking respondents’ views on current provision and future possibilities, you could open with a simple list of what is available – e.g. GP surgeries during working hours, out of hours service, urgent care centre, minor injury unit, A&E Unit of District General Hospital, NHS 111, Ambulance (999) – and continue with the statement ‘We are going to ask you which of these you would prefer to use in certain circumstances’. This would be comprehensible to potential respondents and would give them the confidence that they know what they are being asked about. It would also help to reassure them that those who designed the questionnaire know what they are doing.

By the same token you should definitely not open your preamble by outlining the financial or other contextual pressures that managers are under: this conveys the impression that these come first in managers’ minds, and it also fails to provide an immediate ‘hook’ to capture respondents’ attention.[4]

Guideline 11. Don’t formulate questions in ‘management-speak’, and do ask respondents to draw on their own experience.

A common failing of health and social care questionnaires is that the questions are formulated in ‘management-speak’, a language that is foreign to respondents, using as it does terms such as ‘priorities’, ‘model’, ‘integrated’ (as in ‘An integrated 111, GP out of hours and urgent care centre model’), ‘pathways of care’ and ‘system reform’. Not only do questions such as ‘To what extent do you agree with each of our priorities?’ use language that will mean little or nothing to an ordinary member of the public: they also require respondents to put themselves in the position of managers, a considerable intellectual feat.[5]

A good question will be expressed in language that allows respondents to relate it to, and answer from, their own experience. Where comments are invited, it will allow and encourage them to tell their story, in their own words. And this is where lessons can be learned about their experiences of ‘the system’, and about how they reacted to them.

Guideline 12. When asking a question that invites a judgment, offer an appropriate scale.

If you want to ask a question that invites a judgment – such as ‘How good …?’ or ‘To what extent do you agree …?’ – how do you do it? You can simply provide a box for comments, so that people can use their own words to reply, but this presents a difficulty for respondents in finding the right words and for you in aggregating their responses. There are two alternatives. One is to provide a sequence of labelled boxes – e.g. ‘Very poor’, ‘Poor’, ‘Adequate’, ‘Good’, ‘Excellent’ – and ask respondents to tick one. The other is to offer a scale, e.g. from 1 (Very poor) to 10 (Excellent), and ask respondents to pick the point on that scale that represents their view. What you should not do (to take a recent actual example) is ask a question of the form ‘To what extent do you agree …?’ and then offer the boxes ‘Agree’, ‘Neither agree nor disagree’, ‘Disagree’ or ‘Don’t know’: this question invites a ‘scale’ answer – a measure of ‘extent’ – but the offered format does not permit one.[6] Again, it is a basic principle of questionnaire design that questions should not confuse the respondent.

Guideline 13. Avoid complex questions.

Here, from a recent survey on ‘NHS 111 and out of hours service integration’, is an example of a complex question: ‘What was your reason for contacting NHS 111?’[7] This is difficult to answer because it packs two questions into one: ‘What prompted you to call a health advice/treatment service?’ and ‘Why did you choose to call NHS 111 as opposed to some other service or to visit a service in person?’ These two questions should be asked separately. To take another example, the same questionnaire asked: ‘How do you feel the NHS 111 service dealt with your call?’, to be answered on a scale from ‘very poor’ (1/10) to ‘excellent’ (10/10). This did not allow the respondent to say ‘I was kept waiting a long time but the advice I got was very helpful’. There should have been separate questions about waiting time and quality of the response. Note too that complex questions are not only difficult to answer: they distract the respondent from ‘going with the flow’ of the questionnaire.

Guideline 14. Avoid leading questions.

A leading question is one that suggests to respondents the answer that should be given to it, that puts an answer into their mouths. There are striking examples in a questionnaire recently distributed to clients of one of the NHS Trusts in Cornwall and designed by the Trust itself. It contains seven questions of the form: ‘Do you agree that … services that we provide are doing well?’ Respondents are offered the choice of ‘Yes’, ‘No’ or ‘Don’t Know’ as their reply.[8]

This is a doubly leading question: first, because asking ‘Do you agree?’ puts the onus on the respondent to justify not agreeing with the proposition (agreeing with it requires no justification); and second, because offering no choice other than ‘doing well’ similarly puts the onus on the respondent to justify not agreeing. Because of these built-in biases against disagreement, the results are inherently not trustworthy: they cannot be taken as genuine indicators of client satisfaction. The lesson for the designers of questionnaires is that leading questions should be avoided like the plague.

Respondents could of course have been asked ‘How well are our services doing?’ and offered a numerical scale on which to respond: this would have been a neutral way of posing the question. (Interestingly, this questionnaire also invited respondents to provide feedback in the form of comments: ‘From your experience, do you have any suggestions on how we could improve … services?’ Putting the emphasis on ‘your experience’ makes this a good question.)

Guideline 15. Run a pilot version of your questionnaire.

It is a cardinal principle of good practice that a questionnaire should be ‘piloted’, i.e. tried out on a sample of respondents. This is best done by sitting with a person who is filling it in, noting any difficulties that they have, discussing those difficulties with them and trying alternative wordings out on them. A trial enables you to see whether a question is self-explanatory and easy to comprehend. If a respondent answers a question with a query of their own, e.g. by saying ‘What does that mean?’, it is not a good question. Likewise, if the respondent finds it difficult to answer, or can see more than one possible answer, that is an indication that the question needs changing. Piloting a questionnaire can also show that a question uses overly technical language or is complex and needs to be ‘unpacked’. (See Guidelines 11 and 13 above.)

However much experience you have, and with the best will in the world, once you have drafted a questionnaire and ‘lived with it’ through the drafting process, your perception of it will be different from that of a respondent freshly confronted with it. Piloting enables you to see it from the respondent’s point of view, and this can be invaluable in helping you to ‘get it right’, with consequent benefits for the insights you gain and the value of the report that you produce.

Guideline 16. Give trained and experienced researchers the task of collating, analysing and reporting findings and drawing conclusions.

Ideally the researchers who drew up the questionnaire will also have the task of analysing and reporting the findings. They will be sensitive to subtleties and nuances in the ways that respondents answer, especially if they have also piloted the survey.

Most questionnaires in the field of health and social care will yield information of two kinds: quantifiable data, capable of being aggregated and examined for correlations, etc., and comments, which will be much less easy to collate and draw worthwhile conclusions from. Working with comments is undoubtedly a task for experienced researchers: they will have an eye for recurring topics and will be alert to the significance of variations in the language that respondents use.
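
A first, mechanical pass over the comments can at least suggest recurring topics for the researchers to examine. Here is a minimal sketch (with invented example comments) of such a pass:

    # A minimal sketch: count recurring words in free-text comments as a
    # crude pointer to topics. Real thematic analysis, as this guideline
    # says, is a task for experienced researchers.
    import re
    from collections import Counter

    comments = [
        "Waiting times at the minor injury unit are far too long",
        "Very long waiting times, but the staff were excellent",
    ]  # invented examples

    stopwords = {"the", "at", "are", "far", "too", "but", "were", "very"}
    counts = Counter(
        word
        for comment in comments
        for word in re.findall(r"[a-z']+", comment.lower())
        if word not in stopwords
    )
    print(counts.most_common(5))  # e.g. [('waiting', 2), ('times', 2), ...]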

Most importantly, as people with well-developed skills of observation and (usually) a background in science subjects, experienced researchers are used to seeing the world in terms of phenomena, to looking for behaviours and mechanisms, and to applying rigour in observation and in the treatment of observations. In contrast, graduates in subjects where essay-writing, rather than field and laboratory work, is the norm tend to see the world in terms of ‘themes’, a word which repeatedly appears – sometimes as ‘common themes’ and ‘priority themes’, and as themes which ‘emerge’ – in the report on the ‘engagement’ that took place in Cornwall in early 2016 (Shaping the future of Health and Social Care in Cornwall and the Isles of Scilly).[9] Such people tend to use terms without defining them (e.g. ‘priorities’, ‘pathways of care’ and indeed ‘theme’), and sometimes to use technical terms wrongly (e.g. ‘variance’ when they mean ‘variation’), perhaps because they are not aware that these are technical terms with precise meanings.

It cannot be emphasized strongly enough that a survey, if its results are to be useful and taken seriously, needs to be undertaken and written up by people who know what they are doing and can approach their task with rigour and understanding.

Notes and references (All URLs last accessed 10/08/2020)

1. The UK-based Social Research Association has useful resources on its publications page: https://bit.ly/SOCRES

2. Shaping the future of Health and Social Care in Cornwall and the Isles of Scilly: A short report to present the findings from engagement with public and providers to help shape future health and social care provision and improve the wellbeing of our residents (May 2016)  https://bit.ly/2iwtpTb

3. See How not to run a Health and Social Care Survey, as demonstrated by Cornwall Council and NHS Kernow (May 2016) for an illustration of discrepancies between types of questionnaire.  http://bit.ly/2jviIyp

4. For an illustration of this defect, see Questioning a Questionnaire: Was this one fit for purpose? (On the NHS 111 and out of hours service integration questionnaire published by NHS Kernow in October 2016)  (December 2016)  https://bit.ly/2jWOtQe

5. Have your say on the Cornwall and the Isles of Scilly Health and Social Care Plan 2016-2021  (December 2016)   https://bit.ly/2jQG19f

6. As Note 5.

7. As Note 4.

8. Cornwall Partnership NHS Foundation Trust, Council of Governors Members Survey 16.  (Undated but 2016)  https://bit.ly/2juZ1GU

9. As Note 2.