Methodological issues associated with collecting sensitive information over the telephone - Experience from an Australian non-suicidal self-injury (NSSI) prevalence study

Anne W. Taylor, Graham Martin, Eleonora Dal Grande, Sarah Swannell, Simon Fullerton, Philip Hazell, James E. Harrison

Research output: Contribution to journal › Article › peer-review

11 Citations (Scopus)

Abstract

Background: Collecting population data on sensitive issues such as non-suicidal self-injury (NSSI) is problematic. Case-note audits and hospital- or clinic-based presentation records capture only severe cases and do not distinguish between suicidal and non-suicidal intent. Community surveys have largely been limited to school and university students, leaving a shortage of much-needed population-based data on NSSI. Collecting these data via a large-scale population survey presents challenges to survey methodologists. This paper addresses the methodological issues associated with collecting this type of data via computer-assisted telephone interviewing (CATI).

Methods: An Australia-wide population survey was funded by the Australian Government to determine prevalence estimates of NSSI and its associations, predictors, relationships to suicide attempts and suicide ideation, and outcomes. CATI was undertaken with a random sample of the Australian population aged 10 years and over, drawn from randomly selected households.

Results: Overall, 12,006 interviews were completed from 31,216 eligible households (response rate 38.5%). The 4-week prevalence of NSSI was 1.1% (95% CI 0.9-1.3%) and lifetime prevalence was 8.1% (95% CI 7.6-8.6%). Methodological concerns and challenges in collecting these data included the need for extensive interviewer training and post-interview counselling. Ethical considerations, especially with children as young as 10 years of age being asked sensitive questions, were addressed prior to data collection; the solution required a large amount of information to be sent to each selected household before the telephone interview, which contributed to a lower-than-expected response rate. Non-coverage error, caused by the population of interest being highly mobile, homeless or institutionalised, was also a suspected issue for this low-prevalence condition. In many circumstances the numbers missing from the sampling frame are small enough not to cause concern, especially when compared with the population as a whole, but within our population of interest we believe the most likely direction of bias is towards underestimation of our prevalence estimates.

Conclusion: Collecting valid and reliable data is a paramount concern of health researchers and survey research methodologists. The challenge is to design cost-effective studies, especially those addressing low-prevalence issues, and to balance time and convenience against validity, reliability, sampling, coverage, non-response and measurement error.
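As an illustrative check of the figures reported above, the response rate and the confidence intervals can be approximately reproduced with a simple unweighted normal-approximation (Wald) interval for a proportion. This is only a sketch: the published estimates may incorporate survey weights and design effects, and the sample size and z-value used here are assumptions based solely on the abstract.

```python
from math import sqrt

def wald_ci(p: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """Unweighted normal-approximation (Wald) 95% CI for a proportion."""
    se = sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

eligible_households = 31_216
interviews = 12_006

# Response rate = completed interviews / eligible households
print(f"Response rate: {interviews / eligible_households:.1%}")  # ~38.5%

# 4-week and lifetime NSSI prevalence, as reported in the abstract
for label, p in [("4-week", 0.011), ("lifetime", 0.081)]:
    lo, hi = wald_ci(p, interviews)
    print(f"{label} prevalence {p:.1%}, 95% CI {lo:.1%}-{hi:.1%}")
```

Running this reproduces the reported intervals to one decimal place (0.9-1.3% and 7.6-8.6%), consistent with roughly 12,000 respondents contributing to each estimate.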

Original language: English
Article number: 20
Journal: BMC Medical Research Methodology
Volume: 11
DOIs
Publication status: Published or Issued - 2011
Externally published: Yes

ASJC Scopus subject areas

  • Epidemiology
  • Health Informatics
