Sonia Whiteley
Sonia Whiteley is a multi-disciplinary, applied social research strategist who creates effective research solutions for government, not-for-profit, academic and commercial clients. She specialises in large-scale research programs that support evidence-based decision making about policy and practice, particularly in the areas of education, welfare reform, housing and justice. Her most recent research projects include the Australian Early Development Census and the Quality Indicators for Learning and Teaching (QILT) for the Australian Government Department of Education.
Papers by Sonia Whiteley
The Quality Indicators for Learning and Teaching (QILT) comprise three national surveys:
• The University Experience Survey (UES), which will become the Student Experience Survey (SES) in 2015,
• The Graduate Outcomes Survey (GOS), and
• The Employer Satisfaction Survey (ESS).
The QILT measures will work together to provide a coherent insight into student engagement, the student experience and post-study outcomes. The challenges of delivering an indicator framework that meets this broad range of requirements, providing timely evidence for institutions to improve the experiences of current and future students and to position themselves in the higher education landscape, will be discussed.
Each of these surveys is at a different stage of maturity: the SES is well established in the higher education sector, the ESS is in a post-pilot phase and the GOS is currently in development. Australia’s 40 universities have previously been involved in national student and graduate surveys; however, the more than 100 private higher education providers that are in scope for QILT have little experience with this type of survey program.
As survey managers, our challenge is to deliver high-quality survey outcomes on budget and with minimal error while supporting an extensive change management process across the sector. Using a Total Survey Error (TSE) framework to evaluate the design of each survey, and to develop a structured approach to decisions about continuous improvement, is already part of our research practice. Attempting to address all of the potential survey errors during a single cycle of data collection would be costly and would make it difficult to determine which mitigation strategy was effective. A risk management approach has therefore been developed to assess each error of representation or measurement so that it can be prioritised for remediation, as sketched below. The integration of TSE and risk management frameworks across the QILT surveys will be discussed.
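To make the idea of risk-based triage concrete, the sketch below scores each error source on likelihood and impact, discounted by the cost of mitigation, and ranks the results. This is a hypothetical illustration only: the error sources, scores and scoring rule are invented for the example and are not the instrument used for the QILT surveys.

from dataclasses import dataclass

@dataclass
class ErrorSource:
    """A single TSE component assessed under a risk-management lens."""
    name: str             # e.g. "coverage", "non-response", "measurement"
    likelihood: int       # 1 (rare) .. 5 (almost certain)
    impact: int           # 1 (negligible bias) .. 5 (severe bias)
    mitigation_cost: int  # 1 (cheap to fix) .. 5 (expensive to fix)

    @property
    def priority(self) -> float:
        # Classic risk score (likelihood x impact), discounted by the
        # cost of remediation, so cheap high-risk errors float to the top.
        return (self.likelihood * self.impact) / self.mitigation_cost

# Invented scores purely for illustration.
register = [
    ErrorSource("coverage error (incomplete provider frames)", 4, 4, 2),
    ErrorSource("sampling error (small-provider strata)", 3, 3, 1),
    ErrorSource("non-response error (online-only mode)", 5, 4, 3),
    ErrorSource("measurement error (item wording)", 2, 3, 4),
]

for src in sorted(register, key=lambda s: s.priority, reverse=True):
    print(f"{src.priority:5.2f}  {src.name}")

Ranked output of this kind gives survey managers a defensible order in which to remediate errors across collection cycles, rather than attempting every fix at once.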
Data from a recent Queensland investigation will be used to illustrate instances of planned attrition, ‘parking’ with the intention of changing institutions and ‘churning’ to alter course enrolments. Implications for student selection mechanisms, course advice and the contextualisation of retention rates will be discussed.
To make this discussion more concrete and accessible, the conceptual and methodological points are illustrated with practical examples from the Course Experience Questionnaire (CEQ).
The Total Survey Error (TSE) paradigm provides a framework that supports the effective planning of research, guides decision making about data collection and contextualises the interpretation and dissemination of findings. TSE also allows researchers to systematically evaluate and improve the design and execution of ongoing survey programs and future investigations.
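As background, TSE is commonly formalised as the mean squared error of a survey estimate, so that systematic and variable errors can be weighed together. The decomposition below is the standard textbook statement, not a formula specific to any of the surveys discussed here:

\mathrm{MSE}(\hat{\theta}) = \mathrm{Bias}(\hat{\theta})^{2} + \mathrm{Var}(\hat{\theta})

where the bias term accumulates systematic errors of representation (coverage and non-response) and of measurement, and the variance term captures variable error, most visibly from sampling.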
As a key aim of a TSE approach is to balance minimal survey error against affordability, it is unlikely that many enhancements to regular programs of research can be made in a single cycle. From an operational perspective, significant alterations to data collection processes and procedures can create more problems than they solve, particularly for large-scale, longitudinal or complex projects. Similarly, substantial changes to the research approach can have an undesired effect on time-series data, where it becomes difficult to disentangle actual change from change due to methodological refinements.
The University Experience Survey (UES) collects feedback from approximately 100,000 undergraduate students at Australian universities each year. Previous reviews of the UES suggested that errors of measurement contributed less to TSE than the errors of representation associated with the survey. As part of the 2013 and 2014 collections, the research design was modified to directly address coverage, sampling and non-response errors. The conceptual and operational approach to mitigating the errors of representation, the cost-effectiveness of the modifications to the research design and the outcomes for reporting will be discussed, with practical examples from the UES.
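One common mitigation for errors of representation is a weighting-class non-response adjustment, in which each respondent's design weight is inflated by the inverse of the observed response rate in their class. The sketch below is purely illustrative: the strata, counts and weights are invented and do not describe the actual UES methodology.

# Minimal sketch of a weighting-class non-response adjustment.
# All strata, counts and weights are invented for illustration.
sampled = {"Go8 universities": 2000, "other universities": 3000,
           "private providers": 500}
responded = {"Go8 universities": 900, "other universities": 1050,
             "private providers": 100}
design_weight = {"Go8 universities": 12.0, "other universities": 20.0,
                 "private providers": 8.0}

for stratum, n in sampled.items():
    propensity = responded[stratum] / n             # observed response rate
    adjusted = design_weight[stratum] / propensity  # inflate for non-response
    print(f"{stratum}: response rate {propensity:.0%}, "
          f"adjusted weight {adjusted:.1f}")

Adjustments of this kind reduce non-response bias to the extent that respondents and non-respondents within a class resemble one another, which is why class construction matters as much as the arithmetic.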
TSE provides a conceptual framework for evaluating the design of the University Experience Survey (UES) and offers a structured approach to making decisions about changing and enhancing the UES to support continuous improvement. The implications of TSE for institutional research will be discussed using the UES as a case study.
Of course, more often than not, the research has no effect on policy outcomes at all.
The purpose of this paper is to examine the nexus between research and policy, using a framework developed to analyse the interplay between interventions to promote research use, underlying theories about research use and the policy cycle. The feasibility of developing a plan to facilitate the increased use of institutional research will also be investigated, as will a more generic suite of strategies that institutional researchers can pursue to maximise the impact of their investigations at each stage of the policy cycle. Issues relating to the policy cycle, evidence-based policy and approaches to improving research use by decision-makers will be discussed.