Findings From Four-Day Work Week Trial – Should We Be Sceptical?

by Michael Sanders

In February 2023, results from the largest four-day workweek trial ever conducted were published, indicating a number of positive effects on employee wellbeing as well as on organisational revenue.

We know from the evidence that job quality matters for wellbeing, so it is reasonable to expect that changing work patterns could have benefits. But how confident can we be in the claims made by this report?

Michael Sanders, Evidence Associate, discusses the study and the need to be sceptical.

The new report, published by Autonomy, details the results of a four-day week pilot in which 61 UK companies, covering around 2,900 employees, voluntarily adopted shortened working weeks between June and December 2022.

The report is very positive, but there are four reasons to remain sceptical. This is not to say that four-day workweeks are a bad idea, only that this study cannot tell us one way or the other.

1. A counterfactual is needed

To determine the effect of an intervention (in this case, a four-day week), we need an idea of what would have happened if people had not received the intervention. The best way to do this is a randomised trial in which some firms shift to a four-day week at random while others do not.

Instead, participants were simply compared before and after the implementation. This kind of analysis gives us much less confidence in the findings: it is hard to be sure that the intervention caused any changes we observe, because many other factors could produce them. Participating organisations might have lower staff turnover or happier workers than average, but they could simply be benefiting from a shifting tide. Indeed, the report's authors cite the "great resignation" as a reason for organisations to participate.


Although a randomised trial might not have been possible in this case, there were other options. One would have been to collect data from a matched comparison group of organisations, or from the nine organisations that were interested in participating but were unable to for various reasons. Similar methods were used in the evaluation of the National Citizen Service. Because of the consistency of its study design and quality, that evaluation was included in the Centre's evidence reviews, such as the one on social capital, and allowed robust conclusions to be drawn. It shows that, with conscious and consistent effort, a programme can be designed, delivered, and evaluated in a way that produces valuable, focused learning.

2. Lack of independent evaluation

The study was conducted by people who wanted to see the four-day week succeed. That is understandable – they are the most motivated to study it – but it makes the results less reliable because of the inherent risk of bias.

Take, for example, the meta-analyses by John Hattie, an Australian academic, which found that attainment interventions in education had large average effects. Nearly all of those interventions were evaluated by their developers or by "true believers". By contrast, trials run by the Education Endowment Foundation, which are assessed impartially by independent evaluators, found dramatically smaller effects – around one fifth of the size, on average.

3. Lack of clarity about outcome measures

The study reports results for a wide range of psychological constructs, including stress and burnout, as well as general positive and negative emotion. It would be reasonable to expect a report to detail the measures used (for example, the Copenhagen Burnout Inventory) and/or to publish the full survey in an annex. The authors do neither.


4. Lack of statistical clarity, detail, and consistency

Where there is no strong counterfactual, the standard approach is to compare mean outcomes before and after treatment. Although this analysis appears in places, it is not used consistently; instead, pie charts show how many people experienced an increase, a decrease, or no change in each metric – an unusual approach to analysis.

Statistical significance is mentioned for some of the reported changes but not for others; where it is not, we are left to assume that the changes are not statistically significant. This applies to burnout, work-related stress, mental health scores, anxiety, depression, fatigue, and negative emotions.
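The standard pre/post comparison described above is straightforward to run and report. As an illustrative sketch – using made-up wellbeing scores, not the trial's data – a paired t-test on before/after measurements from the same people looks like this:

```python
from statistics import mean, stdev
from math import sqrt

def paired_t(before, after):
    """Paired t-test statistic for pre/post scores from the same respondents."""
    diffs = [a - b for b, a in zip(before, after)]
    n = len(diffs)
    se = stdev(diffs) / sqrt(n)          # standard error of the mean difference
    return mean(diffs) / se, n - 1       # (t statistic, degrees of freedom)

# Hypothetical wellbeing scores (0-10 scale) for six respondents
before = [5.0, 6.2, 4.8, 5.5, 6.0, 5.1]
after  = [6.1, 6.5, 5.9, 5.7, 6.8, 5.6]

t, df = paired_t(before, after)
print(f"t = {t:.2f} on {df} df")  # compare t against a critical value from a t-table
```

Reporting the t statistic (or p-value) for every outcome, rather than only for favourable ones, is what consistent statistical reporting would look like.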

It is also difficult to determine from the report exactly who the study is looking at. The revenue figures, which indicate a 1.4% rise over the six months of the study – in a high-inflation environment – are based on less than 40% of the sample, while the baseline survey was completed by 70% of participants. Response rates like these are not unusual, but they matter more here because we do not know who responded at which point. If the people who were most stressed and least happy at baseline were those most likely to respond at the endline, simple regression to the mean could explain the results.
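The regression-to-the-mean concern can be shown with a small simulation (entirely hypothetical numbers, not the trial's data): if only the people who looked most stressed at baseline answer the endline survey, their average score will appear to improve even when nobody's underlying stress has changed at all.

```python
import random

random.seed(1)

# Everyone's true stress is constant; measured scores just add random noise,
# so no underlying change occurs between baseline and endline.
TRUE_STRESS = 5.0

def measure():
    return TRUE_STRESS + random.gauss(0, 1)

population = [(measure(), measure()) for _ in range(10_000)]  # (baseline, endline)

# Suppose only the most stressed-looking people at baseline respond at endline.
responders = [(b, e) for b, e in population if b > 6.0]

avg_base = sum(b for b, _ in responders) / len(responders)
avg_end = sum(e for _, e in responders) / len(responders)
print(f"baseline {avg_base:.2f} -> endline {avg_end:.2f}")  # apparent 'improvement'
```

The selected group's baseline average sits well above 5 purely because of noise, while its endline average falls back towards 5 – an apparent improvement produced entirely by selective response.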

General reflections

The conditions that enable us to thrive at work – such as job security, learning opportunities, and supportive environments – have a positive impact on our productivity as well as our wellbeing.


It is important to understand whether a four-day work week makes a difference, and we should keep asking this question.

We need a solid understanding of the impact of workplace changes and investment decisions so that successful investments can be made, supporting both our economic future and individual wellbeing. This matters especially as ideas move from small trials to large-scale implementation. Robust workplace trials are possible, yet case studies remain the norm.

It is crucial that we improve both the quality and the quantity of evidence. We need to ask whether interventions work, how long their effects last, who they serve, and whether they meet a cost-effectiveness threshold. This is true across the public, private, and civil society sectors.
