The cancellation of Ontario’s basic income project not only violates our obligation as a society to ensure economic security for all. It also breaches the ethical obligations we have to those participating in research, and underscores the need for a multi-faceted research methodology in designing better income security programs.
The pilot promised a comparison of those receiving a monthly basic income in test sites in three areas of Ontario with those who did not. The research was aimed at ascertaining “whether a basic income helps people living on low incomes better meet their basic needs and improve their education, housing, employment and health.”
The Liberals put their faith in an evaluation design that approximated a randomized controlled trial. In research like this, a discrete variable (the basic income payment) is received only by those in an “experimental” group, and a comparison is done with a similar “control” group (who do not get the payment) to see if different, and potentially better, outcomes accrue to the experimental group.
My colleague at the University of Manitoba, Prof. Gregory Mason, recently made the case that it was time to abandon the project.
He argued that because the basic income pilot encountered several practical problems when setting up its evaluation methods as a more or less “pure” randomized controlled trial, there was scant valid and useful data to be garnered from the project.
But, respectfully, I believe that a great deal was lost with the cancellation of the project. The moral and ethical implications of scrapping the program must not be ignored.
Some 4,000 recipients of benefits in the pilot—the members of the “experimental” group—are now without the financial support that was promised to them.
This abrupt and unexpected cancellation of the pilot by the Ford government amounts to a profound moral violation of the responsibility we have towards those who participate in research. This obligation is consistent with, but also goes beyond, the narrower ethical standards approved by research ethics boards.
The negative impact on those people has been extensively reported in the media, including in the pilot sites of Hamilton, Thunder Bay and Lindsay.

On the campaign trail in the spring of 2018, Ford committed to allowing the three-year basic income pilot to run its course. But he broke that promise less than two months after he was elected. The cancellation was an act of bad faith on the part of the new government towards Ontario voters and, more importantly, towards the individuals already receiving basic income payments.
While these stories may be anecdotal, they describe real and significant hardships for those who had been promised a chance for a better life. The cancellation of both the pilot project, and of data collection and analysis from the three pilot communities, is a profound failure to uphold an ethical and moral obligation to research participants.
This ethical breach is not the fault of the team of academics and program evaluators who were in place to carry out the research. The blame must be assigned to their new political masters.
Prof. Mason’s argument suggests that the only worthwhile research design for the Ontario basic income pilot was a randomized controlled trial (RCT). But there are several tools in the research methodology toolkit besides an RCT design. Other methods could have been used to gather meaningful and useful data from the pilot.
For instance, researchers might have amassed systematic data from those receiving a basic income payment in order to better understand the advantages and disadvantages, from the recipients’ point of view, of this new design for income assistance.
Quantitative techniques such as surveys, and qualitative techniques like interviews and focus groups, could have provided in-depth and nuanced evidence directly from the research participants themselves, even in the absence of a control group.
Comparative research could have also been done on the costs and benefits of a basic income payment compared to existing social assistance and disability support benefits using aggregate program, administrative and financial data.
All research methods have advantages and disadvantages. In certain contexts (for example, pharmaceutical testing), RCTs might be seen as the most rigorous and desirable methodology. But when tackling social scientific questions that are inherently complex and in constant flux, RCTs may not only be impractical, they may also have inherent drawbacks.
Alan E. Kazdin, a past president of the American Psychological Association, cautioned (as quoted by Rebecca Clay in 2010) that “overreliance on RCTs means missing out on all sorts of valuable information.” A 2016 study delved into the difficulty of applying the RCT method specifically to economic questions, making the point that “an RCT cannot simply be a matter of simple extrapolation from the experiment to another context.”
One thing seems clear—the dysfunctional and oppressive nature of our current “last resort” income assistance system makes research into better approaches absolutely imperative.
Not proceeding with the basic income project, and not collecting available data from it, means that we are passing up a golden research opportunity.
Even if it were possible to run a highly rigorous RCT research design in a basic income project, there’s one big problem.
Research subjects in a pilot know that their benefit will cease when the research project ends. The recipients of an actual, operational basic income program, however, would know that there is no end date for the benefit—they will receive it for as long as they’re eligible.
So it would be reasonable to assume that the economic and social choices of basic income recipients (on questions such as employment, education, accommodation and fixed household expenditures) would differ between these two conditions.
Those with long-term assurance that their financial safety net is in place might take more risks and make longer-term plans to improve their economic situations. Thus, extrapolating from a time-limited basic income experiment run as an RCT to a real-world scenario seems an artificial and potentially misleading exercise.
While it’s important to make the case for a variety of methods (beyond just RCTs) in basic income research, this may be a moot point in regard to Ontario’s pilot. Despite national and worldwide dismay that the project is being cancelled, Ford seems committed, on ideological grounds, to stopping the payments and halting the related research.
It can only be hoped that those who have been receiving basic income payments in the project will be given “a lengthy runway” to adjust to their new circumstances. Ontario’s Minister of Children, Community and Social Services has so far given only a vague commitment that this will be the case.
Hopefully the project’s participants can also continue to tell their stories in the media and to academic researchers. We researchers need to gather evidence in a variety of ways if we are to contribute to the design and delivery of better income security programs.
James Mulvale is an associate professor at the University of Manitoba.