Quantitative and qualitative research on the programme outcomes

Research and evaluation have always been fundamental to the work of the Child Development Programmes, for the obvious reason that effectiveness must be demonstrated to the outside world: to the parents who are visited, to those who make the programme visits and to the organisations that fund the training of the programme visitors. Experience has shown that those least in doubt of the need for and effectiveness of the programmes are the parents themselves.

Rather than present a detailed account of over 25 years of assessing the programmes’ effectiveness or otherwise, brief reports are given here of a number of the ECDC’s key research and evaluation studies. The full ECDC documents are available: the smaller studies are free on request, and the larger studies can be obtained at cost price from the ECDC (see the Contact page on this website).

The nature of these studies varies from ‘hard’ controlled research to ‘softer’ comparisons between programme and non-programme families. The softest studies are those based on observing changes in the visited families, without a comparison group; here one can only claim anecdotal evidence, but a large volume of such evidence is powerful and can have a certain Bayesian-type credibility.
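To make that ‘Bayesian-type’ point concrete, here is a minimal sketch, assuming (purely for illustration) n independent family reports, each carrying the same modest likelihood ratio LR in favour of the programme’s effectiveness:

    posterior odds  =  prior odds  ×  LR^n

Even an LR of only 1.2 per report multiplies the prior odds by more than six after ten reports (1.2^10 ≈ 6.2). In practice the reports are neither fully independent nor free of selection bias, which is why such evidence remains suggestive rather than conclusive.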

Most professional practice based on experience    Nearly all professional practice rests on much the same broad range of levels of evaluation, whether of medical, psychological, nursing, educational or social procedures. In any one field there are a relatively small number of randomised controlled trials (rcts) that go some way to establishing effectiveness, although most rcts, because of their tight controls, lack adequate field or external validity. A much greater number of professional practices and programmes, however, owe their continued use primarily to the personal experience of the professionals who see their ‘patients’, ‘clients’ or other ‘subjects’ as having benefited. Such practices have become an accepted part of their disciplines because, on the balance of probabilities, numerous positive experiential judgements are assumed to be reasonably reliable, though they are of course always open to challenge or alternative explanation.

The problems of judging research integrity and effectiveness are compounded by the strong ‘political’ views of those professionals who are opposed in principle to newer, more radical methods of dealing with human problems, and who use every strategy to belittle innovations with which they do not agree. The same criticism applies, of course, to advocates of new ideas, whose enthusiasm is not always tempered by evaluative caution. At root it is the age-old problem of new scientific paradigms having to fight their way to acceptance over many years or even decades. Unfortunately, some once-new paradigms can themselves become rigid standards in time, whose supporters reject yet newer forms of practice as unacceptable.

The ‘rcts’ are a good example of methodological aggrandisement. While they are uniquely powerful for assessing, say, a basic trial of a single new pharmaceutical drug, most field studies of any useful size are of such multivariate complexity, and involve such large numbers of subjects and varying contexts, that it is virtually impossible to subject them to hard rcts. Yet all too often promising field studies are damned because they have not satisfied the strict rct advocates, or the studies themselves have been narrowed down to the point of questionable validity in order to fit the rct constraints; the field innovations are then abandoned and their potential for further development is lost. Despite these and other serious flaws, rcts continue to be treated across large areas of academe as the only acceptable ‘gold standard’ for judgement. In contrast, many US researchers have long recognised that numerous intermediate types of study (such as quasi-experimental research) can provide strong and acceptable evidence of effectiveness, even if they are not rcts in the narrow sense by which most studies are judged or interpreted on this side of the Anglo-American ‘pond’.

No UK parenting programme considered worth supporting    The latest example of this rather limited approach was the UK government’s decision in 2007 that not one of Britain’s many parenting programmes was worth developing and funding more widely as a demonstration model. Instead an American programme, backed by three positive rct studies, was chosen for expensive replication in primary care trust demonstration sites across ten centres in the UK. The government has since announced that this development is to be extended to a further ten ‘non-research’ NHS sites, and soon afterwards to another ten NHS sites in which research will be undertaken.

Unfortunately the UK’s Sure Start programmes, focused on pre-school children and their parents in over 500 local sites across the country, have been declared close to failure by several large research studies, despite the fact that thousands of Sure Start workers are convinced that many of the local projects achieved a great deal, and that many or most of the participating parents were encouraged and enabled in ways that they would not have been without Sure Start. The lack of a strong intervention structure in most Sure Start projects was certainly a limiting factor, reducing the level or likelihood of success. Because the outcomes of several major research studies on Sure Start proved almost completely negative, the authorities (as described in the previous paragraph) looked abroad for an alternative parenting programme, selecting a well-researched programme from a cultural and professional environment totally different from that of the UK. They did so rather than re-examining Sure Start and assessing whether the degree of structure and discipline in its projects may in fact have been the crucial factor in their successes or failures across a wide range of outcomes.

It should be noted that the USA’s massive Headstart programmes were also initially dismissed by researchers as having little or no effect, but were later found to have many positive outcomes when Headstart and non-Headstart families were compared in the children’s later teenage and adult years. A massive (but seldom cited) series of studies by a 12-university consortium, analysing samples ranging from 8,000 down to 26, found impressive improvements in functioning ten years after the children and their parents had participated in Headstart. On every major academic criterion the relatively large samples of intervention children were ahead of their control peers, with lower rates of grade retention, lower school dropout rates and improved reading and mathematics scores. (Lazar, I., Hubbell, V. R., et al., 1977, The Persistence of Preschool Effects, Community Service Laboratory, Cornell University; Lazar, I. and Darlington, R. B., 1978, Lasting Effects After Preschool, Education Commission of the States.) Headstart suffered from the same initial problems as Sure Start, in that across the vast number of Headstart projects there were many examples of poorly trained staff and a lack of adequate controls. But there were also a great many successful projects where training and structures were satisfactory.

It is in the light of these considerations that the many and varied studies of the effectiveness of the Child Development Programmes need to be judged.

See also:   Early Childhood Development Centre: main references and key documents

