
Paul Ekblom has designed and undertaken various evaluations of crime prevention, ranging from quick-and-dirty studies to full-blown prospective quasi-experimental designs, and from individual projects to major programmes. He has also contributed broader thinking on evaluation.
This page covers:
- Purposes of evaluation
- Individual evaluation studies
- Systematic reviews of evaluations and syntheses of findings
- Advisory material for evaluations
- Commentaries on evaluation
- The place of evaluation in the 5Is framework

Purposes of evaluation
Evaluation serves a range of purposes for different stakeholders/duty-holders, including:
- Improving performance, through feedback and learning, of crime prevention, security and community safety
- Informing decisions to continue, modify, extend or terminate interventions
- Capturing knowledge of good and bad practice at operational, programme and policy levels
- Guiding intelligent, context-sensitive replication
- Supporting accountability/value for money

Individual evaluation studies
- Police truancy patrols (Ekblom 1979b) – a quasi-experimental study that suffered a range of unfortunate events (e.g. school closures due to snow during the planned trial week), yet managed to generate some practical findings. Not least among these was that shopkeepers’ demand that truancy patrols be established rested on the false assumption that it was truants who were committing the thefts from shops and malls.
- Post Office robberies (Ekblom 1987, 1988a) – retrospective evaluation of the impact of fitting security screens – involved an early attempt to unpick causal mechanisms (before encountering Scientific Realism – see Ekblom 2018b).
- Safer Cities Programme burglary schemes (Ekblom et al. 1996 – main report; Ekblom 1992, 1996b) – an industrial-scale evaluation of several hundred schemes across a number of cities in England & Wales. This involved adapting or inventing a whole suite of approaches and solving a range of practical/methodological problems, drawing on geographic information systems and an early example of multilevel modelling to yield dose-response relationships across multiple sites in multiple cities. An account of the various innovative techniques developed for the Safer Cities evaluation is in Evaluating Crime Prevention (Ekblom and Pease 1995). Technical papers on ‘scoping, scoring and modelling’ are here: Ekblom (1994b), Ekblom et al. (1994).
- Attempted impact evaluation of Grippa clips for reducing theft of customers’ bags in bars – thwarted by company management, who withdrew from the agreement after the pilot study (and after large numbers of clips had been manufactured). See Ekblom et al. (2012) on the product design; Bowers et al. (2009) on ‘CRITIC: A Prospective Planning Tool for Crime Prevention Evaluation Designs’, which sought to consider statistical power issues and the practicalities of product testing simultaneously; and a process evaluation (Ekblom 2012e, and presentation) on the problem of multiple levels of ‘Involvement failure’ in the project. This demonstrates the use of the 5Is Framework in ‘failure mode analysis’.
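The Safer Cities evaluation's core idea – estimating a dose-response relationship between preventive input and burglary outcomes across schemes nested within cities – can be illustrated in miniature. The sketch below is hypothetical (simulated data, invented variable names and scales, a crude two-step slope-pooling in place of the actual multilevel model), intended only to show the shape of the analysis:

```python
import random

# Hypothetical illustration (NOT the Safer Cities data): simulate burglary
# rates for schemes nested within cities, where higher preventive 'dose'
# lowers the rate; then recover the dose-response slope city by city and
# pool it -- a crude stand-in for a proper multilevel model.

random.seed(1)

def simulate_city(baseline, n_schemes=30, true_slope=-0.8):
    """Return (dose, rate) pairs for one city's schemes."""
    data = []
    for _ in range(n_schemes):
        dose = random.uniform(0, 10)       # preventive input, arbitrary units
        noise = random.gauss(0, 1.0)
        rate = baseline + true_slope * dose + noise
        data.append((dose, rate))
    return data

def ols_slope(data):
    """Ordinary least-squares slope of rate on dose."""
    n = len(data)
    mx = sum(d for d, _ in data) / n
    my = sum(r for _, r in data) / n
    sxy = sum((d - mx) * (r - my) for d, r in data)
    sxx = sum((d - mx) ** 2 for d, _ in data)
    return sxy / sxx

# Cities differ in baseline rates (the 'multilevel' aspect), but share a
# common underlying dose-response slope that the pooled estimate recovers.
cities = {"CityA": 20.0, "CityB": 25.0, "CityC": 15.0}
slopes = {name: ols_slope(simulate_city(base)) for name, base in cities.items()}
pooled = sum(slopes.values()) / len(slopes)
print({k: round(v, 2) for k, v in slopes.items()}, "pooled:", round(pooled, 2))
```

A full multilevel model would estimate city-level and scheme-level variation jointly rather than in two steps, but the principle – separating between-city baselines from the within-city dose-response – is the same.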

Systematic reviews and syntheses of findings
Paul Ekblom has been involved in systematic reviews and syntheses of what works, e.g. to feed into a toolkit for preventing terrorism/crime at complex stations. This was a particular challenge because evaluative evidence on counter-terror action is extremely rare, due to a combination of factors (the fortunate rarity of attacks, understandable secrecy, etc.). The review therefore adopted a mixed-methods approach with emphasis on plausibility derived from tested theory and practical considerations, leading to a dual rating system covering quality of action and quality of evidence.
The following two slides show this rating system, for the literature review and the fieldwork:


The evidence was consolidated on a ‘Security Action Tree’, based on the Conjunction of Criminal Opportunity Framework covering interventions, and the 5Is framework for action under the task streams of Intelligence, Implementation and Involvement. The evidence and practice quality ratings were displayed on this tree, and also presented to toolkit users alongside the list of intervention options to select from.
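The idea of a tree of security actions, each node carrying the dual ratings, can be sketched as a simple data structure. Everything below – node names, the numeric rating scales, the filtering method – is invented for illustration and is not the actual toolkit's data model:

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch only: a tree of security actions where each node
# carries the dual ratings (quality of action, quality of evidence)
# described above. Names, scales and content are assumptions.

@dataclass
class SecurityAction:
    name: str
    action_quality: int      # assumed scale, e.g. 1 (weak) .. 3 (strong)
    evidence_quality: int    # assumed scale, e.g. 1 (plausibility only) .. 3 (tested)
    children: List["SecurityAction"] = field(default_factory=list)

    def options(self, min_evidence: int = 1) -> List["SecurityAction"]:
        """Flatten the tree into a list of intervention options,
        filtered by a minimum evidence rating, for presentation to users."""
        out = [self] if self.evidence_quality >= min_evidence else []
        for child in self.children:
            out.extend(child.options(min_evidence))
        return out

# Invented example content for a station-security context.
tree = SecurityAction("Interventions", 0, 0, [
    SecurityAction("Target hardening of ticket barriers", 3, 2),
    SecurityAction("CCTV coverage of concourse", 2, 3, [
        SecurityAction("Monitored in real time", 2, 1),
    ]),
])

for opt in tree.options(min_evidence=2):
    print(opt.name, opt.action_quality, opt.evidence_quality)
```

Displaying both ratings together lets users distinguish a strong action with weak evidence from a weaker action that is well evidenced – the distinction the dual rating system was designed to preserve.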

Advisory material for evaluations
Paul Ekblom has also advised on others’ evaluations, e.g. of the Urban Programme in England & Wales (cf. Robson et al. 1994); the Design Improvement Controlled Experiment (DICE), which tested Alice Coleman’s design-based ideas for estate-based crime reduction; and the UK implementation of Communities that Care. The latter was a replicable delivery package for building local teams’ capacity to generate and undertake good-quality preventive projects or services themselves. It aimed to foster rigorous, evidence-based action, customised to local problems and context, that improves the lives and behavioural outcomes of young people.
Work on evaluation (as an expert adviser to the EU Crime Prevention Network in support of its Multiannual Strategy 2021-25) has involved developing a detailed set of natural-language questions that impact and process evaluations ought to ask. This is based on the 5Is Framework with some wider extensions and refinements.

Commentaries on evaluation
- Planning local crime prevention evaluations: How to make intelligent choices. Safer Places Network Seminar, KTH, Stockholm (2017). This supplies guidance for practitioners and their research colleagues for designing evaluations which will serve their aims.
- Managing the risks of evaluating crime prevention interventions using the 5Is framework. International Environmental Criminology and Crime Analysis seminar, London (2007).
- Identification and application of Best Practice in Crime Prevention – some fundamental questions and some attempted answers. Presentation at European Crime Prevention Network (EUCPN) Best Practices Conference, Finland (2006). This covers functions and levels of evaluation at project, programme and policy levels. It attempts a consistent suite of definitions of terms such as input, output, outcome etc. It also puts forward a set of evaluation criteria for guiding the choice of preventive actions from a knowledge base, which go far beyond the conventional ‘impact/cost-effectiveness’ questions, as the slides below set out.
- From the Source to the Mainstream is Uphill. Discusses difficulties in applying the findings of evaluations in the roll-out of practical projects and programmes (Ekblom 2000b).
- Evaluating Crime Prevention. A major review chapter in Crime and Justice 19 (Ekblom and Pease 1995). Includes coverage of Safer Cities evaluation methods/purpose-built analytical software app, and early discussion of Realist Evaluation.
- Evaluating Crime Prevention: the Management of Uncertainty (Ekblom 1990). Does what it says on the tin.
The Best Practice in Crime Prevention 2006 EUCPN presentation (listed above) put forward a set of ‘dimensions of choice’ for practitioners, programme managers and policymakers to use when selecting, from a knowledge base, which best-practice actions to replicate. It follows that these are the kinds of question that evaluations should be designed to systematically address (but rarely do):



Other notes on evaluation
- Benchmarking crime prevention and community safety in EU. EC Forum on General Crime Prevention – Elements of an Evolving European Crime Prevention Policy (2006). Some developed ideas on how benchmarking, evaluation and good practice relate.
- Impact evaluation for practitioners: making it easier and better. ICPC conference, Vancouver (1997). In the original overhead transparency format!
- Improving the evaluation of crime prevention projects. Research note (1993).
- The evaluation of crime prevention: problems, issues and context. Presented at workshop on evaluating crime prevention, Montpellier (Ekblom 1992). This gives an impression of a range of practical challenges and issues in evaluation.

The place of evaluation in the 5Is framework
The 5Is Framework has the twin functions of acting as a process model for crime prevention, security and community safety, and capturing knowledge of practice by describing the action in a structured and detailed way through headings and subheadings that reflect the process.
The 5Is process model comprises five task streams – Intelligence, Intervention, Implementation, Involvement and Impact. The last covers both impact and process evaluation. It draws on much of the material referenced in earlier sections of this page.
Essentially, every task listed under each of the 5Is task streams can be evaluated in terms of how well it was carried out (process) and what it achieved (intermediate outcome). The ultimate outcome of a project or service as a whole is described under Impact.
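The two-axis evaluation of each task – process alongside intermediate outcome – can be sketched as a small record structure. The field names, example tasks and groupings below are assumptions for illustration, not the 5Is knowledge-capture headings themselves:

```python
from dataclasses import dataclass

# Hypothetical sketch: each task under a 5Is task stream is evaluated on
# two axes -- process (how well it was carried out) and intermediate
# outcome (what it achieved). Overall project Impact would be recorded
# separately. Names and wording are invented for illustration.

@dataclass
class TaskEvaluation:
    stream: str                # Intelligence | Intervention | Implementation | Involvement | Impact
    task: str
    process_rating: str        # how well the task was carried out
    intermediate_outcome: str  # what the task achieved

evals = [
    TaskEvaluation("Intelligence", "Analyse local burglary pattern",
                   "thorough, timely", "hotspots correctly identified"),
    TaskEvaluation("Involvement", "Recruit bar managers to scheme",
                   "partial engagement", "half of venues signed up"),
]

# Group tasks by stream, as a knowledge-capture framework might.
by_stream = {}
for e in evals:
    by_stream.setdefault(e.stream, []).append(e.task)
print(by_stream)
```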
Ekblom (2012e) describes a process evaluation case study of 5Is used in ‘failure mode analysis’ – identifying a range of things that went wrong in the involvement of various stakeholders in the Grippa project, intended to reduce the theft of bags in bars.
The evaluation material in the 5Is Framework can be used to guide practitioners in doing their own within-project evaluation. It can also be used to assist professional researchers/evaluators undertaking rigorous and independent evaluations of the action undertaken by the practitioners. Professional evaluations can be done summatively (where the evaluators maintain strict detachment from the project action) or formatively (where they have ongoing involvement during the course of project implementation, feeding in guidance and collaborating with the practitioners to maximise the chance of success). Action researchers can also initiate formal field trials/demonstration projects themselves.
The 5Is Framework is not anchored to a particular technical type of evaluation methodology. However, it is more attuned to Realist Evaluation with its focus on exploring causal mechanisms and contexts, both of interventions and of wider processes e.g. in Implementation and Involvement. Newton (2022) sets out a framework for combining 5Is and Realist Evaluation.
The Impact task stream of 5Is (which covers both impact and process evaluation) is introduced here. It includes a list of the broader kinds of information to be sought from impact evaluations. Chapter 15 in the 5Is book sets out the task stream in detail.
The complete master list of 5Is headings, for use as a knowledge-capture framework covering process and impact evaluation information, can be viewed here and downloaded as a Word document here.