Evaluating for the Future: How Evaluative Monitoring can transform Complex Evaluation Landscapes 

As global funding landscapes shift and climate and environmental challenges grow more complex, traditional monitoring and evaluation approaches are under increasing pressure to deliver faster insights and more actionable learning. Two of our Ecorys UK evaluation specialists are at the forefront of this evolution and share their experience and lessons learned: 

About our experts


Cara Stoney is a Senior Research Manager at Ecorys, working on international projects with a focus on climate and environmental challenges, as well as development finance and innovative financing mechanisms.

Tom Bowe is a Research Manager at Ecorys, specialising in climate and environment work across both the International and UK Markets with a focus on biodiversity conservation, climate resilience, green infrastructure, and climate / nature finance. 

Drawing on our work evaluating a complex, UK ODA-funded climate and nature portfolio, we have had the opportunity to test approaches that are becoming increasingly important for the future of evaluation. One of these is the role of evaluative monitoring in complex portfolio evaluations.

In this blog, we share key lessons from our experience and explore how evaluative monitoring can provide critical insights and improve decision-making within complex operating environments. We recently shared our approach at the Ecorys Summer Methods Festival, which provided a valuable space to exchange experiences and insights with colleagues across the sector; those exchanges have further informed our reflections. 

Why evaluative monitoring matters 

In our experience, monitoring and evaluation are often treated as separate functions rather than as part of a connected system: 

  • Monitoring typically tracks programmes and policies against indicators, comparing expected results with those actually achieved. 
  • Evaluation, by contrast, usually assesses effectiveness and outcomes at a set point in time. It can be conducted at the beginning of an initiative (formative), at the end (summative) or during implementation (developmental), and can include process, impact, and value for money evaluations. 

Evaluative monitoring bridges these two approaches. It applies evaluative thinking to the ongoing analysis of monitoring data, asking not just “What do we see happening?” and “What is working, and what isn’t?” but also “Why is this happening, and what should we do about it?”

This deeper layer of enquiry: 

  • Enables evidence-informed improvements and adjustments in real-time 
  • Allows reflection and assessment to be more structured, and in the case of large portfolios, more systematic across programmes and projects

Unlike traditional monitoring, evaluative monitoring does not stop at measuring progress against indicators. It digs deeper and places data within its wider context of governance, decision-making, incentives, and stakeholder relationships. This enables us to better understand how and why change is (or isn’t) happening.

Evaluative monitoring is not a completely new approach, but it is still emerging and gaining traction as a valuable tool. Experiences from programmes such as BRACED have shown the value of this approach. Instead of treating monitoring purely as an accountability exercise, evaluative monitoring interrogates design assumptions and explores the mechanisms underlying outcomes. This requires integrating evaluative thinking into monitoring processes to answer the what, why, and how of change.

Key drivers behind the shift 

We’re seeing a growing demand from donors and development partners for deeper insights and nuanced approaches. Factors driving this shift include: 

  • Limited detail in programme and project reporting and challenges accessing high-quality data. 
  • Budget constraints, and reduced appetite for conventional evaluations. 
  • A stronger emphasis on learning, adaptation and recommendations throughout implementation. 
  • Long timescales for environmental and climate outcomes to materialise. 
  • Limitations of traditional baseline, midline and endline evaluation designs for portfolio-level learning. 

Evaluative monitoring directly addresses these challenges by providing real-time feedback and continuous learning loops, ensuring insights are embedded throughout the programme lifecycle rather than captured only at the end.

Applying evaluative monitoring in practice 

In our work, evaluative monitoring has allowed us to add depth and practical value beyond standard monitoring deliverables. Some of the ways we have applied it include: 

  1. Providing more detailed narratives on output and outcome performance. 
  2. Outlining reasons for variations and contextual challenges. 
  3. Offering actionable, specific recommendations for partners and funders. 
  4. Explaining where results are missing or unavailable. 
  5. Holding validation workshops with implementing partners to verify findings and share experiences. 

One of the ways we have applied evaluative monitoring is to routinely collect data on the likelihood of programmes and projects leading to transformational change. Our team integrated a ‘signals of change’ framework to critically examine outputs and outcomes. This helps us identify the underlying factors that trigger and accelerate systemic, sustainable change. For instance, a driver might be building capacity in ways that foster independence, ownership, and institutionalisation beyond the project lifecycle. We also gather data on mechanisms and enablers that support these drivers, such as creating incentives through evidence-sharing or enabling scalability and replication of innovations. 

Challenges and considerations 

While evaluative monitoring offers clear benefits, we have also experienced some challenges:  

  • Scope must be clearly defined: Evaluative monitoring is not a formal evaluation, and clarity is needed on project deliverables. Evaluative monitoring work tends to be guided by the results framework, and not the evaluation framework. However, outputs can be used to inform future evaluations, as well as learning priorities. 
  • Resource availability: Evaluative monitoring is resource intensive, so it requires careful budgeting and planning.  
  • Data variability: In the case of complex portfolios, the quality and availability of data and collaboration can differ, requiring adaptable approaches. 

Despite these challenges, evaluative monitoring provides significant value for large portfolios, helping bridge the gap between traditional monitoring and evaluation timelines. It supports ongoing learning, contextual understanding, and adaptation in complex operating environments. 

Looking ahead: the future of MEL 

The MEL landscape is increasingly being shaped by shifting funding realities and government priorities.  

Tightening budgets and shifting priorities: 

Global ODA budgets are shrinking – UK aid, for example, is set to fall from 0.5% to 0.3% of Gross National Income (GNI) in 2027 – and other donors, including the USA, France, and Germany, also made significant cuts in 2024. Funders are becoming more selective, prioritising impactful, strategic spending and seeking maximum learning from limited resources. 

The UK aid budget in 2025 and 2026 will focus on humanitarian, health and climate crises, with spending being prioritised through the most impactful multilateral organisations. This means that complex portfolio evaluation models are likely to become increasingly common. 

With funding tightening, flexible and adaptive MEL approaches are becoming critical. The FCDO’s 2025 Evaluation Strategy points to a growing interest in innovative methods for evaluating diplomacy.  

Embedding learning and evidence: 

Amidst declining budgets, demonstrating ongoing impact and embedding learning and evidence into decision-making throughout the lifetime of portfolios and programmes is more important than ever. Evaluative monitoring is one way in which we can provide these real-time insights and actionable learning for clients. By breaking down the silos between monitoring, evaluation and learning, evaluative monitoring allows MEL frameworks to be designed and delivered in a more integrated and strategic way. 

Strategic innovation: 

Looking ahead, the future of MEL will require both adaptation and innovation. As funding realities shift and client needs evolve, we need to be strategic, flexible and intentional in how we design and deliver MEL work. Evaluative monitoring offers an agile mindset that supports this. 

25 September 2025