Monitoring and Evaluation | Using data to create impact | IDR

Case study: The importance of nonprofit M&E systems

Monitoring and evaluation, or M&E, is a commonly used term in the social sector. For an organisation to have an accountable programme design, strengthening its M&E capacity is integral. How does an organisation continue to measure and evaluate its work while attempting to scale? What challenges does a nonprofit, especially one that started working at the grassroots, face in building a sustainable M&E framework? What sort of capacity building needs to take place and how does one gather the resources required? To answer some of these questions, this case study looks at Tapasya—a grassroots nonprofit that has implemented effective M&E as part of its model.

Tapasya was started in 2018 by Tapas Sutradhar and Mrinal Rao to support families from socio-economically vulnerable backgrounds in accessing government social welfare schemes. At its inception, the organisation began as a policy implementation agency. It had a small team of two founders, one project coordinator, and three helpline callers, along with a limited budget of INR 15 lakh per year for the first two years.

Since then, the organisation has burgeoned to a team of 30 and its budget has grown more than sixfold, totalling INR 1 crore as of FY 2023–24. One of the core reasons behind Tapasya’s growth was that the founders were motivated to put in place a strong M&E system right from the organisation’s inception. Having worked in the social sector prior to establishing their own nonprofit, they realised that an effective M&E system was necessary for two main reasons: assessing impact effectively and enabling the professional growth of the organisation’s employees. According to Mrinal and Tapas, “Even though we lacked the resources to build a strong system in Tapasya’s initial phase, the need for and importance of developing our M&E strategy was always clear in our minds.” This clarity pushed them to adopt key processes and systems that gave them leeway to expand their scope.

The importance of intermediaries

Since the co-founders realised that they had a lot to learn, they approached incubators at various stages in order to build key capacities and forge networks. Tapasya was soon successively incubated by Atma, UnLtd India, and The/Nudge Institute, which accelerated its organisational development.

Mrinal and Tapas emphasised the impact of the knowledge and support that these incubators offered. “Atma hand-held us through the nascent stages and helped build key areas of the organisation as well as our theory of change. Eventually, we became confident enough to make growth decisions independently. UnLtd India (UnLtd) supported us in identifying our niche and helped us deepen and validate our programme design and M&E, and The/Nudge Institute helped us look at the problem and solution differently. We discovered how a programme addressing a local problem can be scaled through various strategies to address a national issue.”

When the organisation was being incubated by UnLtd, it received mentorship in developing M&E strategies to measure both qualitative and quantitative impact. While Tapasya had previously focused heavily on data-driven measurements, UnLtd emphasised the importance of understanding the broader impact on the lives of the families that it works with. This insight highlighted the importance of empowering families and building community resilience by fostering behavioural change. It also resulted in the organisation monitoring and evaluating its interventions more comprehensively.

Growing organisations need strong M&E

The scale of the organisation’s impact has grown significantly over the years as it gradually built capacity on various fronts. For instance, during its first three years, Tapasya focused solely on enabling access to the benefits of Section 12(1)(c) of the Right to Education Act, supporting 22,000 children over this period. Subsequently, it progressed to enabling access to the Pradhan Mantri Matru Vandana Yojana—a maternity benefit scheme for women—and supported 6,000 women as part of this initiative. The leadership realised that many schemes were underutilised, prompting them to shift their model towards implementing multiple schemes across low-income communities in areas ranging from urban slums to remote Adivasi villages. To expand effectively, however, they needed to have a well-oiled M&E system.

The M&E framework enables the nonprofit to set targets for field workers. 

Tapasya made sure its work was highly measurable. To do this, it broke the work into several stages: first, identifying families in need; then onboarding them into the system; followed by determining their eligibility and verifying their documents. Finally, the organisation tracks the number of applications it has completed and which schemes the households have successfully accessed.

An added benefit is that the M&E framework enables the nonprofit to set targets for field workers. Mrinal elaborated on why they set higher targets than required from the beginning. She states, “In cases where families migrate or withdraw from the process, we may not be able to assist them throughout the entire procedure of receiving their social entitlements. Therefore, we always set higher targets to ensure that we have sufficient margins to be able to help the expected number of families. For example, if we wish to support 300 families, we reach out to 500 eligible families.” This approach ensures that M&E efforts align with the organisation’s overall strategy, allowing for effective tracking and evaluation of their impact.
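To make the arithmetic concrete, here is a minimal sketch of this buffer logic, assuming the dropout rate is estimated from past attrition; the function and the 40 percent figure are illustrative, not Tapasya's actual calculation.

```python
import math

def outreach_target(families_to_support: int, expected_dropout_rate: float) -> int:
    """Estimate how many eligible families to reach out to so that, even after
    some families migrate or withdraw, the support goal can still be met.
    expected_dropout_rate is the assumed share of families (0-1) who drop out
    before receiving their entitlements."""
    return math.ceil(families_to_support / (1 - expected_dropout_rate))

# With a goal of 300 families and an assumed 40% dropout rate, the outreach
# target works out to 500 families, matching the example above.
print(outreach_target(300, 0.40))  # -> 500
```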

Adopting a tech-driven M&E strategy

Tapasya’s programmes and outcomes have been measured by recording data through tech-driven strategies—each individual’s data plays a crucial role in monitoring the progress and delivery of welfare schemes. Technology serves as a cornerstone in this process, facilitating the mapping of individuals with eligible schemes and monitoring their progress until they benefit from them. Also, the daily work of each field worker can be tracked, building more accountability, and thereby enhancing the efficiency and credibility of the work.

The adoption of technology has made data collection much less complicated.

But it wasn’t always smooth sailing. There was a time when Tapasya’s impact was faltering as it struggled to implement a robust M&E system. The organisation was unable to accurately track the work done by team members. The community mobilisers used to visit families in the community, but there wasn’t an effective way of tracking information such as how much time they spent in each household, or how many visits they needed before a family received their entitlements. Even the leadership team were not able to prioritise the stage-wise progress of the work that they had to do.

In Tapas’ view, the adoption of technology has made data collection much less complicated. “Imagine if one field worker is working with 300–500 families—it is not possible to remember all the families’ information. A family ID number is generated every time a new family is entered into our database. The next time, they [the field worker] just have to type in the ID number, and all the previously entered data about the family will pop up.” Through the family ID database, the organisation is able to log visits to each household and thereby track the aforementioned parameters that had previously remained unmonitored.  
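The description above suggests a simple keyed record store: each family gets a stable ID, and every field visit is logged against it. The sketch below is a hypothetical illustration of that idea; the field names, ID format, and helper methods are assumptions, not Tapasya's actual database schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class FamilyRecord:
    family_id: str
    stage: str = "identified"                     # e.g. identified -> onboarded -> documents verified -> applied
    schemes_in_progress: list = field(default_factory=list)
    visits: list = field(default_factory=list)    # one date per household visit

class FamilyDatabase:
    def __init__(self):
        self._records = {}
        self._counter = 0

    def register(self) -> FamilyRecord:
        """Generate a new family ID the first time a family is entered."""
        self._counter += 1
        record = FamilyRecord(family_id=f"FAM-{self._counter:05d}")
        self._records[record.family_id] = record
        return record

    def log_visit(self, family_id: str, visit_date: date) -> FamilyRecord:
        """Look up a family by ID so previously entered data 'pops up', and log the visit."""
        record = self._records[family_id]
        record.visits.append(visit_date)
        return record

# Usage: a field worker registers a family once, then only needs its ID thereafter.
db = FamilyDatabase()
fam = db.register()
db.log_visit(fam.family_id, date(2024, 5, 9))
print(fam.family_id, fam.stage, len(fam.visits))  # e.g. FAM-00001 identified 1
```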

The automation of most of its data collection tools made Tapasya’s tracking system watertight. Witnessing a larger number of families receive their entitlements year on year (via the data they collected) helped generate greater accountability and better performance, as the team members felt a greater sense of ownership over their work. 

Nonprofits also benefit greatly when funders view them as equal partners. | Picture courtesy: Pexels

Both tangible and intangible impact

All partners seek concrete evidence of impact, whether through quantitative data, case studies, or success stories. With relevant data readily available for all projects, the organisation’s partners were able to view the real-time progress of its programmes, which was vital to ensuring trust and transparency. Systematic data collection enables the nonprofit to conduct research studies, validating its work and informing stakeholders within the ecosystem. This data can be shared with government departments to inform intervention strategies, can cater to emerging partner needs, and can be used to design projects that directly support communities on the ground.

Tapas indicated that sometimes impact may take forms that cannot necessarily be quantified. Such forms of impact cannot be accounted for as easily as data-driven measures in an M&E framework but are just as meaningful for evaluating impact.

For Tapasya, one such outcome has been the changes in the lives of the sakhis (field workers). These women belong to the communities that they work in and were previously not engaged in any formal employment due to family responsibilities or a lack of opportunities. Once they were provided with the opportunity to work within their community and trained, they witnessed a remarkable transformation. The sakhis reported feeling more aware and informed, noting that they feel valued by their community as integral resources. Even after the organisation moved on to working in other geographies, sakhis have continued to serve as agents of change within their communities. Therefore, the organisation’s impact has stretched beyond just the families captured in their database.

Drawing on their sectoral experience and learnings over their six-year journey at Tapasya, Tapas and Mrinal offered the following advice for other grassroots nonprofits and funders.

Advice for nonprofits

1. Engage in continuous experimentation

According to Mrinal, it’s vital to engage in ongoing experimentation and embrace failure as a means of gaining insights. This allows for informed decisions regarding which strategies to retain and which to discard. At Tapasya, through comprehensive M&E processes and programme design, a wealth of data points has been collected to inform decision-making. This has helped external stakeholders gain confidence in the organisation, as evidenced by its being incubated for six successive years and a sevenfold increase in funding since inception.

2. Never stop learning

The process of learning never stops because the challenges faced by an organisation change as it grows. Stagnation in knowledge indicates a halt in progress.

For instance, when Tapasya conducted a survey of the penetration of 12 welfare schemes in Janta Vasahat, Maharashtra’s second biggest slum, they discovered that the locals were largely only accessing rations under the Public Distribution System, and approximately 80 percent of these schemes were not being availed of by eligible families. They realised that the poor coverage of welfare schemes resulted from a variety of factors, including the government system being overburdened, the lack of support systems, and challenges with service delivery. When Tapasya extended their work to other communities, it found similar patterns, validating the need for the organisation’s interventions.

As the organisation expands, both M&E practices and programme designs must adapt accordingly. Mrinal states, “When Tapasya grew from implementing one scheme to multiple schemes, our entire strategy and M&E had to change with this decision. The funnel to work around one scheme versus multiple schemes is completely different. When working towards implementing a single scheme, the eligibility criteria, necessary documents, and application process were straightforward. However, when you start working on multiple schemes, the complexities increase.”

3. Stand by your ethos

Amid suggestions to transition to a for-profit model, the idea of monetising services arose, especially during the COVID-19 pandemic when funding was scarce. Tapasya conducted a small-scale experiment with 100 families, offering a subscription-based service at a nominal fee of INR 100. Within three months, it became clear that while half of the families were willing to pay and appreciated the service, the other half faced financial constraints and hesitated to enrol. The organisation introspected its core purpose, and it became evident to the team that prioritising profit over their mission of empowering communities would undermine the organisation’s fundamental principles.

Advice to funders

1. Build in the freedom to experiment  

Mrinal highlights that funders must recognise the potential for impact when nonprofit partners are granted the freedom to experiment and evolve. In her opinion, “Donors should not fear failure and should support new ideas or issues emerging in the new [nonprofit] ecosystem. For example, our financial partner Indus Action gave us this freedom right from when they started funding us. They have always encouraged us to experiment, be it around working to implement new schemes, redesigning the project, making M&E changes, building tech, or other aspects of the programme. When your partners trust you, you don’t fear failure.”

2. Engage closely and equitably with nonprofit personnel

Donors should deepen their engagement with their partner nonprofits to better understand one another and build a co-learning system for all. This fosters a collaborative learning environment, benefiting everyone involved. Given the positive influence of the sustained and meaningful engagement with funding partners such as Azim Premji Foundation and SVP India, Tapas believes that, “When the donor does not limit themselves to only seeing through the utilisation of the funds but instead engages with the nonprofits in designing the processes and programmes, it always turns out to be valuable.”

According to Tapas and Mrinal, nonprofits also benefit greatly when funders view them as equal partners. They emphasise that engaging as equal partners contributes to the emergence of better ideas, support, and collaboration, and creates a healthy relationship between funders and their nonprofit partners. This approach enables organisations to grow while expanding funders’ portfolios, thereby enhancing the understanding of how to secure and utilise funds effectively.

Blended models are the future

Tapasya’s M&E is among the key anchors that facilitated its growth. And as a result of this growth, Tapasya has already supported 35,000 eligible families in accessing the benefits of various social welfare schemes. They aspire to support 1 million eligible families across India to access government benefits by 2030.

Tapas spoke with confidence about the organisation’s sustainability. He says, “We have a blended model where we work simultaneously with both the government and the community. When the government is supportive, particularly in specific departments and schemes, we can assist a vast number of households. Apart from government collaboration, we also directly engage with communities to provide support, albeit to a lesser extent. However, our operations never halt. These adjustments have evolved over time and are now part of our strategy. Our team continually learns, unlearns, and relearns, maintaining an ongoing cycle. We’re still evolving, and future discussions will likely bring further changes based on our experiences.” By centering M&E throughout its programme design, providing the flexibility to experiment, fail, and innovate, Tapasya is in a strong position to scale and create further impact at the grassroots and national level.

About Tapas and Mrinal

Tapas Sutradhar, co-founder and CEO of Tapasya, has 13 years of work experience in the development sector. He manages partnerships, compliance, and technology at Tapasya. Tapas has a master’s degree in social work.

Mrinal Rao, co-founder and COO of Tapasya, has 13 years of work experience in the development sector. She oversees operations, people management, and research at Tapasya. Mrinal has a master’s degree in social work.

Know more

  • Read this article to learn whether M&E should be entrusted to an external or internal team.
  • Read this article to learn how to build expertise in the M&E field.

Lessons from participatory research in Jharkhand

As researchers dedicated to creating social impact, we strive to improve the lives of communities impacted by our work. However, the same communities are rarely included in our research processes beyond answering survey questions. Often, they are limited to being data sources, from whom we generate insights for policymakers to guide their programmatic decisions.

Thus, those most impacted by policy decisions often have the least influence over them—they are often excluded from identifying the problems/needs they care about most, interpreting data findings, and shaping recommendations. This approach not only overlooks the valuable locally contextualised knowledge they possess, but also fails to uphold the principles of dignity.

Participatory approach to research

A participatory approach to research empowers communities to actively participate in decisions that impact their lives. It recognises the importance of listening to the voices of communities regarding what evidence is needed and how it should be interpreted and used. Evidence is a valuable resource that should be accessible to communities and influenced by their perspectives, ultimately shaping policy decisions. 

Recently, through Project Sampoorna in Jharkhand, India (IDinsight is the monitoring and evaluation partner in the consortium), IDinsight used communication techniques and participatory methods like visual tools (storytelling boards and videos) to engage with school students and teachers—the primary respondents in the research. Through these tools, IDinsight shared some of the findings generated from the project’s baseline. This approach ensured participants’ inclusion in the interpretation of data, thereby helping shape programmatic action based on their in-depth knowledge of school realities—a step toward more participatory research.

About the project

Project Sampoorna is a social-emotional learning (SEL) initiative led by the Government of Jharkhand in partnership with a consortium of non-profit organisations. At the request of our partners, we integrated a participatory lens in our evaluation efforts to ensure greater involvement of students and teachers. 

Since the idea of using a participatory lens was explored after the implementation and evaluation designs were already finalised, the participatory elements were adapted accordingly and were focused mainly on sharing baseline findings.

We had collected baseline data primarily through student and teacher interviews and classroom observations. We wanted to share our learnings on student social-emotional skill levels, teacher behaviour, school climate, etc. to help teachers and students use this new evidence. We also wanted to get teachers’ and students’ input to contextualise our findings. 

However, communicating complex survey findings with teachers and students, and ensuring their engagement, was fairly new to IDinsight—we typically share findings with policymakers and decision-makers but rarely with community members on the ground. We knew that sharing findings should not involve technical terms or a digital presentation; instead, we needed something simple, fun, inclusive and relatable. We worked with IDinsight’s Dignity and Lean Innovation teams to develop a plan and selected three school activities:

  1. Short video on baseline findings shared with teachers and parents on WhatsApp 
  2. Storyboard presentation and ‘Draw Your Vision’ activity with students
  3. Discussion on baseline findings with teachers

In this blog, we share the team’s lessons from planning and executing a participatory approach to sharing our findings with students and teachers in government-run schools of Jharkhand.

Communicating complex survey findings with teachers and students, and ensuring their engagement, was fairly new to IDinsight. | Picture courtesy: IDinsight

Key lessons learnt from participatory work in schools

Phase 1: Planning

Lesson 1: Evidence/data needs careful framing to ensure relevance, simplicity, and sensitivity

To engage stakeholders with our findings effectively, the careful selection and framing of the data were crucial. 

For the video and storyboard presentation, we started by identifying the target audience and clearly defining key takeaways we wanted to communicate. We then shortlisted the most relevant and easy-to-understand findings to include.

For the discussion with teachers, we selected findings that, apart from being relevant and simple, were also those that we needed additional context on. We were mindful of sensitively framing the findings, especially those that highlighted improvement opportunities. Take a hypothetical example: If a finding states that “60% of teachers scold students for wrong answers,” we frame it as “most students feel cared for and heard by their teachers; however, data also shows that some teachers might scold students in the class.” In this way, we combine a negative finding with a positive one.

Input from teammates with experience in community engagement, including those outside of the project team, as well as our implementation partners who routinely work with these participants, played a valuable role in framing the findings. Additionally, we sought feedback from a group of teachers through a small pilot to ensure the findings were easy to understand, allowing the key takeaway to shine through.

Lesson 2: Visualising step-by-step execution of activities before school visits helps identify potential roadblocks and brainstorm solutions

To ensure smooth execution of our planned activities, we visualised the entire process from entering the school to conducting the activities to leaving the school. This helped us identify potential challenges, develop solutions, and gain more confidence. 

We planned to conduct the storyboard presentation and drawing activity with students, and discussion with teachers in each school on a single day; hence, time optimisation was of utmost importance. To ensure efficient dissemination, we talked to school leaders, teachers, and implementing partners in advance. We clearly shared the goals of the school visits, communicated the logistical support needed, and confirmed teacher and student availability. This helped us reduce the time needed to initiate and organise the activities upon reaching the schools.

Input from teammates with experience in community engagement played a valuable role in framing the findings. | Picture courtesy: IDinsight

Phase 2: Execution

Lesson 3: Communication techniques should be familiar, inclusive and relatable to ensure audience engagement 

To engage teachers and students with data effectively, we needed to use formats that resonated and had limited technical concepts. Our usual methods of sharing findings with clients would not have suited this context. 

We therefore chose storytelling and activity-based techniques to communicate. For instance, the storyboard presentation and drawing activities we chose were part of the students’ day-to-day academic curriculum. The story we built was quite relatable because it included a teacher trying to improve her relationship with students and working with them to improve the class climate. Alongside verbal storytelling, we used a storyboard printed on a large flexible material stuck to the class blackboard—which, again, the students were used to looking at every day during class. The storyboard helped add to how relatable the story was—we used characters that looked like them, wore the same uniforms, and sat in similar classrooms. The drawing activity was a group activity; students enjoyed working with colours and collaborating on what they wanted to draw.

Since videos are always fun to watch, easy to understand, and shareable on social media, we developed an animated video for teachers and parents. When we showed this video to teachers, they found the story similar to what they had experienced in schools and were positive that both parents and other teachers would like and learn from it! Before starting the drawing activity, we also showed the video to the students, which inspired their drawing ideas.

Lesson 4: Using local and colloquial language by a familiar/relatable presenter helps the audience connect with the activities

To ensure relatability with the students, our team’s Field Manager took on the role of the storyteller for the storyboard presentation. It was important for the presenter to be someone the students could connect with in terms of language and cultural familiarity. We created a concise script in the local language with a colloquial touch, making it simple for students to understand.

As storytelling was a new format for IDinsight, we conducted multiple mock sessions with the team to refine the script and improve the tone and energy of delivery. Once finalised, our Field Manager diligently rehearsed the script with teammates and children in his community to improve its delivery. 

While delivering the storyboard presentations in schools, we actively engaged the students by asking simple questions they answered in unison. This interactive approach helped maintain their attention and connection with the story.

Similarly, the video script went through several revisions with our video production agency to ensure the language avoided jargon and appeared friendly and relatable.

Lesson 5: Creating a comfortable and safe environment is necessary for good participation and candour

IDinsight’s interaction with students and teachers is typically limited to when we visit schools for data collection on our monitoring and evaluation work. This was the first time we visited schools to share our findings instead, and creating a comfortable environment for students was a top priority for their participation, enjoyment, and learning. 

We collaborated with our implementation partners, who regularly engage with schools. Their presence helped us establish a rapport with school leaders, teachers, and students. Our partners facilitated introductions, conducted icebreaker games with students, and helped us communicate better with teachers and students. 

We also actively participated in icebreakers, which helped students feel at ease with us. The “draw your vision” activity allowed quieter students to express themselves through art, ensuring inclusivity. We also emphasised that participation in the activities was voluntary and respected students’ choice not to participate.

With teachers, we initiated conversations with a round of introductions and discussing the subjects they teach. We empathised with their experiences and challenges, creating a comfortable space for them to share their thoughts and opinions openly.

Phase 3: Insights Generation

Lesson 6: Document participants’ insights and recommendations to inform programme design and implementation

Given our goal of seeking input on the baseline findings, our activities were specifically designed to generate valuable insights. We took diligent notes on teacher and student responses and observed their levels of engagement. While the input on findings from teachers was relatively straightforward, the feedback from students was particularly interesting. 

This is because the latter took the form of semi-structured discussions and drawings, which we reviewed to derive meaningful insights. We shared these valuable insights with our partners to inform programme improvement efforts. For example, an artwork showcased classmates supporting a student with a physical disability and including him in their playground games. This vision could be used to build a student parliament-led project to ensure a disability-friendly school environment and infrastructure.

What’s next?

Our initial foray into participatory methods as part of the Sampoorna project has been a valuable learning experience. These insights will shape our future work and contribute to the broader landscape of similar projects at IDinsight. As we move forward, we are excited to refine our approach further, deepen our collaborative efforts, and continue making a positive impact in the communities we serve.

Acknowledgements

I would like to thank Sumedha Jalote, Neha Raykar, and Debendra Nag for their reviews and valuable input on this blog. Special thanks to Tom Wein for his guidance in shaping this work and encouraging thoughtful reflection and knowledge sharing.

This article was originally published on IDinsight.

How can nonprofits use psychometric tools effectively?

In various sectors—from education to organisational behaviour—psychometric tools are increasingly being used to make sense of abstract concepts. These are instruments or assessments designed to measure the psychological traits, abilities, attitudes, and characteristics of individuals. They are utilised to quantify and evaluate various aspects of human behaviour and cognition.

The tools, while powerful, often produce intricate and elusive results, pushing the limits of traditional empirical analysis. Their strength lies in being able to bridge the gap between abstract ideas and tangible metrics, appealing to data-driven individuals as well as those who value qualitative insights. Nonprofits commonly use these tools to measure programme impact in various sectors such as education, life skills, livelihood, and health, as well as in areas such as identifying needs, planning curriculum activities, monitoring client progress, and evaluating organisational culture.

Our organisation, Udhyam Learning Foundation, has leveraged these psychometric tests to detect shifts in learners’ mindsets and attitudes. However, a critical question looms: Are the results consistently trustworthy and precise?

To deepen our understanding, we initiated an in-depth study, partnering with seasoned experts to fine-tune our approach and improve testing accuracy. We’ve pinpointed common hurdles that come with the application of these tests, which, if overlooked, can skew results and lead to misjudgements.

Validity is important as it ensures that the test measures what it intends to measure. | Picture courtesy: PickPik

Using tests that are not reliable or valid

Reliability is crucial because it ensures consistent and dependable scores. Without it, a test will yield erratic results, making it difficult to draw any meaningful conclusions. The lack of reliability in psychometric tests is a pertinent concern. A 2010 study found that the validity of the 16PF (Sixteen Personality Factor Questionnaire), a very commonly used personality psychometric test, varied across different cultures. Although the test had good validity in Western cultures, it was less valid in non-Western cultures.

Validity is equally important as it ensures that the test measures what it intends to measure. When we create our own tools without a solid empirical foundation, we cannot guarantee the accurate assessment of the skills, traits, or knowledge we aim to evaluate. This undermines the credibility and usefulness of the results.

At Udhyam, we recently tested the validity of the standardised grit scale we’ve previously used in our work. The scale measures a person’s passion and perseverance for long-term goals. Through this examination, we discovered that the scale’s reliability and convergent validity were poor. Furthermore, it did not exhibit adequate psychometric properties for the sample we used it on. We were thus prompted to conduct further analysis, exploring whether certain items or questions required adjustments to better align with our data set.

This reinforced the importance of testing the reliability and validity of the instruments beforehand. The tests should produce similar scores when administered to the same person on different occasions, and also produce scores that are related to the target skills or knowledge.
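For illustration only, here is a minimal sketch of two such checks: internal consistency (Cronbach's alpha) and a test-retest correlation. The item responses are made up, and the 0.7 cut-off noted in the comments is a common rule of thumb rather than Udhyam's actual analysis pipeline.

```python
import numpy as np

def cronbach_alpha(item_scores) -> float:
    """Internal-consistency reliability for a (respondents x items) score matrix."""
    scores = np.asarray(item_scores, dtype=float)
    n_items = scores.shape[1]
    sum_item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (n_items / (n_items - 1)) * (1 - sum_item_variances / total_variance)

def test_retest_correlation(totals_t1, totals_t2) -> float:
    """Correlation between total scores from two administrations of the same test."""
    return float(np.corrcoef(totals_t1, totals_t2)[0, 1])

# Illustrative data: 5 respondents answering the same 4-item scale on two occasions.
time1 = np.array([[4, 3, 4, 5], [2, 2, 3, 2], [5, 4, 5, 4], [3, 3, 2, 3], [1, 2, 1, 2]])
time2 = np.array([[4, 4, 4, 5], [2, 3, 2, 2], [5, 5, 4, 4], [3, 2, 3, 3], [2, 1, 2, 1]])

alpha = cronbach_alpha(time1)                                    # >= 0.7 is a common rule of thumb
retest = test_retest_correlation(time1.sum(axis=1), time2.sum(axis=1))
print(f"Cronbach's alpha: {alpha:.2f}, test-retest r: {retest:.2f}")
```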

Developing in-house psychometric-like tools

Organisations may attempt to create their own psychometric-like tools to address challenges such as the assessment form being too lengthy or parts of it being irrelevant to the programme it is being deployed for. For instance, many times organisations choose to combine multiple psychometric scales and their respective questions to make one scale that they believe will address key aspects of their programme or interventions, while simultaneously ensuring that the questionnaire is short enough to be completed quickly. However, developing such tools without adhering to rigorous processes can result in issues related to reliability and validity.

Furthermore, creating psychometric-like tools without expertise in test construction can introduce unintended biases or skewed measurements. Professionals in psychometrics have the necessary knowledge and skills to ensure fair, impartial, and accurate assessments. Developing tools without this expertise may lead to biased evaluations or discriminatory practices. For instance, there is evidence of tools exhibiting bias against women or specific racial groups, primarily due to the absence of these demographics in the initial samples.

Developing and validating psychometric tests require a substantial amount of time, effort, and resources.

However, developing and validating psychometric tests require a substantial amount of time, effort, and resources. The assessment must undergo multiple stages, including item development, pilot testing, data collection, analysis, and refinement. Organisations may not always have the expertise or resources to undertake this comprehensive process. In such cases, relying on established and validated tests developed by experts can save time and ensure quality assessment. If organisations choose to build their own tools, it is highly recommended to conduct reliability and validity testing and follow best practices for tool development. Alternatively, organisations can seek assistance from the original creators or authors of such tools to ensure effective implementation. Collaborating with experts and reaching out to the broader community is encouraged during tool problem-solving sprints.

Using tests that are not culturally appropriate

Many psychometric tests are developed in Western countries and may not be culturally appropriate for use in other parts of the world. This is because the tests might be based on values and norms that are not shared by that particular culture; this applies to India as well.

Education and literacy also significantly impact scores on various types of tests, such as those assessing working memory and visual processing of certain indigenous populations.

Therefore, when selecting psychometric tests for India, it is important to choose only those that have been validated for use keeping the cultural context in mind. There are a number of tests that have been specifically developed for use in India. For example, NIMHANS Neuropsychological Battery, Indian Adaptation of Wechsler Adult Performance Intelligence Scale (WAPIS – PR) by P Ramalingaswamy, and more.

Using tests that are not aligned with programme goals

It is important that the psychometric tests used by the organisation are aligned with the goals of their programme. If the tests do not measure the specific traits, skills, or knowledge that the programme is designed to develop, then the results of the tests will not be meaningful. Using the wrong tests can lead to incorrect diagnoses, which bears serious consequences such as stigma or missed opportunities for support.

It is imperative to use tools that match the cognitive development of individuals, ensuring that data collected is both accurate and fair.

For example, we have adopted psychometric tools to evaluate mindsets, including self-awareness, grit, and self-efficacy. If our entrepreneurship curriculum delivered to learners aged 14–18 doesn’t directly address the enhancement of these specific traits, this gap would make it challenging to align the outcomes from the psychometric evaluations with our curriculum’s content. As a result, establishing a feedback mechanism to refine and improve our curriculum interventions would become a hurdle. In order to obtain accurate data from psychometric tests, it is essential that the curriculum of the programme is appropriately tailored to meet the intended programme and learning objectives.

Administering psychometric assessments to a population requires a careful evaluation of age appropriateness and literacy levels. It is imperative to use tools that match the cognitive development of individuals, ensuring that data collected is both accurate and fair. Moreover, taking into account diverse literacy levels is vital for fostering clear communication and preventing potential bias or frustration in the assessment process. These considerations uphold the ethical standards in the assessment of individuals’ abilities and attributes.

Translating psychometric tools from English to Indian languages

Language and cultural nuances play a significant role in psychometric assessments. The meaning of certain words or concepts may vary across languages and cultures. Direct translations of items or instructions from English to Indian languages without considering these nuances can lead to misunderstandings or misinterpretations, affecting the accuracy and validity of the assessment results.

For instance, if a question in English asks about ‘self-esteem’, a literal translation might use a Hindi term that refers to ‘self-worth’ or ‘self-respect’. While these may be related concepts, they don’t have the exact same meaning as ‘self-esteem’, leading to a loss of the original nuance and potentially impacting the validity of the tool in the new cultural context.

The translation process for a psychometric tool should be rigorous and systematic to ensure the validity and reliability of the translated version of the tool. It involves guaranteeing conceptual equivalence, linguistic validation, cultural adaptation, back translation, and validation studies to create dependable assessments in the target language.

Know more

  • Learn more about how to conduct psychological assessment and evaluation.
  • Read this article on how nonprofits can optimise their monitoring and evaluation efforts.
  • Learn more about best practices for developing and validating scales.

What is lacking in life skills assessments in India?

Research conducted on the value of life skills has demonstrated its short-term and long-term impact, such as reduced emotional distress and increased classroom engagement, well-being, and academic performance. Measuring such skills is important as it helps gauge the effectiveness of the intervention, identify students who may need extra support, and inform policy and practice.

The National Education Policy 2020 recognises and ably articulates the importance of life skills in the Indian education landscape, and eight Indian states—Delhi, Uttarakhand, Karnataka, Jharkhand, Telangana, Nagaland, Uttar Pradesh, and Tripura—have introduced curriculums and teacher training to promote the social and emotional skills of students and teachers.

But how do we really know if these interventions are working in our local contexts? A straightforward response would be to assess the targeted skill sets of the participants before and after an intervention to understand its effectiveness. But collecting data at periodic intervals by default does not guarantee that this data and the measured changes are contextually and culturally appropriate.

In 2022, the Dream a Dream research team tried to understand the need for life skills intervention in government schools in Telangana’s Kondurg district. As part of the research, we asked teachers whether they had heard of life skills and if they could elaborate on what it means. A majority of the respondents defined life skills as ‘any skill that makes life easier’, and listed cooking, cleaning, and driving as examples. Although this response is arguably correct, we had intended something different.

Most of us who undertake field visits can admit that such situations arise more often than we would like to confess. Without an understanding of the community and their culture and norms, the questions we ask and the data we collect might not be useful, regardless of the methods employed. A contextual knowledge of the community needs to be at the core of life skills interventions since the various domains of life skills—such as communication, conflict resolution, and teamwork—are understood and operationalised differently across cultures.

Lack of contextual knowledge and unconscious penalisation of students

Though Eastern education philosophy has traditionally espoused concepts of holistic education, which included the social, ethical, emotional, and moral development of students, the scientific tools for measuring these skills were pioneered in the West in the late 1930s. Many life skills measurement tools such as the Devereux Early Childhood Assessment, Social-Emotional and Character Development Scale, and Social Skills Improvement System Rating Scales were developed in Western countries. Therefore, the questions being asked in a survey based on these tools might not be relevant for a community outside of the West. Even if the skills covered by the survey tool are relevant, it may not be able to capture cultural nuances.

For example, while independence is encouraged in the West, accomplishing tasks in a team is encouraged in collectivist societies such as India. Likewise, eye contact is encouraged in conversations in the West, but in many cultures, especially in Asia, eye contact with older people is discouraged or considered disrespectful. So, if a survey tool standardised for a Western population is used in India, students are likely to be scored differently even if they are competent in a particular skill.

The language used in the construction of the tool is another factor of concern. Although students in English-medium schools across India are taught the language, the meanings attached to specific words may vary based on their cultural contexts and understanding of the mainstream usage of the language. For instance, in one of our assessments, students from a low-income rural school had difficulty understanding the statement ‘I share my problems with others.’ Through conversations with the students, our team realised that they associated ‘sharing’ with materials that can be physically divided such as chocolates and pencils, and use ‘talk/say’ when it comes to communicating something that is on their mind. So, by design, assessments can favour those with an understanding of the mainstream usage of language and unconsciously remain challenging for students who do not have that same kind of access or exposure.

Furthermore, life skills are interconnected and have varied dimensions—each skill has multiple characteristic attributes. For example, within the domain of ‘working with others’, the skills required to successfully complete a project would be different in a controlled space such as a classroom versus an open space such as a playground or a picnic. The complexity intensifies when students are comfortable expressing their skills in certain environments but suppress themselves in other spaces because of social norms. For instance, due to skewed power dynamics in the community, students from marginalised castes might not voice their opinions or initiate conversations in a classroom that they share with dominant caste students from the same locality. Their lack of interaction or communication in a classroom environment might not be an adequate reflection of their skill set. Therefore, when assessments do not consider these social realities, they produce inaccurate data that is neither reflective of the student and their social context nor useful in decision-making.

It is important to consider how local actualities validate or invalidate a student’s abilities, identity, and culture. | Picture courtesy: Pixabay

Keeping student context at the centre

Knowing that every aspect of an assessment is influenced by context and local actualities, it is important to consider how they validate or invalidate a student’s abilities, identity, and culture. Intentionally creating space for different learning environments, multiple ways of demonstrating skill, and multiple types of evidence for skill improvement creates equitable and just assessment processes. More importantly, it fosters a sense of belonging among students.

The Life Skills Assessment Scale (LSAS) by Dream a Dream was one such attempt. Earlier, we struggled to measure the impact of our life skills programmes since the standardised scales available either measured specific life skills or were not contextually appropriate for disadvantaged communities. Taking the contextual realities of disadvantaged communities into consideration, the tool was designed in collaboration with clinical psychologists Dr Fionna Kennedy and Dr David Pearson. The tool enables trained staff to assess young people in different settings across various activities.

Some of the proven ways to assess life skills in a contextually relevant manner include behavioural observation in different settings, situational judgements, anchoring vignettes (getting participants to rate other people’s behaviour before rating one’s own, and using the former as an anchor to rate their individual response), and creating community-specific norms for test administration. But these methods are scattered and not readily available for public use. Addressing these access gaps and covering understudied skills such as cooperation, negotiation, and critical thinking in assessments is therefore critical.
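As one concrete illustration, anchoring vignettes are often scored by recoding a self-rating relative to how the same respondent rated the vignettes. The sketch below is a highly simplified version of that nonparametric recoding idea, with made-up ratings; real implementations handle ties and inconsistent vignette orderings more carefully.

```python
def anchor_adjusted_score(self_rating: int, vignette_ratings: list) -> int:
    """Place a self-rating on the respondent's own scale, defined by their ratings
    of a fixed set of vignettes (assumed here to run from mildest to most severe).
    Returns how many anchors the self-rating meets or exceeds:
    0 = below all anchors, len(vignette_ratings) = at or above all anchors."""
    return sum(1 for anchor in sorted(vignette_ratings) if self_rating >= anchor)

# Two students give themselves the same raw score of 3 on a 1-5 teamwork item,
# but use the response scale very differently when rating the same three vignettes.
print(anchor_adjusted_score(3, [1, 2, 4]))  # -> 2: above two of their three anchors
print(anchor_adjusted_score(3, [3, 4, 5]))  # -> 1: at or above only the mildest anchor
```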

The unlearning we need

Although the acknowledgement of the need for life skills in mainstream education in India is a laudable move, ensuring the just and equitable assessment of students’ skills requires unlearning and innovation on many fronts.

At the core of this attempt is to think about whether what we accept as legitimate evidence of learning is inclusive of different communities and their ways of learning and expression. A practical way to approach this would be to ask a few grounding questions at the launch and during the assessment process. Some examples include:

  • Do we comprehend the community’s knowledge/understanding of what we are trying to measure?
  • Are the measurement tools prepared with a grounded awareness of the concept? Was the measurement tool tested for validity in a particular context?
  • Does the measurement process provide mechanisms to capture diverse evidence of learning/improvement from the community?
  • Is the data collection approach culturally sensitive?
  • Will our data analysis, interpretation, and recommendations be conducted in light of the contextual realities?

When assessment tools are developed with a grounded and intersectional lens, they can bring forth unique and nuanced insights and can ably help ascertain the effectiveness of different programmes. We must acknowledge that life skills assessment tools need to be contextually designed to ensure there are no chances of oversight or misinterpretation.

Know more

  • Read this report on the importance of weaving life skills into school education.
  • Read this article to learn more about the importance of intersectionality in social-emotional learning programmes.

Is data meant for policymakers alone?

Collecting and analysing data is an essential part of programme evaluation and improvement. By collecting and analysing data on student performance, attendance, and engagement, for example, teachers can improve their classroom practices. Administrators can use consolidated data to gain insights on how well a programme is functioning, understand which goals are being met, and identify areas that need improvement. The same data can also help policymakers decide where to allocate resources and which interventions to implement.

At Gyan Prakash Foundation (GPF), we are working to bring about a fundamental change in learning through competency-based education in rural government schools in four districts of Maharashtra (Parbhani, Nandurbar, Satara, and Solapur). We have adopted a collaborative approach since our inception in 2011, working with teachers, school leaders, cluster officers, block- and district-level officials, and parents and community representatives—all of whom play a significant role in improving learning outcomes for children.

This article is based on our learnings from adopting a decentralised approach to data, which we believe can be used by all the stakeholders involved—policymakers and teachers alike—for effective decision-making.

How do different stakeholders use data?

In the education sector, if a programme involves children, the most common data point is child learning data. This data is typically for two sets of stakeholders: decision-makers (such as cluster heads and block and district education officers) and teachers in the classroom. The data intended for decision-makers is usually consolidated child assessment data, details pertaining to school infrastructure, plans for the following academic year, etc. This data is crucial to understanding the status of education in the cluster, block, and district, and is used to inform strategies, make programmatic changes, and write publicly accessible reports. The data meant for teachers is mostly individual child assessment data that helps inform their own teaching practices.

The direction of the data is almost always fixed—it flows from the teachers to the decision-makers.

Data is mostly demanded by those at higher levels in the hierarchy from those at lower levels and is often associated with performance, be it of a child, a teacher, or an official in the system. There is a tendency to evaluate a child based on their scores on a test, and a teacher based on the number of students in their classroom who are performing ‘well’. This association, however inaccurate it may be, has led to a general fear of data among teachers. As a result, teachers either disengage from the process of collecting data or tend to report to higher authorities what may not be a completely accurate representation of their students’ performance in the classroom.

The direction of the data is almost always fixed—it flows from the teachers to the decision-makers. For example, a teacher consolidates the data for their class and submits it to the cluster head, who takes the data sets of all the schools in the cluster and creates a cluster-level report. The cluster report will be submitted to the block head, who will consolidate the data of all the clusters to create a block report. This unidirectional flow of data shifts the ownership from one entity to another, ultimately rendering the information non-actionable at the source, which is the teacher. The teacher maintains a data set of each student’s performance that can be used to bring about a change, but in reality, it does not directly inform their teaching practices to improve children’s learning.

How can data be made more actionable?

Data is actionable when the user is empowered to make certain decisions based on it. Data should be easy to collect and organise and should be actionable to users at every level. In education, however, the data is only actionable at the policy level. The teachers’ lens is missing—in the context of a classroom, data that is collected should reveal the actual learning level of each child. Teachers should then be able to identify their students’ needs and adapt their approach based on the learning data of each child.

In our programme, we came up with three core strategies to make data actionable: make data actionable for teachers, change higher authorities’ perspective on data, and use tech to consolidate data.

Data is actionable when the user is empowered to make certain decisions based on it. | Picture courtesy: Mircea Iancu / CC BY

1. Make data actionable for teachers

Data is most useful to teachers when they can interpret it, that is, use it to assess children’s learning status and identify key actions to improve it.

To make this happen, in 2016, we created a simple offline spreadsheet with the teachers that helped them interpret data from their classrooms. Each row listed the name of a student and each column the skills or competencies they were expected to master (for example, recognising numbers up to 10, or understanding concepts of ‘more’ and ‘less’) within a certain period. Teachers were given green pencils to mark the competencies that students had mastered and red pencils to mark the ones that students had yet to master.

By simply glancing at a page, teachers could now deduce whether or not a child had mastered a certain skill or competency. It also became easy to see whether the class, as a whole, was doing well or not. Using this sheet, teachers could identify two major things: the specific support that individual students required, and the competencies that they needed to work on with the entire class. Combining these two, teachers were able to plan better for individual students and for the group as a whole, as well as for how to utilise time in the classroom. When teachers saw that this data was helping them work better with their students, it took away their fear of data. Once ownership of the data stayed with its source and the direction of information flow was reversed, the data acquired meaning. The same sheet could also be used to create data sets such as class averages that had to be submitted to higher authorities.
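
To make the logic of the red–green sheet concrete, here is a minimal sketch in Python of how such a matrix can be read. The student names, competencies, and the 50 percent class threshold are illustrative assumptions, not GPF's actual data or tooling.

```python
# Illustrative sketch of a red-green competency sheet.
# Rows are students, columns are competencies; True = mastered (green), False = not yet (red).
# All names and the 50 percent class threshold are hypothetical.

sheet = {
    "Asha":    {"recognise numbers up to 10": True,  "more/less": False, "count objects": True},
    "Bhavesh": {"recognise numbers up to 10": False, "more/less": False, "count objects": True},
    "Chitra":  {"recognise numbers up to 10": True,  "more/less": True,  "count objects": True},
}

# 1. Which students need individual support, and on which competencies?
for student, marks in sheet.items():
    gaps = [c for c, mastered in marks.items() if not mastered]
    if gaps:
        print(f"{student} needs support on: {', '.join(gaps)}")

# 2. Which competencies should be re-taught to the whole class?
competencies = next(iter(sheet.values())).keys()
for c in competencies:
    mastered_count = sum(sheet[s][c] for s in sheet)
    if mastered_count / len(sheet) < 0.5:  # fewer than half the class has mastered it
        print(f"Re-teach to the whole class: {c}")
```

The same two questions, asked of a single page, are what allow a teacher to plan for individual students and for the group at once.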

For two years, GPF used the ‘red–green sheet’ to make student learning data actionable. The process was manual, which made it prone to errors and difficult to manage and scale. In 2018, GPF transitioned to a digital platform—Learning Navigator—developed by the nonprofit Gooru. This platform catered to the same need but made the management and presentation of data more efficient. It enables teachers to manage child-wise assessment data; generate individual, group, and consolidated performance reports; and identify competencies that need further attention.

2. Change higher authorities’ perspective on data

We wanted to make the data from the red–green sheet actionable not just for teachers but also for higher authorities, who could make informed decisions using the same data. At the monthly shikshan parishad (a meeting of all teachers in a cluster), teachers were requested to bring their red–green sheets for a group discussion. This gave cluster officials an opportunity to look at the data from a new perspective. Rather than viewing data solely as a reflection of individual teacher or school performance, the collective data from all classrooms and schools helped the cluster head understand the challenges faced by students in their learning journey and identify teachers’ pain points. Which competencies were students struggling with? What challenges were teachers facing in teaching those concepts?

Teachers observed that the data collected from their classrooms was being used to help them work more effectively with their students. As a result, they became less apprehensive about reporting accurate data. Subsequently, cluster resource groups consisting of six to seven experienced teachers for each subject were convened. These groups played an important role in building the capabilities of other teachers in the cluster. The use of data from the red–green sheet during shikshan parishads helped the cluster head identify specific learning outcomes to focus on in a particular month and ensure that all teachers were equipped with the skills to teach those competencies in their classrooms.

3. Use tech to consolidate data

Since GPF is a large-scale programme, our core interest was in consolidated data. In October 2021, consolidated data was made available to the authorities who support teachers in four districts of Maharashtra. With this update to the ‘mission control’ feature on Learning Navigator, data from all schools in a district was made available in real time not only to teachers but also to school management committees, cluster heads, block education officers, and district officials across the system.

One cluster head said that he finds it easy to monitor the work of all the teachers that fall within the area for which he is responsible. “The total number of teachers from the two clusters [I oversee] is more than 182. But today, wherever I am, it is possible for me to see how many teachers have worked on which specific learning outcomes.”

Data can be useful to stakeholders at all levels of a programme.

A block education officer described how the mission control feature would be beneficial to him. “We will use mission control to track students from nine blocks of Parbhani district. This will help us in working towards the goal of enabling every student to achieve all the learning outcomes as expected of their grade.”

The strategies we employed helped ensure that users at every level were empowered with data in their hands and could make decisions to improve learning for every child in every classroom. We often assume that the only real consumers of data are the entities that fund the activities. This is almost always a myth. Data can be useful to stakeholders at all levels of a programme. Whether or not data is actionable can also serve as a self-check on how authentic it is, which is something that many programmes struggle with.

Know more

  • Read this article to learn about whether nonprofits should buy tech applications or build them from scratch.
  • Watch this video to learn more about how Gyan Prakash Foundation used technology to enable competency-based teaching and learning.

M&E does not have to be costly or complex

Most social enterprises and non-profit organizations recognize that high-quality data and evidence can help them make better decisions, improve performance, attract more funding, and understand—and eventually increase—their impact. Yet these businesses and non-profits often face financial constraints in creating and sustaining effective monitoring and evaluation (M&E) systems.

At IDinsight, we work with organizations and social enterprises across the globe to help them amplify their social impact, and one way we do this is by helping our clients strengthen their M&E systems. This article draws on our experience with different partners to share some simple but effective ways to overcome obstacles and create effective M&E systems on a limited budget. These insights can help break down the perceived complexities of monitoring and evaluation, enabling you to take a do-it-yourself (DIY) approach to M&E.

M&E is a long game

Before building a monitoring and evaluation system, it is important for you to consider why your enterprise or non-profit needs M&E, and what your ultimate learning priorities are. We recommend that you do a full review of your organization’s work, including all of its activities, the resources required, the expected outcomes—and how all these elements connect. For many, this may seem obvious. But we have found that different teams within an organization can have divergent interpretations of their programs’ functions, and different expectations of what their M&E system should do.

Conducting a high-level overview of the organization’s activities helps align everyone on its longer-term M&E vision, identify potential M&E solutions that fit different programs’ context and scope, and reduce the risk of wasting money on collecting and synthesizing output data that fails to measure the desired outcomes. To help find the best M&E solutions for your organization’s various programs, you can utilize impact modeling methods, such as building a theory of change.

Taking a step-by-step approach to M&E can help an organization conduct the necessary financial planning.

Putting M&E in the larger context of a program and its goals can also help team members start small and build gradually. As we explain in the following sections, taking a step-by-step approach to M&E can help an organization conduct the financial planning necessary to sustain these systems while maintaining its other activities.

Cheaper can be better—and free tools are fine

Your business or non-profit can cut down on monitoring and evaluation costs by using free tools or services. These tools can serve your needs as well as—or sometimes even better than—costly and complicated ones that require more expertise and human resources, and they pose fewer financial risks in the earlier stages. Once a solid foundation is in place, M&E systems built initially on free tools can easily be adapted and migrated as a program scales or the organization’s needs become more complex. If you are taking a DIY approach, consider researching the available free or low-cost data tools to find a suite of budget-appropriate options that address your needs.

Rather than spreading your limited budget across a variety of data activities and tools, you should prioritize the components linked to key decisions (for example, by determining which decision-relevant indicators to measure and how to collect them) and build the appropriate M&E systems from the ground up. We strongly recommend starting with a centralized data management system (DMS), which means that all M&E data collected and generated should eventually be fed back to and stored on the central DMS. A central DMS allows for easier and more cost-effective data storage, offers better security, and enhances data integrity—and it can often be linked to other free and low-cost data tools.
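
As a rough illustration of what feeding everything back to a central DMS can look like at the simplest end of the spectrum, the sketch below uses Python's built-in sqlite3 module as a free, file-based store. The table structure, field names, and sample records are assumptions made for illustration; a real system would add validation, access controls, and backups.

```python
import sqlite3

# A minimal, file-based central data store using only free, built-in tooling.
# Schema and field names are illustrative assumptions.
conn = sqlite3.connect("me_central.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS indicator_records (
        programme    TEXT,
        indicator    TEXT,
        value        REAL,
        collected_on TEXT
    )
""")

# Every data-collection activity writes back to the same store.
records = [
    ("livelihoods", "households_enrolled", 42, "2023-04-01"),
    ("livelihoods", "households_enrolled", 57, "2023-05-01"),
]
conn.executemany("INSERT INTO indicator_records VALUES (?, ?, ?, ?)", records)
conn.commit()

# Decision-makers (or a dashboard tool) query the single source of truth.
for row in conn.execute(
    "SELECT indicator, SUM(value) FROM indicator_records GROUP BY indicator"
):
    print(row)
conn.close()
```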

A centralized data management system (DMS) allows for easier and more cost-effective data storage. | Picture courtesy: Pixabay

No software can run well without human effort

When budgeting for monitoring and evaluation systems—or any technical solutions—software is not the only cost. You have to invest in in-house expertise by hiring and training staff with the knowledge and ability to maintain and optimize these systems.

These staff members can also help your organization realize the value of M&E and gain buy-in from your other staff. Even having just one individual or small team willing to explore a business or non-profit’s data needs, find suitable solutions and champion data usage within the organization can bring significant change.

Make the data talk

Even when you have selected the right indicators, they are not useful if no one is looking at them. To keep this data top-of-mind, as a start, your social enterprise or non-profit should choose the most critical metrics, track them regularly, then aggregate, analyze and report the findings to relevant decision-makers.

High-quality data does not automatically lead to good decisions.

But high-quality data does not automatically lead to good decisions. To enhance its usefulness in decision-making, data should be readily available when needed, and the organization should work to foster a culture of learning. Data visualization can address both of these needs. Dashboards of tracked indicators—even simple ones—can turn numbers into narratives and provide analyses that help organizations make better-informed decisions. These dashboards are not a core component of most basic M&E systems, but free dashboarding tools that can connect with external data sources are widely available.
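
As an illustration only, the snippet below shows how a handful of tracked indicators might be rolled up into a simple monthly summary using the free pandas library, which most dashboarding tools can read; the indicator names and figures are invented.

```python
import pandas as pd

# Hypothetical monthly tracking of a few critical metrics.
data = pd.DataFrame({
    "month":     ["Jan", "Jan", "Feb", "Feb"],
    "indicator": ["sessions_held", "attendance_rate", "sessions_held", "attendance_rate"],
    "value":     [120, 0.78, 134, 0.82],
})

# Aggregate into a month-by-indicator summary that a dashboard (or even a
# shared spreadsheet) can display to decision-makers.
summary = data.pivot_table(index="month", columns="indicator", values="value", aggfunc="mean")
print(summary)
```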

These four recommendations are relatively simple: Think ahead and prioritize, choose cheap but effective tools, invest in people, and use data effectively to inform decision-making. Yet for small businesses and non-profits with a limited budget, busy staff and competing priorities, these suggestions can be challenging to implement. That’s why we also recommend checking out IDinsight’s new M&E Health Check, which is a free tool organizations can use to assess the health of their M&E systems and identify cost-effective areas of improvement. We hope this tool, along with the insights above, will help your social enterprise or non-profit optimize its monitoring and evaluation efforts and maximize its social impact.

This article was originally published on NextBillion.

Outcomes-based financing: What do nonprofits need to know?

India has been pioneering the use of outcomes-based financing (OBF) among low- and middle-income countries and has seen a surge of funding directed to OBF instruments in recent years.

OBF instruments tie the disbursement of funding for development projects to the achievement of results. These results are linked to the stated development objective—outcomes—as opposed to actions, inputs, activities, and outputs. Some types of OBF instruments include development impact bonds (DIB), impact guarantees, result-based financing contracts, and social success notes (SSNs).

Since the launch of the first DIB by Educate Girls in 2015, India has seen four more DIBs in education, health, and skills development—Quality Education India DIB, Haryana Early Literacy DIB, Utkrisht DIB, and Skill Impact Bond. Two SSNs—one for affordable private schools by Varthana and Michael & Susan Dell Foundation and one by IPE Global for healthcare enterprises—are also in the works. These OBF projects collectively mobilise at least USD 30 million of funding and impact approximately 6,00,000 people. Many more OBF projects are also in the offing.

While there is a definite growth in OBF instruments and projects, such opportunities are not available to all, with only a small pool of nonprofits being able to participate as partners on these projects. There are multiple reasons for this:  

  • Lack of access to the networks and information that would enable nonprofits to tap into such opportunities, owing to the nascent stage of the OBF sector and the limited number of players.
  • Lack of awareness and know-how about how these projects work and what funders expect from nonprofits.
  • Limited internal capability to participate in such projects.

This article focuses on sharing the experiences of the British Asian Trust (BAT) and our learnings on navigating these barriers. BAT has undertaken multiple OBF projects in the past five years and is working on building the capacities of nonprofits through the Outcomes Readiness Programme, a collaborative initiative with Atma.

What do OBF projects mean for nonprofits?

There is a common misunderstanding that OBF projects entail nonprofits getting paid only after the achievement of outcomes and not receiving adequate support for upfront working capital. On the contrary, the theory and practice of OBF essentially focuses on de-risking nonprofits by bringing in a class of risk investors who provide upfront working capital to nonprofits. These investors are the ones who get paid only if pre-agreed outcomes are met, thereby moving the risk away from the nonprofits.

By facilitating upfront funding, most OBF instruments and projects do away with short funding cycles and the uncertainty they create, allowing nonprofits to focus their energies on implementing, innovating, adapting, and strengthening their interventions rather than on completing quarterly activities to secure the next tranche of funding or finding the next donor.

At the heart of the OBF theory lies the fundamental belief that nonprofits are generally experts in their domain and know their solutions and implementation. As such, OBF instruments try to shift the power dynamics inherent in a grantor–grantee relationship towards a more equity-based partner relationship where funders contribute financial resources but do not micromanage or tell nonprofits what to do. Instead, they trust the nonprofits to bring non-financial resources such as their expertise and understanding of the community and ground realities to the project.

Having said that, most OBF tools do require specific mindsets, competencies, and capabilities from nonprofits, which are highlighted in the next section.

There is a common misunderstanding that OBF projects entail nonprofits getting paid only after the achievement of outcomes. | Picture courtesy: Prasanta Sahoo / CC BY

What do nonprofits need to participate in OBF projects?

1. Adoption of an ‘outcomes-first’ approach

There are two different levels at which the outcomes-first approach reflects in a nonprofit.

Organisational level: The nonprofit needs to gradually start aligning its overall strategy to the outcomes it wants to achieve over the medium and long run. This strategy plays a key role in guiding the priorities of different departments such as fundraising, programme management, monitoring and evaluation, and communications.

OBF structures require high engagement and buy-in from the nonprofit’s leadership.

For instance, Sesame Workshop India, one of the nonprofits under our Outcomes Readiness initiative, refined its strategy to align with its impact objectives. Sonali Khan, managing director of Sesame Workshop India, shared, “The exercise allowed us to develop a cohesive long-term strategy and plan that spoke to the needs of our internal as well as external ecosystems. It helped articulate our definition of impact, streamline our programmatic focus areas, prioritise important fundraising channels and influence strategy across our global offices.” 

Project level: OBF structures require high engagement and buy-in from the nonprofit’s leadership to design a clear project road map that aligns with the mission of the organisation. This road map provides clarity to the project and field team members, thereby allowing them to build work plans that are geared towards achieving the set outcomes within a specific timeline.

For instance, one of our partners on the Quality Education India DIB, Kaivalya Education Foundation, highlighted that the outcomes focus of the DIB meant that each Gandhi Fellow knew what was expected and what had to be achieved. This led to a sense of purpose, clarity, and uniformity of goals throughout the organisation. The chairman of the organisation himself became very involved in having a target-based learning outlook, which then percolated down to all levels of the organisation.

2. Ability to identify and manage risks that arise during implementation

Under OBF structures, outcome payments to investors are linked to the performance of the nonprofits and programme results. As such, any risks (internal or external) that affect programme implementation and outcomes need to be clearly identified, monitored, and managed effectively. This in turn means that nonprofits must be well versed in applying risk management tools such as root cause analysis, creating and using risk registers, and undertaking scenario planning in their projects. Good risk management also requires nimble and agile decision-making, and high engagement between programme and field teams to identify risks, develop and implement mitigation strategies, and quickly revise strategies if needed.

For instance, under our Bharat EdTech Initiative, funded by a performance-linked grant, our community partners identified low activation of and poor engagement with EdTech apps as a key risk to achieving the learning outcomes envisaged. They had to quickly design and pilot many targeted micro-strategies to increase the time students spent on the apps.
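
A risk register, one of the tools mentioned above, can be as simple as a scored list that is reviewed regularly. The sketch below is a hypothetical, minimal version in Python; the risks, scales, and mitigation steps are invented for illustration and are not drawn from any of the projects described here.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    likelihood: int   # 1 (rare) to 5 (almost certain)
    impact: int       # 1 (minor) to 5 (severe)
    mitigation: str

    @property
    def severity(self) -> int:
        return self.likelihood * self.impact

# Hypothetical entries in a project risk register.
register = [
    Risk("Low activation of the learning app", 4, 5, "Targeted follow-up calls by field staff"),
    Risk("Field staff attrition mid-project", 2, 4, "Cross-train a backup cohort"),
    Risk("Delayed baseline data collection", 3, 3, "Build slack into the evaluation timeline"),
]

# Review: surface the highest-severity risks first.
for risk in sorted(register, key=lambda r: r.severity, reverse=True):
    print(f"[{risk.severity:>2}] {risk.description} -> {risk.mitigation}")
```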

3. Openness and willingness to subject one’s intervention to third-party evaluation

Third-party assessments are often perceived as report cards that label interventions as successes or failures and are therefore met with resistance at times. However, third-party evaluations are the cornerstone of the OBF structure—they trigger payments for investors, assess an intervention’s performance, and generate valuable insights that can help in improving performance.

A mindset shift is needed to transform the fear of third-party evaluations into a learning experience. Participating in such rigorous impact evaluations necessitates a strong theory of change, sharply defined outcomes, and a corresponding monitoring and evaluation framework to measure these. Sometimes, it also means that a nonprofit should possess adequate confidence, domain knowledge, and experience to understand the technicalities of an evaluation framework, and challenge the framework if it is not suitable.

4. Strong data-driven performance management culture in a high-stakes environment

Nonprofits need to have robust systems and processes to gather, analyse, and use performance-related data to guide their interventions. They should be prepared for and comfortable with collecting and synthesising fit-for-purpose data and using it to make necessary pivots in their interventions to meet their on-ground needs. For instance, Gyan Shala said that the DIB structure enabled them to improve processes such as collecting regular feedback from field teams to pass on to the curriculum design team.

Under OBF instruments achieving outcomes is central to payments, so performance monitoring is a must and not merely nice to have.

Similarly, under the Bharat EdTech Initiative, we had to provide our community partners with a tech-based solution to accurately and regularly capture data on the various types of nudges that encourage the use of EdTech. The performance manager for this project provided training and handholding to people at all levels of the organisation, especially the field staff, to ensure that they understood the importance of this data and how it was used, and did not see it as an administrative burden.

Sonali Saini, founder and CEO of Sol’s ARC, another participant under Outcomes Readiness, highlighted, “Identifying a specific set of key success indicators from a long list, setting up dashboards that consolidated data in line with these indicators, and using this data for quick decision-making were some of the most valuable capabilities that we built under this initiative.” Under OBF instruments achieving outcomes is central to payments, so performance monitoring is a must and not merely nice to have. This can involve working under high pressure.

Participating in OBF instruments and projects requires a convergence of mission, mindset shift, and capabilities. While this may sound ambitious—and it certainly requires sustained buy-in and commitment at all levels of an organisation—nonprofits do not have to walk the path alone. Sneha Arora, CEO of Atma, believes that building outcome readiness is about enabling nonprofits to develop the confidence, systems, and processes they need to raise and use results-based funding.

While donors are shifting towards outcomes, this does not always translate into OBF projects or tools. Therefore, there are limited opportunities for nonprofits to raise funds or apply their learnings. There is a need and an opportunity for traditional grants—which are the primary source of funding for nonprofits—to take on a more pronounced outcome orientation. Within such programmes, donors can provide specific capacity-building support to nonprofits and invest in performance managers who can hand-hold the nonprofits. Similarly, to plug the information asymmetry, intermediaries can share their learnings, resources, and templates with the wider nonprofit sector and create platforms (such as the India Blended Finance Collective or Convergence) to showcase relevant opportunities.

This article was updated on April 13, 2023. An earlier version incorrectly stated that two SSNs–one by Varthana and Michael & Susan Dell Foundation, and one by IPE Global for healthcare enterprises–collectively mobilised USD 30 million of funding and impacted approximately 6,00,000 people.

Know more

  • Read this interview to learn what it would take to build the social finance ecosystem in India.
  • Read this article to learn about the role implementing organisations can play in improving nonprofits’ outcome readiness.
  • Read this report on the state of blended finance in 2022.

Conducting a randomised control trial (RCT) on a budget

What’s the counterfactual? What happens to people who don’t get to participate in your programme?

We’ve all faced this dreaded question. It frequently comes from a donor or a board member—but hopefully increasingly from the team itself—who has a genuine interest in better understanding the impact (or lack thereof) of the interventions they work so hard to design and deliver. In many cases, the question comes from all three.

However, a good counterfactual is hard to get. It typically involves conducting a randomised control trial (RCT). The general perception is that RCTs are always exorbitantly expensive, can only be done if you are operating at a significant scale, and consume the entire organisation’s energy.

We’re here to bust that myth.

Since 2018, Medha has been conducting an RCT with J-PAL to measure our programme’s impact on young women’s career preparation and progression in the Hindi heartland. (To learn more about the findings, check out the midline report.)

We conducted this study on a shoestring budget for RCTs (USD 60,000/INR 45 lakh) and in a way that minimised its impact on our operations (less than 15 percent of the team was involved). We thought it would be helpful to share how we did this for all the other resource-constrained organisations out there. It is certainly not a miracle or something that hasn’t been done before, but we didn’t come across many examples from the Indian nonprofit world when we were looking.

There is no reason that you need to have countless third-party enumerators (who are costly!) collect the baseline data. | Picture courtesy: Medha

1. Find a researcher who is passionate about what you do and the problem you are trying to solve

This is a non-negotiable. Don’t move forward unless you feel confident about this. In our case, it took many years to make this happen. We networked hard and reached out to many research firms. We shared (pitched) our work and potential research questions. Without a large grant behind us, the conversations didn’t go very far.

However, after multiple discussions with J-PAL, our partnership development form serendipitously found its way to Lori Beaman (I like to think of it as the equivalent of Remy finding Gusteau’s in Ratatouille). Only after having conversations with Lori, and gaining the confidence that both of us were equally excited and passionate about pursuing this, did we move forward.

2. Leverage your existing team as much as possible

There is no reason that you need to have countless third-party enumerators (who are costly!) collect the baseline data. In our case, we utilised our student relationship managers (SRMs) to hold the lottery for randomisation and facilitate the collection of baseline data through smartphones and tablets we provided (which enabled cost savings). Since SRMs work directly with students, it was tough for them not to be able to work with all interested students that year (argh, randomisation). While it doesn’t necessarily make things any easier, we held extensive training sessions with the team so they understood the ‘why’ behind an RCT design.
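
For readers curious about what the lottery can look like in practice, here is a minimal sketch of a seeded random assignment; the student IDs and seed are illustrative, and the actual procedure used in the study may well have differed.

```python
import random

# Hypothetical list of interested students at one college.
students = ["S001", "S002", "S003", "S004", "S005", "S006"]

random.seed(2018)          # fixed seed so the lottery is reproducible and auditable
shuffled = students[:]
random.shuffle(shuffled)

cutoff = len(shuffled) // 2
treatment = sorted(shuffled[:cutoff])   # offered the programme this year
control = sorted(shuffled[cutoff:])     # surveyed but not enrolled this year

print("Treatment:", treatment)
print("Control:  ", control)
```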

3. Reduce the sample size

While there are reasons to have a big sample (researchers will frequently cite power calculations), it is in your best interest to reduce the sample size as much as possible (of course, without sacrificing the integrity of the research). In our case, we did a pilot sample of 600 students, followed by a second sample the following year of 1,500 students (treatment and control). This outreach was approximately 15 percent of our total students that year.
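
Power calculations are what researchers use to judge how small a sample can safely be. As a hypothetical illustration (the effect size, power, and significance level below are assumptions, not the values used in this study), the free statsmodels library can solve for the required group size:

```python
from statsmodels.stats.power import TTestIndPower

# Hypothetical inputs: a standardised effect size of 0.2, 80 percent power,
# and a 5 percent significance level for a two-sided test.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.2, power=0.8, alpha=0.05, ratio=1.0)

print(f"Required sample per arm: about {round(n_per_group)} participants")
# A larger assumed effect size shrinks the required sample, which is one lever
# organisations can discuss with researchers when budgets are tight.
```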

Reducing the sample size enabled us to insulate and minimise the study’s impact on the rest of the organisation and team. RCTs can be disruptive. They limit the number of people you can work with. And telling users that you are denying them service is never easy. If you can minimise this as much as possible, it will make life simpler.

So, while it is easier to accomplish this if you have extensive operations, as you can see, it’s still achievable with a scale of a couple of thousand users a year. In the Indian context, this is small.

4. Explore pilot/small grant opportunities and existing funds your researcher may have access to

Researchers often have discretionary funds that can be used for your study (granted, the amounts are not huge). In our case, Lori could apply some of these funds to recharge SIM cards for study participants and even hire a research associate when required.

If these funds are unavailable, you can explore pilot or smaller grants that do not require a significant scale. In most cases, the researcher will do the heavy lifting on these applications, saving you the time and expertise needed to secure them. In our case, we got a pilot grant from J-PAL’s PPE Initiative and also explored small grants from Spencer Foundation. There will likely be many such opportunities in your networks.

5. Position it as a learning exercise, and not as an existential question of whether your programme works

RCTs and other empirical research methods are simply additional tools in the MEL toolbox for understanding more about your programme, users, and impact. They are neither perfect nor a silver bullet. Sometimes they give you minimal information about what you are doing.

One example from our experience: Our RCT told us that, compared to their peers, students who went through the Medha programme are more likely to have a CV. While that’s nice to know, it doesn’t tell us anything about how good that CV was and if it helped them secure the job they wanted. The point is, if we approach RCTs as a learning opportunity, it reduces the pressure on the whole team and makes us focus on what we can learn from them, and not on someone sitting in Chicago telling us if the last 10 years of our lives have been successful.

Happy randomising

So, there you have it, five ways to pull off an RCT in a low-cost and low-impact way. With a bit of persistence (working hard to be resourceful and scraping together what limited funds exist), luck (finding the right researcher), and collaborative spirit (getting the whole team behind the learning objective), you too can make it happen.

This article has been edited for IDR. It was originally published on Medha’s blog. The author is related to a team member at IDR.

Why an ineffective school improvement programme was scaled up

In 2014, the government of Madhya Pradesh launched a comprehensive school management programme—the Madhya Pradesh Shaala Gunvatta (MP School Quality Assurance) programme—in an effort to improve the management of public schools. This programme is one among many management-related interventions in the education sector, where management quality has been found to influence test scores and school productivity. However, evidence on whether such interventions are able to change actual behaviours and outcomes at scale remains scant.

To understand this gap, Karthik Muralidharan and Abhijeet Singh examined the impact of the Shaala Gunvatta programme as part of a project funded by the International Growth Centre, J-PAL Post-Primary Education Initiative, Economic and Social Research Council, and Department for International Development’s RISE programme. The intervention was the precursor of a variant that has since been rolled out to more than 6,00,000 schools across India and is expected to eventually cover 1.5 million schools. The programme entailed:

  • Developing school rating scorecards by conducting independent and customised assessments of school quality to identify strengths and weaknesses. The scorecards were based on indicators in seven domains—mentoring, management, teacher practice and pedagogy, student support, school management committee and interaction with parents, academic outcomes, and personal and social outcomes.
  • Devising school-specific improvement plans with concrete action steps based on the school assessments. These plans aimed to set manageable targets for improvement that schools could achieve step by step.
  • Ensuring regular follow-ups by block-level supervisors to monitor the schools’ progress and provide guidance and support. This was an integral aspect of the programme that sought to motivate schools to deliver continuous improvement.
  • Ensuring school inspectors, all school staff, and parent representatives were involved in the assessments and the design of improvement plans.

From among 11,235 schools across five districts, 1,774 elementary schools were randomly selected to receive the programme (treatment schools) and 3,661 schools were assigned to the control group. The experiment’s primary outcome of interest was student learning, which researchers calculated using three data sources: student scores attained on independently designed Hindi and maths tests; student scores on official assessments; and aggregate scores of all the schools on the Pratibha Parv annual assessments, which are administered to all students from grades 1–8 in the public schooling system. Additionally, student and teacher absences were tracked and principals, teachers, and students were surveyed.

Interestingly, the authors learnt that the programme was largely ineffective. Nevertheless, it was viewed as a success and was scaled up to approximately 25,000 schools across the state.

Here’s what didn’t work and why

1. Lack of sustained oversight

Although the school assessments were comprehensive and informative, there was no change in the frequency or quality of supervision as a result of the programme. The block-level supervisors did not increase their monitoring of the schools. School management committees did not play a more active role either. Moreover, there was no difference between treatment and control schools in terms of the content of official feedback recorded in the inspection registers that the schools maintained. Thus, all evidence suggested that the school ratings did not inspire any meaningful follow-up.

Interviews indicated that bureaucratic incentives are geared more towards the appearance of activity as opposed to actual impact. | Picture courtesy: ILO Asia-Pacific/CC BY

2. No improvement in pedagogy or effort

Although the assessments and school-improvement plans could have led to improved teacher effort and classroom processes, no evidence of this was found in the schools. Teacher absence rates remained high (33 percent across the board) and teacher effort was unchanged. Their instructional time, the use of textbooks and workbooks, and how much they checked student homework remained the same. Student absence rates were also high (44 percent) and were not affected by the programme.

3. Unaffected learning outcomes

The programme failed to demonstrate an impact on student learning outcomes both in the short run (three–four months after the intervention) and over a longer term (15–18 months later). This applied to both school-administered tests and tests that were independently administered by the research team.
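
To make the treatment versus control comparison concrete, a simplified way to estimate such a programme's effect is to compare mean outcomes across treatment and control schools; the sketch below uses invented scores and a plain two-sample t-test, whereas the actual study relied on far richer data and regression-based estimates.

```python
from scipy import stats

# Hypothetical average test scores (out of 100) for a handful of schools.
treatment_scores = [42.1, 39.5, 44.0, 41.2, 40.8]
control_scores = [41.6, 40.2, 43.1, 39.9, 41.0]

effect = sum(treatment_scores) / len(treatment_scores) - sum(control_scores) / len(control_scores)
t_stat, p_value = stats.ttest_ind(treatment_scores, control_scores)

print(f"Estimated effect: {effect:.2f} points (p = {p_value:.2f})")
# A small estimated effect with a large p-value, as in this toy example, is
# consistent with a finding of no detectable impact on learning outcomes.
```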

Despite the evidence that highlighted the programme’s ineffectiveness, it was scaled up to approximately 25,000 schools across the state. The researchers carried out extensive interviews with principals, teachers, and field-level supervisory staff in six districts during the scale-up to identify the reasons for the programme’s ineffectiveness.

This is what they learnt

1. Implementation was poor

The officials recalled that the momentum generated by the programme largely dissipated after the preparation of work plans. Additionally, although schools and teachers repeatedly mentioned that pedagogical support and accountability were lacking, neither was reported to have changed as a result of the programme. This means that the programme failed to remedy the gaps in the system through improved pedagogy or governance. In fact, it was largely viewed as an exercise in administrative compliance, which officials demonstrated by submitting their paperwork on time. This was a significant departure from the exercise in self-evaluation and improvement that the programme aimed to be.

2. Valuing the appearance of success, rather than actual impact

A striking insight was that although the programme did not facilitate changes in school practices or student learning outcomes, it was perceived as a success by senior officials and continues to be scaled up. The interviews revealed that there was a disconnect between the programme’s objectives and how it was perceived by its implementers. Although the programme prioritised continuous support and self-improvement, those implementing it only focused on completing paperwork and submitting assessment reports and improvement plans. There was also a disconnect between the role the education officials were expected to play and how this role was perceived by others in the system. Although they were meant to monitor, coach, and hold schools accountable, they were perceived as conduits for communication (especially paperwork) from schools to the bureaucracy.

Bureaucratic incentives are geared more towards the appearance of activity as opposed to actual impact.

Frequent changes in education policy and programmatic priorities also resulted in negative field-level consequences, such as a lack of engagement by implementing staff. This led the implementers to believe that government policies are impermanent, often designed without considering implementation constraints, and frequently abandoned or changed. The interviews also indicated that bureaucratic incentives are geared more towards the appearance of activity as opposed to actual impact. As a result, completing school assessments and uploading school improvement plans at scale were the main elements that were monitored. Based on these metrics, the programme was a success.

How can public service delivery be improved at scale?

In light of their findings, the researchers offered the following recommendations to improve public service delivery at scale.

1. Better incentives

Many existing studies have outlined the positive effects of well-designed interventions to improve the performance of public sector workers, including those in the education sector. The failure of this programme showed the difficulty of improving outcomes without incentivising front-line staff and supervisors to do so. The partial implementation of the programme reflects how bureaucratic incentives were skewed in favour of what was being monitored, while other metrics were ignored.

2. Better outcome visibility

Senior officials only monitored easily visible aspects of programme performance, and the programme worked up to the point where outcomes were visible to senior officials (completed school assessments, uploaded improvement plans). However, the effect ceased at the point where outcomes were no longer easily visible (for instance, learning outcomes and classroom effort). Thus, investing in improved measurement and in the integrity of outcome data would enable better monitoring of the programme and yield better results.

3. Staffing

The programme merely added responsibilities to departments that were already overburdened and understaffed. Given the importance of dedicated programme staff, the programme may have been more effective had its staffing capacity been higher. This would have enabled staff to conduct follow-up visits to schools and comprehensively monitor their progress against the targets set out in the improvement plans.

Although improving these factors alone may not be enough to transform school governance and its related outcomes, these changes can serve as effective first steps.

Know more

  • Read the complete research paper that this article is based on.
  • Listen to IDR’s podcast that discusses the pros and cons of government schools and private schools.
  • Read this article on the cost of teacher absences in India.

Measuring social impact: Who decides what counts?

Social impact organisations operate as a crucial link between the government and its policies and the people to ensure they have access to basic rights, such as education, water, healthcare, and safety. However, while most of these organisations agree that positive social impact is the goal, they may differ on its definition and how best to create it.

On our podcast ‘On the Contrary by IDR’, host Arun Maira sat down with Hari Menon, the India office lead of the Bill & Melinda Gates Foundation, and Vineet Rai, founder-chairperson of Aavishkaar, to talk about what social impact really means, how it should be measured, and what needs to change in how we think about it.

Below is an edited transcript that provides an overview of the guests’ perspectives on the show.

Defining social impact is complicated

Vineet: This is a debate I’ve had since the time we coined the term ‘impact investing’. The impact that I define for myself is very different from the impact that my investor sitting in New York sees, and [it] is very different for the person I’m trying to impact because their aspiration for impact is very different.

So [for instance], whether you go to Reliance or the Tatas, or anybody—they are making some positive impact, some negative impact. We are probably trying to talk about the summation of the negative and positive and see whether there is an end result which is positive, and then you can call it an impactful enterprise.

Acknowledging that what we are doing is probably efficient but less impactful is probably the first way to go.

For example, instead of setting up a factory and trying to actually manufacture shirts, are you going to rural India, working with rural artisans, which is a far more cumbersome, complex process, and getting shirts stitched in the villages and then bringing them to the urban world to sell them? Both approaches are creating impact. But in one, you are not really trying to move all the workers to one place and creating a new ghetto to get a shirt stitched and create livelihood. In the other, probably what you are doing is you are taking livelihood [to] where people live, so you’re not displacing them, and in the process are actually allowing a local ecosystem to thrive. And I think it’s very difficult for a large number of people to actually give this context to the idea of impact, especially when we are talking about impact investing. And, therefore, it has been simplified to create the ghetto because it’s far more efficient, far better. And this is where the math comes in. So acknowledging that what we are doing is probably efficient but less impactful is probably the first way to go.

When we think about impact, we need to factor in how systems work

Hari: Our foundation is driven by that line: ‘All lives have equal value’. So, at one level, the impact we try to track is: Is the work improving the well-being of the communities you’re trying to serve?…Our approach to serving these communities is often through what are seen as vertical programmes. We work with partners or we work with the government on health interventions, on nutrition, and financial inclusion, on sanitation, on agriculture-based livelihoods, increasingly also looking at gender equality as a bedrock. And what we find is that you can get locked into a particular programme and the impact metric for that. So, for example, with vaccines…there’s a very clear metric which is obviously [that] you need to have…a product that’s cost-effective for a country to roll out. But once that’s there, how do you maximise coverage of the populations that will benefit from the particular vaccine?…That’s where things get trickier. And if you start looking at what impact you are having there—is the infrastructure in the right place? Does it have the right kind of quality? Are there people there when communities need them? Are they trained? Do they have empathy on the community side? Do people have the awareness, the knowledge, [and] the information they need to be able to make the best choices? What are the power equations that determine the choices they make? So if you start looking at the system overall and then ask yourself, ‘Okay, did you have impact?’ then it gets a lot more complicated.

Numbers don’t paint the whole picture

Hari: We end up looking at numeric measures of impact, because that’s what the world seems to value and understand. But two problems with that. One is you often default to averages and averages hide inequities, and variations. [For example] India is much better off on many development indicators, visibly [than it was] a couple of decades ago. But if you drill down, you get down to state numbers, you get down to district numbers, you get down to block numbers—the averages hide huge variations. And we don’t often factor in those variations into how we are thinking about improvements and change. And that’s where this focus on numbers and averages can have a distortive effect.

The second is the point of sustainability, and sustainability, to my mind, only comes if the people are really brought into the change that is supposed to improve their lives. And this urge to measure often takes us towards supply-side interventions, because those are easier to track and easier to control. And we leave out the factors around demand and cultural norms, all of which can have a lasting influence on impact, well beyond the time period of the interventions.

So I think there is a need to take a much more integrative approach that goes beyond the quantitative, in which we spend a lot more time on qualitative conversations, understanding the cultural context within which interventions need to be delivered, and truly getting communities more empowered and engaged, giving them more agency in determining the kinds of interventions they want to receive.

The new paradigm of impact moves away from efficiency and focuses on sustainability

Vineet: Most venture capitalists say that we are a partnership, but the reality is that’s not the truth…The only skill that they are seeking from me is that ‘When you take over my money, you will make it work on the most efficient paradigm,’ which, as I said, is the old paradigm. The new paradigm is not [about being] the most efficient, but the most sustainable and resilient. Efficiency is not necessarily what will deliver sustainability, and this is a very deep and inherent conflict. And I have actually had this debate with a lot of very successful and very large corporate CEOs. And I think their ability to understand this conflict is very limited even now, when we are actually challenged with it. Climate is actually a very good example. Growing trees seems to be a very good solution. But the debate has moved on [to the point] that the people who are actually creating the biggest amount of pollution are now more interested in buying carbon offsets than in changing the behaviour of drilling in the Arctic. There’s an inherent conflict. So they will keep drilling in the Arctic, but keep buying carbon offsets—will that actually solve the problem?

Impact at scale can only come through effective partnerships

Hari: When we think of scale, there is often this monolithic vision of scale, which is an…intervention that can be done all across massive geographies, massive populations—that’s what’s going to drive change. In the corporate sector, you often do see that when entities are successful, [they] grow, and they get bigger and bigger and bigger, and swallow up more and more and more. And that’s how scale is thought of. I think, in the social sector and the development context, scale might actually need to be thought of very differently. The only way we can have impact at scale is through effective partnerships of different actors who understand [and] can engage with communities on the different sectors where the government and partners are trying to make a difference…And right now, I think a lot of mental energy goes into the intervention at scale, right? How do you come up with the frameworks, the operational guidelines, [and] cascade [them] down? But all of these interventions and guidelines will land in very different contexts, very different realities, and if we’re not creating the intelligence and the ownership to problem-solve, evolve, and morph those interventions and guidelines into whatever is called for locally, we will not have impact.

Listen to the full episode here.

Know more

  • Read this study about accelerating impact and scaling social innovations.
  • Read this article to learn why size may not be the right metric to measure impact.
  • Understand how focussing on measuring impact can be counterproductive to collecting other important data here.
