Partner Spotlight: Shadrock Roberts of Mercy Corps

Could you share a bit about yourself? 

I’ve been working with algorithms since the start of my humanitarian data career, which began with applying automated image classification methods to high-resolution satellite imagery for population estimates. While I no longer process or analyze imagery, and now inhabit more directorial roles, I have always tried to maintain a technical base, which is necessary when working with engineers, defining requirements, or developing workflows. I bring all of this to bear in my role as Director of Data Protection, Privacy, and Ethical AI at Mercy Corps.

What interested you in evidencebase.ai? How does this fit into your existing work?

Evidencebase.ai can deliver on the promises we all hoped artificial intelligence would fulfill a few short years ago: primarily giving humanitarian organizations the ability to truly learn from their experiences and share those lessons with the communities they serve. 

In 2018, I wrapped up work on a project to apply AI to make it easier for communities to gather, process, and classify their own data during a crisis, so it would be more useful for them while also being more legible to humanitarian organizations. The project was a collaboration with some fantastic computer scientists from the Open University and the University of Sheffield, humanitarian information managers from Delft University, and the open source platform Ushahidi. This was where I first had the opportunity to engage with machine learning, natural language processing, and the use of chatbots. Our chatbots showed promise but simply weren’t good enough yet to be really practical, and our approaches to classifying data, while definitely cutting edge, were still a long way from the incredible speed and ease of use that we see today with tools built on large language models. 

Throughout the project, I felt that the technology still hadn’t evolved enough to be easily used by the wide range of stakeholders who would count on it during a crisis. It did, however, prompt me to reflect on the possibility of using AI to process all the documents that humanitarian organizations create, and I ended up advocating for a use case in which AI would help these organizations better understand their own lessons and open them up to the communities they stand to serve. While this project is really only five years old, and was at the leading edge of what was possible at the time, it looks antiquated in comparison to tools like ChatGPT!

And this is really where I see the value of Evidencebase.ai: the ability to create tools for communities of practice to share information with each other and with affected populations. The first time I created a custom corpus with Evidencebase.ai and queried it quickly, easily, and with incredible fidelity, it was clear that things had changed. I love the promise the tool has for document intelligence but, more importantly, for communities of practice to create a collective corpus and share it with others. 

What’s next for you and AI?

Evidencebase.ai has been a great way for us to begin exploring what using generative AI looks like. In the short term we are planning to create some tools that will allow Mercy Corps staff to access vetted information and generate new output from that information in a controlled environment. Our primary goal is risk mitigation: ensuring the safety and security of our data while still giving our staff a commensurate user experience. In the longer term, I envision a wide range of tools for different use cases, but the starting point for us is to leverage the document library we already have and make it easier to use and derive new insights from. 

What would you say to someone just getting started with AI in the humanitarian space?

The very first thing I'd say is that you probably shouldn’t start with AI unless you already have a data plan and some capacity in your organization to use data safely and ethically. While emerging tools will likely require their own policies, guidelines, and practices, basic responsible data practices still apply to AI. If you don’t have those in place already, that should be your first step. A good primer on AI in the humanitarian sector is the new report from the Digital Humanitarian Network called “Generative AI for Humanitarians.” After that, it’s all about working with someone you trust to understand what generative AI is, which use cases make sense for your organization, and then prioritizing those. I strongly recommend starting small and picking a clear use case that you can develop quickly and deliver without too much investment. This will give you insight into the whole process, from how the tech works and what it costs to the surprising ways that people will use the tool and what it means to adapt and maintain it over time. Successfully delivering a small win will build your organizational capacity far more than a failure that went nowhere because it tried to be everything to everybody. 
