
The Best of Both Worlds - Better AI Through Human Diversity

Updated: Nov 13, 2021



AI is being deployed across a growing number of industries at an ever-increasing pace. While AI holds high potential to enhance decision-making, humans still need to be engaged throughout the process to ensure that the outcomes are ethical and the decisions are transparent. Catherine Cheney, a senior reporter for Devex, covered the potential of AI to create a positive impact across multiple sectors in her 2020 piece, “To build responsible AI for good, keep humans in the loop”. She further details how two organizations with a focus on artificial intelligence for good, Wadhwani AI and Rainforest Connection, are actively reflecting on how to ensure ethical AI development and deployment.


The two organizations face different ethical situations based on their respective product offerings: Wadhwani AI uses the technology to help smallholder farmers in India decide on optimal pesticide use, giving a vulnerable population the tools for better decision-making. Rainforest Connection uses AI to power a bio-acoustic monitoring platform that relays forest-threat information to responders on the ground, providing professionals with actionable data insights for making optimal decisions in the field.


Wadhwani AI and Rainforest Connection may serve different populations, but a common thread is that both populations may lack the capability to question or analyze the AI tools they are given. AI providers need to be able to explain the limitations of their technology, especially how the quality of data can influence a system’s recommendations. Communicating these limitations effectively should push AI practitioners to engage with their diverse user bases. Consistent engagement ensures that ongoing human context and input can be used to mitigate the risk of AI failure or of decision apathy, a situation in which humans cede decision-making to AI systems and lose personal autonomy.


AI is often developed in contrived environments with highly structured datasets, yet these datasets are often inaccurate representations of the real world. AI trained on unrepresentative data is prone to generating biased and inapplicable results when placed in real-world settings. We need to be aware that AI systems operate in heterogeneous environments with messy data, very unlike the structured datasets they are trained on. In addition, AI projects with a focus on the social good need to actively consider how to invite diverse stakeholder input. Inviting diverse input allows teams to mitigate the problem of training AI on potentially biased data and to avoid groupthink when assessing AI design and performance.


The responsibilities of AI technologists continue even after product delivery. It is essential to maintain a feedback loop that continuously monitors the performance of the AI after deployment; this could include double-checking the results or watching for unanticipated consequences. Continuously improving AI functionality by incorporating real-world feedback is essential to sustaining ethical operation after deployment and will be critical to building the long-term public trust needed for widespread adoption of AI. To harness the decision-making power of AI systems and to develop trust in them, it is clear that people will need to play a crucial role. To have the best of both worlds, it will take not just human ingenuity but also human diversity.


Wei-Ann Chang,

Researcher at Aiforgood.asia

