Hot Topics in Tech with Jon Riley
As AI becomes ever more ubiquitous in our daily lives, our sector has inevitably become increasingly interested in how to integrate these technologies into our workflows, operations, and even our artistic practices. With great power comes great responsibility: if we use AI, we should do so in a way that demonstrates awareness of its ethical issues, most notably privacy, plagiarism, and accountability.
Hot Topics in Tech (part 3): Responsible AI
These five considerations explore the ethical quandaries of AI, perhaps offering more questions than answers. I suggest using them as a starting point for talking with your staff and peers about their hesitations around AI.
1. Alignment with Mission and Values
Nonprofits have well-defined missions in service of the greater good. That makes it critical, before adopting AI, to ask whether the technology aligns with the mission.
For instance, a nonprofit organization that tackles homelessness may use AI-powered predictive analytics to identify at-risk populations. However, the AI model should be designed not to stigmatize or marginalize but to uplift and support. In so doing, nonprofits can keep technology a means to their ends, not an end in itself.
2. Ethical Considerations: Transparency, Fairness, and Privacy
Ethics form the core of responsible AI use. A nonprofit should address three critical ethical pillars:
1. Transparency: Beneficiaries, staff, and donors should understand how AI is being used. Clearly communicating its purpose, limitations, and decision-making processes helps build trust and encourage participation.
2. Fairness: AI systems should not perpetuate or promote any form of social prejudice, especially against underprivileged or vulnerable groups. For example, AI that handles hiring should be free of racial, gender, and socioeconomic bias. Regular audits and testing for bias can help, as in the sketch after this list.
3. Privacy: Nonprofits handle sensitive information, from health records to personal stories, and it must be safeguarded. AI solutions should comply with data protection laws such as GDPR and prioritize encryption, anonymization, and secure storage.
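To make the auditing point concrete, here is a minimal, hypothetical sketch of one such bias test: it compares a screening tool's selection rates across demographic groups and flags any group whose rate falls below four-fifths of the highest group's rate, a common rule of thumb in hiring audits. The records, group labels, and threshold here are illustrative assumptions, not a prescribed method.

```python
from collections import defaultdict

def selection_rates(records):
    """Compute per-group selection rates from (group, selected) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [num_selected, total]
    for group, selected in records:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: sel / total for g, (sel, total) in counts.items()}

def audit_fairness(records, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times
    the highest group's rate (the "four-fifths" rule of thumb)."""
    rates = selection_rates(records)
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * best}

# Illustrative outcomes from a hypothetical screening tool: (group, selected)
outcomes = [("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False), ("B", False)]

print(audit_fairness(outcomes))  # {'B': 0.25}: group B warrants a closer look
```

A flagged group is a prompt for human investigation, not proof of discrimination on its own.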
3. Inclusivity: Stakeholder Engagement
Inclusive AI begins with involving stakeholders. Communities impacted by AI-driven decisions need to have a say in how the technology is created and used.
A nonprofit education provider, for example, might include teachers, students, and parents as key contributors when developing the concept for an AI-driven learning platform. Doing so builds trust and acceptance and helps ensure the tool addresses real needs.
Additionally, nonprofits should work toward making AI tools available to everyone, including people with disabilities and those with limited technical skills. This could mean simple interfaces, support for multiple languages, and compatibility with assistive technologies.
4. Data Responsibility: Collecting and Using Data Ethically
Data feeds AI, and here too great power comes with great responsibility: informed consent, purpose limitation, and minimal data collection should undergird the ethical handling of nonprofit data.
A food-insecurity nonprofit, for example, might use AI to map hunger hotspots. Such data should be collected with participants' clearly given consent and never used for purposes unrelated to the original collection.
Collaborating with data scientists and following ethical data practices helps ensure that the foundation of an AI system is as reliable as its results.
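As a minimal sketch of those three practices, the hypothetical snippet below keeps only records with explicit consent, and within them only the fields the stated purpose requires. The field names, consent flag, and records are illustrative assumptions.

```python
# Hypothetical participant records; field names and values are illustrative.
participants = [
    {"name": "J. Doe", "zip_code": "48226", "health_notes": "...", "consented": True},
    {"name": "A. Roe", "zip_code": "48201", "health_notes": "...", "consented": False},
]

# Purpose limitation: mapping hunger hotspots needs only coarse location.
FIELDS_FOR_MAPPING = {"zip_code"}

def minimize_for_mapping(records):
    """Keep only records with explicit consent, and within them only the
    fields the stated purpose requires (data minimization)."""
    return [
        {k: v for k, v in record.items() if k in FIELDS_FOR_MAPPING}
        for record in records
        if record.get("consented")
    ]

print(minimize_for_mapping(participants))  # [{'zip_code': '48226'}]
```

Dropping unneeded fields at the point of use, rather than after the fact, is what keeps purpose limitation enforceable.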
5. Accountability: Who Owns the Decisions?
When AI systems make decisions—whether it’s prioritizing beneficiaries or allocating resources—accountability remains paramount. Nonprofits must establish clear protocols outlining who is responsible for these decisions, especially when errors or unintended consequences occur.
For instance, if an AI model denies a family access to financial aid, the decision should be appealable and the error correctable. Human oversight is required in sensitive or high-stakes applications.
Accountability also entails regular audits that can identify risks in the system and confirm that it remains aligned with organizational goals.
Conclusion: The Path to Responsible AI
AI holds immense potential for the arts, culture, and nonprofit sectors, offering tools to solve complex challenges, amplify impact, and operate more efficiently. That potential comes with significant risks that nonprofits must navigate with care. While AI may transform how organizations operate, this should not come at the cost of the values and communities nonprofits are sworn to protect.
Particularly worrying is the potential for AI systems to replace workers. Since nonprofits depend greatly on personal connection and trust, losing the human touch can lessen their impact. AI chatbots, for instance, might make many things more efficient, but answering routine questions simply cannot simulate the empathy or understanding of a trained caseworker. Taken to excess, automation can alienate beneficiaries and erode the sense of community that many nonprofits provide. AI is meant to assist an organization and its staff rather than substitute for them, freeing people to carry out work that needs creativity, empathy, and subtle decision-making skills.
Another key issue: privacy. Nonprofits often work with highly sensitive information, including medical histories, financial details, and personal stories from vulnerable populations. Adding AI to the mix raises the likelihood of data breaches, misuse, or unauthorized surveillance. In an era where trust in how organizations manage personal information is increasingly tenuous, one misstep can have disastrous results for both the organization and those it serves. Nonprofits need to go beyond mere compliance with data protection laws and establish sound privacy practices to ensure that their use of AI is consistent with the principles of confidentiality and consent.
The biases inherent in AI can further exacerbate existing inequalities. Nonprofits need to ask tough questions about who designs their systems, what data trains them, and whether those choices reflect or challenge systemic inequities. If not carefully monitored, AI can unintentionally perpetuate and reinforce discrimination, privileging a few at the expense of a marginalized many. This is particularly dangerous for organizations working with underserved populations, because biased AI could compound the very problems they are trying to solve.
Finally, nonprofits must approach AI with prudence, recognizing both its great promise and its perils. AI is not a magic bullet, and its adoption should never come at the cost of human dignity, equity, and trust. By critically considering the ethical issues raised by AI and embedding safeguards at every step, nonprofits can ensure that technology advances the mission rather than undermining it.

Jonathan Riley,
CultureSource Technologist-in-Residence, 2024
Telescope Vision LLC.