If you’re promoting AI to nonprofits, be SPECIFIC about benefits. No more generalizations!

[Image: HAL from 2001: A Space Odyssey]

The hype regarding Artificial Intelligence (AI) is out of control, including when it comes to mission-based organizations. There are blogs, webinars, YouTube videos and more, all singing the praises of AI for nonprofits and NGOs. Companies, nonprofits and consultants are falling over themselves to say that AI can do ANYTHING a nonprofit or NGO needs done: raise funds, manage volunteers, talk with clients, administer programs, handle all incoming calls, all with little or no human involvement.

Yet these promoters are rarely specific. “You can use AI to research grants!” Okay, how? Tell me exactly what that looks like and how it’s different from just typing keywords into an online search engine.

“You can use AI to screen volunteers!” Great. How? Tell me exactly what that looks like and how it’s different from just requiring certain fields in a volunteer application to be filled out, or requiring a minimum number of characters in a field. And is the goal to eliminate all human interaction until the volunteer shows up for the scheduled volunteering gig? Because it’s that personal, human interaction that often seals the deal for a volunteer to show up at all.

So many of you are breathless about your use of AI, but you aren’t being specific about what that REALLY looks like. Specifics and obvious, real-world benefits are what lead to tech adoption.

Back in the 1990s, when the Internet started going mainstream, I started my own web site as a place to be specific about how the Internet could be used by nonprofit staff, specifically those responsible for outreach and those responsible for recruiting and engaging volunteers. Lots of makers of software and computers were making claims about what these tech tools could do for nonprofits, but they offered no specifics and no detailed guides, probably because they were talking in theory, not actual practice.

As a result, a lot of nonprofits were dragging their feet about switching from index cards for tracking contacts to software that would manage clients and donors – they relished their personal relationships and saw tech as eliminating something fundamental to their fundraising, outreach and program management success. A lot of nonprofits balked at the idea of creating a web site when they weren’t using the web themselves: if a web site wasn’t their primary way of getting information, why should they care? Of course, the reluctance of government and corporate donors to fund tech equipment, Internet subscriptions and training for staff also had something to do with so many nonprofits taking so long to adopt computers and the Internet.

I was one of the first people to start talking online and in workshops, in low-tech PLAIN language, about practical, real-world applications of online and computer tech for nonprofits. I could see the digital divide emerging between nonprofits that were adopting tech, especially online tools, and doing so much more with less, and those that still hoped the Internet was the CB Radio of the 1990s. But those latter nonprofits were providing critical services, and I did not want to see them die due to lack of understanding of emerging tech tools. In my work, I emphasized not only the practical applications and the specifics of tech use, but also that I would never propose the Internet or software as tools to replace humans. I always emphasized applying tech tools with the goal of increasing meaningful human interactions, increasing support and help for humans, both clients and volunteers, and freeing up time for staff so that they could spend more time working in real time with clients, donors, the press, potential partners, other staff, etc.

(If you want to see those early versions of my web site, type the URL into the Internet Wayback Machine.)

That web site and my trainings launched an entirely new career for me. One of the things that made me so successful was that I was SPECIFIC: I didn’t just say, “The Internet can help you reach new audiences!”; I gave specific details on what that looked like, and exactly what a person would need to do to replicate those results. The Virtual Volunteering Project (1996 – 2001) was laser-focused on specifics and practical applications. I wrote one of the first articles (October 2001) about how hand-held technologies – what we now call smart phones – were being used in humanitarian and public health field work and grass roots organizing.

In all of this work, I also never stopped emphasizing the human aspect: when I talked about online mentoring, I noted that success was NEVER about the tech tools, but about the HUMANS involved and how well they were trained and supported.

As a result of my approach, via my web site and via workshops, I regularly got comments like, “This is the first time I’ve ever understood why I should care about the Internet at my job” and “I finally know what questions to ask software salespeople.”

To all of you promoting AI for nonprofits: you have to be as specific as I was. For instance, be clear about why using AI would be preferable to just a web search on Google or DuckDuckGo. In fact, in my opinion, it’s not AT ALL preferable: even if you use AI to get suggestions for small-budget fundraising events for an animal shelter, you should still go to the search engine of your choice and look for fundraising events for animal shelters, because you will find even more ideas there. YOU should know the full range of what’s out there, and no AI tool provides that.

And also to all of you promoting AI for nonprofits: you need to be clear in warning nonprofits NEVER to take an AI-produced product, whether it’s a graphic, a press release or a social media strategy, and use it as is. AI makes mistakes (link goes to one that was very personal for me and would have been traumatizing). AI hallucinates, FREQUENTLY putting incorrect info into the text it produces. An AI tool not only claimed Ananda Valenzuela was speaking at an upcoming conference; it doubled down when she tried to correct it. AI also doesn’t adhere to standards of accessible or even GOOD design: you can use an AI tool like Canva to produce your event flyer, but a HUMAN still has to make sure it adheres to standards of good design (like appropriate color contrast).
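That color-contrast point, at least, is one a human can verify objectively rather than by eye. The Web Content Accessibility Guidelines (WCAG 2.1) define a contrast ratio between text and background colors, and require at least 4.5:1 for normal-size body text. As a minimal sketch of what checking an AI-generated flyer’s colors could look like (the color values here are just illustrative examples):

```python
def _linearize(channel):
    """Convert an 8-bit sRGB channel (0-255) to linear light, per WCAG 2.1."""
    c = channel / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    """Relative luminance of an (R, G, B) color, per the WCAG 2.1 formula."""
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio between two colors; ranges from 1:1 to 21:1."""
    l1, l2 = relative_luminance(fg), relative_luminance(bg)
    lighter, darker = max(l1, l2), min(l1, l2)
    return (lighter + 0.05) / (darker + 0.05)

# Black text on a white background: maximum possible contrast, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))      # 21.0

# A light gray on white, the kind of pairing AI design tools often
# produce: well below the 4.5:1 WCAG AA minimum for body text.
print(round(contrast_ratio((150, 150, 150), (255, 255, 255)), 2))
```

Free online checkers (such as WebAIM’s contrast checker) apply this same formula, so no one needs to run code to do this review; the point is that “appropriate color contrast” is a specific, measurable standard, not a matter of taste.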

One final note to all of you promoting AI to nonprofits: the energy needs of AI are threatening to overwhelm the power grid. AI data centers are increasing demand for electricity at a time when we need to be DECREASING that demand, and they are RAISING energy prices for regular folks. You had better acknowledge this, in full disclosure, when talking to nonprofits, many of whom are trying to adopt greener ways of doing business (and some of whom are focused specifically on addressing global climate change).

Yes, I use AI. One cannot use anything that is on a network of any kind, or even a stand-alone new computer, without using some form of AI. My spell checker and grammar checker is considered an AI tool because, supposedly, it “learns” from me. I use Canva sometimes. I was once charged with writing a poem that might be part of a fundraising campaign, and after I wrote my poem, I asked AI to write one, giving it the same parameters I was given, just to see how it compared. The AI poem actually wasn’t horrible. Mine was better, of course, but if all I had had to work with was that AI poem, with some tweaking it would have been okay. But just okay. And I never trust the AI summary at the top of an online search – I always go looking for the source. Wikipedia remains a far superior resource for explanations and summaries, IMO.

Think of AI-produced material as work turned in by a new employee from the corporate world or a volunteer fresh out of high school: someone who might be able to use the latest computer tech to play video games and watch TikTok videos but does not understand that not everyone has the latest tech tools, not everyone has great eyesight, not everyone uses their hands to navigate web pages, not everyone speaks English as a first language, not everyone understands your soon-to-be-dated jargon, etc. You are always going to have to correct and refine the material AI produces, just as you would that new employee’s or volunteer’s work.

Why am I not taking up the challenge myself and researching and compiling real-world, practical examples of specific ways nonprofits and NGOs are using AI?

  • I do not have the finances to do yet another mostly-unfunded project. I was paid when I managed the Virtual Volunteering Project. I have not been paid for any of the research and resources I’ve produced for my own web site, nor for the Virtual Volunteering Guidebook (when it comes to the book, which I paid to publish, I barely broke even).
  • I think it should NOT be me. It’s overdue for someone else to take up this let’s-talk-plain-language-about-tech challenge.
  • I am much older now and would like to focus on other things.

I really hope someone out there is reading this and will take up the challenge.


If you have benefited from this blog or other parts of my web site and would like to support the time that went into researching information, developing material, preparing articles, updating pages, etc. (I receive no funding for this work), here is how you can help
