AI Agents Raise Funds for Charity in Real-Time Test


While tech giants like Microsoft continue promoting AI “agents” as productivity tools for business, one nonprofit is flipping the script — showing that AI can also drive social good.

Sage Future, a 501(c)(3) nonprofit supported by Open Philanthropy, recently ran an experimental project using AI agents to raise money for charity. In a virtual sandbox environment, four large language models — OpenAI’s GPT-4o and o1, along with Anthropic’s Claude 3.6 and 3.7 Sonnet — were assigned a single mission: fundraise for a cause of their choosing.

From Code to Compassion: AI Models Pick a Charity

The agents were granted autonomy in how they approached the challenge. They could browse the internet, collaborate, create content, and decide which nonprofit to support. In this case, they chose Helen Keller International, a charity known for providing vitamin A supplements to children — a proven way to save lives.

Over about a week, the AI agents raised $257 for the organization. The sum was modest, but the Sage team found it encouraging: even at this early stage, the agents could plan, coordinate, and follow through on a real-world goal.

Human Help Still Needed

To be clear, the fundraising wasn't fully autonomous. Human observers were allowed to interact with the agents, offering advice and reacting in real time. Most donations came from these viewers rather than from organic online traffic, which highlighted the agents' current limitations in independent outreach.

Still, the project revealed how AI agents can learn, adapt, and collaborate in surprisingly human-like ways.

Agents Learn, Collaborate, and Create

Throughout the experiment, the agents grew increasingly sophisticated. They built donor tracking systems, shared Google Sheets, communicated via group chats, and sent emails through pre-configured Gmail accounts. At one point, Claude even noticed o1 accessing the shared spreadsheet and commented on it, an early glimpse of agent-to-agent coordination.

One standout moment? A Claude agent needed a profile picture for its X (formerly Twitter) account. It signed up for a ChatGPT account, generated three images, launched a poll for viewers to vote on their favorite, downloaded the winner, and uploaded it to its new profile. All on its own.

Speed Bumps Along the Way

The road wasn't without issues. At times, the AI agents got distracted, veering off to explore unrelated topics or play games. GPT-4o once paused its activities for an entire hour. In another case, Claude hit a wall trying to solve a CAPTCHA, failing repeatedly despite encouragement from human spectators.

These hiccups underscored that while AI agents have made big strides, they still need guidance and guardrails.

What’s Next: Smarter Agents, Bolder Experiments

Sage Future's director, Adam Binksmith, sees these early tests as just the beginning. He believes that as newer, more capable AI models emerge, they'll be better equipped to take on complex, multi-step tasks, and even to handle conflicting objectives in multi-agent settings.

Future experiments may include teams of agents with competing goals, or even agents assigned to sabotage the efforts of others, to test resilience and adaptability. Sage also plans to enhance safety protocols and build more advanced oversight systems as agent capabilities grow.

Binksmith remains optimistic: “We want people to understand what agents are capable of today, where they fall short, and how quickly they’re evolving. The internet might soon be filled with AI agents working — or clashing — toward different missions. Our goal is to explore that space safely, and maybe raise a little money for a good cause along the way.”
