How Can Social Entrepreneurs Think About AI Today?
- Akshay V
- Aug 25
- 6 min read
Updated: Aug 29

Whenever a new technology comes, the world seems to split into two camps. One group says: “This will solve all our problems!” The other warns: “This is dangerous. It will break everything.”
AI is no different. Open your social media feed and you’ll see endless debates—about privacy, jobs, fake news, and bias.
But here’s the thing I’ve slowly realized in my own journey: every technology creates negative externalities—side effects that weren’t planned. And very often, it’s not governments or big corporations who rush to fill those gaps. It’s civil society. It’s the social entrepreneurs.
If there is a right way to do something, innovation will eventually find it. Think of it like water finding cracks in a rock—it’s only a matter of time. AI today is trained on massive amounts of public data, often without proper consent or representation. But imagine if there were community-driven data catalogs, built ethically and locally. Wouldn’t companies eventually move toward that? Of course they would. It’s just that no one has figured out the model yet.
And this is where social entrepreneurs come in. Our job has always been to step into the spaces that markets and governments overlook. To see the cracks not as dead-ends, but as entry points.
So the real question for us is not: “What amazing things can AI do?”
👉 It’s: “Where is AI struggling—and how can we create solutions that make it work better for people?”
Let me share five spaces where I see both problems and possibilities.
1. The Data Problem
Of course, the first big problem is the data: the data on which AI models are trained. Where does this data come from? Mostly from the internet — books, websites, social media posts. This means:
There is often no reliable way to trace the data back to its sources or to verify how it was labeled.
A lot of the data is in English, not in Tamil, Swahili, or other local languages.
Much of it is copied without permission, and sometimes it even includes proprietary data.
Behind the scenes, lakhs of workers in India, Kenya, and the Philippines are labeling this data. They are paid very little and are sometimes forced to look at harmful content with no support.
This is unfair. But it also points to an opportunity: can we create data standards for AI that every Big Tech company has to abide by? Or can we create more pathways for ethical data sourcing? (Karya is one example.) Companies always want good-quality data, so how do we leverage this opportunity?
2. The Energy Problem
AI needs huge computers to run. These computers eat a lot of electricity and use a lot of water for cooling. There are endless estimates floating around about how one ChatGPT prompt supposedly equals the electricity consumed by x people in a year, and so on.
We have countries where power cuts are common; on the other hand, tech companies are building nuclear reactors and buying power companies just to generate their own electricity. This can mean AI companies competing with homes, schools, and hospitals for electricity. That doesn’t feel right.
So the question becomes: can we build AI in a way that saves energy and water? A recent example was DeepSeek, which was built far more efficiently than ChatGPT and still benchmarked at the same level, if not better. So it is possible to design better algorithms and more energy-efficient ways of training models.
3. The Trust Problem
I’ve met many nonprofits and small businesses who tell me: “We don’t use AI because we’re worried our data will be misused.” And that fear is real. Some AI platforms do use customer data to train their models, often without making it clear. For organizations dealing with sensitive data — like health records, donor lists, or children’s education details — this feels too risky.
The common misconceptions here are:
“If I use AI, I automatically lose control of my data.”
“All AI systems take your inputs to train their models.”
In reality, there are ways to design AI with privacy and consent at the center — but those models are not yet widespread.
💭 Questions for you to explore:
What would a privacy-first AI tool look like for nonprofits or small enterprises?
Can we design ethical data licenses that clearly state how data can (or cannot) be used?
Could entrepreneurs build local AI hosting infrastructure so that sensitive data never leaves the country? (See the sketch below.)
The gap here is trust — and solving it could unlock AI adoption for millions of organizations who are currently holding back.
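To make that last question a little more concrete, here is a minimal sketch of the “data never leaves the building” idea, assuming an open-source model is already running locally through a tool like Ollama. The endpoint, the model name, and the summarise_case_note helper are illustrative choices, not a recipe; the point is simply that the sensitive note is sent to localhost instead of a third-party cloud.

```python
import requests

# Assumes an open-source model is already running locally via Ollama
# (for example by running `ollama run llama3` on the same machine).
# Because the request goes to localhost, the case note never leaves
# the organisation's own computer or server.
OLLAMA_URL = "http://localhost:11434/api/generate"

def summarise_case_note(note: str) -> str:
    """Ask a locally hosted model to summarise a sensitive case note."""
    payload = {
        "model": "llama3",   # any open-source model pulled locally
        "prompt": f"Summarise this case note in two sentences:\n\n{note}",
        "stream": False,     # return the full answer in one response
    }
    response = requests.post(OLLAMA_URL, json=payload, timeout=120)
    response.raise_for_status()
    return response.json()["response"]

if __name__ == "__main__":
    print(summarise_case_note(
        "Visited the family on 12 March; the child has missed school "
        "for two weeks due to illness and needs a follow-up visit."
    ))
```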
4. The Safety Problem
When people think of “AI safety,” they often imagine robots taking over the world. But the real risks today are much closer to home. AI can hallucinate — giving wrong answers with full confidence. It can provide biased results that reinforce stereotypes. It can even give harmful advice in areas like health or mental wellbeing. (This is why even Sam Altman himself cautioned people not to treat AI like a therapist.) And because AI sounds authoritative, many people trust it blindly.
The common misconceptions here are:
“AI knows and understands like a human.”
“Bigger models mean safer, more accurate answers.”
“AI can replace human judgment in sensitive areas.”
💭 Questions for you to explore:
Can we design verification layers that fact-check AI responses before they reach end-users?
How might nonprofits or startups build “human-in-the-loop” systems, where AI provides support but people validate the final decision? (See the sketch below.)
What cultural norms or awareness campaigns could help communities use AI wisely without over-trusting it?
The challenge here is to make AI safe in practice — not just by coding, but by designing guardrails and habits that protect people.
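Here is a minimal sketch of the “human-in-the-loop” idea from the questions above. The ai_draft function below is only a stand-in for a real model call, and in practice the review step would live inside a dashboard or messaging tool rather than a terminal prompt; the structure is what matters: the AI only drafts, and a person decides what actually reaches the user.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    """An AI-generated reply that is held until a human approves it."""
    question: str
    ai_answer: str
    approved: bool = False
    reviewer_note: str = ""

def ai_draft(question: str) -> Draft:
    # Stand-in for a real model call (hypothetical helper, not a real API).
    return Draft(question=question, ai_answer=f"[draft answer to: {question}]")

def human_review(draft: Draft) -> Draft:
    """A person checks the draft before it ever reaches the end-user."""
    print(f"Question: {draft.question}")
    print(f"AI draft: {draft.ai_answer}")
    decision = input("Approve this answer? (y/n): ").strip().lower()
    draft.approved = (decision == "y")
    if not draft.approved:
        draft.reviewer_note = input("Why was it rejected? ")
    return draft

def send_to_user(draft: Draft) -> None:
    if draft.approved:
        print("Sending to user:", draft.ai_answer)
    else:
        print("Held back. Reviewer note:", draft.reviewer_note)

if __name__ == "__main__":
    send_to_user(human_review(ai_draft("Is this medicine safe for my child?")))
```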
5. The Truth Problem
AI has made it easy to generate fake content — from deepfake videos to false WhatsApp forwards. In countries like India, misinformation already spreads faster than fact-checks, especially in local languages and dialects. This creates real risks during elections, public health campaigns, or even natural disasters.
The common misconceptions here are:
“Misinformation is only a problem in English.”
“Fake news spreads mainly on big platforms like Twitter or YouTube.”
“Fact-checking is someone else’s job.”
In reality, misinformation spreads fastest in closed networks — family WhatsApp groups, local Facebook pages, small community forums — where trust is high and fact-checking tools are absent.
💭 Questions for you to explore:
What would fact-checking tools in Indian languages or rural dialects look like?
Can youth groups, schools, or community radio stations act as “local fact-checking hubs”?
How might we design early-warning systems that detect and flag misinformation before it spreads? (See the sketch below.)
The opportunity here is to protect truth where it’s most vulnerable — in the everyday digital spaces where communities place their trust.
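As a toy illustration of the early-warning idea above, here is a sketch that flags a forwarded message when it closely resembles a claim that fact-checkers have already debunked. The claim list, the similarity threshold, and the English-only matching are all simplifications; a real system would need multilingual matching and a community-maintained database of debunked claims.

```python
from difflib import SequenceMatcher

# A tiny, made-up list of claims local fact-checkers have already debunked.
# In a real deployment this would be a shared, regularly updated database,
# ideally covering local languages and dialects.
DEBUNKED_CLAIMS = [
    "drinking hot water cures the virus",
    "the election date has been postponed to next month",
]

def similarity(a: str, b: str) -> float:
    """Rough text similarity between 0 and 1."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_message(message: str, threshold: float = 0.6) -> list:
    """Return any debunked claims this forwarded message closely resembles."""
    return [claim for claim in DEBUNKED_CLAIMS
            if similarity(message, claim) >= threshold]

if __name__ == "__main__":
    forward = "Forwarded: Drinking hot water cures the virus, share with family!"
    matches = flag_message(forward)
    if matches:
        print("This message resembles already-debunked claims:", matches)
```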
6. The Accessibility Problem
I often hear about how AI is “democratizing access.” And in some ways, it is — a student with ChatGPT can learn faster, a business owner with the right tools can expand quicker, and a researcher with AI can process information at lightning speed. But let’s be honest: this kind of accessibility is not for everyone. It’s mainly for those who already have fast internet, good devices, and fluency in English. For others, AI is widening the gap instead of closing it.
The common misconceptions here are:
“AI is accessible to everyone, everywhere.”
“As long as tools are free, people can use them.”
“Connectivity issues will solve themselves with time.”
In reality, billions of people are being left behind — the farmer with only a basic phone, the elderly person who can’t read English, the child in a rural school where internet cuts off during the day.
💭 Questions for you to explore:
How might AI work offline, or in low-bandwidth settings?
What would AI over SMS, IVR (voice calls), or community radio look like? (See the sketch below.)
Can nonprofits and startups build AI systems designed first for those with basic phones rather than the latest smartphones?
How do we make sure AI progress does not deepen inequalities but instead helps bridge them?
The opportunity here is to expand who gets to benefit from AI — by making it work in the contexts and constraints where most of the world actually lives.
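And as one small sketch of the “AI over SMS” question above: the keyword lookup below is only a stand-in for a small local model, and the SMS gateway that would actually receive and send the messages is left out entirely. What it tries to show is the constraint that matters: the whole exchange has to work on a basic phone, one 160-character message at a time, with no internet on the user’s side.

```python
# A toy "AI over SMS" responder: an incoming text is answered from a small
# local knowledge base and trimmed to fit a single 160-character SMS.
LOCAL_FAQ = {
    "weather": "Light rain expected tomorrow; avoid spraying pesticide.",
    "subsidy": "The seed subsidy form is available at the block office until Friday.",
}

def answer_sms(incoming_text: str) -> str:
    """Match an incoming SMS against simple keywords; a small local model
    could replace this lookup where one is available."""
    text = incoming_text.lower()
    for keyword, answer in LOCAL_FAQ.items():
        if keyword in text:
            return answer[:160]  # one standard SMS is 160 characters
    return "Sorry, we did not understand. Reply WEATHER or SUBSIDY."[:160]

if __name__ == "__main__":
    print(answer_sms("What is the weather for my village?"))
```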
What This Means for Social Entrepreneurs
If you are a social entrepreneur today, don’t look at AI as only a shiny tool for the future. Look at it as a field where:
There are already cracks and side effects.
Each crack is also an opportunity for solutions.
And those solutions are often best built by people who live closest to the problem.
I’ve often reflected on how social entrepreneurship grows strongest in the gaps — where markets fail, where governments can’t reach, where technology leaves people behind. AI is creating many such gaps. And that means there is space for us to step in.
So, if you are a social entrepreneur today, maybe the right question is not “How do I use AI?” but “Where is AI leaving people out — and how can I include them?”
That’s where the real impact lies.




