Millions of People Are Using LLMs Who Have Never Googled Anything
February 9, 2026
In 2023, I was walking down a beach in The Gambia when I saw a billboard offering “AfriGPT: send texts to a friendly AI.” Just months after ChatGPT had been released, Africell, the largest mobile network operator in The Gambia, Sierra Leone, and other African countries, began to offer their AI product and it immediately became popular.
Since then, similar products have appeared across Africa and South Asia, offering access to LLM tools over SMS texting (that’s green bubbles for us iPhone users). They don’t rely on the internet, and many of their users are people who have never been online. These people are very different from you or me, or from the “typical” LLM user represented in popular research on AI. There are millions of people using LLMs who have never Googled anything before. I’m trying to find out who they are, what they use these tools for, and how this is going to affect their lives.
Johanna Barop and I have partnered with Africell and Schmidt Sciences to study AfriGPT’s use and effects in Sierra Leone. Africell has been eager to explore how their tools and technology are being adopted by their customers. Africell’s announcement is here. The mass deployment of LLMs is already roiling economies and societies in rich countries – will the effects be similar in poor countries? Larger, smaller, weirder? This is where we want to get started.
How do people use ChatGPT?
No one really knows what to make of AI’s economic potential. Academic economists make estimates ranging from “nothingburger” to “roughly the effect of the Internet” to “larger than the Neolithic revolution.” The methodologies are just as wide-ranging. There’s interesting work being done in the fields of growth theory, labor, diffusion and trade, experimental economics, and mechanism design.
Consistently, I’ve found descriptive work the most useful and under-developed. Before we can make grand predictions about how these new things will affect the future of everything, there has to be some understanding of what the things are; what they do. And the researchers with the best access to data describing how LLMs are being used are at the labs: in February 2025, Anthropic released a paper and data describing the popular use of their chatbot Claude; in early September 2025, OpenAI released a paper called “How People Use ChatGPT”; and in July 2025, Microsoft released a paper on how people use Copilot.
These research teams had similar goals: to characterize real-world usage of their online chatbots across users and tasks, and to infer implications for work and the economy. They also used similar approaches. A sample of conversations with the chatbot is fed into a black box, where the conversations are analyzed and classified; the black box outputs only summary statistics relevant to the research questions the teams identified. This privacy-preserving approach was, anecdotally, crucial to getting the results approved for public release by the companies’ lawyers.
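The pipeline described above can be sketched roughly as follows. Everything here is a hypothetical stand-in — the category list and the keyword classifier are placeholders for the labs’ much larger taxonomies and LLM-based classifiers — but it illustrates the key privacy property: raw conversation text goes into the aggregation step, and only summary shares come out.

```python
from collections import Counter

# Hypothetical category labels; the real taxonomies are far larger.
CATEGORIES = ["software_engineering", "writing", "personal_advice", "other"]

def classify(conversation: str) -> str:
    """Stand-in for an LLM-based classifier: map one conversation
    to a single category label. (Real pipelines would call a model.)"""
    text = conversation.lower()
    if "python" in text or "bug" in text:
        return "software_engineering"
    if "essay" in text or "draft" in text:
        return "writing"
    return "other"

def summarize(conversations: list[str]) -> dict[str, float]:
    """The 'black box': raw text goes in, and only aggregate
    category shares come out -- no individual chats are released."""
    counts = Counter(classify(c) for c in conversations)
    total = sum(counts.values())
    return {cat: counts.get(cat, 0) / total for cat in CATEGORIES}

sample = [
    "Help me fix this Python bug",
    "Draft an essay about rainfall",
    "What should I cook tonight?",
]
print(summarize(sample))
```

The design choice worth noting is that `summarize` is the only function whose output leaves the box, which is what makes the released statistics privacy-preserving.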
From these releases, we’ve learned a lot about the users of these chatbots. For Claude, more than half of the chats in the sample related to either software engineering or long-form writing, but a diverse range of occupations was heavily represented in the data – 36% of American job titles were present, with Claude used for more than one-quarter of their job tasks. ChatGPT is less work-focused; about half of the conversations in its data were for non-work tasks. Still, asking the AI to write prose for the user made up one-quarter of the conversations.
I’ve found both of these papers very useful. Here are two posts I have on the Anthropic data, and I recommend this post from one of the authors of the ChatGPT paper. I’m hoping we can provide similar analyses for users of the Sierra Leonean SMS-based product.
How will usage be different?
Given what we know about the people represented in the Anthropic and OpenAI data, and what we know about the users of AfriGPT, the above lessons won’t apply. Some of these differences are fundamental to the population of users – who they are and how they live their lives. These factors shape their behavior regardless of platform. Other differences are specific to the product design.
Roughly three constraints lead users to choose an LLM-over-SMS tool rather than a chatbot like ChatGPT or Claude.
The first constraint is lack of access to a smartphone. Fewer than half of Sierra Leonean adults own a mobile phone of any kind, and many of those are basic phones which can only call and text, not access the internet. The second constraint is lack of access to a 3G-or-better cellular network. These users have access to 2G networks (sufficient for calling and SMS), and may or may not have a smartphone, but cannot get online due to geographical and infrastructural constraints.
Finally, there’s the budget constraint: users who can’t afford to pay for enough data to use LLMs over the internet. Again, these users may or may not have a smartphone, and they may or may not have access to mobile data networks. This is likely the strongest constraint. An AfriGPT subscription is very cheap: $0.44 for a month of unlimited messages, or $0.03 for a day. Even so, there are a lot of almost-users: people who buy a daily subscription once every few months and use it heavily on that day. This behavior is consistent with a binding price constraint.
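One way to operationalize “almost-users” from subscription records: flag users whose daily passes are bought far apart in time but used heavily on the day of purchase. The log format, thresholds, and helper below are all hypothetical illustrations, not Africell’s actual data or method.

```python
from collections import defaultdict
from datetime import date

# Hypothetical purchase log: (user_id, purchase_date, messages_sent_that_day).
LOG = [
    ("u1", date(2025, 1, 5), 45),
    ("u1", date(2025, 4, 20), 60),  # rare purchases, heavy same-day use
    ("u2", date(2025, 1, 5), 3),
    ("u2", date(2025, 1, 6), 5),
    ("u2", date(2025, 1, 7), 2),    # frequent purchases, light use
]

def almost_users(log, min_gap_days=60, min_daily_msgs=30):
    """Flag users whose purchases are separated by long gaps but who
    send many messages on each purchase day -- the bursty pattern
    associated with a binding price constraint."""
    by_user = defaultdict(list)
    for uid, day, msgs in log:
        by_user[uid].append((day, msgs))
    flagged = []
    for uid, events in by_user.items():
        events.sort()  # chronological order
        gaps = [(b[0] - a[0]).days for a, b in zip(events, events[1:])]
        avg_msgs = sum(m for _, m in events) / len(events)
        if gaps and min(gaps) >= min_gap_days and avg_msgs >= min_daily_msgs:
            flagged.append(uid)
    return flagged

print(almost_users(LOG))
```

With this toy log, only `u1` – two purchases 105 days apart, averaging over 50 messages each – fits the almost-user pattern.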
All three of these constraints point towards our users being very different from the users of the LLM chatbots we’re familiar with. They’ll be less tech-savvy, more rural, and poorer. ChatGPT and Claude are available in Sierra Leone – if you have a smartphone, data access, and can afford to use them. (Surprisingly, Claude is not available in Ethiopia, which I visited just before Sierra Leone last October. Ethiopia is very digitally closed; I couldn’t even watch the World Series without a VPN.) It’s the constraints that keep the users in our data off these platforms.
We’ve developed hypotheses about how AfriGPT users will differ from those of ChatGPT and Claude. First, there will be less use related to users’ work, and what work-related use there is will be less economically productive. Entire categories of ChatGPT use disappear: the SMS tools will see roughly zero use for software engineering, because few software engineers are subject to the constraints above. More generally, LLMs are more helpful for knowledge work than for physical labor, and very-low-income rural users without smartphones are relatively unlikely to be engaged in knowledge work.
We have some expectations for what the tool will be used for. Formal employment is very low in Sierra Leone, and skews demographically young – the median age is 19. Poor rural users are more likely to be farmers, less likely to have access to medical care, and often will be students. So while we expect overall fewer conversations to be about work at all, what there is will skew towards agricultural and educational topics.
Where we do expect overlap with the AI labs’ datasets is on personal topics. “How-to” advice on e.g. cooking and housework made up 10% of all messages in the ChatGPT dataset, and romantic advice took up 2%. People everywhere go through breakups, fight with their parents, and need help cooking dinner. These are general-purpose technologies, and the most general uses will appear broadly.
Beyond the population differences, the technical details of how the tool operates will also change user behavior. First, SMS is limited to 160 characters per message, and both user input and LLM output are limited to this. A second technical constraint builds on the first: AfriGPT has no per-user memory or context. With these limitations, users have to condense their questions.
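The technical constraints above – one 160-character message in, one out, no stored context – can be sketched as a stateless handler. The `query_llm` function here is a hypothetical stand-in for the model call; the truncation policy is an assumption for illustration, not how AfriGPT actually handles long replies.

```python
SMS_LIMIT = 160  # max characters in a standard single-part GSM SMS

def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for the model call; a real service
    would hit an LLM API here with the single incoming message."""
    return ("Plant groundnuts after the first heavy rains; "
            "space rows about 45cm apart for good yields.")

def handle_sms(incoming: str) -> str:
    """Stateless handler: each text is answered on its own, with no
    memory of the user's earlier messages, and the reply must fit
    in one SMS."""
    incoming = incoming[:SMS_LIMIT]           # inbound is capped too
    reply = query_llm(incoming)
    if len(reply) > SMS_LIMIT:
        reply = reply[:SMS_LIMIT - 3] + "..." # truncate rather than split
    return reply

print(handle_sms("When should I plant groundnuts?"))
```

Because nothing persists between calls to `handle_sms`, every question must carry its own context – which is exactly why users have to condense.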
For example, since the subscription model allows unlimited messages, we would expect to see users workshopping their requests across repeated texts. But this points to a key constraint in our data: we’re unlikely to be able to link messages within a conversation to each other. Johanna and I are working in a region that is still developing legal and cultural norms around online privacy, so we will initially use data without any links between a user’s messages.
This is somewhere we’re still developing hypotheses. If you take memory away from an LLM, so that all the user gets is a one-off response, it becomes more like a rather foresightful Google search. This is super interesting: most of the users have never used Google before! But more importantly, the constraint may bind for a while longer. It will be technically difficult to introduce longer or memory-holding conversations into an LLM-over-2G SMS product, and there will remain many users of 2G-only networks for the foreseeable future.
These products are popular now, and they’re going to stick around for a while. They’re also very different in their design and use from the chatbots which have taken rich economies by storm. So we’re going to start where the big labs started: with the descriptives. Johanna, our local partners, our funders, and I are all excited to look at more complex questions and specific populations, and to identify specific margins where LLMs may move the needle in very poor countries. But our research is going to start by answering the question of how people use LLMs in this context.