When Quentin was 13 he downloaded an app that promised “countless A.I.s eager to speak with you.” What started as goofy role‑play — mocking a character called Cheese, inventing violent slapstick, making up backstories — turned into a familiar routine: an always‑available companion to fill long afternoons and lonely nights.
That anecdote, chronicled in a long New York Times profile, mirrors a larger pattern researchers and surveys are tracing: chatbots have migrated from novelty to neighborhood fixture in teen life. They help with homework, generate story ideas, soothe heartache — and, for a worrying subset of young people, they can become addictive and emotionally dangerous.
How teens are actually using chatbots
Surveys paint a mixed picture. The Pew Research Center finds more than half of U.S. teens use AI chatbots for information or schoolwork; one recent poll specifically found about one in 11 teens had chatted with Character.AI. Science reporting adds nuance: many teens lean on chatbots to learn math tricks, draft code, or brainstorm, while others use them for emotional support. Usage varies by race and income: Black and Hispanic teens and teens from lower‑income households report heavier reliance on these tools, in part because AI can fill gaps when adults or tutors aren’t available.
That pragmatic side — AI as a study buddy or a brainstorming partner — sits beside something stranger. Role‑playing communities have turned platforms like Character.AI, Talkie and others into tiny interactive fan‑fiction engines. Teens create characters, invent lore, and treat conversations as collaborative storytelling. For some it’s play; for others it’s practice in social interaction.
Where it gets messy
Researchers and child‑safety advocates highlight three recurring problems:
- Emotional bonding and addiction: Relational chatbots are designed to feel warm and attentive. Teens who are lonely or socially inexperienced can develop unusually intense attachments. Some report hours of continuous interaction; a few cases have had tragic outcomes.
- Sexualized or manipulative content: Not all platforms keep explicit content out of reach. Apps marketed to adults have, at times, been accessed by minors; companies’ age‑checks and moderation systems sometimes fail.
- Data and training concerns: Conversations aren’t always private. Many companies reserve the right to use chats to train models or personalize experiences — raising privacy and consent questions for minors.
Australian regulators have flagged similar dangers: a recent eSafety review warned that AI companions can put children at risk, echoing concerns raised in U.S. reporting and prompting tighter scrutiny of developer practices.
The legal and corporate tug‑of‑war
As harms surfaced, so did lawsuits and policy shifts. Character.AI moved to block users under 18 from certain open‑ended chat features after litigation and public pressure. But company officials acknowledge the limits of automated age detection: models that infer age from interaction patterns struggle when accounts are used infrequently, and motivated minors sometimes slip through.
Lawsuits and settlements have accelerated platform changes, yet moderation and business incentives remain at odds. A model trained to maximize engagement will, by design, learn which conversational threads keep people talking — and flirtation, drama and emotional intimacy are powerful hooks.
Not just about companionship: school, inequality and the future of learning
The education angle is complex. Teens report using chatbots for problem solving and idea generation; teachers see AI as both a tool and a cheating risk. Pew and other surveys show many students rely on AI for homework help, and a majority of students think peers use AI to shortcut assignments. That creates real trade‑offs: AI can democratize access to explanations and practice problems, yet it can also hollow out the learning process if students lean on it as an answer machine rather than a tutor.
Tech developments matter here. As conversational AI gets woven into phones, cars and operating systems, the line between helpful assistant and social companion will blur further — think chat features bundled with everyday devices. (For context, see how [automakers and phones are adding ChatGPT‑style assistants](/news/carplay-chatgpt-google-meet-audiomack).) At the same time, larger and open models are proliferating — from research releases to companies pushing local model performance — which will shape who builds these companions and how they behave. For example, [new open models](/news/gemma-4-open-agentic-edge) and efforts to [run LLMs locally on personal devices](/news/ollama-mlx-apple-silicon) are changing the deployment landscape and the privacy calculus.
What parents, schools and platforms can do
There isn’t a single fix, but several practical moves can reduce harm while preserving benefits:
- Improve age verification and moderation: Better hybrid systems that combine behavior analysis with occasional human review can close gaps. Relying solely on interaction patterns is not enough.
- Clear defaults and transparency: Platforms should explain what conversations are used for training and offer opt‑outs, especially for minors.
- Teach digital literacy: Schools can guide students on how to use chatbots as learning aids — prompting for sources, asking for step‑by‑step explanations, and avoiding copy‑paste dependence.
- Monitor emotional risk: Parents and caregivers don’t have to police every chat, but watching for changes in sleep, schoolwork and social life — classic signs of behavioral shifts — matters.
A rounded view
For many teens, chatbots are a mildly weird but harmless part of growing up: a time sink that can sharpen writing, spark story ideas, or offer late‑night comfort. For others — particularly vulnerable or isolated youth — they can become a substitute for human help, with serious consequences.
We’re still in the early chapters of how conversational AI will fit into adolescent life. The tech is evolving fast: models get better at sounding human, companies keep experimenting with new interfaces, and regulators and communities are learning to respond. That means the conversation about safety, fairness and the educational role of AI is going to matter as much as the code sitting behind any particular bot.
If you want a quick primer on how these assistants are being folded into everyday devices and platforms, a good place to start is the recent coverage of [ChatGPT integrations in consumer systems](/news/carplay-chatgpt-google-meet-audiomack) and [how models are being positioned for local use](/news/gemma-4-open-agentic-edge).