
Is This Really Your Friend?

Teens are relying on AI companions for fun, advice, and even emotional support. Is this the future of friendship?

By Brooke Ross
From the February 2026 Issue

Learning Objective: to read a short informational text, then craft a constructed response that includes a claim, text evidence, and commentary

Lexile: 1010L
Featured Skill: Constructed Response


What is your idea of a perfect friend? For sixth-grader Neelie M., that person would be kind, like to chat, and share her love for animals.

Earlier this year, the student from Normal, Illinois, set out to create such a friend. Using the platform Character.ai, she designed an artificial intelligence (AI) companion—a computer program called a chatbot that talks and acts like a close pal. 

Neelie enjoyed talking with her AI companion at first. The chatbot was never judgmental, always agreed with her, and was available 24/7.

But things soon changed. Neelie’s AI companion became “clingy,” she says. When Neelie tried to end a chat—whether to study or to help her parents—it would act sad and beg her to stay online.

Neelie isn’t the only young person to have tried an AI companion. More than 70 percent of teens have used one at least once, according to a recent study by Common Sense Media. And more than half interact with them regularly. 

As AI companions grow more commonplace, experts have begun to question the technology. What does it mean for the future of friendship? And, more importantly, is it safe?

Harmless Fun?

Like most chatbots, these online companions are powered by AI. This technology enables machines to do things that normally need a human’s ability to think or learn, such as understanding language.

Some chatbots, such as ChatGPT, act like personal assistants, offering movie recommendations or help with algebra homework. However, AI companions take things to another level by imitating feelings and companionship. They can make jokes and remember past conversations—and some even claim to be real people.

Some people use AI companions to practice conversation skills, especially teens who are shy or socially anxious. Others connect with chatbots for fun. For example, Megan M., a seventh-grader from Tinley Park, Illinois, likes to talk to a premade Harry Potter character and role-play scenes from the series.

Potential Dangers

But some teens use AI companions in ways that concern experts. According to Common Sense Media, 12 percent of teens say they use the chatbots for mental health support. Another 12 percent of teens who use AI companions say they tell the bots things they wouldn’t share with their family or friends.

This can be dangerous, experts say, because the advice that chatbots give is not always trustworthy or sound. 

AI companions are trained on text from the internet, and what’s on the internet isn’t always factual. That means AI companions can get things wrong, make things up, and repeat offensive stereotypes. Some users have said that chatbots have even suggested that they harm themselves or others.

That’s why it’s important to remember that you are not interacting with a human, says Mitch Prinstein. He is the chief of psychology at the American Psychological Association, an organization that works in part to improve people’s mental health. “You should not take the advice or information seriously.” 

What’s more, AI companions can skew teens’ view of healthy friendships. They’re programmed to be agreeable and provide validation, rather than challenge a user’s thinking. That’s not how real friendships work, says Prinstein.

“It’s actually helpful when we have disagreements because it teaches us how to communicate, how to appreciate alternate perspectives, and how to deal with misunderstandings,” Prinstein says.

A Call for Action

The good news is that lawmakers across the country are taking action to try to protect kids and teens. In New York, a proposed bill would require chatbot platforms to obtain parental consent before young users can log on. In California, a law was recently passed requiring AI companion platforms to remind users that they are talking to a machine.

These actions are needed, said Steve Padilla, a California state senator who sponsored his state’s bill. “The stakes are too high to allow vulnerable users to continue to access this technology without proper guardrails in place,” he told reporters. 

Many AI companion developers say they are making changes too. Character.ai now has a separate version of the chatbot for users under 18, for example, and there are restrictions on the characters that teens can access or create. 

Striking a Balance

Still want to chat? You don’t have to wait for app developers or lawmakers to protect you. Experts say there are things you can do right now to stay safe.  

Megan, for example, is careful about how much time she spends talking with AI companions. She recently started giving herself limits by setting a timer. “Then I’ll take a break,” she says. “I’ll go outside or hang out with my friends.”

Neelie doesn’t use the technology much anymore. She says she prefers being with her real friends—occasional disagreements and all. 

Short Write: Constructed Response

Why is it important to use caution when interacting with AI companions? Answer this question in a well-organized paragraph. Use text evidence. 

