Image of a cellphone with apps, photos, robots and other images coming out of it
Illustration by Ryan Olbrysh; mila103/AdobeStock (robot); JEROSenneGs/Adobe Stock (Echo); Shutterstock.com (all other images)

The Rise of AI

Advanced artificial intelligence tools like ChatGPT are stunning the world with their humanlike abilities. Should we welcome this new technology—or fear it? 

By Mackenzie Carro
From the April 2024 Issue

Learning Objective: to trace and evaluate two opposing arguments

Lexile: 920L

The Situation:

A self-driving car. Siri. Your TikTok “For You” page. What do these things have in common? They are all powered by artificial intelligence.

Artificial intelligence, or AI, is a technology that enables computers and other machines to perform tasks that normally require a human’s ability to think or learn. Machines equipped with AI can do everything from writing and speaking to making decisions.

You probably already interact with AI in your daily life. Voice assistants like Alexa use AI to turn on lights or tell you who won last year’s Super Bowl. Spotify uses AI to recommend new songs to you. Snapchat filters use AI to turn your face into a cartoon character. 

You may even be familiar with a more recent AI tool: ChatGPT. Known as a chatbot, ChatGPT can carry on humanlike conversations. You can ask it to do all sorts of tasks, like explain what gravity is or write a story in the style of Edgar Allan Poe. By drawing on vast quantities of information on the internet, ChatGPT can complete these tasks—and many more—in seconds.

OpenAI released ChatGPT in 2022. Since then, it has been used by more than 180 million people. People have begun dabbling in other advanced AI tools too, like DALL-E, which can create images and art. 

For some, the potential of AI is dazzling. They say it could be the key to solving the world’s most pressing problems. Indeed, AI has already helped develop new medicines, for example. 

But not everyone is excited. In a recent poll by Monmouth University, 41 percent of Americans said they thought AI would do more harm than good. AI could take people’s jobs, some fear, or spread misinformation. 

So how will AI affect us? How will it affect our world? Will it ultimately help us—or hurt us?

AI will make the world a better place.

We just have to use it responsibly.

By Mikayla Simmons

Artificial intelligence is not something to be feared. It’s a brilliant technology that can make our world—and our daily lives—better.

Nothing New


It’s not surprising that people are skeptical of AI. If you look back at some of the most groundbreaking inventions, you’ll see that fear of new technology is nothing new. 

Take written language. In ancient Greece, the philosopher Plato worried that learning to write would weaken people’s minds. Then there is the telephone. When it was introduced in the 1870s, some people thought it would bring about the end of written communication. In the 1980s, people feared computers would take everyone’s jobs. Some even thought computers would control our brains! 

Of course, none of these fears came true. In fact, writing, phones, and computers all transformed our world for the better.

AI will do the same. Indeed, it has already started. AI-powered algorithms on TikTok serve us content that we’re interested in. Facial recognition unlocks our phones with a glance. Devices like Alexa tell us the weather when we’re getting ready for school. These are just a few of the AI tools that have made life more convenient. 

And now there’s ChatGPT, which can do even more, like write emails, translate text into other languages, summarize stories—even tell jokes! 

Helping Society

AI can be used in ways that will help society too. Because it can sift through large amounts of data at lightning speed, AI can help doctors detect and diagnose diseases. It can analyze photos and describe objects for people with vision loss. It can help predict natural disasters like hurricanes by quickly analyzing information about past storms. 

What’s more, AI can help people do their work faster. One study found that computer coders who used an AI tool completed their tasks 56 percent faster than those who did not.

Neither Good Nor Bad

Still, there are issues with AI that must be addressed. One issue is the potential spread of misinformation by tools like ChatGPT, which sometimes gives false or incomplete information. 

But concerns like these are already being dealt with through safety guidelines. In fact, seven major tech companies—including OpenAI, Google, and Meta—have agreed to enact AI safety rules. 

One rule that’s been proposed is that content generated by AI must be labeled with a mark or stamp. This could help prevent the spread of misinformation. If AI-generated images must be marked, for example, it will be harder to pass off fake images as real.

The fact is, AI is neither good nor bad. It doesn’t have feelings or emotions. It is simply a tool, and it’s up to us to use that tool in the right way. As Uncle Ben once said to Peter Parker in Spider-Man, “With great power comes great responsibility.” 

AI is a great power. And it is our great responsibility to use it wisely. 

I believe that we will, and that AI will make our world better.

AI is a dangerous technology.

And it must be stopped.

By Dave Ram

Should we embrace AI because it can make our lives easier? Many say yes. They marvel at how humanlike conversations with ChatGPT are and how well written and smart the bot seems to be. 

But the fact that AI can do so many of the things that humans can—and do them well—is nothing to celebrate. The truth is, AI is dangerous because it can make us less smart, spread false information, and take people’s jobs.

Taking Over


McKinsey Global Institute estimates that 12 million people may need to change jobs by 2030 because of AI. That’s because AI could one day become powerful enough to take over tasks that can now be completed only by humans. 

Even if AI didn’t threaten jobs, it would still be a problem. Sure, it’s incredible that it takes ChatGPT mere seconds to write an email or help create a resume for that summer job you want. Yet, if we always turn to AI, we won’t know how to do anything on our own. 

Convenience and speed are no doubt valuable when it comes to getting things done. But what about the sense of pride that comes with doing things for yourself? The feeling of accomplishment you get when you solve that tough algebra problem or write the perfect introduction for that social studies essay, for example, simply cannot be replaced. 

Perhaps the most concerning thing about AI is that while its powers can be used for good, they can also be used to do harm. AI can be used to generate convincing fake videos and images, as well as articles filled with lies. Someone could, for example, use ChatGPT to write an article in the voice of a doctor giving incorrect medical advice, or use DALL-E to create a fake image of the president announcing that the country is under attack.

Getting Things Wrong

Another concern is that AI systems can get things wrong. After all, AI-powered bots like ChatGPT learn from internet text and data, and not everything on the internet is factual or correct. That means not everything these bots say is right either. 

Plus, if an AI system doesn’t know the answer to a question, it may “hallucinate,” which means it makes something up. Last year, a lawyer used ChatGPT to write a document that he submitted to a judge. The document was full of examples of past court cases. But it turned out that several cases were completely made-up! The lawyer lost his job. 

Yet another problem is that because tools like ChatGPT learn from the internet, what they generate may reflect or repeat offensive online content. Why use something that could reinforce harmful ideas, like stereotypes?

It’s true that no one can be sure where AI will take us. But even the CEO of OpenAI, Sam Altman, acknowledged that there are concerns. He told Congress: “I think if this technology goes wrong, it can go quite wrong.” 

So we must ask ourselves: If there is a chance that AI could “go quite wrong,” is it really worth the risk?

What does your class think?

Will AI do more harm than good?


Scavenger Hunt

Directions:

For each essay, complete the following steps on your own document:

1. Identify the central claim.

2. Identify the reasons.

3. Identify two pieces of supporting evidence.

4. Identify the counterclaim.

5. Identify the rebuttal.

Now decide: Who makes the stronger argument?

This article was originally published in the April 2024 issue.

Step-by-Step Lesson Plan

Close Reading, Critical Thinking, Skill Building

1. PREPARE TO READ (15 MINUTES)

2. READ AND DISCUSS (45 MINUTES)

For students’ first read, have them follow along as they listen to the audio read-aloud, located in the Resources tab in Teacher View and at the top of the story page in Student View. 

Have students silently reread the article to themselves.

Poll the class: “What do you think? Will AI ultimately help us—or hurt us? No matter what you personally think about AI, who do you think makes the better argument: Mikayla or Dave?” Tally the results on the board. 

Now trace and evaluate the arguments in each essay: 

Read the directions in the Scavenger Hunt box on page 12 or at the bottom of the digital story page. If you need to review the bolded academic vocabulary in the box, here are definitions and examples:

  • central claim: the big idea that the author supports in their argument; their position, belief, or viewpoint
    • Example: School should start later.
  • reasons: the grounds on which a central claim is based; the individual reasons that support or prove the central claim
    • Example: Middle school-aged kids need more sleep.
  • supporting evidence: facts, statistics, and examples that show why a reason should be believed; evidence and reasons that support and “hold up” a claim    
    • Example: A study by the Sleep Institute found that 47 percent of kids aren’t getting enough sleep.
  • counterclaim: an acknowledgment of a concern or disagreement from those with opposing viewpoints 
    • Example: Some may argue that starting school later won’t help kids get more sleep, that they’ll just go to bed later.
  • rebuttal: an author’s direct response to an opposing viewpoint or claim (the “comeback” to a counterclaim)
    • Example: Some may argue that starting school later won’t help kids get more sleep, that they’ll just go to bed later. ←[counterclaim] While that may be true in some cases, a 2018 study that looked at two schools in Seattle found that students’ sleep increased an average of 34 minutes each night after start times were moved nearly an hour later. ←[rebuttal]

For more argument terms support, see our Argument Terms Glossary, found in the Resource Library at Scope Online.

  • Project Mikayla’s essay and do a think-aloud that models each step in the Scavenger Hunt. Students can mark along in their magazines with you, or fill in the Scavenger Hunt graphic organizer found at Scope Online. This activity is offered on two levels; the lower-level version has students identify central claims, reasons, and supporting evidence only.
    • Identify Mikayla’s central claim (What does Mikayla think?)
      • First, ask students: “Based on her essay, how would Mikayla respond to the question in the introduction: Will [AI] ultimately help us—or hurt us?” (Mikayla would say, “AI will ultimately help us.”)
      • Think aloud: “I’m going to circle lines that express this big idea: ‘Artificial intelligence is not something to be feared. It’s a brilliant technology that can make our world—and our daily lives—better.’”
    • Underline Mikayla’s reasons (Why does she think that?)
      • Think aloud: “I just circled Mikayla’s central claim—that is, what Mikayla thinks. Now I’m going to underline her reasons—or why she thinks what she thinks. I’m going to underline ‘Fear of technology is nothing new’ and ‘Of course, none of these fears came true.’ Then I’m going to underline ‘. . . writing, phones, and computers all transformed our world for the better’ and ‘AI will do the same’ and draw a bracket to show that they go together. Finally, I’m going to underline ‘AI can be used in ways that help society too.’”
    • Put check marks on two pieces of supporting evidence (How does she know?)
      • Think aloud: “Can I find information Mikayla provides to back up her reasons?” Then draw students’ attention to the following two pieces of evidence: (1) “AI-powered algorithms on TikTok serve us content that we’re interested in. Facial recognition unlocks our phones with a glance. Devices like Alexa tell us the weather when we’re getting ready for school. These are just a few of the AI tools that have made life more convenient” and (2) “Because it can sift through large amounts of data at lightning speed, AI can help doctors detect and diagnose diseases. It can analyze photos and describe objects for people with vision loss. It can help predict natural disasters like hurricanes by quickly analyzing information about past storms.”
    • Star the counterclaim (What does the other side say?)
      • Think aloud: “Where does Mikayla acknowledge a concern or concerns from the opposing viewpoint? I’m going to star ‘Still, there are issues with AI that must be addressed. One issue is the potential spread of misinformation by tools like ChatGPT, which sometimes gives false or incomplete information.’”
    • Put a double star next to her rebuttal (What is her response to the other side?)
      • Think aloud: “Does Mikayla have a comeback for the viewpoint that there are issues with AI, like the fact that it could spread misinformation? Yes. She says, ‘But concerns like these are already being dealt with through safety guidelines. In fact, seven major tech companies—including OpenAI, Google, and Meta—have agreed to enact AI safety rules’ and ‘One rule that’s been proposed is that content generated by AI must be labeled with a mark or stamp. This could help prevent the spread of misinformation. If AI-generated images must be marked, for example, it will be harder to pass off fake images as real.’”
  • Have students complete the Scavenger Hunt for Dave’s essay. They can work independently or in pairs, optionally using the Scavenger Hunt graphic organizer available at Scope Online. Then share out responses as a class. Sample responses:
    • Central claim: “AI is a dangerous technology.” (Students may also say: “But the fact that AI can do so many of the things that humans can—and do them well—is nothing to celebrate.”)
    • Reasons: “The truth is, AI is dangerous because it can make us less smart, spread false information, and take people’s jobs.”
    • Supporting evidence: “McKinsey Global Institute estimates that 12 million people may need to change jobs by 2030 because of AI,” “The feeling of accomplishment you get when you solve that tough algebra problem or write the perfect introduction for that social studies essay, for example, simply cannot be replaced,” and “AI can be used to generate convincing fake videos and images, as well as articles filled with lies.”
    • Counterclaim: “Convenience and speed are no doubt valuable when it comes to getting things done.”
    • Rebuttal: “But what about the sense of pride that comes with doing things for yourself? The feeling of accomplishment you get when you solve that tough algebra problem or write the perfect introduction for that social studies essay, for example, simply cannot be replaced.”
  • Discuss: Which evidence do you find most convincing in each essay? Least convincing? What do Mikayla and Dave agree about? Are there any important reasons you think they left out of their arguments? Answers will vary.

3. WRITE ABOUT IT: WHAT DO YOU THINK? (45 MINUTES)
