While parents have been worrying about what their child's teacher is teaching in the classroom, there may be another classroom presence they should be more concerned about.
That presence is a new generation of artificial intelligence products being used in the classroom to complete tasks such as diagnostic testing, content drills, and data collection on students. They are also being used to teach.
One product, i-Ready by Curriculum Associates, is a program being used throughout the United States and, most important, in Maryland.
The program is promoted as an easy way to assess students, identify student weaknesses and progress, and instruct students. All without a teacher involved. The company adds, "And it's fun!"
There are some glaring problems with AI (artificial intelligence) programs in schools.
Alex Molnar, director of the National Education Policy Center (NEPC) at the University of Colorado Boulder, recently wrote an article suggesting an "indefinite pause" in implementing these programs in our nation's classrooms. Co-authors included Ben Williamson of the University of Edinburgh in the United Kingdom and Faith Boninger, assistant research professor of education at CU Boulder.
First, Molnar notes that these programs use opaque and usually proprietary algorithms—making their inner workings mysterious to educators, parents and students alike. The companies claim to be protecting their investment but may also be protecting harmful changes in how these programs work on the minds of our children.
Second, there is concern over the data collection that happens when a child is connected to one of these programs and responds to carefully designed questions. With little to no knowledge of or control over the algorithms, it's hard to limit what information the programs will elicit from children and how that information will be used.
Protection of that student data is another major concern because of the strong possibility of data leaks. Those leaks would come from third-party vendors who ultimately will not be held accountable, since no current laws address the issue in these cases. Many of the programs are still in a beta-testing phase, amplifying the danger even more.
Then there's the problem of who decides and creates the curriculum content written into the AI platform. Since programmers are not teachers and teachers are not programmers, the content in these programs may not be pedagogically or developmentally appropriate, or even effective, for kids. Who will review the content?
With limited access and testing, even the school systems that use this technology may not be able to gauge how effective the programs are in helping students learn. This is exacerbated if the AI company is both the teacher and the assessor, creating a huge conflict of interest and unreliable data. When millions of dollars are on the line, algorithms can be altered to make data look more favorable to the AI program and to project success that is not real.
The nature of AI itself might make matters worse. From the National Education Policy Center:
So-called AI uses algorithms and massive amounts of computing power to produce results based on countless calculations of probabilities. For example, what is the probability that the next word in a sequence will be ‘juice’? These calculations do not produce ‘truth’ or even, necessarily, accuracy. They produce probabilistic output. 1
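To illustrate the NEPC's point, here is a toy sketch of weighted next-word selection. The words and their probabilities are invented for this example, not taken from any real product; the point is that the program picks an output by chance, not by checking whether it is true.

```python
import random

# Hypothetical probabilities for the next word after "a glass of orange ..."
# These numbers are made up for illustration -- no real model is consulted.
next_word_probs = {
    "juice": 0.70,
    "paint": 0.15,
    "cat": 0.15,
}

def pick_next_word(probs):
    """Sample one word according to its probability weight.

    The choice is probabilistic: nothing here verifies that the
    selected word is accurate or 'true' in context.
    """
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

print(pick_next_word(next_word_probs))  # most often "juice", but not always
```

Run repeatedly, the sketch returns "juice" about 70% of the time and something nonsensical the rest, which is the distinction the NEPC draws between probabilistic output and accuracy.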
One additional problem for parents is the accessibility of the materials used in their child's classroom. If a parent wants to see those materials, there is no textbook or standard computer program to review; everything is contained in the AI algorithms. If a grading issue occurs, who does the parent speak with when the AI program determined the grade? No one, not the teacher, the principal, or the superintendent, will know the basis of the grade. It's hard to push back on something that no one can confirm.
The lack of teacher control and contact with the student is also an important concern. Parents depend on the judgment of a well-trained human teacher who can go beyond mere data and algorithms to address the academic needs of students and inspire them to achieve their potential. No AI program can do that.
One of the "tells" about the dangers of AI classroom programs is that the companies selling them explain up front why people SHOULD NOT be concerned. From the Curriculum Associates website:
“Is i-Ready dangerous?” “Is i-Ready bad for student achievement?” “Does i-Ready replace teachers?” We hear some of these concerns every now and then from educators and parents before they become familiar with i-Ready. This page will help you learn more about i-Ready, answer some of those questions, and point you to additional resources to learn more.
Here is a link to that page:
The responses are all based on research from this group:
This group is funded by the U.S. Department of Education.
When a business feels compelled to go to these lengths to dispel rumors, it's possible the rumors are at least partially true. Let's not forget that putting computers or iPads in the hands of every student in every classroom was a concept sold a few years ago as the answer to improving academic achievement in Maryland. Yet, in a time when many students have technology to support learning in the classroom, Maryland scores are lower than ever.
Technology fails to improve student achievement the way quality teaching does. From the RAND Corporation:
Many factors contribute to a student’s academic performance, including individual characteristics and family and neighborhood experiences. But research suggests that, among school-related factors, teachers matter most. When it comes to student performance on reading and math tests, teachers are estimated to have two to three times the effect of any other school factor, including services, facilities, and even leadership.
School systems across the country will no doubt buy into the AI myth as they are sold these programs not only as a way to improve student learning, but also as a way to solve the shortage of quality teachers and the rising cost of education in our country. After all, AI programs don't need days off, don't get tired, don't need benefits, and don't ask for pay raises.
But they do have serious flaws that need to be acknowledged before a system adopts them.
What can be done? Again, Molnar and his associates advocate for a pause in the implementation of these programs until they can be tested, and the "bugs" worked out.
The solution would be for state legislatures to, by statute, say, in essence: Public schools in this state may not adopt artificial intelligence programs unless and until those programs are certified by this governmental entity—they’d have to create the entity. It has reviewed these programs. It has said they are safe for use, and it defines what the appropriate uses of the program are and for whom.
In other words, nothing goes in the schools until we have the statutory and regulatory framework and institutional capacity in place to independently assess AI platforms that are proposed for school use. 1
This makes sense. Unlike an app on your phone that can be updated periodically to correct problems, these classroom applications can drastically affect the education and development of young children. Therefore, they must be implemented with complete transparency and undeniable safety and security.
There are some problems that no legislation or monitoring can solve. Already psychologists tell us that there are many negative consequences of children's constant exposure to technology: (This is from Grand Canyon University and contains ads for their degree programs).
Lower attention span. Teachers, parents, and students themselves find that technology can have a direct impact on attention spans. The immediacy of technological interactions make waiting harder for children. With technology, they aren’t forced to wait. They can have their TV show immediately, they don’t get bored because they always have something to entertain them. Technology moves fast, instant responses and instant gratification are impacting attention spans for young children and teenagers alike.
Increased risk and lack of privacy. Teenagers and children have grown up in a technological world, and the idea of privacy is somewhat foreign to them. Cybersecurity is a huge element of tech today, but it isn’t always perfect. Hackers and criminals can utilize technology to steal identities and harass children. Technology has created an increase of theft, privacy issues, harassment, and more.
Risk of depression. Teenagers and children who report more time using media are more likely to also report mental health issues. Depression is a key issue that is correlated with more media use. This has increased suicide rates and has led to more youth needing mental health interventions like medicine and counseling. Experts believe time spent on social media or using technology can directly be tied to increased depression.
Obesity. Children who spend more time inside on their phones or tablets don’t spend as much time running and playing outside. They establish habits of technology use that don’t involve exercise. This can lead to increased obesity rates in children and young adults.
Falling grades. Many students today can see their grades take a hit when they spend more time with technology. Increasing technology usage means less time spent on homework, and the kind of developmental changes technology can bring can make students struggle with homework like reading and writing.
Bullying. As technology flourishes, so does bullying. Children and teens are using technology and social media to bully other kids, without having to face them. Often called cyberbullying, this trend is increasing and getting more popular with even younger students.
Social interaction issues. With more time spent on technology, younger children are having issues with face-to-face social interactions. Many seem to prefer to text or talk on social media as opposed to talking to each other in person. Even when children spend time together, they may spend more time texting or on their phones than actually being together. 2
Children suffer these consequences when they use technology constantly outside the school day. Imagine the effects of an additional six to seven hours of being attached to a screen in school.
What can parents do to prevent this AI onslaught?
First, be aware of whether your child's school or district is implementing AI. Ask questions and ask for links to the companies providing this technology. Be specific in questions about how the technology will be used and how it will impact student/teacher contact. If the school sends a "contract" asking you to sign off on your child's participation in AI programs, trackers, etc., remember you do NOT have to sign. The school will have to provide an alternative.
Most important, parents need to ask how the school system will protect their child's privacy.
Artificial intelligence is gradually creeping into every aspect of our lives. However, now is the time to prevent it from raising our children.
Nancy Bailey, on her Education Website, explores AI more. She also addresses a variety of educational issues.
The Murky World of i-Ready, Grading, and Online Data - Nancy Bailey's Education Website (nancyebailey.com)
Jan Greenhawk, Author
May 12, 2024
Jan Greenhawk is a former teacher and school administrator for over thirty years. She has two grown children and lives with her husband in Maryland. She also spent over twenty-five years coaching/judging gymnastics and coaching women’s softball.
This article was originally featured on the Easton Gazette.
You can subscribe to the Delmarva Parent Teacher Coalition and follow us on Facebook to stay informed of what's really happening with education in our schools.