Edition 14 | April 13, 2026
The FTC ordered edtech giant Illuminate Education to overhaul its security practices and delete unnecessary student data after a 2021 breach exposed the personal information of 10 million students. The company stored data in plain text, ignored vendor security warnings for two years, and took nearly two years to notify some affected districts, which together serve 380,000 students. The 10-year consent order sets a new precedent for what the federal government expects from edtech data security.
Why it matters: If your district evaluates vendors based on privacy pledges alone, this is your wake-up call. Illuminate pledged to protect student data and failed catastrophically. Districts need verification protocols, not trust.
A fourth grader at Delevan Drive Elementary in Los Angeles was assigned to create a book cover for Pippi Longstocking using Adobe Express for Education on a school-issued Chromebook. The AI generated sexualized images of women in lingerie instead of a children's book character. Other parents reproduced similar results. Within weeks, California published new AI safety guidelines requiring vendors to demonstrate rigorous testing against child-safety benchmarks before tools can be adopted by schools.
Why it matters: AI tools marketed as "education-safe" may not have adequate content filters. Every teacher assigning AI-based projects needs to pre-test the tools themselves first. California's "safety by design" framework could become a national model.
Forrester predicts that an agentic AI deployment will cause a publicly disclosed data breach in 2026, and 48 percent of cybersecurity professionals now identify autonomous AI systems as the single most dangerous attack vector. The U.S. government published a formal Request for Information on AI agent security in January, and researchers have demonstrated that indirect prompt injection can corrupt an agent's long-term memory, creating persistent backdoors.
Why it matters: If your district is adopting AI tools that take autonomous actions, you need to understand what data those tools can access, what permissions they have, and who is liable when something goes wrong.
Ransomware gangs claimed over 400 victims in Q1 2026. The PowerSchool breach exposed data on 62 million students and 9.5 million teachers through a single compromised contractor credential on a customer support portal lacking multi-factor authentication. A deepfake video call impersonating a CFO cost one Fortune 500 company $28 million. Deepfake-enabled fraud is now happening at industrial scale, and average business email compromise losses have reached $4.9 million per incident.
Why it matters: The PowerSchool breach is the largest K-12 data breach in history. Social engineering is the number one attack vector. MFA is non-negotiable. If your district has not mandated it everywhere, start today.
California's AB 1159 would ban using student data to train AI models unless specifically for educational purposes, give students the right to sue for privacy violations with damages up to $500 per incident, and extend protections to college students. The bill expands coverage from apps "designed and marketed" for K-12 to any online service known to be used for school purposes, closing the biggest loophole in existing law.
Why it matters: The AI training ban directly addresses educator concerns about student work being used to train commercial AI. The private right of action means real financial consequences. If passed, this becomes the strongest student privacy law in the country and a model for every other state.
Governor Brad Little signed SB 1227 on March 26, 2026, directing the State Department of Education to develop a statewide framework for responsible AI use in classrooms. The law explicitly states AI cannot replace teachers, requires every district to adopt local AI policies, mandates vendor transparency about AI use in their products, and directs the development of AI literacy standards for students.
Why it matters: Idaho is one of the first five states to codify AI guidelines for K-12 into law. The framework balances state-level structure with local flexibility and keeps teachers central to the learning process. Watch this model closely.
Vermont's H.650 would create the first state-level EdTech company registry, requiring providers to register with the state and certify privacy compliance annually before doing business with Vermont schools. The Software and Information Industry Association opposes the bill, arguing it would create duplicative burdens and reduce access to essential tools, particularly for students with disabilities and those in rural communities.
Why it matters: This would shift the compliance burden from individual districts to a state-level process. If Vermont succeeds, other states will follow. The industry pushback highlights the real tension between innovation access and student protection.
One of the strongest family opt-out proposals in the country, SB 3735 would give students and parents the right to opt out of school-issued devices, electronic textbooks, and online assignments. It is the first major bill to give families the right to demand human review of any AI-generated grade. It also bans school biometric systems and prohibits AI training on student data without consent.
Why it matters: Illinois is a bellwether state with 50-plus AI bills this session. The AI grading review provision alone could fundamentally change how schools adopt automated assessment tools. If this passes, expect other states to follow rapidly.
Oklahoma's SB 1734 passed the Senate unanimously on March 23, 2026, banning unsupervised AI use in classrooms, prohibiting AI from being the primary basis for grading, discipline, or promotion decisions, and requiring every district to adopt a written AI policy before the 2027-28 school year. Parents must receive annual disclosure about AI use and can opt students out without academic penalty.
Why it matters: This is one of the most comprehensive educator-supervision requirements for AI in schools. The approach is practical: it does not ban AI but requires human oversight at every step. The 2027-28 deadline gives districts about 18 months to prepare.
NYC's first remote learning snow day under Mayor Zohran Mamdani on January 26, 2026, reached 79 percent of students in the nation's largest school system. The city increased server capacity to support one million simultaneous logins, distributed devices in advance, stress-tested systems, and prepped educators for virtual instruction. The result was a dramatic improvement over the system's troubled February 2024 remote pivot.
Why it matters: Remote learning is now standard emergency preparedness, not a pandemic response. The 79 percent participation rate sets a benchmark. Device distribution, system stress testing, and advance communication add up to a successful pivot. Every district should have this playbook.
Try This Week
Ask your technology director one question this week: "Which of our edtech vendors have disclosed whether their products use generative AI or machine learning, and what student data those systems can access?" Idaho, Oklahoma, Vermont, and Illinois are all moving toward requiring this transparency by law. You do not have to wait for legislation to start asking. Write down what you learn and share it with your team. The districts that ask these questions now will be ahead when the mandates arrive.
Until next time,
Dr. Janette Camacho
CEO, iTeachAI Academy
Free AI courses at classes.iteachai.co
17 free AI tools at iteachai.co/TeacherTools
Know a teacher who needs this? Forward this email.
Subscribe free at iteachaibot.com