Elevated
Israel Escalates Lebanon Ceasefire Breaches; Sixth UN Peacekeeper Killed
  • Israeli strikes on April 25 killed four people in Lebanon's Nabatieh district (Yohmor al-Shaqif), with Israeli forces reportedly demolishing buildings as far north as Bint Jbeil and Khiam — all within an ostensible ceasefire.
  • Indonesian Corporal Rico Pramudia became the sixth UNIFIL peacekeeper killed, dying in a Beirut hospital from wounds suffered in a March 29 projectile attack on his base. A preliminary UN investigation found he was killed by an Israeli tank shell.
  • Lebanon's Ministry of Public Health reports 2,496 killed and 7,719 wounded since March 2. UN experts have condemned the pattern as "a blatant violation of the UN Charter." The ceasefire — now extended for three more weeks — is holding on paper but not in practice.
  • No direct threat to Jordan, but the Lebanese front is escalating and UN personnel are now being killed. Monitor closely for further expansion of the conflict zone.
Monitor
Iran–US Peace Talks Collapse Again: Trump Cancels Pakistan Trip
  • Trump called off the planned Islamabad trip by special envoy Steve Witkoff and Jared Kushner on April 25 — the second cancellation this week (VP Vance's trip was also cancelled earlier). Trump cited "tremendous infighting and confusion" in Iran's leadership and "too much time wasted on traveling."
  • Iran FM Abbas Araghchi had already left Islamabad after presenting a framework to Pakistani mediators, saying Tehran had "yet to see if the US is truly serious about diplomacy." The US and Iran remain deadlocked over Iran's nuclear programme and the Strait of Hormuz blockade — through which roughly 20% of the world's oil supply passes.
  • Context: The war began in February when US and Israeli forces commenced strikes on Iran. The first formal peace talks in Islamabad on April 12 collapsed after 21 hours; the US demanded Iran relinquish nuclear capability, while Iran sought $6B in frozen assets and guarantees against resumed bombing. A tentative ceasefire was extended to allow talks to continue.
  • The ceasefire is still holding as of early April 26. No immediate threat to Jordan, but the situation remains volatile. Strait of Hormuz disruption has downstream economic effects for the region.

Stanford's AI Index Exposes a K-12 Policy Crisis: Four in Five Students Use AI, but Only 6% of Teachers Report Clear Policies

Dan Fitzpatrick (Forbes) · Stanford HAI Report (primary)

Stanford's Institute for Human-Centered AI released its ninth annual AI Index Report on April 13, and while the headlines fixated on benchmark battles and $581 billion in global AI investment, the most consequential findings for educators are buried in chapter seven. Between 50% and 84% of US high school and college students now use AI for schoolwork — with some schools reporting 100% adoption. Yet only half of middle and high schools have any AI policy at all, and a mere 6% of teachers describe their school's AI policies as clear.

Students most commonly use generative AI for research, essay editing, and brainstorming — three activities at the heart of how schools have traditionally measured learning. Forty-seven percent of students said they have wanted to use AI for schoolwork but were unsure whether it was allowed, suggesting that ambiguity itself is shaping behaviour. Among teachers, 81% of CS educators agree AI should be included in foundational learning, yet fewer than half feel equipped to teach it. Only four US states include AI within their computer science standards.

The global picture adds urgency: China and the UAE both mandated AI education starting with the 2025–26 school year. The report, produced with the Kapor Foundation, CSTA, and ECEP Alliance, is one of the few comprehensive AI datasets not produced by organisations with a financial stake in AI adoption. Dan Fitzpatrick notes that one Manhattan school surveyed its middle and high school students and found 100% were using AI — making the "policy gap" not a future problem but a present failure.

Why it matters for you: This is the most important story in today's briefing. The 80%-using / 6%-clear-policy gap is exactly the space your 4Ds AI Fluency Framework is designed to address. The Stanford data gives you a peer-reviewed, independent evidence base for making the case to school leadership that a structured AI literacy curriculum isn't optional. The fact that 47% of students are uncertain whether AI use is even allowed suggests that Discernment and Delegation need to be built into policy, not just classroom practice. The China/UAE mandates pose a competitive equity question worth raising at the leadership level.

Discord Sleuths Gained Unauthorized Access to Anthropic's Mythos — the AI 'Too Dangerous to Release'

WIRED (security roundup) · Anthropic blog (Mythos Preview) · TechCrunch (breach report)

Anthropic announced its Mythos Preview model on April 7 with an unusual caveat: it is "too dangerous to release" publicly. The model can autonomously discover zero-day vulnerabilities across major operating systems — including a 27-year-old OpenBSD TCP flaw and a 16-year-old FFmpeg bug — and generated 181 working Firefox exploits versus just two for the previous Opus 4.6 model. Anthropic launched "Project Glasswing" to give controlled access to just 40 technology and critical infrastructure organisations, including Amazon, Google, Microsoft, Apple, and Cisco.

Within hours of the April 7 announcement, a Discord group focused on unreleased AI models gained unauthorized access — not through sophisticated hacking, but by guessing the model's API endpoint from Anthropic's past URL patterns and authenticating with credentials obtained from a third-party contractor. Anthropic confirmed it is investigating but states "there is no evidence that Anthropic's systems are impacted." The group has reportedly continued to access the model. In a concrete demonstration of Mythos's defensive value, Mozilla used early access to find and fix 271 vulnerabilities in its new Firefox 150 browser release.

Security researchers are divided on the magnitude of the threat. Some argue the capabilities are "more of the same" from AI-assisted vulnerability research; others, including the former US National Cyber Director, warn that Mythos can "hack nearly anything" and critical infrastructure operators aren't ready. An independent test by AISLE found that smaller, cheaper models recovered much of the same vulnerability analysis — raising questions about whether the controlled-release strategy is sustainable once the underlying techniques proliferate.

Why it matters for you: Mythos is the clearest current example of dual-use AI — the same model that finds security holes to fix them can be turned to exploitation. For the Discernment and Diligence dimensions of the 4Ds framework: students (and staff) need to understand that frontier AI capabilities often leak before institutions are ready. The Discord breach also illustrates that "controlled release" is only as strong as its weakest vendor link — a lesson directly applicable to school EdTech supply chains.

Anthropic's Project Deal: AI Agents Struck $4,000 in Real Deals Among Employees — and Users Didn't Notice Quality Gaps

TechCrunch

Anthropic ran a previously undisclosed internal experiment called Project Deal: 69 employees were given $100 each (paid as gift cards) and their AI agents represented them as both buyers and sellers in a marketplace. The result: 186 real deals totaling more than $4,000 in actual exchanged goods and services. The company ran four parallel marketplaces with different model tiers to study agent behaviour under varying capability levels.

The most striking finding was not the deal volume but the quality asymmetry. When users were represented by more capable models, they achieved "objectively better outcomes" — but crucially, users on the losing side of the capability gap "might not realise they're worse off." Anthropic flags this as a risk of emergent "'agent quality' gaps" in real-world commerce: in a world where agents negotiate on your behalf, the sophistication of your AI may determine the deals you get, invisibly.

The experiment also found that initial instructions given to agents did not significantly affect sale likelihood or final prices — suggesting agents are adapting strategies in ways their human principals didn't explicitly design. Project Deal is described as a pilot with a "self-selected participant pool," but the findings are Anthropic's first published evidence of agent-on-agent economic behaviour in a real (if small) marketplace context.

Why it matters for you: This experiment is a concrete preview of what AI-mediated negotiation looks like. For AI literacy education: the "agent quality gap" is a new form of information asymmetry — students and adults will increasingly encounter situations where the AI representing them matters as much as their own intentions. This maps directly to Delegation (who does the negotiating for you?) and Discernment (are you getting a fair deal?).

OpenAI CEO Apologizes to Tumbler Ridge: OpenAI Flagged Shooter's Account but Didn't Alert Police

TechCrunch · Engadget

Two months after a mass shooting in Tumbler Ridge, British Columbia — in which alleged 18-year-old shooter Jesse Van Rootselaar killed eight people — OpenAI CEO Sam Altman has formally apologized for the company's failure to alert law enforcement. OpenAI had banned Van Rootselaar's ChatGPT account in June 2025 after her account described scenarios involving gun violence, but staff internally debated whether to report to police and ultimately did not, only reaching out to Canadian authorities after the shooting.

In a letter published in the local newspaper Tumbler RidgeLines, Altman wrote: "I am deeply sorry that we did not alert law enforcement to the account that was banned in June." He indicated he had spoken with the town's mayor and the BC Premier before issuing the apology, and confirmed OpenAI is now implementing new safety protocols: more flexible criteria for when accounts get referred to authorities and direct contact channels with Canadian law enforcement. OpenAI's VP of Global Policy Ann O'Leary had previously said the company would notify authorities if it finds "imminent and credible" threats in ChatGPT conversations.

BC Premier David Eby called the apology "necessary, and yet grossly insufficient for the devastation done to the families of Tumbler Ridge." The case has intensified the policy debate about what legal or ethical obligation AI companies bear to proactively share evidence of impending violence — a question that remains unresolved in most jurisdictions.

Why it matters for you: This case will be discussed in school AI policy conversations. It raises a directly relevant question: if students express concerning thoughts in an AI chatbot interaction, what are the obligations of the AI platform? What should school AI policies say about student privacy versus safeguarding? This sits squarely at the intersection of Diligence (responsible use) and the school's duty of care. The Tumbler Ridge case is now a concrete reference point for why school AI use policies need explicit safeguarding clauses.

Gaza's Deir el-Balah Holds First Elections in 20 Years — in Fibreglass Tents, with Hamas Excluded

Al Jazeera (features) · BBC News · Al Jazeera (news) · Palestinian Elections Commission (primary)

For the first time since 2006, Palestinians in Gaza cast ballots on April 25 — though only in the city of Deir el-Balah in central Gaza, the area least damaged in the war. The Palestinian Authority held simultaneous local elections across the occupied West Bank. The Gaza vote was largely symbolic in scale: only about 70,000 people — less than 5% of Gaza's total population — were eligible to participate, and polling took place in fibreglass tents on open land because most school buildings were destroyed in Israeli strikes.

Hamas was formally excluded from participating under PLO requirements that candidates recognise Israel and support a two-state solution. Four candidate lists comprising 60 candidates contested the municipal council. By early afternoon, turnout had reached 24.53%, with voter sentiment described by Al Jazeera as a mix of historic joy and practical hunger for change. "I am very happy today, because this is a truly Palestinian democratic celebration," said Deir el-Balah resident Salama Badwan, whose daughter voted for the first time. "Change must be in the hands of the people."

Analysts view the election as a test of Palestinian Authority legitimacy and a signal of whether democratic institutions can be rebuilt in the rubble of a ceasefire. The vote occurred against a backdrop of the stalled ceasefire process and ongoing negotiations over Gaza's political future. Final results had not been announced by publication time.

Why it matters for you: As an educator in Jordan — home to a large Palestinian population — this is a significant moment with strong community resonance. The election signals a shift in Palestinian political dynamics (post-Hamas era governance) that will be discussed among students, staff, and parents. Worth understanding for school community context, particularly given Jordan's proximity to and relationship with the Palestinian territories.

Cohere Acquires Aleph Alpha in €500M 'Sovereign AI' Deal Valuing Combined Entity at ~$20 Billion

TechCrunch

Canadian AI startup Cohere is acquiring Germany-based Aleph Alpha in a deal backed by German retail giant Schwarz Group (parent of Lidl and Kaufland) with €500 million in structured financing. The combined entity is being valued at approximately $20 billion — a significant leap given Cohere's $240M annual recurring revenue in 2025 and Aleph Alpha's minimal revenue and substantial losses. Schwarz Group will also require the new entity to run on STACKIT, its sovereign cloud platform, giving it a major enterprise customer as part of the deal.

The strategic rationale is "sovereign AI": building a European-Canadian alternative that allows enterprises and governments to keep their data outside US tech giants' infrastructure. Both companies position themselves as privacy-respecting LLM providers for regulated industries in Europe. The deal — the largest European AI M&A to date — requires approval from regulators and shareholders and is backed by both the Canadian and German governments as a vote of confidence in non-US AI capability development.

Why it matters for you: The "sovereign AI" concept is directly relevant to international schools making procurement decisions about AI tools for student data. European data governance frameworks (GDPR) increasingly influence international school policies, and Cohere/Aleph Alpha explicitly targets this market. Worth tracking if your school is evaluating enterprise AI tools with privacy compliance requirements.

Meta Installs Keystroke and Mouse-Tracking Software on Employees' Computers — to Train Its AI

New York Magazine (Intelligencer)

Meta has installed new tracking software on US-based employees' computers to capture mouse movements, clicks, and keystrokes — with the stated purpose of training AI models to perform work tasks autonomously. The move comes alongside a new round of layoffs that Meta described as part of "continued efforts to run the company more efficiently." Meta has spent more than $70 billion on AI development so far, with plans to spend more; the layoffs are partly freeing up capital for these investments.

New York Magazine's analysis frames this as a three-stage pattern across the tech sector: first, companies hire employees; then AI is trained on those employees' data and workflows; finally, the employees are made redundant. Oracle is laying off 30,000 people, Block cut its workforce nearly in half, Snap cut 15%, and Microsoft launched its first voluntary buyouts targeting longtime employees. Anthropic CEO Dario Amodei's recent warnings about mass AI-driven displacement are resonating in an industry where "the people who are left" now find themselves both keeping the lights on and generating the training data for their own replacements.

Why it matters for you: The pattern here — train AI on human labour, then automate the labour — is a concrete example of what the Gen Z entrepreneurship story below describes from the other side. For AI literacy curriculum: this is the "Diligence" dimension made visible. Understanding how AI systems are actually trained, and what data workers (and students) contribute to AI systems, is foundational literacy for a generation entering this labour market.

DJI's Lito Drones Won't Come to the US — American Buyers Get 30%+ Discounts on Neo, Mini 4K, and Avata 2

DroneDJ

DJI launched its new Lito budget drone series internationally this week, offering 4K cameras, obstacle sensing, and palm-launch simplicity at entry-level prices — but not in the United States, where ongoing FCC-related regulatory uncertainty continues to block new DJI product launches. The upside for US buyers: existing DJI models are now seeing discounts of 30% or more as inventory pressure builds. The DJI Neo — a stabilised 4K drone with AI subject tracking, gesture control, and phone/voice launch — is now at $149, with a three-battery travel combo at $219. The Mini 4K sits at $209, and the Avata 2 FPV drone has also seen price drops.

DroneDJ notes the Neo is particularly notable at its price: compact enough for a carry-on, operable without a traditional controller, and capable of QuickShots and autonomous follow modes. For casual aerial photographers and travellers, the combination of DJI's quality manufacturing and these discount prices makes this one of the better windows for drone buyers in recent years — even if the newest hardware is off the table.

Why it matters for you: If you've been tracking DJI hardware for photography or outdoor adventure, this is a good buying window — particularly for the Neo's travel-friendly form factor at $149. The Lito series being region-locked to non-US markets highlights DJI's ongoing regulatory navigation challenges, which will continue to shape the consumer drone landscape for the next 12–18 months.

Gen Z Turns to Entrepreneurship as AI Erases Entry-Level Jobs: 'I Have to Prove Myself'

The Guardian

US unemployment among 22–27-year-olds is at its highest level since the pandemic. Entry-level jobs — the traditional first rung of the career ladder — are the most AI-vulnerable roles, and many Gen Z graduates are finding that the bottom rungs have already been pulled away. Hiring has slumped to its lowest rate since 2020, according to BLS data, and marketing, writing, data entry, and similar early-career roles are the first to be automated.

The Guardian's interactive feature profiles graduates like Ashley Terrell, who graduated with a business degree from the University of Hawaii in 2024, applied for jobs every day for months, and was only offered a role in the power tools section of Home Depot. Rather than accept this, Terrell built her own marketing portfolio by directly pitching companies on social media and eventually secured brand clients. This pattern — entrepreneurship as survival, not aspiration — is emerging across the cohort. The piece notes that the traditional "do your degree, get a job, build skills, get promoted" pipeline has broken down in ways that are structural, not cyclical.

Why it matters for you: Your students are two to five years away from this labour market. The Guardian piece provides concrete evidence — not just projections — that the bottom of the career ladder is already gone for many fields. For AI literacy education: the 4Ds framework's Delegation pillar (deciding what to delegate to AI vs. keep as human skill) becomes a career strategy question for students, not just a classroom one. This is useful source material for discussions about future-readiness and the purpose of education in an AI economy.
1 Why Silicon Valley Is Turning to the Catholic Church for AI Ethics — The Minerva Dialogues, annual Vatican-tech meetings held since 2016, are becoming a meaningful forum for AI ethics; Reid Hoffman, prominent VCs, and tech CEOs attend. The Atlantic
2 UK Steps Up Shortage Planning as Iran War Threatens Supply Chains — Officials monitoring stock levels and planning for disruption if the Iran conflict affects oil and goods flows. BBC News
3 Trump Evacuated from White House Correspondents' Dinner After Loud Bangs Heard — Secret Service rushed the President from the Washington Hilton late Friday; incident subsequently reported as fireworks from an outdoor display. Axios
4 Al-Qaeda-Linked JNIM Launches Coordinated Attacks Across Mali, Including Capital — Militant group claims to have seized two cities and struck the Bamako airport. NYT · Guardian
5 Orbán Steps Down from Hungarian Parliament After Landslide Defeat — Outgoing PM will not take up his seat after leading his party to a crushing loss. BBC News
6 Why Colleges Are Going Out of Business — Hampshire College is closing; it's the latest in a widening enrollment crisis reshaping US higher education. Vox
7 Maine Governor Vetoes First Statewide Data Center Moratorium — L.D. 307 would have paused new data center builds; energy and land use concerns drive the debate. TechCrunch
8 Engadget Reviews DJI Osmo Pocket 4 — The latest pocket gimbal camera in this week's review roundup. Engadget
9 Chernobyl, 40 Years On: BBC Visits Ghost City Pripyat — The anniversary of the April 26, 1986 disaster; Pripyat remains abandoned. BBC News
10 AI in Elections: Who Sets the Rules? — DW News short examines regulatory frameworks around AI use in electoral processes. DW News