A new security vulnerability affects AI agents that browse the web, read emails, or access databases—malicious hidden instructions embedded in webpage footers, email signatures, and documents can hijack agent behavior without detection. Researchers have developed Arc Gate, a proxy-level security tool that enforces instruction-authority boundaries by treating untrusted content sources as data-only, preventing them from issuing directives to language models.
A jury is set to deliberate on Elon Musk's allegations that Sam Altman and OpenAI 'stole' the charity, following a high-profile trial featuring testimony from Microsoft CEO Satya Nadella and extensive evidence of private communications between the two tech executives. The case has provided rare public insight into OpenAI's internal history and exposed the competitive tensions between two of Silicon Valley's most prominent figures, with both Musk and Altman facing aggressive cross-examination that questioned their credibility.
New data indicates that jobs with significant AI exposure in the United States are starting to be eliminated as artificial intelligence capabilities expand across industries. The findings suggest that concerns about AI-driven job displacement, once largely theoretical, are now materializing in measurable employment trends.
ArXiv, the prominent preprint repository for scientific research, has announced stricter enforcement policies against authors who use large language models to generate entire papers without meaningful human contribution. The policy carries penalties including one-year bans for violators, marking an escalation in the platform's efforts to maintain research integrity amid growing AI adoption in academia.
The Vatican has created a dedicated artificial intelligence study group to inform Pope Francis's forthcoming encyclical on AI and its ethical implications. The initiative reflects the Church's effort to engage substantively with technology policy and position itself as a moral voice in the global AI governance debate.
The Washington Post examines allegations from former colleagues questioning the trustworthiness of a prominent figure leading the AI boom, though specific details are limited in this headline-only excerpt. The investigation suggests potential credibility concerns among those who have worked closely with this industry leader.
Despite the ongoing AI boom, sentiment within the tech industry has shifted negative, reflecting growing concerns about unequal access to AI resources and capabilities. The disparity between well-funded AI leaders and struggling startups is creating a two-tiered market that threatens broader innovation and competition.
A New York Times book excerpt by Josh Tyrangiel details how OpenAI and Khan Academy partnered to develop a chatbot for educational purposes. The collaboration represents a significant effort to apply large language model technology to personalized learning at scale.
Researchers have discovered a clathrate crystal lattice structure in nuclear detonation fallout, marking the first documented instance of this type of crystalline formation in such extreme conditions. The finding could provide new insights into how materials behave under extreme heat and pressure.
Medical AI systems are producing hallucinations—fabricating nonexistent health issues—during live patient appointments, raising critical concerns about diagnostic accuracy and patient safety. The problem represents a significant limitation of current large language models deployed in clinical environments where accuracy is paramount.
ArXiv, the influential preprint repository for academic research, will ban authors for a year if their papers contain "incontrovertible evidence" of unchecked AI-generated content, such as hallucinated references or LLM meta-comments, according to computer science section chair Thomas Dietterich. Going forward, researchers banned from ArXiv will need to have future submissions accepted at reputable peer-reviewed venues before resubmitting to the platform.