Running a business today can be likened to trying to navigate a digital minefield. You know there are dangers all around, but knowing where they are and how to avoid them is rarely an easy task.
In mid-September 2025, state-sponsored cyber actors from China exploited Anthropic’s AI technology, specifically Claude Code, to orchestrate automated attacks on roughly 30 high-value global targets, including tech firms, financial institutions, chemical manufacturers, and government agencies.
The Monetary Authority of Singapore (MAS) has released new AI Risk Management Guidelines, placing responsibility on bank board members and senior management to oversee risks arising from AI deployment.
A new report from the International Organization of Securities Commissions (IOSCO) has underscored the complex challenges facing the growing tokenization market, citing uneven efficiency gains, regulatory inconsistencies, and legal uncertainty.
Cybercriminals have allegedly targeted almost 30 organizations in a coordinated campaign exploiting Oracle’s E-Business Suite (EBS) enterprise resource planning software. The operation, which began in late September, involved extortion emails sent to senior executives and is believed to be the work of the financially motivated threat group known as FIN11.
German regulators have imposed an administrative fine on J.P. Morgan SE for deficiencies in its anti–money laundering (AML) controls. Authorities found that the bank had culpably breached supervisory obligations related to its internal procedures for filing suspicious transaction reports (STRs).
The European Commission has launched a formal investigation into Deutsche Börse and Nasdaq following unannounced inspections at their offices in September 2024.
Cybersecurity researchers at Tenable have uncovered seven vulnerabilities in OpenAI’s ChatGPT, specifically affecting its GPT-4o and GPT-5 models. These flaws could allow attackers to steal personal data from users’ memories and chat histories without their knowledge. OpenAI has since patched several of the issues, which were found to make the chatbot susceptible to indirect prompt injection attacks—a manipulation technique that tricks large language models into executing hidden or malicious commands.