• OpenAI admits new models

    From Mike Powell@1:2320/105 to All on Friday, December 12, 2025 09:50:30
    OpenAI admits new models likely to pose 'high' cybersecurity risk

    Date:
    Thu, 11 Dec 2025 20:20:00 +0000

    Description:
    Better models also mean higher risk, but there are mitigations.

    FULL STORY

    Future OpenAI large language models (LLMs) could pose higher cybersecurity
    risks because, in theory, they could develop working zero-day remote
    exploits against well-defended systems, or meaningfully assist with complex
    and stealthy cyber-espionage campaigns.

    This is according to OpenAI itself, which said in a recent blog post that the cyber capabilities of its AI models are advancing rapidly.

    While this might sound sinister, OpenAI views the trend from a positive perspective, saying the advancements also bring meaningful benefits for cyberdefense.

    Crashing the browser

    To prepare for future models that might be abused this way, OpenAI said it
    is investing in strengthening models for defensive cybersecurity tasks and
    in creating tools that help defenders more easily perform workflows such as
    auditing code and patching vulnerabilities.

    According to the blog, the best approach is a combination of access controls, infrastructure hardening, egress controls, and monitoring.

    Furthermore, OpenAI announced that it will soon introduce a tiered program giving users and customers working on cybersecurity tasks access to improved capabilities.

    Finally, the Microsoft-backed AI giant said it plans to establish an
    advisory group called the Frontier Risk Council. The group will consist of
    seasoned cybersecurity experts and practitioners and, after an initial
    focus on cybersecurity, will expand its scope to other areas.

    "Members will advise on the boundary between useful, responsible capability
    and potential misuse, and these learnings will directly inform our
    evaluations and safeguards. We will share more on the council soon," the
    blog reads.

    OpenAI also said that cyber misuse is a risk with any frontier model in the industry, which is why it is part of the Frontier Model Forum, where it shares knowledge and best practices with industry partners.

    In this context, threat modeling helps mitigate risk by identifying how AI capabilities could be weaponized, where critical bottlenecks exist for different threat actors, and how frontier models might provide meaningful uplift.

    Via Reuters

    ======================================================================
    Link to news story: https://www.techradar.com/pro/security/openai-admits-new-models-likely-to-pose-high-cybersecurity-risk

    $$
    --- SBBSecho 3.28-Linux
    * Origin: Capitol City Online (1:2320/105)