
    From Mike Powell@1:2320/105 to All on Wednesday, January 08, 2025 10:33:00
    Apple's AI headlines are more of a break from reality than breaking news

    Date:
    Wed, 08 Jan 2025 10:07:21 +0000

    Description:
    Apple Intelligence fails to accurately summarize news stories.

    FULL STORY

    Apple's much-hyped Apple Intelligence is facing a crisis of trust after
    several attempts at summarizing news headlines produced inaccurate and sometimes bizarre results. The feature has an understandable appeal for
    iPhone owners as Apple tries to condense notifications into digestible snippets. But instead of accurately summarizing, the AI occasionally indulges in creative writing.

    It's gotten bad enough that major news organizations are complaining about
    how the headlines mislead readers, asking Apple to fix or remove the tool before it further embarrasses them. There have been a few particularly prominent examples since Apple debuted the feature.

    In December, Apple Intelligence wrote a headline for a BBC story about Luigi Mangione, the accused killer of UnitedHealthcare CEO Brian Thompson, claiming Mangione had shot himself. That detail was entirely invented by the
    algorithm. The broadcaster wasn't thrilled about being blamed for something
    it didn't write.

    Similarly, a New York Times story did not claim Israel's Prime Minister Benjamin Netanyahu was arrested, regardless of Apple's AI headline. Apple
    only responded this week in a statement:

    "Apple Intelligence features are in beta and we are continuously making improvements with the help of user feedback," the company said in the statement. "A software update in the coming weeks will further clarify when
    the text being displayed is summarization provided by Apple Intelligence. We encourage users to report a concern if they view an unexpected notification summary."

    When TechRadar reached out for comment, Apple said it had nothing to add to
    the statement. And while it's good that Apple has plans to fix the issue, it does feel a little like putting a Wet Paint sign on a wall after you already have a red stripe down the back of your shirt.

    Headlining AI

    Errors are endemic to generative AI; the hallucinations appear no matter what model you use, which can make the tools built by Apple, Google, or OpenAI unpredictable. These systems are trained to process and summarize
    information, but they're not immune to confusion.

    Google faced a similar backlash last year when its AI Overviews, summaries shared on top of search results, delivered some questionable facts. One could argue that errors like these are just growing pains, but when it comes to
    news, mistakes aren't easily forgiven or forgotten.

    News brands rely on people trusting their reporting, so this isn't as simple
    as chalking errors up to bad summaries. A wild claim unsupported by facts and attributed to a supposedly professional newsroom can make people unfairly distrustful of that news source. The last thing journalists and the public
    need is AI inaccuracies messing with headlines.

    Besides rolling out that promised update, Apple will likely have plenty of fine-tuning to do for the AI headlines. That might mean stricter guardrails
    for the AI, or maybe a more prominent warning that the headlines are AI-generated. If Apple can't fix this, Apple Intelligence may have to be renamed Apple Imagination.

    ======================================================================
    Link to news story: https://www.techradar.com/computing/artificial-intelligence/apples-ai-headlines-are-more-of-a-break-from-reality-than-breaking-news

    $$
    --- SBBSecho 3.20-Linux
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
  • From Aaron Thomas@1:342/202 to Mike Powell on Wednesday, January 08, 2025 11:26:16
    In December, Apple Intelligence wrote a headline for a BBC story about Luigi Mangione, the accused killer of UnitedHealthcare CEO Brian
    Thompson, claiming Mangione had shot himself. That detail was entirely invented by the algorithm. The broadcaster wasn't thrilled about being blamed for something it didn't write.

    What I get from this is: when humans share misinformation, they get punished for it. But when AI shares misinformation, the company can just blame it on "a software glitch" and instead of being punished they can just say "we're going to address this in the next system update."

    --- Mystic BBS v1.12 A48 (Linux/64)
    * Origin: JoesBBS.com, Telnet:23 SSH:22 HTTP:80 (1:342/202)