A new investigation reveals that the most popular news app in the U.S. published over three dozen inaccurate, AI-lifted, or AI-bylined stories in the past three years, with real-world consequences.
NewsBreak, the most popular news app in the U.S., advertises itself as a local news source. It tops the Google Play store, with over 50 million downloads, and dominates the Apple App Store news charts, outperformed only by X and Reddit.
The app operates only in the U.S. and works as an aggregator, pooling news from different outlets, like Fox, Reuters, and CNN, onto one platform.
A Wednesday Reuters report found that NewsBreak used AI at least 40 times since 2021 to publish inaccurate stories, post stories from other sources under fake bylines, and take content from competitors.
For example, two AI-based stories on NewsBreak incorrectly stated that Pennsylvania-based charity Harvest912 was hosting a 24-hour health clinic for the homeless.
“You are doing HARM by publishing this misinformation – homeless people will walk to these venues to attend a clinic that is not happening,” Harvest912 wrote in a January email to NewsBreak.
Another email to NewsBreak from Colorado-based food bank Food to Power detailed how NewsBreak incorrectly stated when food would be distributed on three separate occasions, in January, February, and March.
The food bank had to explain the issue to people who showed up in response to the NewsBreak articles, and send them home without the food they expected.
NewsBreak told Reuters that it took down the five articles with inaccurate information.
Related: Microsoft Replaced Its News Editors With AI. It's Brought One Disaster After Another
When it comes to AI tools and fake bylines, NewsBreak appears to have used five fake names as bylines for AI-generated repostings of stories from other sites.
Former NewsBreak advisor and former Wall Street Journal executive editor Norm Pearlstine flagged the issue in a May 2022 company memo to NewsBreak CEO Jeff Zheng, writing, “I can't think of a faster way to destroy the NewsBreak brand.”
Zheng responded to the memo, acknowledging the problem and asking the team to fix it.
Related: OpenAI Can Now Access Financial Times Articles to Train AI
NewsBreak isn't the only news outlet facing scrutiny over AI content. Bloomberg reported earlier this month that local San Francisco newspaper Hoodline was relying on AI to churn out stories, and at one point attributed those stories to unique AI personas complete with their own bios.
AI has also been known to generate inaccurate content. News outlet CNET used AI to write over 70 articles last year and had to issue corrections for many due to factual errors.
Meanwhile, last week, Google announced “more than a dozen technical improvements” after users found that AI Overviews in its search engine gave some inaccurate answers.
Related: Google's AI Overviews Are Already Getting Major Things Wrong