How they did it: The team asked language models where they stand on various topics, such as feminism and democracy. They used the answers to plot the models on a political compass, then tested whether retraining them on even more politically biased training data changed their behavior and their ability to detect hate speech and misinformation (it did).
Why it matters: As AI language models are rolled out into products and services used by millions of people, understanding their underlying political assumptions couldn’t be more important. That’s because they have the potential to cause real harm. A chatbot offering health-care advice might refuse to offer advice on abortion or contraception, for example. Read the full story.
—Melissa Heikkilä
Read next: AI language models have recently become mixed up in the US culture wars, with some calling for developers to create unbiased, purely fact-based AI chatbots. In her weekly newsletter all about AI, The Algorithm, Melissa delves into why it’s a nice idea, but one that’s technically impossible to build. Read it to find out more, and if you don’t already, sign up to receive it in your inbox every Monday.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 A woman was wrongfully arrested after a false face recognition match
It’s notable that every person we know this has happened to has been Black. (NYT $)
+ The movement to limit face recognition tech could finally get a win. (MIT Technology Review)
2 AI startups are fighting dirty 😈
We’re talking fake names, rivals posing as customers, and even bombing Zoom calls. (NYT $)
+ It’s all starting to look a lot like a bubble. (WP $)
3 A vote in San Francisco could change the future of driverless cars
All eyes are on whether the state board will approve a huge expansion of autonomous taxis on Thursday. (NBC)
+ Big tech companies are struggling to win over local residents and public officials. (WSJ $)
4 Is Texas’ electricity grid going to be able to handle electric vehicles?
There are reasons to be optimistic, not just for that state but for the US as a whole. (The Atlantic $)
5 Criminals are enthusiastic early adopters of AI tools
On the dark web, they claim to have created two large language models that can assist with illegal activities. (Wired $)
+ Criminals are also using AI-generated books to scam people. (NYT $)
+ We’re hurtling toward a glitchy, spammy, scammy, AI-powered internet. (MIT Technology Review)
6 The era of plentiful cheap stuff may be coming to an end
Maybe that’s not an entirely bad thing, frankly, for the sake of the planet. (WSJ $)
7 People are keen to recreate Black Twitter elsewhere
There’s been a huge exodus from the site. But where should folks go? (WP $)
8 Big cities need to change
To thrive, they need to reinvent themselves to be more than just places where people work. (Vox)
+ What cities need now. (MIT Technology Review)
9 WhatsApp is working on 32-person voice chats
Sounds like pure chaos! (The Verge)
10 Even Zoom is making employees go back into the office
Ironic, perhaps. But not that surprising. (Quartz $)
Quote of the day