Responsible AI has a burnout problem

(MIT Technology Review) Rumman Chowdhury, who leads Twitter’s Machine Learning Ethics, Transparency, and Accountability team and is another pioneer in applied AI ethics, faced that problem in a previous role. 

“I burned out really hard at one point. And [the situation] just kind of felt hopeless,” she says. 

All the practitioners MIT Technology Review interviewed spoke enthusiastically about their work: it is fueled by passion, a sense of urgency, and the satisfaction of building solutions for real problems. But that sense of mission can be overwhelming without the right support.


Ethical AI Team Says Bias Bounties Can More Quickly Expose Algorithmic Flaws

(Singularity Hub) Bias in AI systems is proving to be a major stumbling block in efforts to more broadly integrate the technology into our society. A new initiative that will reward researchers for finding any prejudices in AI systems could help solve the problem.

The effort is modeled on the bug bounties that software companies pay to cybersecurity experts who alert them to potential security flaws in their products. The idea isn’t a new one; “bias bounties” were first proposed by AI researcher and entrepreneur JB Rubinovitz back in 2018, and various organizations have already run such challenges.


When Your Layoff Has a Hashtag

(The New York Times) There were few scenes of workers packing up their cubicles, shoving plaques in boxes and commiserating over beers. Instead, there were tweets.

Hung Truong, an engineer, watched his Twitter feed fill with layoff-related posts this past week. Friends and former colleagues were all out of work. Mr. Truong, 39, had been there. At the start of the pandemic, he’d lost his job at Lyft. And he recalled the strange relief of posting that on Twitter, knowing he wouldn’t have to give the painful update to followers one by one.


Twitter employees are venting on social media and in private forums about Elon Musk's agreement to buy Twitter

(Business Insider) "Living the plot of succession is fucking exhausting," tweeted Rumman Chowdhury, the director of Twitter's ML Ethics, Transparency, and Accountability team.

She added that the company did not address the developments when they first emerged: "I am sitting on 2023 company wide strategy readouts and I guess we are going to collectively ignore what's going on."


How to survive as an AI ethicist

(MIT Technology Review) It’s never been more important for companies to ensure that their AI systems function safely, especially as new laws to hold them accountable kick in. The responsible AI teams they set up to do that are supposed to be a priority, but investment in them is still lagging behind.

People working in the field suffer as a result, as I found in my latest piece. Organizations place huge pressure on individuals to fix big, systemic problems without proper support, while they often face a near-constant barrage of aggressive criticism online. 


Twitter Worker Who Pointed Out Right-Wing Bias on Platform Fired by Musk

(Newsweek) Chowdhury was part of the team who wrote a damning report in October 2021 that revealed Twitter's algorithm—which dictates what users see in their Home timeline, unless they choose to see the most-recent tweets in reverse chronological order—favored right-wing content over left-wing posts.

"In six out of seven countries—all but Germany—tweets posted by accounts from the political right receive more algorithmic amplification than the political left when studied as a group," the report reads.

On top of this, Chowdhury's analysis also found that: "Right-leaning news outlets, as defined by the independent organizations listed above [AllSides and Ad Fontes Media], see greater algorithmic amplification on Twitter compared to left-leaning news outlets."


Do AI systems need to come with safety warnings?

(MIT Technology Review) Considering how powerful AI systems are, and the roles they increasingly play in helping to make high-stakes decisions about our lives, homes, and societies, they receive surprisingly little formal scrutiny. 

That’s starting to change, thanks to the blossoming field of AI audits. When they work well, these audits allow us to reliably check how well a system is working and figure out how to mitigate any possible bias or harm. 


A bias bounty for AI will help to catch unfair algorithms faster

The hope is that it will help boost a blossoming sector that works to hold artificial intelligence systems accountable.


When AI Ethics Gets Ugly

(Protocol) Hello, and welcome to Protocol Enterprise! Today: how an AI ethics startup became embroiled in an ownership and control dispute, why zero-trust security is both important and overhyped, and a proposed law that would ban the sale of smartphone location data.


People are sharing shocking responses from the new AI-powered Bing

(Business Insider) Rumman Chowdhury, a data scientist and former AI lead at Accenture, asked Bing questions about herself — and she said it responded with comments about her appearance, according to screenshots she posted on Twitter.