Meta “Responsible AI” Team Dissolved Amid Growing Safety Concerns


YEREVAN (CoinChapter.com) — Meta Platforms, Inc., formerly known as Facebook, has announced the restructuring of its “Responsible AI” team. This decision shifts the company’s approach to artificial intelligence. Here are more details.

Meta No Longer Has a “Responsible AI” Team

The Responsible AI team at Meta was previously tasked with understanding and mitigating the potential harms associated with AI technology. The recent restructuring involves integrating members of this team into various divisions across the company. This move, according to Meta, intends to embed AI safety considerations more directly into the development of core products and technologies.

Meta’s decision reflects a broader trend in the tech industry where companies are increasingly focusing on embedding ethical AI practices within their operational frameworks rather than maintaining separate oversight entities. A spokesperson from Meta assured that the company continues to prioritize safe and responsible AI development, despite the dissolution of the centralized team.

However, users may take these assurances with a grain of salt, as the tech giant has had its ethics questioned in court before.

Facebook/Meta Doesn’t Have the Best Track Record in Ethics

Here is a short recap of why consumers might not trust Meta on its AI ethics.

For example, in 2017 and 2018, Meta was accused of concealing the misuse of Facebook users’ data. A shareholders’ class action lawsuit, alleging that the company failed to disclose the extent of the data misuse during this period, was later revived.

The allegations continued in November 2021, when Ohio’s Attorney General sued Meta for allegedly violating federal securities law by deceiving the public, following disclosures by whistleblower Frances Haugen.

Moreover, dozens of US states sued Meta in 2023 over its contribution to the youth mental health crisis. The lawsuit alleges that the company knowingly designed features on Instagram and Facebook to addict children to its platforms.

DC Attorney General Brian Schwalb’s tweet about the Meta lawsuit over mental health. Source: X.com

Considering the accusations Meta has faced in the past, the decision to dissolve the “Responsible AI” team could draw further criticism amid growing AI safety concerns.

AI Safety Summit Raises Concerns

In early November 2023, global leaders from 29 countries and the European Union convened at the AI Safety Summit held at Bletchley Park, UK. The summit culminated in the joint signing of the Bletchley Declaration, which endorses clamping down on unregulated AI development.

The declaration emphasizes the need for a science-based understanding of AI safety risks and developing tailored, risk-based policies for each country.

AI safety summit in Bletchley Park. Source: Aibusiness.com

Notably, ahead of the Bletchley Park summit, US President Joe Biden released an executive order on AI development, calling for a “coordinated approach” involving government, the private sector, and academia.

Signatories to the Bletchley Declaration included countries as varied as Australia, France, Germany, Japan, Nigeria, South Korea, Ukraine, Indonesia, and the United Arab Emirates, highlighting the international nature of the AI safety challenge.

Meta’s restructuring came amid heightened AI regulation. However, given the aforementioned accusations and ethical disputes, the company’s assurances of continued commitment to AI safety may not sit well with consumers.
