
‘Ethics precedes regulation’: Hugging Face’s Margaret Mitchell on why tech needs AI ethicists now

“Ethics precedes regulation,” said Margaret Mitchell, Chief Ethics Scientist at Hugging Face, at the ongoing AI Everything event in Cairo, Egypt. As artificial intelligence (AI) development continues to outpace regulation, Mitchell emphasises the critical function that AI ethicists serve in the technology sector. Even before formal regulations, ethical frameworks provide the necessary guidance for responsible development. 

Mitchell, in a conversation with indianexpress.com, shared that as AI systems are becoming increasingly sophisticated, the challenges they pose have moved far beyond conventional concerns about privacy and data protection. 

“Regulation is tending to lag AI development. That’s where AI ethicists can really come in, in order to break down the pros and cons in terms of different human rights and in terms of different values for the company and for society,” Mitchell explained. The role involves helping organisations navigate complex tradeoffs to “make beneficial technology” while “minimising the possible negative blowback from problematic issues.”

However, when asked whether AI ethics will play a bigger role going forward, Mitchell’s response was cautiously measured: “Oh, I don’t know that yet. I hope so.”

Encryption as a privacy cornerstone

Further into the conversation, on the question of what concrete measures AI developers should implement to protect user privacy, Mitchell advocated strongly for encryption: specifically, encryption that even the company itself cannot access. She pointed to Signal as an exemplary model, praising how the platform has worked with regulators to explain why secure encryption cannot include backdoors.

“There’s no back door just for good guys,” she said, referencing Signal President Meredith Whittaker’s advocacy on the issue. Mitchell argued that without proper encryption, companies remain vulnerable to both internal misuse and external pressure. She cited recent news about Google providing Gmail information to the US government without appropriate subpoenas as evidence of why encryption matters. “It’s not encrypted, so of course they can do that, right? So if there’s appropriate encryption, then it’s not even possible for a company to do that.”
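Mitchell’s point, that properly encrypted data is unreadable even to the service provider, can be illustrated with a toy one-time pad: whoever holds the key can decrypt, and without it the ciphertext reveals nothing. This is a minimal sketch for illustration only, not a description of Signal’s actual protocol:

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """One-time pad: XOR each byte with a key byte. The same call
    both encrypts and decrypts; the key must match the data length."""
    assert len(key) == len(data)
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at noon"
key = secrets.token_bytes(len(message))   # key stays on the user's device

ciphertext = xor_cipher(message, key)     # all the server ever stores
recovered = xor_cipher(ciphertext, key)   # only the key holder can do this
assert recovered == message
```

Without `key`, the ciphertext is statistically indistinguishable from random bytes, which is why there can be no “back door just for good guys”: any mechanism that lets a third party decrypt is simply a second key.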

The problem of algorithmic bias

Despite increased awareness of bias in AI systems, biased systems continue to be deployed. The populations most affected, she explained, are those already facing marginalisation in society. “From the get-go, they’re less represented in the data. That’s sort of part and parcel of being marginalised, is that you have less representation, and then the models are less able to model the kinds of outcomes that people who are marginalised actually need,” Mitchell explained.


The scientist highlighted healthcare as an area where bias has particularly severe consequences, noting that systems “disproportionately fail more for women, for Black women in particular, in the US.” Mitchell also pointed to significant biases affecting Indian populations, which stem from training data dominated by US English speakers.

The problem is compounded by the demographics of who creates online content. “Predominantly people providing content on the Internet in the US are white males between 15 and 30 without kids. And so the content really reflects their viewpoints,” she said. This creates systems that work less effectively for marginalised populations and perpetuate harmful stereotypes.

On the privacy landscape

When it comes to how major technology companies handle privacy, Mitchell presented a nuanced view based on her experience working inside some of the industry’s largest players. Having worked at both Microsoft and Google, she observed significant differences in how companies approach user privacy.

Mitchell pointed to Meta as an example of a company that has “famously flouted a lot of privacy considerations”, noting that for some companies, lawsuits and fines are simply factored into business decisions. “What is the cost of being fined or sued? And that is taken as essentially a sunk cost for the decisions you’re making,” she explained.


In contrast, her experience at Microsoft and Google painted a different picture. “I would say both of them take privacy very, very seriously,” Mitchell said, crediting both regulatory pressure and the need to maintain consumer trust. She noted that differential privacy, a statistical technique to ensure privacy, emerged from research at Microsoft, demonstrating how some large tech companies have contributed fundamental advances to privacy protection.
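The idea behind differential privacy, the technique Mitchell credits to Microsoft research, can be illustrated with the classic randomized-response mechanism: each individual’s answer is randomly flipped, so no single report reveals the truth about that person, yet the aggregate statistic can still be recovered. This is a simplified sketch of the general concept, not any company’s production system:

```python
import math
import random

def randomized_response(truth: bool, epsilon: float) -> bool:
    """Answer truthfully with probability e^eps / (e^eps + 1), else lie.
    Each individual report is epsilon-differentially private."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return truth if random.random() < p_truth else not truth

def estimate_rate(reports, epsilon: float) -> float:
    """Debias the noisy reports to recover the population-level rate."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    observed = sum(reports) / len(reports)
    return (observed - (1 - p)) / (2 * p - 1)

random.seed(0)
true_rate = 0.30  # fraction of the population for whom the answer is "yes"
reports = [randomized_response(random.random() < true_rate, epsilon=1.0)
           for _ in range(100_000)]
# The aggregate estimate is close to 0.30 even though any single
# report may be a deliberate lie.
```

The privacy parameter epsilon controls the trade-off: smaller epsilon means more flipping, stronger individual privacy, and noisier aggregate estimates.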

Apple, Microsoft, and Google, she argues, maintain their market positions partly because “they’re companies that people trust, and in order to build up that trust long term, you have to be able to robustly handle privacy.”

On open-source AI

Hugging Face is a platform that allows its users to share, discover, and collaborate on AI models, datasets, and applications. The company views ‘open-source’ as a fundamental driver for the democratisation of AI. 

While the open-source approach is hailed for its transparency, it also brings exposure to misuse. When asked how to strike a balance between these two realities, the scientist offered a pragmatic stance.


“The thing about ethics is that you’re always unpacking the good and the bad. There is no such thing as a technology that’s only good without other kinds of bad,” she explained. The key, according to Mitchell, is taking a holistic view that considers long-term impacts and alignment with ethical values.

The executive shared that at Hugging Face, she has worked on implementing ‘gating’ mechanisms that offer a middle ground between fully open and closed models. “People can’t just openly access a model. They have to register, they have to provide the reason, and then you actually have accountability for how they might use it,” she explained.
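The gating idea Mitchell describes (register, state a reason, create an accountability trail before download) can be sketched in a few lines. This is a hypothetical illustration of the concept, not Hugging Face’s actual implementation, and all class and method names here are invented:

```python
class GatedModel:
    """Hypothetical sketch of a gated-release mechanism: access requires
    registration with a stated reason, which is recorded for accountability."""

    def __init__(self, name: str):
        self.name = name
        self._approved = {}  # user -> stated reason, i.e. the audit trail

    def request_access(self, user: str, reason: str) -> None:
        if not reason.strip():
            raise ValueError("A reason for access is required")
        self._approved[user] = reason  # logged so use can be traced back

    def download(self, user: str) -> str:
        if user not in self._approved:
            raise PermissionError(f"{user} has not registered for {self.name}")
        return f"weights-of-{self.name}"  # stand-in for the model files


model = GatedModel("demo-llm")
model.request_access("alice", "academic research on bias")
weights = model.download("alice")  # succeeds: alice is on record
# model.download("bob") would raise PermissionError: no registration
```

The design keeps the model broadly available (anyone can register) while attaching an identity and stated purpose to every download, which is the accountability Mitchell highlights.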

This approach reflects Mitchell’s broader philosophy. “Part of threading the needle is just thinking about the overall landscape of pros and cons and then trying to figure out the path forward, merging closed and open ideas that create the most beneficial, foreseeable outcomes.”

What is the biggest risk affecting present times?

Mitchell identified a fundamental shift in how society perceives truth and fiction as one of the most pressing risks we face in 2026. 


“There’s a weird risk that’s happening right now that’s really starting to balloon – people’s inability to tell fact from fiction,” Mitchell explained. The ease with which generative AI can create realistic content, combined with the absence of standardised watermarking or disclosure requirements, has created an environment where distinguishing authentic material from AI-generated content has become nearly impossible for most users.

“We’re entering an era now where it’s very easy to see content online that you think is real that’s not real, and plausible deniability for real content for people to say it’s AI-generated,” she noted. “So our sense of reality is completely disrupted at this point.”

With the unprecedented pace of AI’s evolution, Mitchell’s views underscore the complexity of the challenges ahead. These challenges may require not just technical solutions but also careful ethical consideration of how these systems will impact society’s most vulnerable, and the nature of truth itself.

The author is at the AI Everything Event in Cairo, Egypt at the invitation of GITEX Global. The event is being organised by GITEX and hosted by Egypt’s Ministry of Communications and Information Technology (MCIT) in partnership with the Information Technology Industry Development Agency (ITIDA).
