AI & ML interests
Research and Policy Questions raised by (open) AI technology
Welcome to the Hugging Face ML & Society Team Page!
We're a multidisciplinary team working on research and regulatory questions related to AI systems: their (open) development, governance, and impact on society at large.
Team Members
- Yacine Jernite, Head of ML & Society
- Sasha Luccioni, AI & Climate Lead
- Giada Pistilli, Principal Ethicist
- Lucie-Aimée Kaffee, Applied Policy Researcher, EU Policy
We also work closely with Irene Solaiman (Chief Policy Officer) and Avijit Ghosh (Applied Policy Researcher) on the policy team, and with Meg Mitchell (Chief Ethics Scientist) and Bruna Trevelin (Legal Counsel) on various topics!
Resources
Team's Blog Posts
See a non-exhaustive list of our team's recent writings below:
- What kind of environmental impacts are AI companies disclosing? (And can we compare them?) | 09/17/25
- Advertisement, Privacy, and Intimacy: Lessons from Social Media for Conversational AI | 09/01/25
- Old Maps, New Terrain: Updating Labour Taxonomies for the AI Era | 08/20/25
- The GPT-OSS models are here… and they're energy-efficient! | 08/07/25
- How Your Utility Bills Are Subsidizing Power-Hungry AI | 08/06/25
- What Open-Source Developers Need to Know about the EU AI Act's Rules for GPAI Models | 08/04/25
- AI Companionship: Why We Need to Evaluate How AI Systems Handle Emotional Bonds | 07/21/25
- What is the Hugging Face Community Building? | 07/15/25
- Can AI Be Consentful? Rethinking Permission in the Age of Synthetic Everything | 07/08/25
- How Much Power does a SOTA Open Video Model Use? ⚡ | 07/02/25
- Whose Voice Do We Hear When AI Speaks? | 06/20/25
- Open Source AI: A Cornerstone of Digital Sovereignty | 06/11/25
- AI Policy @🤗: Response to the 2025 National AI R&D Strategic Plan | 06/02/25
- Bigger isn't always better: how to choose the most efficient model for context-specific tasks 🌱🧑🏼‍💻 | 05/28/25
- Highlights from the First ICLR 2025 Watermarking Workshop | 05/14/25
- AI Personas: The Impact of Design Choices | 05/07/25
- Reduce, Reuse, Recycle: Why Open Source is a Win for Sustainability | 05/07/25
- Consent by Design: Approaches to User Data in Open AI Ecosystems | 04/17/25
- AI Models Hiding Their Energy Footprint? Here's What You Can Do | 04/14/25
- Empowering Public Organizations: Preparing Your Data for the AI Era | 04/10/25
- Are AI Agents Sustainable? It depends | 04/07/25
- I Clicked "I Agree", But What Am I Really Consenting To? | 03/26/25
- AI Policy @🤗: Response to the White House AI Action Plan RFI | 03/19/25
- 🇪🇺 EU AI Act: Comments on the Third Code of Practice Draft 🇪🇺 | 03/13/25
- Announcing AI Energy Score Ratings | 02/11/25
- Announcing the winners of the Frugal AI Challenge 🌱 | 02/11/25
- From Hippocrates to AI: Reflections on the Evolution of Consent | 02/04/25
- AI Agents Are Here. What Now? | 01/13/25
- 🇪🇺⚖️ EU AI Act: Systemic Risks in the First CoP Draft Comments ⚖️🇪🇺 | 12/12/24
Highlights from our work on AI companionship (INTIMA):
- AI-companionship/INTIMA (dataset)
- INTIMA Companionship Benchmark Responses (Space): visualizing model responses to companionship prompts
- INTIMA: A Benchmark for Human-AI Companionship Behavior (paper, arXiv:2508.09998)
- INTIMA Responses (Space): INTIMA Benchmark model responses explorer
Spaces
- ML & Society at HF: 🤗 machine learning and society team website
- Lit Review With LMs: uses HF inference and academic APIs for directed lit review
- Legal Hackathons NYU AI2: legal memos on AI from NYU hackathons
- Labor Archive Explorer: explore AI, Labor, and Economy trends from 2022-2025
- OS GPAI Guide Flowchart: flowchart for (open) GPAI developers to identify AI Act requirements