About Google's approach to research publication
I understand the concern over Timnit Gebru’s resignation from Google. She’s done a great deal
to move the field forward with her research. I wanted to share the email I sent to Google Research
and some thoughts on our research process.
Here’s the email I sent to the Google Research team on Dec. 3, 2020:
Hi everyone,
I’m sure many of you have seen that Timnit Gebru is no longer working at Google.
This is a difficult moment, especially given the important research topics she was
involved in, and how deeply we care about responsible AI research as an org and
as a company.
Because there’s been a lot of speculation and misunderstanding on social media,
I wanted to share more context about how this came to pass, and assure you we’re
here to support you as you continue the research you’re all engaged in.
Timnit co-authored a paper with four fellow Googlers as well as some external
collaborators that needed to go through our review process (as is the case with
all externally submitted papers). We’ve approved dozens of papers that Timnit
and/or the other Googlers have authored and then published, but as you know,
papers often require changes during the internal review process (or are even
deemed unsuitable for submission). Unfortunately, this particular paper was
only shared with a day’s notice before its deadline — we require two weeks
for this sort of review — and then instead of awaiting reviewer feedback, it was
approved for submission and submitted.
A cross-functional team then reviewed the paper as part of our regular process,
and the authors were informed that it didn’t meet our bar for publication and were
given feedback about why.
given feedback about why. It ignored too much relevant research — for example,
it talked about the environmental impact of large models, but disregarded subsequent
research showing much greater efficiencies. Similarly, it raised concerns about bias
in language models, but didn’t take into account recent research to mitigate these
issues. We acknowledge that the authors were extremely disappointed with the
decision that Megan and I ultimately made, especially as they’d already submitted
the paper.
Timnit responded with an email requiring that a number of conditions be met in
order for her to continue working at Google, including revealing the identities of
every person who Megan and I had spoken to and consulted as part of the review
of the paper and the exact feedback. Timnit wrote that if we didn’t meet these
demands, she would leave Google and work on an end date. We accept and
respect her decision to resign from Google.
Given Timnit's role as a respected researcher and a manager in our Ethical AI team,
I feel bad that Timnit has gotten to a place where she feels this way about the work
we’re doing. I also feel bad that hundreds of you received an email just this week
from Timnit telling you to stop work on critical DEI programs. Please don’t. I
understand the frustration about the pace of progress, but we have important work
ahead and we need to keep at it.
I know we all genuinely share Timnit’s passion to make AI more equitable and
inclusive. No doubt, wherever she goes after Google, she’ll do great work and
I look forward to reading her papers and seeing what she accomplishes.
Thank you for reading and for all the important work you continue to do.
-Jeff
I’ve also received questions about our research and review process, so I wanted to share
more here. I'm going to be talking with our research teams, especially those on the Ethical
AI team and our many other teams focused on responsible AI, so they know that we strongly
support these important streams of research. And to be clear, we are deeply committed to
continuing our research on topics that are of particular importance to individual and intellectual
diversity -- from unfair social and technical bias in ML models, to the paucity of representative
training data, to involving social context in AI systems. That work is critical and I want our
research programs to deliver more work on these topics -- not less.
In my email above, I detailed some of what happened with this particular paper. But let me
give a better sense of the overall research review process. It’s more than just a single
approver or immediate research peers; it’s a process where we engage a wide range of
researchers, social scientists, ethicists, policy & privacy advisors, and human rights specialists
from across Research and Google overall. These reviewers ensure that, for example, the
research we publish paints a full enough picture and takes into account the latest relevant
research we’re aware of, and of course that it adheres to our AI Principles.
Those research review processes have helped improve many of our publications and research
applications. While more than 1,000 projects each year turn into published papers, there are
also many that don’t end up in a publication. That’s okay, and we can still carry forward
constructive parts of a project to inform future work. There are many ways we share our research:
publishing a paper, open-sourcing code, models, data, or colabs, creating demos, working
directly on products, and so on.
This paper surveyed valid concerns with large language models, and in fact many teams at Google
are actively working on these issues. We’re engaging the authors to ensure their input informs the
work we’re doing, and I’m confident it will have a positive impact on many of our research and
product efforts.
But the paper itself had some important gaps that prevented us from being comfortable putting
Google affiliation on it. For example, it didn’t include important findings on how models can be
made more efficient and actually reduce overall environmental impact, and it didn’t take into account
some recent work at Google and elsewhere on mitigating bias in language models. Highlighting
risks without pointing out methods for researchers and developers to understand and mitigate those
risks misses the mark on helping with these problems. As always, feedback on paper drafts generally
makes them stronger when they ultimately appear.
We have a strong track record of publishing work that challenges the status quo -- for example,
we’ve had more than 200 publications focused on responsible AI development in the last year alone.
Just a few examples of research we’re engaged in that tackles challenging issues:
Measuring and reducing gendered correlations in pre-trained NLP models
Evading Deepfake-Image Detectors with White- and Black-Box Attacks
CLIMATE-FEVER: A Dataset for Verification of Real-World Climate Claims
What Does AI Mean for Smallholder Farmers? A Proposal for Farmer-Centered AI Research [forthcoming]
SoK: Hate, Harassment, and the Changing Landscape of Online Abuse
Accelerating eye movement research via accurate and affordable smartphone eye tracking
The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks
Assessing the impact of coordinated COVID-19 exit strategies across Europe
Practical Compositional Fairness: Understanding Fairness in Multi-Component Ranking Systems
I’m proud of the way Google Research provides the flexibility and resources to explore many avenues
of research. Sometimes those avenues run perpendicular to one another. This is by design. The
exchange of diverse perspectives, even contradictory ones, is good for science and good for society.
It’s also good for Google. That exchange has enabled us not only to tackle ambitious problems, but
to do so responsibly.
Our aim is to rival peer-reviewed journals in the rigor and thoughtfulness with which we review
research before publication. To give a sense of that rigor, this blog post describes one facet of
review -- the one that applies when a research topic has broad societal implications and requires
a particular AI Principles review. It isn’t the full story of how we evaluate all of our research, but it
illustrates the level of detail involved: https://blog.google/technology/ai/update-work-ai-responsible-innovation/
We’re actively working on improving our paper review processes, because we know that too many
checks and balances can become cumbersome. We will always prioritize ensuring our research
is responsible and high-quality, but we’re working to make the process as streamlined as we can
so it’s more of a pleasure doing research here.
A final, important note -- we evaluate the substance of research separately from who’s doing it. But
to ensure our research reflects a fuller breadth of global experiences and perspectives in the first
place, we’re also committed to making sure Google Research is a place where every Googler can
do their best work. We’re pushing hard on our efforts to improve representation and inclusiveness
across Google Research, because we know this will lead to better research and a better experience
for everyone here.