Sara Hooker
  • Home
  • Research
  • Talks
  • Contact

Research

Please visit our research lab page for a full and up-to-date list of all collaborations.
Efficient methods for natural language processing: a survey
Marcos Treviso, Tianchu Ji, Ji-Ung Lee, Betty van Aken, Qingqing Cao, Manuel R Ciosici, Michael Hassid, Kenneth Heafield, Sara Hooker, Pedro H Martins, André FT Martins, Peter Milder, Colin Raffel, Edwin Simpson, Noam Slonim, Niranjan Balasubramanian, Leon Derczynski, Roy Schwartz
(paper)
Metadata Archaeology: Unearthing Data Subsets by Leveraging Training Dynamics
Shoaib Ahmed Siddiqui, Nitarshan Rajkumar, Tegan Maharaj, David Krueger, Sara Hooker
(paper)
Large language models are not zero-shot communicators
Laura Ruis, Akbir Khan, Stella Biderman, Sara Hooker, Tim Rocktäschel, Edward Grefenstette
(paper)

Intriguing Properties of Compression on Multilingual Models
Kelechi Ogueji, Orevaoghene Ahia, Gbemileke Onilude, Sebastian Gehrmann, Sara Hooker, Julia Kreutzer
(paper)
A Tale Of Two Long Tails
Daniel D'Souza, Zach Nussbaum, Chirag Agarwal, Sara Hooker
(paper)
When less is more: Simplifying inputs aids neural network understanding
Robin Tibor Schirrmeister, Rosanne Liu, Sara Hooker, Tonio Ball
(paper)
The Low-Resource Double Bind: An Empirical Study of Pruning for Low-Resource Machine Translation
Oreva Ahia, Julia Kreutzer, Sara Hooker
(paper)
Moving beyond “algorithmic bias is a data problem”
Sara Hooker
(paper)
Randomness In Neural Network Training: Characterizing The Impact of Tooling
Donglin Zhuang, Xingyao Zhang, Shuaiwen Leon Song, Sara Hooker
(paper)
Keep the Gradients Flowing: Using Gradient Flow to Study Sparse Network Optimization
Kale-ab Tessera, Sara Hooker, Benjamin Rosman
(paper)
The Hardware Lottery
Sara Hooker
(paper)
Characterizing and Mitigating Bias in Compact Models
Sara Hooker*, Nyalleng Moorosi*, Gregory Clark, Samy Bengio, Emily Denton
(paper)
Estimating Example Difficulty using Variance of Gradients
Chirag Agarwal, Daniel D'Souza, Sara Hooker
(paper)
What do compressed deep neural networks forget?
Sara Hooker, Aaron Courville, Gregory Clark, Yann Dauphin, Andrea Frome
(website, paper, code)
The State of Sparsity in Deep Neural Networks
Trevor Gale, Erich Elsen, Sara Hooker
(paper, code)
A Benchmark for Interpretability Methods in Deep Neural Networks
Sara Hooker, Dumitru Erhan, Pieter-Jan Kindermans, Been Kim
(slides, code, paper)
The (Un)reliability of Saliency Methods
Pieter-Jan Kindermans*, Sara Hooker*, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, Dumitru Erhan, Been Kim
(paper, slides)