Sara Hooker
  • Home
  • Research
  • Talks
  • Educational Outreach
  • Contact

Research

When less is more: Simplifying inputs aids neural network understanding
Robin Tibor Schirrmeister, Rosanne Liu, Sara Hooker, Tonio Ball
(paper)
A Tale Of Two Long Tails
Daniel D'Souza, Zach Nussbaum, Chirag Agarwal, Sara Hooker
(paper)
The Low-Resource Double Bind: An Empirical Study of Pruning for Low-Resource Machine Translation
Oreva Ahia, Julia Kreutzer, Sara Hooker
(paper)
Moving beyond “algorithmic bias is a data problem”
Sara Hooker
(paper)
Randomness In Neural Network Training: Characterizing The Impact of Tooling
Donglin Zhuang, Xingyao Zhang, Shuaiwen Leon Song, Sara Hooker
(paper)
Keep the Gradients Flowing: Using Gradient Flow to Study Sparse Network Optimization
Kale-ab Tessera, Sara Hooker, Benjamin Rosman
(paper)
The Hardware Lottery
Sara Hooker
(paper)
Characterizing and Mitigating Bias in Compact Models
Sara Hooker*, Nyalleng Moorosi*, Gregory Clark, Samy Bengio, Emily Denton
(paper)
Estimating Example Difficulty using Variance of Gradients
Chirag Agarwal, Daniel D'Souza, Sara Hooker
(paper)
What do compressed deep neural networks forget?
Sara Hooker, Aaron Courville, Gregory Clark, Yann Dauphin, Andrea Frome
(website, paper, code)
The State of Sparsity in Deep Neural Networks
Trevor Gale, Erich Elsen, Sara Hooker
(paper, code)
A Benchmark for Interpretability Methods in Deep Neural Networks
Sara Hooker, Dumitru Erhan, Pieter-Jan Kindermans, Been Kim
(slides, code, paper)
The (Un)reliability of Saliency Methods
Pieter-Jan Kindermans*, Sara Hooker*, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, Dumitru Erhan, Been Kim
(paper, slides)