When less is more: Simplifying inputs aids neural network understanding
Robin Tibor Schirrmeister, Rosanne Liu, Sara Hooker, Tonio Ball
(paper)
The Low-Resource Double Bind: An Empirical Study of Pruning for Low-Resource Machine Translation
Oreva Ahia, Julia Kreutzer, Sara Hooker
(paper)
Randomness In Neural Network Training: Characterizing The Impact of Tooling
Donglin Zhuang, Xingyao Zhang, Shuaiwen Leon Song, Sara Hooker
(paper)
Keep the Gradients Flowing: Using Gradient Flow to Study Sparse Network Optimization
Kale-ab Tessera, Sara Hooker, Benjamin Rosman
(paper)
Characterizing and Mitigating Bias in Compact Models
Sara Hooker*, Nyalleng Moorosi*, Gregory Clark, Samy Bengio, Emily Denton
(paper)
Estimating Example Difficulty using Variance of Gradients
Chirag Agarwal, Daniel D'Souza, Sara Hooker
(paper)
What do compressed deep neural networks forget?
Sara Hooker, Aaron Courville, Gregory Clark, Yann Dauphin, Andrea Frome
(website, paper, code)