Please visit our research lab page for a full and up-to-date list of all collaborations.
Efficient Methods for Natural Language Processing: A Survey
Marcos Treviso, Tianchu Ji, Ji-Ung Lee, Betty van Aken, Qingqing Cao, Manuel R Ciosici, Michael Hassid, Kenneth Heafield, Sara Hooker, Pedro H Martins, André FT Martins, Peter Milder, Colin Raffel, Edwin Simpson, Noam Slonim, Niranjan Balasubramanian, Leon Derczynski, Roy Schwartz
(paper)
Metadata Archaeology: Unearthing Data Subsets by Leveraging Training Dynamics
Shoaib Ahmed Siddiqui, Nitarshan Rajkumar, Tegan Maharaj, David Krueger, Sara Hooker
(paper)
Large language models are not zero-shot communicators
Laura Ruis, Akbir Khan, Stella Biderman, Sara Hooker, Tim Rocktäschel, Edward Grefenstette
(paper)
Intriguing Properties of Compression on Multilingual Models
Kelechi Ogueji, Orevaoghene Ahia, Gbemileke Onilude, Sebastian Gehrmann, Sara Hooker, Julia Kreutzer
(paper)
When less is more: Simplifying inputs aids neural network understanding
Robin Tibor Schirrmeister, Rosanne Liu, Sara Hooker, Tonio Ball
(paper)
The Low-Resource Double Bind: An Empirical Study of Pruning for Low-Resource Machine Translation
Oreva Ahia, Julia Kreutzer, Sara Hooker
(paper)
Randomness In Neural Network Training: Characterizing The Impact of Tooling
Donglin Zhuang, Xingyao Zhang, Shuaiwen Leon Song, Sara Hooker
(paper)
Keep the Gradients Flowing: Using Gradient Flow to Study Sparse Network Optimization
Kale-ab Tessera, Sara Hooker, Benjamin Rosman
(paper)
Characterizing and Mitigating Bias in Compact Models
Sara Hooker*, Nyalleng Moorosi*, Gregory Clark, Samy Bengio, Emily Denton
(paper)
Estimating Example Difficulty using Variance of Gradients
Chirag Agarwal, Daniel D'Souza, Sara Hooker
(paper)
What do compressed deep neural networks forget?
Sara Hooker, Aaron Courville, Gregory Clark, Yann Dauphin, Andrea Frome
(website, paper, code)