Re.Work 2020 Deep Learning Summit
I spent the past two days at the Re.Work Deep Learning summit in SF. Here are some notes:
1. Anh Nguyen (https://anhnguyen.me/cv/) presented on explainable AI.
He compared a few methods: (1) Smoothed saliency maps (gradient and smoothed gradients), (2) Sliding patch, (3) LIME.
Saliency maps may be far less noisy than commonly assumed, especially with robust classifiers such as GoogLeNet-R (a robust classifier, i.e., one adversarially trained with noisy images).
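As a rough illustration of the smoothed-gradient idea, here is a minimal numpy sketch (a toy stand-in model, not code from the talk): SmoothGrad averages input gradients over several noise-perturbed copies of the input to reduce saliency noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def score(x, w):
    # Toy nonlinear "classifier" score; stands in for a network logit.
    return np.tanh(x * w).sum()

def grad(x, w, eps=1e-5):
    # Finite-difference gradient of the score w.r.t. the input
    # (a real network would use backprop instead).
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (score(x + d, w) - score(x - d, w)) / (2 * eps)
    return g

def smoothgrad(x, w, n=50, sigma=0.1):
    # SmoothGrad: average the gradient over n noisy copies of the input.
    gs = [grad(x + rng.normal(0.0, sigma, x.shape), w) for _ in range(n)]
    return np.mean(gs, axis=0)

x = rng.normal(size=8)
w = rng.normal(size=8)
saliency = np.abs(smoothgrad(x, w))  # per-feature importance map
```

The averaging step is the whole trick: individual gradients fluctuate with small input perturbations, but their mean is a much smoother attribution map.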
2. In a talk from FAIR (Facebook AI Research), Ari Morcos showed that training only the batch-norm parameters achieves good classification performance. Fixing all parameters except those associated with batch norm degraded performance, but not as much as expected. This shows the importance of understanding what each parameter does.
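The batch-norm-only result can be mimicked in a toy setting. The sketch below (my own construction, not Morcos's code) freezes a random feature matrix and trains only the affine batch-norm parameters gamma and beta, which is enough to fit a simple regression target.

```python
import numpy as np

rng = np.random.default_rng(1)

# Frozen random features (stands in for the network's conv weights).
W = rng.normal(size=(10, 32))

# Only the batch-norm affine parameters are trainable.
gamma = np.ones(32)
beta = np.zeros(32)

X = rng.normal(size=(64, 10))
y = rng.normal(size=64)

def loss_and_grads(X, y):
    h = X @ W
    hhat = (h - h.mean(0)) / (h.std(0) + 1e-5)   # batch normalization
    s = hhat @ gamma + beta.sum()                # scalar score per example
    err = s - y
    loss = (err ** 2).mean()
    dgamma = 2.0 * (err[:, None] * hhat).mean(0)
    dbeta = np.full(32, 2.0 * err.mean())
    return loss, dgamma, dbeta

loss_before, _, _ = loss_and_grads(X, y)
for _ in range(300):
    _, dg, db = loss_and_grads(X, y)   # gradient descent on gamma/beta only
    gamma -= 0.01 * dg
    beta -= 0.01 * db
loss_after, _, _ = loss_and_grads(X, y)
```

Even with W frozen at random values, adjusting only the per-feature scale and shift reduces the loss, which is the spirit of the finding.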
3. Sara Hooker: When you prune a neural network (with dropout, for example), performance on the test set may take a hit, but the impact is not the same across all classes; detailed analysis is necessary to understand the disproportionate effect on particular test classes.
4. Dawn Song. She spoke about adversarial attacks on neural networks and had some interesting slides.
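As background on that topic (not necessarily what she presented), the classic fast-gradient-sign attack (FGSM) can be sketched on a toy linear classifier, where the gradient of the score with respect to the input is just the weight vector:

```python
import numpy as np

rng = np.random.default_rng(3)

w = rng.normal(size=16)  # toy linear classifier weights
x = rng.normal(size=16)  # clean input

def margin(x):
    # Positive margin = class A, negative = class B.
    return x @ w

# FGSM: step each input coordinate by eps in the sign direction that
# pushes the margin toward the decision boundary (L-infinity bounded).
eps = 0.25
x_adv = x - eps * np.sign(w) * np.sign(margin(x))
```

The perturbation is imperceptibly small per coordinate (at most eps), yet it shrinks the classification margin by eps times the L1 norm of w, which is why high-dimensional linear models are so fragile.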
5. ANML: Learning to Continually Learn (an improvement over MAML and OML).
The ANML paper should be out any time soon.
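ANML builds on MAML-style meta-learning. For context, a first-order MAML-like loop on toy 1-D regression tasks might look like this (my own sketch of the general idea, not the ANML algorithm):

```python
import numpy as np

rng = np.random.default_rng(5)

# Tasks are 1-D linear regressions y = a * x with slope a per task.
def loss(w, X, y):
    return ((X * w - y) ** 2).mean()

def grad(w, X, y):
    return (2.0 * (X * w - y) * X).mean()

w0 = 0.0                       # meta-learned initialization
inner_lr, outer_lr = 0.1, 0.05
for _ in range(200):
    a = rng.uniform(0.5, 1.5)              # sample a task
    X = rng.normal(size=20)
    y = a * X
    w_task = w0 - inner_lr * grad(w0, X, y)    # inner adaptation step
    # First-order MAML: outer update uses the gradient at the adapted weights.
    w0 -= outer_lr * grad(w_task, X, y)

# After meta-training, one inner step on a new task should already help.
a_new = 1.3
X_new = rng.normal(size=20)
y_new = a_new * X_new
w_adapted = w0 - inner_lr * grad(w0, X_new, y_new)
```

The outer loop moves the initialization to a point from which a single gradient step adapts well across the task distribution; ANML additionally meta-learns a neuromodulatory gating network to fight catastrophic forgetting.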
6. Ilya Sutskever. The BEST talk. Recent advances in AI at OpenAI:
OpenAI Five (Dota 2), the GPT-2 language model, ANML, and the hide-and-seek multi-agent game.
7. Massively Improving Data-efficiency of Supervised Learning Systems using Self-supervision from Unlabeled Data (read his paper). He presented on Contrastive Predictive Coding. See this paper for more: https://arxiv.org/pdf/1905.09272.pdf
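CPC trains representations with the InfoNCE contrastive loss: each context vector should score its own target (the positive) higher than the other targets in the batch (negatives). A minimal numpy sketch of that loss (illustrative only, simplified from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)

def info_nce(context, targets, temperature=0.1):
    # Rows of `context` and `targets` are paired; positives sit on the
    # diagonal of the similarity matrix, everything else is a negative.
    logits = context @ targets.T / temperature          # (B, B) similarities
    logits -= logits.max(1, keepdims=True)              # numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum(1, keepdims=True))
    return -np.diag(log_softmax).mean()                 # cross-entropy on diagonal

B, d = 8, 32
z = rng.normal(size=(B, d))
loss_aligned = info_nce(z, z)                        # perfectly matched pairs
loss_random = info_nce(z, rng.normal(size=(B, d)))   # unrelated pairs
```

A good encoder drives the loss toward the aligned regime; the loss is a bound on the mutual information between context and target, which is the theoretical motivation in the CPC line of work.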