Meeting #3879
Updated by Evgeniy Pavlovskiy about 4 years ago
Record of seminar: https://youtu.be/xMWuuKEl2SI
Requirements:
* TBD 22 Sept
|_.Paper|_.Link|
|1 VoiceFilter from Google |https://google.github.io/speaker-id/publications/VoiceFilter/ |
|2 Wavesplit - July 2020. SDR 21.0 on WSJ0-mix2 |https://arxiv.org/pdf/2002.08933v2.pdf|
|3 Neural Supersampling for Real-time Rendering (Facebook) |https://research.fb.com/wp-content/uploads/2020/06/Neural-Supersampling-for-Real-time-Rendering.pdf (https://neurohive.io/ru/novosti/nejroset-ot-fair-povyshaet-razreshenie-izobrazheniya-v-16-raz/) |
|4 DeepFaceDrawing: Deep Generation of Face Images from Sketches |http://geometrylearning.com/paper/DeepFaceDrawing.pdf (https://neurohive.io/ru/novosti/deepfacedrawing-nejroset-generiruet-izobrazheniya-ljudej-po-sketcham/)|
|5 Tacotron 2 (without WaveNet) |https://github.com/NVIDIA/tacotron2 (paper: https://arxiv.org/pdf/1712.05884.pdf) |
|6 DoubleU-Net: A Deep Convolutional Neural Network for Medical Image Segmentation |https://arxiv.org/pdf/2006.04868.pdf |
|7 LoCo: Local Contrastive Representation Learning |https://arxiv.org/abs/2008.01342|
|8 3D Self-Supervised Methods for Medical Imaging |https://arxiv.org/abs/2006.03829|
|9 Brain Tumor Survival Prediction using Radiomics Features |https://arxiv.org/abs/2009.02903|
|10 Multilingual Speech Recognition with Corpus Relatedness Sampling |https://isca-speech.org/archive/Interspeech_2019/pdfs/3052.pdf|
|11 Does BERT Make Any Sense? |https://arxiv.org/pdf/1909.10430.pdf|
|12 Unsupervised Cross-lingual Representation Learning at Scale |https://arxiv.org/pdf/1911.02116.pdf|
|13 Reverse KL-Divergence Training of Prior Networks: Improved Uncertainty and Adversarial Robustness |https://arxiv.org/pdf/1905.13472.pdf|
|14 The Bottom-up Evolution of Representations in the Transformer: A Study with Machine Translation and Language Modeling Objectives |https://arxiv.org/pdf/1909.01380.pdf|
|15 Zero-Shot Learning - A Comprehensive Evaluation of the Good, the Bad and the Ugly |https://arxiv.org/pdf/1707.00600.pdf|
|16 Generalized Zero- and Few-Shot Learning via Aligned Variational Autoencoders |https://arxiv.org/pdf/1812.01784.pdf|
|17 Rethinking Generative Zero-Shot Learning: An Ensemble Learning Perspective for Recognising Visual Patches |https://arxiv.org/pdf/2007.13314.pdf|
|18 Plug and Play Language Models: A Simple Approach to Controlled Text Generation |https://arxiv.org/pdf/1912.02164.pdf|