Meeting #3133
h1. 1 Schedule
[[Seminars_schedule]]
h1. 2 Requirements
25 minutes for one presentation.
Main requirements for a presentation:
* to be prepared in LaTeX (try using https://overleaf.com); a minimal Beamer sketch is given after this list,
* to be short, clear, and easy to follow,
* no more than 20 minutes for content delivery and 5 minutes for questions,
* references on the last slide.
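For orientation, here is a minimal sketch of a Beamer deck satisfying the requirements above (title slide, content, references on the last slide). The topic, reporter name, date, and reference entry are placeholders to replace with your own paper:

<pre><code class="latex">
\documentclass{beamer}   % slide-deck class; compiles on https://overleaf.com
\usetheme{default}

% Placeholders: replace with your topic, name, and scheduled date
\title{Paper Review: Topic Goes Here}
\author{Reporter Name}
\date{Scheduled Date}

\begin{document}

\begin{frame}
  \titlepage
\end{frame}

\begin{frame}{Overview}
  \begin{itemize}
    \item Problem statement
    \item Method
    \item Results and discussion
  \end{itemize}
\end{frame}

% ...content slides; roughly 15--20 slides fit the 20-minute limit...

% Required: references on the last slide
\begin{frame}{References}
  \begin{thebibliography}{9}
    \bibitem{paper} Author(s). Title. Venue, year. % placeholder entry
  \end{thebibliography}
\end{frame}

\end{document}
</code></pre>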
h1. 3 Topics
Each student has to present a research paper and a part of his or her thesis.
Open list of cutting-edge topics:
|_.Topic|_.Link|_.Reporter|_.Scheduled|
|[ ]Reynolds Averaged Turbulence Modeling using Deep Neural Networks with Embedded Invariance|Julia Ling and Jeremy Templeton. Sandia National Laboratories. https://www.cambridge.org/core/journals/journal-of-fluid-mechanics/article/reynolds-averaged-turbulence-modelling-using-deep-neural-networks-with-embedded-invariance/0B280EEE89C74A7BF651C422F8FBD1EB|*Omid Razizadeh*|26-Mar|
|[ ]Zero and Few shot learning|Schonfeld, E., Ebrahimi, S., Sinha, S., Darrell, T., & Akata, Z. (2019). Generalized zero-and few-shot learning via aligned variational autoencoders. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 8247-8255). https://arxiv.org/pdf/1812.01784.pdf|*Nikita Nikolaev*|20-Feb|
|[ ] ERNIE|Zhang, Z., Han, X., Liu, Z., Jiang, X., Sun, M., & Liu, Q. (2019). ERNIE: Enhanced language representation with informative entities. arXiv preprint arXiv:1905.07129.|*Daria Pirozhkova*|27-Feb|
|[ ] Reservoir Computing|...|*Sergey Garmaev*|27-Feb|
|[ ] Hybrid VAE for NLG|Semeniuta, S., Severyn, A., & Barth, E. (2017). A hybrid convolutional variational autoencoder for text generation. arXiv preprint arXiv:1702.02390.|*Elena Voskoboy*|5-Mar|
|[ ] NASNet and AutoML| AutoML for large scale image classification and object detection, http://arxiv.org/abs/1707.07012 |*Oladotun Aluko*|12-Mar|
|[ ]BERT-DST|Chao, G. L., & Lane, I. (2019). BERT-DST: Scalable end-to-end dialogue state tracking with bidirectional encoder representations from transformer. arXiv preprint arXiv:1907.03040.|*Alexey Korolev*|16-Apr|
|[ ] |Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI Blog, 1(8), 9. https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf|*Alexander Rusnak*||
|[ ]Variational Circuits|Chen, S. Y. C., & Goan, H. S. (2019). Variational Quantum Circuits and Deep Reinforcement Learning. arXiv preprint arXiv:1907.00397.|*Kirill Kalmutskiy*||
|[ ]|https://pennylane.ai/qml/zreferences.html#huggins2018towards|||
|[ ] Tensor Networks in QML|William Huggins, Piyush Patel, K Birgitta Whaley, and E Miles Stoudenmire. Towards quantum machine learning with tensor networks. 2018. arXiv:1803.11537.|*Raphael Blankson*|30-Apr|
|[ ] Tensor Networks|Edwin Stoudenmire and David J Schwab. Supervised learning with tensor networks. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, 4799–4807. Curran Associates, Inc., 2016. URL: http://papers.nips.cc/paper/6211-supervised-learning-with-tensor-networks.pdf.|||
|[ ] PennyLane.ai|Overview of the framework|*Raphael Blankson* as master topic||
|[ ] Jasper|Li, J., Lavrukhin, V., Ginsburg, B., Leary, R., Kuchaiev, O., Cohen, J. M., ... & Gadde, R. T. (2019). Jasper: An end-to-end convolutional neural acoustic model. arXiv preprint arXiv:1904.03288.|*Geoffroy de Felcourt*||
|[ ] Weight Uncertainty|Blundell, C., Cornebise, J., Kavukcuoglu, K., & Wierstra, D. (2015). Weight uncertainty in neural networks. arXiv preprint arXiv:1505.05424.|||
|[ ] |Wen, Y., Vicol, P., Ba, J., Tran, D., & Grosse, R. (2018). Flipout: Efficient pseudo-independent weight perturbations on mini-batches. arXiv preprint arXiv:1803.04386.|||
|[ ] | Lample, G., & Charton, F. (2019). Deep learning for symbolic mathematics. arXiv preprint arXiv:1912.01412. Facebook |*Mikhail Liz*||
|\4=.*Old*|
|[x]MXNet DL framework|Chen T. et al. Mxnet: A flexible and efficient machine learning library for heterogeneous distributed systems //arXiv preprint arXiv:1512.01274. – 2015. URL: https://arxiv.org/pdf/1512.01274|*Oladotun Aluko*|-*14-Nov*-, *21-Nov*|
|[x] |“Why Should I Trust You?” Explaining the Predictions of Any Classifier, https://arxiv.org/pdf/1602.04938.pdf, https://github.com/marcotcr/lime|*Rohan Kumar Rathore*|*5-Dec*|
|[ ]Manifold MixUp| Manifold Mixup: Better Representations by Interpolating Hidden States. URL: https://arxiv.org/pdf/1806.05236v4 || |
|[x]UMAP | McInnes, Leland and John Healy (2018). “UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction”. In: ArXiv e-prints. arXiv: "1802.03426 [stat.ML]":http://arxiv.org/abs/1802.03426|*Alix Bernard*|*14-Nov*|
|[x]Artistic Style|Gatys L. A., Ecker A. S., Bethge M. A neural algorithm of artistic style //arXiv preprint arXiv:1508.06576. – 2015.|*Elena Voskoboy*|*14-Nov*|
|[x]EfficientNet|Tan M., Le Q. V. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks //arXiv preprint arXiv:1905.11946. – 2019.|*Owen Siyoto*|*26-Dec*|
|\4=.*Fluid, Oil, Physics, Chemistry*|
|[x]Data-driven predictive modeling using field inversion|Parish, Eric J., and Karthik Duraisamy. "A paradigm for data-driven predictive modeling using field inversion and machine learning." Journal of Computational Physics 305 (2016): 758-774.|*Omid Razizadeh*|*31-Oct*|
|[x]Predicting Oil Movement in a Development System Using Deep Latent Dynamic Models|Video: https://www.youtube.com/watch?v=N3iV-F4aqLA Slides: https://bayesgroup.github.io/bmml_sem/2018/Temirchev_Metamodelling.pdf|||
|\4=.*Faces*|
|[x]SphereFace| SphereFace: Deep Hypersphere Embedding for Face Recognition URL: https://arxiv.org/pdf/1704.08063.pdf|*Mukul Vishwas*| |
|[x]Triplet Loss|https://arxiv.org/pdf/1503.03832.pdf|*Vassily Baranov*|*24-Oct*|
|[x]Style transfer SotA (state-of-the-art)| A Style-Based Generator Architecture for Generative Adversarial Networks. URL: https://arxiv.org/abs/1812.04948 |||
|Performance of Word Embeddings|review and experience|||
|[x]CosFace|Wang H. et al. CosFace: Large margin cosine loss for deep face recognition //Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. – 2018. – pp. 5265-5274. URL: https://arxiv.org/pdf/1801.09414.pdf|*Mikhail Liz*|*19-Dec*|
|\4=.*Quantum*|
|[x]Supervised learning with quantum enhanced feature spaces|Havlíček V. et al. Supervised learning with quantum-enhanced feature spaces //Nature. – 2019. – Vol. 567. – No. 7747. – p. 209. URL: https://arxiv.org/pdf/1804.11326.pdf|*Raphael Blankson*|*12-Dec*|
|[x]FermiNet|Ab-Initio Solution of the Many-Electron Schrödinger Equation with Deep Neural Networks, https://arxiv.org/pdf/1909.02487|*Kristanek Antoine*|*28-Nov*|
|[ ]DisCoCat model| Grefenstette E. Category-theoretic quantitative compositional distributional models of natural language semantics //arXiv preprint arXiv:1311.1539. – 2013. URL: https://arxiv.org/abs/1311.1539|||
|[ ]DisCoCat toy model|Gogioso S. A Corpus-based Toy Model for DisCoCat //arXiv preprint arXiv:1605.04013. – 2016. URL: https://arxiv.org/pdf/1605.04013.pdf|||
|[ ]A Quantum-Theoretic Approach to Distributional Semantics|Blacoe W., Kashefi E., Lapata M. A quantum-theoretic approach to distributional semantics //Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. – 2013. – pp. 847-857. URL: http://www.aclweb.org/anthology/N13-1105|||
|Solving the Quantum Many-Body problem with ANN| Carleo G., Troyer M. Solving the quantum many-body problem with artificial neural networks //Science. – 2017. – Vol. 355. – No. 6325. – pp. 602-606. URL: https://arxiv.org/pdf/1606.02318|*Andrey Yashkin*|*21-Nov*|
|\4=.*Economics*|
|[xx]Understanding consumer behavior|Lang T., Rettenmeier M. Understanding consumer behavior with recurrent neural networks //Workshop on Machine Learning Methods for Recommender Systems. – 2017. URL: https://doogkong.github.io/2017/papers/paper2.pdf|*Abhishek Saxena*, *Watana Pongsapas*|*?*, *31-Oct*|
|[xx]|Li X. et al. Empirical analysis: stock market prediction via extreme learning machine //Neural Computing and Applications. – 2016. – Vol. 27. – No. 1. – pp. 67-78.|*Kaivalya Anand Pandey*, *Rishabh Tiwari*|*21-Nov*, *12-Dec*|
|\4=.*Speech*|
|[ ]Tacotron 2|Shen J. et al. Natural TTS synthesis by conditioning WaveNet on mel spectrogram predictions //2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). – IEEE, 2018. – pp. 4779-4783. URL: https://arxiv.org/pdf/1712.05884.pdf|||
|[x]BERT (Google)|Devlin J. et al. BERT: Pre-training of deep bidirectional transformers for language understanding //arXiv preprint arXiv:1810.04805. – 2018. URL: https://arxiv.org/abs/1810.04805|*Nikita Nikolaev*|*12-Dec*|
|\4=.*Natural Language Processing*|
|[x]Text clustering | Xu J. et al. Self-taught convolutional neural networks for short text clustering //Neural Networks. – 2017. – Vol. 88. – pp. 22-31. URL: https://arxiv.org/abs/1701.00185 |*Alexander Donets*|*21-Nov*|
|[x]Universal Sentence Encoder|Cer D. et al. Universal sentence encoder //arXiv preprint arXiv:1803.11175. – 2018. URL: https://arxiv.org/pdf/1803.11175.pdf|*Alexey Korolev*|*28-Nov*|
|[x]ULMFiT|Howard J., Ruder S. Universal language model fine-tuning for text classification //arXiv preprint arXiv:1801.06146. – 2018. URL: https://arxiv.org/pdf/1801.06146.pdf|*Alexander Rusnak*|*26-Dec*|
|[x]ELMo|Peters M. E. et al. Deep contextualized word representations //arXiv preprint arXiv:1802.05365. – 2018. URL: http://www.aclweb.org/anthology/N18-1202|*Sergey Garmaev*|*7-Nov*|
|[ ]Skip-thoughts, Infersent, RandSent - Facebook|1. Kiros R. et al. Skip-thought vectors //Advances in neural information processing systems. – 2015. – pp. 3294-3302. URL: https://arxiv.org/pdf/1506.06726.pdf
2. Conneau A. et al. Supervised learning of universal sentence representations from natural language inference data //arXiv preprint arXiv:1705.02364. – 2017. URL: https://arxiv.org/abs/1705.02364
3. Wieting J., Kiela D. No Training Required: Exploring Random Encoders for Sentence Classification //arXiv preprint arXiv:1901.10444. – 2019. URL: https://arxiv.org/pdf/1901.10444.pdf|||
|[ ]BigARTM|Vorontsov K. et al. BigARTM: Open source library for regularized multimodal topic modeling of large collections //International Conference on Analysis of Images, Social Networks and Texts. – Springer, Cham, 2015. – pp. 370-381. URL: http://www.machinelearning.ru/wiki/images/e/ea/Voron15aist.pdf|||
|[x]Vision and Feature Norm|Vision and Feature Norms: Improving automatic feature norm learning through cross-modal maps. URL: https://aclweb.org/anthology/N16-1071|*Dinesh Reddy*|*7-Nov*|
|[x]Reinforcement Learning|Human-level control through deep reinforcement learning|*Kirill Kalmutskiy*|*28-Nov*|
|[ ]|Selsam, D., Lamm, M., Bünz, B., Liang, P., de Moura, L., & Dill, D. L. (2018). Learning a SAT solver from single-bit supervision. arXiv preprint arXiv:1802.03685.|||
|[x]ERNIE|Enhanced Representation through Knowledge Integration. URL: https://arxiv.org/abs/1904.09223|*Mikhail Rodin*|-*5-Dec*- *26-Dec*|
|\4=.*Papers with code*|
|[x]Weight Agnostic Neural Networks|Weight Agnostic Neural Networks, Google, https://arxiv.org/abs/1906.04358|*Roman Kozinets*|*14-Nov*|
|[x]SeqSleepNet|Phan H. et al. SeqSleepNet: end-to-end hierarchical recurrent neural network for sequence-to-sequence automatic sleep staging //IEEE Transactions on Neural Systems and Rehabilitation Engineering. – 2019. – Vol. 27. – No. 3. – pp. 400-410.|*Daria Pirozhkova*|*7-Nov*|
|[ ]Deep-speare|Deep-speare: A Joint Neural Model of Poetic Language, Meter and Rhyme https://paperswithcode.com/paper/deep-speare-a-joint-neural-model-of-poetic|*Elizaveta Tagirova*|*19-Dec*|
h1. 4 Topics of master theses
Open list of reports on master theses (statement of work, review, and results):
|_.Reporter|_.Topic|_.Scheduled|
|\3=.1st year students, review, method|
|1 to-be-listed | |-|
|\3=.2nd year students, results|
|1 Razizadeh Omid | ||
|2 Siyoto Owen | ||
|3 Munyaradzi Njera | ||
|4 Kozinets Roman | ||
|5 Tagirova Elizaveta | ||
|6 Tsvaki Jetina | ||
|7 Ravi Kumar | ||
h1. 5 At fault
These students have still not selected a paper to report on or have not been assigned to a time slot:
1st year students
* *noname*: refer, master
2nd year students
* *noname*: refer, master
h1. 6 Presence
The requirements of the seminar are:
* AS.BDA.RQ.1) deliver presentations: (i) on the topic of the master thesis, (ii) a review of a recognized paper.
* AS.BDA.RQ.2) attend at least 50% of classes.
Here is a table of [[Presence]] compiled from meeting minutes (see minutes as issues in the first column of the [[Seminars schedule]]).