Meeting #3133
h1. 1 Schedule
[[Seminars_schedule]]
h1. 2 Requirements
25 minutes for one presentation.
Main requirements for a presentation:
* prepare it in LaTeX (try https://overleaf.com); a minimal Beamer sketch is given after this list,
* keep it short, clear, and easy to follow,
* spend no more than 20 minutes delivering the content and 5 minutes on questions,
* put references on the last slide.
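For those new to LaTeX slides, here is a minimal sketch of a deck that fits the requirements above. It assumes the Beamer class is acceptable (the seminar only mandates LaTeX); the theme, the section names, and the placeholder reference are illustrative, not prescribed. The file should compile as-is on https://overleaf.com with pdfLaTeX.

<pre><code class="latex">
\documentclass{beamer}        % Beamer: a standard LaTeX class for slides
\usetheme{Madrid}             % any built-in theme works; Madrid is an arbitrary pick

\title{Paper Review: Your Topic Here}
\author{Your Name}
\institute{Big Data Analytics Seminar}
\date{\today}

\begin{document}

\begin{frame}
  \titlepage                  % title slide
\end{frame}

\begin{frame}{Outline}
  \tableofcontents            % built from the \section commands below
\end{frame}

\section{Problem and Motivation}
\begin{frame}{Problem and Motivation}
  \begin{itemize}
    \item What problem does the paper solve?
    \item Why does it matter?
  \end{itemize}
\end{frame}

\section{Method}
\begin{frame}{Method}
  \begin{itemize}
    \item Key idea of the approach
    \item How it differs from prior work
  \end{itemize}
\end{frame}

\section{Results}
\begin{frame}{Results}
  \begin{itemize}
    \item Main experiments and metrics
  \end{itemize}
\end{frame}

% Required: references on the last slide
\begin{frame}{References}
  \begin{thebibliography}{9}
    \bibitem{paper} Author, A. (2019). Paper title. arXiv preprint arXiv:0000.00000.
  \end{thebibliography}
\end{frame}

\end{document}
</code></pre>

A common rule of thumb is one slide per two minutes of content, so a 20-minute talk fits roughly ten content slides.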
h1. 3 Topics
Each student has to present a research paper and part of their thesis.
Open list of cutting-edge topics:
|_.Topic|_.Link|_.Reporter|_.Scheduled|
|[ ] Zero and Few shot learning|Schonfeld, E., Ebrahimi, S., Sinha, S., Darrell, T., & Akata, Z. (2019). Generalized zero-and few-shot learning via aligned variational autoencoders. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 8247-8255). https://arxiv.org/pdf/1812.01784.pdf|*Nikita Nikolaev*|20-Feb|
|[ ] ERNIE|Zhang, Z., Han, X., Liu, Z., Jiang, X., Sun, M., & Liu, Q. (2019). ERNIE: Enhanced language representation with informative entities. arXiv preprint arXiv:1905.07129.|*Daria Pirozhkova*|27-Feb|
|[ ] Reservoir Computing|Design Strategies for Weight Matrices of Echo State Networks|*Sergey Garmaev*|27-Feb|
|[ ] Hybrid VAE for NLG|Semeniuta, S., Severyn, A., & Barth, E. (2017). A hybrid convolutional variational autoencoder for text generation. arXiv preprint arXiv:1702.02390.|*Elena Voskoboy*|5-Mar|
|[ ] NASNet and AutoML| AutoML for large scale image classification and object detection, http://arxiv.org/abs/1707.07012 |*Oladotun Aluko*|12-Mar|
|[ ] BERT-DST|Chao, G. L., & Lane, I. (2019). BERT-DST: Scalable end-to-end dialogue state tracking with bidirectional encoder representations from transformer. arXiv preprint arXiv:1907.03040.|*Alexey Korolev*|16-Apr|
|[ ] |Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI Blog, 1(8), 9. https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf|*Alexander Rusnak*|02-Apr|
|[ ] Variational Circuits|Chen, S. Y. C., & Goan, H. S. (2019). Variational Quantum Circuits and Deep Reinforcement Learning. arXiv preprint arXiv:1907.00397.|*Kirill Kalmutskiy*|12-Mar|
|[ ]|https://pennylane.ai/qml/zreferences.html#huggins2018towards
Kosuke Mitarai, Makoto Negoro, Masahiro Kitagawa, and Keisuke Fujii. Quantum circuit learning. 2018. arXiv:1803.00745.|*Andrey Yashkin*|5-Mar|
|[ ] Reynolds Averaged Turbulence Modelling using Deep Neural Networks with Embedded Invariance|Julia Ling and Jeremy Templeton. Sandia National Laboratories. https://www.cambridge.org/core/journals/journal-of-fluid-mechanics/article/reynolds-averaged-turbulence-modelling-using-deep-neural-networks-with-embedded-invariance/0B280EEE89C74A7BF651C422F8FBD1EB|*Omid Razizadeh*|26-Mar|
|[ ] Tensor Networks in QML|William Huggins, Piyush Patel, K Birgitta Whaley, and E Miles Stoudenmire. Towards quantum machine learning with tensor networks. 2018. arXiv:1803.11537.|*Raphael Blankson*|30-Apr|
|[ ] Tensor Networks|Edwin Stoudenmire and David J Schwab. Supervised learning with tensor networks. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, 4799–4807. Curran Associates, Inc., 2016. URL: http://papers.nips.cc/paper/6211-supervised-learning-with-tensor-networks.pdf.|*Richard Fambon*||
|[ ] PennyLane.ai|Overview of the framework|*Raphael Blankson* (as master thesis topic)|28-May|
|[ ] Jasper|Li, J., Lavrukhin, V., Ginsburg, B., Leary, R., Kuchaiev, O., Cohen, J. M., ... & Gadde, R. T. (2019). Jasper: An end-to-end convolutional neural acoustic model. arXiv preprint arXiv:1904.03288.|*Geoffroy de Felcourt*|19-Mar|
|[ ] Weight Uncertainty|Blundell, C., Cornebise, J., Kavukcuoglu, K., & Wierstra, D. (2015). Weight uncertainty in neural networks. arXiv preprint arXiv:1505.05424.|*Antoine Logeais*|9-Apr|
|[ ] |Wen, Y., Vicol, P., Ba, J., Tran, D., & Grosse, R. (2018). Flipout: Efficient pseudo-independent weight perturbations on mini-batches. arXiv preprint arXiv:1803.04386.|*Thibault Kollen*|2-Apr|
|[ ] | Lample, G., & Charton, F. (2019). Deep learning for symbolic mathematics. arXiv preprint arXiv:1912.01412. Facebook |*Mikhail Liz*|5-Mar|
|[ ] Layer-wise relevance propagation|Montavon, G., Binder, A., Lapuschkin, S., Samek, W., & Müller, K. R. (2019). Layer-wise relevance propagation: an overview. In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning (pp. 193-209). Springer, Cham.|*Rohan Rathore*|9-Apr|
|[ ] Deep Reinforcement Learning||*Kaivalya Pandey*|26-Mar|
|[ ] Complex Conv|Complex Convolution. IEEE|*Ravi Kumar*|30-Mar|
|[ ] | Understanding mixup training methods https://ieeexplore.ieee.org/document/8478159|*Jetina Tsvaki*|16-Apr|
|[ ] |Deep Learning based Approach to Reduced Order Modelling... https://arxiv.org/abs/1804.09269|*Alix Bernard*|26-Mar|
|[ ] Deep learning: A Generic approach for extreme condition traffic||*Watana Pongsapas*|12-Mar|
|[ ] |Uzkent B, Sheehan E, Meng C, Tang Z, Burke M, Lobell D, Ermon S. Learning to interpret satellite images in global scale using wikipedia. arXiv preprint arXiv:1905.02506. 2019 May 7.|*Owen Siyoto*|9-Apr|
|[ ] Implicit weight uncertainty in neural networks|Nick Pawlowski et al.|*Alexander Donets*|16-Apr|
|[ ] Brain Tumor|Brain Tumor Segmentation Using Deep Learning by Type Specific Sorting of Images|*Dinesh Reddy*|19-Mar|
|[ ] Blockchain for AI|Salah, K., Rehman, M. H. U., Nizamuddin, N., & Al-Fuqaha, A. (2019). Blockchain for AI: Review and open research challenges. IEEE Access, 7, 10127-10149.|*Abhishek Saxena*|9-Apr|
|\4=.*From previous semester (not reported yet)*|
|[ ] Manifold MixUp|Manifold Mixup: Better Representations by Interpolating Hidden States. URL: https://arxiv.org/pdf/1806.05236v4|||
|\4=.*Quantum*|
|[ ] DisCoCat model|Grefenstette E. Category-theoretic quantitative compositional distributional models of natural language semantics // arXiv preprint arXiv:1311.1539. – 2013. URL: https://arxiv.org/abs/1311.1539|||
|[ ] DisCoCat toy model|Gogioso S. A Corpus-based Toy Model for DisCoCat // arXiv preprint arXiv:1605.04013. – 2016. URL: https://arxiv.org/pdf/1605.04013.pdf|||
|[ ] A Quantum-Theoretic Approach to Distributional Semantics|Blacoe W., Kashefi E., Lapata M. A quantum-theoretic approach to distributional semantics // Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. – 2013. – pp. 847-857. URL: http://www.aclweb.org/anthology/N13-1105|||
|\4=.*Speech*|
|[ ] Tacotron 2|Shen J. et al. Natural TTS synthesis by conditioning WaveNet on mel spectrogram predictions // 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). – IEEE, 2018. – pp. 4779-4783. URL: https://arxiv.org/pdf/1712.05884.pdf|||
|\4=.*Natural Language Processing*|
|[ ] Skip-thoughts, InferSent, RandSent - Facebook|1. Kiros R. et al. Skip-thought vectors // Advances in Neural Information Processing Systems. – 2015. – pp. 3294-3302. URL: https://arxiv.org/pdf/1506.06726.pdf
2. Conneau A. et al. Supervised learning of universal sentence representations from natural language inference data //arXiv preprint arXiv:1705.02364. – 2017. URL: https://arxiv.org/abs/1705.02364
3. Wieting J., Kiela D. No Training Required: Exploring Random Encoders for Sentence Classification //arXiv preprint arXiv:1901.10444. – 2019. URL: https://arxiv.org/pdf/1901.10444.pdf|||
|[ ] BigARTM|Vorontsov K. et al. BigARTM: Open source library for regularized multimodal topic modeling of large collections // International Conference on Analysis of Images, Social Networks and Texts. – Springer, Cham, 2015. – pp. 370-381. URL: http://www.machinelearning.ru/wiki/images/e/ea/Voron15aist.pdf|||
|[ ]|Selsam, D., Lamm, M., Bünz, B., Liang, P., de Moura, L., & Dill, D. L. (2018). Learning a SAT solver from single-bit supervision. arXiv preprint arXiv:1802.03685.|||
|\4=.*Papers with code*|
|[ ] Deep-speare|Deep-speare: A Joint Neural Model of Poetic Language, Meter and Rhyme https://paperswithcode.com/paper/deep-speare-a-joint-neural-model-of-poetic|*Elizaveta Tagirova*|19-Dec|
h1. 4 Topics of master thesis
Open list of reports on the master thesis (statement of work, review, and results):
|_.Reporter|_.Topic|_.Scheduled|
|\3=.1st year students, review, method|
|to-be-listed | |-|
|1 Raphael Blankson |PennyLane.ai overview |28-May|
|2 Kirill Kalmutskiy |Coursework |30-Apr|
|\3=.2nd year students, results|
|1 Razizadeh Omid | ||
|2 Siyoto Owen | ||
|3 Munyaradzi Njera | ||
|4 Kozinets Roman | ||
|5 Tagirova Elizaveta | ||
|6 Tsvaki Jetina | ||
|7 Ravi Kumar | ||
h1. 5 At fault
These students haven't selected a paper to report on yet or aren't assigned to a time slot:
1st year students
* *noname*: refer, master
2nd year students
* *noname*: refer, master
h1. 6 Presence
The requirements of the seminar are:
* AS.BDA.RQ.1) deliver presentations: (i) on the topic of the master thesis, and (ii) a review of a recognized paper.
* AS.BDA.RQ.2) attend at least 50% of classes.
Here is a table of [[Presence]] compiled from the meeting minutes (see the minutes as issues in the first column of the [[Seminars_schedule]]).