Factorized second order methods in neural networks
dc.contributor.advisor | Vincent, Pascal | |
dc.contributor.author | George, Thomas | |
dc.date.accessioned | 2018-05-31T13:22:43Z | |
dc.date.available | NO_RESTRICTION | fr |
dc.date.available | 2018-05-31T13:22:43Z | |
dc.date.issued | 2018-03-21 | |
dc.date.submitted | 2017-08 | |
dc.identifier.uri | http://hdl.handle.net/1866/20190 | |
dc.subject | Apprentissage automatique | fr |
dc.subject | Apprentissage profond | fr |
dc.subject | Optimisation | fr |
dc.subject | Second ordre | fr |
dc.subject | Gradient naturel | fr |
dc.subject | Machine learning | fr |
dc.subject | Deep learning | fr |
dc.subject | Optimization | fr |
dc.subject | Second order | fr |
dc.subject | Natural gradient | fr |
dc.subject.other | Applied Sciences - Artificial Intelligence / Sciences appliquées et technologie - Intelligence artificielle (UMI : 0800) | fr
dc.title | Factorized second order methods in neural networks | fr |
dc.type | Thèse ou mémoire / Thesis or Dissertation | |
etd.degree.discipline | Informatique | fr |
etd.degree.grantor | Université de Montréal | fr |
etd.degree.level | Maîtrise / Master's | fr |
etd.degree.name | M. Sc. | fr |
dcterms.abstract | First-order optimization methods (gradient descent) have achieved impressive successes in training artificial neural networks. Second-order methods can, in theory, accelerate the optimization of a function, but in the case of neural networks the number of variables is far too large. In this master's thesis, I present the second-order methods usually applied in optimization, as well as approximate methods that make it possible to apply them to deep neural networks. I introduce a new algorithm based on an approximation of second-order methods, and I validate empirically that it is of practical interest. I also introduce a modification of the backpropagation algorithm, which is used to efficiently compute the gradients needed by these optimization methods. | fr
dcterms.abstract | First-order optimization methods (gradient descent) have enabled impressive successes in training artificial neural networks. Second-order methods can in theory accelerate the optimization of a function, but for neural networks the number of variables is far too large. In this master's thesis, I present the standard second-order methods, as well as approximate methods that make them applicable to deep neural networks. I introduce a new algorithm based on an approximation of second-order methods, and I show experimentally that it is of practical interest. I also introduce a modification of the backpropagation algorithm, which is used to efficiently compute the gradients required by these optimization methods. | fr
dcterms.language | eng | fr |
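As context for the abstract above, here is a minimal sketch, in standard notation rather than the thesis's own, of the generic second-order (Newton) update and of the natural-gradient update that replaces the Hessian with the Fisher information matrix; the symbols theta, eta, L, H and F are notation introduced here, not taken from the thesis, and the factorized algorithm the thesis proposes is not reproduced.

% Generic Newton step on a loss L(theta), with learning rate eta and Hessian H_t:
\[
  \theta_{t+1} \;=\; \theta_t \;-\; \eta\, H_t^{-1}\, \nabla_\theta L(\theta_t),
  \qquad H_t = \nabla_\theta^{2} L(\theta_t)
\]
% Natural-gradient step: the Hessian is replaced by the Fisher information matrix F_t
% of the model distribution p_theta(y | x):
\[
  \theta_{t+1} \;=\; \theta_t \;-\; \eta\, F_t^{-1}\, \nabla_\theta L(\theta_t),
  \qquad F_t = \mathbb{E}_{y \sim p_{\theta}(\cdot \mid x)}\!\left[ \nabla_\theta \log p_\theta(y \mid x)\, \nabla_\theta \log p_\theta(y \mid x)^{\top} \right]
\]

For a network with n parameters, storing H_t or F_t requires O(n^2) memory and inverting it O(n^3) time, which is why exact second-order and natural-gradient updates are impractical for deep networks and factorized approximations of these matrices are used instead.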