
dc.contributor.advisor: Bengio, Yoshua
dc.contributor.author: Lee, Dong-Hyun
dc.date.accessioned: 2019-01-11T20:05:29Z
dc.date.available: NO_RESTRICTION
dc.date.available: 2019-01-11T20:05:29Z
dc.date.issued: 2018-10-18
dc.date.submitted: 2018-07
dc.identifier.uri: http://hdl.handle.net/1866/21284
dc.subject: Neural networks
dc.subject: Machine learning
dc.subject: Deep learning
dc.subject: Representation learning
dc.subject: Optimization
dc.subject: Biological plausibility
dc.subject: Learning rule
dc.subject: Backpropagation
dc.subject: Target propagation
dc.subject: Réseaux de neurones
dc.subject: Apprentissage automatique
dc.subject: Optimisation
dc.subject: Règle d'apprentissage
dc.subject: Règle d'apprentissage biologiquement plausible
dc.subject: Rétropropagation
dc.subject.other: Applied Sciences - Artificial Intelligence / Sciences appliquées et technologie - Intelligence artificielle (UMI : 0800)
dc.title: Difference target propagation
dc.type: Thèse ou mémoire / Thesis or Dissertation
etd.degree.discipline: Informatique
etd.degree.grantor: Université de Montréal
etd.degree.level: Maîtrise / Master's
etd.degree.name: M. Sc.
dcterms.abstract: Backpropagation has been the workhorse of recent successes of deep learning, but it relies on infinitesimal effects (partial derivatives) to perform credit assignment. This could become a serious issue for deeper and more non-linear functions; consider the extreme case of non-linearity, where the relation between parameters and cost is actually discrete. Motivated by the biological implausibility of Backpropagation, this thesis proposes a novel approach, Target Propagation. The main idea is to compute targets rather than gradients at each layer, with the feedforward and feedback networks forming Auto-Encoders. We show that a linear correction for the imperfection of the Auto-Encoders, called Difference Target Propagation, is very effective in making Target Propagation actually work, leading to results comparable to Backpropagation for deep networks with discrete and continuous units and for Denoising Auto-Encoders, and achieving state of the art for stochastic networks. In Chapter 1, we introduce several classical learning rules for Deep Neural Networks, including Backpropagation and more biologically plausible learning rules. In Chapters 2 and 3, we introduce Target Propagation, a learning rule more biologically plausible than Backpropagation, and show that it is comparable to Backpropagation in Deep Neural Networks.
dcterms.abstract: L'algorithme de rétropropagation a été le cheval de bataille du succès récent de l'apprentissage profond, mais il s'appuie sur des effets infinitésimaux (dérivées partielles) afin d'effectuer l'attribution de crédit. Cela pourrait devenir un problème sérieux si l'on considère des fonctions plus profondes et plus non linéaires, avec à l'extrême la non-linéarité où la relation entre les paramètres et le coût est réellement discrète. Inspirée par la présumée invraisemblance biologique de la rétropropagation, cette thèse propose une nouvelle approche, Target Propagation. L'idée principale est de calculer des cibles plutôt que des gradients à chaque couche, en faisant en sorte que chaque paire de couches successives forme un auto-encodeur. Nous montrons qu'une correction linéaire, appelée Difference Target Propagation, est très efficace, conduisant à des résultats comparables à la rétropropagation pour les réseaux profonds avec des unités discrètes et continues et des auto-encodeurs, et atteignant l'état de l'art pour les réseaux stochastiques.
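The abstract's central idea, computing targets rather than gradients and correcting for an imperfect feedback inverse, can be sketched in a few lines. This is a minimal illustrative sketch, not code from the thesis: the two-layer setup, the mappings `f`/`g`, the weights `W`/`V`, and the choice of target step are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(h, W):
    # Hypothetical feedforward mapping from layer i-1 to layer i.
    return np.tanh(h @ W)

def g(h, V):
    # Hypothetical learned feedback mapping, intended as an
    # approximate inverse of f (together they form an auto-encoder).
    return np.tanh(h @ V)

W = rng.normal(scale=0.1, size=(4, 3))   # forward weights (assumed shapes)
V = rng.normal(scale=0.1, size=(3, 4))   # feedback weights

h0 = rng.normal(size=(1, 4))             # activity of layer i-1
h1 = f(h0, W)                            # activity of layer i

# Suppose the upper layer was given a target h1_target, e.g. by a
# small gradient step on the output loss (illustrative perturbation here).
h1_target = h1 - 0.1 * rng.normal(size=h1.shape)

# Plain Target Propagation would set h0_target = g(h1_target, V),
# which is exact only if g perfectly inverts f. Difference Target
# Propagation adds a linear correction for the auto-encoder's imperfection:
h0_target = h0 + g(h1_target, V) - g(h1, V)

# Each layer can then be trained on the local loss
# ||f(h0_target, W) - h1_target||^2 instead of a backpropagated gradient.
local_loss = np.sum((f(h0_target, W) - h1_target) ** 2)
```

Note the key property of the correction: if the upper target coincides with the current activity (`h1_target == h1`), the two feedback terms cancel and the lower layer's target is its own activity, so no spurious update is induced by an imperfect inverse.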
dcterms.language: eng

