An Alternative Deep Learning Architecture: Deep Random Forests


Ph.D. Comprehensive Exam by:

Ryan Scott

Date:   Tuesday, December 12, 2017
Time: 2:00 pm – 3:30 pm
Location: Lambton Tower, 3105

Deep learning architectures are generally based on neural networks that use backpropagation to tune model parameters. Though powerful, these models are complex and typically require enormous amounts of data to train. As a result, they are highly sensitive to design choices, hyperparameter selection, and even initialization, and training them usually demands vast computational resources. The key benefit of deep learning is representation learning, so any alternative must likewise be able to build abstract, information-rich representations of the data. Recent work in this domain has pursued deep learning via the stacking of random forests, using the outputs of the forests in each layer (so far, the class distributions at the leaves of the trees in each forest) to transform the data. This alternative provides deep learning capacity with far fewer design choices and hyperparameters, is robust to those that remain, trains much faster, and requires far less training data.
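The stacking idea described above can be sketched in a few lines of Python with scikit-learn. This is a minimal illustration of a cascade of random forests, not the candidate's actual method: the class name `CascadeForest` and all parameter values are hypothetical, and out-of-fold probabilities stand in for the leaf class distributions used as the learned representation at each layer.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_predict

class CascadeForest:
    """Hypothetical minimal cascade: each layer holds two forests whose
    class-probability outputs are appended to the raw features before
    being passed to the next layer."""

    def __init__(self, n_layers=3, n_estimators=50, random_state=0):
        self.n_layers = n_layers
        self.n_estimators = n_estimators
        self.random_state = random_state
        self.layers = []

    def fit(self, X, y):
        feats = X
        for _ in range(self.n_layers):
            forests = [
                RandomForestClassifier(n_estimators=self.n_estimators,
                                       random_state=self.random_state),
                ExtraTreesClassifier(n_estimators=self.n_estimators,
                                     random_state=self.random_state),
            ]
            # Out-of-fold predictions keep the representation passed to
            # the next layer from leaking the training labels.
            probs = [cross_val_predict(f, feats, y, cv=3,
                                       method="predict_proba")
                     for f in forests]
            for f in forests:
                f.fit(feats, y)
            self.layers.append(forests)
            feats = np.hstack([X] + probs)  # augment raw features
        return self

    def predict(self, X):
        feats = X
        for forests in self.layers:
            probs = [f.predict_proba(feats) for f in forests]
            feats = np.hstack([X] + probs)
        # Final prediction: average the last layer's class distributions.
        return np.mean(probs, axis=0).argmax(axis=1)

# Toy usage on synthetic data.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
model = CascadeForest().fit(X, y)
acc = (model.predict(X) == y).mean()
```

Note that the only tunable quantities here are the number of layers and the number of trees per forest, which is one concrete sense in which the approach involves far fewer hyperparameters than a typical neural architecture.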

Thesis Committee:     

Internal Readers: Dr. Alioune Ngom and Dr. Boubakeur Boufama        
External Reader: Dr. Daniel Heath
Advisors: Dr. Robin Gras and Dr. Hugh MacIsaac
