Machine Learning & AI: A Probabilistic Perspective (MGMT 963, Queen's University)

The course outline for MGMT 963, Machine Learning and AI, offered at Queen's University, Smith School of Business, during the Winter 2020 semester. The course is designed to equip graduate students with a solid foundation in machine learning (ML) and artificial intelligence (AI) concepts, modelling methodologies, and techniques. It covers both theoretical and applied aspects of ML, with a focus on understanding the operation of widely used ML algorithms and developing ML models for business problems. Course requirements include guided readings, attempting suggested exercises, and a term paper that applies course material to a topic of the student's choice.

Queen's University, Smith School of Business
MGMT 963 Machine Learning and AI, Winter 2020
Reading Course
Mikhail Nediak mikhail.nediak@queensu.ca
Course Outline
Introduction
This course will equip graduate students in Analytics with a set of core concepts, modelling methodologies, and techniques
required for understanding widely-used deterministic machine learning (ML) and AI tools as a basis for future research
and applications. The course surveys both the theory underpinning model solutions and the application of each model to classes of business problems. That is, proper use of ML models requires understanding both how messy real-world problems can be formulated as ML problems and how various algorithmic approaches can be used to solve them.
The goals of the theoretical component of the course are:
  • to learn the key theoretical results for ML models, and
  • to understand the operation of widely used ML algorithms.
The goal of the applied component is to provide fundamental training in developing ML models for business problems.
We will assume little prior background, but the pace will be quite rapid. We begin with a review of key ML notions and probability concepts, then review Bayesian and frequentist statistics and linear and logistic regression, and finally consider how more complex models and solutions can be built from the ground up. The course includes practical illustrations for each ML technique.
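As a flavour of the early material on Bayesian versus frequentist statistics, the following is a minimal sketch (not course-provided code; the data and prior values are invented for illustration) contrasting a frequentist maximum-likelihood estimate with a Bayesian posterior mean for a coin's heads probability:

```python
# Illustrative only: estimating a coin's heads probability theta from flips.
heads, tails = 7, 3          # hypothetical data: 7 heads in 10 flips

# Frequentist: the maximum-likelihood estimate is the sample proportion.
theta_mle = heads / (heads + tails)

# Bayesian: with a Beta(a, b) prior, beta-Bernoulli conjugacy gives a
# Beta(a + heads, b + tails) posterior; its mean is computed below.
a, b = 2.0, 2.0              # a mildly informative prior centred at 0.5
theta_post_mean = (a + heads) / (a + b + heads + tails)

print(f"MLE: {theta_mle:.3f}")                   # 0.700
print(f"Posterior mean: {theta_post_mean:.3f}")  # 0.643
```

The posterior mean is pulled slightly toward the prior's centre of 0.5; with more data, the two estimates converge.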
Readings
Required: Murphy, Kevin P. Machine Learning: A Probabilistic Perspective. MIT Press, 2012.
Recommended: Russell, Stuart J., and Peter Norvig. Artificial Intelligence: A Modern Approach. Pearson Education Limited, 2016.
Sutton, Richard S., and Andrew G. Barto. Reinforcement Learning: An Introduction. MIT Press, 2018.
Goodfellow, Ian, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016.
Administration
Course requirements include guided readings, attempting suggested exercises, and discussing progress in weekly review meetings. The final mark is based entirely on a term paper (100% of the mark) that applies course material to a topic of the student's choice.
Tentative Schedule
All readings below are from the required text. The recommended reading pace is approximately 15 pages per day, with a faster pace in the first 4 weeks, assuming that some material may have to be revisited. Suggested exercises and specifically emphasized topics/sections will be provided in the weekly review meetings. Additional readings may be recommended individually based on each student's research interests.
Week  Readings                        Topic
1     Chapters 1-3                    Introduction to ML; probability concepts; generative models for discrete data
2     Chapters 4-5                    Gaussian models; Bayesian statistics
3     Chapters 6-7                    Frequentist statistics; linear regression
4     Chapters 8-9                    Logistic regression; generalized linear models
5     Chapter 10, Sections 11.1-11.4  Directed graphical models; mixture models; the EM algorithm
6     Sections 11.5-11.6, Chapter 12  More on mixture models; latent linear models
7     Chapter 13                      Sparse linear models
8     Chapters 14-15                  Kernels; Gaussian processes
9     Chapter 16, Sections 17.1-17.2  Adaptive basis function models; Markov models
10    Sections 17.3-17.6, Chapter 18  Hidden Markov and state-space models
11    Chapters 21, 23                 Variational inference; Monte Carlo inference
12    Chapters 24-25                  MCMC inference; clustering
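To illustrate the kind of technique covered around week 5, here is a minimal, illustrative EM sketch for a two-component one-dimensional Gaussian mixture (not course-provided code; the synthetic data and starting values are invented for the demo):

```python
import math
import random

random.seed(0)

# Synthetic 1-D data drawn from two Gaussians (parameters made up for the demo).
data = [random.gauss(-2.0, 1.0) for _ in range(200)] + \
       [random.gauss(3.0, 1.0) for _ in range(200)]

def normal_pdf(x, m, s):
    """Density of a Gaussian with mean m and standard deviation s."""
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

# Initial guesses for the mixture weights, means, and standard deviations.
pi = [0.5, 0.5]
mu = [-1.0, 1.0]
sigma = [1.0, 1.0]

for _ in range(50):
    # E-step: each component's responsibility for each data point.
    resp = []
    for x in data:
        p = [pi[k] * normal_pdf(x, mu[k], sigma[k]) for k in range(2)]
        s = sum(p)
        resp.append([pk / s for pk in p])
    # M-step: re-estimate parameters from the current responsibilities.
    for k in range(2):
        nk = sum(r[k] for r in resp)
        pi[k] = nk / len(data)
        mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
        sigma[k] = math.sqrt(
            sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk)

print(mu)  # roughly [-2, 3]; component order depends on initialisation
```

Each iteration softly assigns points to components (E-step) and then re-fits each component to its weighted points (M-step), monotonically increasing the data likelihood.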
