Nanodegree key: nd898
Version: 3.0.0
Locale: en-us
Become an expert in the core concepts of artificial intelligence and learn how to apply them to real-life problems.
Content
Part 01 : Introduction to Artificial Intelligence
Meet the instructional team, including Sebastian Thrun, Peter Norvig, and Thad Starner, who will teach you the foundations of AI. Get acquainted with the resources available in your classroom and other important information about the program. Complete the lesson by building a Sudoku solver.
-
Module 01: Introduction to the Nanodegree
-
Lesson 01: Welcome to Artificial Intelligence
Welcome to the Artificial Intelligence Nanodegree program!
- Concept 01: Welcome to the Artificial Intelligence Nanodegree Program
- Concept 02: Meet Your Instructors
- Concept 03: Projects You Will Build
- Concept 04: Udacity Support
- Concept 05: Community Guidelines
- Concept 06: Weekly Lesson Plans
- Concept 07: References & Resources
- Concept 08: Get Started
- Concept 09: Lesson Plan - Week 1
-
Lesson 02: Knowledge, Community, and Careers
You are starting a challenging but rewarding journey! Take 5 minutes to read how to get help with projects and content.
-
Lesson 03: Get Help with Your Account
What to do if you have questions about your account or general questions about the program.
-
Lesson 04: Intro to Artificial Intelligence
An introduction to basic AI concepts and the challenge of answering "what is AI?"
- Concept 01: Welcome to AI!
- Concept 02: Navigation
- Concept 03: Game Playing
- Concept 04: Quiz: Tic Tac Toe
- Concept 05: Tic Tac Toe: Heuristics
- Concept 06: Quiz: Monty Hall Problem
- Concept 07: Monty Hall Problem: Explained
- Concept 08: Quiz: What is Intelligence?
- Concept 09: Defining Intelligence
- Concept 10: Agent, Environment And State
- Concept 11: Perception, Action and Cognition
- Concept 12: Quiz: Types of AI Problems
- Concept 13: Rational Behavior And Bounded Optimality
-
Lesson 05: Solving Sudoku With AI
In this lesson, you'll dive right in and apply Artificial Intelligence to solve every Sudoku puzzle; a sketch of the core elimination strategy follows the concept list.
- Concept 01: Intro
- Concept 02: Solving a Sudoku
- Concept 03: Setting up the Board
- Concept 04: Encoding the Board
- Concept 05: Strategy 1: Elimination
- Concept 06: Strategy 2: Only Choice
- Concept 07: Constraint Propagation
- Concept 08: Harder Sudoku
- Concept 09: Strategy 3: Search
- Concept 10: Coding the Solution
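The constraint propagation idea at the heart of this lesson fits in a few lines. Below is a minimal sketch of the "elimination" strategy, assuming a simplified board representation (a dict mapping cell names like 'A1' to strings of candidate digits); the helper names are illustrative, not the project's starter code.

```python
# A minimal sketch of the "elimination" strategy on a simplified board
# representation. Names like `boxes` and `unit_peers` are illustrative.

rows, cols = 'ABCDEFGHI', '123456789'
boxes = [r + c for r in rows for c in cols]

def unit_peers(cell):
    """All cells sharing a row, column, or 3x3 square with `cell`."""
    r, c = cell[0], cell[1]
    row = {r + cc for cc in cols}
    col = {rr + c for rr in rows}
    rs = 'ABC' if r in 'ABC' else 'DEF' if r in 'DEF' else 'GHI'
    cs = '123' if c in '123' else '456' if c in '456' else '789'
    square = {rr + cc for rr in rs for cc in cs}
    return (row | col | square) - {cell}

def eliminate(values):
    """Remove each solved cell's digit from the candidates of all its peers."""
    for cell in boxes:
        if len(values[cell]) == 1:          # cell already solved
            digit = values[cell]
            for peer in unit_peers(cell):
                values[peer] = values[peer].replace(digit, '')
    return values
```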
-
Lesson 06: Workspaces
Review the basic functionality of Workspaces—pre-configured development environments in the Udacity classroom for projects and exercises.
-
Lesson 07: Setting Up Your Environment with Anaconda
If you do not want to use Workspaces, then follow these instructions to set up your own system using Anaconda, a popular tool to manage your environments and packages in Python.
-
Lesson 08: Build a Sudoku Solver
Use constraint propagation and search to build an agent that reasons like a human would to efficiently solve any Sudoku puzzle.
-
-
Module 02: Career Services
-
Lesson 01: Jobs in AI
Learn about common jobs in artificial intelligence, and get tips on how to stay active in the community.
-
Lesson 02: Optimize Your GitHub Profile
Other professionals are collaborating on GitHub and growing their network. Submit your profile to ensure it is on par with leaders in your field.
- Concept 01: Prove Your Skills With GitHub
- Concept 02: Introduction
- Concept 03: GitHub profile important items
- Concept 04: Good GitHub repository
- Concept 05: Interview with Art - Part 1
- Concept 06: Identify fixes for example “bad” profile
- Concept 07: Quick Fixes #1
- Concept 08: Quick Fixes #2
- Concept 09: Writing READMEs with Walter
- Concept 10: Interview with Art - Part 2
- Concept 11: Commit messages best practices
- Concept 12: Reflect on your commit messages
- Concept 13: Participating in open source projects
- Concept 14: Interview with Art - Part 3
- Concept 15: Participating in open source projects 2
- Concept 16: Starring interesting repositories
- Concept 17: Next Steps
-
Part 02 : Constraint Satisfaction Problems
Take a deep dive into the constraint satisfaction problem framework and further explore constraint propagation, backtracking search, and other CSP techniques. Complete a classroom exercise using a powerful CSP solver on a variety of problems to gain experience framing new problems as CSPs.
-
Module 01: Constraint Satisfaction Problems
-
Lesson 01: Constraint Satisfaction Problems
Expand from the constraint propagation technique used in the Sudoku project to the Constraint Satisfaction Problem framework that can be used to solve a wide range of general problems; a backtracking sketch follows the concept list.
- Concept 01: Lesson Plan - Week 2
- Concept 02: Introduction
- Concept 03: CSP Examples
- Concept 04: Map Coloring
- Concept 05: Constraint Graph
- Concept 06: Map Coloring Quiz
- Concept 07: Constraint Types
- Concept 08: Backtracking Search
- Concept 09: Why Backtracking?
- Concept 10: Improving Backtracking Efficiency
- Concept 11: Backtracking Optimization Quiz
- Concept 12: Forward Checking
- Concept 13: Constraint Propagation and Arc Consistency
- Concept 14: Constraint Propagation Quiz
- Concept 15: Structured CSPs
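As a minimal sketch of backtracking search, here is the lesson's Australia map-coloring example in plain Python; the function names are illustrative. Forward checking and the ordering heuristics covered above plug into this same recursive loop.

```python
# Backtracking search for the Australia map-coloring CSP: assign colors one
# region at a time, undoing an assignment whenever it leads to a dead end.

neighbors = {
    'WA': ['NT', 'SA'], 'NT': ['WA', 'SA', 'Q'],
    'SA': ['WA', 'NT', 'Q', 'NSW', 'V'], 'Q': ['NT', 'SA', 'NSW'],
    'NSW': ['Q', 'SA', 'V'], 'V': ['SA', 'NSW'], 'T': [],
}
colors = ['red', 'green', 'blue']

def consistent(region, color, assignment):
    """A color is allowed if no already-assigned neighbor uses it."""
    return all(assignment.get(n) != color for n in neighbors[region])

def backtrack(assignment):
    if len(assignment) == len(neighbors):
        return assignment                       # every region colored
    region = next(r for r in neighbors if r not in assignment)
    for color in colors:
        if consistent(region, color, assignment):
            assignment[region] = color
            result = backtrack(assignment)
            if result is not None:
                return result
            del assignment[region]              # undo, try the next color
    return None                                 # triggers backtracking upstream

print(backtrack({}))
```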
-
Lesson 02: CSP Coding Exercise
Practice formulating some classical example problems as CSPs, then explore using Z3, a powerful open-source constraint satisfaction tool from Microsoft Research, to solve them.
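As a taste of the exercise, below is one possible Z3 encoding of the same map-coloring problem (pip install z3-solver). The encoding choices, such as using integers 0-2 for the three colors, are our own illustration, not the exercise's required formulation.

```python
# Map coloring in Z3's Python API: declare an integer variable per region,
# constrain it to a color range, and forbid equal colors across borders.
from z3 import Int, Solver, And, sat

regions = ['WA', 'NT', 'SA', 'Q', 'NSW', 'V', 'T']
adjacent = [('WA', 'NT'), ('WA', 'SA'), ('NT', 'SA'), ('NT', 'Q'),
            ('SA', 'Q'), ('SA', 'NSW'), ('SA', 'V'), ('Q', 'NSW'), ('NSW', 'V')]

color = {r: Int(r) for r in regions}            # 0, 1, 2 stand for three colors
s = Solver()
s.add([And(color[r] >= 0, color[r] <= 2) for r in regions])
s.add([color[a] != color[b] for a, b in adjacent])

if s.check() == sat:
    m = s.model()
    print({r: m[color[r]].as_long() for r in regions})
```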
-
-
Module 02: Additional Constraint Problem Topics
-
Lesson 01: Additional Readings
Reading list of applications and additional topics related to CSPs.
-
Part 03 : Classical Search
Learn classical graph search algorithms, including uninformed search techniques like breadth-first and depth-first search, and informed search with heuristics including A*. These algorithms are at the heart of many classical AI techniques, and have been used for planning, optimization, problem solving, and more. Complete the lesson by teaching PacMan to search with these techniques to solve increasingly complex domains.
-
Module 01: Introduction
-
Lesson 01: Introduction
Peter Norvig, co-author of Artificial Intelligence: A Modern Approach, explains a framework for search problems, and introduces uninformed & informed search strategies to solve them.
-
-
Module 02: Uninformed Search
-
Lesson 01: Uninformed Search
Peter introduces uninformed search strategies, which can only solve problems by generating successor states and distinguishing between goal and non-goal states; a breadth-first search sketch follows the concept list.
- Concept 01: Intro to Uninformed Search
- Concept 02: Example: Route Finding
- Concept 03: Quiz: Tree Search
- Concept 04: Tree Search Continued
- Concept 05: Quiz: Graph Search
- Concept 06: Quiz: Breadth First Search 1
- Concept 07: Breadth First Search 2
- Concept 08: Quiz: Breadth First Search 3
- Concept 09: Breadth First Search 4
- Concept 10: Breadth First Search 5
- Concept 11: Uniform Cost Search
- Concept 12: Uniform Cost Search 1
- Concept 13: Uniform Cost Search 2
- Concept 14: Uniform Cost Search 3
- Concept 15: Uniform Cost Search 4
- Concept 16: Uniform Cost Search 5
- Concept 17: Quiz: Search Comparison
- Concept 18: Search Comparison 1
- Concept 19: Quiz: Search Comparison 2
- Concept 20: Search Comparison 3
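A compact breadth-first search sketch following the graph-search pattern from the videos (a frontier queue plus an explored set); the toy graph and goal test below are stand-ins for the route-finding example, not course code.

```python
# Breadth-first graph search: expand nodes in FIFO order, never revisiting
# a state, so the first path that reaches the goal has the fewest edges.
from collections import deque

def breadth_first_search(graph, start, goal):
    """Return a shortest path (by number of edges) from start to goal."""
    frontier = deque([[start]])
    explored = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph[node]:
            if neighbor not in explored:
                explored.add(neighbor)
                frontier.append(path + [neighbor])
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': ['E'], 'E': []}
print(breadth_first_search(graph, 'A', 'E'))    # ['A', 'B', 'D', 'E']
```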
-
-
Module 03: Informed Search
-
Lesson 01: Informed Search
Peter introduces informed search strategies, which use problem-specific knowledge to find solutions more efficiently than uninformed search; an A* sketch follows the concept list.
- Concept 01: Intro to Informed Search
- Concept 02: On Uniform Cost
- Concept 03: A* Search
- Concept 04: A* Search 1
- Concept 05: A* Search 2
- Concept 06: A* Search 3
- Concept 07: A* Search 4
- Concept 08: A* Search 5
- Concept 09: Optimistic Heuristic
- Concept 10: Quiz: State Spaces
- Concept 11: State Spaces 1
- Concept 12: Quiz: State Spaces 2
- Concept 13: State Spaces 3
- Concept 14: Quiz: Sliding Blocks Puzzle
- Concept 15: Sliding Blocks Puzzle 1
- Concept 16: Sliding Blocks Puzzle 2
- Concept 17: A Note on Implementation
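A minimal A* sketch that orders the frontier by f = g + h, as in the videos. The grid world and Manhattan-distance heuristic are illustrative assumptions, not the PacMan exercise's interface.

```python
# A* search on a 4-connected grid: a priority queue keyed on
# f = g (path cost so far) + h (admissible heuristic estimate to the goal).
import heapq

def astar(start, goal, passable):
    """`passable` says whether a cell can be entered."""
    def h(cell):                                 # Manhattan distance: admissible here
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]   # (f, g, cell, path)
    best_g = {start: 0}
    while frontier:
        f, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dx, cell[1] + dy)
            if passable(nxt) and g + 1 < best_g.get(nxt, float('inf')):
                best_g[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None

print(astar((0, 0), (2, 2), lambda c: 0 <= c[0] < 3 and 0 <= c[1] < 3))
```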
-
-
Module 04: Classroom Exercise: Search
-
Lesson 01: Classroom Exercise: Search
Complete a practice exercise where you'll implement informed and uninformed search strategies for the game PacMan.
-
-
Module 05: Additional Search Topics
-
Lesson 01: Additional Search Topics
References to additional readings on search.
-
Part 04 : Automated Planning
Learn to represent general problem domains with symbolic logic and use search to find optimal plans for achieving your agent’s goals. Planning & scheduling systems power modern automation & logistics operations, and aerospace applications like the Hubble telescope & NASA Mars rovers.
-
Module 01: Symbolic Logic & Reasoning
-
Lesson 01: Symbolic Logic & Reasoning
Peter Norvig returns to explain propositional logic and first-order logic, which provide a symbolic logic framework that enables AI agents to reason about their actions; a small truth-table sketch follows the concept list.
- Concept 01: Lesson Plan - Week 4
- Concept 02: Introduction
- Concept 03: Background and Expert Systems
- Concept 04: Propositional Logic
- Concept 05: Truth Tables
- Concept 06: Truth Table Question
- Concept 07: Propositional Logic Question
- Concept 08: Terminology
- Concept 09: Propositional Logic Limitations
- Concept 10: First Order Logic
- Concept 11: Models
- Concept 12: Syntax
- Concept 13: Vacuum World
- Concept 14: FOL Question
- Concept 15: FOL Question 2
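Propositional reasoning can be mechanized by enumerating models, as in the truth-table concepts above. A tiny sketch, assuming plain Python booleans stand in for the logic symbols: it checks a sentence for validity by testing it in every truth assignment.

```python
# Truth-table validity check: a sentence is valid (a tautology) if it is
# true in every model, i.e. under every assignment of truth values.
from itertools import product

def valid(sentence, symbols):
    """True if `sentence` holds in every truth assignment of `symbols`."""
    return all(sentence(*model)
               for model in product([True, False], repeat=len(symbols)))

# (P and Q) -> P, written with the equivalence (A -> B) == (not A or B)
print(valid(lambda P, Q: (not (P and Q)) or P, ['P', 'Q']))  # True: valid
print(valid(lambda P, Q: P or Q, ['P', 'Q']))  # False: fails when both are False
```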
-
-
Module 02: Automated Planning
-
Lesson 01: Introduction to Planning
Peter Norvig defines automated planning problems in comparison to more general problem solving techniques to set the stage for classical planning algorithms in the next lesson.
- Concept 01: Problem Solving vs Planning
- Concept 02: Planning vs Execution
- Concept 03: Vacuum Cleaner Example
- Concept 04: Quiz: Sensorless Vacuum Cleaner Problem
- Concept 05: Partially Observable Vacuum Cleaner Example
- Concept 06: Quiz: Stochastic Environment Problem
- Concept 07: Infinite Sequences
- Concept 08: Finding a Successful Plan
- Concept 09: Quiz: Finding a Successful Plan Question
- Concept 10: Problem Solving via Mathematical Notation
- Concept 11: Tracking the Predict-Update Cycle
-
Lesson 02: Classical Planning
Peter presents a survey of Classical Planning techniques: forward planning (progression search) & backward planning (regression search).
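A sketch of forward (progression) planning under simple STRIPS-style assumptions: states are sets of fluents, actions have preconditions and add/delete effects, and plain breadth-first search explores the state space. The "cake" domain below is the classic textbook example, not this program's project domain.

```python
# Progression planning: apply every action whose preconditions hold in the
# current state, and search forward until the goal fluents are all satisfied.
from collections import deque, namedtuple

Action = namedtuple('Action', 'name pre add delete')

actions = [
    Action('Eat(Cake)',  {'Have(Cake)'}, {'Eaten(Cake)'}, {'Have(Cake)'}),
    Action('Bake(Cake)', set(),          {'Have(Cake)'},  set()),
]

def plan(start, goal):
    """Breadth-first search through the space of states."""
    frontier = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps
        for a in actions:
            if a.pre <= state:                       # action is applicable
                nxt = frozenset((state - a.delete) | a.add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [a.name]))
    return None

print(plan({'Have(Cake)'}, {'Have(Cake)', 'Eaten(Cake)'}))
# ['Eat(Cake)', 'Bake(Cake)']: have your cake and eat it too
```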
-
-
Module 03: Build a Forward Planning Agent
-
Lesson 01: Build a Forward-Planning Agent
In this project you’ll experiment with search and symbolic logic to build an agent that automatically develops and executes plans to achieve its goals.
-
-
Module 04: Additional Planning Topics
-
Lesson 01: Additional Planning Topics
Peter discusses plan space search & situation calculus. Finish the lesson with readings on advanced planning topics & modern applications of automated planning.
-
Part 05 : Optimization Problems
Learn about iterative improvement optimization problems and classical algorithms for solving them, emphasizing gradient-free methods. These techniques can often be used on intractable problems to find solutions that are "good enough" for practical purposes, and have been used extensively in fields like Operations Research & logistics. Finish the lesson by completing a classroom exercise comparing the different algorithms' performance on a variety of problems.
-
Module 01: Optimization Problems
-
Lesson 01: Introduction
Thad Starner introduces the concept of iterative improvement problems, a class of optimization problems that can be solved with global optimization or local search techniques covered in this lesson.
-
-
Module 02: Local Search
-
Lesson 01: Hill Climbing
Thad introduces Hill Climbing, a very simple local search optimization technique that works well on many iterative improvement problems.
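A minimal hill-climbing sketch: repeatedly move to the best neighbor and stop at a local optimum. The discrete 1-D landscape below is an illustrative assumption, chosen so the local-optimum trap is visible.

```python
# Steepest-ascent hill climbing: greedy local search that halts as soon as
# no neighbor scores better than the current position.

def hill_climb(score, neighbors, x):
    while True:
        best = max(neighbors(x), key=score)
        if score(best) <= score(x):
            return x                        # no neighbor improves: local optimum
        x = best

landscape = [1, 3, 7, 12, 9, 5, 8, 2]       # scores indexed by position
result = hill_climb(score=lambda i: landscape[i],
                    neighbors=lambda i: [j for j in (i - 1, i + 1)
                                         if 0 <= j < len(landscape)],
                    x=6)
print(result)   # stuck at index 6 (score 8); the global peak is 12 at index 3
```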
-
Lesson 02: Simulated Annealing
Thad explains Simulated Annealing, a classical technique for global optimization.
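A bare-bones simulated annealing sketch: accept worse neighbors with probability exp(-delta/T) while the temperature decays. The 1-D objective and cooling schedule are illustrative assumptions, not the classroom exercise's TSP setup.

```python
# Simulated annealing: early on (high T), worse moves are often accepted,
# which lets the search escape local minima; as T cools it behaves greedily.
import math
import random

def anneal(cost, neighbor, x, t0=10.0, alpha=0.995, steps=5000):
    t = t0
    for _ in range(steps):
        x_new = neighbor(x)
        delta = cost(x_new) - cost(x)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = x_new                       # accept improvements, and
        t *= alpha                          # occasionally accept worse moves
    return x

# Minimize a bumpy 1-D function; the global minimum is near x = -0.3.
best = anneal(cost=lambda x: x * x + 3 * math.sin(5 * x),
              neighbor=lambda x: x + random.uniform(-0.5, 0.5),
              x=8.0)
print(best)
```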
-
Lesson 03: Genetic Algorithms
Thad introduces another optimization technique: Genetic Algorithms, which use a population of samples to make iterative improvements towards the goal.
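A toy genetic-algorithm sketch on the "max-ones" problem: keep the fitter half of a bit-string population, then refill it with crossover and mutation. All parameters and the fitness function are illustrative.

```python
# Genetic algorithm loop: selection (keep the fitter half), single-point
# crossover, and occasional point mutation, repeated over generations.
import random

def evolve(pop_size=20, length=12, generations=60):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sum, reverse=True)            # fitness = count of ones
        survivors = pop[:pop_size // 2]            # selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, length)      # single-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:              # occasional point mutation
                i = random.randrange(length)
                child[i] = 1 - child[i]
            children.append(child)
        pop = survivors + children
    return max(pop, key=sum)

print(evolve())   # usually a string of all (or nearly all) ones
```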
-
-
Module 03: Optimization Exercise
-
Lesson 01: Optimization Exercise
Complete a classroom exercise implementing simulated annealing to solve the traveling salesman problem.
-
-
Module 04: Additional Optimization Topics
-
Lesson 01: Additional Optimization Topics
Review the similarities among the techniques introduced in this lesson, with links to readings on advanced optimization topics, then complete an optimization exercise in the classroom.
-
Part 06 : Adversarial Search
Learn how to search in multi-agent environments (including decision making in competitive environments) using the minimax theorem from game theory. Then build an agent that can play games better than any human.
-
Module 01: Adversarial Search: Game Playing
-
Lesson 01: Search in Multiagent Domains
Thad returns to teach search in multi-agent domains, using the Minimax theorem to solve adversarial problems and build agents that make better decisions than humans; a minimax sketch on a toy game follows the concept list.
- Concept 01: Lesson Plan - Week 8
- Concept 02: Overview
- Concept 03: The Minimax Algorithm
- Concept 04: Isolation
- Concept 05: Building a Game Tree
- Concept 06: Coding: Building a Game Class
- Concept 07: Which of These Are Valid Moves?
- Concept 08: Coding: Game Class Functionality
- Concept 09: Building a Game Tree (Contd.)
- Concept 10: Isolation Game Tree with Leaf Values
- Concept 11: How Do We Tell the Computer Not to Lose?
- Concept 12: MIN and MAX Levels
- Concept 13: Coding: Scoring Min & Max Levels
- Concept 14: Propagating Values Up the Tree
- Concept 15: Computing MIN MAX Values
- Concept 16: Computing MIN MAX Solution
- Concept 17: Choosing the Best Branch
- Concept 18: Coding: Minimax Search
- Concept 19: Max Number of Nodes Visited
- Concept 20: Max Moves
- Concept 21: The Branching Factor
- Concept 22: Number of Nodes in a Game Tree
- Concept 23: The Branching Factor (Contd.)
- Concept 24: Max Number of Nodes
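The Isolation game tree in this lesson is large, so here is a minimax sketch on a tiny stand-in game where the whole tree fits in a few lines: players alternately take 1 or 2 stones, and whoever takes the last stone wins. The game and function names are our illustration, not the lesson's Game class.

```python
# Minimax on a take-1-or-2-stones game. Values are from MAX's point of
# view: +1 means MAX wins with optimal play, -1 means MAX loses.

def minimax(stones, maximizing):
    if stones == 0:                 # previous player took the last stone
        return -1 if maximizing else 1
    values = [minimax(stones - take, not maximizing)
              for take in (1, 2) if take <= stones]
    return max(values) if maximizing else min(values)

def best_move(stones):
    """MAX picks the action whose subtree has the highest minimax value."""
    return max((take for take in (1, 2) if take <= stones),
               key=lambda take: minimax(stones - take, False))

print(best_move(4))   # 1: taking one stone leaves 3, a losing position
```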
-
Lesson 02: Optimizing Minimax Search
Thad explains some of the limitations of minimax search and introduces optimizations & changes that make it practical in more complex domains; an alpha-beta pruning sketch follows the concept list.
- Concept 01: Lesson Plan - Week 9
- Concept 02: Minimax Quiz
- Concept 03: Depth-Limited Search
- Concept 04: Coding: Depth-Limited Search
- Concept 05: Evaluation Function Intro
- Concept 06: Testing the Evaluation Function
- Concept 07: Testing the Evaluation Function Part 2
- Concept 08: Testing Evaluation Functions
- Concept 09: Testing the Evaluation Function Part 3
- Concept 10: Coding: #my_moves Heuristic
- Concept 11: Quiescent Search
- Concept 12: A Problem
- Concept 13: Iterative Deepening
- Concept 14: Understanding Exponential Time
- Concept 15: Exponential b=3
- Concept 16: Varying the Branching Factor
- Concept 17: Coding: Iterative Deepening
- Concept 18: Horizon Effect
- Concept 19: Horizon Effect (Contd.)
- Concept 20: Good Evaluation Functions
- Concept 21: Evaluating Evaluation Functions
- Concept 22: Alpha-Beta Pruning
- Concept 23: Alpha-Beta Pruning Quiz 1
- Concept 24: Alpha-Beta Pruning Quiz 2
- Concept 25: Coding: Alpha-Beta Pruning
- Concept 26: Solving 5x5 Isolation
- Concept 27: Coding: Opening Book
- Concept 28: Thad’s Asides
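As a rough illustration of the pruning idea, the sketch below adds alpha-beta bounds to the same take-1-or-2-stones toy game used in the previous lesson's sketch. It returns the same value at the root as plain minimax while skipping branches that cannot affect the decision (pruned interior nodes may return bounds rather than exact values, which is standard).

```python
# Alpha-beta pruning: alpha is the best value MAX can already guarantee,
# beta the best MIN can; once alpha >= beta the remaining siblings are cut.

def alphabeta(stones, maximizing, alpha=float('-inf'), beta=float('inf')):
    if stones == 0:
        return -1 if maximizing else 1
    if maximizing:
        value = float('-inf')
        for take in (1, 2):
            if take > stones:
                break
            value = max(value, alphabeta(stones - take, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                      # MIN above will never allow this line
        return value
    value = float('inf')
    for take in (1, 2):
        if take > stones:
            break
        value = min(value, alphabeta(stones - take, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:
            break                          # MAX above already has something better
    return value

print(alphabeta(4, True))                  # +1: the position is a win for MAX
```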
-
-
Module 02: Build an Adversarial Search Agent
-
Lesson 01: Build an Adversarial Game Playing Agent
Extend classical search to adversarial domains to build agents that make good decisions without any human intervention, such as the DeepMind AlphaGo agent.
-
-
Module 03: Additional Topics in Adversarial Search
-
Lesson 01: Extending Minimax Search
Thad introduces extensions to minimax search to support more than two players and non-deterministic domains.
- Concept 01: Introduction
- Concept 02: 3-Player Games
- Concept 03: 3-Player Games Quiz
- Concept 04: 3-Player Alpha-Beta Pruning
- Concept 05: Multi-player Alpha-Beta Pruning Reading
- Concept 06: Probabilistic Games
- Concept 07: Sloppy Isolation
- Concept 08: Sloppy Isolation Expectimax
- Concept 09: Expectimax Alpha-Beta Pruning
- Concept 10: Probabilistic Alpha-Beta Pruning
-
Lesson 02: Additional Adversarial Search Topics
An introduction to Monte Carlo Tree Search, a highly successful search technique in game domains, along with a reading list for other advanced adversarial search topics.
-
-
Module 04: Career Services
-
Lesson 01: Take 30 Min to Improve your LinkedIn
Find your next job or connect with industry peers on LinkedIn. Ensure your profile attracts relevant leads that will grow your professional network.
- Concept 01: Get Opportunities with LinkedIn
- Concept 02: Use Your Story to Stand Out
- Concept 03: Why Use an Elevator Pitch
- Concept 04: Create Your Elevator Pitch
- Concept 05: Use Your Elevator Pitch on LinkedIn
- Concept 06: Create Your Profile With SEO In Mind
- Concept 07: Profile Essentials
- Concept 08: Work Experiences & Accomplishments
- Concept 09: Build and Strengthen Your Network
- Concept 10: Reaching Out on LinkedIn
- Concept 11: Boost Your Visibility
- Concept 12: Up Next
-
Part 07 : Probabilistic Models
Learn to use Bayes Nets to represent complex probability distributions, and algorithms for sampling from those distributions. Then learn the algorithms used to train, predict, and evaluate Hidden Markov Models for pattern recognition. HMMs have been used for gesture recognition in computer vision, gene sequence identification in bioinformatics, speech generation & part of speech tagging in natural language processing, and more.
-
Module 01: Probability Refresher
-
Lesson 01: Probability
Sebastian Thrun briefly reviews basic probability theory including discrete distributions, independence, joint probabilities, and conditional distributions to model uncertainty in the real world; a worked Bayes-rule example follows the concept list.
- Concept 01: Lesson Plan - Week 10
- Concept 02: Intro to Probability and Bayes Nets
- Concept 03: Quiz: Probability / Coin Flip
- Concept 04: Quiz: Coin Flip 2
- Concept 05: Quiz: Coin Flip 3
- Concept 06: Quiz: Coin Flip 4
- Concept 07: Quiz: Coin Flip 5
- Concept 08: Probability Summary
- Concept 09: Quiz: Dependence
- Concept 10: What We Learned
- Concept 11: Quiz: Weather
- Concept 12: Quiz: Weather 2
- Concept 13: Quiz: Weather 3
- Concept 14: Quiz: Cancer
- Concept 15: Quiz: Cancer 2
- Concept 16: Quiz: Cancer 3
- Concept 17: Quiz: Cancer 4
- Concept 18: Bayes Rule
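A worked Bayes-rule example in the spirit of the cancer quizzes above. The numbers here (1% prior, 90% sensitivity, 10% false-positive rate) are illustrative assumptions, not the quiz's exact values.

```python
# Bayes rule: P(cancer | positive) = P(positive | cancer) P(cancer) / P(positive),
# where P(positive) comes from the law of total probability.
p_c = 0.01            # P(cancer): the prior
p_pos_c = 0.9         # P(positive | cancer): the test's sensitivity
p_pos_not_c = 0.1     # P(positive | no cancer): the false-positive rate

p_pos = p_pos_c * p_c + p_pos_not_c * (1 - p_c)   # total probability = 0.108
p_c_pos = p_pos_c * p_c / p_pos
print(p_c_pos)        # ~0.083: even after a positive test, cancer stays unlikely
```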
-
-
Module 02: Naive Bayes
-
Lesson 01: Naive Bayes
In this section, you'll learn how to build a spam e-mail classifier using the naive Bayes algorithm; a small classifier sketch follows the concept list.
- Concept 01: Intro
- Concept 02: Guess the Person
- Concept 03: Known and Inferred
- Concept 04: Guess the Person Now
- Concept 05: Bayes Theorem
- Concept 06: Quiz: False Positives
- Concept 07: Solution: False Positives
- Concept 08: Bayesian Learning 1
- Concept 09: Bayesian Learning 2
- Concept 10: Bayesian Learning 3
- Concept 11: Naive Bayes Algorithm 1
- Concept 12: Naive Bayes Algorithm 2
- Concept 13: Building a Spam Classifier
- Concept 14: Exercise: Building a Spam Classifier
- Concept 15: Outro
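A bag-of-words naive Bayes sketch with Laplace smoothing. The tiny corpus below is made up for illustration; the exercise's dataset and interface will differ.

```python
# Naive Bayes: score each class by log P(class) + sum of log P(word | class),
# assuming words are conditionally independent given the class.
from collections import Counter
import math

spam = ['win money now', 'free money offer']
ham = ['meeting at noon', 'lunch money tomorrow']

def word_counts(docs):
    return Counter(w for d in docs for w in d.split())

def log_score(text, counts, total_docs, class_docs, vocab_size):
    # Add-one (Laplace) smoothing keeps unseen words from zeroing the product.
    total_words = sum(counts.values())
    score = math.log(class_docs / total_docs)          # log prior
    for w in text.split():
        score += math.log((counts[w] + 1) / (total_words + vocab_size))
    return score

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab = len(set(spam_counts) | set(ham_counts))

msg = 'free money now'
s = log_score(msg, spam_counts, 4, 2, vocab)
h = log_score(msg, ham_counts, 4, 2, vocab)
print('spam' if s > h else 'ham')                      # spam
```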
-
-
Module 03: Bayes Networks
-
Lesson 01: Bayes Nets
Sebastian explains using Bayes Nets as a compact graphical model to encode probability distributions for efficient analysis.
- Concept 01: Lesson Plan - Week 11
- Concept 02: Introduction
- Concept 03: Quiz: Bayes Network
- Concept 04: Computing Bayes Rule
- Concept 05: Quiz: Two Test Cancer
- Concept 06: Quiz: Two Test Cancer 2
- Concept 07: Quiz: Conditional Independence
- Concept 08: Quiz: Conditional Independence 2
- Concept 09: Quiz: Absolute And Conditional
- Concept 10: Quiz: Confounding Cause
- Concept 11: Quiz: Explaining Away
- Concept 12: Quiz: Explaining Away 2
- Concept 13: Quiz: Explaining Away 3
- Concept 14: Conditional Dependence
- Concept 15: Quiz: General Bayes Net
- Concept 16: Quiz: General Bayes Net 2
- Concept 17: Quiz: General Bayes Net 3
- Concept 18: Value Of A Network
- Concept 19: Quiz: D Separation
- Concept 20: Quiz: D Separation 2
-
Lesson 02: Inference in Bayes Nets
Sebastian explains probabilistic inference using Bayes Nets, i.e., how to use evidence to calculate probabilities from the network; a sampling sketch follows the concept list.
- Concept 01: Probabilistic Inference
- Concept 02: Quiz: Overview and Example
- Concept 03: Quiz: Enumeration
- Concept 04: Quiz: Speeding Up Enumeration
- Concept 05: Quiz: Speeding Up Enumeration 2
- Concept 06: Quiz: Speeding Up Enumeration 3
- Concept 07: Quiz: Speeding Up Enumeration 4
- Concept 08: Causal Direction
- Concept 09: Quiz: Variable Elimination
- Concept 10: Quiz: Variable Elimination 2
- Concept 11: Quiz: Variable Elimination 3
- Concept 12: Variable Elimination 4
- Concept 13: Approximate Inference
- Concept 14: Quiz: Sampling Example
- Concept 15: Approximate Inference 2
- Concept 16: Rejection Sampling
- Concept 17: Quiz: Likelihood Weighting
- Concept 18: Likelihood Weighting 1
- Concept 19: Likelihood Weighting 2
- Concept 20: Gibbs Sampling
- Concept 21: Quiz: Monty Hall Problem
- Concept 22: Monty Hall Letter
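A rejection-sampling sketch on a tiny two-node network, Rain -> WetGrass: sample from the joint distribution, discard samples inconsistent with the evidence, and count. The network and probabilities are made up for illustration.

```python
# Rejection sampling: estimate P(Rain | WetGrass = true) by keeping only
# the samples that agree with the evidence and counting Rain among them.
import random

def sample():
    rain = random.random() < 0.2
    wet = random.random() < (0.9 if rain else 0.1)
    return rain, wet

def rejection_sample(n=100_000):
    kept = [rain for rain, wet in (sample() for _ in range(n)) if wet]
    return sum(kept) / len(kept)

print(rejection_sample())   # exact answer: 0.18 / 0.26 ~ 0.69
```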
-
-
Module 04: Hidden Markov Models
-
Lesson 01: Hidden Markov Models
Learn about Hidden Markov Models and apply them to part-of-speech tagging, a very popular problem in Natural Language Processing; a Viterbi sketch follows the concept list.
- Concept 01: Lesson Plan - Week 12
- Concept 02: Intro
- Concept 03: Part of Speech Tagging
- Concept 04: Lookup Table
- Concept 05: Bigrams
- Concept 06: When bigrams won't work
- Concept 07: Hidden Markov Models
- Concept 08: Quiz: How many paths?
- Concept 09: Solution: How many paths
- Concept 10: Quiz: How many paths now?
- Concept 11: Quiz: Which path is more likely?
- Concept 12: Solution: Which path is more likely?
- Concept 13: Viterbi Algorithm Idea
- Concept 14: Viterbi Algorithm
- Concept 15: Further Reading
- Concept 16: Outro
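A compact Viterbi sketch for the lesson's part-of-speech setting: find the most likely tag sequence given transition and emission tables. All the probabilities below are made-up illustrations, not estimates from a corpus.

```python
# Viterbi: for each word, keep only the best-scoring path ending in each tag;
# dynamic programming avoids enumerating every path through the trellis.

states = ['Noun', 'Verb']
start = {'Noun': 0.6, 'Verb': 0.4}
trans = {'Noun': {'Noun': 0.3, 'Verb': 0.7}, 'Verb': {'Noun': 0.8, 'Verb': 0.2}}
emit = {'Noun': {'dogs': 0.6, 'bark': 0.4}, 'Verb': {'dogs': 0.1, 'bark': 0.9}}

def viterbi(words):
    # best[s] = (probability of the best path ending in tag s, that path)
    best = {s: (start[s] * emit[s][words[0]], [s]) for s in states}
    for w in words[1:]:
        best = {s: max(((best[p][0] * trans[p][s] * emit[s][w], best[p][1] + [s])
                        for p in states), key=lambda t: t[0])
                for s in states}
    return max(best.values(), key=lambda t: t[0])[1]

print(viterbi(['dogs', 'bark']))   # ['Noun', 'Verb']
```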
-
-
Module 05: Project: Part of Speech Tagging
-
Lesson 01: Part of Speech Tagging
In this project you will build a hidden Markov model (HMM) to perform part of speech tagging, a common pre-processing step in Natural Language Processing.
-
-
Module 06: Additional Topics in PGMs
-
Lesson 01: Dynamic Time Warping
Thad explains the Dynamic Time Warping technique for working with time-series data.
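A textbook dynamic-time-warping sketch: a dynamic program over an alignment-cost matrix between two sequences of different lengths. The scalar series and absolute-difference distance are illustrative assumptions.

```python
# DTW: cost[i][j] is the cheapest alignment of a[:i] with b[:j], built from
# three moves (stretch a, stretch b, or match one-to-one).

def dtw(a, b):
    inf = float('inf')
    cost = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
    cost[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # a[i-1] stretched
                                 cost[i][j - 1],      # b[j-1] stretched
                                 cost[i - 1][j - 1])  # one-to-one match
    return cost[-1][-1]

# The same shape at a different speed aligns for free; a flat line does not.
print(dtw([0, 1, 2, 1, 0], [0, 1, 1, 2, 2, 1, 0]))   # 0.0
print(dtw([0, 1, 2, 1, 0], [2, 2, 2, 2, 2]))         # > 0
```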
-
Lesson 02: Additional Topics in PGMs
Reading list of select topics to continue learning about probabilistic graphical models.
-
Part 08 : After the AI Nanodegree Program
Once you've completed the last project, review the information here to discover resources for you to continue learning and practicing AI.
-
Module 01: Additional Topics in AI
-
Lesson 01: Additional Topics in AI
Suggested resources to continue learning about artificial intelligence after completing the Nanodegree program.
-
Part 09 (Elective): Extracurricular
Additional lecture material on hidden Markov models and applications for gesture recognition.
-
Module 01: Hidden Markov Models
-
Lesson 01: Hidden Markov Models
Thad returns to discuss using Hidden Markov Models for pattern recognition with sequential data.
- Concept 01: Hidden Markov Models
- Concept 02: HMM Representation
- Concept 03: Sign Language Recognition
- Concept 04: Delta-y Quiz
- Concept 05: HMM: "I"
- Concept 06: HMM: "We"
- Concept 07: I vs We Quiz
- Concept 08: Viterbi Trellis: "I"
- Concept 09: "I" Transitions Quiz
- Concept 10: Viterbi Trellis: "I" (continued)
- Concept 11: Nodes for "I"
- Concept 12: Viterbi Path
- Concept 13: "We": Transitions Quiz
- Concept 14: "We": Transition Probabilities Quiz
- Concept 15: "We": Output Probabilities Quiz
- Concept 16: "We": Viterbi Path
- Concept 17: Which Gesture is Recognized?
- Concept 18: New Observation Sequence for "I"
- Concept 19: New Observation Sequence for "We"
- Concept 20: HMM Training
- Concept 21: Baum Welch
-
Lesson 02: Advanced HMMs
Thad shares advanced techniques that can improve the performance of HMMs for recognizing American Sign Language, as well as more complex HMM models for applications like speech synthesis.
- Concept 01: Multidimensional Output Probabilities
- Concept 02: Using a Mixture of Gaussians
- Concept 03: HMM Topologies
- Concept 04: Phrase Level Recognition
- Concept 05: Stochastic Beam Search
- Concept 06: Context Training
- Concept 07: Statistical Grammar
- Concept 08: State Tying
- Concept 09: HMM Resources
- Concept 10: Segmentally Boosted HMMs
- Concept 11: SBHMM Resources
- Concept 12: Using HMMs to Generate Data
- Concept 13: HMMs for Speech Synthesis
-