Learn how to use powerful Deep Reinforcement Learning and Artificial Intelligence tools through examples of simple AI games!
Ever wish you could harness the power of Deep Learning and Machine Learning to craft intelligent bots built for gaming?
If you're looking for a creative way to dive into Artificial Intelligence, then "Artificial Intelligence for Simple Games" is your key to building lasting knowledge.
Learn and test fundamental Deep Learning and Machine Learning algorithms in the fun, flexible environment of simple games such as Snake, the Travelling Salesman Problem, mazes and more.
Why Choose This Course?
Whether you're an absolute beginner or a seasoned Machine Learning expert, this course provides a solid foundation of the basic and advanced concepts you need to build AI within a gaming environment and beyond.
Key algorithms and concepts covered in this course include Genetic Algorithms, Q-Learning, and Deep Q-Learning with both Artificial Neural Networks and Convolutional Neural Networks.
Dive into SuperDataScience's much-loved, interactive learning environment designed to build knowledge and intuition gradually with practical, yet challenging case studies.
Flexible code means you'll be able to experiment with different game scenarios and easily apply what you learn to business problems outside the gaming industry.
Community and Support: Join a community of like-minded learners and professionals, with access to expert support throughout your learning journey.
"AI for Simple Games" Curriculum:
Section #1 - Dive into Genetic Algorithms by applying the famous Travelling Salesman Problem to an intergalactic game. The challenge will be to build a spaceship that travels across all planets in the shortest time possible!
Section #2 - Learn the foundations of the model-free reinforcement learning algorithm, Q-Learning. Develop intuition and visualization skills, and try your hand at building a custom maze and designing an AI able to find its way out.
Section #3 - Go deep with Deep Q-Learning. Explore the fantastic world of Neural Networks using the OpenAI Gym development environment and learn how to build AIs for many other simple games!
Section #4 - Finish off the course by building your very own version of the classic game, Snake! Here you'll put Convolutional Neural Networks to work, building an AI that mimics the behavior we display when playing Snake.
So, are you ready to build your own game AI?
Come join us, never stop learning, and enjoy AI!
Course content
Section #1 - Genetic Algorithms (the Travelling Salesman Problem)
Step 1 - The Introduction
06:36
Step 2 - Importing the Libraries
01:11
Step 3 - Creating the Bots
03:15
Step 4 - Initializing the Random DNA
03:39
Step 5 - Building the Crossover Method
07:01
Step 6 - Random Partial Mutations 1
05:01
Step 7 - Random Partial Mutations 2
06:10
Step 8 - Initializing the Main Code
03:03
Step 9 - Creating the First Population
01:38
Step 10 - Starting the Main Loop
01:52
Step 11 - Evaluating the Population
04:21
Step 12 - Sorting the Population
03:31
Step 13 - Adding Best Previous Bots to the Population
05:05
Step 14 - Filling in the Rest of the Population
07:53
Step 15 - Displaying the Results
04:56
Step 16 - Running the Code
03:36
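To give you a taste of what this section builds, here is a minimal sketch of a genetic-algorithm loop in the spirit of Steps 8-16. All names and numbers in it (n_planets, population_size, the random distance matrix) are illustrative assumptions, not the course's actual code:

```python
import random

# A minimal genetic-algorithm sketch for a Travelling-Salesman-style route.
random.seed(42)

n_planets = 10
population_size = 50
elite_size = 10          # best previous bots kept each generation
mutation_rate = 0.1
n_generations = 200

# Random symmetric distances between planets (a stand-in for real data).
distances = [[0.0] * n_planets for _ in range(n_planets)]
for i in range(n_planets):
    for j in range(i + 1, n_planets):
        distances[i][j] = distances[j][i] = random.uniform(1.0, 10.0)

def random_dna():
    # A route ("DNA") is a random permutation of the planet indices.
    dna = list(range(n_planets))
    random.shuffle(dna)
    return dna

def evaluate(dna):
    # Total travel distance of the route; lower is better.
    return sum(distances[dna[i]][dna[i + 1]] for i in range(n_planets - 1))

def crossover(parent1, parent2):
    # Keep a random slice of parent1, fill the rest in parent2's order.
    a, b = sorted(random.sample(range(n_planets), 2))
    child = parent1[a:b]
    return child + [g for g in parent2 if g not in child]

def mutate(dna):
    # Random partial mutation: occasionally swap two planets in the route.
    if random.random() < mutation_rate:
        i, j = random.sample(range(n_planets), 2)
        dna[i], dna[j] = dna[j], dna[i]
    return dna

# Create the first population, then run the main loop.
population = [random_dna() for _ in range(population_size)]
for _ in range(n_generations):
    population.sort(key=evaluate)          # evaluate and sort the population
    best = population[:elite_size]         # add best previous bots
    children = [mutate(crossover(*random.sample(best, 2)))
                for _ in range(population_size - elite_size)]
    population = best + children           # fill in the rest of the population

population.sort(key=evaluate)
print("Best route:", population[0], "-> distance:", round(evaluate(population[0]), 2))
```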
Section #2 - Q-Learning
Q-Learning Intuition: Plan of Attack
04:04
Q-Learning Intuition: What is Reinforcement Learning?
11:27
Q-Learning Intuition: The Bellman Equation
18:25
Q-Learning Intuition: The Plan
02:12
Q-Learning Intuition: Markov Decision Process
16:27
Q-Learning Intuition: Policy vs Plan
12:55
Q-Learning Intuition: Living Penalty
09:47
Q-Learning Intuition: Q-Learning Intuition
14:46
Q-Learning Intuition: Temporal Difference
19:27
Q-Learning Intuition: Q-Learning Visualization
13:31
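As a preview of where these intuition lectures land, the two equations at the heart of Q-Learning (written here in standard textbook notation) are the Bellman equation and the temporal-difference update:

```latex
% Bellman equation for the optimal Q-value: the value of taking action a in
% state s is the immediate reward plus the discounted value of the best
% action available in the next state s'.
Q(s, a) = R(s, a) + \gamma \max_{a'} Q(s', a')

% Temporal-difference update used by Q-Learning, with learning rate \alpha:
% nudge the current estimate toward the observed Bellman target.
Q_t(s, a) \leftarrow Q_{t-1}(s, a)
  + \alpha \Big[ R(s, a) + \gamma \max_{a'} Q_{t-1}(s', a') - Q_{t-1}(s, a) \Big]
```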
Step 1 - Introduction
09:01
Step 2 - Importing the Libraries
00:58
Step 3 - Defining the Parameters
02:09
Step 4 - Environment and Q-Table Initialization
04:41
Step 5 - Preparing the Q-Learning Process 1
06:12
Step 6 - Preparing the Q-Learning Process 2
04:19
Step 7 - Starting the Q-Learning Process
03:44
Step 8 - Getting All Playable Actions
03:12
Step 9 - Playing a Random Action
01:27
Step 10 - Updating the Q-Value
03:43
Step 11 - Displaying the Results
04:52
Step 12 - Running the Code
03:48
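For orientation, here is a compact, self-contained sketch of the kind of loop Steps 7-10 walk through, assuming a toy 4-state environment encoded as a rewards matrix; the matrix and parameter values are illustrative assumptions, not the course's maze:

```python
import numpy as np

# A compact Q-Learning sketch: get playable actions, play a random one,
# update the Q-value with the temporal difference.
gamma = 0.75   # discount factor
alpha = 0.9    # learning rate

# R[s, a] = immediate reward for moving from state s to state a;
# -1 marks moves that are not playable. (Toy 4-state example.)
R = np.array([[-1,  0, -1,  -1],
              [ 0, -1,  0,  -1],
              [-1,  0, -1, 100],
              [-1, -1,  0, 100]])

Q = np.zeros((4, 4))

for _ in range(1000):
    current_state = np.random.randint(0, 4)
    # Get all playable actions from the current state.
    playable_actions = [a for a in range(4) if R[current_state, a] >= 0]
    # Play a random action among them.
    next_state = int(np.random.choice(playable_actions))
    # Temporal-difference update of the Q-value.
    td = (R[current_state, next_state]
          + gamma * np.max(Q[next_state])
          - Q[current_state, next_state])
    Q[current_state, next_state] += alpha * td

print(np.round(Q))
```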
Section #3 - Deep Q-Learning
Step 1 - Introduction
07:00
Step 2 - Brain - Importing the Libraries
03:06
Step 3 - Brain - Building the Brain Class
02:53
Step 4 - Brain - Creating the Neural Network
08:49
Step 5 - DQN Memory - Initializing the Experience Replay Memory
03:43
Step 6 - DQN Memory - Remembering New Experience
04:59
Step 7 - DQN Memory - Getting the Batches of Inputs and Targets
04:58
Step 8 - DQN Memory - Initializing the Inputs and the Targets
03:37
Step 9 - DQN Memory - Extracting Transitions from Random Experiences
07:03
Step 10 - DQN Memory - Updating the Inputs and the Targets
06:40
Step 11 - Training - Importing the Libraries
02:48
Step 12 - Training - Setting the Parameters
04:11
Step 13 - Training - Initializing the Environment, the Brain and DQN
04:03
Step 14 - Training - Starting the Main Loop
04:16
Step 15 - Training - Starting to Play the Game
02:42
Step 16 - Training - Taking an Action
04:34
Step 17 - Training - Updating the Environment
04:18
Step 18 - Training - Adding New Experience, Training the AI, Updating Current State
06:17
Step 19 - Training - Lowering Epsilon and Displaying the Results
06:01
Step 20 - Running the Code
11:38
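The centerpiece of this section is the experience replay memory. Below is a simplified, hypothetical sketch of such a memory, assuming states are NumPy arrays and the "brain" is a Keras-style model exposing predict(); the interface (remember / get_batch) is an assumption, not the course's exact class:

```python
import random
from collections import deque

import numpy as np

class ReplayMemory:
    """A simplified experience replay memory for Deep Q-Learning."""

    def __init__(self, max_memory=50000, discount=0.9):
        self.memory = deque(maxlen=max_memory)  # oldest experiences fall off
        self.discount = discount

    def remember(self, transition, game_over):
        # transition = (current_state, action, reward, next_state)
        self.memory.append((transition, game_over))

    def get_batch(self, model, batch_size=32):
        # Extract transitions from random experiences and build the
        # (inputs, targets) pair used to train the network.
        batch = random.sample(list(self.memory), min(batch_size, len(self.memory)))
        inputs = np.array([transition[0] for transition, _ in batch])
        targets = model.predict(inputs)  # current Q-estimates, one row per state
        for i, ((state, action, reward, next_state), game_over) in enumerate(batch):
            # Bellman target: the reward, plus the discounted best next
            # Q-value if the game is not over.
            targets[i, action] = reward
            if not game_over:
                next_q = model.predict(next_state[np.newaxis])
                targets[i, action] += self.discount * np.max(next_q)
        return inputs, targets
```

In a training loop like the one in Steps 14-19, every move produces one experience to remember, and each iteration trains on a fresh random batch, e.g. `model.train_on_batch(*memory.get_batch(model))` in a Keras-style API, while epsilon is gradually lowered to shift from exploration to exploitation.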
Section #4 - Snake with Convolutional Neural Networks
Step 1 - Introduction
14:18
Step 2 - Brain - Importing the Libraries
03:36
Step 3 - Brain - Starting Building the Brain Class
03:16
Step 4 - Brain - Creating the Neural Network
08:41
Step 5 - Brain - Building a Method That Will Load a Model
02:06
Step 6 - DQN - Building the Experience Replay Memory
05:09
Step 7 - Training - Importing the Libraries
02:54
Step 8 - Training - Defining the Parameters
05:15
Step 9 - Training - Initializing the Environment the Brain and the DQN
05:16
Step 10 - Training - Building a Function to Reset the Current State
06:13
Step 11 - Training - Starting the Main Loop
03:01
Step 12 - Training - Resetting the Environment and Starting to Play the Game
02:31
Step 13 - Training - Selecting an Action to Play
04:26
Step 14 - Training - Updating the Environment
07:17
Step 15 - Training - Remembering New Experience and Training the AI
04:33
Step 16 - Training - Updating the Score and Current State
02:12
Step 17 - Training - Updating the Epsilon and Saving the Model
03:40
Step 18 - Training - Displaying the Results
07:35
Step 19 - Testing - Importing the Libraries
02:17
Step 20 - Testing - Defining the Parameters
02:39
Step 21 - Testing - Initializing the Environment and the Brain
03:10
Step 22 - Testing - Resetting Current and Next State and Starting the Main Loop
01:23
Step 23 - Testing - Resetting the Game and Starting to Play the Game
02:05
Step 24 - Testing - Selecting an Action to Play
02:22
Step 25 - Testing - Updating the Environment and Current State
03:26
Step 26 - Running the Code
09:29
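Finally, for a flavor of the convolutional "brain" this section builds, here is a minimal network sketch using Keras (an assumption about tooling; the board size, layer sizes and optimizer settings are illustrative, and the course's actual architecture may differ):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Flatten, Dense
from tensorflow.keras.optimizers import Adam

def build_brain(board_shape=(10, 10, 1), n_actions=4, learning_rate=0.001):
    # Convolutions read the game board like an image and extract
    # spatial patterns; dense layers map them to Q-values per move.
    model = Sequential([
        Conv2D(32, (3, 3), activation="relu", input_shape=board_shape),
        Conv2D(64, (2, 2), activation="relu"),
        Flatten(),
        Dense(256, activation="relu"),
        # One linear output per possible move (up, down, left, right).
        Dense(n_actions, activation="linear"),
    ])
    model.compile(optimizer=Adam(learning_rate=learning_rate), loss="mse")
    return model

brain = build_brain()
brain.summary()

# Step 5 builds a method to load a saved model; in Keras that is simply:
# from tensorflow.keras.models import load_model
# brain = load_model("snake_model.h5")   # hypothetical file name
```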