Dr. Thomas Starke – Deep Reinforcement Learning in Trading
Apply reinforcement learning to create, backtest, paper trade and live trade a strategy using two deep learning neural networks and replay memory. Learn to quantitatively analyze returns and risks. A hands-on course in Python with implementable techniques and a capstone project in the financial markets.
WHAT YOU WILL LEARN
- Explain the need for reinforcement learning to tackle the delayed gratification problem
- Describe states, actions, rewards, policies, double Q-learning and experience replay
- Explain the exploration vs exploitation tradeoff
- Create and backtest a reinforcement learning model
- Analyse returns and risk using different performance measures
- Practice the concepts on real market data through a capstone project
- Explain the challenges faced in live trading and the solutions to them
- Deploy the RL model for paper and live trading
SKILLS COVERED
Finance and Math Skills
- Sharpe ratio
- Returns & Maximum drawdowns
- Stochastic gradient descent
- Mean squared error
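As a taste of the finance and math skills listed above, here is a minimal sketch, assuming a pandas Series of daily returns, of how an annualized Sharpe ratio (risk-free rate taken as zero) and a maximum drawdown could be computed; the 252-day annualization factor and the synthetic return series are illustrative assumptions, not material from the course.

```python
import numpy as np
import pandas as pd

def sharpe_ratio(returns: pd.Series, periods_per_year: int = 252) -> float:
    """Annualized Sharpe ratio of periodic returns, with the risk-free rate taken as zero."""
    return np.sqrt(periods_per_year) * returns.mean() / returns.std()

def max_drawdown(returns: pd.Series) -> float:
    """Largest peak-to-trough drop of the compounded equity curve, as a negative fraction."""
    equity = (1 + returns).cumprod()        # equity curve from compounded returns
    running_peak = equity.cummax()          # highest equity level reached so far
    drawdown = equity / running_peak - 1    # drop relative to that peak
    return drawdown.min()

# Toy example on synthetic daily returns (illustration only, not real market data)
rng = np.random.default_rng(seed=42)
daily_returns = pd.Series(rng.normal(0.0005, 0.01, size=252))
print(f"Sharpe ratio: {sharpe_ratio(daily_returns):.2f}")
print(f"Max drawdown: {max_drawdown(daily_returns):.2%}")
```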
Python
- Pandas, NumPy
- Matplotlib
- Datetime, TA-Lib
- For loops
- TensorFlow, Keras, SGD
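The Keras, SGD and mean-squared-error items above come together in the Q-networks the course builds. Below is a minimal sketch, assuming a small feed-forward network with an illustrative state size of 10 features and three actions (buy, sell, hold); the layer widths and learning rate are assumptions, not the course's exact architecture.

```python
from tensorflow import keras
from tensorflow.keras import layers

STATE_SIZE = 10   # assumed number of features describing the market state
N_ACTIONS = 3     # assumed actions: buy, sell, hold

def build_q_network() -> keras.Model:
    """Feed-forward network mapping a state vector to one Q-value per action."""
    model = keras.Sequential([
        layers.Input(shape=(STATE_SIZE,)),
        layers.Dense(32, activation="relu"),
        layers.Dense(32, activation="relu"),
        layers.Dense(N_ACTIONS, activation="linear"),   # raw Q-values, no squashing
    ])
    # Stochastic gradient descent minimizing the mean squared error between
    # predicted Q-values and their targets.
    model.compile(optimizer=keras.optimizers.SGD(learning_rate=0.01), loss="mse")
    return model

online_net = build_q_network()   # picks actions during trading
target_net = build_q_network()   # slowly-updated copy used to compute Q-learning targets
target_net.set_weights(online_net.get_weights())
```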
Reinforcement Learning
- Double Q-learning
- Artificial Neural Networks
- State, Rewards, Actions
- Experience Replay
- Exploration vs Exploitation
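To make the reinforcement learning terms above concrete, here is a minimal sketch, in plain Python and NumPy, of an epsilon-greedy policy (exploration vs exploitation), a uniform experience-replay buffer and a double Q-learning target; the discount factor, exploration rate and buffer size are illustrative assumptions rather than the course's settings.

```python
import random
from collections import deque

import numpy as np

GAMMA = 0.99     # assumed discount factor for future rewards
EPSILON = 0.1    # assumed probability of exploring instead of exploiting

replay_buffer = deque(maxlen=10_000)   # experience replay memory of (s, a, r, s_next, done) tuples

def choose_action(q_values: np.ndarray, epsilon: float = EPSILON) -> int:
    """Epsilon-greedy policy: explore with probability epsilon, otherwise exploit."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))   # explore: random action
    return int(np.argmax(q_values))              # exploit: highest-valued action

def double_q_target(reward: float, q_next_online: np.ndarray,
                    q_next_target: np.ndarray, done: bool) -> float:
    """Double Q-learning target: the online network selects the next action,
    the target network evaluates it, which reduces overestimation bias."""
    if done:
        return reward
    best_action = int(np.argmax(q_next_online))
    return reward + GAMMA * float(q_next_target[best_action])

def sample_batch(batch_size: int = 32) -> list:
    """Uniformly sample stored transitions, breaking correlation between consecutive steps."""
    return random.sample(list(replay_buffer), min(batch_size, len(replay_buffer)))
```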
PREREQUISITES
This course requires a basic understanding of financial markets, such as the buying and selling of securities. To implement the strategies covered, basic knowledge of pandas DataFrames, Keras and Matplotlib is required. These skills are covered in the free course ‘Python for Trading: Basic’ and in ‘Introduction to Machine Learning for Trading’ on Quantra. To gain an in-depth understanding of neural networks, you can enroll in the ‘Neural Networks in Trading’ course, which is recommended but optional.
Deep Reinforcement Learning in Trading by Dr. Thomas Starke – what is included
Section 1: Introduction
Section 2: Need for Reinforcement Learning
Section 3: State, Actions and Rewards
Section 4: Q Learning
Section 5: State Construction
Section 6: Policies in Reinforcement Learning
Section 7: Challenges in Reinforcement Learning
Section 8: Initialise Game Class
Section 9: Positions and Rewards
Section 10: Input Features
Section 11: Construct and Assemble State
Section 12: Game Class
Section 13: Experience Replay
Section 14: Artificial Neural Network Concepts
Section 15: Artificial Neural Network Implementation
Section 16: Backtesting Logic
Section 17: Backtesting Implementation
Section 18: Performance Analysis: Synthetic Data
Section 19: Performance Analysis: Real World Price Data
Section 20: Automated Trading Strategy
Section 21: Paper and Live Trading
Section 22: Capstone Project
Section 23: Future Enhancements
Section 24 (Optional): Python Installation
Section 25: Course Summary
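As a hint of how the pieces in Sections 8 to 13 fit together, below is a minimal, hypothetical trading "game" loop, assuming a price series, three positions (short, flat, long), a state made of the last few returns and a reward equal to the position times the next bar's return; it illustrates the idea only and is not the course's Game class.

```python
import numpy as np

class TradingGame:
    """Toy environment: the agent holds a position (-1 short, 0 flat, +1 long)
    and receives a reward equal to the position times the next bar's return."""

    ACTIONS = (-1, 0, 1)   # assumed action set: short, flat, long

    def __init__(self, prices: np.ndarray, lookback: int = 5):
        self.prices = prices
        self.lookback = lookback   # number of past returns used as the state
        self.t = lookback          # current time index

    def state(self) -> np.ndarray:
        """State: the last `lookback` percentage returns."""
        window = self.prices[self.t - self.lookback : self.t + 1]
        return np.diff(window) / window[:-1]

    def step(self, action_index: int):
        """Apply an action, advance one bar and return (next_state, reward, done)."""
        position = self.ACTIONS[action_index]
        next_return = self.prices[self.t + 1] / self.prices[self.t] - 1
        reward = position * next_return
        self.t += 1
        done = self.t >= len(self.prices) - 1
        return self.state(), reward, done

# Run a random agent through a synthetic random-walk price series
rng = np.random.default_rng(0)
prices = 100 * np.cumprod(1 + rng.normal(0, 0.01, 300))
game = TradingGame(prices)
state, done, total_reward = game.state(), False, 0.0
while not done:
    state, reward, done = game.step(rng.integers(3))
    total_reward += reward
print(f"Random-agent cumulative reward: {total_reward:.4f}")
```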
ABOUT THE AUTHOR
Dr. Starke has a Ph.D. in Physics and currently leads the quant-trading team at AAAQuants, one of the leading proprietary-trading firms in Australia, as its CEO. He has also held a senior research fellow position at Oxford University. Dr. Starke previously worked at the proprietary trading firm Vivienne Court and at Memjet Australia, the world leader in high-speed printing. He has led strategic research projects for Rolls-Royce Plc (UK) and is also the co-founder of a microchip design company.
USER TESTIMONIALS
Manogane Rammala
Graduate in Investment Management, University of Pretoria
In its current form, the course is already comprehensive to a very high degree. All of the content in sections 1, 2, and 3 really helps in building an understanding of the deep RL trading system. I would compare this course to a suit that I will have to grow into. I am going to revisit the section on ‘experience replay’ to get a better grip on that subject matter. The capstone project will also be very educational from the perspective of experimentation. To summarize, I’d say that this course will be the greatest learning material for RL in the financial markets for a very long time. Thank you for making it available!
Vignesh Patel
Senior Associate, Cognizant, India
Deep Reinforcement Learning as a concept is vast and complex. In this course, the content is broken down into smaller specific topics that help you grasp the subject at hand. In the end, everything is brought back together seamlessly for you to see the full picture clearly. I love how the complex concepts are made easy to understand, so much so that I was able to do the capstone project at the end of the course all by myself. I only referred to the model solution after I successfully made the model in the capstone project on my own. This course has definitely increased my understanding and clarity on Deep Reinforcement Learning.
Vinod Pandiripalli
Data Scientist, Franklin Templeton, India
The Deep Reinforcement Learning course has definitely opened a gate and brought me closer to my goal of achieving financial independence. It has given me great confidence in the area of Algorithmic Trading. The course is organised and designed in such a way that it made it easier for me to grasp the topics faster. The course was divided into smaller modules, which further helped me understand the concepts in greater depth. This course is a complete package; everything that you need to learn is already available in the course. As a Data Scientist, I was also able to upskill myself in the same domain, all thanks to this course.
Sale Page: https://quantra.quantinsti.com/course/deep-reinforcement-learning-trading
Archive: https://archive.ph/wip/mIAQH