University certificate
The world's largest faculty of engineering”
Description
Raise your level of knowledge in Deep Learning with this 1,800-hour Professional Master's Degree. Enroll now"
Undoubtedly, one of the fastest-growing sectors in recent years has been technology, driven by the engineering advances brought about by the development of Deep Learning. As a result, chatbots, facial recognition applications, and tools for the early detection of diseases such as cancer through the analysis of high-quality medical images are proliferating.
These endless possibilities demand that engineering professionals master Deep Learning thoroughly. For this reason, TECH has developed this 12-month Professional Master's Degree, which provides students with the most advanced and current knowledge in the field.
The program leads graduates through the mathematical foundations of the discipline, the construction of neural networks, model customization and training with TensorFlow, and Deep Computer Vision with Convolutional Neural Networks. All of this is supported by teaching materials based on video summaries of each topic, in-depth videos, specialized readings, and case studies, which you can access comfortably, 24 hours a day, from any electronic device with an Internet connection.
The syllabus will allow you to enhance your skills and create projects focused on data analysis and natural language processing, with direct applications in areas such as robotics, finance, gaming, and autonomous vehicles, among others.
In this way, TECH opens up a world of possibilities through a quality university degree, developed by genuine experts, that offers greater freedom to self-manage your studies. With no classroom attendance or class schedules, graduates can access the syllabus at any time and balance their daily activities with an education at the academic forefront.
You are looking at a university program that will give you the boost you need to join today's leading technology companies. Enroll now"
This Professional Master's Degree in Deep Learning contains the most complete and up-to-date scientific program on the market. The most important features include:
- The development of practical cases presented by experts in Data Engineering and Science
- The graphic, schematic and eminently practical contents provide technical and practical information on those disciplines that are essential for professional practice
- Practical exercises where self-assessment can be used to improve learning
- Its special emphasis on innovative methodologies
- Theoretical lessons, questions to the expert, debate forums on controversial topics, and individual reflection assignments
- Content that is accessible from any fixed or portable device with an Internet connection
With this program you don't have to worry about attending classes or keeping fixed schedules. Access the syllabus whenever and wherever you want"
The program’s teaching staff includes professionals from the field who contribute their work experience to this educational program, as well as renowned specialists from leading societies and prestigious universities.
The multimedia content, developed with the latest educational technology, will provide the professional with situated and contextual learning, i.e., a simulated environment that delivers immersive education designed to prepare for real situations.
This program is designed around Problem-Based Learning, whereby the professional must try to solve the different professional practice situations that arise during the course. For this purpose, students will be assisted by an innovative interactive video system created by renowned experts in the field of educational coaching with extensive experience.
Master GANs and Diffusion Models and improve your projects to generate new, realistic, high-quality images"
A program that will allow you to delve into the Backward Pass and into how the derivatives of vector functions are applied in automatic learning.
Syllabus
Thanks to the Relearning method, based on the continuous reiteration of key concepts throughout this academic itinerary, the engineer will obtain advanced and effective learning without having to invest an excessive number of study hours. In this way, they will be able to delve into a comprehensive syllabus covering Deep Learning model coding, advanced optimization techniques, deep neural network training, visualization of results, and evaluation of Deep Learning models.
Access the most advanced and updated Deep Learning curriculum in the academic landscape from your digital device with an Internet connection”
Module 1. Mathematical Basis of Deep Learning
1.1. Functions and Derivatives
1.1.1. Linear Functions
1.1.2. Partial Derivative
1.1.3. Higher Order Derivatives
1.2. Multiple Nested Functions
1.2.1. Compound Functions
1.2.2. Inverse Functions
1.2.3. Recursive Functions
1.3. Chain Rule
1.3.1. Derivatives of Nested Functions
1.3.2. Derivatives of Compound Functions
1.3.3. Derivatives of Inverse Functions
1.4. Functions with Multiple Inputs
1.4.1. Multi-Variable Functions
1.4.2. Vectorial Functions
1.4.3. Matrix Functions
1.5. Derivatives of Functions with Multiple Inputs
1.5.1. Partial Derivative
1.5.2. Directional Derivatives
1.5.3. Mixed Derivatives
1.6. Functions with Multiple Vector Inputs
1.6.1. Linear Vector Functions
1.6.2. Non-Linear Vector Functions
1.6.3. Matrix Vector Functions
1.7. Creating New Functions from Existing Functions
1.7.1. Addition of Functions
1.7.2. Product of Functions
1.7.3. Composition of Functions
1.8. Derivatives of Functions with Multiple Vector Inputs
1.8.1. Derivatives of Linear Functions
1.8.2. Derivatives of Non-Linear Functions
1.8.3. Derivatives of Compound Functions
1.9. Vector Functions and Their Derivatives: A Step Further
1.9.1. Directional Derivatives
1.9.2. Mixed Derivatives
1.9.3. Matrix Derivatives
1.10. The Backward Pass
1.10.1. Error Propagation
1.10.2. Applying Update Rules
1.10.3. Parameter Optimization
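To make the role of the chain rule and the backward pass concrete, here is a minimal, illustrative NumPy sketch; the function, weights, and variable names are assumptions chosen for the example, not material from the program itself. It composes a linear map with a sigmoid, propagates the error backward through the chain rule, and checks one partial derivative numerically.

```python
# Illustrative sketch: chain rule and backward pass for
# f(x) = sum(sigmoid(W @ x + b)); all names are assumed for the example.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))   # weight matrix
b = rng.normal(size=3)        # bias vector
x = rng.normal(size=4)        # input vector

# Forward pass: evaluate the nested functions step by step
z = W @ x + b                 # linear function with multiple vector inputs
a = sigmoid(z)                # non-linear element-wise function
loss = a.sum()                # scalar output

# Backward pass: apply the chain rule in reverse order
dloss_da = np.ones_like(a)           # derivative of sum() w.r.t. each a_i is 1
dloss_dz = dloss_da * a * (1.0 - a)  # chain rule through sigmoid'(z) = a(1 - a)
dloss_dW = np.outer(dloss_dz, x)     # matrix derivative of the linear map
dloss_db = dloss_dz
dloss_dx = W.T @ dloss_dz            # error propagated back to the input

# Numerical check of one partial derivative
eps = 1e-6
x_shift = x.copy()
x_shift[0] += eps
loss_shift = sigmoid(W @ x_shift + b).sum()
print(dloss_dx[0], (loss_shift - loss) / eps)  # the two values should agree
```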
Module 2. Deep Learning Principles
2.1. Supervised Learning
2.1.1. Supervised Learning Machines
2.1.2. Uses of Supervised Learning
2.1.3. Differences between Supervised and Unsupervised Learning
2.2. Supervised Learning Models
2.2.1. Linear Models
2.2.2. Decision Tree Models
2.2.3. Neural Network Models
2.3. Linear Regression
2.3.1. Simple Linear Regression
2.3.2. Multiple Linear Regression
2.3.3. Regression Analysis
2.4. Model Training
2.4.1. Batch Learning
2.4.2. Online Learning
2.4.3. Optimization Methods
2.5. Model Evaluation: Training Set vs. Test Set
2.5.1. Evaluation Metrics
2.5.2. Cross Validation
2.5.3. Comparison of Data Sets
2.6. Model Evaluation: The Code
2.6.1. Prediction Generation
2.6.2. Error Analysis
2.6.3. Evaluation Metrics
2.7. Variables Analysis
2.7.1. Identification of Relevant Variables
2.7.2. Correlation Analysis
2.7.3. Regression Analysis
2.8. Explainability of Neural Network Models
2.8.1. Interpretable Models
2.8.2. Visualization Methods
2.8.3. Evaluation Methods
2.9. Optimization
2.9.1. Optimization Methods
2.9.2. Regularization Techniques
2.9.3. The Use of Graphs
2.10. Hyperparameters
2.10.1. Selection of Hyperparameters
2.10.2. Parameter Search
2.10.3. Hyperparameter Tuning
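As an illustration of how the supervised learning workflow in this module fits together (linear regression, model training, train/test evaluation, cross validation), here is a minimal sketch using scikit-learn; the library choice and the synthetic data are assumptions made for the example, not prescriptions from the syllabus.

```python
# Illustrative sketch: linear regression with a train/test split.
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score

# Synthetic data: y = 3*x0 - 2*x1 + noise
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 2))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=200)

# Separate the training set from the test set to measure generalization
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LinearRegression().fit(X_train, y_train)    # model training
y_pred = model.predict(X_test)                      # prediction generation

print("MSE:", mean_squared_error(y_test, y_pred))   # evaluation metric
print("R^2:", r2_score(y_test, y_pred))
print("CV :", cross_val_score(model, X, y, cv=5).mean())  # cross validation
```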
Module 3. Neural Networks, the Basis of Deep Learning
3.1. Deep Learning
3.1.1. Types of Deep Learning
3.1.2. Applications of Deep Learning
3.1.3. Advantages and Disadvantages of Deep Learning
3.2. Operations
3.2.1. Sum
3.2.2. Product
3.2.3. Transfer
3.3. Layers
3.3.1. Input Layer
3.3.2. Hidden Layer
3.3.3. Output Layer
3.4. Union of Layers and Operations
3.4.1. Architecture Design
3.4.2. Connection between Layers
3.4.3. Forward Propagation
3.5. Construction of the First Neural Network
3.5.1. Network Design
3.5.2. Establish the Weights
3.5.3. Network Training
3.6. Trainer and Optimizer
3.6.1. Optimizer Selection
3.6.2. Establishment of a Loss Function
3.6.3. Establishing a Metric
3.7. Application of the Principles of Neural Networks
3.7.1. Activation Functions
3.7.2. Backward Propagation
3.7.3. Parameter Adjustment
3.8. From Biological to Artificial Neurons
3.8.1. Functioning of a Biological Neuron
3.8.2. Transfer of Knowledge to Artificial Neurons
3.8.3. Establish Relations Between the Two
3.9. Implementation of MLP (Multilayer Perceptron) with Keras
3.9.1. Definition of the Network Structure
3.9.2. Model Compilation
3.9.3. Model Training
3.10. Fine Tuning Hyperparameters of Neural Networks
3.10.1. Selection of the Activation Function
3.10.2. Set the Learning Rate
3.10.3. Adjustment of Weights
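The three steps named in section 3.9 (define the network structure, compile the model, train it) can be sketched in Keras as follows; the toy dataset and all hyperparameters are illustrative assumptions, not prescriptions from the syllabus.

```python
# Illustrative sketch: an MLP in Keras following section 3.9's three steps.
import numpy as np
from tensorflow import keras

# Toy binary-classification data (assumed for the example)
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = (X.sum(axis=1) > 0).astype("float32")

# 3.9.1 Definition of the network structure: input, hidden, output layers
model = keras.Sequential([
    keras.Input(shape=(10,)),
    keras.layers.Dense(32, activation="relu"),    # hidden layer
    keras.layers.Dense(1, activation="sigmoid"),  # output layer
])

# 3.9.2 Model compilation: optimizer, loss function, and metric
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# 3.9.3 Model training
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2)
```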
Module 4. Deep Neural Networks Training
4.1. Gradient Problems
4.1.1. Gradient Optimization Techniques
4.1.2. Stochastic Gradients
4.1.3. Weight Initialization Techniques
4.2. Reuse of Pre-Trained Layers
4.2.1. Transfer Learning Training
4.2.2. Feature Extraction
4.2.3. Deep Learning
4.3. Optimizers
4.3.1. Stochastic Gradient Descent Optimizers
4.3.2. Adam and RMSprop Optimizers
4.3.3. Momentum Optimizers
4.4. Learning Rate Scheduling
4.4.1. Automatic Learning Rate Control
4.4.2. Learning Cycles
4.4.3. Smoothing Terms
4.5. Overfitting
4.5.1. Cross Validation
4.5.2. Regularization
4.5.3. Evaluation Metrics
4.6. Practical Guidelines
4.6.1. Model Design
4.6.2. Selection of Metrics and Evaluation Parameters
4.6.3. Hypothesis Testing
4.7. Transfer Learning
4.7.1. Transfer Learning Training
4.7.2. Feature Extraction
4.7.3. Deep Learning
4.8. Data Augmentation
4.8.1. Image Transformations
4.8.2. Synthetic Data Generation
4.8.3. Text Transformation
4.9. Practical Application of Transfer Learning
4.9.1. Transfer Learning Training
4.9.2. Feature Extraction
4.9.3. Deep Learning
4.10. Regularization
4.10.1. L1 and L2
4.10.2. Regularization by Maximum Entropy
4.10.3. Dropout
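Several of this module's techniques (weight initialization, momentum optimizers, learning rate scheduling, L2 regularization, Dropout) can be combined in a single Keras model definition. The following sketch is illustrative only; every value in it is an assumption chosen for the example, not a recommendation.

```python
# Illustrative sketch: combining training techniques from Module 4 in Keras.
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(64, activation="relu",
                       kernel_initializer="he_normal",  # weight initialization (4.1.3)
                       kernel_regularizer=keras.regularizers.l2(1e-4)),  # L2 (4.10.1)
    keras.layers.Dropout(0.3),                          # Dropout (4.10.3)
    keras.layers.Dense(1, activation="sigmoid"),
])

# Learning rate scheduling (4.4): exponential decay handled by the optimizer
schedule = keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3, decay_steps=1000, decay_rate=0.9)
optimizer = keras.optimizers.SGD(learning_rate=schedule,
                                 momentum=0.9)          # momentum optimizer (4.3)

model.compile(optimizer=optimizer, loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```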
Module 5. Model Customization and Training with TensorFlow
5.1. TensorFlow
5.1.1. Using the TensorFlow Library
5.1.2. Model Training with TensorFlow
5.1.3. Operations with Graphs in TensorFlow
5.2. TensorFlow and NumPy
5.2.1. NumPy Computational Environment for TensorFlow
5.2.2. Using NumPy Arrays with TensorFlow
5.2.3. NumPy Operations for TensorFlow Graphs
5.3. Model Customization and Training Algorithms
5.3.1. Building Custom Models with TensorFlow
5.3.2. Management of Training Parameters
5.3.3. Use of Optimization Techniques for Training
5.4. TensorFlow Functions and Graphs
5.4.1. Functions with TensorFlow
5.4.2. Use of Graphs for Model Training
5.4.3. Optimization of Graphs with TensorFlow Operations
5.5. Data Loading and Pre-Processing with TensorFlow
5.5.1. Loading Datasets with TensorFlow
5.5.2. Data Pre-Processing with TensorFlow
5.5.3. Using TensorFlow Tools for Data Manipulation
5.6. The tf.data API
5.6.1. Using the tf.data API for Data Processing
5.6.2. Constructing Data Streams with tf.data
5.6.3. Use of the tf.data API for Training Models
5.7. The TFRecord Format
5.7.1. Using the TFRecord API for Data Serialization
5.7.2. Loading TFRecord Files with TensorFlow
5.7.3. Using TFRecord Files for Training Models
5.8. Keras Pre-Processing Layers
5.8.1. Using the Keras Pre-Processing API
5.8.2. Construction of Pre-Processing Pipelines with Keras
5.8.3. Using the Keras Pre-Processing API for Model Training
5.9. The TensorFlow Datasets Project
5.9.1. Using TensorFlow Datasets for Data Loading
5.9.2. Data Pre-Processing with TensorFlow Datasets
5.9.3. Using TensorFlow Datasets for Model Training
5.10. Construction of a Deep Learning Application with TensorFlow. Practical Application
5.10.1. Building a Deep Learning Application with TensorFlow
5.10.2. Training a Model with TensorFlow
5.10.3. Using the Application for the Prediction of Results
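A minimal sketch of the data workflow described in sections 5.5 and 5.6 (load, pre-process, batch, and train from a tf.data pipeline) might look like the following; the synthetic data and every pipeline parameter are assumptions for illustration.

```python
# Illustrative sketch: a tf.data input pipeline feeding model.fit.
import numpy as np
import tensorflow as tf
from tensorflow import keras

features = np.random.normal(size=(1000, 8)).astype("float32")
labels = (features.sum(axis=1) > 0).astype("float32")

# 5.6.2 Constructing a data stream with tf.data
dataset = (tf.data.Dataset.from_tensor_slices((features, labels))
           .shuffle(buffer_size=1000)
           .map(lambda x, y: (tf.clip_by_value(x, -3.0, 3.0), y))  # pre-processing
           .batch(32)
           .prefetch(tf.data.AUTOTUNE))

model = keras.Sequential([
    keras.Input(shape=(8,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# 5.6.3 Using the tf.data pipeline for training
model.fit(dataset, epochs=3)
```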
Module 6. Deep Computer Vision with Convolutional Neural Networks
6.1. The Visual Cortex Architecture
6.1.1. Functions of the Visual Cortex
6.1.2. Theories of Computational Vision
6.1.3. Models of Image Processing
6.2. Convolutional Layers
6.2.1. Reuse of Weights in Convolution
6.2.2. 2D Convolution
6.2.3. Activation Functions
6.3. Pooling Layers and Their Implementation with Keras
6.3.1. Pooling and Striding
6.3.2. Flattening
6.3.3. Types of Pooling
6.4. CNN Architecture
6.4.1. VGG Architecture
6.4.2. AlexNet Architecture
6.4.3. ResNet Architecture
6.5. Implementation of a ResNet-34 CNN Using Keras
6.5.1. Weight Initialization
6.5.2. Input Layer Definition
6.5.3. Output Definition
6.6. Use of Pre-Trained Keras Models
6.6.1. Characteristics of Pre-Trained Models
6.6.2. Uses of Pre-Trained Models
6.6.3. Advantages of Pre-Trained Models
6.7. Pre-Trained Models for Transfer Learning
6.7.1. Transfer Learning
6.7.2. Transfer Learning Process
6.7.3. Advantages of Transfer Learning
6.8. Classification and Localization in Deep Computer Vision
6.8.1. Image Classification
6.8.2. Localization of Objects in Images
6.8.3. Object Detection
6.9. Object Detection and Object Tracking
6.9.1. Object Detection Methods
6.9.2. Object Tracking Algorithms
6.9.3. Tracking and Localization Techniques
6.10. Semantic Segmentation
6.10.1. Deep Learning for Semantic Segmentation
6.10.2. Edge Detection
6.10.3. Rule-Based Segmentation Methods
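Sections 6.6 and 6.7, on pre-trained models and transfer learning, can be illustrated with a short Keras sketch; the choice of MobileNetV2, the input size, and the five target classes are all assumptions made for the example.

```python
# Illustrative sketch: transfer learning from a pre-trained Keras model.
from tensorflow import keras

# Pre-trained convolutional base with frozen weights (feature extraction)
base = keras.applications.MobileNetV2(input_shape=(160, 160, 3),
                                      include_top=False,
                                      weights="imagenet",
                                      pooling="avg")
base.trainable = False   # freeze the pre-trained layers

# New classification head trained on the target task
inputs = keras.Input(shape=(160, 160, 3))
x = keras.layers.Rescaling(1.0 / 127.5, offset=-1.0)(inputs)  # map [0,255] to [-1,1]
x = base(x, training=False)
outputs = keras.layers.Dense(5, activation="softmax")(x)      # 5 classes (assumed)
model = keras.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```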
Module 7. Processing Sequences Using RNN (Recurrent Neural Networks) and CNN (Convolutional Neural Networks)
7.1. Recurrent Neurons and Layers
7.1.1. Types of Recurrent Neurons
7.1.2. Architecture of a Recurrent Layer
7.1.3. Applications of Recurrent Layers
7.2. Recurrent Neural Network (RNN) Training
7.2.1. Backpropagation Through Time (BPTT)
7.2.2. Stochastic Gradient Descent
7.2.3. Regularization in RNN Training
7.3. Evaluation of RNN Models
7.3.1. Evaluation Metrics
7.3.2. Cross Validation
7.3.3. Hyperparameter Tuning
7.4. Pre-Trained RNNs
7.4.1. Pre-Trained Networks
7.4.2. Transfer Learning
7.4.3. Fine Tuning
7.5. Forecasting a Time Series
7.5.1. Statistical Models for Forecasting
7.5.2. Time Series Models
7.5.3. Models Based on Neural Networks
7.6. Interpretation of Time Series Analysis Results
7.6.1. Principal Component Analysis
7.6.2. Cluster Analysis
7.6.3. Correlation Analysis
7.7. Handling of Long Sequences
7.7.1. Long Short-Term Memory (LSTM)
7.7.2. Gated Recurrent Units (GRU)
7.7.3. 1D Convolutional Layers
7.8. Partial Sequence Learning
7.8.1. Deep Learning Methods
7.8.2. Generative Models
7.8.3. Reinforcement Learning
7.9. Practical Application of RNN and CNN
7.9.1. Natural Language Processing
7.9.2. Pattern Recognition
7.9.3. Computer Vision
7.10. Differences from Classical Methods
7.10.1. Classical vs. RNN Methods
7.10.2. Classical vs. CNN Methods
7.10.3. Difference in Training Time
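As an illustration of time-series forecasting (section 7.5) with the long-sequence layers of section 7.7, here is a minimal LSTM sketch; the synthetic sine-wave series, the window size, and the architecture are assumptions for the example.

```python
# Illustrative sketch: predicting the next value of a noisy sine wave
# from a window of past values, using an LSTM.
import numpy as np
from tensorflow import keras

# Build (window -> next value) training pairs from a synthetic series
series = np.sin(np.arange(0, 100, 0.1)) + np.random.normal(scale=0.05, size=1000)
window = 30
X = np.stack([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., np.newaxis].astype("float32")  # shape: (samples, timesteps, features)

model = keras.Sequential([
    keras.Input(shape=(window, 1)),
    keras.layers.LSTM(32),   # long short-term memory layer (7.7.1)
    keras.layers.Dense(1),   # next-value regression output
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=3, batch_size=32, validation_split=0.1)
```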
Module 8. Natural Language Processing (NLP) with Recurrent Neural Networks (RNN) and Attention
8.1. Text Generation Using RNN
8.1.1. Training an RNN for Text Generation
8.1.2. Natural Language Generation with RNN
8.1.3. Text Generation Applications with RNN
8.2. Training Data Set Creation
8.2.1. Preparation of the Data for Training an RNN
8.2.2. Storage of the Training Dataset
8.2.3. Data Cleaning and Transformation
8.3. Sentiment Analysis
8.3.1. Classification of Opinions with RNN
8.3.2. Detection of Themes in Comments
8.3.3. Sentiment Analysis with Deep Learning Algorithms
8.4. Encoder-Decoder Network for Neural Machine Translation
8.4.1. Training an RNN for Machine Translation
8.4.2. Use of an Encoder-Decoder Network for Machine Translation
8.4.3. Improving the Accuracy of Machine Translation with RNNs
8.5. Attention Mechanisms
8.5.1. Application of Attention Mechanisms in RNNs
8.5.2. Use of Attention Mechanisms to Improve Model Accuracy
8.5.3. Advantages of Attention Mechanisms in Neural Networks
8.6. Transformer Models
8.6.1. Using Transformer Models for Natural Language Processing
8.6.2. Application of Transformer Models for Vision
8.6.3. Advantages of Transformer Models
8.7. Transformers for Vision
8.7.1. Use of Transformer Models for Vision
8.7.2. Image Data Pre-Processing
8.7.3. Training a Transformer Model for Vision
8.8. Hugging Face Transformer Library
8.8.1. Using the Hugging Face Transformers Library
8.8.2. Application of the Hugging Face Transformers Library
8.8.3. Advantages of the Hugging Face Transformers Library
8.9. Other Transformer Libraries. Comparison
8.9.1. Comparison Between Different Transformer Libraries
8.9.2. Use of Other Transformer Libraries
8.9.3. Advantages of Other Transformer Libraries
8.10. Development of an NLP Application with RNN and Attention. Practical Application
8.10.1. Development of a Natural Language Processing Application with RNN and Attention
8.10.2. Use of RNNs, Attention Mechanisms and Transformer Models in the Application
8.10.3. Evaluation of the Practical Application
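The Hugging Face Transformers library of section 8.8, applied to the sentiment analysis task of section 8.3, can be sketched in a few lines; the default pre-trained pipeline model and the sample sentences are assumptions for the example, and an Internet connection is needed on the first run to download the model.

```python
# Illustrative sketch: sentiment analysis with a pre-trained Transformer
# via the Hugging Face pipeline API.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default model
results = classifier([
    "This program exceeded my expectations.",
    "The installation process was frustrating.",
])
for r in results:
    print(r["label"], round(r["score"], 3))  # e.g. POSITIVE / NEGATIVE with a score
```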
Module 9. Autoencoders, GANs and Diffusion Models
9.1. Representation of Efficient Data
9.1.1. Dimensionality Reduction
9.1.2. Deep Learning
9.1.3. Compact Representations
9.2. Performing PCA with an Undercomplete Linear Autoencoder
9.2.1. Training Process
9.2.2. Implementation in Python
9.2.3. Use of Test Data
9.3. Stacked Autoencoders
9.3.1. Deep Neural Networks
9.3.2. Construction of Coding Architectures
9.3.3. Use of Regularization
9.4. Convolutional Autoencoders
9.4.1. Design of Convolutional Models
9.4.2. Convolutional Model Training
9.4.3. Results Evaluation
9.5. Denoising Autoencoders
9.5.1. Filter Application
9.5.2. Design of Coding Models
9.5.3. Use of Regularization Techniques
9.6. Sparse Autoencoders
9.6.1. Increasing Coding Efficiency
9.6.2. Minimizing the Number of Parameters
9.6.3. Using Regularization Techniques
9.7. Variational Autoencoders
9.7.1. Use of Variational Optimization
9.7.2. Unsupervised Deep Learning
9.7.3. Deep Latent Representations
9.8. Generation of Fashion MNIST Images
9.8.1. Pattern Recognition
9.8.2. Image Generation
9.8.3. Deep Neural Networks Training
9.9. Generative Adversarial Networks and Diffusion Models
9.9.1. Content Generation from Images
9.9.2. Modeling of Data Distributions
9.9.3. Use of Adversarial Networks
9.10. Implementation of the Models. Practical Application
9.10.1. Implementation of the Models
9.10.2. Use of Real Data
9.10.3. Results Evaluation
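A stacked autoencoder of the kind described in section 9.3, trained on the Fashion MNIST images of section 9.8, can be sketched as follows; the layer sizes and training settings are illustrative assumptions.

```python
# Illustrative sketch: a stacked autoencoder on Fashion MNIST, compressing
# 784-pixel images to a compact latent code and reconstructing them.
from tensorflow import keras

(x_train, _), _ = keras.datasets.fashion_mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0

encoder = keras.Sequential([
    keras.Input(shape=(784,)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(32, activation="relu"),   # compact latent representation
])
decoder = keras.Sequential([
    keras.Input(shape=(32,)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(784, activation="sigmoid"),
])
autoencoder = keras.Sequential([encoder, decoder])

# Reconstruction objective: the input is also the target
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
autoencoder.fit(x_train, x_train, epochs=3, batch_size=256)
```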
Module 10. Reinforcement Learning
10.1. Optimization of Rewards and Policy Search
10.1.1. Reward Optimization Algorithms
10.1.2. Policy Search Processes
10.1.3. Reinforcement Learning for Reward Optimization
10.2. OpenAI
10.2.1. OpenAI Gym Environment
10.2.2. Creation of OpenAI Environments
10.2.3. Reinforcement Learning Algorithms in OpenAI
10.3. Neural Network Policies
10.3.1. Convolutional Neural Networks for Policy Search
10.3.2. Deep Learning Policies
10.3.3. Extending Neural Network Policies
10.4. Evaluating Actions: The Credit Assignment Problem
10.4.1. Risk Analysis for Credit Allocation
10.4.2. Estimating the Profitability of Loans
10.4.3. Credit Evaluation Models Based on Neural Networks
10.5. Policy Gradients
10.5.1. Reinforcement Learning with Policy Gradients
10.5.2. Optimization of Policy Gradients
10.5.3. Policy Gradient Algorithms
10.6. Markov Decision Processes
10.6.1. Optimization of Markov Decision Processes
10.6.2. Reinforcement Learning for Markov Decision Processes
10.6.3. Models of Markov Decision Processes
10.7. Temporal Difference Learning and Q-Learning
10.7.1. Application of Temporal Differences in Learning
10.7.2. Application of Q-Learning in Learning
10.7.3. Optimization of Q-Learning Parameters
10.8. Implementation of Deep Q-Learning and Deep Q-Learning Variants
10.8.1. Construction of Deep Neural Networks for Deep Q-Learning
10.8.2. Implementation of Deep Q-Learning
10.8.3. Variations of Deep Q-Learning
10.9. Reinforcement Learning Algorithms
10.9.1. Reinforcement Learning Algorithms
10.9.2. Reward Learning Algorithms
10.9.3. Punishment Learning Algorithms
10.10. Design of a Reinforcement Learning Environment. Practical Application
10.10.1. Design of a Reinforcement Learning Environment
10.10.2. Implementation of a Reinforcement Learning Algorithm
10.10.3. Evaluation of a Reinforcement Learning Algorithm
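The temporal-difference and Q-Learning material of section 10.7 can be illustrated with a tabular sketch; the five-state corridor environment and all hyperparameters below are assumptions invented for the example (OpenAI Gym, covered in section 10.2, would supply richer environments).

```python
# Illustrative sketch: tabular Q-learning on a tiny hand-rolled MDP, a
# 1-D corridor of 5 states where the agent is rewarded for reaching the
# right end. All values here are assumed for the example.
import numpy as np

n_states, n_actions = 5, 2            # states 0..4; actions: 0 = left, 1 = right
alpha, gamma, epsilon = 0.1, 0.99, 0.1
rng = np.random.default_rng(0)
Q = np.zeros((n_states, n_actions))   # action-value table

def greedy(qrow):
    # argmax with random tie-breaking, so unexplored states act randomly
    best = np.flatnonzero(qrow == qrow.max())
    return int(rng.choice(best))

for episode in range(300):
    s = 0
    done = False
    while not done:
        # Epsilon-greedy behavior policy: mostly exploit, sometimes explore
        a = int(rng.integers(n_actions)) if rng.random() < epsilon else greedy(Q[s])
        s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
        done = s_next == n_states - 1
        r = 1.0 if done else 0.0
        # Temporal-difference (Q-learning) update toward the bootstrapped target
        target = r + (0.0 if done else gamma * Q[s_next].max())
        Q[s, a] += alpha * (target - Q[s, a])
        s = s_next

print(Q.argmax(axis=1))  # states 0-3 should learn action 1 (move right)
```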
Make the most of this opportunity to surround yourself with expert professionals and learn from their work methodology"
Professional Master's Degree in Deep Learning
Deep Learning is a branch of machine learning that focuses on the use of deep neural networks to analyze large data sets and make predictions autonomously. This tool is used in a wide variety of applications, spanning speech recognition, natural language processing and computer vision. If you want to learn about the latest trends in artificial intelligence and machine learning, the Professional Master's Degree in Deep Learning created by TECH Global University is ideal for you. The program is taught 100% online and is built on innovative teaching resources that will enrich your educational experience. The syllabus will allow you to explore aspects such as deep neural networks, natural language processing and computer vision. You will also study robotics, pattern recognition, reinforcement learning and advanced data processing techniques.
Learn all about Deep Learning
Deep Learning is a fundamental discipline for the development of intelligent systems capable of learning and adapting from large amounts of data. This approach is based on deep neural networks, which are composed of several interconnected layers that process information in a non-linear way. Over the course of the Professional Master's Degree, you will gain skills in key areas such as computer vision, natural language processing, robotics and pattern recognition. As you progress through this comprehensive program, designed by industry experts, you will deepen your understanding and application of advanced data processing techniques to solve complex problems. As a result, you will strengthen your professional profile and be able to aspire to excellent career prospects in fields such as research, software development, data engineering and consulting, among others.