University certificate
The world's largest faculty of engineering"
Why study at TECH?
Become an expert in Robotics and Computer Vision in 24 months with this TECH Global University Advanced Master's Degree. Enroll now"

The rise of Artificial Intelligence and Robotics is changing the technological, economic, and social landscape globally. In this context, specialization in areas such as Machine Vision is crucial for keeping up to date in an environment of rapid advances and disruptive change. The increasing interaction between humans and machines, and the need to process visual information efficiently, require highly skilled professionals who can lead innovation and address these challenges.
An ideal scenario for engineering professionals who want to advance in an emerging sector. For this reason, TECH Global University has designed this Advanced Master's Degree in Robotics and Artificial Vision, which provides comprehensive training in these disciplines, covering topics such as Augmented Reality, Artificial Intelligence, and visual information processing in machines, among others.
The program offers a theoretical and practical approach that allows graduates to apply their knowledge in real environments. Moreover, it is a 100% online university degree that allows students to adapt their learning to their personal and professional responsibilities. Students will have access to high-quality educational materials, such as videos, essential readings, and detailed resources, giving them a global vision of Robotics and Artificial Vision.
Likewise, thanks to the Relearning method, based on the continuous repetition of key content, students will reduce their study hours and consolidate the most important concepts more easily.
A unique degree in the academic panorama, further distinguished by its excellent team of specialists in this field. Their deep knowledge of and experience in the sector are evident in an advanced syllabus that only TECH Global University offers.
Become a leader in innovation and address the ethical and safety challenges of creating effective solutions across different industry sectors"
This Advanced Master's Degree in Robotics and Artificial Vision contains the most complete and up-to-date program on the market. Its most important features include:
- The development of case studies presented by IT experts
- Graphic, schematic, and eminently practical contents that provide scientific and practical information on the disciplines essential for professional practice
- Practical exercises where the self-assessment process can be carried out to improve learning
- Special emphasis on innovative methodologies in the development of Robots and Artificial Vision
- Theoretical lessons, questions to the expert, debate forums on controversial topics, and individual reflection assignments
- Content that is accessible from any fixed or portable device with an Internet connection
Take advantage of the opportunity to study in a 100% online program, adapting your study time to your personal and professional circumstances"
Its teaching staff includes professionals from the field of Robotics, who bring to this program the experience of their work, as well as recognized specialists from reference societies and prestigious universities.
Its multimedia content, developed with the latest educational technology, will provide the professional with situated and contextual learning, i.e., a simulated environment that delivers an immersive learning experience designed to prepare for real situations.
This program is designed around Problem-Based Learning, whereby students must try to solve the different professional practice situations that arise throughout the program. For this purpose, professionals will be assisted by an innovative interactive video system created by renowned and experienced experts.
Analyze, through the best didactic material, how to carry out the tuning and parameterization of SLAM algorithms"

Delve whenever and wherever you want into the advances achieved in Deep Learning"
Syllabus
The Advanced Master's Degree in Robotics and Artificial Vision is an excellent option for engineering professionals looking to specialize in this cutting-edge field. The program modules are arranged progressively, allowing students to acquire knowledge gradually and efficiently. The syllabus covers robot design, programming, and control, as well as machine vision algorithms and machine learning techniques: essential skills for success in this constantly evolving field. All of this is complemented by a Virtual Library, accessible 24 hours a day from any digital device with an Internet connection.

Get a global view of Robotics and Machine Vision thanks to access to high-quality educational materials"
Module 1. Robotics. Robot Design and Modeling
1.1. Robotics and Industry 4.0
1.1.1. Robotics and Industry 4.0
1.1.2. Application Fields and Use Cases
1.1.3. Sub-Areas of Specialization in Robotics
1.2. Robot Hardware and Software Architectures
1.2.1. Hardware Architectures and Real-Time
1.2.2. Robot Software Architectures
1.2.3. Communication Models and Middleware Technologies
1.2.4. Robot Operating System (ROS) Software Integration
1.3. Mathematical Modeling of Robots
1.3.1. Mathematical Representation of Rigid Solids
1.3.2. Rotations and Translations
1.3.3. Hierarchical State Representation
1.3.4. Distributed Representation of the State in ROS (TF Library)
1.4. Robot Kinematics and Dynamics
1.4.1. Kinematics
1.4.2. Dynamics
1.4.3. Underactuated Robots
1.4.4. Redundant Robots
1.5. Robot Modeling and Simulation
1.5.1. Robot Modeling Technologies
1.5.2. Robot Modeling with URDF
1.5.3. Robot Simulation
1.5.4. Modeling with Gazebo Simulator
1.6. Robot Manipulators
1.6.1. Types of Manipulator Robots
1.6.2. Kinematics
1.6.3. Dynamics
1.6.4. Simulation
1.7. Terrestrial Mobile Robots
1.7.1. Types of Terrestrial Mobile Robots
1.7.2. Kinematics
1.7.3. Dynamics
1.7.4. Simulation
1.8. Aerial Mobile Robots
1.8.1. Types of Aerial Mobile Robots
1.8.2. Kinematics
1.8.3. Dynamics
1.8.4. Simulation
1.9. Aquatic Mobile Robots
1.9.1. Types of Aquatic Mobile Robots
1.9.2. Kinematics
1.9.3. Dynamics
1.9.4. Simulation
1.10. Bioinspired Robots
1.10.1. Humanoids
1.10.2. Robots with Four or More Legs
1.10.3. Modular Robots
1.10.4. Robots with Flexible Parts (Soft Robotics)
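As a brief, hands-on complement to topics 1.3 and 1.4 above, the following minimal Python sketch chains homogeneous transforms to compute the forward kinematics of a hypothetical planar two-link arm; the link lengths and joint angles are illustrative assumptions only.

```python
# Minimal forward-kinematics sketch for a hypothetical planar two-link arm,
# illustrating rotations and translations with homogeneous transforms (NumPy).
import numpy as np

def transform(theta, length):
    """Homogeneous transform: rotate by theta, then translate along the link."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, length * c],
                     [s,  c, length * s],
                     [0,  0, 1.0]])

def forward_kinematics(thetas, lengths):
    """Chain the per-joint transforms and return the end-effector (x, y)."""
    pose = np.eye(3)
    for theta, length in zip(thetas, lengths):
        pose = pose @ transform(theta, length)
    return pose[0, 2], pose[1, 2]

# Example: joint angles of 30 and 45 degrees, link lengths of 1.0 m and 0.5 m.
print(forward_kinematics([np.pi / 6, np.pi / 4], [1.0, 0.5]))
```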
Module 2. Intelligent Agents. Application of Artificial Intelligence to Robots and Softbots
2.1. Intelligent Agents and Artificial Intelligence
2.1.1. Intelligent Robots. Artificial Intelligence
2.1.2. Intelligent Agents
2.1.2.1. Hardware Agents. Robots
2.1.2.2. Software Agents. Softbots
2.1.3. Robotics Applications
2.2. Brain-Algorithm Connection
2.2.1. Biological Inspiration of Artificial Intelligence
2.2.2. Reasoning Implemented in Algorithms. Typology
2.2.3. Explainability of Results in Artificial Intelligence Algorithms
2.2.4. Evolution of Algorithms up to Deep Learning
2.3. Search Algorithms in the Solution Space
2.3.1. Elements in Solution Space Searches
2.3.2. Solution Search Algorithms in Artificial Intelligence Problems
2.3.3. Applications of Search and Optimization Algorithms
2.3.4. Search Algorithms Applied to Machine Learning
2.4. Machine Learning
2.4.1. Machine Learning
2.4.2. Supervised Learning Algorithms
2.4.3. Unsupervised Learning Algorithms
2.4.4. Reinforcement Learning Algorithms
2.5. Supervised Learning
2.5.1. Supervised Learning Methods
2.5.2. Decision Trees for Classification
2.5.3. Support Vector Machines
2.5.4. Artificial Neural Networks
2.5.5. Applications of Supervised Learning
2.6. Unsupervised Learning
2.6.1. Unsupervised Learning
2.6.2. Kohonen Networks
2.6.3. Self-Organizing Maps
2.6.4. K-Means Algorithm
2.7. Reinforcement Learning
2.7.1. Reinforcement Learning
2.7.2. Agents Based on Markov Processes
2.7.3. Reinforcement Learning Algorithms
2.7.4. Reinforcement Learning Applied to Robotics
2.8. Probabilistic Inference
2.8.1. Probabilistic Inference
2.8.2. Types of Inference and Method Definition
2.8.3. Bayesian Inference as a Case Study
2.8.4. Nonparametric Inference Techniques
2.8.5. Gaussian Filters
2.9. From Theory to Practice: Developing an Intelligent Robotic Agent
2.9.1. Inclusion of Supervised Learning Modules in a Robotic Agent
2.9.2. Inclusion of Reinforcement Learning Modules in a Robotic Agent
2.9.3. Architecture of a Robotic Agent Controlled by Artificial Intelligence
2.9.4. Professional Tools for the Implementation of the Intelligent Agent
2.9.5. Phases of the Implementation of AI Algorithms in Robotic Agents
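To make topic 2.7 concrete, here is a toy tabular Q-learning sketch in Python; the one-dimensional corridor world, rewards, and hyperparameters are hypothetical choices for illustration.

```python
# Toy tabular Q-learning sketch (topic 2.7): an agent learns to walk right
# along a 1-D corridor to a goal state. States, rewards and hyperparameters
# are hypothetical.
import numpy as np

n_states, n_actions = 5, 2        # states 0..4; actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.2
rng = np.random.default_rng(0)

for _ in range(2000):
    s = 0
    while s != n_states - 1:                        # episode ends at the goal
        # Epsilon-greedy action selection.
        a = int(rng.integers(n_actions)) if rng.random() < epsilon else int(np.argmax(Q[s]))
        s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
        r = 1.0 if s_next == n_states - 1 else 0.0  # reward only at the goal
        # Temporal-difference update toward the Bellman target.
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next

print(np.argmax(Q, axis=1))  # greedy policy: expect "right" (1) in states 0-3
```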
Module 3. Deep Learning
3.1. Artificial Intelligence
3.1.1. Machine Learning
3.1.2. Deep Learning
3.1.3. The Explosion of Deep Learning: Why Now?
3.2. Neural Networks
3.2.1. The Neural Network
3.2.2. Uses of Neural Networks
3.2.3. Linear Regression and Perceptron
3.2.4. Forward Propagation
3.2.5. Backpropagation
3.2.6. Feature Vectors
3.3. Loss Functions
3.3.1. Loss Functions
3.3.2. Types of Loss Functions
3.3.3. Choice of Loss Functions
3.4. Activation Functions
3.4.1. Activation Function
3.4.2. Linear Functions
3.4.3. Non-Linear Functions
3.4.4. Output vs. Hidden Layer Activation Functions
3.5. Regularization and Normalization
3.5.1. Regularization and Normalization
3.5.2. Overfitting and Data Augmentation
3.5.3. Regularization Methods: L1, L2 and Dropout
3.5.4. Normalization Methods: Batch, Weight, Layer
3.6. Optimization
3.6.1. Gradient Descent
3.6.2. Stochastic Gradient Descent
3.6.3. Mini Batch Gradient Descent
3.6.4. Momentum
3.6.5. Adam
3.7. Hyperparameter Tuning and Weights
3.7.1. Hyperparameters
3.7.2. Batch Size vs. Learning Rate vs. Step Decay
3.7.3. Weights
3.8. Evaluation Metrics of a Neural Network
3.8.1. Accuracy
3.8.2. Dice Coefficient
3.8.3. Sensitivity vs. Specificity and Recall vs. Precision
3.8.4. ROC Curve (AUC)
3.8.5. F1-Score
3.8.6. Confusion Matrix
3.8.7. Cross-Validation
3.9. Frameworks and Hardware
3.9.1. TensorFlow
3.9.2. PyTorch
3.9.3. Caffe
3.9.4. Keras
3.9.5. Hardware for the Learning Phase
3.10. Creation of a Neural Network: Training and Validation
3.10.1. Dataset
3.10.2. Network Construction
3.10.3. Training
3.10.4. Visualization of Results
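As an illustration of forward propagation, backpropagation, and gradient descent (topics 3.2 and 3.6), the following minimal NumPy sketch trains a tiny two-layer network on XOR; the architecture and learning rate are illustrative assumptions.

```python
# Minimal sketch of forward propagation, backpropagation and gradient descent
# (topics 3.2 and 3.6): a tiny two-layer network fitted to XOR with NumPy.
import numpy as np

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)            # forward pass
    p = sigmoid(h @ W2 + b2)
    # Backward pass for binary cross-entropy loss.
    dz2 = (p - y) / len(X)
    dW2, db2 = h.T @ dz2, dz2.sum(axis=0)
    dz1 = (dz2 @ W2.T) * (1 - h ** 2)   # derivative of tanh
    dW1, db1 = X.T @ dz1, dz1.sum(axis=0)
    # Plain gradient descent update (topic 3.6.1).
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(p.round(2).ravel())  # should approach [0, 1, 1, 0]
```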
Module 4. Robotics in the Automation of Industrial Processes
4.1. Design of Automated Systems
4.1.1. Hardware Architectures
4.1.2. Programmable Logic Controllers
4.1.3. Industrial Communication Networks
4.2. Advanced Electrical Design I: Automation
4.2.1. Design of Electrical Panels and Symbology
4.2.2. Power and Control Circuits. Harmonics
4.2.3. Protection and Grounding Elements
4.3. Advanced Electrical Design II: Determinism and Safety
4.3.1. Machine Safety and Redundancy
4.3.2. Safety Relays and Triggers
4.3.3. Safety PLCs
4.3.4. Safe Networks
4.4. Electrical Actuation
4.4.1. Motors and Servomotors
4.4.2. Frequency Inverters and Controllers
4.4.3. Electrically Actuated Industrial Robotics
4.5. Hydraulic and Pneumatic Actuation
4.5.1. Hydraulic Design and Symbology
4.5.2. Pneumatic Design and Symbology
4.5.3. ATEX Environments in Automation
4.6. Transducers in Robotics and Automation
4.6.1. Position and Velocity Measurement
4.6.2. Force and Temperature Measurement
4.6.3. Presence Measurement
4.6.4. Vision Sensors
4.7. Programming and Configuration of Programmable Logic Controllers (PLCs)
4.7.1. PLC Programming: LD
4.7.2. PLC Programming: ST
4.7.3. PLC Programming: FBD and CFC
4.7.4. PLC Programming: SFC
4.8. Programming and Configuration of Equipment in Industrial Plants
4.8.1. Programming of Drives and Controllers
4.8.2. HMI Programming
4.8.3. Programming of Manipulator Robots
4.9. Programming and Configuration of Industrial Computer Equipment
4.9.1. Programming of Vision Systems
4.9.2. SCADA/Software Programming
4.9.3. Network Configuration
4.10. Automation Implementation
4.10.1. State Machine Design
4.10.2. Implementation of State Machines in PLCs
4.10.3. Implementation of Analog PID Control Systems in PLCs
4.10.4. Automation Maintenance and Code Hygiene
4.10.5. Automation and Plant Simulation
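As a sketch of the state-machine design of topic 4.10, the snippet below models one scan cycle of a hypothetical clamp-and-drill sequence in Python; a real implementation would live in a PLC using an IEC 61131-3 language such as ST or SFC, and all state and sensor names here are invented.

```python
# Minimal state-machine sketch for topic 4.10.1. Real PLCs would implement
# this in an IEC 61131-3 language (LD, ST, SFC); Python stands in here, and
# the states and sensor names are hypothetical.
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    CLAMPING = auto()
    DRILLING = auto()

def step(state, inputs):
    """One scan cycle: compute the next state from sensor inputs."""
    if state is State.IDLE and inputs["start_button"]:
        return State.CLAMPING
    if state is State.CLAMPING and inputs["part_clamped"]:
        return State.DRILLING
    if state is State.DRILLING and inputs["drill_done"]:
        return State.IDLE
    return state

state = State.IDLE
for scan in [{"start_button": True, "part_clamped": False, "drill_done": False},
             {"start_button": False, "part_clamped": True, "drill_done": False},
             {"start_button": False, "part_clamped": False, "drill_done": True}]:
    state = step(state, scan)
    print(state)
```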
Module 5. Automatic Control Systems in Robotics
5.1. Analysis and Design of Nonlinear Systems
5.1.1. Analysis and Modeling of Nonlinear Systems
5.1.2. Feedback Control
5.1.3. Linearization by Feedback
5.2. Design of Control Techniques for Advanced Non-linear Systems
5.2.1. Sliding Mode Control
5.2.2. Lyapunov and Backstepping Control
5.2.3. Control Based on Passivity
5.3. Control Architectures
5.3.1. The Robotics Paradigm
5.3.2. Control Architectures
5.3.3. Applications and Examples of Control Architectures
5.4. Motion Control for Robotic Arms
5.4.1. Kinematic and Dynamic Modeling
5.4.2. Control in Joint Space
5.4.3. Control in Operational Space
5.5. Actuator Force Control
5.5.1. Force Control
5.5.2. Impedance Control
5.5.3. Hybrid Control
5.6. Terrestrial Mobile Robots
5.6.1. Equations of Motion
5.6.2. Control Techniques for Terrestrial Robots
5.6.3. Mobile Manipulators
5.7. Aerial Mobile Robots
5.7.1. Equations of Motion
5.7.2. Control Techniques in Aerial Robots
5.7.3. Aerial Manipulation
5.8. Control Based on Machine Learning Techniques
5.8.1. Control Using Supervised Learning
5.8.2. Control Using Reinforcement Learning
5.8.3. Control Using Unsupervised Learning
5.9. Vision-Based Control
5.9.1. Position-Based Visual Servoing
5.9.2. Image-Based Visual Servoing
5.9.3. Hybrid Visual Servoing
5.10. Predictive Control
5.10.1. Models and State Estimation
5.10.2. MPC Applied to Mobile Robots
5.10.3. MPC Applied to UAVs
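To illustrate the mobile-robot control ideas of topic 5.6, here is a minimal go-to-goal proportional controller on the unicycle kinematic model; the gains, goal, and time step are hypothetical, and a real controller would also handle actuator saturation and obstacles.

```python
# Sketch of a go-to-goal proportional controller on the unicycle kinematic
# model of a terrestrial mobile robot (topic 5.6). Gains and the goal are
# hypothetical.
import numpy as np

x, y, theta = 0.0, 0.0, 0.0          # robot pose
goal = np.array([2.0, 1.0])
k_v, k_w, dt = 0.5, 2.0, 0.05        # linear/angular gains, time step

for _ in range(400):
    dx, dy = goal[0] - x, goal[1] - y
    rho = np.hypot(dx, dy)           # distance to goal
    if rho < 0.01:
        break
    heading_error = np.arctan2(dy, dx) - theta
    heading_error = np.arctan2(np.sin(heading_error), np.cos(heading_error))
    v = k_v * rho                    # forward speed command
    w = k_w * heading_error          # turn-rate command (wrapped error)
    x += v * np.cos(theta) * dt      # integrate the equations of motion
    y += v * np.sin(theta) * dt
    theta += w * dt

print(round(x, 2), round(y, 2))      # should be close to the goal (2.0, 1.0)
```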
Module 6. Planning Algorithms in Robots
6.1. Classical Planning Algorithms
6.1.1. Discrete Planning: State Space
6.1.2. Planning Problems in Robotics. Robotic Systems Models
6.1.3. Classification of Planners
6.2. The Trajectory Planning Problem in Mobile Robots
6.2.1. Forms of Environment Representation: Graphs
6.2.2. Search Algorithms in Graphs
6.2.3. Introduction of Costs in Graphs
6.2.4. Search Algorithms in Weighted Graphs
6.2.5. Any-Angle Search Algorithms
6.3. Planning in High Dimensional Robotic Systems
6.3.1. High Dimensionality Robotics Problems: Manipulators
6.3.2. Direct/Inverse Kinematic Model
6.3.3. Sampling-Based Planning Algorithms: PRM and RRT
6.3.4. Planning Under Dynamic Constraints
6.4. Optimal Sampling Planning
6.4.1. Problem of Sampling-Based Planners
6.4.2. RRT*. Probabilistic Optimality Concept
6.4.3. Reconnection Step: Dynamic Constraints
6.4.4. CForest. Parallelizing Planning
6.5. Real Implementation of a Motion Planning System
6.5.1. Global Planning Problem. Dynamic Environments
6.5.2. Action and Sensing Cycle. Acquiring Information from the Environment
6.5.3. Local and Global Planning
6.6. Coordination in Multi-Robot Systems I: Centralized System
6.6.1. Multi-Robot Coordination Problem
6.6.2. Collision Detection and Resolution: Trajectory Modification with Genetic Algorithms
6.6.3. Other Bio-Inspired Algorithms: Particle Swarm and Fireworks
6.6.4. Collision Avoidance by Maneuver Selection Algorithms
6.7. Coordination in Multi-Robot Systems II: Distributed Approaches I
6.7.1. Use of Complex Objective Functions
6.7.2. Pareto Front
6.7.3. Multi-Objective Evolutionary Algorithms
6.8. Coordination in Multi-Robot Systems III: Distributed Approaches II
6.8.1. First-Order Planning Systems
6.8.2. ORCA Algorithm
6.8.3. Addition of Kinematic and Dynamic Constraints in ORCA
6.9. Decision-Theoretic Planning
6.9.1. Decision Theory
6.9.2. Sequential Decision Systems
6.9.3. Sensors and Information Spaces
6.9.4. Planning for Uncertainty in Sensing and Actuation
6.10. Reinforcement Learning Planning Systems
6.10.1. Obtaining the Expected Reward of a System
6.10.2. Mean Reward Learning Techniques
6.10.3. Inverse Reinforcement Learning
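As a compact example of the graph-search planners of topic 6.2, the following Python sketch runs A* with unit step costs and a Manhattan-distance heuristic on a hypothetical occupancy grid.

```python
# Minimal A* sketch for graph search on a 2-D grid (topics 6.2.2-6.2.4),
# with unit step costs and a Manhattan-distance heuristic. The map is a
# hypothetical 5x5 grid with a small wall.
import heapq

def astar(grid, start, goal):
    """Return the cheapest path from start to goal, or None."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]   # (f, g, node, path)
    seen = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and not grid[r][c]:
                heapq.heappush(frontier,
                               (g + 1 + h((r, c)), g + 1, (r, c), path + [(r, c)]))
    return None

grid = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],   # 1 = obstacle
        [0, 0, 0, 1, 0],
        [0, 1, 0, 1, 0],
        [0, 1, 0, 0, 0]]
print(astar(grid, (0, 0), (4, 4)))
```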
Module 7. Computer Vision
7.1. Human Perception
7.1.1. Human Visual System
7.1.2. Color
7.1.3. Visible and Non-Visible Frequencies
7.2. Chronicle of Computer Vision
7.2.1. Principles
7.2.2. Evolution
7.2.3. The Importance of Computer Vision
7.3. Digital Image Composition
7.3.1. The Digital Image
7.3.2. Types of Images
7.3.3. Color Spaces
7.3.4. RGB
7.3.5. HSV and HSL
7.3.6. CMY-CMYK
7.3.7. YCbCr
7.3.8. Indexed Image
7.4. Image Acquisition Systems
7.4.1. Operation of a Digital Camera
7.4.2. The Correct Exposure for Each Situation
7.4.3. Depth of Field
7.4.4. Resolution
7.4.5. Image Formats
7.4.6. HDR Mode
7.4.7. High Resolution Cameras
7.4.8. High-Speed Cameras
7.5. Optical Systems
7.5.1. Optical Principles
7.5.2. Conventional Lenses
7.5.3. Telecentric Lenses
7.5.4. Types of Autofocus Lenses
7.5.5. Focal Length
7.5.6. Depth of Field
7.5.7. Optical Distortion
7.5.8. Calibration of an Image
7.6. Illumination Systems
7.6.1. Importance of Illumination
7.6.2. Frequency Response
7.6.3. LED Illumination
7.6.4. Outdoor Lighting
7.6.5. Types of Lighting for Industrial Applications. Effects
7.7. 3D Acquisition Systems
7.7.1. Stereo Vision
7.7.2. Triangulation
7.7.3. Structured Light
7.7.4. Time of Flight
7.7.5. Lidar
7.8. Multispectral Imaging
7.8.1. Multispectral Cameras
7.8.2. Hyperspectral Cameras
7.9. Non-Visible Near Spectrum
7.9.1. IR Cameras
7.9.2. UV Cameras
7.9.3. Converting From Non-Visible to Visible by Illumination
7.10. Other Spectral Bands
7.10.1. X-Ray
7.10.2. Terahertz
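As a small practical pointer for the color spaces of topic 7.3, this OpenCV sketch converts one image between several of them; the file name is hypothetical, and note that OpenCV loads images in BGR order.

```python
# Color-space sketch for topic 7.3: converting a BGR image to HSV, YCbCr and
# grayscale with OpenCV. The file name is hypothetical.
import cv2

img = cv2.imread("sample.jpg")                # hypothetical input image (BGR)
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
ycc = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)  # note OpenCV's YCrCb ordering
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
print(img.shape, hsv.shape, ycc.shape, gray.shape)
```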
Module 8. Applications and State-of-the-Art
8.1. Industrial Applications
8.1.1. Machine Vision Libraries
8.1.2. Compact Cameras
8.1.3. PC-Based Systems
8.1.4. Industrial Robotics
8.1.5. 2D Pick and Place
8.1.6. Bin Picking
8.1.7. Quality Control
8.1.8. Presence/Absence of Components
8.1.9. Dimensional Control
8.1.10. Labeling Control
8.1.11. Traceability
8.2. Autonomous Vehicles
8.2.1. Driver Assistance
8.2.2. Autonomous Driving
8.3. Artificial Vision for Content Analysis
8.3.1. Filtering by Content
8.3.2. Visual Content Moderation
8.3.3. Tracking Systems
8.3.4. Brand and Logo Identification
8.3.5. Video Labeling and Classification
8.3.6. Scene Change Detection
8.3.7. Text or Credits Extraction
8.4. Medical Application
8.4.1. Disease Detection and Localization
8.4.2. Cancer and X-Ray Analysis
8.4.3. Advances in Artificial Vision Due to COVID-19
8.4.4. Assistance in the Operating Room
8.5. Spatial Applications
8.5.1. Satellite Image Analysis
8.5.2. Computer Vision for the Study of Space
8.5.3. Mission to Mars
8.6. Commercial Applications
8.6.1. Stock Control
8.6.2. Video Surveillance, Home Security
8.6.3. Parking Cameras
8.6.4. Population Control Cameras
8.6.5. Speed Cameras
8.7. Vision Applied to Robotics
8.7.1. Drones
8.7.2. AGV
8.7.3. Vision in Collaborative Robots
8.7.4. The Eyes of the Robots
8.8. Augmented Reality
8.8.1. Operation
8.8.2. Devices
8.8.3. Applications in the Industry
8.8.4. Commercial Applications
8.9. Cloud Computing
8.9.1. Cloud Computing Platforms
8.9.2. From Cloud Computing to Production
8.10. Research and State-of-the-Art
8.10.1. Commercial Applications
8.10.2. What's Cooking?
8.10.3. The Future of Computer Vision
Module 9. Artificial Vision Techniques in Robotics: Image Processing and Analysis
9.1. Computer Vision
9.1.1. Computer Vision
9.1.2. Elements of a Computer Vision System
9.1.3. Mathematical Tools
9.2. Optical Sensors for Robotics
9.2.1. Passive Optical Sensors
9.2.2. Active Optical Sensors
9.2.3. Non-Optical Sensors
9.3. Image Acquisition
9.3.1. Image Representation
9.3.2. Color Space
9.3.3. Digitizing Process
9.4. Image Geometry
9.4.1. Lens Models
9.4.2. Camera Models
9.4.3. Camera Calibration
9.5. Mathematical Tools
9.5.1. Histogram of an Image
9.5.2. Convolution
9.5.3. Fourier Transform
9.6. Image Preprocessing
9.6.1. Noise Analysis
9.6.2. Image Smoothing
9.6.3. Image Enhancement
9.7. Image Segmentation
9.7.1. Contour-Based Techniques
9.7.2. Histogram-Based Techniques
9.7.3. Morphological Operations
9.8. Image Feature Detection
9.8.1. Point of Interest Detection
9.8.2. Feature Descriptors
9.8.3. Feature Matching
9.9. 3D Vision Systems
9.9.1. 3D Perception
9.9.2. Feature Matching between Images
9.9.3. Multiple View Geometry
9.10. Computer Vision-Based Localization
9.10.1. The Robot Localization Problem
9.10.2. Visual Odometry
9.10.3. Sensor Fusion
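To ground the camera-calibration topic 9.4.3, here is a sketch of OpenCV's chessboard-based calibration; the file names and the 9x6 inner-corner pattern are assumptions for illustration.

```python
# Camera-calibration sketch for topic 9.4.3 using OpenCV's chessboard
# routine. File names and the 9x6 inner-corner pattern are hypothetical.
import glob
import cv2
import numpy as np

pattern = (9, 6)                                  # inner corners per row/col
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for name in glob.glob("calib_*.jpg"):             # hypothetical image set
    gray = cv2.cvtColor(cv2.imread(name), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Intrinsic matrix K and distortion coefficients from all detected views
# (assumes at least one calibration image was found).
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", ret)
print("K =\n", K)
```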
Module 10. Robot Visual Perception Systems with Machine Learning
10.1. Unsupervised Learning Methods applied to Computer Vision
10.1.1. Clustering
10.1.2. PCA
10.1.3. Nearest Neighbors
10.1.4. Similarity and Matrix Decomposition
10.2. Supervised Learning Methods Applied to Computer Vision
10.2.1. The “Bag of Words” Concept
10.2.2. Support Vector Machine
10.2.3. Latent Dirichlet Allocation
10.2.4. Neural Networks
10.3. Deep Neural Networks: Structures, Backbones and Transfer Learning
10.3.1. Feature Generating Layers
10.3.1.1. VGG
10.3.1.2. DenseNet
10.3.1.3. ResNet
10.3.1.4. Inception
10.3.1.5. GoogLeNet
10.3.2. Transfer Learning
10.3.3. Data Preparation for Training
10.4. Computer Vision with Deep Learning I: Detection and Segmentation
10.4.1. YOLO and SSD Differences and Similarities
10.4.2. U-Net
10.4.3. Other Structures
10.5. Computer Vision with Deep Learning II: Generative Adversarial Networks
10.5.1. Image Super-Resolution Using GAN
10.5.2. Creation of Realistic Images
10.5.3. Scene Understanding
10.6. Learning Techniques for Localization and Mapping in Mobile Robotics
10.6.1. Loop Closure Detection and Relocation
10.6.2. Magic Leap. SuperPoint and SuperGlue
10.6.3. Monocular Depth Estimation
10.7. Bayesian Inference and 3D Modeling
10.7.1. Bayesian Models and "Classical" Learning
10.7.2. Implicit Surfaces with Gaussian Processes (GPIS)
10.7.3. 3D Segmentation Using GPIS
10.7.4. Neural Networks for 3D Surface Modeling
10.8. End-to-End Applications of Deep Neural Networks
10.8.1. End-to-End System. Example of Person Identification
10.8.2. Object Manipulation with Visual Sensors
10.8.3. Motion Generation and Planning with Visual Sensors
10.9. Cloud Technologies to Accelerate the Development of Deep Learning Algorithms
10.9.1. Use of GPUs for Deep Learning
10.9.2. Agile Development with Google Colab
10.9.3. Remote GPUs, Google Cloud and AWS
10.10. Deployment of Neural Networks in Real Applications
10.10.1. Embedded Systems
10.10.2. Deployment of Neural Networks. Use
10.10.3. Network Optimization in Deployment. Example with TensorRT
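As a sketch of the transfer learning of topic 10.3.2, the snippet below freezes a pretrained ResNet-18 backbone and retrains only a new classification head (torchvision >= 0.13 weights API); the 5-class task and random batch are hypothetical.

```python
# Transfer-learning sketch for topic 10.3.2: reuse a ResNet backbone
# pretrained on ImageNet and train only a new classification head.
# The 5-class problem and random batch are hypothetical.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():          # freeze the feature-generating layers
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 5)   # new head, trainable by default

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One hypothetical training step on a random batch, for illustration only.
x, y = torch.randn(4, 3, 224, 224), torch.randint(0, 5, (4,))
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(loss.item())
```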
Module 11. Visual SLAM. Robot Localization and Simultaneous Mapping by Computer Vision Techniques
11.1. Simultaneous Localization and Mapping (SLAM)
11.1.1. Simultaneous Localization and Mapping. SLAM
11.1.2. SLAM Applications
11.1.3. SLAM Operation
11.2. Projective Geometry
11.2.1. Pin-Hole Model
11.2.2. Estimation of the Intrinsic Parameters of a Camera
11.2.3. Homography. Basic Principles and Estimation
11.2.4. Fundamental Matrix. Principles and Estimation
11.3. Gaussian Filters
11.3.1. Kalman Filter
11.3.2. Information Filter
11.3.3. Adjustment and Parameterization of Gaussian Filters
11.4. Stereo EKF-SLAM
11.4.1. Stereo Camera Geometry
11.4.2. Feature Extraction and Search
11.4.3. Kalman Filter for Stereo SLAM
11.4.4. Stereo EKF-SLAM Parameter Setting
11.5. Monocular EKF-SLAM
11.5.1. EKF-SLAM Landmark Parameterization
11.5.2. Kalman Filter for Monocular SLAM
11.5.3. Monocular EKF-SLAM Parameter Tuning
11.6. Loop Closure Detection
11.6.1. Brute Force Algorithm
11.6.2. FABMAP
11.6.3. Abstraction Using GIST and HOG
11.6.4. Deep Learning Detection
11.7. Graph-SLAM
11.7.1. Graph-SLAM
11.7.2. RGBD-SLAM
11.7.3. ORB-SLAM
11.8. Direct Visual SLAM
11.8.1. Analysis of the Direct Visual SLAM Algorithm
11.8.2. LSD-SLAM
11.8.3. SVO
11.9. Visual Inertial SLAM
11.9.1. Integration of Inertial Measurements
11.9.2. Loose Coupling: SOFT-SLAM
11.9.3. Tight Coupling: VINS-Mono
11.10. Other SLAM Technologies
11.10.1. Applications Beyond Visual SLAM
11.10.2. Lidar-SLAM
11.10.3. Range-only SLAM
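To illustrate the Gaussian filters of topic 11.3 in their simplest form, here is a one-dimensional Kalman filter tracking a constant value from noisy measurements; the noise variances and simulated sensor are hypothetical.

```python
# Sketch of a 1-D Kalman filter (topic 11.3.1) tracking a constant value
# from noisy measurements. Noise levels and measurements are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
true_value = 5.0
measurements = true_value + rng.normal(0.0, 0.5, size=20)  # noisy sensor

x, P = 0.0, 1.0       # state estimate and its variance
Q, R = 1e-4, 0.25     # process and measurement noise variances

for z in measurements:
    # Predict: constant-state model, uncertainty grows by Q.
    P = P + Q
    # Update: blend prediction and measurement via the Kalman gain.
    K = P / (P + R)
    x = x + K * (z - x)
    P = (1 - K) * P

print(round(x, 2))    # should be close to 5.0
```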
Module 12. Application of Virtual and Augmented Reality Technologies to Robotics
12.1. Immersive Technologies in Robotics
12.1.1. Virtual Reality in Robotics
12.1.2. Augmented Reality in Robotics
12.1.3. Mixed Reality in Robotics
12.1.4. Difference between Realities
12.2. Construction of Virtual Environments
12.2.1. Materials and Textures
12.2.2. Lighting
12.2.3. Virtual Sound and Smell
12.3. Robot Modeling in Virtual Environments
12.3.1. Geometric Modeling
12.3.2. Physical Modeling
12.3.3. Model Standardization
12.4. Modeling of Robot Dynamics and Kinematics. Virtual Physics Engines
12.4.1. Physics Engines. Typology
12.4.2. Configuration of a Physics Engine
12.4.3. Physics Engines in Industry
12.5. Platforms, Peripherals and Tools Most Commonly Used in Virtual Reality
12.5.1. Virtual Reality Viewers
12.5.2. Interaction Peripherals
12.5.3. Virtual Sensors
12.6. Augmented Reality Systems
12.6.1. Insertion of Virtual Elements into Reality
12.6.2. Types of Visual Markers
12.6.3. Augmented Reality Technologies
12.7. Metaverse: Virtual Environments of Intelligent Agents and People
12.7.1. Avatar Creation
12.7.2. Intelligent Agents in Virtual Environments
12.7.3. Construction of Multi-User Environments for VR/AR
12.8. Creation of Virtual Reality Projects for Robotics
12.8.1. Phases of Development of a Virtual Reality Project
12.8.2. Deployment of Virtual Reality Systems
12.8.3. Virtual Reality Resources
12.9. Creating Augmented Reality Projects for Robotics
12.9.1. Phases of Development of an Augmented Reality Project
12.9.2. Deployment of Augmented Reality Projects
12.9.3. Augmented Reality Resources
12.10. Robot Teleoperation with Mobile Devices
12.10.1. Mixed Reality on Mobile Devices
12.10.2. Immersive Systems using Mobile Device Sensors
12.10.3. Examples of Mobile Projects
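As a pointer for the visual markers of topic 12.6.2, this sketch detects ArUco fiducials with OpenCV's aruco module (class-based API of OpenCV >= 4.7); the input image is hypothetical.

```python
# Augmented-reality marker sketch for topic 12.6.2: detecting ArUco fiducial
# markers with OpenCV's aruco module (OpenCV >= 4.7 class-based API).
# The input file is hypothetical.
import cv2

img = cv2.imread("scene.jpg")
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())
corners, ids, rejected = detector.detectMarkers(img)
print("Detected marker ids:", None if ids is None else ids.ravel())
```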
Module 13. Robot Communication and Interaction Systems
13.1. Speech Recognition: Stochastic Systems
13.1.1. Acoustic Speech Modeling
13.1.2. Hidden Markov Models
13.1.3. Linguistic Speech Modeling: N-Grams, BNF Grammars
13.2. Speech Recognition: Deep Learning
13.2.1. Deep Neural Networks
13.2.2. Recurrent Neural Networks
13.2.3. LSTM Cells
13.3. Speech Recognition: Prosody and Environmental Effects
13.3.1. Ambient Noise
13.3.2. Multi-Speaker Recognition
13.3.3. Speech Pathologies
13.4. Natural Language Understanding: Heuristic and Probabilistic Systems
13.4.1. Syntactic-Semantic Analysis: Linguistic Rules
13.4.2. Comprehension Based on Heuristic Rules
13.4.3. Probabilistic Systems: Logistic Regression and SVM
13.4.4. Understanding Based on Neural Networks
13.5. Dialog Management: Heuristic/Probabilistic Strategies
13.5.1. Interlocutor's Intention
13.5.2. Template-Based Dialog
13.5.3. Stochastic Dialog Management: Bayesian Networks
13.6. Dialog Management: Advanced Strategies
13.6.1. Reinforcement-Based Learning Systems
13.6.2. Neural Network-Based Systems
13.6.3. From Speech to Intention in a Single Network
13.7. Response Generation and Speech Synthesis
13.7.1. Response Generation: From Idea to Coherent Text
13.7.2. Speech Synthesis by Concatenation
13.7.3. Stochastic Speech Synthesis
13.8. Dialogue Adaptation and Contextualization
13.8.1. Dialogue Initiative
13.8.2. Adaptation to the Speaker
13.8.3. Adaptation to the Context of the Dialogue
13.9. Robots and Social Interactions: Emotion Recognition, Synthesis and Expression
13.9.1. Artificial Voice Paradigms: Robotic Voice and Natural Voice
13.9.2. Emotion Recognition and Sentiment Analysis
13.9.3. Emotional Voice Synthesis
13.10. Robots and Social Interactions: Advanced Multimodal Interfaces
13.10.1. Combination of Vocal and Tactile Interfaces
13.10.2. Sign Language Recognition and Translation
13.10.3. Visual Avatars: Voice to Sign Language Translation
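To make the n-gram language modeling of topic 13.1.3 tangible, here is a bigram model over a tiny hypothetical corpus, used to score a candidate sentence by maximum likelihood (no smoothing, for illustration only).

```python
# Sketch of the n-gram language modeling of topic 13.1.3: bigram counts over
# a tiny hypothetical corpus, used to score a candidate sentence.
from collections import Counter

corpus = "the robot moves the arm and the robot stops".split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def bigram_prob(w1, w2):
    """P(w2 | w1) by maximum likelihood (no smoothing, for illustration)."""
    return bigrams[(w1, w2)] / unigrams[w1] if unigrams[w1] else 0.0

sentence = "the robot moves".split()
p = 1.0
for w1, w2 in zip(sentence, sentence[1:]):
    p *= bigram_prob(w1, w2)
print(p)   # P(robot|the) * P(moves|robot) = (2/3) * (1/2) = 1/3
```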
Module 14. Digital Image Processing
14.1. Computer Vision Development Environment
14.1.1. Computer Vision Libraries
14.1.2. Programming Environment
14.1.3. Visualization Tools
14.2. Digital Image Processing
14.2.1. Pixel Relationships
14.2.2. Image Operations
14.2.3. Geometric Transformations
14.3. Pixel Operations
14.3.1. Histogram
14.3.2. Histogram Transformations
14.3.3. Operations on Color Images
14.4. Logical and Arithmetic Operations
14.4.1. Addition and Subtraction
14.4.2. Product and Division
14.4.3. AND/NAND
14.4.4. OR/NOR
14.4.5. XOR/XNOR
14.5. Filters
14.5.1. Masks and Convolution
14.5.2. Linear Filtering
14.5.3. Non-Linear Filtering
14.5.4. Fourier Analysis
14.6. Morphological Operations
14.6.1. Erosion and Dilation
14.6.2. Closing and Opening
14.6.3. Top Hat and Black Hat
14.6.4. Contour Detection
14.6.5. Skeleton
14.6.6. Hole Filling
14.6.7. Convex Hull
14.7. Image Analysis Tools
14.7.1. Edge Detection
14.7.2. Detection of Blobs
14.7.3. Dimensional Control
14.7.4. Color Inspection
14.8. Object Segmentation
14.8.1. Image Segmentation
14.8.2. Classical Segmentation Techniques
14.8.3. Real Applications
14.9. Image Calibration
14.9.1. Image Calibration
14.9.2. Methods of Calibration
14.9.3. Calibration Process in a 2D Camera/Robot System
14.10. Image Processing in a Real Environment
14.10.1. Problem Analysis
14.10.2. Image Processing
14.10.3. Feature Extraction
14.10.4. Final Results
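As a quick sketch of the morphological operations of topic 14.6, the snippet below applies erosion, dilation, opening, closing, and top hat with OpenCV, then finds contours; the input file is hypothetical.

```python
# Sketch of the morphological operations of topic 14.6 with OpenCV:
# erosion, dilation, opening, closing and top hat on a thresholded image.
# The input file is hypothetical.
import cv2
import numpy as np

gray = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

kernel = np.ones((5, 5), np.uint8)              # 5x5 structuring element
eroded = cv2.erode(binary, kernel)
dilated = cv2.dilate(binary, kernel)
opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)    # erode then dilate
closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)   # dilate then erode
top_hat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, kernel)   # topic 14.6.3

contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print("Contours found:", len(contours))
```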
Module 15. Advanced Digital Image Processing
15.1. Optical Character Recognition (OCR)
15.1.1. Image Pre-Processing
15.1.2. Text Detection
15.1.3. Text Recognition
15.2. Code Reading
15.2.1. 1D Codes
15.2.2. 2D Codes
15.2.3. Applications
15.3. Pattern Search
15.3.1. Pattern Search
15.3.2. Patterns Based on Gray Level
15.3.3. Patterns Based on Contours
15.3.4. Patterns Based on Geometric Shapes
15.3.5. Other Techniques
15.4. Object Tracking with Conventional Vision
15.4.1. Background Extraction
15.4.2. Meanshift
15.4.3. Camshift
15.4.4. Optical Flow
15.5. Facial Recognition
15.5.1. Facial Landmark Detection
15.5.2. Applications
15.5.3. Facial Recognition
15.5.4. Emotion Recognition
15.6. Panoramas and Alignment
15.6.1. Stitching
15.6.2. Image Composition
15.6.3. Photomontage
15.7. High Dynamic Range (HDR) and Photometric Stereo
15.7.1. Increasing the Dynamic Range
15.7.2. Image Compositing for Contour Enhancement
15.7.3. Techniques for Use in Dynamic Applications
15.8. Image Compression
15.8.1. Image Compression
15.8.2. Types of Compressors
15.8.3. Image Compression Techniques
15.9. Video Processing
15.9.1. Image Sequences
15.9.2. Video Formats and Codecs
15.9.3. Reading a Video
15.9.4. Frame Processing
15.10. Real Application of Image Processing
15.10.1. Problem Analysis
15.10.2. Image Processing
15.10.3. Feature Extraction
15.10.4. Final Results
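To illustrate the gray-level pattern search of topic 15.3, here is a minimal OpenCV template-matching sketch; both image files are hypothetical.

```python
# Pattern-search sketch for topic 15.3: gray-level template matching with
# OpenCV. Both image files are hypothetical.
import cv2

scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("pattern.png", cv2.IMREAD_GRAYSCALE)

scores = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_loc = cv2.minMaxLoc(scores)   # best match location
h, w = template.shape
# Top-left corner best_loc plus size (w, h) gives the matched region.
print("Match at", best_loc, "size", (w, h), "score", round(best_score, 3))
```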
Module 16. 3D Image Processing
16.1. 3D Imaging
16.1.1. 3D Imaging
16.1.2. 3D Image Processing Software and Visualizations
16.1.3. Metrology Software
16.2. Open3D
16.2.1. Library for 3D Data Processing
16.2.2. Features
16.2.3. Installation and Use
16.3. The Data
16.3.1. Depth Maps as 2D Images
16.3.2. Point Clouds
16.3.3. Normals
16.3.4. Surfaces
16.4. Visualization
16.4.1. Data Visualization
16.4.2. Controls
16.4.3. Web Display
16.5. Filters
16.5.1. Distance Between Points. Outlier Removal
16.5.2. High Pass Filter
16.5.3. Downsampling
16.6. Geometry and Feature Extraction
16.6.1. Extraction of a Profile
16.6.2. Depth Measurement
16.6.3. Volume
16.6.4. 3D Geometric Shapes
16.6.5. Shots
16.6.6. Projection of a Point
16.6.7. Geometric Distances
16.6.8. K-d Tree
16.6.9. 3D Features
16.7. Registration and Meshing
16.7.1. Concatenation
16.7.2. ICP
16.7.3. RANSAC 3D
16.8. 3D Object Recognition
16.8.1. Searching for an Object in the 3D Scene
16.8.2. Segmentation
16.8.3. Bin Picking
16.9. Surface Analysis
16.9.1. Smoothing
16.9.2. Orientable Surfaces
16.9.3. Octree
16.10. Triangulation
16.10.1. From Mesh to Point Cloud
16.10.2. Depth Map Triangulation
16.10.3. Triangulation of Unordered Point Clouds
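As a sketch of the Open3D workflow of topics 16.2 and 16.5, the snippet below loads a point cloud, downsamples it, removes statistical outliers, and estimates normals; the input file and parameter values are hypothetical.

```python
# Open3D sketch for topics 16.2 and 16.5: load a point cloud, downsample it,
# remove outliers and estimate normals. The input file is hypothetical.
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")
down = pcd.voxel_down_sample(voxel_size=0.01)           # 1 cm voxel grid
clean, kept = down.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
clean.estimate_normals(
    o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
print(len(pcd.points), "->", len(clean.points), "points after cleaning")
o3d.visualization.draw_geometries([clean])              # interactive viewer
```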
Module 17. Convolutional Neural Networks and Image Classification
17.1. Convolutional Neural Networks
17.1.1. Introduction
17.1.2. Convolution
17.1.3. CNN Building Blocks
17.2. Types of CNN Layers
17.2.1. Convolutional
17.2.2. Activation
17.2.3. Batch Normalization
17.2.4. Pooling
17.2.5. Fully Connected
17.3. Metrics
17.3.1. Confusion Matrix
17.3.2. Accuracy
17.3.3. Precision
17.3.4. Recall
17.3.5. F1 Score
17.3.6. ROC Curve
17.3.7. AUC
17.4. Main Architectures
17.4.1. AlexNet
17.4.2. VGG
17.4.3. ResNet
17.4.4. GoogLeNet
17.5. Image Classification
17.5.1. Introduction
17.5.2. Data Analysis
17.5.3. Data Preparation
17.5.4. Model Training
17.5.5. Model Validation
17.6. Practical Considerations for CNN Training
17.6.1. Optimizer Selection
17.6.2. Learning Rate Scheduler
17.6.3. Check Training Pipeline
17.6.4. Training with Regularization
17.7. Best Practices in Deep Learning
17.7.1. Transfer Learning
17.7.2. Fine Tuning
17.7.3. Data Augmentation
17.8. Statistical Data Evaluation
17.8.1. Number of Datasets
17.8.2. Number of Labels
17.8.3. Number of Images
17.8.4. Data Balancing
17.9. Deployment
17.9.1. Saving and Loading Models
17.9.2. ONNX
17.9.3. Inference
17.10. Case Study: Image Classification
17.10.1. Data Analysis and Preparation
17.10.2. Testing the Training Pipeline
17.10.3. Model Training
17.10.4. Model Validation
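To make the CNN building blocks of topic 17.2 concrete, here is a minimal PyTorch network combining convolution, batch normalization, activation, pooling, and a fully connected classifier; the 3x32x32 input and 10-class output are hypothetical.

```python
# Sketch of the CNN building blocks of topic 17.2 in PyTorch: convolution,
# activation, batch normalization, pooling and a fully connected classifier.
# Input size (3x32x32) and the 10-class output are hypothetical.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # convolutional
            nn.BatchNorm2d(16),                           # batch normalization
            nn.ReLU(),                                    # activation
            nn.MaxPool2d(2),                              # pooling: 32 -> 16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(2),                              # pooling: 16 -> 8
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)  # fully connected

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

logits = SmallCNN()(torch.randn(4, 3, 32, 32))
print(logits.shape)   # torch.Size([4, 10])
```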
Module 18. Object Detection
18.1. Object Detection and Tracking
18.1.1. Object Detection
18.1.2. Use Cases
18.1.3. Object Tracking
18.1.4. Use Cases
18.1.5. Occlusions, Rigid and Non-Rigid Poses
18.2. Evaluation Metrics
18.2.1. IoU: Intersection Over Union
18.2.2. Confidence Score
18.2.3. Recall
18.2.4. Precision
18.2.5. Recall–Precision Curve
18.2.6. Mean Average Precision (mAP)
18.3. Traditional Methods
18.3.1. Sliding Window
18.3.2. Viola-Jones Detector
18.3.3. HOG
18.3.4. Non-Maximum Suppression (NMS)
18.4. Datasets
18.4.1. Pascal VOC
18.4.2. MS COCO
18.4.3. ImageNet (2014)
18.4.4. MOT Challenge
18.5. Two Shot Object Detector
18.5.1. R-CNN
18.5.2. Fast R-CNN
18.5.3. Faster R-CNN
18.5.4. Mask R-CNN
18.6. Single Shot Object Detector
18.6.1. SSD
18.6.2. YOLO
18.6.3. RetinaNet
18.6.4. CenterNet
18.6.5. EfficientDet
18.7. Backbones
18.7.1. VGG
18.7.2. ResNet
18.7.3. MobileNet
18.7.4. ShuffleNet
18.7.5. Darknet
18.8. Object Tracking
18.8.1. Classical Approaches
18.8.2. Particle Filters
18.8.3. Kalman
18.8.4. SORT Tracker
18.8.5. DeepSORT
18.9. Deployment
18.9.1. Computing Platform
18.9.2. Choice of Backbone
18.9.3. Choice of Framework
18.9.4. Model Optimization
18.9.5. Model Versioning
18.10. Case Study: Detection and Tracking of People
18.10.1. Detection of People
18.10.2. Monitoring of People
18.10.3. Re-Identification
18.10.4. Counting People in Crowds
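As a worked example of the IoU metric of topic 18.2.1, here is a short Python function for axis-aligned boxes in (x1, y1, x2, y2) form; the example boxes are hypothetical.

```python
# Sketch of the IoU metric of topic 18.2.1 for axis-aligned boxes given as
# (x1, y1, x2, y2). The example boxes are hypothetical.
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned bounding boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175, roughly 0.143
```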
Module 19. Image Segmentation with Deep Learning
19.1. Object Detection and Segmentation
19.1.1. Semantic Segmentation
19.1.1.1. Semantic Segmentation Use Cases
19.1.2. Instance Segmentation
19.1.2.1. Instance Segmentation Use Cases
19.2. Evaluation Metrics
19.2.1. Similarities with Other Methods
19.2.2. Pixel Accuracy
19.2.3. Dice Coefficient (F1 Score)
19.3. Cost Functions
19.3.1. Dice Loss
19.3.2. Focal Loss
19.3.3. Tversky Loss
19.3.4. Other Functions
19.4. Traditional Segmentation Methods
19.4.1. Thresholding with the Otsu and Ridler Methods
19.4.2. Self-Organizing Maps
19.4.3. GMM-EM Algorithm
19.5. Semantic Segmentation Applying Deep Learning: FCN
19.5.1. FCN
19.5.2. Architecture
19.5.3. FCN Applications
19.6. Semantic Segmentation Applying Deep Learning: U-NET
19.6.1. U-NET
19.6.2. Architecture
19.6.3. U-NET Application
19.7. Semantic Segmentation Applying Deep Learning: DeepLab
19.7.1. DeepLab
19.7.2. Architecture
19.7.3. DeepLab Application
19.8. Instance Segmentation Applying Deep Learning: Mask R-CNN
19.8.1. Mask R-CNN
19.8.2. Architecture
19.8.3. Application of a Mask R-CNN
19.9. Video Segmentation
19.9.1. STFCN
19.9.2. Semantic Video CNNs
19.9.3. Clockwork ConvNets
19.9.4. Low-Latency
19.10. Point Cloud Segmentation
19.10.1. The Point Cloud
19.10.2. PointNet
19.10.3. A-CNN
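To illustrate the Dice coefficient and Dice loss of topics 19.2.3 and 19.3.1, here is a minimal NumPy sketch over binary masks, with a smoothing term to avoid division by zero; the example masks are hypothetical.

```python
# Sketch of the Dice coefficient and Dice loss of topics 19.2.3 and 19.3.1
# for binary masks, with a smoothing term to avoid division by zero.
import numpy as np

def dice_coefficient(pred, target, smooth=1e-6):
    """2*|A intersect B| / (|A| + |B|) over flattened binary (or soft) masks."""
    pred, target = pred.ravel(), target.ravel()
    intersection = (pred * target).sum()
    return (2.0 * intersection + smooth) / (pred.sum() + target.sum() + smooth)

def dice_loss(pred, target):
    return 1.0 - dice_coefficient(pred, target)

a = np.array([[1, 1, 0], [0, 1, 0]], dtype=float)   # hypothetical prediction
b = np.array([[1, 0, 0], [0, 1, 1]], dtype=float)   # hypothetical ground truth
print(dice_coefficient(a, b))   # 2*2 / (3+3), roughly 0.667
```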
Module 20. Advanced Image Segmentation and Advanced Computer Vision Techniques
20.1. Database for General Segmentation Problems
20.1.1. Pascal Context
20.1.2. CelebAMask-HQ
20.1.3. Cityscapes Dataset
20.1.4. CCP Dataset
20.2. Semantic Segmentation in Medicine
20.2.1. Semantic Segmentation in Medicine
20.2.2. Datasets for Medical Problems
20.2.3. Practical Applications
20.3. Annotation Tools
20.3.1. Computer Vision Annotation Tool
20.3.2. LabelMe
20.3.3. Other Tools
20.4. Segmentation Tools Using Different Frameworks
20.4.1. Keras
20.4.2. TensorFlow v2
20.4.3. PyTorch
20.4.4. Others
20.5. Semantic Segmentation Project. The Data, Phase 1
20.5.1. Problem Analysis
20.5.2. Input Source for Data
20.5.3. Data Analysis
20.5.4. Data Preparation
20.6. Semantic Segmentation Project. Training, Phase 2
20.6.1. Algorithm Selection
20.6.2. Training
20.6.3. Evaluation
20.7. Semantic Segmentation Project. Results, Phase 3
20.7.1. Fine Tuning
20.7.2. Presentation of the Solution
20.7.3. Conclusions
20.8. Autoencoders
20.8.1. Autoencoders
20.8.2. Architecture of an Autoencoder
20.8.3. Noise Removal Autoencoders
20.8.4. Automatic Coloring Autoencoder
20.9. Generative Adversarial Networks (GANs)
20.9.1. Generative Adversarial Networks (GANs)
20.9.2. DCGAN Architecture
20.9.3. Conditional GAN Architecture
20.10. Enhanced Generative Adversarial Networks
20.10.1. Overview of the Problem
20.10.2. WGAN
20.10.3. LSGAN
20.10.4. ACGAN
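As a sketch of the autoencoder architecture of topic 20.8, the snippet below builds a small fully connected encoder-decoder in PyTorch and computes a reconstruction loss on a random batch; all sizes are hypothetical.

```python
# Sketch of the autoencoder architecture of topic 20.8 in PyTorch: an encoder
# compresses flattened 28x28 images to a small latent code and a decoder
# reconstructs them. Sizes are hypothetical.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(28 * 28, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),             # bottleneck code
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, 28 * 28), nn.Sigmoid(),  # pixels in [0, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
x = torch.rand(16, 28 * 28)                       # hypothetical image batch
loss = nn.functional.mse_loss(model(x), x)        # reconstruction loss
print(loss.item())
```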

Differentiate yourself from your competitors by acquiring specialized skills in a field with great growth potential"
Advanced Master’s Degree in Robotics and Artificial Vision
Robotics and artificial vision are two disciplines that have revolutionized the way we interact with technology and have transformed industry across sectors. At TECH Global University, in collaboration with the School of Engineering, we have developed this postgraduate Advanced Master's Degree in Robotics and Artificial Vision to provide professionals with specialized virtual training in these areas of high demand in today's technology market. Thanks to an innovative methodology that combines virtual classes with the Relearning method, you will acquire solid competencies in an immersive, flexible environment that easily adapts to your routine.
In this online postgraduate course, participants will acquire advanced knowledge in robotics and machine vision, from theoretical fundamentals to practical applications in the design and development of intelligent robotic systems. Our interdisciplinary approach enables participants to understand the key concepts of robotics and machine vision and to apply advanced techniques and tools to solve real-world problems in different contexts. In addition, they will be guided by a specialized faculty with extensive experience in the research and application of robotics and machine vision in industry and academia.