EE-662

EE-662: Applied Parameter & State Estimation


Instructors

Dr. Abubakr Muhammad, Assistant Professor of Electrical Engineering

Email: abubakr [at] lums.edu.pk

Office: Room 9-311A, 3rd Floor, SSE Bldg

Course Details

Year: 2012-13

Semester: Spring

Category: Graduate

Credits: 3

Elective course for electrical engineering majors

Course Website: http://cyphynets.lums.edu.pk/index.php/EE-662

Course Description

In this course we develop a hands-on yet rigorous approach to tackling uncertainty in the dynamical evolution of an engineering system. We learn about the main sources of uncertainty and how to model them statistically. We learn that installing sensors on an uncertain system can help reduce this uncertainty; however, sensors themselves introduce noise. Still, there are amazingly efficient algorithms that process sensor data and minimize the uncertainty due to both sensor and process noise. You will learn about the computer algorithm that navigated man to the moon and whose implementation requirements inspired the microelectronics revolution. Main topics of the course include Kalman filters, Bayesian estimation, particle filters, and Markov decision processes, with applications in robot navigation, geophysical data assimilation, signal detection, radar tracking, computer vision, aerospace guidance & control, and many more.
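
To give a flavor of the kind of algorithm studied in this course, here is a minimal MATLAB sketch of the predict/update cycle of a scalar Kalman filter tracking a random-walk state through a noisy sensor. The model, the noise variances and all variable names are illustrative assumptions, not material from the lectures.

  % Minimal scalar Kalman filter sketch (illustrative assumptions only):
  %   state        x[k+1] = x[k] + w[k],   w ~ N(0, Q)
  %   measurement  y[k]   = x[k] + v[k],   v ~ N(0, R)
  Q = 0.01;  R = 1.0;               % assumed process and sensor noise variances
  xhat = 0;  P = 10;                % initial estimate and its variance
  x = 0;  rng(0);                   % true (simulated) state; reproducible noise
  for k = 1:100
      x = x + sqrt(Q)*randn;        % the uncertain system evolves
      y = x + sqrt(R)*randn;        % the noisy sensor observes it
      xpred = xhat;                 % predict: propagate the estimate ...
      Ppred = P + Q;                % ... and its uncertainty through the dynamics
      K = Ppred/(Ppred + R);        % Kalman gain
      xhat = xpred + K*(y - xpred); % update: blend prediction with measurement
      P = (1 - K)*Ppred;            % posterior uncertainty shrinks
  end
  fprintf('estimate %.3f, true state %.3f, variance %.3f\n', xhat, x, P);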

Objectives

  • To introduce an applied perspective on using estimation techniques in state space models of nonlinear non-Gaussian dynamical systems.
  • To introduce applications of state estimation in robot navigation, geophysical data assimilation, signal detection, radar tracking, computer vision etc.

Learning Outcomes

  • To identify and model uncertainties in sensors and dynamics of engineering systems.
  • To learn a unifying mathematical framework for tackling a vast range of estimation problems.
  • To appreciate the common ground among approaches to uncertainty quantification in the seemingly diverse disciplines of mathematical statistics, machine learning, signal processing, inverse problems, and stochastic control theory.

Pre-requisites

EE-561 (Digital Control Systems) and EE-501 (Applied Probability), or by permission of the instructor.

Text book

The course will be taught from the following textbook.

  • Optimal State Estimation by Dan Simon (Wiley, 2006)

Other important references include

  • Probabilistic Robotics by Thrun, Burgard, Fox (MIT Press, 2006)
  • Fundamentals of Statistical Signal Processing, Volume I: Estimation Theory by Steven M. Kay (Prentice Hall, 1993)
  • Estimation with Applications to Tracking and Navigation by Yaakov Bar-Shalom, X. Rong Li, Thiagalingam Kirubarajan (Wiley, 2001)

Grading Scheme

Homeworks: 20%

Project: 25%

Midterm Examination: 25%

Final: 30%

Policies and Guidelines

  • Quizzes will be announced. There will be no makeup quizzes.
  • Homework will be due at the beginning of the class on the due date. Late homework will not be accepted.
  • You are allowed to collaborate on homework. However, copying solutions is absolutely not permitted. Offenders will be reported for disciplinary action as per university rules.
  • Any appeals regarding the grading of homework, quiz, or midterm scores must be resolved within one week of the return of the graded material.
  • Attendance in lectures and tutorials is strongly recommended but not mandatory. However, you are responsible for all announcements made in class.
  • Many of the homeworks will include MATLAB-based computer exercises. Some proficiency in programming numerical algorithms is essential for both the homeworks and the project.


Schedule

WEEK TOPICS REFERENCES
Week 1. Aug 19 Lecture 1. Introduction to concepts of control, feedback, feedforward, uncertainty and robustness;

Recitation. Review of SISO continuous-time signals and systems;

FranklinF Ch1;
Week 2. Aug 26 Lecture 2. Review of SISO feedback control; rational LTI systems; geometry of 2nd order poles; error expression in closed loop and open loop systems; sensitivity function; control design objectives;

Lecture 3. Summary of control design; compensators and PID controllers; introduction to sampled data systems; Naive approaches towards emulation; Euler's forward approximation; a pseudo-algorithm for controller implementation;

FranklinD 2, FranklinF 4.4
Week 3. Sept 2 Lecture 4. Digital control by emulation; Euler's forward and backward approximation; trapezoidal rule; approximation of a continuous time compensator; zero order hold (ZOH) and delays; general difference equations; introduction to the Z-transform;

Lecture 5. Solution of difference equations by Z-transform method; transfer functions; integrator approximation in transform domain; continuous-to-discrete approximations for controller synthesis by emulation; block diagram representations using delays, summers and gain

Recitation / Seminar. Feedback control scheduling of crane control systems. Announcement. Slides

FranklinD Ch 3

Homework #1

Homework #1 solutions
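
As an illustrative companion to the emulation methods listed for this week, the sketch below discretizes an assumed first-order lead compensator by the trapezoidal (Tustin) rule and by a hand-applied forward-Euler substitution. The compensator D(s), its gains and the sample time are made-up values, and the tf/c2d/step calls assume the MATLAB Control System Toolbox is available.

  % Emulation of an assumed lead compensator D(s) = Kc (s + a)/(s + b)
  Kc = 5;  a = 1;  b = 10;  T = 0.05;          % assumed compensator and sample time
  Ds = tf(Kc*[1 a], [1 b]);                    % continuous-time compensator
  Dz_tustin = c2d(Ds, T, 'tustin');            % trapezoidal (Tustin) approximation
  % forward-Euler emulation by hand: substitute s -> (z - 1)/T
  Dz_euler = tf(Kc*[1, a*T - 1], [1, b*T - 1], T);
  % the Euler version corresponds to the difference equation
  %   u[k] = (1 - b*T) u[k-1] + Kc*( e[k] + (a*T - 1) e[k-1] )
  step(Ds, Dz_tustin, Dz_euler);               % compare the three step responses
  legend('continuous', 'Tustin', 'forward Euler');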

Week 4. Sept 9 Lecture 6. Impulse response and convolution in discrete-time systems; tests for linearity, time-invariance, stability, causality; basic block diagrams; canonical forms;

Lecture 7. Frequency response of discrete-time LTI systems; Discrete-time Fourier transform; relation to Z-transform; time and frequency analysis of prototypical first order and second order discrete-time systems;

Lecture 8. Comparison of Z-transform and Laplace transform of sampled signals; mapping between s-plane and z-plane; mappings induced by Euler and trapezoidal approximations; Tustin's approximation of a continuous-time first order system; distortion in frequency response due to trapezoidal approximation;

FranklinD Ch 3; Oppenheim 5.1.1, 6.6.2; FranklinD 6.1;

Homework #2

Homework #2 solutions
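
The frequency-response distortion caused by the trapezoidal approximation (this week's last topic) can be visualized with the short sketch below. The first-order system H(s) = a/(s + a), its cutoff a and the sample period T are assumed values; the Tustin response should coincide with H(jw) evaluated at the warped frequency (2/T)*tan(wT/2), and both drift away from the true continuous response as w approaches the Nyquist rate.

  % Frequency warping of the Tustin approximation (illustrative values)
  a = 10;  T = 0.1;
  w = logspace(-1, log10(pi/T), 200);          % frequencies up to pi/T
  Hc = a ./ (1j*w + a);                        % continuous-time response H(jw)
  z  = exp(1j*w*T);                            % evaluate on the unit circle
  Hd = a ./ ((2/T)*(z - 1)./(z + 1) + a);      % Tustin: s -> (2/T)(z - 1)/(z + 1)
  Hwarp = a ./ (1j*(2/T)*tan(w*T/2) + a);      % H(jw) at the warped frequency
  semilogx(w, abs(Hc), w, abs(Hd), '--', w, abs(Hwarp), ':');
  legend('continuous', 'Tustin', 'warped continuous');
  xlabel('frequency (rad/s)');  ylabel('magnitude');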

Week 5. Sept 16 Lecture 9. Example on frequency response distortion(contd.); introduction to state space analysis; idea of a state; state-space model of Newtonian mechanics; FranklinD 6.1; FranklinF Ch 7;
Week 6. Sept 23 Lecture 10. Examples of state-space modeling; block diagrams and state-space models;

Lecture 11. Control canonical form and modal canonical form for SISO systems; how to find explicit transformations to setup control canonical form; the idea of a controllability matrix;

Recitation. Quiz #1

Quiz #1 solutions

FranklinF Ch 7.2, 7.3;

Homework #3

Homework #3 solutions
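
A possible MATLAB check of the controllability-matrix idea from Lecture 11 is sketched below; the two-state example (a discretized double integrator) and the sample time are assumptions chosen only to have concrete numbers.

  % Controllability test via the controllability matrix [B AB A^2B ...]
  T = 0.1;
  A = [1 T; 0 1];  B = [T^2/2; T];             % assumed example system
  n = size(A, 1);
  Wc = B;                                      % build the matrix block by block
  for k = 1:n-1
      Wc = [Wc, A^k * B];
  end
  if rank(Wc) == n
      disp('system is controllable');
  else
      disp('system is NOT controllable');
  end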

Week 7. Sept 30 Lecture 12. Controllability (contd.); invariance of controllability condition under invertible transformations; computing dynamic response from state-equations using Laplace transform; relationship between transfer functions and state-space models for a SISO LTI system;

Lecture 13. Interpretation of transfer function poles in state-space models (contd.); Interpretation of transfer function zeros in state-space models; constructing explicit transformations to obtain Modal canonical form; poles, modes and eigen-decomposition of system matrix; examples;

FranklinF Ch 7.3, 7.4;
Week 8. Oct 7 Lecture 14. Problem solving & review session

Midterm Exam

Midterm Exam solutions

Week 9. Oct 14 Eid/Midterm Break.
Week 10. Oct 21 Lecture 15. Review of canonical forms; the concept of state feedback; pole-placement; Ackermann's formula;

Lecture 16. Review of pole placement; Ackermann's formula and controllability; how to add references for trajectory tracking; state-estimator design; concept of an observer; observer design; observer canonical form; observer design by Ackermann's formula; observability matrix;

FranklinF 7.5.1, 7.5.2;

Homework #4

Homework #4 solutions
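
The sketch below applies Ackermann's formula for pole placement to an assumed two-state plant; the plant matrices and the desired closed-loop pole locations are illustration values, not examples taken from the lectures.

  % Pole placement by Ackermann's formula (assumed plant and poles)
  T = 0.1;
  A = [1 T; 0 1];  B = [T^2/2; T];             % assumed discretized double integrator
  p_des = [0.8 + 0.1j, 0.8 - 0.1j];            % assumed desired closed-loop poles
  Wc  = [B, A*B];                              % controllability matrix (n = 2)
  phi = polyvalm(poly(p_des), A);              % desired characteristic polynomial at A
  K   = [0 1] / Wc * phi;                      % K = [0 ... 0 1] * inv(Wc) * phi(A)
  disp(eig(A - B*K));                          % should reproduce p_des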

Week 11. Oct 28 Lecture 17. Review of combined state feedback and observer design; definitions of controllability and observability; physical interpretation of observability and controllability; examples of uncontrollable and unobservable systems from circuit theory and mechanics;

Lecture 18. Solutions of continuous-time LTI state-space models; a re-look at forced and natural responses; matrix exponential and its properties; Discrete-time state space models; discretized matrix equivalents of continuous-time LTI models;

FranklinF 7.7.1, 7.8; Chen 4.2;
Week 12. Nov 4 Lecture 19. Discrete-time state space models continued; example of double-integrator; a re-look at Zero order hold (ZOH) in state-space models and transfer functions;

Lecture 20. Discrete-time state-space form of Euler and Tustin's approximations; full-state feedback control in discrete-time LTI systems; pole placement; control canonical form; Ackermann's formula re-visited; graphical understanding of the z-plane via mapping of s-plane contours of constant damping ratio and natural frequency;

Lecture 21 / Recitation. State-space models from difference equations; FIR and IIR filters; state-space modeling example: Laplacian dynamics in networked control systems

FranklinD 4.3.3; 4.3.1; 4.2.3; 8.1;
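
As a companion to the double-integrator discretization discussed in this and the previous week, the sketch below computes the exact zero-order-hold equivalent (Ad, Bd) using the matrix exponential of an augmented matrix; the sample time is an assumed value, and the augmented-matrix construction is one standard way of evaluating the required integral.

  % Exact ZOH discretization of the continuous-time double integrator
  A = [0 1; 0 0];  B = [0; 1];  T = 0.1;       % assumed sample time
  n = size(A, 1);  m = size(B, 2);
  M  = expm([A, B; zeros(m, n + m)] * T);      % exponential of the augmented matrix
  Ad = M(1:n, 1:n);                            % expected: [1 T; 0 1]
  Bd = M(1:n, n+1:n+m);                        % expected: [T^2/2; T]
  disp(Ad);  disp(Bd);
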
Week 13. Nov 11 Lecture 22. Prediction estimators; observability in discrete-time; observability as a dual concept to controllability; derivation of Ackermann's formula for observer design;

Lecture 23. Discrete-time Regulator design; combining control law and estimator; proof of Separation Principle; regulators reinterpreted as classical z-domain compensators;

Quiz #2

Quiz #2 solutions

FranklinD 8.2.1; 8.3;

Homework #5

Homework #5 solutions
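
A minimal simulation of the regulator structure from Lecture 23 (state feedback acting on the output of a prediction estimator) might look like the sketch below; the plant, the gains K and L, and the initial conditions are assumed values chosen so that both the state and the estimation error decay.

  % Combined control law and prediction estimator (assumed values throughout)
  T = 0.1;
  A = [1 T; 0 1];  B = [T^2/2; T];  C = [1 0];
  K = [14.0 4.6];                              % assumed state-feedback gain
  L = [0.6; 1.0];                              % assumed prediction-estimator gain
  x = [1; 0];  xhat = [0; 0];                  % true state and its estimate
  for k = 1:100
      u = -K*xhat;                             % control law uses the estimate
      y = C*x;                                 % measured output
      xhat = A*xhat + B*u + L*(y - C*xhat);    % prediction estimator update
      x = A*x + B*u;                           % plant update
  end
  disp(norm(x));  disp(norm(x - xhat));        % both should be small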

Week 14. Nov 18 Lecture 24. Adding reference to standard regulator design for trajectory tracking; Feedforward loop for zero tracking error; determination of pre-filter matrices; state-command structure; output command structure;

Lecture 25. Integral control; state augmentation; disturbance estimation; observability and disturbance estimation;

Recitation. Jordan decomposition method for handling repeated and complex poles.

FranklinD 8.4;

Notes on Jordan decomposition.

Week 15. Nov 25 Lecture 26. Optimal control in discrete-time; Lagrange multiplier method;

Lecture 27. Linear Quadratic Regulator (LQR); steady-state optimal control; algebraic Riccati equations;

FranklinD 9.2;

Homework #6

Homework #6 solutions
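
A compact way to see the steady-state LQR result from Lecture 27 is to iterate the discrete-time Riccati recursion until it converges, as in the sketch below; the plant and the weights Q and R are assumed values, and with the Control System Toolbox the resulting gain should match dlqr(A, B, Q, R).

  % Steady-state LQR gain by iterating the discrete-time Riccati equation
  T = 0.1;
  A = [1 T; 0 1];  B = [T^2/2; T];
  Q = diag([1, 0.1]);  R = 0.01;               % assumed state and control weights
  P = Q;                                       % initialize the cost-to-go matrix
  for k = 1:5000                               % iterate to (numerical) steady state
      P = Q + A'*P*A - (A'*P*B) / (R + B'*P*B) * (B'*P*A);
  end
  K = (R + B'*P*B) \ (B'*P*A);                 % steady-state optimal feedback gain
  disp(K);  disp(eig(A - B*K));                % closed-loop poles inside the unit circle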

Week 16. Dec 2 Lecture 28. Review Lecture

Quiz #3

Project Presentations.
