Slides4c

Information about Slides4c

Published on January 7, 2008

Author: Esteban

Source: authorstream.com

Content

4.2 Uncertainty Representation

Sensing is always subject to uncertainty.
- What are the sources of uncertainty?
- How can uncertainty be represented or quantified?
- How does uncertainty propagate, i.e. what is the uncertainty of a function of uncertain values?
- How do uncertainties combine if different sensor readings are fused?
- What is the merit of all this for mobile robotics?

Some definitions:
- Sensitivity: G = out / in
- Resolution: the smallest change that can be detected
- Dynamic range: value_max / resolution (typically 10^4 to 10^6)
- Accuracy: error_max = (measured value) - (true value)

Errors are usually unknown:
- deterministic
- non-deterministic (random)

4.2 Uncertainty Representation (2)

Statistical representation and independence of random variables (developed on the blackboard).

4.2 Gaussian Distribution

(Figure: the Gaussian probability density function, with peak value about 0.4 and abscissa ticks at -2, -1, 1, 2.)

4.2.1 The Error Propagation Law: Motivation

Imagine extracting a line based on point measurements with uncertainties. The model parameters r (length of the perpendicular) and theta (its angle to the abscissa) describe a line uniquely. The question: what is the uncertainty of the extracted line, knowing the uncertainties of the measurement points that contribute to it?

4.2.2 The Error Propagation Law

Error propagation in a multiple-input, multiple-output system with n inputs and m outputs. Starting from the one-dimensional case of a nonlinear error propagation problem, it can be shown that the output covariance matrix C_Y is given by the error propagation law

    C_Y = F_X C_X F_X^T

where
- C_X is the covariance matrix representing the input uncertainties,
- C_Y is the covariance matrix representing the propagated uncertainties of the outputs,
- F_X is the Jacobian matrix, defined as F_X = df/dX, i.e. the transpose of the gradient of f(X).
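As a sketch of the error propagation law above (not part of the original slides; the polar-to-Cartesian example and all function names are illustrative), the output covariance C_Y = F_X C_X F_X^T can be formed with a numerically estimated Jacobian:

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Estimate the Jacobian F_X of f at x by central differences."""
    x = np.asarray(x, dtype=float)
    y0 = np.atleast_1d(f(x))
    J = np.zeros((y0.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (np.atleast_1d(f(x + dx)) - np.atleast_1d(f(x - dx))) / (2 * eps)
    return J

def propagate_covariance(f, x, Cx):
    """Error propagation law: C_Y = F_X C_X F_X^T."""
    J = numerical_jacobian(f, x)
    return J @ Cx @ J.T

# Illustrative input: a polar range reading (rho, theta) converted to Cartesian (x, y)
polar_to_cart = lambda p: np.array([p[0] * np.cos(p[1]), p[0] * np.sin(p[1])])
x = np.array([2.0, np.pi / 4])        # rho = 2 m, theta = 45 degrees
Cx = np.diag([0.01, 0.0003])          # input variances of rho and theta
Cy = propagate_covariance(polar_to_cart, x, Cx)
print(Cy)                              # 2x2 output covariance of (x, y)
```

With an analytic Jacobian the central-difference step would simply be replaced by the closed-form partial derivatives; the propagation formula itself is unchanged.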
4.3 Feature Extraction - Scene Interpretation

A mobile robot must be able to determine its relationship to the environment by sensing and interpreting the measured signals. A wide variety of sensing technologies are available, as we have seen in the previous section. However, the main difficulty lies in interpreting these data, that is, in deciding what the sensor signals tell us about the environment.
- Choice of sensors (e.g. indoor, outdoor, walls, free space, ...)
- Choice of the environment model

Processing chain: sensing -> signal treatment -> feature extraction -> scene interpretation.

4.3 Features

Features are distinctive elements or geometric primitives of the environment. They can usually be extracted from measurements and described mathematically:
- low-level features (geometric primitives) such as lines and circles;
- high-level features such as edges, doors, tables, or trash cans.
In mobile robotics, features help with localization and map building.

4.3 Environment Representation and Modeling -> Features

Environment representation:
- Continuous metric -> x, y, theta
- Discrete metric -> metric grid
- Discrete topological -> topological grid

Environment modeling:
- Raw sensor data, e.g. laser range data or grayscale images: large volume of data, low distinctiveness; makes use of all acquired information.
- Low-level features, e.g. lines and other geometric features: medium volume of data, average distinctiveness; filters out the useful information, but ambiguities remain.
- High-level features, e.g. doors, a car, the Eiffel Tower: low volume of data, high distinctiveness; filters out the useful information with few or no ambiguities, but may not retain enough information.

4.3 Environment Models: Examples

A: feature-based model. B: occupancy grid.

4.3.1 Feature Extraction Based on Range Images

Geometric primitives such as line segments, circles, corners, and edges. For most other geometric primitives, the parametric description of the feature already becomes too complex, and no closed-form solutions exist. However, line segments are very often sufficient to model the environment, especially for indoor applications.

4.3.1 Features Based on Range Data: Line Extraction (1)

Least squares; weighted least squares.

4.3.1 Features Based on Range Data: Line Extraction (2)

Measurement error (sigma) proportional to r^2; weighted least squares.

4.3.1 Propagation of Uncertainty During Line Extraction

What is the output covariance matrix? It follows from the Jacobian via the error propagation law.

4.3.1 Segmentation for Line Extraction

4.3.1 Angular Histogram (Range)

4.3.1 Extracting Other Geometric Features

4.3.2 Feature Extraction

Recognition of features is, in general, a complex procedure requiring a variety of steps that successively transform the iconic data into recognition information. Handling unconstrained environments is still a very challenging problem. The scheme and tools come from computer vision.

4.3.2 Visual Appearance-Based Feature Extraction (Vision)

4.3.2 Feature Extraction (Vision): Tools

- Conditioning: suppresses noise; background normalization by suppressing uninteresting systematic or patterned variations. Done by gray-scale modification (e.g. thresholding) and (low-pass) filtering.
- Labeling: determination of the spatial arrangement of the events, i.e. searching for a structure.
- Grouping: identification of the events by collecting together the pixels participating in the same kind of event.
- Extracting: computing a list of properties for each group.
- Matching (see chapter 5).

4.3.2 Filtering and Edge Detection

Gaussian smoothing:
- removes high-frequency noise;
- convolution of the intensity image I with a Gaussian kernel G.

Edges:
- locations where the brightness undergoes a sharp change;
- differentiate the image once or twice and look for places where the magnitude of the derivative is large;
- since differentiation amplifies noise, filtering/smoothing is required before edge detection.

4.3.2 Edge Detection

The ultimate goal of edge detection is an idealized line drawing. Edge contours in the image correspond to important scene contours.

4.3.2 Optimal Edge Detection: Canny

The processing steps:
- convolution of the image with the Gaussian function G;
- finding maxima in the derivative.
Canny combines both in one operation. (a) A Gaussian function. (b) The first derivative of a Gaussian function.

4.3.2 Optimal Edge Detection: Canny, 1D Example

(a) Intensity 1-D profile of an ideal step edge. (b) Intensity profile I(x) of a real edge. (c) Its derivative I'(x). (d) The result of the convolution R(x) = G' ⊗ I, where G' is the first derivative of a Gaussian function.

4.3.2 Optimal Edge Detection: Canny

A 1-D edge detector can be defined with the following steps:
1. Convolve the image I with G' to obtain R.
2. Take the absolute value of R.
3. Mark those peaks of |R| that are above some predefined threshold T. The threshold is chosen to eliminate spurious peaks due to noise.
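The three steps of the 1-D detector can be sketched as follows (a minimal illustration, assuming a synthetic noisy step signal and an arbitrary threshold T; this is only the 1-D core, not the full Canny detector):

```python
import numpy as np

def canny_1d(I, sigma=2.0, T=0.1):
    """1-D edge detector: R = G' * I, then peaks of |R| above threshold T."""
    radius = int(3 * sigma)
    xs = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-xs**2 / (2 * sigma**2))
    Gp = -xs / sigma**2 * g / g.sum()      # first derivative of a Gaussian
    R = np.convolve(I, Gp, mode='same')    # step 1: convolve I with G'
    A = np.abs(R)                          # step 2: absolute value of R
    # step 3: mark local maxima of |R| above T (away from the borders)
    return [i for i in range(radius, len(A) - radius)
            if A[i] > T and A[i] >= A[i - 1] and A[i] >= A[i + 1]]

# Synthetic noisy step edge between indices 49 and 50
rng = np.random.default_rng(0)
I = np.concatenate([np.zeros(50), np.ones(50)]) + 0.05 * rng.standard_normal(100)
peaks = canny_1d(I)
print(peaks)    # peak location(s) at the step, near index 50
```

The smoothed derivative of a unit step peaks at about 1/(sigma * sqrt(2*pi)), so T must be chosen below that but well above the noise response, exactly as the slide's thresholding remark suggests.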
In 2D, the same construction uses the two-dimensional Gaussian function.

4.3.2 Nonmaxima Suppression

The output of an edge detector is usually a black-and-white image in which the pixels whose gradient magnitude is above a predefined threshold are white and all others are black. Nonmaxima suppression generates contours described with only one-pixel thickness.

4.3.2 Optimal Edge Detection: Canny Example

Example of Canny edge detection, and the result after nonmaxima suppression.

4.3.2 Gradient Edge Detectors

The Roberts, Prewitt, and Sobel operators.

4.3.2 Example

Raw image -> filtered (Sobel) -> thresholding -> nonmaxima suppression.

4.3.2 Comparison of Edge Detection Methods

Average time required to compute the edge figure of a 780 x 560 pixel image. The times required to compute an edge image are proportional to the accuracy of the resulting edge images.

4.3.2 Dynamic Thresholding

Under changing illumination, a constant threshold level for edge detection is not suitable. Instead, the threshold is adapted dynamically: consider only the n pixels with the highest gradient magnitude for the further calculation steps. (a) Number of pixels with a specific gradient magnitude in the image of Figure 1.2(b). (b) Same as (a), but on a logarithmic scale.

4.3.2 Hough Transform: Straight Edge Extraction

All points p on a straight-line edge must satisfy y_p = m1 * x_p + b1. Each point (x_p, y_p) that is part of this line constrains the parameters m1 and b1. The Hough transform finds the line parameters (m, b) that get the most "votes" from the edge pixels in the image. This is realized in four steps:
1. Create a 2D array A[m, b] with axes that tessellate the values of m and b.
2. Initialize the array A to zero.
3. For each edge pixel (x_p, y_p) in the image, loop over all values of m and b: if y_p = m * x_p + b, then A[m, b] += 1.
4. Search for the cells in A with the largest values.
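The four voting steps can be sketched as follows (an illustrative toy implementation with made-up edge pixels; practical implementations usually prefer the (rho, theta) line parameterization, which avoids infinite slopes for vertical lines):

```python
import numpy as np

def hough_lines(edge_pixels, m_vals, b_vals):
    """Vote in accumulator A[m, b] for lines y = m*x + b through edge pixels."""
    A = np.zeros((len(m_vals), len(b_vals)), dtype=int)  # steps 1-2: create and zero A
    b_step = b_vals[1] - b_vals[0]
    for x, y in edge_pixels:                             # step 3: voting
        for i, m in enumerate(m_vals):
            b = y - m * x                 # the intercept consistent with (x, y) and m
            j = int(round((b - b_vals[0]) / b_step))     # nearest tessellated b cell
            if 0 <= j < len(b_vals):
                A[i, j] += 1
    i, j = np.unravel_index(np.argmax(A), A.shape)       # step 4: largest cell
    return m_vals[i], b_vals[j]

# Made-up edge pixels lying on the line y = 2x + 3
pixels = [(x, 2 * x + 3) for x in range(10)]
m, b = hough_lines(pixels,
                   m_vals=np.linspace(-5, 5, 101),    # slope cells, step 0.1
                   b_vals=np.linspace(-50, 50, 101))  # intercept cells, step 1
print(m, b)    # close to m = 2, b = 3
```

Rather than testing the exact equality y_p = m*x_p + b, the sketch computes the intercept implied by each (pixel, slope) pair and votes for its nearest cell, which is how the tessellation is normally handled in practice.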
These peak cells correspond to the extracted straight-line edges in the image.

4.3.2 Grouping, Clustering: Assigning Features to Features

Connected component labeling.

4.3.2 Floor Plane Extraction

Vision-based identification of the traversable floor plane. The processing steps: as pre-processing, smooth the image using a Gaussian smoothing operator; initialize a histogram array H with n intensity values; for every pixel (x, y) in the image that satisfies the floor condition, increment the histogram.

4.3.2 Whole-Image Features

OmniCam.

4.3.2 Image Histograms

The processing steps: as pre-processing, smooth the image using a Gaussian smoothing operator; initialize a histogram with n levels; for every pixel (x, y) in the image, increment the histogram.

4.3.2 Image Fingerprint Extraction

A highly distinctive combination of simple features.

4.XX Example: Probabilistic Line Extraction from Noisy 1D Range Data

Suppose that:
- the segmentation problem has already been solved;
- the regression equations for the model fit to the points have a closed-form solution, which is the case when fitting straight lines;
- the measurement uncertainties of the data points are known.

4.XX Line Extraction

Estimating a line in the least squares sense. The model parameters r (length of the perpendicular) and alpha (its angle to the abscissa) uniquely describe a line. The n measurement points, given in polar coordinates, are modeled as random variables; each point is independently affected by Gaussian noise in both coordinates.

4.XX Line Extraction

Task: find the line that minimizes the orthogonal distances d_i of the points to the line. Let S be the (unweighted) sum of squared errors, S = sum_i d_i^2 (2.53).

4.XX Line Extraction

The model parameters are now found by solving the nonlinear equation system dS/d(alpha) = 0, dS/dr = 0. Suppose each point has a known variance modeling its uncertainty in the radial and angular directions. This variance is used to determine a weight w_i for each point, e.g.
w_i = 1 / sigma_i^2. Then equation (2.53) becomes the weighted sum of squared errors S = sum_i w_i d_i^2 (2.54).

4.XX Line Extraction

It can be shown that the solution of (2.54) in the weighted least squares sense is given in closed form, by equation (2.57) for alpha and equation (2.58) for r. How do the uncertainties of the measurements propagate through "the system" (eqs. 2.57, 2.58)?

4.XX Line Extraction: Error Propagation Law

Given the 2n x 2n input covariance matrix C_X and the system relationships (2.57) and (2.58), calculating the Jacobian F lets us immediately form the error propagation equation C_AR = F C_X F^T, yielding the sought covariance C_AR of the line parameters (alpha, r).

4.XX Feature Extraction: The Simplest Case - Linear Regression

4.XX Feature Extraction: Nonlinear Regression

4.XX Feature Extraction / Sensory Interpretation

A mobile robot must be able to determine its relationship to the environment by sensing and interpreting the measured signals. A wide variety of sensing technologies are available, as we have seen in the previous section. However, the main difficulty lies in interpreting these data, that is, in deciding what the sensor signals tell us about the environment.
- Choice of sensors (e.g. indoor, outdoor, walls, free space, ...)
- Choice of the environment model
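The weighted line extraction described above can be sketched as follows. Since the slides' polar-form closed-form equations (2.57) and (2.58) are not reproduced in this page, the sketch uses the equivalent closed-form weighted total least squares solution in Cartesian coordinates; the points and weights are made up:

```python
import numpy as np

def fit_line_weighted(points, weights):
    """Weighted total least squares fit of a line in (alpha, r) form.

    alpha is the angle of the line's normal, r its distance from the origin;
    every line point p satisfies p . (cos(alpha), sin(alpha)) = r.
    Minimizes the weighted sum of squared orthogonal distances S = sum_i w_i d_i^2.
    """
    p = np.asarray(points, dtype=float)
    w = np.asarray(weights, dtype=float)
    xm = np.average(p[:, 0], weights=w)        # weighted centroid
    ym = np.average(p[:, 1], weights=w)
    dx, dy = p[:, 0] - xm, p[:, 1] - ym
    # closed-form minimizer over alpha (standard total least squares result)
    alpha = 0.5 * np.arctan2(-2 * np.sum(w * dx * dy),
                             np.sum(w * (dy**2 - dx**2)))
    r = xm * np.cos(alpha) + ym * np.sin(alpha)
    if r < 0:                                  # normalize so that r >= 0
        r, alpha = -r, alpha + np.pi
    return alpha, r

# Made-up noisy points near the horizontal line y = 1 (normal angle pi/2, r = 1)
pts = [(0.0, 1.01), (1.0, 0.99), (2.0, 1.0), (3.0, 1.02)]
wts = [1.0, 1.0, 1.0, 0.5]                     # e.g. w_i = 1 / sigma_i^2
alpha, r = fit_line_weighted(pts, wts)
print(alpha, r)    # approximately pi/2 and 1
```

The error propagation law then applies directly: stacking the Jacobian of (alpha, r) with respect to the 2n measurements against the 2n x 2n input covariance C_X yields the sought 2x2 covariance C_AR of the fitted line.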
