04 1 IR Basics 3

Slides from the lecture "Web Search – Summer Term 2006", part II: Information Retrieval (Basics Cont.), by Wolfgang Hürst (Albert-Ludwigs-University), covering the evaluation of IR systems: precision and recall, the F measure, and the TREC conference series.

Published on January 14, 2008

Author: Carmela

Source: authorstream.com

Content

Web Search – Summer Term 2006
II. Information Retrieval (Basics Cont.)
(c) Wolfgang Hürst, Albert-Ludwigs-University

Organizational Remarks
Exercises: Please register for the exercises by sending me ([email protected]) an email by Friday, May 5th, with
- your name,
- Matrikelnummer (student ID number),
- Studiengang (degree program: BA, MSc, Diploma, ...),
- plans for the exam (yes, no, undecided).
This is just to organize the exercises and has no effect if you decide to drop this course later.

Recap: IR System & Tasks Involved
[Diagram labels: information need, query, query processing (parsing & term processing), logical view of the information need, documents, select data for indexing, parsing & term processing, index, user interface, performance evaluation.]

Evaluation of IR Systems
Standard approaches for evaluating algorithms and computer systems:
- speed / processing time,
- storage requirements,
- correctness of the algorithms used and of their implementation.
But most importantly: performance / effectiveness.
Another important issue: usability, the users' perception.
Questions: What is a good / better search engine? How can search engine quality be measured? How should evaluations be performed? Etc.

What does Performance/Effectiveness of IR Systems mean?
Typical questions: How good is the quality of a system? Which system should I buy? Which one is better? How can I measure the quality of a system? What does quality mean for me? Etc.
The answers depend on the users, the application, ...
There are very different views and perceptions: user vs. search engine provider, developer vs. manager, seller vs. buyer, ...
And remember: queries can be ambiguous, unspecific, etc.
Hence, in practice, restrictions and idealizations are used, e.g. only binary relevance decisions.

Precision & Recall
PRECISION = (# found & relevant) / (# found)
RECALL = (# found & relevant) / (# relevant)
[Example from the slide, a ranked result list: 1. Doc. B, 2. Doc. E, 3. Doc. F, 4. Doc. G, 5. Doc. D, 6. Doc. H]
Restrictions: 0/1 relevance, and a set instead of an order/ranking.
But: we can use this for the evaluation of rankings, too (via the top N documents).

Calculating Precision & Recall
Precision can be calculated directly from the result.
Recall requires relevance ratings for the whole (!) data collection.
In practice, approaches to estimate recall:
1.) use a representative sample instead of the whole data collection,
2.) document-source method,
3.) expanding queries,
4.) compare the result with external sources,
5.) pooling method.

Precision & Recall – Special Cases
Special treatment is necessary if no document is found or no relevant documents exist (division by zero).
No relevant document exists: A = C = 0; 1st case: B = 0, 2nd case: B > 0.
Empty result set: A = B = 0; 1st case: C = 0, 2nd case: C > 0.

Precision & Recall Graphs
Comparing two systems:
System 1: Prec 1 = 0.6, Rec 1 = 0.3
System 2: Prec 2 = 0.4, Rec 2 = 0.6
Which one is better?
[Precision-recall graph comparing the two systems]

The F Measure
Alternative measures exist, including ones that combine precision p and recall r into one single value.
Example: the F measure
F_beta = ((beta^2 + 1) * p * r) / (beta^2 * p + r)
(beta = relative weight for recall, set manually)
[Example for different beta values]
Source: N. Fuhr (Univ. Duisburg), Skriptum zur Vorlesung Information Retrieval (lecture notes on Information Retrieval), SS 2006
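The following Python sketch is not part of the original slides; it is a minimal illustration of the definitions above, with invented document identifiers and relevance judgments, and it returns None in the special cases (empty result set, no relevant documents) instead of dividing by zero. The function names precision_recall and f_measure are made up for this example.

```python
def precision_recall(found, relevant):
    """Precision and recall for a result set under binary (0/1) relevance.

    found    -- set of document ids returned by the system
    relevant -- set of document ids judged relevant for the query
    Returns (precision, recall); a value is None when its denominator is 0,
    i.e. in the special cases discussed on the slides.
    """
    hits = len(found & relevant)                       # "# found & relevant"
    precision = hits / len(found) if found else None   # "# found" may be 0
    recall = hits / len(relevant) if relevant else None
    return precision, recall


def f_measure(precision, recall, beta=1.0):
    """F measure combining precision p and recall r into a single value.

    beta is the relative weight for recall (beta = 1 weights both equally).
    """
    if precision is None or recall is None or precision + recall == 0:
        return None
    b2 = beta * beta
    return (b2 + 1) * precision * recall / (b2 * precision + recall)


# Invented example, loosely following the slide's result list B, E, F, G, D, H:
found = {"B", "E", "F", "G", "D", "H"}
relevant = {"B", "D", "H", "K"}           # assumed relevance judgments
p, r = precision_recall(found, relevant)  # p = 3/6 = 0.5, r = 3/4 = 0.75
print(p, r, f_measure(p, r, beta=1.0))    # F_1 = 0.6
```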
Calculating Average Precision Values
1. Macro assessment: estimates the expected value of the precision of a randomly chosen query (query- or user-oriented). Problem: queries with an empty result set.
2. Micro assessment: estimates the likelihood that a randomly chosen document is relevant (document- or system-oriented). Problem: does not support monotony.
(A short computational sketch of both averages, and of precision and recall over the top n results of a ranking, is given at the end of this content section.)

Monotony of Precision & Recall
Monotony: adding a query that delivers the same results for both systems does not change their quality assessment.
[Example (precision) on the slide]

Precision & Recall for Rankings
Distinguish between linear and weak rankings.
Basic idea: evaluate precision and recall by looking at the top n results for different n.
Generally: precision decreases and recall increases with growing n.
[Graph: precision and recall as functions of n]

Precision & Recall for Rankings (Cont.)
[Worked example on the slide]

Realizing Evaluations
Now we have a system to evaluate and:
- measures to quantify performance,
- methods to calculate them.
What else do we need?
- documents dj (test set),
- tasks (information needs) and respective queries qi,
- relevance judgments rij (normally binary),
- results (delivered by the system).
Evaluation = comparison of the given, perfect result (qi, dj, rij) with the result from the system (qi, dj, rij(S1)).

The TREC Conference Series
In the old days, IR evaluation was critical because there were no good (i.e. big) test sets and no comparability because of different test sets.
This motivated initiatives such as TREC: the Text REtrieval Conference, held since 1992, see http://trec.nist.gov/
Goals of TREC:
- create realistic, significant test sets,
- achieve comparability of different systems,
- establish common basics for IR evaluation,
- increase technology transfer between industry and research.

The TREC Conference Series (Cont.)
TREC offers:
- various collections of test data,
- standardized retrieval tasks (queries & topics),
- related relevance measures,
- different tasks (tracks) for certain problems.
Examples of tracks targeted by TREC: traditional text retrieval, spoken document retrieval, non-English or multilingual retrieval, information filtering, user interactions, web search, SPAM (since 2005), Blog (since 2005), video retrieval, etc.

Advantages and Disadvantages of TREC
TREC (and other IR initiatives) have been very successful and enabled progress which otherwise might not have happened.
But disadvantages exist as well, e.g.:
- it only compares performance but not the actual reasons for different behavior,
- unrealistic data (e.g. still too small, not representative enough),
- often just batch-mode evaluation, without interactivity or user experience (note: there are interactivity tracks!),
- often no analysis of significance.
Note: most of these arguments are general problems of IR evaluation and not necessarily TREC-specific.

TREC Home Page
Visit the TREC site at http://trec.nist.gov and browse the different tracks (this gives you an idea of what is going on in the IR community).

Recap: IR System & Tasks Involved
[Same diagram as at the beginning: information need, query, query processing (parsing & term processing), logical view of the information need, documents, select data for indexing, parsing & term processing, index, user interface, performance evaluation.]
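As referenced above, the following Python sketch (an illustration added here, not code from the lecture) shows precision and recall at the top n documents of a ranking, and macro vs. micro averaging of precision over several queries. All rankings, result sets, and relevance judgments are invented, and the function names are made up for this example.

```python
def precision_recall_at_n(ranking, relevant, n):
    """Precision and recall considering only the top n documents of a ranking."""
    top = set(ranking[:n])
    hits = len(top & relevant)
    precision = hits / n if n else None
    recall = hits / len(relevant) if relevant else None
    return precision, recall


def macro_average_precision(results):
    """Macro assessment: average the per-query precision values
    (expected precision of a randomly chosen query).
    results -- list of (found_set, relevant_set) pairs, one per query.
    Queries with an empty result set are skipped, which is exactly the
    problem noted on the slide."""
    values = [len(found & rel) / len(found) for found, rel in results if found]
    return sum(values) / len(values) if values else None


def micro_average_precision(results):
    """Micro assessment: pool all queries and divide the total number of hits
    by the total number of retrieved documents (likelihood that a randomly
    chosen retrieved document is relevant)."""
    hits = sum(len(found & rel) for found, rel in results)
    retrieved = sum(len(found) for found, _ in results)
    return hits / retrieved if retrieved else None


# Invented example: two queries with their result sets and relevance judgments.
results = [({"A", "B", "C"}, {"A", "B"}),   # per-query precision 2/3
           ({"D", "E"}, {"D"})]             # per-query precision 1/2
print(macro_average_precision(results))     # (2/3 + 1/2) / 2, about 0.583
print(micro_average_precision(results))     # (2 + 1) / (3 + 2) = 0.6

ranking = ["B", "E", "F", "G", "D", "H"]     # assumed ranked result list
relevant = {"B", "D", "H"}
for n in (1, 3, 6):                          # precision falls, recall rises
    print(n, precision_recall_at_n(ranking, relevant, n))
```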
