www talk


Published on November 20, 2007

Author: Ethan

Source: authorstream.com

Content

Retroactive Answering of Search Queries
Beverly Yang and Glen Jeh, Google

Personalization
- Personalization provides more relevant services to a specific user, based on the user's search history.
- It usually operates at a high level, e.g., re-ordering search results according to a user's general preferences.
- Classic example: the user likes cars, so the query "jaguar" is interpreted accordingly.
- Why not focus on known, specific needs? Not just "the user likes cars," but "the user is interested in the 2006 Honda Civic."

The QSR System
- QSR = Query-Specific (Web) Recommendations.
- The system alerts the user when interesting new results for selected previous queries have appeared.
- Example: query "britney spears concert san francisco." There were no good results at the time of the query (Britney was not on tour). One month later, new results appear (Britney is coming to town!) and the user is automatically notified.
- The query is treated as a standing query, and the new results are delivered as web page recommendations.

Challenges
- How do we identify queries representing standing interests? Explicit registration exists (Web Alerts), but no one does this in practice, so we want to identify such queries automatically.
- How do we identify interesting new results? Web Alerts fire on a change in the top 10, but that is not good enough.

Outline
- Introduction
- Basic QSR architecture
- Identifying standing interests
- Determining interesting results
- User study: setup and results

Architecture
- (architecture diagram)

Related Work
- Identifying the user's goal [Rose & Levinson 2004], [Lee, Liu & Cho 2005]: works at a higher, more general level.
- Identifying satisfaction [Fox et al. 2005]: one component of identifying standing interest; uses a specific, holistic model rather than considering the strength and characteristics of each signal.
- Recommendation systems: too many to list!

Identifying Standing Interests

Definition
- A user has a standing interest in a query if she would be interested in seeing new interesting results for it.
- Factors to consider: prior fulfillment/satisfaction, query interest level, and duration of the need or interest.

Example session
  QUERY       (8s)    -- html encode java
  RESULTCLICK (91s)   -- 2. http://www.java2html.de/ja…
  RESULTCLICK (247s)  -- 1. http://www.javapractices/…
  RESULTCLICK (12s)   -- 8. http://www.trialfiles.com/…
  NEXTPAGE    (5s)    -- start = 10
  RESULTCLICK (1019s) -- 12. http://forum.java.su…
  REFINEMENT  (21s)   -- html encode java utility
  RESULTCLICK (32s)   -- 7. http://www.javapracti…
  NEXTPAGE    (8s)    -- start = 10
  NEXTPAGE    (30s)   -- start = 20

Signals for standing interest
- Good signals: number of terms, number of clicks, number of refinements, history match, repeated non-navigational queries.
- Other signals: session duration, number of long clicks, etc.
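To make the signals above concrete, the following is a rough sketch (not the paper's implementation) of extracting them from a session log like the example; the event-tuple format, the history-match heuristic, and the navigational-query test are assumptions.

```python
# Rough sketch of pulling the standing-interest signals listed above out of
# a session log. The event-tuple format, the history-match heuristic, and
# the navigational-query test are assumptions, not the paper's definitions.
from dataclasses import dataclass

@dataclass
class SessionSignals:
    num_terms: int
    num_clicks: int
    num_refinements: int
    history_match: bool
    repeated_non_navigational: bool

def extract_signals(query, events, past_queries):
    """events: list of (action, seconds, payload) tuples, e.g.
    ("RESULTCLICK", 91, "2. http://www.java2html.de/...")."""
    num_clicks = sum(1 for action, _, _ in events if action == "RESULTCLICK")
    num_refinements = sum(1 for action, _, _ in events if action == "REFINEMENT")

    # "History match": the query shares terms with earlier queries in the
    # user's search history (a simplified stand-in for the paper's signal).
    terms = set(query.lower().split())
    history_match = any(terms & set(q.lower().split()) for q in past_queries)

    # "Repeated non-navigational": the same query was issued before and does
    # not look like a navigational query (crude URL-like heuristic).
    looks_navigational = query.startswith(("http", "www.")) or query.endswith(".com")
    repeated_non_nav = past_queries.count(query) > 0 and not looks_navigational

    return SessionSignals(len(terms), num_clicks, num_refinements,
                          history_match, repeated_non_nav)

# Example, loosely based on the "html encode java" session above.
signals = extract_signals(
    "html encode java",
    [("RESULTCLICK", 91, "2. http://www.java2html.de/..."),
     ("REFINEMENT", 21, "html encode java utility"),
     ("RESULTCLICK", 32, "7. http://www.javapracti...")],
    past_queries=["java html escaping"],
)
```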
Determining Interesting Results

Web Alerts
- Heuristic: a new result appears in the top 10.
- Query: "beverly yang". Alert on 10/16/2005: http://someblog.com/journal/images/04/0505/
- The page had already been seen through a web search, was of poor quality, and the alert repeated due to ranking fluctuations.

QSR Example
- Query: "rss reader" (not a real example).

Signals for result quality
- Good signals:
  - History presence.
  - Rank (inverse!).
  - Popularity and relevance (PR) score.
  - Above the dropoff: the PR scores of a few results are much higher than the PR scores of the rest.
  - Content match.
- Other signals: days elapsed since the query, whether the result is the sole change, etc.

User Study: Setup

Overview
- Human subjects: Google Search History users.
- Purpose: demonstrate the promise of the system's effectiveness and verify the intuitions behind the heuristics.
- Many disclaimers: the study was conducted internally, had only 18 subjects, and covered only a fraction of the queries in each subject's history. Additional studies over broader populations are needed to generalize the results.

Questionnaire
- For each selected session (such as the "html encode java" example shown earlier), subjects were asked:
  - Did you find a satisfactory answer for your query? (Yes / Somewhat / No / Can't remember)
  - How interested would you be in seeing a new high-quality result? (Very / Somewhat / Vaguely / Not)
  - How long would this interest last for? (Ongoing / Month / Week / Now)
  - How good would you rate the quality of this result? (Excellent / Good / Fair / Poor)

User Study: Results

Questions
- Is there a need for automatic detection of standing interests?
- Which signals are useful for indicating standing interest in a query session?
- Which signals are useful for indicating the quality of recommendations?

Is there a need?
- How many Web Alerts have you ever registered? 0: 73%, 1: 20%, 2: 7%, more than 2: 0%.
- Of the 154 queries marked "very" or "somewhat" interesting, how many had been registered as alerts? 0: 100%.

Effectiveness of Signals
- Standing interest: number of clicks (> 8), number of refinements (> 3), history match; also repeated non-navigational queries and number of terms (> 2).
- Quality of results: PR score (high), rank (low!), above the dropoff.

Standing Interest
- (chart)

Prior Fulfillment
- (chart)

Interest Score
- Goal: capture the relative standing interest a user has in a query session.
- iscore = a * log(#clicks + #refinements) + b * log(#repetitions) + c * (history match score)
- Select query sessions with iscore > t.
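As a concrete illustration of the Interest Score slide, here is a minimal sketch of iscore and the threshold selection; the weights a, b, c, the +1 smoothing inside the logarithms, and the dictionary field names are placeholder assumptions rather than values from the talk.

```python
# Minimal sketch of the interest score from the slide above. The weights
# a, b, c, the +1 inside the logs (to handle zero counts), and the session
# field names are placeholder assumptions, not values from the talk.
import math

def iscore(num_clicks, num_refinements, num_repetitions, history_match_score,
           a=1.0, b=1.0, c=1.0):
    return (a * math.log(num_clicks + num_refinements + 1)
            + b * math.log(num_repetitions + 1)
            + c * history_match_score)

def select_standing_interest_sessions(sessions, t):
    """Keep sessions scoring above t. Raising t trades recall for precision
    (see the "Effectiveness of iscore" slide that follows)."""
    return [s for s in sessions
            if iscore(s["num_clicks"], s["num_refinements"],
                      s["num_repetitions"], s["history_match_score"]) > t]
```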
Effectiveness of iscore
- Standing interest: sessions for which the user is somewhat or very interested in seeing further results.
- Select query sessions with iscore > t; varying t gives a precision/recall tradeoff.
- Operating points: 90% precision at 11% recall, or 69% precision at 28% recall, compared with 28% precision for random selection.
- Recall here is the percentage of standing-interest sessions that appeared in the survey.

Quality of Results
- "Desired" results: those marked in the survey as "good" or "excellent".

Quality Score
- Goal: capture the relative quality of a recommendation.
- The score is applied only after a result has passed a number of boolean filters.
- qscore = a * (PR score) + b' * (1 / rank) + c * (topic match), where the slide shows an initial b * rank term replaced by the inverse b' * (1 / rank).
- A sketch of this scoring and of the selection process appears after the slide summaries below.

Effectiveness of qscore
- Recall: the percentage of URLs in the survey marked as "good" or "excellent".
- Select URLs with qscore > t.

Conclusion
- There is a huge gap between users' standing interests and needs and the existing technology that addresses them.
- QSR retroactively answers search queries: it automatically identifies standing interests and unfulfilled needs, and identifies interesting new results.
- Future work: broader studies and a feedback loop.

Thank you!

Selecting Sessions
- Users may have thousands of queries, but only 30 sessions can be shown per subject; the study tried to include a mix of positive and negative sessions, which prevents gathering some statistics.
- Process: filter out special-purpose queries (e.g., maps); filter out sessions with only 1-2 actions; rank the remaining sessions by iscore; take the top 15 sessions by score plus 15 randomly chosen sessions.

Selecting Recommendations
- The study tried to show only good recommendations, under the assumption that some would still be bad.
- Process: only consider sessions with history presence; only consider results in the (Google) top 10; require results to pass at least 2 boolean signals; select the top 50% according to qscore.

3rd-Person Study
- There were not enough recommendations in the 1st-person study, so subjects were also asked to evaluate recommendations made for other users' sessions.
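As referenced under Quality Score, here is a minimal sketch of the recommendation-side scoring and selection, combining the reconstructed qscore formula with the boolean-filter and top-50% steps from Selecting Recommendations; the weights, the concrete boolean signals chosen here, and the result field names are illustrative assumptions.

```python
# Minimal sketch of the recommendation-selection pipeline: a reconstructed
# qscore plus the "pass >= 2 boolean signals, keep top 50%" selection.
# Weights, the concrete boolean signals, and the result field names are
# illustrative assumptions, not the paper's values.

def qscore(pr_score, rank, topic_match, a=1.0, b_prime=1.0, c=1.0):
    # Rank enters inversely (b' * 1/rank), per the reconstructed formula.
    return a * pr_score + b_prime * (1.0 / rank) + c * topic_match

def passes_boolean_filters(result, min_signals=2):
    # "Must pass at least 2 boolean signals": the signals used here
    # (in the top 10, above the PR-score dropoff, not previously seen)
    # are plausible guesses based on the signal slides.
    signals = [
        result["rank"] <= 10,
        result.get("above_dropoff", False),
        not result.get("seen_before", False),
    ]
    return sum(signals) >= min_signals

def select_recommendations(results):
    candidates = [r for r in results if passes_boolean_filters(r)]
    candidates.sort(
        key=lambda r: qscore(r["pr_score"], r["rank"], r["topic_match"]),
        reverse=True,
    )
    keep = max(1, len(candidates) // 2)  # top 50% by qscore
    return candidates[:keep]
```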
