# A Dynamic Bayesian Network Click Model for Web Search Ranking


## Revision as of 15:38, 8 November 2011

One of the most common click models in Web search, known as the *position model*, is based on the position bias of the displayed ranked results. Under this model, it is assumed that the chance of a click decreases towards the lower ranks on result pages due to the reduced visual attention from the user. A more recent click model, referred to as the *cascade model* of user behaviour, assumes that the user scans search results from top to bottom and eventually stops because either their information need is satisfied or their patience is exhausted.
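Under the position model, the click probability factorizes into a document-attractiveness term and a rank-dependent examination term. The following sketch illustrates this factorization; the parameter values are purely illustrative, not taken from the paper.

```python
def position_model_click_prob(attractiveness, position_bias):
    """Position model: P(click at rank i) = a_i * b_i, where a_i is the
    document's attractiveness and b_i the examination probability at rank i."""
    return [a * b for a, b in zip(attractiveness, position_bias)]

# Three identical documents (a = 0.8) under a decaying position bias:
probs = position_model_click_prob([0.8, 0.8, 0.8], [1.0, 0.6, 0.3])
# the same document draws fewer clicks at lower ranks
```

Even with identical documents, the model predicts strictly fewer clicks at each lower rank, which is exactly the position bias the model encodes.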

The benefit of the cascade model over the position model is its ability to explain a click with respect to the relevance of the preceding documents; accordingly, the latter model has shown state-of-the-art performance over the former. However, the cascade model makes a strong assumption that there is exactly one click per search; hence, it cannot explain an abandoned search or a search with multiple clicks. Moreover, neither of these models distinguishes the perceived relevance from the actual relevance. The perceived relevance is the relevance of a document judged by the user based on their examination of the document as it is shown on a result page. The actual relevance is the relevance of the document judged by the user once she/he clicks on it and sees its content.
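In the cascade model, the user examines results top to bottom and clicks the first satisfying one, so the click probability at rank i is the relevance at i times the probability of having passed over all earlier results. A minimal sketch of this standard formula (illustrative relevance values):

```python
def cascade_click_probs(relevances):
    """Cascade model: P(click at rank i) = r_i * prod_{j<i} (1 - r_j).
    The session ends at the first click, so the probabilities sum to <= 1."""
    probs, examine = [], 1.0
    for r in relevances:
        probs.append(examine * r)   # examined so far AND clicked here
        examine *= (1.0 - r)        # user continues only if not yet satisfied
    return probs

probs = cascade_click_probs([0.5, 0.5, 0.5])
```

The probabilities sum to at most one, which makes the single-click assumption (and its inability to model multi-click or abandoned sessions) explicit.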

A dynamic Bayesian network (DBN) model is proposed in this paper in order to study the user's browsing and click behaviour, and eventually to infer the relevance of the documents. The proposed model addresses the issues with the above models through the following assumptions about the user's click and browsing behaviour:

* i) The user makes a linear traversal through the results and decides whether to click based on the perceived relevance of the document.

* ii) The user chooses to examine the next document if she/he is unsatisfied with the clicked document (based on the actual relevance).

* iii) A click does not necessarily mean that the user is satisfied with the clicked document. With respect to this, the proposed model attempts to distinguish the perceived relevance and the actual relevance.

* iv) There is no limit on the number of clicks that a user can make during a search.

The documents ranked on a result list of a given query are presented as a sequence in the DBN. For a given position i, there is an observed variable C<sub>i</sub> indicating whether there was a click at position i. Hidden binary variables are defined in the DBN in order to model examination, perceived relevance, and actual relevance. The Expectation-Maximization algorithm is used to find the maximum likelihood estimate of the perceived relevance and the actual relevance variables. The forward-backward algorithm is used to compute the posterior probabilities of the rest of the hidden variables.
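The generative process implied by assumptions i)–iv) can be sketched as a forward simulation. This is a simplified illustration, not the paper's inference code: `a` stands for perceived relevance, `s` for actual relevance (satisfaction after a click), and `gamma` for the user's perseverance probability of continuing to the next result.

```python
import random

def simulate_session(a, s, gamma=0.9, seed=None):
    """Simulate clicks for one query session under a DBN-style click model.
    a[i]: perceived relevance (probability of clicking an examined result).
    s[i]: actual relevance (probability of satisfaction after a click).
    gamma: probability of continuing to the next result when unsatisfied."""
    rng = random.Random(seed)
    clicks, examining = [], True
    for a_i, s_i in zip(a, s):
        if not examining:
            clicks.append(0)            # unexamined results are never clicked
            continue
        click = rng.random() < a_i      # click driven by perceived relevance
        clicks.append(int(click))
        if click and rng.random() < s_i:
            examining = False           # satisfied by actual relevance: stop
        elif rng.random() >= gamma:
            examining = False           # patience exhausted: abandon session
    return clicks

# A perfectly attractive and satisfying top result ends the session at rank 1:
clicks = simulate_session([1.0, 1.0], [1.0, 1.0], seed=0)
```

Note how the sketch separates the click decision (perceived relevance) from the stopping decision (actual relevance), and places no cap on the number of clicks, mirroring assumptions iii) and iv).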

Three types of experiments are performed to validate the DBN model and compare it with the existing models. First, the authors evaluate the click model in terms of the predicted click rate at position 1. Then they use the predicted relevance as a feature in a ranking function. In the last set of experiments, they use the predicted relevance as supplementary information to train a ranking function.

The results of experiments performed on the click log of a commercial search engine indicate that the DBN model can accurately explain the observed clicks. They show that the function learned with the predicted relevance is not far from being as good as a function trained with a large amount of editorial data. They further show that combining both types of information can lead to an even more accurate ranking function.