5.5.2 Experiment 1: L2R algorithms for SDR

The eight algorithms in RankLib [62]: {MART, RankNet, RankBoost, AdaRank, Coordinate Ascent, LambdaMART, ListNet, Random Forests} are tested, each optimizing one of the evaluation metrics {NDCG@10, ERR@10, MASP}.
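As a concrete illustration, a typical RankLib training and re-ranking run for one of these configurations might look as follows. The fold file names and model name are hypothetical; note that MASP is not one of RankLib's built-in training metrics, so optimizing it would require an extension of the toolkit.

```shell
# Train a LambdaMART model optimizing NDCG@10 on one fold
# (ranker IDs per RankLib: 0=MART, 1=RankNet, 2=RankBoost, 3=AdaRank,
#  4=Coordinate Ascent, 6=LambdaMART, 7=ListNet, 8=Random Forests)
java -jar RankLib.jar -train fold1.train.txt -validate fold1.vali.txt \
     -ranker 6 -metric2t NDCG@10 -metric2T ERR@10 -save lambdamart.model

# Re-rank the held-out test fold with the saved model
java -jar RankLib.jar -load lambdamart.model -rank fold1.test.txt -score scores.txt
```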
Table 5.2 shows that all models built from learning-to-rank algorithms outperformed the baseline unsupervised BM25 score ranking in terms of the average of the evaluation measures used over the five folds: NDCG@10 (Equation (2.17)), ERR@10 (Equation (2.18)) and MASP@10 (Equation (2.20a)). An average improvement of 25.6% is achieved in NDCG@10 and 13.6% in ERR@10 over the BM25 baseline. The results recorded in Table 5.2 show that the Coordinate Ascent and tree-based algorithms (MART, LambdaMART, …
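For reference, the two ranking metrics discussed above can be computed from a ranked list of graded relevance labels as sketched below. These are the standard definitions (exponential gain with a log2 discount for NDCG, and the cascade model of Chapelle et al. for ERR); the exact gain and discount conventions may differ slightly from Equations (2.17) and (2.18).

```python
import math

def dcg_at_k(gains, k):
    """Discounted cumulative gain with exponential gain 2^g - 1
    and a log2(rank + 1) position discount."""
    return sum((2 ** g - 1) / math.log2(i + 2) for i, g in enumerate(gains[:k]))

def ndcg_at_k(gains, k):
    """NDCG@k: DCG of the ranking divided by DCG of the ideal (sorted) ranking."""
    ideal = dcg_at_k(sorted(gains, reverse=True), k)
    return dcg_at_k(gains, k) / ideal if ideal > 0 else 0.0

def err_at_k(gains, k, g_max=4):
    """ERR@k under a cascade model: the expected reciprocal rank of the
    position at which a user, scanning top-down, is first satisfied."""
    p_not_stopped = 1.0
    err = 0.0
    for i, g in enumerate(gains[:k]):
        r = (2 ** g - 1) / (2 ** g_max)   # probability this result satisfies
        err += p_not_stopped * r / (i + 1)
        p_not_stopped *= (1 - r)
    return err
```

A perfectly ordered list yields NDCG@k of 1.0, so relative improvements such as the 25.6% reported above measure how much closer the learned rankings come to the ideal ordering than BM25 does.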
From Table 5.4 and Figure 5.5, the improvement achieved in terms of ERR@10 by using transcription-related features ("WITH TRANS") over using the feature vectors of length 50 ("WITHOUT TRANS") is clear. It can also be noted from the table that with fewer features ("WITHOUT TRANS"), tree-based and other algorithms outperform Coordinate Ascent in terms of ERR@10 as well. From Table 5.3 and Table 5.4, it can be seen that the Random Forests algorithm performs fairly well in both cases, with and without the transcription features; however, it may be affected if irrelevant features are present in the training data, as mentioned in [68].
Our proposed algorithms employ feature reduction and bagging techniques to simplify the learning-to-rank model and decrease the training time, as in Informed Forest (Algorithm 4.1) and PCA Reduced Forest (Algorithm 4.2), and also to select the best set of features for better search time, as in ReducedForest (Algorithm 4.3). These algorithms are …
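The two ingredients these algorithms combine can be sketched minimally as follows. This is not the thesis's Algorithm 4.1-4.3, only an illustration of the general pattern: project the feature vectors onto a few principal components, then bag models trained on bootstrap samples of the reduced data (here with a trivial least-squares base learner standing in for a regression tree).

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_reduce(X, n_components):
    """Project feature vectors onto the top principal components,
    illustrating the dimensionality-reduction step."""
    Xc = X - X.mean(axis=0)
    # SVD of the centered data; rows of Vt are the principal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def bagged_predict(X_train, y_train, X_test, n_bags=10, rng=rng):
    """Bagging: average the predictions of base models, each fit on a
    bootstrap resample of the (reduced) training data."""
    n = X_train.shape[0]
    preds = []
    for _ in range(n_bags):
        idx = rng.integers(0, n, size=n)              # bootstrap sample
        w, *_ = np.linalg.lstsq(X_train[idx], y_train[idx], rcond=None)
        preds.append(X_test @ w)
    return np.mean(preds, axis=0)
```

Reducing the feature dimension before bagging shrinks both the per-model fit cost (training time) and the per-query scoring cost (search time), which is the trade-off the proposed algorithms target.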
