  • #5203
    chenhm
    Participant

    My final submission declared only 177 sensor faults, yet the score ended up above 300. How is the total score calculated, both per day and in the final evaluation? Will the competition organizers release the correct answers for the test data?

    #5639
    moredof
    Participant

    Hi,

    My team’s (Green) submission declared only 112 faulty sensors, and only 12 of our fault answers exactly match yours.

    Clearly, the algorithms we used to decide on a fault are very different.

    It’s very interesting…

    #5640
    eklund
    Participant

    You also get “points” for correctly identifying good anemometers.

    score = sum(YourSubmission==GroundTruth)

    So if YS = [1 0 0 1] and GT = [0 0 0 1], score = 3
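    For concreteness, here is a minimal sketch of that rule in Python/NumPy (the names and example values are illustrative, not taken from the official scorer):

    import numpy as np

    # 1 = sensor flagged as faulty, 0 = sensor flagged as good
    your_submission = np.array([1, 0, 0, 1])
    ground_truth = np.array([0, 0, 0, 1])

    # One point per sensor whose label matches the ground truth,
    # whether that sensor is faulty or good
    score = int(np.sum(your_submission == ground_truth))
    print(score)  # prints 3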

    #5641
    chenhm
    Participant

    In that case, the score function is different from what was announced earlier during the competition. In addition, the best score (by the winner, dsuc) corresponds to only 62% classification accuracy. Does this indicate that the lack of sensor-fault training data significantly hindered participants in verifying their fault hypotheses? Is it possible to reveal the ground truth to us at some point?
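    Reading that 62% figure against the scoring rule above, classification accuracy is presumably just the score divided by the total number of labeled sensors (a hedged interpretation, not an official definition). Continuing the toy example:

    accuracy = score / len(ground_truth)  # 3 / 4 = 0.75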

    #5642
    chenhm
    Participant

    Just curious whether replies to this thread have to be approved by an administrator before they appear.

    #5643
    pavleb
    Participant

    Can we expect an official release of the correct answers, similar to what was done for the PHM 2009 competition?
