ImagEVAL 2006 Official Results

 

The results of the first edition of ImagEVAL were officially presented during the first day of the NicephoreDays.

The scientific coordinator (CEA List) presented the results for each task.

Presentations during the NicephoreDays:

As agreed with the ImagEVAL consortium, all results remain anonymous except:

  • the 3 best runs and
  • the 3 best teams 

The other teams are free to make their participation public and to promote their results.

NOTE This policy is explained by the presence of some companies among the participants. The ImagEVAL consortium hopes that complete public disclosure, as observed in other evaluation campaigns, will be practiced in future editions.

EVALUATION FEATURES

For each task, we provide:

  • Mean Average Precision (MAP)
  • Average Precision for each run and for each query
  • Recall & Precision values
  • Global recall
  • Difference to the median (of the average precision) for each run: this information enables each team to know on which queries their system performed well or badly.
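As a rough illustration of the first two measures (this is a minimal sketch, not the official TRECeval implementation), per-query Average Precision and the Mean Average Precision over a set of queries can be computed as:

```python
def average_precision(ranked, relevant):
    """Average Precision for one query: the mean of the precision values
    observed at each rank where a relevant document is retrieved,
    normalized by the total number of relevant documents."""
    hits = 0
    precisions = []
    for rank, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant) if relevant else 0.0

def mean_average_precision(queries):
    """MAP: the mean of the per-query Average Precision values.
    `queries` is a list of (ranked_results, relevant_set) pairs."""
    aps = [average_precision(ranked, relevant) for ranked, relevant in queries]
    return sum(aps) / len(aps) if queries else 0.0
```

For example, a run that returns relevant documents at ranks 1 and 3 out of 2 relevant documents gets an AP of (1/1 + 2/3) / 2 ≈ 0.83.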

NOTE Other evaluation measures are directly available with TRECeval.

ABOUT THE PROCESSING TIMES

We decided to take processing time into account and provide a combined representation that plots the main ranking metric against processing time.

Normalizing processing time remains a hard problem because of the difficulty of isolating the different run times.

The consortium proposed three main processing times:

  • (1) Feature extraction
  • (2) Learning / modelling processing
  • (3) Retrieval processing

The participants usually reported different interpretations of these times, according to the specificities of their systems. Nevertheless, after several requests for more information, we managed to provide standardized processing times and propose a 2D representation.

The “Main metric vs. Time” space can be easily split into four areas according to the speed and precision of a system:

[Figure: the “Main metric vs. Time” space]
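The four-area split described above can be sketched as a simple classifier. The thresholds below are purely illustrative assumptions, not the values used in the official campaign:

```python
def quadrant(map_score, time_s, map_threshold=0.5, time_threshold=60.0):
    """Place a system in the 'main metric vs. time' plane.
    A system is 'precise' if its main metric (e.g. MAP) is at or above
    map_threshold, and 'fast' if its processing time is at or below
    time_threshold (seconds). Both thresholds are illustrative."""
    precise = map_score >= map_threshold
    fast = time_s <= time_threshold
    if precise and fast:
        return "precise & fast"
    if precise:
        return "precise & slow"
    if fast:
        return "imprecise & fast"
    return "imprecise & slow"
```

A system with MAP 0.7 that runs in 10 seconds would land in the "precise & fast" area; one with MAP 0.2 taking 120 seconds would land in "imprecise & slow".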

 

PDF version

ImagEVAL 2006 The Official Campaigns [ImagEVAL2006_OfficialCampaign.pdf]

ImagEVAL 2006 Graphs and Tables [ImagEVAL2006_Graphs&Tables.pdf]

HTML version

Task 1: Transformed images

[sub-task 1.1]

[sub-task 1.2]

Task 2: Mixed text/image retrieval

Task 3: Text area detection

Task 4: Object detection

Task 5: Attribute extraction

 
