Example counts not matching official splits

#11 · opened by giffmana

Hi, I was redirected here for the question I had about FineVision, which used some datasets from The Cauldron. My original question is here, but I'm copy-pasting it for convenience:

Hi, I had a look at this dataset, and I'm a bit suspicious about some splits where the example count does not match the original dataset's count. Examples:

  • DocVQA: FineVision: 10'189 examples. Original: 10'194 (train), 1'286 (val), test
  • InfographicVQA: FineVision: 4'394 examples. Original: 4'406 (train), 579 (val), test
  • ST-VQA: FineVision: 17'247 examples. Original: 17'028 (train), 1'893 (val), test

There are probably more; I only spot-checked those three for now. I could have understood it if FV always had fewer examples, since you might drop some dups or obviously broken examples. But that's not it: in ST-VQA you have more examples than the original data. This is suspicious; do you have an explanation?
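
For reference, a quick way to re-run this spot check without downloading the data is to read the split metadata with the datasets library. This is a minimal sketch; the repo ID and subset names are my guesses at how FineVision names these configs, so check the dataset card for the exact names.

# Minimal sketch for re-running the count spot check from split metadata.
# The subset/config names below are guesses; verify against the dataset card.
from datasets import load_dataset_builder

official_train = {  # official train counts quoted above
    "docvqa": 10_194,
    "infographicvqa": 4_406,
    "st_vqa": 17_028,  # Ronghang Hu 2020 split; see below
}

for subset, official in official_train.items():
    info = load_dataset_builder("HuggingFaceM4/FineVision", subset).info
    n = info.splits["train"].num_examples  # needs split metadata on the Hub
    print(f"{subset}: FineVision={n:,} vs official train={official:,}")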

OK, I figured out the ST-VQA case. It turns out there is no official train/val split. However, there is one that a paper made and that everyone now follows: Ronghang Hu 2020. That's where my numbers come from.

But it turns out The Cauldron originally took the whole train set (not the Ronghang split) and excluded examples with more than one answer from the import (I asked the authors):

> jq '.data[].answers | length' train_task_2.json | sort | uniq -c
23121 1
 2953 2
> jq '.data[].set_name' train_task_2.json | sort | uniq -c
26074 "train"
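
For anyone without jq at hand, here are the same tallies in Python. This sketch assumes only the JSON layout implied by the queries above (a top-level "data" list whose entries carry "answers" and "set_name"):

# Reproduce the jq tallies, assuming train_task_2.json looks like
# {"data": [{"answers": [...], "set_name": "train", ...}, ...]}.
import json
from collections import Counter

with open("train_task_2.json") as f:
    data = json.load(f)["data"]

print(Counter(len(ex["answers"]) for ex in data))  # Counter({1: 23121, 2: 2953})
print(Counter(ex["set_name"] for ex in data))      # Counter({'train': 26074})

# The import filter described above keeps only the single-answer examples.
print(sum(len(ex["answers"]) == 1 for ex in data))  # 23121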

So this means that when someone trains on the Cauldron split, they should not eval on the Ronghang Hu 2020 "val" split: Cauldron's train set was built from the full official train JSON, which includes the examples that the Ronghang Hu 2020 split sets aside as "val".
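
To quantify the leak, a hedged sketch: per the authors, Cauldron's ST-VQA import is the full train JSON minus the multi-answer examples, so you can rebuild that set from train_task_2.json and intersect it with your copy of the Ronghang Hu 2020 "val" split. The val filename and the (file_path, question) key are assumptions about local files, not a documented schema.

# Hedged sketch of the train/val overlap check. The val-split filename is
# hypothetical, and keying on (file_path, question) assumes both JSONs carry
# those fields; adapt to however you stored the Ronghang Hu 2020 split.
import json

def question_keys(path, single_answer_only=False):
    with open(path) as f:
        data = json.load(f)["data"]
    if single_answer_only:
        # Mirror the Cauldron import filter described above.
        data = [ex for ex in data if len(ex["answers"]) == 1]
    return {(ex["file_path"], ex["question"]) for ex in data}

cauldron_like = question_keys("train_task_2.json", single_answer_only=True)
val_keys = question_keys("ronghang_hu_2020_val.json")  # hypothetical filename
leaked = len(cauldron_like & val_keys)
print(f"{leaked}/{len(val_keys)} val questions also appear in the Cauldron-style train set")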
