Example counts not matching official splits
Hi, I had a look at this dataset, and I'm a bit suspicious about some splits where the example count does not match the original dataset count. Examples:
- DocVQA: FineVision: 10,189 examples. Original: 10,194 (train), 1,286 (val), plus a test split
- InfographicVQA: FineVision: 4,394 examples. Original: 4,406 (train), 579 (val), plus a test split
- ST-VQA: FineVision: 17,247 examples. Original: 17,028 (train), 1,893 (val), plus a test split
- Probably more, I only spot-checked those 3 for now.
I could have understood if FV always had fewer; maybe you drop some dups or obviously broken examples. But that's not it: in ST-VQA you have more examples than the original data. This is suspicious; do you have an explanation? (A quick sketch of how I counted is below.)
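In case it helps, here is roughly how I checked the counts. The repo id and the subset names below are my guesses, not verified identifiers, and the "official" numbers are typed in by hand from the dataset papers/sites:

```python
# Rough sketch: compare FineVision subset sizes against the official train/val counts.
# The repo id and the subset names are guesses; adjust to the actual configs.
from datasets import load_dataset

OFFICIAL = {  # subset: (official train, official val), entered by hand
    "docvqa": (10_194, 1_286),
    "infographicvqa": (4_406, 579),
    "st_vqa": (17_028, 1_893),
}

for subset, (n_train, n_val) in OFFICIAL.items():
    ds = load_dataset("HuggingFaceM4/FineVision", subset, split="train")
    print(f"{subset}: FineVision={len(ds)}, train={n_train}, val={n_val}, train+val={n_train + n_val}")
```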
Hi, we merged all the publicly available splits of the datasets (train+val), and then dropped samples if they failed in our pipelines or if the text length exceeded a threshold. For ST-VQA, for example, this means the 17,247 samples in FV come from the 17,028 + 1,893 = 18,921 total publicly available samples, which we then further prune if necessary. Will double check this though.
Thank you for the reply, please keep us posted. I have another question:
From your blog post, results are reported on the docvqa, textvqa, and infovqa datasets during training. Are those results reported on the public validation split, or on the proper test split (i.e. with uploading predictions to the eval server)?


Regarding the first question: I double checked ST-VQA and we used this version that was already on the hub, which combines the two publicly available splits. In general we never wanted to have more than one split per subset (even if the original dataset consists of e.g. train/val), since that would make usage more convoluted while IMO not being necessary for the intended use case of instruction-training general VLMs.
Regarding the second question:
For all the evaluations during training we used lmms-eval to keep reproducibility as high as possible. The exact tasks we used are here; the additional ones we report were run on the checkpoints after training finished. We didn't submit any predictions to external eval servers.
I see. Do I understand correctly (from the Cleaning section of the blog) that the only reasons to drop examples are (1) a corrupt image or (2) Q+A > 8192 tokens, and there is no other reason an example would be dropped (such as image dedup)?
It's a little suspicious that, for the three examples I happened to look at, the number of examples dropped from the combined train+val is almost exactly the number of val examples, all three times (worked out below), don't you think? I mean it's possible, it just seems highly unlikely a priori.
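To spell out the arithmetic behind that, using only the numbers quoted above:

```python
# Dropped from (official train + val) vs. the size of the val split, per subset.
counts = {  # subset: (official train, official val, FineVision)
    "docvqa": (10_194, 1_286, 10_189),
    "infographicvqa": (4_406, 579, 4_394),
    "st_vqa": (17_028, 1_893, 17_247),
}
for name, (train, val, fv) in counts.items():
    dropped = train + val - fv
    print(f"{name}: dropped from train+val = {dropped}, val size = {val}")
# docvqa:          dropped = 1291, val size = 1286
# infographicvqa:  dropped =  591, val size =  579
# st_vqa:          dropped = 1674, val size = 1893
```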
Yes, these are the rules for dropping a sample.
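To make that concrete, here is a rough sketch of what these two rules amount to. This is not our actual pipeline code; the per-sample layout and the tokenizer are placeholders for illustration:

```python
# Sketch of the two drop rules: (1) undecodable image, (2) Q+A longer than 8192 tokens.
# The tokenizer and the per-sample layout are placeholders, not the exact pipeline.
from io import BytesIO
from PIL import Image
from transformers import AutoTokenizer

MAX_TOKENS = 8192
tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder tokenizer

def keep_sample(image_bytes: bytes, question: str, answer: str) -> bool:
    # Rule 1: drop the sample if the image cannot be decoded.
    try:
        Image.open(BytesIO(image_bytes)).verify()
    except Exception:
        return False
    # Rule 2: drop the sample if question + answer exceed the token threshold.
    n_tokens = len(tokenizer(question + " " + answer)["input_ids"])
    return n_tokens <= MAX_TOKENS
```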
You are right, that's my bad for only focusing on your initial example with too many samples. What you describe comes from our 'greedy'/'opportunistic' approach to the raw data collection. We essentially collected subsets from wherever we could find them, which might include train & val (as in the case of the ST-VQA version from Vik) or just train (as in DocVQA and InfographicVQA). In general we tried to keep the number of samples we drop as small as possible, so it should not be on the order of a whole split. I agree that this is not ideal, but it's the trade-off we settled on to collect as much data as possible as a small team.
Ok, almost satisfied with the answer :) It makes sense that for info+doc it's a small handful of examples, either corrupt or >8k. But it still looks sus to me that for st-vqa, which usually has short questions and very short answers, about 1.8k examples of the combined train+val would be dropped?
Fair enough! Okay, I checked the logs I still have and I have to correct myself: we didn't take ST-VQA from Vik's version on the hub, it was already part of the_cauldron with the 17,247 samples. Unfortunately, their paper only describes their process abstractly, which included some deduplication, so I would guess that's where they dropped the samples in question. Sorry I can't give a better answer ...
OK, I will ask them, thanks a lot for all your patient answers!
(For the eventual reader who stumbles upon this and wants to tag along. Asking The Cauldron here: https://huggingface.co/datasets/HuggingFaceM4/the_cauldron/discussions/11)
I don't think any of the authors are actually at HF anymore, but Hugo Laurençon is also at Meta, so you might have more luck over your internal channels ;)
In this thread you went back and forth a few times on whether you merge train+val or not. This is actually super important and connects to Xiaohua's question.
The numbers you report seem to be val numbers. You can look at the lmms-eval harness: for st-vqa, infovqa and docvqa, the test-set numbers are only retrievable from an eval server with manual upload. So you would remember having uploaded hundreds of predictions manually, because that's tedious (I know, I've done it, and remember the pain lol). Which makes it most likely the curves you show are from val.
This in turn makes merging train+val on any of the evaluated datasets equal to training on your test set. So I hope you didn't actually merge train+val on any of the datasets at all; but then, I'm not sure why you'd say you did several times in this thread? :-/
Yes, as I said in the answer to Xiaohua already, we did not upload any predictions to eval-servers.
I initially said we merged train+val for st-vqa because that seemed the most plausible explanation after my (too) quick check (I hadn't done this part myself), but after properly checking I corrected myself, and can unfortunately only point you to the_cauldron for why the number of samples here is bigger than the train set. The other examples (e.g. infovqa and docvqa) are, as you pointed out, just the train sets with a small number of samples dropped.
In general, every large-scale dataset that's collected or scraped from the internet carries the risk of 'training on the test set', which is why there is a deduplication step after collection. We did this as well, with a very lenient threshold on what we flagged as potential duplicates, and analysed the drop in performance. When comparing FineVision with all public baselines (through the same pipeline), we had a smaller percentage of flagged samples, and when completely removing these flagged samples from training, the performance drop was also the smallest (1.6pp on average over all benchmarks). This indicates that we do not train on the test sets of the benchmarks we evaluate, or at least to a much lesser extent than similar work.
Since the benchmarks are at best proxies for the quality of the dataset, and everyone might have different preferences on what they would like to evaluate on and therefore decontaminate against, we decided to release the dataset in its 'raw' form, together with the deduplication pipeline and our pre-computed test-set embeddings. This lets the community work with the most data while also providing a way to responsibly control for contamination.
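If you want to run that check yourself, the idea is roughly the following. The embedding model, the file name, and the threshold here are placeholders for illustration, not our exact setup:

```python
# Sketch of embedding-based decontamination: flag training samples whose embedding
# is very close to any pre-computed test-set embedding. Model, file name and
# threshold are placeholders, not FineVision's exact pipeline.
import numpy as np
from sentence_transformers import SentenceTransformer

THRESHOLD = 0.95                                   # lenient: flag anything this similar or more
model = SentenceTransformer("all-MiniLM-L6-v2")    # placeholder embedding model

test_emb = np.load("test_set_embeddings.npy")      # hypothetical pre-computed embeddings
test_emb /= np.linalg.norm(test_emb, axis=1, keepdims=True)

def flag_potential_duplicates(train_texts: list[str]) -> np.ndarray:
    """Boolean mask over train_texts: True = flagged as a potential test-set duplicate."""
    train_emb = model.encode(train_texts, normalize_embeddings=True)
    sims = train_emb @ test_emb.T                  # cosine similarity (both sides normalized)
    return sims.max(axis=1) >= THRESHOLD
```

You can then either drop the flagged samples or just measure how much they contribute, as we did for the 1.6pp comparison above.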
OK, that makes things clear. In the meantime, I also found the answer to the st-vqa case, will post in the other thread.
Thanks for your patience with me!
