# PIE Dataset Card for "cdcp"
This is a PyTorch-IE wrapper for the CDCP Hugging Face dataset loading script.
## Usage
```python
from pie_datasets import load_dataset
from pie_documents.documents import TextDocumentWithLabeledSpansAndBinaryRelations

# load the dataset
dataset = load_dataset("pie/cdcp")

# if required, normalize the document type (see section Document Converters below)
dataset_converted = dataset.to_document_type(TextDocumentWithLabeledSpansAndBinaryRelations)
assert isinstance(dataset_converted["train"][0], TextDocumentWithLabeledSpansAndBinaryRelations)

# get the first relation in the first document
doc = dataset_converted["train"][0]
print(doc.binary_relations[0])
# BinaryRelation(head=LabeledSpan(start=0, end=78, label='value', score=1.0), tail=LabeledSpan(start=79, end=242, label='value', score=1.0), label='reason', score=1.0)

print(doc.binary_relations[0].resolve())
# ('reason', (('value', 'State and local court rules sometimes make default judgments much more likely.'), ('value', 'For example, when a person who allegedly owes a debt is told to come to court on a work day, they may be forced to choose between a default judgment and their job.')))
```
## Data Schema
The document type for this dataset is `CDCPDocument`, which defines the following data fields:
- `text` (str)
- `id` (str, optional)
- `metadata` (dictionary, optional)
and the following annotation layers:
- `propositions` (annotation type: `LabeledSpan`, target: `text`)
- `relations` (annotation type: `BinaryRelation`, target: `propositions`)
- `urls` (annotation type: `Attribute`, target: `propositions`)
See here for the annotation type definitions.
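As a rough illustration, this schema maps onto a PyTorch-IE document type along the following lines. This is a minimal sketch assuming the `pytorch_ie` core API; the actual `CDCPDocument`, including the `Attribute`-based `urls` layer, is defined in the dataset repository:

```python
import dataclasses

from pytorch_ie.annotations import BinaryRelation, LabeledSpan
from pytorch_ie.core import AnnotationList, annotation_field
from pytorch_ie.documents import TextBasedDocument

# Sketch only: mirrors the fields and annotation layers described above.
# `text`, `id`, and `metadata` are inherited from TextBasedDocument; the
# `urls` layer is omitted since its Attribute type is not part of core pytorch_ie.
@dataclasses.dataclass
class CDCPDocumentSketch(TextBasedDocument):
    propositions: AnnotationList[LabeledSpan] = annotation_field(target="text")
    relations: AnnotationList[BinaryRelation] = annotation_field(target="propositions")
```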
## Document Converters
The dataset provides document converters for the following target document types:
- `pie_documents.documents.TextDocumentWithLabeledSpansAndBinaryRelations`
  - `labeled_spans`: `LabeledSpan` annotations, converted from `CDCPDocument`'s `propositions`
    - labels: `fact`, `policy`, `reference`, `testimony`, `value`
    - if `propositions` contain whitespace at the beginning and/or the end, the whitespace is trimmed (see the sketch below)
  - `binary_relations`: `BinaryRelation` annotations, converted from `CDCPDocument`'s `relations`
    - labels: `reason`, `evidence`
See here for the document type definitions.
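The whitespace trimming mentioned above can be pictured with a small helper like this (an illustrative sketch, not the actual converter code):

```python
def trim_span(text: str, start: int, end: int) -> tuple[int, int]:
    """Shift span boundaries inward so the span excludes leading/trailing whitespace."""
    while start < end and text[start].isspace():
        start += 1
    while end > start and text[end - 1].isspace():
        end -= 1
    return start, end

# a proposition annotated as " the rule is unfair. " keeps only the visible text
assert trim_span(" the rule is unfair. ", 0, 21) == (1, 20)
```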
## Collected Statistics after Document Conversion
We use the script `evaluate_documents.py` from PyTorch-IE-Hydra-Template to generate these statistics. After checking out that code, the statistics and plots can be generated by the command:

```bash
python src/evaluate_documents.py dataset=cdcp_base metric=METRIC
```

where `METRIC` is one of the available metric configs in `configs/metric/` (see metrics).

This also requires the following dataset config at `configs/dataset/cdcp_base.yaml` in the repository:
```yaml
_target_: src.utils.execute_pipeline
input:
  _target_: pie_datasets.DatasetDict.load_dataset
  path: pie/cdcp
  revision: 001722894bdca6df6a472d0d186a3af103e392c5
```
For token-based metrics, this uses the `bert-base-uncased` tokenizer from `transformers.AutoTokenizer` to tokenize the text in `TextDocumentWithLabeledSpansAndBinaryRelations` (see the document type above).
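For instance, the token count of a single unit can be obtained as follows (the sentence is taken from the usage example above):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
tokens = tokenizer.tokenize(
    "State and local court rules sometimes make default judgments much more likely."
)
print(len(tokens))  # the number of tokens this unit contributes to the statistics below
```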
### Relation argument (outer) token distance per label
The distance is measured from the first token of the first argumentative unit to the last token of the last unit, a.k.a. outer distance.
We collect the following statistics: number of documents in the split (no. doc), number of relations (len), mean token distance (mean), standard deviation of the distance (std), minimum outer distance (min), and maximum outer distance (max). We also present histograms in collapsible sections, showing the distribution of these relation distances (x-axis) and their counts (y-axis).
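Conceptually, the outer distance of a relation follows from the token offsets of its two arguments, along these lines (a hypothetical helper, not the metric implementation):

```python
# head/tail are (first_token_index, last_token_index + 1) of the two arguments
def outer_token_distance(head: tuple[int, int], tail: tuple[int, int]) -> int:
    first = min(head[0], tail[0])  # first token of the earlier argument
    last = max(head[1], tail[1])   # exclusive end of the later argument
    return last - first
```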
#### Command
```bash
python src/evaluate_documents.py dataset=cdcp_base metric=relation_argument_token_distances
```
##### train (580 documents)
|  | len | max | mean | min | std |
|---|---|---|---|---|---|
| ALL | 2204 | 240 | 48.839 | 8 | 31.462 |
| evidence | 94 | 196 | 66.723 | 14 | 42.444 |
| reason | 2110 | 240 | 48.043 | 8 | 30.64 |
##### test (150 documents)
|  | len | max | mean | min | std |
|---|---|---|---|---|---|
| ALL | 648 | 212 | 51.299 | 8 | 31.159 |
| evidence | 52 | 170 | 73.923 | 20 | 39.855 |
| reason | 596 | 212 | 49.326 | 8 | 29.47 |
### Span lengths (tokens)
The span length is measured from the first token to the last token of the respective argumentative unit.
We collect the following statistics: number of documents in the split (no. doc), number of spans (len), mean number of tokens per span (mean), standard deviation of that number (std), and the minimum (min) and maximum (max) number of tokens in a span. We also present histograms in collapsible sections, showing the distribution of these span lengths (x-axis) and their counts (y-axis).
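With a fast tokenizer, the token length of a character span can be derived from the offset mapping, e.g. (a sketch; it assumes span boundaries align with token boundaries, which holds after the whitespace trimming described above):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def span_length_in_tokens(text: str, start: int, end: int) -> int:
    # count tokens whose character offsets fall entirely inside the span
    enc = tokenizer(text, return_offsets_mapping=True, add_special_tokens=False)
    return sum(1 for s, e in enc["offset_mapping"] if s >= start and e <= end)
```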
#### Command
```bash
python src/evaluate_documents.py dataset=cdcp_base metric=span_lengths_tokens
```
| statistics | train | test |
|---|---|---|
| no. doc | 580 | 150 |
| len | 3901 | 1026 |
| mean | 19.441 | 18.758 |
| std | 11.71 | 10.388 |
| min | 2 | 3 |
| max | 142 | 83 |
### Token length (tokens)
The token length is measured from the first token of the document to the last one.
We collect the following statistics: number of documents in the split (no. doc), mean document length in tokens (mean), standard deviation of that length (std), and the minimum (min) and maximum (max) number of tokens in a document. We also present histograms in collapsible sections, showing the distribution of these document lengths (x-axis) and their counts (y-axis).
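With the tokenizer from above, the per-document count corresponds to something like the following (a sketch matching the description, not the metric code):

```python
n_tokens = len(tokenizer.tokenize(doc.text))  # doc as in the usage example above
```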
#### Command
```bash
python src/evaluate_documents.py dataset=cdcp_base metric=count_text_tokens
```
| statistics | train | test |
|---|---|---|
| no. doc | 580 | 150 |
| mean | 130.781 | 128.673 |
| std | 101.121 | 98.708 |
| min | 13 | 15 |
| max | 562 | 571 |