Blanca committed (verified)
Commit 0e7f720 · 1 Parent(s): 5bed01e

Update content.py

Files changed (1)
  1. content.py +6 -6
content.py CHANGED
@@ -7,11 +7,11 @@ INTRODUCTION_TEXT = """
 
  Critical Questions Generation is the task of automatically generating questions that can unmask the assumptions held by the premises of an argumentative text.
 
- This leaderboard, aims at benchmarking the capacity of language technology systems to create Critical Questions (CQs). That is, questions that should be asked in order to judge if an argument is acceptable or fallacious.
+ This leaderboard aims at benchmarking the capacity of language technology systems to create Critical Questions (CQs), that is, questions that should be asked in order to judge whether an argument is acceptable or fallacious.
 
  The task consists of generating 3 Useful Critical Questions per argumentative text.
 
- All details on the task, the dataset, and the evaluation can be found in the paper [Benchmarking Critical Questions Generation: A Challenging Reasoning Task for Large Language Models](https://arxiv.org/abs/2505.11341)
+ All details on the task, the dataset, and the evaluation can be found in the paper [Benchmarking Critical Questions Generation: A Challenging Reasoning Task for Large Language Models](https://arxiv.org/abs/2505.11341) or in the [Shared Task](https://hitz-zentroa.github.io/shared-task-critical-questions-generation/).
 
  """
 
@@ -19,15 +19,15 @@ DATA_TEXT = """
 
  ## Data
 
- The [CQs-Gen dataset](https://huggingface.co/datasets/HiTZ/CQs-Gen) gathers 220 interventions of real debates. And contains:
+ The [CQs-Gen dataset](https://huggingface.co/datasets/HiTZ/CQs-Gen) gathers 220 interventions from real debates, divided into:
 
- - `validation`: which contains 186 interventions and can be used for training or validation, as it contains ~25 reference questions per intervention already evaluated accoding to their usefulness (either Useful, Unhelpful or Invalid).
- - `test`: which contains 34 interventions. The reference questions of this set (~70) are kept private to avoid data contamination. The questions generated using the test set is what should be submitted to this leaderboard.
+ - `validation`: contains 186 interventions and can be used for training or validation, as it has ~25 reference questions per intervention already evaluated according to their usefulness (either Useful, Unhelpful or Invalid).
+ - `test`: contains 34 interventions. The reference questions of this set (~70) are kept private to avoid data contamination. The questions generated for the test set are what should be submitted to this leaderboard.
 
 
  ## Evaluation
 
- Evaluation is done by comparing each of the 3 newly generated question to the reference questions of the test set using Semantic Text Similarity, and inheriting the label of the most similar reference given the threshold of 0.65. Questions where no reference is found are considered Invalid. See the evaluation function [here](https://huggingface.co/spaces/HiTZ/Critical_Questions_Leaderboard/blob/main/app.py#L141), or find more details in the [paper](https://arxiv.org/abs/2505.11341).
+ Each of the 3 newly generated questions is evaluated by comparing it to the reference questions of the test set using Semantic Text Similarity and inheriting the label of the most similar reference, given a similarity threshold of 0.65. Questions for which no sufficiently similar reference is found are considered Invalid. See the evaluation function [here](https://huggingface.co/spaces/HiTZ/Critical_Questions_Leaderboard/blob/main/app.py#L141), or find more details in the [paper](https://arxiv.org/abs/2505.11341).
 
  ## Leaderboard
 
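
As a quick orientation for the splits described in the updated DATA_TEXT, the dataset can be pulled straight from the Hub. The sketch below assumes the default configuration of `HiTZ/CQs-Gen` and the standard `datasets` API; the split names and sizes come from the text above, while column names and any extra configurations are not specified there and should be checked on the dataset card.

```python
# Minimal sketch: load the two CQs-Gen splits described above.
# Assumes the default configuration of HiTZ/CQs-Gen; check the dataset
# card for column names or additional configurations.
from datasets import load_dataset

cqs_gen = load_dataset("HiTZ/CQs-Gen")

# "validation": 186 interventions, each with ~25 reference questions already
# labeled as Useful, Unhelpful or Invalid; usable for training or validation.
validation = cqs_gen["validation"]

# "test": 34 interventions; its reference questions are kept private, and the
# questions generated for this split are what gets submitted to the leaderboard.
test = cqs_gen["test"]

print(validation)
print(test)
```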
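
The matching rule in the updated evaluation paragraph (compare each generated question to the test-set references with Semantic Text Similarity, inherit the label of the most similar reference when the similarity reaches 0.65, otherwise mark the question Invalid) can be sketched as follows. This is not the leaderboard's actual evaluation function, which lives in the linked app.py; the embedding model `all-MiniLM-L6-v2` and the input data structures are illustrative assumptions, and only the 0.65 threshold and the label-inheritance rule come from the text above.

```python
# Rough sketch of the label-inheritance rule described above, not the
# leaderboard's own evaluation code. The embedding model and data layout
# are assumptions; the 0.65 threshold and the Invalid fallback come from
# the description in content.py.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative STS model


def label_question(generated: str, references: dict[str, str], threshold: float = 0.65) -> str:
    """Label one generated critical question.

    `references` maps each reference question of the same intervention
    to its label: "Useful", "Unhelpful" or "Invalid".
    """
    ref_texts = list(references.keys())
    gen_emb = model.encode(generated, convert_to_tensor=True)
    ref_embs = model.encode(ref_texts, convert_to_tensor=True)

    # Cosine similarity between the generated question and every reference.
    sims = util.cos_sim(gen_emb, ref_embs)[0]
    best_idx = int(sims.argmax())

    # Inherit the label of the most similar reference above the threshold;
    # if nothing is similar enough, the question counts as Invalid.
    if float(sims[best_idx]) >= threshold:
        return references[ref_texts[best_idx]]
    return "Invalid"
```

Under the task description above, each of a system's 3 questions per intervention would be labeled this way, the goal being that all three come out Useful.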