
Preprocessed QASPER dataset

Each record has five fields: question_with_context, paragraph_indices, evidence, answer, and metadata.
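As a minimal sketch of how the evidence field might be consumed: assuming `evidence` is a 0/1 mask aligned element-wise with `paragraph_indices` (our reading of the schema, not documented behavior), the indices of evidence paragraphs can be filtered out like this. The helper name is hypothetical.

```python
def evidence_paragraph_offsets(paragraph_indices, evidence_mask):
    """Return the entries of `paragraph_indices` whose mask flag is 1.

    Assumes `evidence_mask` is a 0/1 list aligned element-wise with
    `paragraph_indices`; both assumptions follow from the dataset schema,
    not from official documentation.
    """
    return [off for off, flag in zip(paragraph_indices, evidence_mask) if flag]
```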

Working doc: https://docs.google.com/document/d/1gYPhPNJ5LGttgjix1dwai8pdNcqS6PbqhsM7W0rhKNQ/edit?usp=sharing

Original:

Differences between our implementation and the original:

  1. We use the dataset provided at https://huggingface.co/datasets/allenai/qasper since it doesn't require manually downloading files.
  2. We remove the allennlp dependency since the Python package can no longer be installed.
  3. We add baselines to qasper/models. Currently, we have
    • QASPER (Longformer Encoder Decoder)
    • GPT-3.5-Turbo
    • TODO: RAG (with R=TF-IDF or Contriever) implemented in LangChain?
  4. We replace allennlp special tokens with the special tokens of the HF transformer tokenizer:
    • paragraph separator: '' -> tokenizer.sep_token
    • sequence pair start tokens: _tokenizer.sequence_pair_start_tokens -> tokenizer.bos_token
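The special-token substitution in difference 4 can be sketched as follows. This is a hypothetical helper, not the actual preprocessing code; the token strings `"<s>"` and `"</s>"` are RoBERTa-style defaults shown for illustration, and the exact placement of separators may differ from the real pipeline.

```python
def build_question_with_context(question_tokens, paragraph_token_lists,
                                bos_token="<s>", sep_token="</s>"):
    """Sketch of difference 4: start the sequence with the tokenizer's BOS
    token and separate the question and each paragraph with its SEP token.

    `bos_token`/`sep_token` defaults are RoBERTa-style assumptions; in
    practice they would come from the HF tokenizer's attributes.
    """
    tokens = [bos_token] + list(question_tokens) + [sep_token]
    for para in paragraph_token_lists:
        tokens.extend(para)
        tokens.append(sep_token)
    return tokens
```

In the real code, `bos_token` and `sep_token` would be read from `tokenizer.bos_token` and `tokenizer.sep_token` of the loaded HF tokenizer rather than hard-coded.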

Usage

from datasets import load_dataset

dataset = load_dataset("ag2435/qasper")
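The token fields store byte-level BPE pieces in which "Ġ" marks a token that begins with a space (the GPT-2/RoBERTa convention). A toy example of recovering plain text from such a token list; the values here are invented, and real rows are far longer:

```python
# Toy row with invented values mimicking the question_with_context field.
example = {
    "question_with_context": ["<s>", "What", "Ġis", "Ġthe", "Ġseed",
                              "Ġlex", "icon", "?", "</s>"],
}

# In GPT-2/RoBERTa-style byte-level BPE, "Ġ" marks a leading space.
tokens = example["question_with_context"][1:-1]   # drop <s> and </s>
question = "".join(tokens).replace("Ġ", " ").strip()
# question == "What is the seed lexicon?"
```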