Code and Datasets from the paper, "The r/Jokes Dataset: a Large Scale Humor Collection" by Orion Weller and Kevin Seppi
Dataset files are located in `data/{train/dev/test}.tsv` for the regression task, while the full unsplit data can be found in `data/preprocessed.tsv`. These files will need to be unzipped after cloning the repo.
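Once unzipped, the splits can be read with standard TSV tooling. Here is a minimal loading sketch with pandas; the column layout is not documented in this README, so inspect the files to confirm the schema:

```python
import pandas as pd

# Load the training split. Whether the file carries a header row, and what the
# columns are called, is an assumption -- inspect the unzipped TSV first
# (pass header=None to read_csv if there is no header row).
train = pd.read_csv("data/train.tsv", sep="\t")
print(train.shape)
print(train.head())
```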
For related projects, see our work on humor detection (separating humorous jokes from non-humorous ones) or on generating humor automatically.
**We do not endorse these jokes. Please view at your own risk.**
The data is under the Reddit License and Terms of Service: users must follow the Reddit User Agreement and Privacy Policy, and must remove any posts if asked to by the original user. For more details, please see the link above.
## Setup
- Run `pip3 install -r requirements.txt`
- Gather the NLTK packages by running `bash download_nltk_packages.sh`. This downloads the packages `averaged_perceptron_tagger`, `words`, `stopwords`, and `maxent_ne_chunker`, used for analysis/preprocessing.
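If the shell script is inconvenient on your platform, the same packages can be fetched from Python directly. This is only the NLTK-level equivalent of what `download_nltk_packages.sh` does, not a copy of the script itself:

```python
import nltk

# Fetch the models/corpora used by the analysis and preprocessing scripts.
for package in ["averaged_perceptron_tagger", "words", "stopwords", "maxent_ne_chunker"]:
    nltk.download(package)
```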
## Reproduce
- Run `python3 gather_reddit_pushshift.py` after `cd prepare_data` to gather the Reddit post IDs (a sketch of the underlying Pushshift query follows this list).
- Run `python3 preprocess.py --update` to update the Reddit post IDs with the full post.
- Run `python3 preprocess.py --preprocess` to preprocess the Reddit posts into the final datasets.
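To give a sense of what the gathering step does under the hood, here is a minimal sketch of a Pushshift submission query for r/Jokes. The endpoint and parameters follow the public Pushshift API; the actual logic in `gather_reddit_pushshift.py` may paginate and filter differently:

```python
import requests

# Ask Pushshift for r/Jokes submissions created before a given UNIX timestamp.
# The gather script performs this kind of request repeatedly, walking backwards
# through time; the parameter values below are illustrative.
url = "https://api.pushshift.io/reddit/search/submission/"
params = {"subreddit": "Jokes", "size": 100, "before": 1577836800, "sort": "desc"}
posts = requests.get(url, params=params).json()["data"]
for post in posts[:5]:
    print(post["id"], post.get("title", ""))
```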
## Analysis
- Run `cd analysis`
- Run `python3 time_statistics.py` to gather the statistics that display over time.
- Run `python3 dataset_statistics.py` to gather the overall dataset statistics (a toy version is sketched after this list).
- See the plots in the `./plots` folder.
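As a toy illustration of the kind of numbers `dataset_statistics.py` reports, the snippet below counts jokes and summarizes token lengths from the unsplit file. Treating the final column as the joke text is an assumption, so check the header first:

```python
import pandas as pd

# Rough overall statistics: joke count plus a whitespace-token length summary.
# Assumes the last column holds the joke text -- verify against the TSV header.
df = pd.read_csv("data/preprocessed.tsv", sep="\t")
lengths = df[df.columns[-1]].astype(str).str.split().map(len)
print("jokes:", len(df))
print("mean tokens: %.1f | max tokens: %d" % (lengths.mean(), lengths.max()))
```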
## Extend the Dataset
- Run the first two commands in the `Reproduce` section above.
- Update the code in the `preprocess` function of the `preprocess.py` file to NOT remove all jokes after 2020 (line 89); the cutoff is illustrated below. Then run `python3 preprocess.py --preprocess`.
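The change amounts to disabling a date comparison. The sketch below is purely illustrative: the variable and field names are hypothetical, not copied from `preprocess.py`:

```python
from datetime import datetime, timezone

# Hypothetical stand-in for the cutoff preprocess.py applies around line 89.
CUTOFF = datetime(2020, 1, 1, tzinfo=timezone.utc).timestamp()

def keep_post(post):
    # Original behavior (roughly): drop anything created after the cutoff.
    # return post["created_utc"] < CUTOFF
    return True  # extended behavior: keep all years
```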
## Citation
If you found this repository helpful, please cite the following paper:
@inproceedings{rjokesData2020,
  title = {The r/Jokes Dataset: a Large Scale Humor Collection},
  author = {Weller, Orion and Seppi, Kevin},
  booktitle = {Proceedings of the 2020 Conference of Language Resources and Evaluation},
  month = may,
  year = {2020},
}