A longitudinal assessment of the persistence of Twitter datasets
Journal of the Association for Information Science and Technology, 974–984
Social media datasets are not always fully replicable. To comply with the terms of platforms such as Twitter, researchers can release only a list of unique identifiers, which others must then use to recollect the data themselves. As a result, subsets of the data become unavailable over time, as content can be deleted or user accounts deactivated. To quantify the long-term impact of this on the replicability of datasets, we perform a longitudinal analysis of the persistence of 30 Twitter datasets comprising more than 147 million tweets. By recollecting, via their tweet IDs, Twitter datasets ranging from 0 to 4 years old, we examine four factors that quantify the extent to which recollected datasets resemble the originals: completeness, representativity, similarity, and changingness. Although the share of available tweets keeps decreasing as a dataset ages, we find that the textual content of the recollected subset remains largely representative of the original dataset. The representativity of the metadata, however, keeps fading over time, both because the dataset shrinks and because certain metadata, such as users' follower counts, keep changing. Our study has important implications for researchers sharing and using publicly shared Twitter datasets in their research.
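As a minimal sketch of the first of these factors (the paper does not publish its code, so the function name and interface here are illustrative assumptions), completeness can be computed as the fraction of the original tweet IDs that are still retrievable after rehydration:

```python
def completeness(original_ids, recollected_ids):
    """Fraction of the original tweet IDs still present in the
    recollected dataset (1.0 = fully replicable, 0.0 = all lost)."""
    original = set(original_ids)
    return len(original & set(recollected_ids)) / len(original)

# Hypothetical example: 3 of 4 original tweets survive rehydration
print(completeness([101, 102, 103, 104], [101, 102, 104]))  # → 0.75
```

The other factors (representativity, similarity, changingness) compare the content and metadata distributions of the two datasets rather than simple ID overlap, and so require the recollected tweet objects, not just their IDs.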