Dataset Column: Datasets for Online Multimedia Verification

Introduction

Online disinformation is a problem that has been attracting increased interest from researchers worldwide, as the breadth and magnitude of its impact is progressively manifested and documented in a number of studies (Boididou et al., 2014; Zhou & Zafarani, 2018; Zubiaga et al., 2018). This emerging area of research is inherently multidisciplinary and there have been numerous treatments of the subject, each with a distinct perspective or theme, ranging from the predominant perspectives of media, journalism and communications (Wardle & Derakhshan, 2017) and political science (Allcott & Gentzkow, 2017) to those of network science (Lazer et al., 2018), natural language processing (Rubin et al., 2015) and signal processing, including media forensics (Zampoglou et al., 2017). Given the multimodal nature of the problem, it is no surprise that the multimedia community has taken a strong interest in the field.

From a multimedia perspective, two research problems have attracted the bulk of researchers’ attention: a) detection of content tampering and content fabrication, and b) detection of content misuse for disinformation. The first was traditionally studied within the field of media forensics (Rocha et al., 2011), but has recently been under the spotlight as a result of the rise of deepfake videos (Güera & Delp, 2018), i.e. videos produced by a special class of generative models capable of synthesizing highly convincing media content from scratch or based on some authentic seed content. The second concerns multimedia misuse or <i>misappropriation</i>, i.e. the use of media content out of its original context with the goal of spreading misinformation or false narratives (Tandoc et al., 2018).

Developing automated approaches to detect media-based disinformation relies to a great extent on the availability of relevant datasets, both for training supervised learning models and for evaluating their effectiveness.
Yet, developing and releasing such datasets is a challenge in itself for a number of reasons:

<li>Identifying, curating, understanding, and annotating cases of media-based misinformation is a very effort-intensive task. More often than not, the annotation process requires careful and extensive reading of pertinent news coverage from a variety of sources, similar to the journalistic practice of verification (Brandtzaeg et al., 2016).</li>
<li>Media-based disinformation is largely manifested on social media platforms, and relevant datasets are therefore hard to collect and distribute due to the temporary nature of social media content, the numerous technical restrictions and challenges involved in collecting content (mostly due to limitations or complete lack of appropriate support by the respective APIs), as well as the legal and ethical issues in releasing social media-based datasets (due to the need to comply with the respective Terms of Service and any applicable data protection law).</li>

In this column, we present two multimedia datasets that could be of value to researchers who study media-based disinformation and develop automated approaches to tackle the problem. The first, called the <a href="https://mklab.iti.gr/results/fake-video-corpus/" data-mce-href="https://mklab.iti.gr/results/fake-video-corpus/">Fake Video Corpus</a> (Papadopoulou et al., 2019), is a manually curated collection of 200 debunked and 180 verified videos, along with relevant annotations, accompanied by a set of 5,193 near-duplicate instances of them that were posted on popular social media platforms. The second, called <a href="http://ndd.iti.gr/fivr/" data-mce-href="http://ndd.iti.gr/fivr/">FIVR-200K</a> (Kordopatis-Zilos et al., 2019), is an automatically collected dataset of 225,960 videos, a list of 100 video queries, and manually verified annotations regarding the relation (if any) of the dataset videos to each of the queries (i.e. near-duplicate, complementary scene, same incident).

For each of the two datasets, we present the design and creation process, focusing on issues and questions regarding the relevance of the collected content, the technical means of collection, and the process of annotation, which had the dual goal of ensuring high accuracy and keeping the manual annotation cost manageable. Given that each dataset is accompanied by a detailed journal article, in this column we limit our description to high-level information, emphasizing the utility and creation process in each case rather than detailed statistics, which are disclosed in the respective papers.

Following the presentation of the two datasets, we proceed to a critical discussion, highlighting their limitations and some caveats, and delineating future steps towards high-quality dataset creation for the field of multimedia-based misinformation.

Related Datasets

The complexity and challenge of the multimedia verification problem has led to the creation of numerous datasets and benchmarking efforts, each designed for a particular task within this area. We can broadly classify these efforts into three areas: a) multimedia forensics, b) multimedia retrieval, and c) multimedia post classification. Datasets that focus on the text modality, e.g. the <a href="http://www.fakenewschallenge.org/" data-mce-href="http://www.fakenewschallenge.org/">Fake News Challenge</a>, the <a href="https://www.clickbait-challenge.org/" data-mce-href="https://www.clickbait-challenge.org/">Clickbait Challenge</a>, <a href="https://pan.webis.de/semeval19/semeval19-web/" data-mce-href="https://pan.webis.de/semeval19/semeval19-web/">Hyperpartisan News Detection</a> (Kiesel et al., 2019), RumourEval (Derczynski et al., 2017), etc.
are beyond the scope of this post and are hence not included in this discussion.

<b>Multimedia forensics: </b>Generating high-quality multimedia forensics datasets has always been a challenge, since creating convincing forgeries is normally a manual task requiring a fair amount of skill; as a result, such datasets have generally been few and limited in scale. With respect to <span style="cursor: pointer; text-decoration-line: underline; text-decoration-style: dashed;" title="Image splicing is a type of image tampering that involves making a composite picture by cutting an object from one image and adding it to another image." data-mce-style="cursor: pointer; text-decoration-line: underline; text-decoration-style: dashed;">image splicing</span>, our own survey (Zampoglou et al., 2017) listed a number of datasets that had been made available by that point, including our own <a href="https://mklab.iti.gr/results/the-wild-web-tampered-image-dataset/" data-mce-href="https://mklab.iti.gr/results/the-wild-web-tampered-image-dataset/">Wild Web tampered image dataset</a>, which consists of real-world forgeries collected from the Web, including multiple near-duplicates, making it a large and particularly challenging collection. Recently, the <a href="http://kt.agh.edu.pl/~korus/downloads/dataset-realistic-tampering/" data-mce-href="http://kt.agh.edu.pl/~korus/downloads/dataset-realistic-tampering/">Realistic Tampering Dataset</a> (Korus & Huang, 2017) was proposed, offering a large number of convincing forgeries for evaluation. On the other hand, <span style="cursor: pointer; text-decoration-line: underline; text-decoration-style: dashed;" title="A copy-move forgery is a particular kind of image tampering that involves copying and pasting from one region to another within the same image." data-mce-style="cursor: pointer; text-decoration-line: underline; text-decoration-style: dashed;">copy-move image forgeries</span> pose a different problem that requires specially designed datasets. Three commonly used datasets are <a href="http://lci.micc.unifi.it/labd/2015/01/copy-move-forgery-detection-and-localization/" data-mce-href="http://lci.micc.unifi.it/labd/2015/01/copy-move-forgery-detection-and-localization/">those produced by MICC</a> (Amerini et al., 2011), the <a href="https://www5.cs.fau.de/research/data/image-manipulation/" data-mce-href="https://www5.cs.fau.de/research/data/image-manipulation/">Image Manipulation Dataset</a> (Christlein et al., 2012), and <a href="http://www.vcl.fer.hr/comofod/" data-mce-href="http://www.vcl.fer.hr/comofod/">CoMoFoD</a> (Tralic et al., 2013). These datasets are still actively used in research.

With respect to video tampering, there has been a relative scarcity of high-quality large-scale datasets, which is understandable given the difficulty of creating convincing forgeries. The recently proposed <a href="https://www.nist.gov/itl/iad/mig/media-forensics-challenge-2018" data-mce-href="https://www.nist.gov/itl/iad/mig/media-forensics-challenge-2018">Multimedia Forensics Challenge</a> datasets (Guan et al., 2019) include some large-scale sets of tampered images and videos for the evaluation of forensics algorithms. Finally, there has recently been increased interest in the automatic detection of forgeries made with the assistance of particular software, specifically face-swapping software. As the quality of produced face-swaps is constantly improving, detecting face-swaps is an important emerging verification task.
The <a href="https://github.com/ondyari/FaceForensics" data-mce-href="https://github.com/ondyari/FaceForensics">FaceForensics++ dataset</a> (Rössler et al., 2019) is a very large-scale dataset containing face-swapped videos (and untampered face videos) produced by a number of different algorithms, aimed at the evaluation of face-swap detection algorithms.

<b>Multimedia retrieval:</b> Several cases of multimedia verification can be considered an instance of a near-duplicate retrieval task, in which the query video (the video to be verified) is run against a database of past cases/videos to check whether it has already appeared before. The most popular publicly available dataset for near-duplicate video retrieval is arguably the <a href="http://vireo.cs.cityu.edu.hk/webvideo/" target="_blank" rel="noopener" data-mce-href="http://vireo.cs.cityu.edu.hk/webvideo/">CC_WEB_VIDEO</a> dataset (Wu et al., 2007). It consists of 12,790 user-generated videos collected from popular video sharing websites (YouTube, Google Video, and Yahoo! Video). It is organized in 24 query sets; for each set, the most popular video was selected to serve as the query, and the rest of the videos were manually annotated with respect to whether they are duplicates of the query. Another relevant dataset is <a href="http://www.yugangjiang.info/research/VCDB/" target="_blank" rel="noopener" data-mce-href="http://www.yugangjiang.info/research/VCDB/">VCDB</a> (Jiang et al., 2014), which was compiled and annotated as a benchmark for the partial video copy detection problem and is composed of videos from popular video platforms (YouTube and Metacafe). VCDB contains two subsets of videos: a) the <i>core,</i> which consists of 28 discrete sets of videos with a total of 528 videos and over 9,000 pairs of manually annotated partial copies, and b) the <i>distractors,</i> which consists of 100,000 videos whose purpose is to make the video copy detection problem more challenging.

<b>Multimedia post classification: </b>A benchmark task under the name "Verifying Multimedia Use" (Boididou et al., 2015; Boididou et al., 2016) was organized in the context of MediaEval 2015 and 2016. The task made available a <a href="https://github.com/MKLab-ITI/image-verification-corpus" data-mce-href="https://github.com/MKLab-ITI/image-verification-corpus">dataset</a> of 15,629 tweets containing images and videos, each of which made a false or factual claim with respect to the shared image/video. The released tweets were posted in the context of breaking news events (e.g. Hurricane Sandy, the Boston Marathon bombings) or hoaxes.

Video Verification Datasets

The Fake Video Corpus (FVC)

The Fake Video Corpus (Papadopoulou et al., 2019) is a collection of 380 user-generated videos and 5,193 near-duplicate versions of them, all collected from three online video platforms: YouTube, Facebook, and Twitter. The videos are annotated either as "verified" ("real") or as "debunked" ("fake") depending on whether the information they convey is accurate or misleading. Verified videos are typically user-generated takes on newsworthy events, while debunked videos include various types of misinformation, including staged content posing as UGC, real content taken out of context, or modified/tampered content (see <a href="#figure1" data-mce-href="#figure1">Figure 1</a> for examples). The near-duplicates of each video are arranged in temporally ordered "cascades", and each near-duplicate video is annotated with respect to its relation to the first video of the cascade (e.g. whether it is reinforcing or debunking the original claim).
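To make the cascade structure concrete, the following is a minimal sketch of how such an entry could be modeled in Python. The class and field names are illustrative assumptions on our part, not the dataset's actual schema or distribution format.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative model of an FVC-style entry; names are hypothetical.
@dataclass
class CascadeVideo:
    video_id: str
    platform: str   # "youtube", "facebook", or "twitter"
    relation: str   # stance toward the seed video, e.g. "reinforcing" or "debunking"

@dataclass
class VideoCascade:
    seed_id: str                 # first (earliest) video of the cascade
    label: str                   # "verified" or "debunked"
    near_duplicates: List[CascadeVideo] = field(default_factory=list)

cascade = VideoCascade(seed_id="abc123", label="debunked")
cascade.near_duplicates.append(
    CascadeVideo(video_id="def456", platform="twitter", relation="reinforcing"))
```

Modeling the cascade as a seed plus a temporally ordered list of annotated near-duplicates mirrors how the dataset arranges and labels its videos.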
The FVC is the first, to our knowledge, large-scale dataset of debunked and verified user-generated videos (UGVs). The dataset contains different kinds of metadata for its videos, including channel (user) information, video information, and community reactions (number of likes, shares and comments) at the time of their inclusion.<a href="#real2" data-mce-href="#real2"><img class="alignnone size-full wp-image-7958" src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-2-sm.jpg" alt="" height="150" data-mce-src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-2-sm.jpg" /></a> <a href="#real3" data-mce-href="#real3"><img class="alignnone size-full wp-image-7958" src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-3-sm.jpg" alt="" height="150" data-mce-src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-3-sm.jpg" /></a> <a href="#real4" data-mce-href="#real4"><img class="alignnone size-full wp-image-7958" src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-4-sm.jpg" alt="" height="150" data-mce-src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-4-sm.jpg" /></a><br /> <a href="#fake2" data-mce-href="#fake2"><img class="alignnone size-full wp-image-7958" src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-6-sm.jpg" alt="" height="150" data-mce-src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-6-sm.jpg" /></a> <a href="#fake3" data-mce-href="#fake3"><img class="alignnone size-full wp-image-7958" src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-7-sm.jpg" alt="" height="150" data-mce-src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-7-sm.jpg" /></a> <a href="#fake4" 
data-mce-href="#fake4"><img class="alignnone size-full wp-image-7958" src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-8-sm.jpg" alt="" height="150" data-mce-src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-8-sm.jpg" /></a><br /> <a name="figure1"></a><strong>Figure 1. A selection of real (top row) and fake (bottom row) videos from the Fake Video Corpus. Click image to jump to larger version, description, and link to YouTube video.</strong>

The initial set of 380 videos was collected and annotated using various sources, including the <a href="https://caa.iti.gr/" data-mce-href="https://caa.iti.gr/">Context Aggregation and Analysis (CAA) service</a> developed within the InVID project and fact-checking sites such as <a href="https://www.snopes.com/" data-mce-href="https://www.snopes.com/">Snopes</a>. To build the dataset, all videos submitted to the CAA service between November 2017 and January 2018 were collected in an initial pool of approximately 1,600 videos, which were then manually inspected and filtered. The remaining videos were annotated as "verified" or "debunked" using established third-party sources (news articles or blog posts), leading to a final pool of 180 verified and 200 debunked unique videos. Then, keyword-based search was run on the three platforms, and near-duplicate video detection was used to identify duplicates of the videos within the returned results. More specifically, for each of the 380 videos, its title was reformulated in a more general form and translated into four major languages: Russian, Arabic, French, and German. The original title, the general form and the translations were submitted as queries to YouTube, Facebook, and Twitter.
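This query-expansion step fans a single video out into several platform searches. A minimal sketch of the fan-out, assuming the generalized title and the translations are already available (the paper does not publish its implementation, and the helper name here is hypothetical):

```python
# Sketch of the FVC query fan-out: one video title becomes several
# platform search queries. Producing the generalized form and the
# translations (manually or via a translation service) is out of scope.
def build_queries(title, generalized, translations):
    """Return a deduplicated, order-preserving list of search queries.

    translations: dict mapping a language code ("ru", "ar", "fr", "de")
    to the translated generalized title.
    """
    queries = [title, generalized] + list(translations.values())
    seen = set()
    # set.add() returns None, so this keeps the first occurrence only.
    return [q for q in queries if not (q in seen or seen.add(q))]

queries = build_queries(
    "Golden eagle snatches kid in Montreal park",   # hypothetical example title
    "eagle snatches kid",
    {"fr": "aigle enlève enfant", "de": "Adler schnappt Kind"},
)
# Each query would then be submitted to the YouTube, Facebook,
# and Twitter search endpoints.
```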
Then, the near-duplicate retrieval algorithm of Kordopatis-Zilos et al. (2017) was applied on the resulting pool, and the results were manually inspected to remove erroneous matches.

The purpose of the dataset is twofold: i) to be used for the analysis of the dissemination patterns of real and fake user-generated videos (by analyzing the traits of the near-duplicate video cascades), and ii) to serve as a benchmark for the evaluation of automated video verification methods. The relatively large size of the dataset is important for both of these tasks. With respect to the study of dissemination patterns, the dataset provides the opportunity to study the dissemination of the same or similar content by analyzing associations between videos that are not provided by the original platform APIs, combined with the wealth of associated metadata. In parallel, a collection of 5,573 videos annotated as "verified" or "debunked", even if many are near-duplicate versions of the 380 cases, can be used for the evaluation (or even training) of verification systems, based either on the visual content or on the associated video metadata.

The Fine-grained Incident Video Retrieval Dataset (FIVR-200K)

The FIVR-200K dataset (Kordopatis-Zilos et al., 2019) consists of 225,960 videos associated with 4,687 Wikipedia events and 100 selected video queries (see <a href="#figure2" data-mce-href="#figure2">Figure 2</a> for examples). It has been designed to simulate the problem of Fine-grained Incident Video Retrieval (FIVR): given a query video, retrieve all associated videos considering several types of association with respect to an incident of interest. FIVR contains several retrieval tasks as special cases under a single framework.
In particular, we consider three types of association between videos: a) <b>Duplicate Scene Videos (DSV)</b>, which share at least one scene (originating from the same camera) regardless of any applied transformation, b) <b>Complementary Scene Videos (CSV)</b>, which contain parts of the same spatiotemporal segment but captured from different viewpoints, and c) <b>Incident Scene Videos (ISV)</b>, which capture the same incident, i.e. they are spatially and temporally close, but have no overlap.

For the collection of the dataset, we first crawled <a href="https://en.wikipedia.org/wiki/Portal:Current_events" data-mce-href="https://en.wikipedia.org/wiki/Portal:Current_events">Wikipedia's Current Events page</a> to collect a large number of major news events that occurred between 2013 and 2017 (five years). Each news event is accompanied by a topic, headline, text, date, and hyperlinks. To collect videos of the same category, we retained only news events with topic "Armed conflicts and attacks" or "Disasters and accidents", which ultimately led to a total of 4,687 events after filtering. To gather videos around these events and build a large collection with numerous video pairs that are associated through the relations of interest (DSV, CSV and ISV), we queried the public <a href="https://developers.google.com/youtube/" data-mce-href="https://developers.google.com/youtube/">YouTube API</a> with the event headlines. To ensure that the collected videos capture the corresponding event, we retained only the videos published within a timespan of one week from the event date.
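The temporal filter is simple to express; a minimal sketch using Python's standard library, where the record fields are illustrative and "within a timespan of one week" is read as our assumption of at most seven days after the event:

```python
from datetime import date, timedelta

def within_event_window(event_date, published, window_days=7):
    """Retain a video only if published within `window_days` after the event.

    The one-sided window (event date up to seven days later) is an
    interpretation of the column's description, not a quoted rule.
    """
    return timedelta(0) <= (published - event_date) <= timedelta(days=window_days)

event = date(2017, 5, 22)                          # hypothetical event date
videos = [
    {"id": "a", "published": date(2017, 5, 23)},   # one day later: retained
    {"id": "b", "published": date(2017, 6, 15)},   # weeks later: discarded
]
kept = [v for v in videos if within_event_window(event, v["published"])]
```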
This process resulted in the collection of 225,960 videos.

<a href="#example1" data-mce-href="#example1"><img class="alignnone size-full wp-image-7956" src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-9-sm.jpg" alt="" width="281" height="150" data-mce-src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-9-sm.jpg" /></a> <a href="#example2" data-mce-href="#example2"><img class="alignnone size-full wp-image-7958" src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-13-sm.jpg" alt="" width="281" height="150" data-mce-src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-13-sm.jpg" /></a> <a href="#example3" data-mce-href="#example3"><img class="alignnone size-full wp-image-7958" src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-17-sm.jpg" alt="" width="281" height="150" data-mce-src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-17-sm.jpg" /></a><br /> <a name="figure2"></a><strong>Figure 2. A selection of query videos from the Fine-grained Incident Video Retrieval dataset. Click image to jump to larger version, link to YouTube video, and several associated videos.</strong>

Next, we proceeded with the selection of query videos. We set up an automated filtering and ranking process that implemented the following criteria: a) query videos should be relatively short and ideally focus on a single scene, b) queries should have many near-duplicate or same-incident videos within the dataset, published by many different uploaders, and c) among a set of near-duplicate/same-incident videos, the one that was uploaded first should be selected as the query.
This selection process was implemented with a graph-based clustering approach and resulted in 635 query videos, of which we used the top 100 (ranked by corresponding cluster size) as the final query set.

For the annotation of similarity relations among videos, we followed a multi-step process in which we presented annotators with the results of a similarity-based video retrieval system and asked them to indicate the type of relation through a drop-down list of the following labels: a) <b>Near-Duplicate (ND)</b>, a special case where the whole video is a near-duplicate of the query video, b) <b>Duplicate Scene (DS)</b>, where only some scenes in the candidate video are near-duplicates of scenes in the query video, c) <b>Complementary Scenes (CS)</b>, d) <b>Incident Scene (IS)</b>, and e) <b>Distractors (DI)</b>, i.e. irrelevant videos. To make sure that annotators were presented with as many potentially relevant videos as possible, we used visual-only, text-only and hybrid similarity in turn. As a result, each annotator reviewed video candidates that had very high similarity to the query video in terms of their visual content, their text metadata (title and description), or a combination of the two. Once an initial set of annotations was produced by two independent annotators, the annotators went through the annotations twice more to ensure consistency and accuracy.

FIVR-200K was designed to serve as a benchmark that poses real-world challenges for the problem of reverse video search. Given a query video to be verified, the analyst would want to know whether the same or a very similar version of it has already been published. In that way, the user would be able to easily debunk cases of out-of-context video use (i.e.
misappropriation); on the other hand, if several videos are found that depict the same scene from different viewpoints at approximately the same time, they could be considered to corroborate the video of interest.

Discussion: Limitations and Caveats

We are confident that the two video verification datasets presented in this column can be valuable resources for researchers interested in the problem of media-based disinformation, and that they can serve both as training sets and as benchmarks for automated video verification methods. Yet, both suffer from certain limitations, and care should be taken when using them to draw conclusions.

A first potential issue has to do with the video selection bias arising from the particular way each of the two datasets was created. The videos of the Fake Video Corpus were selected in a mixed manner: the collection includes a number of cases that were known to the dataset creators and their collaborators, and was enriched by a pool of test videos that were submitted for analysis to a publicly available video verification service. As a result, it is likely to be biased towards viral and popular videos. Also, only videos for which debunking or corroborating information was found online were included, which introduces yet another source of bias, potentially towards cases that were more newsworthy or clear cut. In the case of the FIVR-200K dataset, videos were intentionally collected from two categories of newsworthy events with the goal of ending up with a relatively homogeneous collection, which would be challenging in terms of content-based retrieval. This means that certain types of content, such as political events, sports and entertainment, are very limited or not present at all in the dataset.

A question related to the selection bias of the above datasets pertains to their relevance for multimedia verification and for real-world applications.
In particular, it is not clear whether the video cases offered by the Fake Video Corpus are representative of the actual verification tasks that journalists and news editors face in their daily work. Another important question is whether these datasets offer a realistic challenge to automatic multimedia analysis approaches. In the case of FIVR-200K, it was clearly demonstrated (Kordopatis-Zilos et al., 2019) that the dataset is a much harder benchmark for near-duplicate detection methods compared to previous datasets such as CC_WEB_VIDEO and VCDB. Even so, we cannot safely conclude that a method which performs very well on FIVR-200K would perform equally well on a dataset of much larger scale (e.g. millions or even billions of videos).

Another issue, which affects access to these datasets and the reproducibility of experimental results, relates to the ephemeral nature of online video content. A considerable (and increasing) part of these video collections is taken down (either by their creators or by the video platform), which makes it impossible for researchers to gain access to the exact video set that was originally collected. To give a better sense of the problem, 21% of the Fake Video Corpus and 11% of the FIVR-200K videos were no longer available online as of September 2019. This issue, which affects all datasets based on online multimedia content, raises the more general question of whether there are steps that online platforms such as YouTube, Facebook and Twitter could take to facilitate the reproducibility of social media research without violating copyright legislation or the platforms’ terms of service.

The ephemeral nature of online content is not the only factor that renders the value of multimedia datasets sensitive to the passing of time.
Especially in the case of online disinformation, there seems to be an arms race, in which new machine learning methods constantly get better at detecting misleading or tampered content, while at the same time new types of misinformation emerge that are increasingly AI-assisted. This is particularly pronounced in the case of deepfakes, where the main research paradigm is based on the concept of competition between a <i>generator </i>(adversary) and a <i>detector </i>(Goodfellow et al., 2014).

Last but not least, one may always be concerned about the potential ethical issues arising when publicly releasing such datasets. In our case, reasonable concerns about <span style="cursor: pointer; text-decoration-line: underline; text-decoration-style: dashed;" title="Despite the fact that the videos of the dataset are already publicly available, compiling a list of videos under the explicit label 'verification', i.e. associating them with misinformation, might indeed be considered as a potential risk to either individuals depicted in the videos or to the video creators/credits." data-mce-style="cursor: pointer; text-decoration-line: underline; text-decoration-style: dashed;">privacy risks</span>, which are always relevant when dealing with social media content, are addressed by complying with the relevant Terms of Service of the source platforms and by making sure that any annotation (label) assigned to the dataset videos is accurate. Additional ethical issues pertain to the potential "dual use" of the datasets, i.e. their use by adversaries to craft better tools and techniques to make misinformation campaigns more effective.
A recent pertinent case was <a href="https://openai.com/blog/gpt-2-6-month-follow-up/" data-mce-href="https://openai.com/blog/gpt-2-6-month-follow-up/">OpenAI’s delayed release</a> of their very powerful GPT-2 model, which sparked numerous discussions and criticism, and made clear that there is no commonly accepted practice for ensuring the reproducibility of research results (and empowering future research) while at the same time making sure that the risks of misuse are minimized.

Future work

Given the challenges of creating and releasing a large-scale dataset for multimedia verification, the main conclusions from our efforts in this direction so far are the following:

<li>The field of multimedia verification is in constant motion, and therefore the concept of a static dataset may not be sufficient to capture the real-world nuances and latest challenges of the problem. Instead, new benchmarking models (e.g. in the form of open data challenges) and resources (e.g. a constantly updated repository of "fake" multimedia) appear to be more effective for empowering future research in the area.</li>
<li>The role of social media and multimedia sharing platforms (incl. YouTube, Facebook, Twitter, etc.) seems to be crucial in enabling effective collaboration between academia and industry towards addressing the real-world consequences of online misinformation.
While there have been recent developments in this direction, including the announcements by both <a href="https://ai.facebook.com/blog/deepfake-detection-challenge" data-mce-href="https://ai.facebook.com/blog/deepfake-detection-challenge">Facebook</a> and <a href="https://twitter.com/GoogleAI/status/1169042538621100032" data-mce-href="https://twitter.com/GoogleAI/status/1169042538621100032">Alphabet’s Jigsaw</a> of new deepfake datasets, there is also doubt and scepticism about the degree of openness and transparency that such platforms are ready to offer, given the conflicts of interest inherent in the underlying business model.</li>
<li>Building a dataset that is fit for a highly diverse and representative set of verification cases appears to be a task that requires a community effort rather than the effort of a single organisation or group. This would help not only towards distributing the massive dataset creation cost and effort across multiple stakeholders, but also towards ensuring less selection bias, richer and more accurate annotation, and more solid governance.</li>

References

Allcott, H., Gentzkow, M., "Social media and fake news in the 2016 election", <em>Journal of Economic Perspectives</em>, 31(2), pp. 211–236, 2017.<br /> Amerini, I., Ballan, L., Caldelli, R., Del Bimbo, A., Serra, G., "A SIFT-based forensic method for copy-move attack detection and transformation recovery", <em>IEEE Transactions on Information Forensics and Security,</em> 6(3), pp. 1099–1110, 2011.<br /> Boididou, C., Papadopoulos, S., Kompatsiaris, Y., Schifferes, S., Newman, N., "Challenges of computational verification in social multimedia", In <em>Proceedings of the 23rd ACM International Conference on World Wide Web,</em> pp. 743–748, 2014.<br /> Boididou, C., Andreadou, K., Papadopoulos, S., Dang-Nguyen, D.T., Boato, G., Riegler, M., Kompatsiaris, Y., "Verifying multimedia use at MediaEval 2015".
In <em>Proceedings of MediaEval 2015</em>, 2015.<br /> Boididou, C., Papadopoulos, S., Dang-Nguyen, D., Boato, G., Riegler, M., Middleton, S.E., Petlund, A., Kompatsiaris, Y., "Verifying multimedia use at MediaEval 2016". In <em>Proceedings of MediaEval 2016</em>, 2016.<br /> Brandtzaeg, P.B., Lüders, M., Spangenberg, J., Rath-Wiggins, L., Følstad, A., "Emerging journalistic verification practices concerning social media". <i>Journalism Practice</i>, 10(3), pp. 323–342, 2016.<br /> Christlein, V., Riess, C., Jordan, J., Riess, C., Angelopoulou, E., "An evaluation of popular copy-move forgery detection approaches". <em>IEEE Transactions on Information Forensics and Security</em>, 7(6), pp. 1841–1854, 2012.<br /> Derczynski, L., Bontcheva, K., Liakata, M., Procter, R., Hoi, G.W.S., Zubiaga, A., "SemEval-2017 Task 8: RumourEval: Determining rumour veracity and support for rumours", In <i>Proceedings of the 11th International Workshop on Semantic Evaluation</i>, pp. 69–76, 2017.<br /> Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Bengio, Y., "Generative adversarial nets". In <i>Advances in Neural Information Processing Systems</i>, pp. 2672–2680, 2014.<br /> Guan, H., Kozak, M., Robertson, E., Lee, Y., Yates, A.N., Delgado, A., Zhou, D., Kheyrkhah, T., Smith, J., Fiscus, J., "MFC datasets: Large-scale benchmark datasets for media forensic challenge evaluation", In <em>Proceedings of the 2019 IEEE Winter Applications of Computer Vision Workshops</em>, pp. 63–72, 2019.<br /> Güera, D., Delp, E.J., "Deepfake video detection using recurrent neural networks", In <em>Proceedings of the 15th IEEE International Conference on Advanced Video and Signal Based Surveillance</em>, pp. 1–6, 2018.<br /> Jiang, Y.G., Jiang, Y., Wang, J., "VCDB: A large-scale database for partial copy detection in videos". In <i>Proceedings of the European Conference on Computer Vision</i>, pp. 
357–371, 2014.<br /> Kiesel, J., Mestre, M., Shukla, R., Vincent, E., Adineh, P., Corney, D., Stein, B., Potthast, M., "SemEval-2019 Task 4: Hyperpartisan news detection". In <i>Proceedings of the 13th International Workshop on Semantic Evaluation</i>, pp. 829–839, 2019.<br /> Kordopatis-Zilos, G., Papadopoulos, S., Patras, I., Kompatsiaris, I., "FIVR: Fine-grained incident video retrieval". <i>IEEE Transactions on Multimedia</i>, 21(10), pp. 2638–2652, 2019.<br /> Korus, P., Huang, J., "Multi-scale analysis strategies in PRNU-based tampering localization", <em>IEEE Transactions on Information Forensics and Security</em>, 12(4), pp. 809–824, 2017.<br /> Lazer, D.M., Baum, M.A., Benkler, Y., Berinsky, A.J., Greenhill, K.M., Menczer, F., Schudson, M., "The science of fake news", <em>Science</em>, 359(6380), pp. 1094–1096, 2018.<br /> Papadopoulou, O., Zampoglou, M., Papadopoulos, S., Kompatsiaris, I., "A corpus of debunked and verified user-generated videos". <i>Online Information Review</i>, 43(1), pp. 72–88, 2019.<br /> Rocha, A., Scheirer, W., Boult, T., Goldenstein, S., "Vision of the unseen: Current trends and challenges in digital image and video forensics", <i>ACM Computing Surveys</i>, 43(4), art. 26, 2011.<br /> Rössler, A., Cozzolino, D., Verdoliva, L., Riess, C., Thies, J., Nießner, M., "FaceForensics++: Learning to detect manipulated facial images", In <em>Proceedings of the IEEE International Conference on Computer Vision</em>, 2019.<br /> Rubin, V.L., Chen, Y., Conroy, N.J., "Deception detection for news: Three types of fakes", In <em>Proceedings of the 78th ASIS&T Annual Meeting: Information Science with Impact: Research in and for the Community</em>, art. 83, 2015.<br /> Tandoc Jr., E.C., Lim, Z.W., Ling, R., "Defining “fake news”: A typology of scholarly definitions", <em>Digital Journalism</em>, 6(2), pp. 137–153, 2018.<br /> Tralic, D., Zupancic, I., Grgic, S., Grgic, M., "CoMoFoD - New database for copy-move forgery detection". 
In <em>Proceedings of the 55th International Symposium on Electronics in Marine</em>, pp. 49–54, 2013.<br /> Wardle, C., Derakhshan, H., "Information disorder: Toward an interdisciplinary framework for research and policy making", <em>Council of Europe Report</em>, 27, 2017.<br /> Wu, X., Hauptmann, A.G., Ngo, C.-W., "Practical elimination of near-duplicates from web video search", In <i>Proceedings of the 15th ACM International Conference on Multimedia</i>, pp. 218–227, 2007.<br /> Zampoglou, M., Papadopoulos, S., Kompatsiaris, Y., "Detecting image splicing in the wild (web)", In <i>Proceedings of the 2015 IEEE International Conference on Multimedia & Expo Workshops</i>, 2015.<br /> Zampoglou, M., Papadopoulos, S., Kompatsiaris, Y., "Large-scale evaluation of splicing localization algorithms for web images", <em>Multimedia Tools and Applications</em>, 76(4), pp. 4801–4834, 2017.<br /> Zhou, X., Zafarani, R., "Fake news: A survey of research, detection methods, and opportunities". <em>arXiv preprint arXiv:1812.00315</em>, 2018.<br /> Zubiaga, A., Aker, A., Bontcheva, K., Liakata, M., Procter, R., "Detection and resolution of rumours in social media: A survey", <i>ACM Computing Surveys</i>, 51(2), art. 
32, 2018.

Appendix A: Examples of videos in the Fake Video Corpus.

Real videos

<a href="https://www.youtube.com/watch?v=QcGMHjb1W5k" target="_blank" name="real2" rel="noopener" data-mce-href="https://www.youtube.com/watch?v=QcGMHjb1W5k"><img class="alignnone wp-image-7954" src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-2-lg.jpg" alt="" width="450" height="240" data-mce-src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-2-lg.jpg" /></a><br /> US Airways Flight 1549 ditched in the Hudson River.<a href="https://www.youtube.com/watch?v=PfqCJJcOX8U" target="_blank" name="real3" rel="noopener" data-mce-href="https://www.youtube.com/watch?v=PfqCJJcOX8U"><img class="alignnone wp-image-7954" src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-3-lg.jpg" alt="" width="450" height="240" data-mce-src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-3-lg.jpg" /></a><br /> A group of musicians playing in an Istanbul park while bombs explode outside the stadium behind them.<a href="https://www.youtube.com/watch?v=RXn1g0xtUMk" target="_blank" name="real4" rel="noopener" data-mce-href="https://www.youtube.com/watch?v=RXn1g0xtUMk"><img class="alignnone wp-image-7954" src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-4-lg.jpg" alt="" width="450" height="240" data-mce-src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-4-lg.jpg" /></a><br /> A giant alligator crossing a Florida golf course.

Fake videos

<a href="https://www.youtube.com/watch?v=JZUmDyeynGY" target="_blank" name="fake2" rel="noopener" data-mce-href="https://www.youtube.com/watch?v=JZUmDyeynGY"><img class="alignnone wp-image-7954" src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-6-lg.jpg" alt="" width="450" height="240" 
data-mce-src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-6-lg.jpg" /></a><br /> “Syrian boy rescuing a girl amid gunfire” - Staged (fabricated content): The video was filmed by the Norwegian director Lars Klevberg in Malta.<a href="https://www.youtube.com/watch?v=CE0Q904gtMI" target="_blank" name="fake3" rel="noopener" data-mce-href="https://www.youtube.com/watch?v=CE0Q904gtMI"><img class="alignnone wp-image-7954" src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-7-lg.jpg" alt="" width="450" height="240" data-mce-src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-7-lg.jpg" /></a><br /> “Golden Eagle Snatches Kid” - Tampered: The video was created by a team of students in Montreal as part of their course on visual effects.<a href="https://www.youtube.com/watch?v=dtAeVl8Erhg" target="_blank" name="fake4" rel="noopener" data-mce-href="https://www.youtube.com/watch?v=dtAeVl8Erhg"><img class="alignnone wp-image-7954" src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-8-lg.jpg" alt="" width="450" height="240" data-mce-src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-8-lg.jpg" /></a><br /> “Pope Francis slaps Donald Trump's hand for touching him” - Satire/parody: The video was digitally manipulated, and was made for the late-night television show Jimmy Kimmel Live.

Appendix B: Examples of videos in the Fine-grained Incident Video Retrieval dataset.

Example 1

<a href="https://www.youtube.com/watch?v=8ja0i5wV-io" target="_blank" name="example1" rel="noopener" data-mce-href="https://www.youtube.com/watch?v=8ja0i5wV-io"><img class="alignnone wp-image-7954" src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-9-lg.jpg" alt="" width="450" height="240" 
data-mce-src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-9-lg.jpg" /></a><br /> Query video from the <a href="https://en.wikipedia.org/wiki/American_Airlines_Flight_383_(2016)" target="_blank" rel="noopener" data-mce-href="https://en.wikipedia.org/wiki/American_Airlines_Flight_383_(2016)">American Airlines Flight 383 fire at Chicago O'Hare International Airport on October 28, 2016.</a><a href="https://www.youtube.com/watch?v=1U7P65aDLFs" target="_blank" rel="noopener" data-mce-href="https://www.youtube.com/watch?v=1U7P65aDLFs"><img class="alignnone wp-image-7954" src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-10-lg.jpg" alt="" width="450" height="240" data-mce-src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-10-lg.jpg" /></a><br /> Duplicate scene video.<a href="https://www.youtube.com/watch?v=RrrbJK_F5OA" target="_blank" rel="noopener" data-mce-href="https://www.youtube.com/watch?v=RrrbJK_F5OA"><img class="alignnone wp-image-7954" src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-11-lg.jpg" alt="" width="450" height="240" data-mce-src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-11-lg.jpg" /></a><br /> Complementary scene video.<a href="https://www.youtube.com/watch?v=FFy-sUSz2zg" target="_blank" rel="noopener" data-mce-href="https://www.youtube.com/watch?v=FFy-sUSz2zg"><img class="alignnone wp-image-7954" src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-12-lg.jpg" alt="" width="450" height="240" data-mce-src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-12-lg.jpg" /></a><br /> Incident scene video.

Example 2

<a href="https://www.youtube.com/watch?v=8nYB60jFq0Y" target="_blank" name="example2" rel="noopener" 
data-mce-href="https://www.youtube.com/watch?v=8nYB60jFq0Y"><img class="alignnone wp-image-7954" src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-13-lg.jpg" alt="" width="450" height="240" data-mce-src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-13-lg.jpg" /></a><br /> Query video from the <a href="https://en.wikipedia.org/wiki/Boston_Marathon_bombing" target="_blank" rel="noopener" data-mce-href="https://en.wikipedia.org/wiki/Boston_Marathon_bombing">Boston Marathon bombing on April 15, 2013.</a><a href="https://www.youtube.com/watch?v=hMDwj23NSk0" target="_blank" rel="noopener" data-mce-href="https://www.youtube.com/watch?v=hMDwj23NSk0"><img class="alignnone wp-image-7954" src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-14-lg.jpg" alt="" width="450" height="240" data-mce-src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-14-lg.jpg" /></a><br /> Duplicate scene video.<a href="https://www.youtube.com/watch?v=rmaGiXZnmSs" target="_blank" rel="noopener" data-mce-href="https://www.youtube.com/watch?v=rmaGiXZnmSs"><img class="alignnone wp-image-7954" src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-15-lg.jpg" alt="" width="450" height="240" data-mce-src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-15-lg.jpg" /></a><br /> Complementary scene video.<a href="https://www.youtube.com/watch?v=Sw3Qv_33hLA" target="_blank" rel="noopener" data-mce-href="https://www.youtube.com/watch?v=Sw3Qv_33hLA"><img class="alignnone wp-image-7954" src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-16-lg.jpg" alt="" width="450" height="240" data-mce-src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-16-lg.jpg" /></a><br /> Incident scene video.

Example 3

<a 
href="https://www.youtube.com/watch?v=K8PckJMMpRU" target="_blank" name="example3" rel="noopener" data-mce-href="https://www.youtube.com/watch?v=K8PckJMMpRU"><img class="alignnone wp-image-7954" src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-17-lg.jpg" alt="" width="450" height="240" data-mce-src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-17-lg.jpg" /></a><br /> Query video from <a href="https://en.wikipedia.org/wiki/2017_Las_Vegas_shooting" target="_blank" rel="noopener" data-mce-href="https://en.wikipedia.org/wiki/2017_Las_Vegas_shooting">the Las Vegas shooting on October 1, 2017.</a><a href="https://www.youtube.com/watch?v=4Az2a1UJY-E" target="_blank" rel="noopener" data-mce-href="https://www.youtube.com/watch?v=4Az2a1UJY-E"><img class="alignnone wp-image-7954" src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-18-lg.jpg" alt="" width="450" height="240" data-mce-src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-18-lg.jpg" /></a><br /> Duplicate scene video.<a href="https://www.youtube.com/watch?v=3uqvHnMokkg" target="_blank" rel="noopener" data-mce-href="https://www.youtube.com/watch?v=3uqvHnMokkg"><img class="alignnone wp-image-7954" src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-19-lg.jpg" alt="" width="450" height="240" data-mce-src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-19-lg.jpg" /></a><br /> Complementary scene video.<a href="https://www.youtube.com/watch?v=3r_Fsc69jfM" target="_blank" rel="noopener" data-mce-href="https://www.youtube.com/watch?v=3r_Fsc69jfM"><img class="alignnone wp-image-7954" src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-20-lg.jpg" alt="" width="450" height="240" 
data-mce-src="http://sigmm.hosting.acm.org/wp-content/uploads/2019/09/online-multimedia-verification-20-lg.jpg" /></a><br /> Incident scene video.

Introduction

Online disinformation is a problem that has been attracting increasing interest from researchers worldwide, as the breadth and magnitude of its impact are progressively manifested and documented in a number of studies (Boididou et al., 2014; Zhou & Zafarani, 2018; Zubiaga et al., 2018). This emerging area of research is inherently multidisciplinary, and there have been numerous treatments of the subject, each with a distinct perspective or theme, ranging from the predominant perspectives of media, journalism and communications (Wardle & Derakhshan, 2017) and political science (Allcott & Gentzkow, 2017) to those of network science (Lazer et al., 2018), natural language processing (Rubin et al., 2015) and signal processing, including media forensics (Zampoglou et al., 2017). Given the multimodal nature of the problem, it is no surprise that the multimedia community has taken a strong interest in the field.

From a multimedia perspective, two research problems have attracted the bulk of researchers’ attention: a) detection of content tampering and content fabrication, and b) detection of content misuse for disinformation. The first was traditionally studied within the field of media forensics (Rocha et al., 2011), but has recently been under the spotlight as a result of the rise of deepfake videos (Güera & Delp, 2018), i.e. videos produced by a special class of generative models that are capable of synthesizing highly convincing media content from scratch or based on some authentic seed content. The second concerns multimedia misuse or misappropriation, i.e. the use of media content out of its original context with the goal of spreading misinformation or false narratives (Tandoc et al., 2018).

Developing automated approaches to detect media-based disinformation relies to a great extent on the availability of relevant datasets, both for training supervised learning models and for evaluating their effectiveness. Yet, developing and releasing such datasets is a challenge in itself for a number of reasons:

  1. Identifying, curating, understanding, and annotating cases of media-based misinformation is a highly effort-intensive task. More often than not, the annotation process requires careful and extensive reading of pertinent news coverage from a variety of sources, similar to the journalistic practice of verification (Brandtzaeg et al., 2016).
  2. Media-based disinformation is largely manifested on social media platforms, and relevant datasets are therefore hard to collect and distribute. This is due to the ephemeral nature of social media content, the numerous technical restrictions and challenges involved in collecting it (mostly limitations or a complete lack of appropriate support by the respective APIs), and the legal and ethical issues in releasing social media-based datasets (i.e. the need to comply with the respective Terms of Service and any applicable data protection laws).

In this column, we present two multimedia datasets that could be of value to researchers who study media-based disinformation and develop automated approaches to tackle the problem. The first, called the Fake Video Corpus (Papadopoulou et al., 2019), is a manually curated collection of 200 debunked and 180 verified videos, along with relevant annotations, accompanied by a set of 5,193 near-duplicate instances of these videos that were posted on popular social media platforms. The second, called FIVR-200K (Kordopatis-Zilos et al., 2019), is an automatically collected dataset of 225,960 videos, a list of 100 video queries, and manually verified annotations regarding the relation (if any) of the dataset videos to each of the queries (i.e. near-duplicate, complementary scene, same incident).

For each of the two datasets, we present the design and creation process, focusing on issues and questions regarding the relevance of the collected content, the technical means of collection, and the process of annotation, which had the dual goal of ensuring high accuracy and keeping the manual annotation cost manageable. Given that each dataset is accompanied by a detailed journal article, in this column we limit our description to high-level information, emphasizing the utility and creation process in each case rather than detailed statistics, which are disclosed in the respective papers.

Following the presentation of the two datasets, we proceed to a critical discussion, highlighting their limitations and some caveats, and delineating future steps towards high-quality dataset creation for the field of multimedia-based misinformation.

Related Datasets

The complexity and challenge of the multimedia verification problem has led to the creation of numerous datasets and benchmarking efforts, each designed for a particular task within this area. We can broadly classify these efforts into three areas: a) multimedia forensics, b) multimedia retrieval, and c) multimedia post classification. Datasets that focus on the text modality, e.g. the Fake News Challenge, the Clickbait Challenge, Hyperpartisan News Detection (Kiesel et al., 2019), and RumourEval (Derczynski et al., 2017), are beyond the scope of this post and are hence not included in this discussion.

Multimedia forensics: Generating high-quality multimedia forensics datasets has always been a challenge, since creating convincing forgeries is normally a manual task requiring a fair amount of skill, and as a result such datasets have generally been few and limited in scale. With respect to image splicing, our own survey (Zampoglou et al., 2017) listed a number of datasets that had been made available by that point, including our own Wild Web tampered image dataset, which consists of real-world forgeries collected from the Web, including multiple near-duplicates, making it a large and particularly challenging collection. Recently, the Realistic Tampering Dataset (Korus & Huang, 2017) was proposed, offering a large number of convincing forgeries for evaluation. On the other hand, copy-move image forgeries pose a different problem that requires specially designed datasets. Three such commonly used datasets are those produced by MICC (Amerini et al., 2011), the Image Manipulation Dataset (Christlein et al., 2012), and CoMoFoD (Tralic et al., 2013). These datasets are still actively used in research.

With respect to video tampering, there has been a relative scarcity of high-quality large-scale datasets, which is understandable given the difficulty of creating convincing forgeries. The recently proposed Multimedia Forensics Challenge datasets (Guan et al., 2019) include some large-scale sets of tampered images and videos for the evaluation of forensics algorithms. Finally, there has recently been increased interest in the automatic detection of forgeries made with the assistance of particular software, specifically face-swapping software. As the quality of produced face-swaps is constantly improving, detecting them is an important emerging verification task. The FaceForensics++ dataset (Rössler et al., 2019) is a very large-scale dataset containing face-swapped videos (and untampered face videos) produced by a number of different algorithms, aimed at the evaluation of face-swap detection methods.

Multimedia retrieval: Several cases of multimedia verification can be considered an instance of a near-duplicate retrieval task, in which the query video (the video to be verified) is run against a database of past cases/videos to check whether it has already appeared before. The most popular publicly available dataset for near-duplicate video retrieval is arguably the CC_WEB_VIDEO dataset (Wu et al., 2007). It consists of 12,790 user-generated videos collected from popular video sharing websites (YouTube, Google Video, and Yahoo! Video). It is organized in 24 query sets; for each set, the most popular video was selected to serve as the query, and the rest of the videos were manually annotated according to whether they are duplicates of the query. Another relevant dataset is VCDB (Jiang et al., 2014), which was compiled and annotated as a benchmark for the partial video copy detection problem and is composed of videos from popular video platforms (YouTube and Metacafe). VCDB contains two subsets: a) the core, which consists of 28 discrete sets of videos with a total of 528 videos and over 9,000 pairs of manually annotated partial copies, and b) the distractors, which consists of 100,000 videos whose purpose is to make the video copy detection problem more challenging.
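To make the retrieval formulation concrete, a minimal near-duplicate lookup can be sketched as a similarity search over precomputed video-level feature vectors. This is only an illustrative sketch: the feature extraction step (e.g. deep visual descriptors) is outside its scope, and all names (`near_duplicates`, the id-to-vector `database` mapping, the similarity threshold) are hypothetical, not part of any of the datasets' tooling.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def near_duplicates(query_vec, database, threshold=0.9):
    """Return (video_id, score) pairs for database videos whose
    similarity to the query exceeds the threshold, most similar first.
    `database` maps video id -> precomputed feature vector."""
    scored = [(vid, cosine(query_vec, vec)) for vid, vec in database.items()]
    hits = [(vid, s) for vid, s in scored if s >= threshold]
    return sorted(hits, key=lambda p: p[1], reverse=True)
```

In practice the threshold is dataset-dependent, and at the scale of collections like FIVR-200K an exhaustive scan would be replaced by an approximate nearest-neighbour index.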

Multimedia post classification: A benchmark task under the name “Verifying Multimedia Use” (Boididou et al., 2015; Boididou et al., 2016) took place in the context of MediaEval 2015 and 2016. The task made available a dataset of 15,629 tweets containing images and videos, each of which made a false or factual claim with respect to the shared image/video. The released tweets were posted in the context of breaking news events (e.g. Hurricane Sandy, the Boston Marathon bombings) or hoaxes. 

Video Verification Datasets

The Fake Video Corpus (FVC)

The Fake Video Corpus (Papadopoulou et al., 2019) is a collection of 380 user-generated videos and 5,193 near-duplicate versions of them, all collected from three online video platforms: YouTube, Facebook, and Twitter. The videos are annotated either as “verified” (“real”) or as “debunked” (“fake”) depending on whether the information they convey is accurate or misleading. Verified videos are typically user-generated takes of newsworthy events, while debunked videos include various types of misinformation, including staged content posing as UGC, real content taken out of context, or modified/tampered content (see Figure 1 for examples). The near-duplicates of each video are arranged in temporally ordered “cascades”, and each near-duplicate video is annotated with respect to its relation to the first video of the cascade (e.g. whether it is reinforcing or debunking the original claim). The FVC is, to our knowledge, the first large-scale dataset of debunked and verified user-generated videos (UGVs). The dataset contains different kinds of metadata for its videos, including channel (user) information, video information, and community reactions (numbers of likes, shares and comments) at the time of their inclusion.
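The cascade structure described above can be illustrated with a short sketch: given a flat list of near-duplicate video records, each carrying the id of its cascade and an upload timestamp, the videos of each cascade are grouped and ordered temporally, with the earliest-posted video as the cascade seed. The field names below are hypothetical and do not reflect the actual FVC schema.

```python
from collections import defaultdict

def build_cascades(records):
    """Group near-duplicate video records into temporally ordered cascades.
    Each record is a dict with (hypothetical) keys:
    'video_id', 'cascade_id', 'upload_time' (any sortable timestamp)."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec["cascade_id"]].append(rec)
    cascades = {}
    for cid, recs in groups.items():
        ordered = sorted(recs, key=lambda r: r["upload_time"])
        cascades[cid] = {
            "seed": ordered[0]["video_id"],       # first-posted version
            "videos": [r["video_id"] for r in ordered],
        }
    return cascades
```

Per-video annotations (e.g. reinforcing vs. debunking the original claim) would then attach naturally to each element of the ordered list.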

  
  
Figure 1. A selection of real (top row) and fake (bottom row) videos from the Fake Video Corpus. Click image to jump to larger version, description, and link to YouTube video.

The initial set of 380 videos was collected and annotated using various sources, including the Context Aggregation and Analysis (CAA) service developed within the InVID project and fact-checking sites such as Snopes. To build the dataset, all videos submitted to the CAA service between November 2017 and January 2018 were collected in an initial pool of approximately 1,600 videos, which were then manually inspected and filtered. The remaining videos were annotated as “verified” or “debunked” using established third-party sources (news articles or blog posts), leading to the final pool of 180 verified and 200 debunked unique videos. Then, keyword-based search was run on the three platforms, and near-duplicate video detection was used to identify the video duplicates within the returned results. More specifically, for each of the 380 videos, its title was reformulated in a more general form and translated into four major languages: Russian, Arabic, French, and German. The original title, the general form, and the translations were submitted as queries to YouTube, Facebook, and Twitter. Then, the near-duplicate retrieval algorithm of Kordopatis-Zilos et al. (2017) was applied on the resulting pool, and the results were manually inspected to remove erroneous matches.
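The query fan-out of this collection step can be sketched as follows. The translation step is represented here by a precomputed dictionary and the per-platform search itself is omitted, since the actual collection relied on the platforms' search interfaces; the function name and its inputs are hypothetical.

```python
def build_query_plan(title, generalized_title, translations, platforms):
    """Return the list of (platform, query) searches to run for one video.
    `translations` maps a language code to the translated generalized
    title (in the FVC case: Russian, Arabic, French, and German)."""
    queries = [title, generalized_title] + list(translations.values())
    return [(p, q) for p in platforms for q in queries]
```

For one video with four translations and three platforms, this yields 3 × 6 = 18 searches, whose pooled results would then be passed to near-duplicate retrieval and manual inspection.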

The purpose of the dataset is twofold: i) to be used for the analysis of the dissemination patterns of real and fake user-generated videos (by analyzing the traits of the near-duplicate video cascades), and ii) to serve as a benchmark for the evaluation of automated video verification methods. The relatively large size of the dataset is important for both of these tasks. With respect to the study of dissemination patterns, the dataset provides the opportunity to study the dissemination of the same or similar content by analyzing associations between videos not provided by the original platform APIs, combined with the wealth of associated metadata. In parallel, a collection of 5,573 videos annotated as “verified” or “debunked” (even if many are near-duplicate versions of the 380 cases) can be used for the evaluation (or even training) of verification systems, based either on the visual content or on the associated video metadata.

The Fine-grained Incident Video Retrieval Dataset (FIVR-200K)

The FIVR-200K dataset (Kordopatis-Zilos et al., 2019) consists of 225,960 videos associated with 4,687 Wikipedia events and 100 selected video queries (see Figure 2 for examples). It has been designed to simulate the problem of Fine-grained Incident Video Retrieval (FIVR), whose objective is: given a query video, retrieve all associated videos, considering several types of association with respect to an incident of interest. FIVR subsumes several retrieval tasks as special cases under a single framework. In particular, we consider three types of association between videos: a) Duplicate Scene Videos (DSV), which share at least one scene (originating from the same camera) regardless of any applied transformation, b) Complementary Scene Videos (CSV), which contain part of the same spatiotemporal segment but captured from different viewpoints, and c) Incident Scene Videos (ISV), which capture the same incident, i.e. they are spatially and temporally close, but have no overlap.

For the collection of the dataset, we first crawled Wikipedia’s Current Events page to collect a large number of major news events that occurred between 2013 and 2017 (five years). Each news event is accompanied by a topic, headline, text, date, and hyperlinks. To collect videos of similar nature, we retained only news events with the topic “Armed conflicts and attacks” or “Disasters and accidents”, which ultimately led to a total of 4,687 events after filtering. To gather videos around these events and build a large collection with numerous video pairs that are associated through the relations of interest (DSV, CSV and ISV), we queried the public YouTube API with the event headlines. To ensure that the collected videos capture the corresponding event, we retained only the videos published within a timespan of one week from the event date. This process resulted in the collection of 225,960 videos.
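The one-week publication filter can be sketched directly with the standard library. A minimal sketch, assuming hypothetical record fields and a symmetric ±7-day window (the actual criterion may count only days after the event):

```python
from datetime import date

def within_event_window(event_date, video_date, days=7):
    """Keep only videos published within `days` of the event date."""
    return abs((video_date - event_date).days) <= days

def filter_videos(event_date, videos, days=7):
    """`videos` is a list of (video_id, publication_date) pairs;
    returns the ids of videos inside the event's time window."""
    return [vid for vid, d in videos if within_event_window(event_date, d, days)]
```

Applied per event, such a filter discards videos that merely match the headline keywords but were published long before or after the incident.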

  
Figure 2. A selection of query videos from the Fine-grained Incident Video Retrieval dataset. Click image to jump to larger version, link to YouTube video, and several associated videos.

Next, we proceeded with the selection of query videos. We set up an automated filtering and ranking process that implemented the following criteria: a) query videos should be relatively short and ideally focus on a single scene, b) queries should have many near-duplicate or same-incident videos within the dataset, published by many different uploaders, and c) among a set of near-duplicate/same-incident videos, the one that was uploaded first should be selected as the query. This selection process was implemented based on a graph-based clustering approach and resulted in 635 candidate query videos, of which we used the top 100 (ranked by the corresponding cluster size) as the final query set.
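The final selection step can be sketched as: rank the clusters of associated videos by size and, within each, pick the earliest-uploaded video as the query. The clustering itself (graph-based in the actual pipeline) is assumed as input here, and the function and data shapes are hypothetical.

```python
def select_queries(clusters, top_n=100):
    """`clusters` is a list of clusters; each cluster is a list of
    (video_id, upload_time) pairs of associated videos.
    Returns up to `top_n` query ids, one per cluster, taken from
    the largest clusters down, choosing the earliest upload."""
    ranked = sorted(clusters, key=len, reverse=True)[:top_n]
    return [min(c, key=lambda p: p[1])[0] for c in ranked]
```

Ranking by cluster size favours queries with many associated videos in the dataset, while taking the earliest upload approximates the original posting of the incident footage.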

For the annotation of similarity relations among videos, we followed a multi-step process in which we presented annotators with the results of a similarity-based video retrieval system and asked them to indicate the type of relation by selecting from a drop-down list with the following labels: a) Near-Duplicate (ND), a special case where the whole video is a near-duplicate of the query video, b) Duplicate Scene (DS), where only some scenes in the candidate video are near-duplicates of scenes in the query video, c) Complementary Scenes (CS), d) Incident Scene (IS), and e) Distractors (DI), i.e. irrelevant videos.
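These annotation labels map naturally onto the retrieval settings implied by the three association types. A minimal sketch; the exact relevant-label set per task is an assumption here (each broader task subsuming the stricter ones, with distractors never relevant), so consult the FIVR paper for the authoritative definitions:

```python
from enum import Enum

class Label(Enum):
    ND = "near-duplicate"
    DS = "duplicate scene"
    CS = "complementary scenes"
    IS = "incident scene"
    DI = "distractor"

# Assumed mapping from retrieval task to the labels counted as relevant.
TASK_RELEVANT = {
    "DSVR": {Label.ND, Label.DS},
    "CSVR": {Label.ND, Label.DS, Label.CS},
    "ISVR": {Label.ND, Label.DS, Label.CS, Label.IS},
}

def is_relevant(label, task):
    """True if a video with this annotation counts as a correct
    retrieval result for the given task."""
    return label in TASK_RELEVANT[task]
```

Encoding the mapping this way makes evaluation scripts trivial to parameterize by task.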

To make sure that annotators were presented with as many potentially relevant videos as possible, we used visual-only, text-only, and hybrid similarity in turn. As a result, each annotator reviewed candidate videos that had very high similarity to the query video in terms of their visual content, their text metadata (title and description), or a combination of the two. Once an initial set of annotations was produced by two independent annotators, the annotators went through the annotations twice more to ensure consistency and accuracy.

FIVR-200K was designed to serve as a benchmark that poses real-world challenges for the problem of reverse video search. Given a query video to be verified, an analyst would want to know whether the same or a very similar version of it has already been published. In that way, the user could easily debunk cases of out-of-context video use (i.e. misappropriation); conversely, if several videos are found that depict the same scene from different viewpoints at approximately the same time, they could be considered to corroborate the video of interest.

Discussion: Limitations and Caveats

We are confident that the two video verification datasets presented in this column can be valuable resources for researchers interested in the problem of media-based disinformation and could serve both as training sets and as benchmarks for automated video verification methods. Yet, both of them suffer from certain limitations and care should be taken when using them to draw conclusions. 

A first potential issue has to do with the video selection bias arising from the particular way each of the two datasets was created. The videos of the Fake Video Corpus were selected in a mixed manner, trying to include a number of cases that were known to the dataset creators and their collaborators; the set was also enriched with a pool of test videos that had been submitted for analysis to a publicly available video verification service. As a result, it is likely to be biased towards viral and popular videos. Moreover, only videos for which debunking or corroborating information was found online were included, which introduces yet another source of bias, potentially towards cases that were more newsworthy or clear cut. In the case of the FIVR-200K dataset, videos were intentionally collected from specific categories of newsworthy events with the goal of ending up with a relatively homogeneous collection that would be challenging in terms of content-based retrieval. This means that certain types of content, such as political events, sports and entertainment, are very limited or not present at all in the dataset.

A question that is related to the selection bias of the above datasets pertains to their relevance for multimedia verification and for real-world applications. In particular, it is not clear whether the video cases offered by the Fake Video Corpus are representative of actual verification tasks that journalists and news editors face in their daily work. Another important question is whether these datasets offer a realistic challenge to automatic multimedia analysis approaches. In the case of FIVR-200K, it was clearly demonstrated (Kordopatis-Zilos et al., 2019) that the dataset is a much harder benchmark for near-duplicate detection methods compared to previous datasets such as CC_WEB_VIDEO and VCDB. Even so, we cannot safely conclude that a method, which performs very well in FIVR-200K, would perform equally well in a dataset of much larger scale (e.g. millions or even billions of videos).

Another issue, which affects both access to these datasets and the reproducibility of experimental results, relates to the ephemeral nature of online video content. A considerable (and increasing) part of these video collections is taken down, either by the creators themselves or by the video platform, which makes it impossible for researchers to gain access to the exact video set that was originally collected. To give a better sense of the problem, 21% of the Fake Video Corpus videos and 11% of the FIVR-200K videos were no longer available online as of September 2019. This issue, which affects all datasets based on online multimedia content, raises the more general question of whether there are steps that online platforms such as YouTube, Facebook and Twitter could take to facilitate the reproducibility of social media research without violating copyright legislation or the platforms’ terms of service.

The ephemeral nature of online content is not the only factor that renders the value of multimedia datasets sensitive to the passing of time. Especially in the case of online disinformation, there seems to be an arms race, in which machine learning methods constantly get better at detecting misleading or tampered content, while at the same time new types of misinformation emerge that are increasingly AI-assisted. This is particularly evident in the case of deepfakes, where the main research paradigm is based on the concept of competition between a generator (adversary) and a detector (Goodfellow et al., 2014).

Last but not least, one may always be concerned about the potential ethical issues arising when publicly releasing such datasets. In our case, reasonable concerns about privacy risks, which are always relevant when dealing with social media content, are addressed by complying with the Terms of Service of the source platforms and by making sure that any annotation (label) assigned to the dataset videos is accurate. Additional ethical issues pertain to the potential “dual use” of such datasets, i.e. their use by adversaries to craft better tools and techniques that make misinformation campaigns more effective. A recent pertinent case was OpenAI’s delayed release of its very powerful GPT-2 model, which sparked numerous discussions and criticism and made clear that there is no commonly accepted practice for ensuring the reproducibility of research results (and empowering future research) while at the same time making sure that risks of misuse are mitigated.

Future work

Given the challenges of creating and releasing a large-scale dataset for multimedia verification, the main conclusions from our efforts towards this direction so far are the following:

  • The field of multimedia verification is in constant motion, and therefore the concept of a static dataset may not be sufficient to capture the real-world nuances and latest challenges of the problem. Instead, new benchmarking models (e.g. in the form of open data challenges) and resources (e.g. a constantly updated repository of “fake” multimedia) appear to be more effective for empowering future research in the area.
  • The role of social media and multimedia sharing platforms (YouTube, Facebook, Twitter, etc.) seems to be crucial in enabling effective collaboration between academia and industry towards addressing the real-world consequences of online misinformation. While there have been recent developments in this direction, including the announcements by both Facebook and Alphabet’s Jigsaw of new deepfake datasets, there is also doubt and scepticism about the degree of openness and transparency that such platforms are ready to offer, given the conflicts of interest inherent in their underlying business model. 
  • Building a dataset that is fit for a highly diverse and representative set of verification cases appears to be a task that would require a community effort instead of effort from a single organisation or group. This would not only help towards distributing the massive dataset creation cost and effort to multiple stakeholders, but also towards ensuring less selection bias, richer and more accurate annotation and more solid governance.

References

Allcott, H., Gentzkow, M., “Social media and fake news in the 2016 election”, Journal of economic perspectives, 31(2), pp. 211–36, 2017.
Amerini, I., Ballan, L., Caldelli, R., Del Bimbo, A., Serra, G., “A SIFT-based forensic method for copy-move attack detection and transformation recovery”, IEEE Transactions on Information Forensics and Security, 6(3), pp. 1099–1110, 2011.
Boididou, C., Papadopoulos, S., Kompatsiaris, Y., Schifferes, S., Newman, N., “Challenges of computational verification in social multimedia”, In Proceedings of the 23rd ACM International Conference on World Wide Web, pp. 743–748, 2014.
Boididou, C., Andreadou, K., Papadopoulos, S., Dang-Nguyen, D.T., Boato, G., Riegler, M., Kompatsiaris, Y., “Verifying multimedia use at MediaEval 2015”. In Proceedings of MediaEval 2015, 2015.
Boididou C., Papadopoulos S., Dang-Nguyen D., Boato G., Riegler M., Middleton S.E., Petlund A., Kompatsiaris Y., “Verifying multimedia use at MediaEval 2016”. In Proceedings of MediaEval 2016, 2016.
Brandtzaeg, P.B., Lüders, M., Spangenberg, J., Rath-Wiggins, L., Følstad, A., “Emerging journalistic verification practices concerning social media”. Journalism Practice, 10(3), pp. 323–342, 2016.
Christlein V., Riess C., Jordan J., Riess C., Angelopoulou, E., “An evaluation of popular copy-move forgery detection approaches”. IEEE Transactions on Information Forensics & Security, 7(6), pp. 1841–1854, 2012.
Derczynski, L., Bontcheva, K., Liakata, M., Procter, R., Hoi, G.W.S., Zubiaga, A., “SemEval-2017 Task 8: RumourEval: determining rumour veracity and support for rumours”, In Proceedings of the 11th International Workshop on Semantic Evaluation, pp. 69–76, 2017.
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Bengio, Y., “Generative adversarial nets”. In Advances in Neural Information Processing Systems, pp. 2672–2680, 2014.
Guan, H., Kozak, M., Robertson, E., Lee, Y., Yates, A.N., Delgado, A., Zhou, D., Kheyrkhah, T., Smith, J., Fiscus, J., “MFC datasets: Large-scale benchmark datasets for media forensic challenge evaluation”, In Proceedings of the 2019 IEEE Winter Applications of Computer Vision Workshops, pp. 63–72, 2019.
Güera, D., Delp, E.J., “Deepfake video detection using recurrent neural networks”, In Proceedings of the 15th IEEE International Conference on Advanced Video and Signal Based Surveillance, pp. 1–6, 2018.
Jiang, Y. G., Jiang, Y., Wang, J., “VCDB: A large-scale database for partial copy detection in videos”. In Proceedings of the European Conference on Computer Vision, pp. 357–371, 2014.
Kiesel, J., Mestre, M., Shukla, R., Vincent, E., Adineh, P., Corney, D., Stein, B., Potthast, M., “SemEval-2019 Task 4: Hyperpartisan news detection”, In Proceedings of the 13th International Workshop on Semantic Evaluation, pp. 829–839, 2019.
Kordopatis-Zilos, G., Papadopoulos, S., Patras, I., Kompatsiaris, I., “FIVR: Fine-grained incident video retrieval”. IEEE Transactions on Multimedia, 21(10), pp. 2638–2652, 2019.
Korus, P., Huang, J., “Multi-scale analysis strategies in PRNU-based tampering localization”, IEEE Transactions on Information Forensics & Security, 21(4), pp. 809–824, 2017.
Lazer, D.M., Baum, M.A., Benkler, Y., Berinsky, A.J., Greenhill, K.M., Menczer, F., Schudson, M., “The science of fake news”, Science, 359(6380), pp. 1094–1096, 2018.
Papadopoulou, O., Zampoglou, M., Papadopoulos, S., Kompatsiaris, I., “A corpus of debunked and verified user-generated videos”. Online Information Review, 43(1), pp. 72–88, 2019.
Rocha, A., Scheirer, W., Boult, T., Goldenstein, S., “Vision of the unseen: Current trends and challenges in digital image and video forensics”, ACM Computing Surveys, 43(4), art. 26, 2011.
Rössler, A., Cozzolino, D., Verdoliva, L., Riess, C., Thies, J., Nießner, M., “FaceForensics++: Learning to detect manipulated facial images”, In Proceedings of the IEEE International Conference on Computer Vision, 2019.
Rubin, V.L., Chen, Y., Conroy, N.J., “Deception detection for news: Three types of fakes”, In Proceedings of the 78th ASIS&T Annual Meeting: Information Science with Impact: Research in and for the Community, art. 83, 2015.
Tandoc Jr, E.C., Lim, Z.W., Ling, R. “Defining “fake news”: A typology of scholarly definitions”, Digital journalism, 6(2), pp. 137–153, 2018.
Tralic, D., Zupancic I., Grgic S., Grgic M., “CoMoFoD – New database for copy-move forgery detection”. In Proceedings of the 55th International Symposium on Electronics in Marine, pp. 49–54, 2013.
Wardle, C., Derakhshan, H., “Information disorder: Toward an interdisciplinary framework for research and policy making”, Council of Europe Report, 27, 2017.
Wu, X., Hauptmann, A.G., Ngo, C.-W., “Practical elimination of near-duplicates from web video search”, In Proceedings of the 15th ACM International Conference on Multimedia, pp. 218–227, 2007.
Zampoglou, M., Papadopoulos, S., Kompatsiaris, Y., “Detecting image splicing in the wild (web)”, In Proceedings of the 2015 IEEE International Conference on Multimedia & Expo Workshops, 2015.
Zampoglou, M., Papadopoulos, S., Kompatsiaris, Y., “Large-scale evaluation of splicing localization algorithms for web images”, Multimedia Tools and Applications, 76(4), pp. 4801–4834, 2017.
Zhou, X., Zafarani, R., “Fake news: A survey of research, detection methods, and opportunities”. arXiv preprint arXiv:1812.00315, 2018.
Zubiaga, A., Aker, A., Bontcheva, K., Liakata, M., Procter, R., “Detection and resolution of rumours in social media: A survey”, ACM Computing Surveys, 51(2), art. 32, 2018.

Appendix A: Examples of videos in the Fake Video Corpus.

Real videos


US Airways Flight 1549 ditched in the Hudson River.


A group of musicians playing in an Istanbul park while bombs explode outside the stadium behind them.


A giant alligator crossing a Florida golf course.

Fake videos


“Syrian boy rescuing a girl amid gunfire” – Staged (fabricated content): The video was filmed by Norwegian Lars Klevberg in Malta.


“Golden Eagle Snatches Kid” – Tampered: The video was created by a team of students in Montreal as part of their course on visual effects.


“Pope Francis slaps Donald Trump’s hand for touching him” – Satire/parody: The video was digitally manipulated, and was made for the late-night television show Jimmy Kimmel Live.

Appendix B: Examples of videos in the Fine-grained Incident Video Retrieval dataset.

Example 1


Query video from the American Airlines Flight 383 fire at Chicago O’Hare International Airport on October 28, 2016.


Duplicate scene video.


Complementary scene video.


Incident scene video.

Example 2


Query video from the Boston Marathon bombing on April 15, 2013.


Duplicate scene video.


Complementary scene video.


Incident scene video.

Example 3


Query video from the Las Vegas shooting on October 1, 2017.


Duplicate scene video.


Complementary scene video.


Incident scene video.
