Call For Papers

The tremendous progress in generative AI has made the generation and manipulation of synthetic data easier and faster than ever before, and many use cases are benefiting from it. The negative side of this progress and of the wide adoption of generative AI is deepfakes: audio, images, or videos of individuals manipulated with generative methods without their permission, making them appear to say or do something they never did. These unethically manipulated media, popularly known as deepfakes, have wide repercussions and negative effects on society through their potential to spread disinformation and misinformation. Deepfakes are unfortunately also used for online trolling. Authentication systems such as video KYC (Know Your Customer) are not resilient either, as face recognition and verification systems are often deceived by high-quality deepfakes. It is therefore important for platforms and systems to be able to identify whether a piece of media has been manipulated. Systems that detect and analyse deepfakes are referred to as deepfake detectors.

As part of the grand challenge, teams who participate in the test phase are invited to submit a paper describing their method for peer review.

Submission Guidelines

Participants are invited to submit a paper showcasing their method for the challenge. Submissions should be comprehensive, detailing methodologies, results, and implications.

  1. Paper length: 4 pages, excluding references.
  2. Format: ACM MM format.
  3. Language: English.
  4. Submission: TBD

All submissions will go through a double-blind review process. All contributions must be submitted (along with supplementary materials, if any) via this CMT link. Accepted papers will be published in the official ACM MM 2024 main proceedings.