VICSafer

We created these models to support officer safety through safer viewing, an optional shielding technique that investigators use to limit repeated exposure to child abuse material encountered during the investigative process.

Project VIC built a workflow system called the Safer Viewing Platform (SVP). SVP integrates the Sexual Assault Feature Extraction and Reduction (SAFER) model, the Safer Viewing Layer, other machine learning models, and a workflow design that lets investigators quickly triage large volumes of images and videos to find child abuse material. A Dockerized version of SVP is available for licensing, as are the SVP source code and the training data used to train the SAFER model.

We also offer a Dockerized SAFER model for licensing to adopters who want that flexibility. We created this model to find child abuse material that has not yet been observed by law enforcement and recorded in a hash database. The SAFER model is trained on 21 classes that detect the presence of human faces, estimate gender and age, and identify body parts, sexual actions, bodily fluids, and other objects of interest to child sexual abuse investigators. We release updated models every few months that improve the accuracy of one or more of these classifiers.
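To illustrate how a multi-class detector like SAFER can drive triage, the minimal Python sketch below flags a file for investigator review whenever any class of interest is detected above a confidence threshold. The class names, threshold value, and file paths are placeholders of our own choosing; the actual SAFER class list and recommended thresholds are only distributed under a formal agreement.

```python
from dataclasses import dataclass

# Placeholder class names and threshold; the real SAFER classes are
# provided only under a formal licensing agreement.
CLASSES_OF_INTEREST = {"face", "body_part", "sexual_action"}
CONFIDENCE_THRESHOLD = 0.5


@dataclass
class Detection:
    label: str         # one of the detector's class names
    confidence: float  # detection confidence in [0, 1]


def needs_review(detections: list[Detection]) -> bool:
    """Flag a file for review if any class of interest exceeds the threshold."""
    return any(
        d.label in CLASSES_OF_INTEREST and d.confidence >= CONFIDENCE_THRESHOLD
        for d in detections
    )


# Example: triage a batch of per-file detection results (dummy data).
results = {
    "image_001.jpg": [Detection("face", 0.91)],
    "image_002.jpg": [Detection("background", 0.40)],
}
flagged = [path for path, dets in results.items() if needs_review(dets)]
print(flagged)  # -> ['image_001.jpg']
```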

The SAFER model is built on the YOLOv3 (Darknet) and YOLOv5 (PyTorch) architectures. Project VIC offers several variations of the SAFER model based on the integration needs of our partners.
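For the PyTorch variant, integration typically looks like loading a custom-trained YOLOv5 model through PyTorch Hub and running batched inference. The sketch below shows that pattern; the weights filename "safer_v5.pt" and the image paths are placeholders, since the actual SAFER weights are only provided under a formal licensing agreement.

```python
import torch

# Load a custom-trained YOLOv5 model via PyTorch Hub.
# "safer_v5.pt" is a placeholder filename for licensed SAFER weights.
model = torch.hub.load("ultralytics/yolov5", "custom", path="safer_v5.pt")
model.conf = 0.5  # minimum detection confidence

# Run batched inference on a set of images (placeholder paths).
images = ["evidence/image_001.jpg", "evidence/image_002.jpg"]
results = model(images)

# Each row of each DataFrame is one detection: box, confidence, class id, class name.
for path, df in zip(images, results.pandas().xyxy):
    labels = set(df["name"])
    print(path, "->", labels if labels else "no detections")
```

The same detection output can then feed a triage rule like the one sketched earlier.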

Please reach out to us via our "help" button to learn more about the VICSafer model. We only share details with law enforcement and vendors under a formal agreement.