Preserving Your Privacy: How the PLACE Ground YOLOv8 Model De-identifies Street View Imagery

At PLACE, our 360-degree cameras roam the streets, documenting neighborhoods and landmarks to create datasets that serve as the basis for service delivery, digital twins and improved machine learning. But with such powerful data comes great responsibility, especially when it comes to personal privacy.

After exploring several approaches with various solution providers, we now process our data for de-identification in-house. Given the wide range of geographies we capture, this enables us to develop more effective de-identification models. Leveraging cutting-edge machine learning techniques, we have developed a custom YOLOv8 model that detects faces and license plates in our imagery. We then blur and obscure these detected instances, ensuring that personally identifying information (PII) remains confidential.

What’s YOLO? YOLO stands for “You Only Look Once” and is a family of open-source object detection models that are lightning-fast and highly accurate; the YOLOv8 release we build on is developed by Ultralytics. Unlike traditional methods, YOLO processes the entire image in one go, predicting bounding boxes and class probabilities simultaneously. It’s like having a supercharged set of eyes that spots objects in milliseconds.

Object detection involves identifying the location and class of objects within an image or video stream. YOLOv8 outputs bounding boxes that enclose detected objects, along with class labels and confidence scores for each box. It’s an excellent choice when you need to identify objects of interest in a scene without requiring precise shape information.
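To make that output format concrete, here is a minimal, hypothetical sketch in Python. The tuple layout, class names, and threshold are illustrative assumptions, not the actual Ultralytics API; the point is that each detection pairs a class label and confidence score with a bounding box, and low-confidence boxes are typically filtered out before further processing.

```python
# Illustrative sketch (not PLACE's actual code): represent YOLO-style
# detections as (class_name, confidence, (x1, y1, x2, y2)) tuples.

def filter_detections(detections, classes=("face", "license_plate"), min_conf=0.25):
    """Keep only boxes of the target classes above a confidence threshold."""
    return [
        (name, conf, box)
        for name, conf, box in detections
        if name in classes and conf >= min_conf
    ]

detections = [
    ("face", 0.91, (120, 40, 180, 110)),
    ("license_plate", 0.18, (300, 220, 360, 250)),  # too uncertain, dropped
    ("car", 0.88, (250, 180, 420, 300)),            # not PII, dropped
]
kept = filter_detections(detections)
```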

What makes PLACE’s YOLO so special?

Our Custom Twist: The PLACE team trained a custom YOLOv8 model using a diverse, well-sampled training dataset drawn from multiple PLACE Ground collections across Africa, the Caribbean and the Pacific. Our focus was on two critical elements: faces and license plates. Sampled images were labeled for both classes, and the labels were reviewed manually.

What’s involved in customizing YOLO?

Custom Model Building is a multistep process which includes:

Data Collection: Obtain a diverse, representative training dataset that follows the best practices described in the Ultralytics YOLOv8 documentation.

Data Annotation: Use tools like Label Studio to draw bounding boxes around objects in your images and label them with their corresponding classes. Labeling objects consistently is one of the keys to building a robust model.
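Annotation tools typically record boxes in pixel coordinates, while YOLO expects one label file per image with lines of the form “class_id x_center y_center width height”, all normalized to the range 0–1. The helper below is an illustrative sketch of that conversion, not PLACE’s actual pipeline:

```python
# Hedged sketch: convert a pixel-space bounding box (as drawn in an
# annotation tool) into a YOLO-format label line, with coordinates
# normalized by the image dimensions.

def to_yolo_label(class_id, x1, y1, x2, y2, img_w, img_h):
    xc = (x1 + x2) / 2 / img_w   # box center x, normalized
    yc = (y1 + y2) / 2 / img_h   # box center y, normalized
    w = (x2 - x1) / img_w        # box width, normalized
    h = (y2 - y1) / img_h        # box height, normalized
    return f"{class_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

# A 100x50 px face box in the top-left corner of a 1000x500 image:
line = to_yolo_label(0, 0, 0, 100, 50, 1000, 500)
```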

Dataset Preparation: Organize your images and annotations into folders compatible with YOLOv8. Create a YAML file (e.g., data_custom.yml) specifying the paths and class names.
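For a two-class face/license-plate model, such a file might look like the example below. The paths are hypothetical; the key names follow the Ultralytics dataset YAML convention.

```yaml
# data_custom.yml -- hypothetical example layout
path: datasets/place_pii     # dataset root (assumed location)
train: images/train          # training images, relative to path
val: images/val              # validation images, relative to path
names:
  0: face
  1: license_plate
```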

Install Dependencies: Install the Ultralytics and PyTorch libraries and their dependencies in a suitable Python environment. Then select a model variation: decide which YOLOv8 variant (Nano, Small, Medium or Large) you want to use.

Train and Evaluate the Model: Train your custom YOLOv8 model using command-line arguments or Python scripts, experimenting with different hyperparameters. After training, evaluate the model’s performance on validation data. You can also run inference on new images, videos, or webcam streams.
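Assuming a suitable Python environment, the install, train, evaluate and predict steps above can be sketched with the Ultralytics command-line interface. The model variant, epoch count and image size here are illustrative defaults, not PLACE’s actual settings:

```shell
# Install Ultralytics (PyTorch is pulled in as a dependency).
pip install ultralytics

# Train: start from the pretrained Small variant's weights.
yolo detect train data=data_custom.yml model=yolov8s.pt epochs=100 imgsz=640

# Evaluate on the validation split defined in the YAML file.
yolo detect val model=runs/detect/train/weights/best.pt data=data_custom.yml

# Run inference on new imagery.
yolo detect predict model=runs/detect/train/weights/best.pt source=new_images/
```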

Processing Power: Developing and training machine learning models, particularly deep learning architectures like YOLOv8, demands significant computational resources. To address this, PLACE opted for AWS EC2 instances equipped with powerful GPUs—a widely recognized choice in the field. Specifically, the configuration selected for developing our PII (Personally Identifiable Information) detection model was the g5.8xlarge instance.

Here are the key specifications of the g5.8xlarge instance:

GPU: The instance includes a single NVIDIA A10G GPU, purpose-built for accelerating deep learning workloads.

CPU: Powered by 32 virtual CPUs (vCPUs), it provides substantial compute capacity.

Memory: With 128 GB of RAM, the instance ensures efficient data handling during training and inference.

By leveraging this robust configuration, we achieved efficient model development and validation. The combination of GPUs, ample CPU cores, and memory allowed us to expedite training while maintaining accuracy.

Anonymization: Using the PII detection model custom-developed from PLACE Ground collections, detections are recorded in text files associated with each exposure point (360-degree street view image). When a face or license plate is detected, a second script applies a blur filter to those regions of the image, effectively anonymizing them.
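PLACE’s production pipeline presumably relies on an image library such as OpenCV or Pillow with a stronger filter (e.g. a Gaussian blur); the dependency-free Python sketch below only illustrates the core idea of blurring just the detected regions while leaving the rest of the image untouched:

```python
# Illustrative sketch: mean-blur only the pixels inside a detected box.
# The "image" is a grayscale grid (list of lists of ints); a real
# pipeline would use OpenCV or Pillow on the actual imagery.

def blur_region(image, x1, y1, x2, y2, radius=1):
    """Mean-blur pixels inside the box [x1, x2) x [y1, y2)."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(y1, y2):
        for x in range(x1, x2):
            neighbors = [
                image[ny][nx]
                for ny in range(max(0, y - radius), min(h, y + radius + 1))
                for nx in range(max(0, x - radius), min(w, x + radius + 1))
            ]
            out[y][x] = sum(neighbors) // len(neighbors)
    return out

# A sharp bright pixel inside the detected box gets averaged away,
# while pixels outside the box are left unchanged.
img = [[0, 0, 0, 0],
       [0, 90, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
blurred = blur_region(img, 0, 0, 3, 3)
```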

We continue to develop and improve our YOLO model with each new dataset we collect. If you are planning to develop your own model for similar uses, we would love to hear from you and share experiences.