Build a Simple Face Blurring Video Converter using Python
In an era where digital privacy is more important than ever, being able to anonymize individuals in video footage is a valuable skill. Whether you are a journalist, a researcher, or a content creator, blurring faces can help protect identities. In this tutorial, we will build a simple video converter using Python, OpenCV, and MediaPipe to detect and blur faces automatically.
Prerequisites and Tools
To follow this guide, you will need Python installed on your system. We will be using two primary libraries:
- OpenCV: A powerful library for real-time computer vision and image processing.
- MediaPipe: A framework by Google that provides high-fidelity face detection solutions.
Installation
Open your terminal or command prompt and install the necessary dependencies using the following command:
pip install opencv-python mediapipe
The Python Script
The following script reads a video file, detects every face in every frame, applies a Gaussian blur to those regions, and saves the result to a new file.
import cv2
import mediapipe as mp

def blur_video_faces(input_path, output_path):
    # Initialize MediaPipe Face Detection
    mp_face_detection = mp.solutions.face_detection
    face_detection = mp_face_detection.FaceDetection(
        model_selection=1, min_detection_confidence=0.5)

    # Open the video file
    cap = cv2.VideoCapture(input_path)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    fps = cap.get(cv2.CAP_PROP_FPS)

    # Define the codec and create VideoWriter object
    fourcc = cv2.VideoWriter_fourcc(*'mp4v')
    out = cv2.VideoWriter(output_path, fourcc, fps, (width, height))

    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break

        # MediaPipe expects RGB input, but OpenCV reads frames as BGR
        results = face_detection.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

        if results.detections:
            for detection in results.detections:
                bbox = detection.location_data.relative_bounding_box
                x, y = int(bbox.xmin * width), int(bbox.ymin * height)
                w, h = int(bbox.width * width), int(bbox.height * height)

                # Clamp the box to the frame; detections near the edges
                # can produce coordinates slightly outside the image
                x, y = max(x, 0), max(y, 0)
                w, h = min(w, width - x), min(h, height - y)

                # Extract and blur the face ROI
                face_roi = frame[y:y+h, x:x+w]
                if face_roi.size > 0:
                    blurred_face = cv2.GaussianBlur(face_roi, (99, 99), 30)
                    frame[y:y+h, x:x+w] = blurred_face

        out.write(frame)

    cap.release()
    out.release()
    face_detection.close()
    print("Processing complete. Video saved to:", output_path)

# Usage
blur_video_faces('input_video.mp4', 'output_blurred.mp4')
How the Converter Works
1. Face Detection with MediaPipe
We use the MediaPipe Face Detection module because it is fast and runs well even on a standard CPU. For each detected face it returns a relative bounding box: coordinates expressed as fractions of the frame width and height, which we then scale up to pixel values.
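That scaling step can be sketched in isolation. The helper below uses a hypothetical name and takes the bounding box as a plain (xmin, ymin, width, height) tuple rather than MediaPipe's protobuf object, but the arithmetic is the same as in the script, including clamping to the frame, since detections near the edges can fall slightly outside it:

```python
def to_pixel_box(bbox, frame_width, frame_height):
    """Convert a relative bounding box (fractions of the frame size,
    occasionally slightly outside [0, 1]) to clamped pixel coordinates."""
    x = int(bbox[0] * frame_width)
    y = int(bbox[1] * frame_height)
    w = int(bbox[2] * frame_width)
    h = int(bbox[3] * frame_height)
    # Clamp so the ROI slice always stays inside the image
    x, y = max(x, 0), max(y, 0)
    w = min(w, frame_width - x)
    h = min(h, frame_height - y)
    return x, y, w, h

# A detection whose left edge spills past the frame boundary
print(to_pixel_box((-0.02, 0.1, 0.3, 0.4), 640, 480))  # (0, 48, 192, 192)
```

Without the clamping, a negative x or y would silently wrap around in NumPy slicing and blur the wrong region.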
2. Region of Interest (ROI) Blurring
Once we have the coordinates, we isolate that part of the frame (the ROI) and apply a Gaussian blur to it. The kernel size (99, 99) determines the intensity of the blur; larger values (which must be odd) produce a more heavily obscured face.
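Gaussian blur is not the only way to anonymize the ROI. A common alternative is pixelation, which reduces the region to a coarse grid of averaged blocks. The NumPy-only sketch below (not part of the original script) could replace the cv2.GaussianBlur call; assign its result to the ROI slice just as the script does with blurred_face:

```python
import numpy as np

def pixelate(roi, blocks=10):
    """Anonymize a region by averaging the pixels in each cell of a
    blocks-by-blocks grid. Coarser grids (smaller `blocks`) obscure more."""
    h, w = roi.shape[:2]
    out = roi.copy()
    ys = np.linspace(0, h, blocks + 1, dtype=int)
    xs = np.linspace(0, w, blocks + 1, dtype=int)
    for i in range(blocks):
        for j in range(blocks):
            cell = roi[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            if cell.size:
                # Replace every pixel in the cell with the cell's mean color
                out[ys[i]:ys[i + 1], xs[j]:xs[j + 1]] = \
                    cell.mean(axis=(0, 1)).astype(roi.dtype)
    return out
```

Unlike a Gaussian blur, pixelation cannot be partially undone by deconvolution, which makes it a slightly stronger choice for privacy.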
3. Saving the Output
The script uses cv2.VideoWriter to stitch the processed frames back together. We use the 'mp4v' codec to ensure the output is saved in a widely compatible MP4 format.
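If you want the script to accept other output containers, the FourCC code should match the file extension. A minimal sketch, using a hypothetical helper and a small mapping of common pairings (these codes are conventional choices, not the only valid ones):

```python
import os

# Common container-to-FourCC pairings: 'mp4v' for MP4/MOV, 'XVID' for AVI
CODECS = {'.mp4': 'mp4v', '.mov': 'mp4v', '.avi': 'XVID'}

def fourcc_for(output_path, default='mp4v'):
    """Pick a FourCC string based on the output file's extension,
    falling back to 'mp4v' for unknown containers."""
    ext = os.path.splitext(output_path)[1].lower()
    return CODECS.get(ext, default)
```

The returned string would be passed to cv2.VideoWriter_fourcc(*fourcc_for(output_path)) in place of the hard-coded 'mp4v'.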
Conclusion
You now have a functional Python tool to anonymize videos. This script can be further customized to detect multiple objects or even integrated into a web application using Flask or Django for a more user-friendly interface.