
Domestic Violence Recognizer Algorithm

Class: SYMSYS1 (Minds and Machines)
Teammates: Michelle Buyan
Machine Learning • Data Science • Research • Social Impact
Introduction
In this project we built a machine learning model with Google’s Teachable Machine to detect the standard domestic violence help hand signal. This signal, popularized during the COVID-19 pandemic, lets victims discreetly ask for help. Our model identifies the hand signal through a camera and distinguishes it from other hand gestures, with the longer-term goal of a tool that can alert authorities in real time when the signal is detected, potentially saving lives. We focused on building a robust, bias-free dataset representing different races, ages, and hand features to improve the model’s generalizability.
Computational Goal and Dataset Assembly

The computational goal of our model is to differentiate between the help hand signal and other common hand gestures captured through a camera. Our dataset includes at least 65 images per class, featuring diverse backgrounds, lighting, races, and hand accessories (e.g., rings, nail polish). To collect these images, we used:

  1. Google search terms like “domestic violence hand signal” and “common hand gestures.”

  2. Photographs from our campus, ensuring variety in skin tones, hand sizes, and lighting conditions.

We organized the dataset into 80% training data and 20% testing data, with subfolders for each class.
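
The split itself is easy to script. Below is a minimal sketch of how such an 80/20 split could be automated, assuming the raw images sit in one folder per class; the folder and class names are illustrative placeholders, not the exact tooling we used.

```python
import random
import shutil
from pathlib import Path

# Illustrative 80/20 split. Assumes raw images are stored as raw/<class_name>/*,
# e.g. raw/help_signal and raw/other_gestures (names are placeholders).
RAW_DIR = Path("raw")
OUT_DIR = Path("dataset")
TRAIN_FRACTION = 0.8
random.seed(42)  # fixed seed so the split is reproducible

for class_dir in RAW_DIR.iterdir():
    if not class_dir.is_dir():
        continue
    images = sorted(p for p in class_dir.iterdir()
                    if p.suffix.lower() in {".jpg", ".jpeg", ".png"})
    random.shuffle(images)
    cutoff = int(len(images) * TRAIN_FRACTION)
    for split, subset in (("train", images[:cutoff]), ("test", images[cutoff:])):
        dest = OUT_DIR / split / class_dir.name  # e.g. dataset/train/help_signal
        dest.mkdir(parents=True, exist_ok=True)
        for img in subset:
            shutil.copy2(img, dest / img.name)
```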

Ensuring Representativeness and Avoiding Bias

We prioritized a representative dataset by considering factors such as skin tone, lighting conditions, and hand accessories. To ensure diversity:

  • We included hands of different races and balanced data for hands with/without jewelry or nail polish.

  • Images were taken in various settings: indoors, outdoors, under bright and dim lighting.

  • We reviewed our dataset to identify underrepresented groups and actively sought additional samples; one way to run that kind of audit is sketched below.
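
Because a purely visual review is easy to get wrong, the check can be treated as a simple tally. The sketch below assumes a hand-made annotations.csv with one row per image and columns such as skin_tone, lighting, and accessories; that file and its column names are hypothetical bookkeeping on our side, not something Teachable Machine produces.

```python
import csv
from collections import Counter

# Hypothetical dataset audit: annotations.csv is a hand-labelled file with one
# row per image, e.g. columns filename, class, skin_tone, lighting, accessories.
ATTRIBUTES = ("skin_tone", "lighting", "accessories")
counts = {attr: Counter() for attr in ATTRIBUTES}

with open("annotations.csv", newline="") as f:
    for row in csv.DictReader(f):
        for attr in ATTRIBUTES:
            counts[attr][row[attr]] += 1

# Print the tally so underrepresented groups stand out at a glance.
for attr, tally in counts.items():
    print(attr)
    for value, n in tally.most_common():
        print(f"  {value}: {n}")
```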

Training, Testing, and Results

We trained the model using only the training data in Teachable Machine and tested it with the reserved testing data. Our model achieved an accuracy of 92% for both classes. Examples of successful and failed classifications, along with probability scores, were documented to analyze performance.
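
Teachable Machine can export the trained model (for example as a Keras .h5 file with a labels.txt alongside it), which makes it possible to score the reserved test folder outside the browser. The sketch below is an illustrative reconstruction of that step, assuming the standard Keras export and our dataset/test/<class_name> folder layout, with class folder names matching the labels in labels.txt.

```python
from pathlib import Path

import numpy as np
import tensorflow as tf
from PIL import Image

# Illustrative evaluation of a Teachable Machine Keras export (keras_model.h5 +
# labels.txt). Assumes test images live in dataset/test/<class_name>/ and that
# the class folder names match the label names in labels.txt.
model = tf.keras.models.load_model("keras_model.h5", compile=False)
with open("labels.txt") as f:
    labels = [line.strip().split(" ", 1)[1] for line in f]  # lines look like "0 help_signal"

def predict(path):
    """Return (predicted label, probability) for one image."""
    img = Image.open(path).convert("RGB").resize((224, 224))
    x = np.asarray(img, dtype=np.float32) / 127.5 - 1.0  # scale pixels to [-1, 1]
    probs = model.predict(x[np.newaxis], verbose=0)[0]
    return labels[int(np.argmax(probs))], float(np.max(probs))

correct = total = 0
for class_dir in Path("dataset/test").iterdir():
    if not class_dir.is_dir():
        continue
    for img_path in class_dir.glob("*.*"):
        pred, prob = predict(img_path)
        correct += int(pred == class_dir.name)
        total += 1
print(f"Test accuracy: {correct / total:.2%}")
```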

Hypothesis on Success and Failure

The model’s high success rate is likely due to the consistency in the hand signal images, with clear and close-up angles. However, failures may stem from differences in image quality and background patterns between training and testing data, leading the model to rely on irrelevant visual cues.

Parameter Adjustment

To improve performance, we adjusted the epoch parameter from 50 to 200. Training for more epochs gave the model more passes over the training data to learn the relevant patterns, and its confidence in the correct class for previously misclassified images rose from 73% to 87%.
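
In Teachable Machine this is a slider under the advanced training settings. For readers reproducing the experiment in code, the sketch below shows where the same knob appears in a stand-alone Keras training run; it is a hypothetical equivalent built on MobileNetV2 transfer learning, since our actual training happened inside Teachable Machine and its internal model may differ.

```python
import tensorflow as tf

# Hypothetical stand-alone equivalent of the Teachable Machine training step,
# shown only to illustrate where the epoch parameter lives. Folder layout
# (dataset/train/<class_name>) matches the 80/20 split described earlier.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/train", image_size=(224, 224), batch_size=16)

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, pooling="avg")
base.trainable = False  # transfer learning: only the new classifier head is trained

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # scale pixels to [-1, 1]
    base,
    tf.keras.layers.Dense(2, activation="softmax"),      # two classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Raising epochs from 50 to 200 simply means more passes over the training images.
model.fit(train_ds, epochs=200)
```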

Reflection on Generalizability and Bias

While the model is effective in recognizing the hand signal from diverse images, it may overfit to specific angles or lighting conditions, reducing its generalizability. Additionally, the dataset primarily includes women’s hands, limiting its ability to detect signals from men’s hands or unusual angles. For real-world application, further dataset expansion is necessary to improve robustness.

Introducing Algorithmic Bias

To explore algorithmic bias, we introduced negative legacy bias by removing all images with faces in the hand signal class. This adjustment caused the model to misclassify hand signals with faces as other gestures, highlighting the risk of bias embedded in training data. Accuracy for hand signals dropped to 85%, demonstrating the model’s reduced reliability in detecting signals in real-world scenarios like video calls.
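
One way to produce that filtered copy of the hand-signal class is to run a standard face detector over the folder and keep only the images in which no face is found. The sketch below uses OpenCV's bundled Haar cascade; the folder names are our own, and in practice the result should still be spot-checked by eye, since Haar cascades miss some faces.

```python
import shutil
from pathlib import Path

import cv2

# Illustrative bias injection: copy only the hand-signal images in which OpenCV's
# Haar-cascade face detector finds no face, producing the face-free training class.
SRC = Path("dataset/train/help_signal")         # original class folder (our naming)
DST = Path("dataset_biased/train/help_signal")  # biased copy used for retraining
DST.mkdir(parents=True, exist_ok=True)

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

for img_path in SRC.glob("*.*"):
    img = cv2.imread(str(img_path))
    if img is None:
        continue  # skip unreadable files
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:  # keep only images without a detectable face
        shutil.copy2(img_path, DST / img_path.name)
```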

Real-World Implications

Algorithmic bias can lead to harmful misclassifications in life-critical applications. For example, ProPublica’s investigation of criminal sentencing algorithms revealed racial biases, disproportionately affecting Black individuals (Callahan, 2023). Similar biases in our model could fail victims of domestic violence if it struggles to recognize signals in diverse real-world settings. This underscores the importance of building unbiased, inclusive datasets.
