Faced with a face recognition system that could be fooled by a simple high-resolution photo or a phone screen held up to the camera, the developer turned to Face Anti-Spoofing (FAS) to close the gap. The model relies on texture analysis driven by a Fourier Transform loss: real skin and reproductions such as digital screens or printed paper differ in fine, high-frequency texture, and the frequency-domain objective pushes the network to pick up on those differences. Trained on a diverse dataset of 300,000 samples and validated against the CelebA benchmark, the model reached 98% accuracy, and INT8 quantization compressed it to 600KB, small enough to run efficiently on low-power hardware such as an old Intel Core i7 laptop without a GPU. The takeaway is that a specialized, lightweight model can outperform larger general-purpose ones on a narrow task, and the open-source project invites contributions for further improvements.
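The article does not include the loss implementation, but one common way to realize a Fourier Transform loss for anti-spoofing is to add an auxiliary head that regresses the input's FFT magnitude spectrum, so the network is forced to model frequency-domain texture cues. Below is a minimal PyTorch sketch under that assumption; `fft_magnitude`, `fourier_loss`, `pred_spectrum`, and `lambda_fft` are illustrative names, not the project's API.

```python
# Hypothetical sketch of a Fourier-domain texture loss (not the article's exact code).
# Idea: real faces and screen/paper replays differ in their high-frequency spectra,
# so an auxiliary head is supervised against the FFT magnitude of the input patch.

import torch
import torch.nn.functional as F


def fft_magnitude(x: torch.Tensor) -> torch.Tensor:
    """Log-magnitude spectrum of a batch of grayscale patches (B, 1, H, W)."""
    spec = torch.fft.fft2(x)                       # complex 2D spectrum
    spec = torch.fft.fftshift(spec, dim=(-2, -1))  # move DC component to the center
    return torch.log1p(spec.abs())                 # compress dynamic range


def fourier_loss(pred_spectrum: torch.Tensor, image: torch.Tensor) -> torch.Tensor:
    """L1 distance between the auxiliary head's output and the true FFT magnitude."""
    target = fft_magnitude(image)
    # Resize the target to the prediction's spatial size (auxiliary maps are often smaller).
    target = F.interpolate(target, size=pred_spectrum.shape[-2:],
                           mode="bilinear", align_corners=False)
    return F.l1_loss(pred_spectrum, target)


# Typical use alongside the main classification loss:
# total_loss = bce_loss(logits, labels) + lambda_fft * fourier_loss(aux_out, gray_input)
```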
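The exact quantization toolchain behind the 600KB figure is not stated in the summary. As one illustration of post-training INT8 weight quantization for CPU inference, ONNX Runtime's dynamic quantization can be applied to an exported model; the filenames below are placeholders.

```python
# Illustrative only: one common route to an INT8 model for CPU deployment.
# The article's actual quantization pipeline is not specified.
from onnxruntime.quantization import quantize_dynamic, QuantType

quantize_dynamic(
    model_input="fas_model.onnx",        # placeholder: exported FP32 model
    model_output="fas_model_int8.onnx",  # placeholder: quantized output
    weight_type=QuantType.QInt8,         # store weights as 8-bit integers
)
```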
Read Full Article: Lightweight Face Anti-Spoofing Model for Low-End Devices