People of ACM - Richa Singh

September 9, 2025

What are the challenges and opportunities for a student who wants to pursue a career in biometrics and pattern recognition?

Biometrics, and pattern recognition in general, is in an exciting phase, driven by rapid advances in AI, edge computing, and privacy-preserving technologies. Demand is high for professionals who can design reliable, secure, and privacy-aware recognition systems for applications ranging from smartphones and healthcare diagnostics to finance, forensics, and smart cities. Emerging opportunities include multimodal biometrics (face, voice, iris, gait), behavioral signals (keystroke dynamics, touchscreen gestures), and responsible AI practices (fairness, transparency, explainability).

However, the challenges are significant. The field is evolving quickly, requiring continuous upskilling and a strong grounding in mathematics, statistics, and programming: students need not just to use AI models but to understand their inner workings and limitations. Practical challenges include bias in datasets, scarcity of high-quality labeled data, robustness in uncontrolled environments, and the need for real-time, energy-efficient systems. Moreover, the domain is inherently interdisciplinary, combining computer science, signal processing, psychology, ethics, and law.

Above all, biometrics is a socio-technical discipline. Success will depend not only on technical innovation but also on a commitment to privacy, consent, and equitable access. Students who combine deep technical expertise with ethical awareness will be best positioned to lead and shape the next generation of trustworthy recognition systems.

In your paper “A Comprehensive Overview of Biometric Fusion,” you note that “fusion” approaches are more accurate than single markers. What is the most important step we have made in developing fusion algorithms?

The step‑change was moving from hand‑tuned rules (for example, “trust fingerprint more than face in poor lighting”) to learning-based fusion. With machine learning—and now deep learning—systems learn how to combine modalities from data, adapting weights to signal quality and context. This allows models to capture non‑linear relationships among different modalities and adjust on the fly (e.g., up‑weighting periocular cues when masks reduce facial evidence). Current work builds on this with transformer‑style cross‑attention and quality‑aware fusion, and explores generative modeling to address missing or scarce modalities. These advances have made fusion practical at scale. The caveats are the usual important ones: ensure interpretability, robustness, and fairness, and scrutinize any synthetic data for leakage or bias.
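To make the idea concrete, below is a minimal, illustrative sketch of learning-based, quality-aware score fusion. It assumes two modalities (face and fingerprint) with per-sample quality estimates; the synthetic data, feature choices, and simple logistic model are assumptions for illustration, not the production systems described above.

```python
# A minimal sketch of learning-based, quality-aware score fusion (illustrative only).
# Each sample has a match score and a quality estimate per modality (face, fingerprint);
# a tiny logistic model learns how much to trust each modality given its quality.
import numpy as np

rng = np.random.default_rng(0)

def simulate(n=2000):
    """Synthetic genuine/impostor scores; the face score degrades when its quality is low."""
    labels = rng.integers(0, 2, n)                      # 1 = genuine, 0 = impostor
    quality_face = rng.uniform(0.2, 1.0, n)             # e.g. low in poor lighting
    quality_finger = rng.uniform(0.6, 1.0, n)
    face = labels * quality_face + rng.normal(0, 0.25, n)
    finger = 0.8 * labels + rng.normal(0, 0.25, n)
    X = np.column_stack([face, finger, quality_face, quality_finger])
    return X, labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_fusion(X, y, lr=0.1, epochs=500):
    """Logistic-regression fusion: weights adapt to both match scores and quality features."""
    w = np.zeros(X.shape[1]); b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

X, y = simulate()
w, b = train_fusion(X, y)
pred = sigmoid(X @ w + b) > 0.5
print("fusion weights:", np.round(w, 2), "accuracy:", np.mean(pred == y))
```

In contrast to a fixed rule, the learned weights reflect how informative each modality and its quality measure actually are in the training data, which is the essence of the shift from hand-tuned to learning-based fusion.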

In the paper “Enhancing Fine-Grained Classification for Low Resolution Images,” you propose a technique called attribute-assisted loss. In broad terms, will you tell us how this technique can improve how we perceive low-resolution images?

Humans cope with blurry images by leaning on semantic attributes (such as “yellow beak” or “striped tail”). Our attribute‑assisted loss trains models to do the same. Instead of optimizing only for final class labels, which is hard when pixel information is scarce, we jointly learn to detect stable attributes and use them as intermediate signals that guide the classifier. This encourages a hierarchical representation: attributes that survive degradation become anchors for the final decision.

We’ve extended this idea to faces under multiple degradations, specifically focusing on low resolution, masks, disguises, and injuries, by emphasizing reliable regions when others are occluded. The result is more robust performance on real‑world imagery, from challenging CCTV footage to medical settings.
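As a rough illustration of the general idea, the sketch below pairs a class loss with an auxiliary attribute-prediction loss. The network layout, attribute count, and weighting term lam are hypothetical stand-ins, not the paper’s exact formulation.

```python
# A minimal sketch of an attribute-assisted objective (not the paper's exact formulation):
# the model predicts both the fine-grained class and a set of semantic attributes, and the
# attribute loss acts as an auxiliary signal that survives low resolution better than pixels.
import torch
import torch.nn as nn

class AttributeAssistedNet(nn.Module):
    def __init__(self, feat_dim=128, num_classes=200, num_attributes=32):
        super().__init__()
        # Stand-in backbone; in practice this would be a CNN over low-resolution images.
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, feat_dim), nn.ReLU())
        self.attr_head = nn.Linear(feat_dim, num_attributes)   # e.g. "yellow beak", "striped tail"
        self.cls_head = nn.Linear(feat_dim + num_attributes, num_classes)

    def forward(self, x):
        feat = self.backbone(x)
        attr_logits = self.attr_head(feat)
        # Attribute predictions act as intermediate evidence feeding the final classifier.
        cls_logits = self.cls_head(torch.cat([feat, torch.sigmoid(attr_logits)], dim=1))
        return cls_logits, attr_logits

def attribute_assisted_loss(cls_logits, attr_logits, labels, attributes, lam=0.5):
    """Joint objective: class cross-entropy plus a weighted attribute-prediction term."""
    cls_loss = nn.functional.cross_entropy(cls_logits, labels)
    attr_loss = nn.functional.binary_cross_entropy_with_logits(attr_logits, attributes)
    return cls_loss + lam * attr_loss

# Toy usage with random tensors standing in for low-resolution images and annotations.
model = AttributeAssistedNet()
images = torch.randn(8, 3, 32, 32)
labels = torch.randint(0, 200, (8,))
attributes = torch.randint(0, 2, (8, 32)).float()
cls_logits, attr_logits = model(images)
loss = attribute_assisted_loss(cls_logits, attr_logits, labels, attributes)
loss.backward()
print(loss.item())
```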

What is another recent development in your field that will be especially impactful in the near future?

Two forces are shaping the field: generative AI and machine unlearning.

GenAI helps and hurts. On the plus side, high‑quality synthetic data can balance classes, model rare conditions, and stress‑test systems; generation‑aware training can improve robustness. On the risk side, GenAI enables deepfakes and biometric spoofs (voice, face, fingerprints). We are responding with multi‑modal verification, liveness checks, and detectors trained against strong synthetic adversaries.

Machine unlearning addresses the “right to be forgotten.” Deleting a user’s record isn’t enough once a model has trained on it. Unlearning aims to remove an individual’s influence without full retraining, enabling privacy compliance while preserving accuracy. I expect the field to converge on auditable, privacy‑preserving biometric systems (and in general responsible AI systems) that are multi‑modal, generation‑resilient, and capable of unlearning, balancing security with civil liberties.
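For intuition, here is a minimal sketch of one common approximate-unlearning heuristic (gradient ascent on a forget set combined with descent on retained data). The model, data, and hyperparameters are placeholders, and real deployments would need formal guarantees and audits.

```python
# A minimal sketch of one approximate unlearning heuristic: ascend the loss on the forget
# set while descending on retained data, so the forgotten user's influence is reduced
# without retraining from scratch. Illustrative only; not a certified unlearning method.
import torch
import torch.nn as nn

def unlearn_step(model, forget_batch, retain_batch, lr=1e-3, alpha=1.0):
    """One update that pushes the model away from the forgotten user's data
    while preserving behaviour on the retained data."""
    loss_fn = nn.CrossEntropyLoss()
    x_f, y_f = forget_batch
    x_r, y_r = retain_batch
    # Negative loss on the forget set = gradient ascent; positive loss on the retain set.
    loss = -alpha * loss_fn(model(x_f), y_f) + loss_fn(model(x_r), y_r)
    model.zero_grad()
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            p -= lr * p.grad
    return loss.item()

# Toy usage with a small classifier and random stand-in data.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
forget = (torch.randn(8, 16), torch.randint(0, 4, (8,)))
retain = (torch.randn(32, 16), torch.randint(0, 4, (32,)))
for _ in range(5):
    print(round(unlearn_step(model, forget, retain), 3))
```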

Why is a journal such as AILET an important addition to the field?

AI is moving fast. AI Letters fills a crucial gap between traditional conferences and journals by publishing peer‑reviewed contributions that should reach the community now. AILET prioritizes theoretical breakthroughs, algorithmic innovation, practical real-world applications, and critical societal implications including ethics, policy, and responsible AI. It enables academic and industry researchers and practitioners to disseminate meaningful, well‑reviewed results without waiting for months‑long review cycles.

As a leading researcher, what is your perspective on the role of women in AI, particularly in the context of India and the Global South? What unique opportunities or challenges do you see with GenAI?

I’m optimistic. India’s talent pipeline is growing, and many institutions are young enough to build inclusive cultures from day one. Women researchers are not only working on globally relevant challenges but also addressing locally salient problems. For example, women are leading research efforts pertaining to low‑resource languages, inclusive AI systems for financial access, agriculture, and public health. These contributions are broadening what the field values.

However, challenges persist: a leaky pipeline around mid‑career, limited family-care support, and geographic constraints. Solutions include visible role models, structured mentorship, flexible group policies, and evaluation that accounts for career breaks. With GenAI, we must ensure women’s voices shape both capabilities and guardrails. My goal is an AI ecosystem that is inclusive by design, not by retrofit, and I see encouraging momentum toward that end.


Richa Singh is a Professor of Computer Science & Engineering at IIT Jodhpur. She has published over 400 peer‑reviewed papers in areas including biometrics, pattern recognition, medical image analysis, and responsible AI. Her group’s work has been used in several significant events, including injured‑face recognition technology during the 2023 Balasore train tragedy and deepfake verification support for Indian newsrooms during the 2024 general elections.

Singh has been recognized with the NASSCOM AI Gamechangers Award and Facebook’s Ethics in AI (India) Research Award. She is also a Fellow of IEEE, IAPR, and INAE and an ACM Distinguished Member.

Her volunteer contributions to the community include serving as Founding Co‑Editor‑in‑Chief of ACM AI Letters (AILET), Associate Editor‑in‑Chief of Pattern Recognition, and Program Co‑Chair of CVPR 2022. Colleagues also value her commitment to mentoring numerous graduate students and early‑career researchers.