Algorithmic Accountability in Welfare and Identity Systems
As digital technologies proliferate within governance and welfare systems, algorithms decide who gets included and who gets left out. These technologies rely on error-prone data sets and operate with limited transparency and without adequate safeguards. Our roundtable and workshop examined how accountability frameworks must evolve to keep pace with these databases, platforms, and code.
National Law School of India University, Bangalore
February 14-15, 2026
Held as part of the Consultation on ‘Strengthening Accountability Systems: Reflections, Innovations, and Collective Action’
Evidence from welfare and identity systems across states demonstrates how algorithmic decision-making can systematically exclude vulnerable populations: elderly persons, migrant workers, persons with disabilities, rural households, and those living in areas with poor connectivity or data errors. Automated deletions, opaque scoring logics, and rigid authentication protocols frequently translate into denial of entitlements, with affected citizens required to repeatedly “prove eligibility” rather than systems being held accountable for failure.
As the state increasingly governs through databases, platforms, and code, traditional accountability tools such as social audits, disclosures, and grievance mechanisms remain poorly equipped to interrogate algorithmic power. This session places algorithmic governance firmly within the social accountability tradition, asking how democratic oversight must adapt when decisions are automated and responsibility is diffused across state agencies and private technology vendors.
Current Status: Audits, Disclosure, and Grievance Redress
Existing accountability mechanisms focus largely on financial flows, coverage numbers, and procedural compliance. Social audits rarely examine algorithmic rules, data architectures, exclusion thresholds, or system error rates. Disclosure norms do not require transparency around how welfare technologies make decisions or classify beneficiaries.
Grievance redress systems remain fragmented and individualised, often shifting the burden of correction onto beneficiaries rather than addressing systemic design flaws. Responsibility is further obscured when technology vendors operate core systems without public accountability.
Emerging jurisprudence on digital rights, privacy, and constitutional protections provides important entry points, but operational accountability frameworks remain underdeveloped.
Against this backdrop, the roundtable and workshop held on 14 February at NLSIU, Bengaluru brought together practitioners, researchers, workers, and technologists to discuss the problems with algorithmic welfare technologies and the pathways that can ensure accountability.
Roundtable highlights: Rethinking welfare in the age of algorithms

The roundtable opened with a framing by Kumar Sambhav Shrivastava, situating algorithmic governance within broader questions of rights, due process, and constitutional accountability.
The expert discussion that followed surfaced four key areas of concern:
- Procedural gaps in design and deployment: Participants reflected on how systems are often implemented without adequate testing, contextual understanding, or safeguards against exclusion.
- Algorithmic exclusion in practice: Field experiences highlighted how biometric failures, data mismatches, and rigid rules translate into denial of entitlements, often invisibly and at scale.
- Data use and safeguards: Questions were raised around how beneficiary data is collected, processed, and potentially misused, alongside the absence of clear accountability structures.
- A shift in welfare philosophy: A deeper concern emerged around the changing logic of welfare itself, from ensuring “no eligible person is left out” to prioritising the elimination of “ineligible beneficiaries.” Participants questioned whether algorithmic systems are reshaping how the state defines and identifies the poor.

Workshop: What should accountability look like?

Building on the roundtable, the workshop shifted focus from identifying problems to co-creating solutions. The discussion began by examining the limits of existing accountability mechanisms:
- Audits largely focus on financial flows and coverage numbers, rarely interrogating algorithmic rules or system design.
- Disclosure norms do not require transparency around how decisions are made or how beneficiaries are classified.
- Grievance redress systems remain fragmented and individualised, placing the burden of correction on citizens rather than addressing systemic failures.
- Private technology vendors, often central to these systems, operate with limited public accountability.

Table Discussion 1: Setting standards
Participants worked in groups to define what robust accountability in algorithmic welfare systems should include:
- Preventive safeguards to reduce recurring exclusions
- Minimum design standards for welfare technologies
- Structural reforms in grievance redress systems to ensure due process
- Community and citizen-led accountability mechanisms
Table Discussion 2: Operationalising reform
This session focused on translating principles into practice:
- What policy levers can make accountability standards enforceable?
- What roles should be played by government institutions, regulators, civil society, and technology providers?
- How can responsibility be clearly defined in systems where decision-making is distributed?