LOADinG Act will ensure the state uses AI responsibly
Leaders across industries are quick to tell us that artificial intelligence will transform our world. We don’t disagree. But as news of algorithmic bias, manipulative deepfakes and the potential automation of millions of jobs piles up, we must put appropriate guardrails in place, especially when it comes to our state’s use of AI tools.
This year, as New York kicks off Empire AI, a statewide initiative to direct AI for the public good, we must ask serious questions about our state government’s technology tools.
That’s why we worked to pass the LOADinG Act (Legislative Oversight of Automated Decision-making in Government) in the last legislative session. It’s the first step in a comprehensive vision for ethical and responsible AI adoption that puts the needs of everyday New Yorkers and workers first.
Government decisions are frequently consequential, often with life-altering outcomes, and automated decision-making systems already play a role in them, from SNAP benefits to taxes. When these tools make consequential decisions, they deserve heightened scrutiny, especially given that many have documented shortcomings, including bias, privacy concerns and cybersecurity risks.
Despite its name, the LOADinG Act (S7543B/A9430B), which is awaiting Gov. Kathy Hochul’s signature, does not require legislative oversight of automated decision-making tools. Rather, it addresses these risks by requiring human oversight of high-risk systems, subjecting those systems to comprehensive pre-deployment and biennial impact assessments, and protecting the critical role of trained workers in government decision-making.
As New York and other state and local governments experiment with AI and other technology tools, enacting the LOADinG Act will ensure that we do so safely, responsibly and in a way that respects the invaluable contributions of expert public sector workers.
We’ve seen what happens when new tools are deployed without guardrails. For example, 25 states rely on Deloitte systems that incorporate AI to manage Medicaid and other public benefits programs. States have found numerous errors in these systems, resulting in faulty denials or cancellations of benefits and in fixes that cost millions of dollars and take years to complete.
Replacing human experts can produce latent failures that go unnoticed for years after adoption. Consider the COMPAS recidivism algorithm, in use since 2009 to predict a criminal defendant’s risk of committing a future offense: In 2016, ProPublica found COMPAS to be racially biased, and in 2018 further research found it to be no more accurate than the average untrained layperson.
Government decisions can impact civil rights, constitutional protections or the provision of basic services like health care, housing and nutrition. Given the well-documented risks in automated decision-making tools, these tools must be subject to oversight. By signing the LOADinG Act, the governor will make New York a global leader in the fight for responsible AI adoption.