This month, two US senators and a House representative introduced a new bill known as the Algorithmic Accountability Act. If it passes and becomes law, it could mark the start of a significant paradigm shift and pave the way for tech companies finally being held accountable for breaches and violations of established standards.
Why was it introduced?
The timing of this bill couldn’t be more relevant. Stories about AI and algorithmic bias have surfaced regularly over the last couple of years. The biggest are probably Facebook’s discriminatory housing ads, Amazon’s algorithmic recruitment bias against female candidates, and the racism demonstrated by commercial facial recognition tools.
Facebook was charged with violating the Fair Housing Act by the US Department of Housing and Urban Development because its advertising tools allowed advertisers to exclude audiences based on race and sex. As a result, entire social groups never even saw ads that could have been highly relevant to them.
Amazon had to scrap its automated recruitment tool, which rated applicants from 1 to 5 stars based on their suitability for a position. The tool had learned to prefer male candidates for developer jobs, downgrading graduates of women’s colleges and any resume containing the word “women’s”. So, effectively, it was making it even harder for women to break into STEM fields. Amazon’s facial recognition tool Rekognition was also found to contain gender and racial bias, classifying darker-skinned women as men on many occasions. IBM’s and Microsoft’s tools showed the same bias, though both companies have since taken steps to reduce it.
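Both failures trace back to the same mechanism: a model trained on skewed historical data faithfully reproduces that skew. Below is a purely illustrative sketch of the effect, using invented toy data and a simple off-the-shelf classifier, not Amazon’s actual system: a text-based screener trained on biased hiring outcomes ends up assigning negative weight to a gender-coded word.

```python
# Toy illustration of bias absorption -- invented data, not Amazon's system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# "Historical" resumes: past hires (label 1) happen to be male-coded,
# so the word "women's" appears only among past rejections (label 0).
resumes = [
    "software engineer men's rugby club",       # hired
    "developer chess club captain",             # hired
    "software engineer women's chess club",     # rejected
    "developer women's coding society lead",    # rejected
]
labels = [1, 1, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, labels)

# The learned weight for the token "women" comes out negative: the model
# has encoded the historical bias, not anything about candidate quality.
idx = vectorizer.vocabulary_["women"]
print("weight for 'women':", model.coef_[0][idx])
```

Scrubbing the obvious keyword doesn’t fix this, either: the model can latch onto subtler proxies for gender, which is why auditing outcomes, rather than just inputs, matters.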
In fact, earlier this year, Microsoft called for government-level regulation of facial recognition tools to combat potential misuse and prevent this sort of discrimination. The Algorithmic Accountability Act, like the earlier initiative to prohibit companies using facial recognition technology from collecting and sharing data without explicit consent, is the result of such scandals and calls. Other state-level legislative initiatives call for even tighter regulation.
What does the Algorithmic Accountability Act entail?
The well-timed bill’s mission is to “direct the Federal Trade Commission to require entities that use, store, or share personal information to conduct automated decision system impact assessments and data protection impact assessments”. It requires companies with annual revenue over $50 million, those holding data on more than 1 million users, or those operating as personal data brokers to test their algorithms and to fix, in a timely manner, anything that is “inaccurate, unfair, biased or discriminatory”.
If voted in and passed, the law would require such companies to audit any of their processes that involve sensitive data, machine learning systems included. The algorithms falling under the audits would be those that affect a consumer’s legal rights, perform predictive behavior assessments, process large amounts of sensitive personal data, or “systematically monitor a large, publicly accessible physical space”. If an audit turns up risks of discrimination, data privacy breaches, or similar harms, the company would be required to address them in a timely manner.
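The bill leaves the concrete tests to the FTC’s future rulemaking, so any example here is necessarily speculative. One plausible check such an audit could include is a disparate-impact comparison of selection rates across demographic groups; in the sketch below, the group labels, toy data, and the 0.8 threshold (borrowed from the EEOC’s “four-fifths” rule of thumb in employment law) are illustrative assumptions, not requirements in the bill.

```python
# Hedged sketch of a disparate-impact check an algorithmic audit might run.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs -> rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(decisions):
    """Ratio of the lowest group selection rate to the highest."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values()), rates

# Toy audit log: (demographic group, did the model select this person?)
audit_log = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]

ratio, rates = disparate_impact(audit_log)
print(rates)                            # roughly {'A': 0.67, 'B': 0.33}
print("flag for review:", ratio < 0.8)  # four-fifths heuristic -> True
```

A real impact assessment would of course cover far more, from data provenance to privacy and security, but even a simple outcome check like this would have flagged the sorts of disparities described above.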
The regulatory body enforcing the bill would be the Federal Trade Commission (FTC). It would have regulatory powers under the bill and, within two years of it coming into force, would be required to “promulgate regulations”. Those regulations would require the companies covered by the law to conduct impact assessments of their algorithmic decision-making systems. Impact assessments have been a successful policy tool ever since the National Environmental Policy Act came into force in 1970.
In the case of the Algorithmic Accountability Act, the assessment would require a description of the automated decision-making system, including its purpose, design, data, and training. That is the bare minimum: the more thoroughly a company assesses costs, benefits, and risks, the better its chances of compliance. It would be up to the FTC to determine the exact benchmarks for meeting the requirements.
How would it have worked if it were already in force?
For example, if the Act had been in force during the Amazon recruitment tool controversy, the FTC would have been entitled to require Amazon to conduct such an assessment. That would have meant reviewing the tool, if not the entire company, for potential consequences for “accuracy, fairness, bias, discrimination, privacy and security”. Amazon would then have been required to rectify any problems identified immediately. Under the act, it would also have had the option to publish its findings, at its discretion.
The penalties for failing to comply with the Algorithmic Accountability Act, if enacted, would be the same as those outlined in the Federal Trade Commission Act.
How effective would the Act be?
On the one hand, it’s very important, especially today, to ensure that technology works for us rather than against us. Regulating the people in charge of such technology, particularly technology that can grant or deny certain groups access to housing, jobs, or services, could make a considerable difference in how it is applied.
With the Algorithmic Accountability Act in place, alongside frameworks like the GDPR, legislation would finally begin to catch up with the rapid development of technology and AI. Overarching regulation at the federal level would be a large step in that direction, which is particularly important in a country like the USA, arguably the global AI frontrunner.
As home to Silicon Valley, which has traditionally been wary of regulators, the country could use the Algorithmic Accountability Act to shape the global impact of AI. It could also “inspire”, so to speak, European regulators to put forward similar proposals, the same way the GDPR inspired the California Consumer Privacy Act.
On the other hand, however, it’s hard to see how the act would work in practice until it actually comes into force. That is, if it comes into force at all. It was put forward by the Democrats, who are currently the minority in the Senate, so there’s no guarantee the Act will be voted in. Add to that the fact that very few members of Congress actually understand AI and facial recognition, and you get, at best, a murky understanding of the bill among legislators.
In addition, it’s important to keep in mind that AI is, at the end of the day, created by people, and it learns its biases from them and them alone. However much you regulate it, only people can build improved versions and ensure those don’t pick up new bias. For that reason, the Algorithmic Accountability Act on its own would most likely treat the symptom, not the cause. The cause is the prejudice and discrimination of people, not of the AI, and that is a systemic problem no single legislative act can solve. It is solved through education, representation, and affirmative action, and, applied correctly, AI can indeed help with that.
While some companies have internal rules in place for AI technology, the scandals described above show that these are clearly not enough. AI is not going anywhere: it will eventually be part of all our lives and industries, so the need for external regulation is apparent.
At the moment, the Algorithmic Accountability Act might be viewed as a “niche” initiative. But it’s very likely that in two years, the deadline by which the FTC would need to implement its regulations, it will have become much more relevant. We are already seeing the problems faulty and biased algorithms cause; stories like Facebook’s housing discrimination and Amazon’s sexist tools are just a few of many.
While not every lawmaker may see these as large problems, or as problems at all, we’re fortunate to have people like the three legislators who do, and who have proposed the Algorithmic Accountability Act as a solution. However, a purely legislative initiative is never going to be enough: strong engagement with the tech industry and the people involved is necessary for the measures to work effectively.