
Nature | The tragic scandal at the UK Post Office underscores the need for changes in AI-related laws

Author: YiyiKing

Computers make mistakes and artificial intelligence makes things worse – and the law must recognize that, according to Nature. The tragic scandal at the UK Post Office highlights the need for legal change, especially as organisations adopt AI to enhance decision-making.


Nick Wallis, The Great Post Office Scandal

UK Post Office Scandal


Fujitsu, the Japanese tech giant that developed the flawed accounting software, is conspicuous by its absence from much of the coverage: has it played only a secondary role?

More than 20 years ago, Japanese technology conglomerate Fujitsu built an accounting software system for the British Post Office. "Horizon" was rolled out to branches across the country — often small, independently run shops that collect and deliver letters and parcels for tens of thousands of people. At the time, it was the largest civilian IT project in Europe. Horizon was plagued by errors, a phase familiar to any user of a new IT system.


UK Post Office Scandal: The Law Says Computers Are Reliable

These aren't irritating little bugs, though. In some cases, the Horizon IT system told employees that they had not deposited enough money at the end of the day's trading, resulting in a "loss" of thousands of pounds per night.


Between 1999 and 2015, hundreds of people were fired or prosecuted over funds that went missing from branch accounts; 59 were falsely accused of stealing money.

Many post office workers were accused of theft or false accounting. They were told to either pay the difference or face prosecution. Between 2000 and 2014, more than 700 people were prosecuted, and dozens were imprisoned. Families, livelihoods and relationships were destroyed. Some of those affected took their own lives.

Many details have yet to surface. One of the most shocking revelations came last week, when Fujitsu stated that it knew the system was flawed when it delivered Horizon to the Post Office in 1999.

However, one aspect of the scandal has attracted relatively little attention: the law in England and Wales presumes that computer systems operate correctly, making it difficult to challenge their outputs.

Rethinking Artificial Intelligence


Will AI lead to a reproducibility crisis in science?

Where such laws exist, national and regional governments around the world need to review them, because of their implications for next-generation IT systems — those that use artificial intelligence (AI). Companies are augmenting IT systems with AI to enhance decision-making. That this is happening under a legal system that presumes computer evidence is reliable is hard to justify. Until such laws are reviewed, more innocent people will risk being denied justice when AI-enhanced IT systems turn out to be in error.

At the heart of the potential injustice of a law that presumes computers operate substantially correctly is this: anyone who wants to challenge computer evidence bears the burden of providing proof of improper use or operation. That proof might come, for example, from software error logs or keystroke records. But obtaining such information is difficult.

In most Horizon cases, the defendants had no way of knowing which documents or records would show that an error had occurred, and so could not ask the Post Office to disclose that information when they went to court. This imbalance of knowledge meant that individuals had little hope of defending themselves against the allegations.

Finding a way forward


Paul Marshall

Some lawyers and researchers involved in defending people prosecuted by the Post Office advocate a different approach. In a 2021 article (see Ref. 2), Paul Marshall, a barrister at Cornerstone Barristers in London, and his colleagues argue that the presumption of reliability of computer evidence should be replaced by a requirement to disclose relevant data and code in legal cases. Where necessary, such disclosure should include the information-security standards and protocols followed, system audit reports, evidence that error reports and system changes were reliably managed, and records of the steps taken to ensure evidence was not tampered with.

When a group of claimants who had been wrongly charged sued the Post Office, they requested and obtained disclosure of the relevant Horizon documents. The group turned to IT experts for help and ultimately won their case in 2019. By contrast, in individual cases where defendants sought expert help, the Post Office settled out of court on condition that the defendant sign a non-disclosure agreement, meaning the computer evidence remained hidden.

The workings of IT systems can and must be explained in legal cases. There are various ways to establish transparency without revealing trade secrets, a concern for some organizations and businesses.

Sandra Wachter, an artificial-intelligence researcher at the University of Oxford, UK, says that existing tools can explain how automated systems reach decisions without revealing everything about how the algorithms work.

Back to the 1980s


The law does not assume that early computer systems were error-free.

In the 1980s, the early days of personal computing, the law did not presume that computers operated correctly. Instead, reliability had to be established before computer-generated information could be admitted as evidence in court. In 1999, the law was changed in recognition of computers' improved reliability, but the pendulum swung too far in the other direction.

Complex IT systems using AI are increasingly making decisions that affect lives and livelihoods, from banking and finance to medical diagnostics, from criminal justice to self-driving cars. As AI technology becomes mainstream, so will legal cases involving these systems.


Sculpture of Alan Turing at Bletchley Park, England. It has been more than 70 years since Turing proposed the "imitation game" to test whether machines can think.

Computer evidence cannot be presumed reliable, and laws that make that presumption must be reviewed to prevent similar miscarriages of justice from happening again.

Bibliography

  1. Editorial. Computers make mistakes and AI will make things worse — the law must recognize that. Nature 625, 631 (2024). doi:10.1038/d41586-024-00168-8
  2. Marshall, P. et al. Recommendations for the probity of computer evidence. Digit. Evid. Electron. Signat. Law Rev. 18, 18–26 (2021). doi:10.14296/deeslr.v18i0.5240
  3. Ball, P. Is AI leading to a reproducibility crisis in science? Nature 624, 22–25 (2023). doi:10.1038/d41586-023-03817-6
  4. Editorial. ChatGPT is a black box: how AI research can break it open. Nature 619, 671–672 (2023). doi:10.1038/d41586-023-02366-2

Postscript

If you have any ideas, please leave a message to @YiyiKing
