Yesterday (4th March 2021), we ran the first of our March free web workshops, on the topic of AI and the Law: Current Issues In Legal Technology. It was an interesting session, and we may run it again in the next couple of weeks. Any information on this will be on our Web Events page, here: https://www.cambridgelegalenglish.com/academics-1
One of the issues we discussed, among many, was a recent article in the Wall Street Journal, entitled ‘German Prosecutors Are Building AI In-House’. The article looks at the way some prosecuting authorities in Europe are constructing their own image-recognition systems. If you follow our posts, you will know that we are very interested in this topic. We recently (26th January 2021) posted about the position that the human rights organisation Amnesty International was taking on facial recognition technology worldwide, for example. You can find that post here: https://www.cambridgelegalenglish.com/post/commercial-awareness-amnesty-international-seeks-ban-on-facial-recognition-systems-b2
One of the reasons European prosecutors are building AI 'in-house' is so that they can explain to a court exactly how the system works. As we have said before, AI – which is developing fast – brings many problems, in particular, legal and evidential ones. With image (or facial) recognition technology, for example, issues of privacy arise, which the European bloc and the legal systems within it take very seriously.
By building their own AI image-recognition technology, European prosecuting authorities hope to avoid potential legal problems, such as being unable to explain to a court exactly how the system and its algorithms operate. These issues often arise because of the intellectual property involved and the unwillingness of commercial suppliers to disclose their source code, as well as data protection and privacy concerns.
In another interesting article in the New York Times, entitled ‘How One State Managed to Actually Write Rules on Facial Recognition’, it seems that Massachusetts is one of the first US states to put guardrails around the use of facial recognition technology in criminal investigations. The article talks about how difficult it is to strike the right balance between using the technology to detect and investigate criminal offences, and addressing very real concerns about both privacy and accuracy.
These articles show, once again, that whilst the technology may be advancing very fast, the law is sometimes slow to keep up and, where necessary, to regulate the technology and address the concerns it raises.
© Cambridge Legal English Academy 2021