Datacros III

The use of AI technology in fighting organized and financial crime

by Massimiliano Carpino, Legal, Ethics & Compliance Advisor at Transcrime

We live in an era in which technological development is more rapid and invasive than perhaps even its developers had imagined. Unlike previous technological revolutions, AI brings with it unknown and unexplored territory that arouses both awe and fascination. The capacity to process amounts of data far beyond human reach, combined with generative capabilities (i.e. machine learning algorithms that produce new content – text, audio, images, video – of a kind that previously relied on human creativity), gives us potentially far more refined tools to prevent and fight organized crime, both for LEAs and AML obliged entities and for the corporate world at large. At the same time, it creates a series of potential critical issues, not only for the protection of the fundamental rights of individuals but also for the reliability of the results produced, i.e. whether the machine’s outputs can be understood and challenged.

The substance of the legal challenge is not new, but striking the right balance between opposing rights of equal standing, according to the canons of legality, strict necessity and proportionality, is now much more difficult. The real challenge for producers and users alike, and sooner or later for the courts as well, will be to verify and understand ex post that the ex ante balancing exercise was carried out according to the canons that the case law of the Strasbourg and Luxembourg courts has consolidated over the past decades.

The EU legislator took this step immediately with Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024, which states already in Article 1(1) that “the purpose of this Regulation is to improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy, the rule of law and environmental protection, against the harmful effects of AI systems[1] in the Union and supporting innovation.”

It is therefore clear, as was also the case with the GDPR, that promoting AI and removing obstacles to its development must go hand in hand with a risk-based and human-centric approach. This acknowledges, more or less explicitly, that AI is inherently risk-bearing, especially when the autonomy and adaptiveness of the machine are not supervised (and therefore understood) by humans.

In very general terms, and with respect to the field of crime prevention and law enforcement, every practitioner should raise and resolve at least two main concerns:

    1. Violation of privacy and data protection rules for natural persons

    2. Effective human supremacy (i.e., control) over machine processing (and possibly decisions)

With regard to the first point, it is advisable to identify the sources of the data fed to the AI system and in particular (i) their provenance; (ii) their genuineness (e.g. data that have not been manipulated and/or are not the product of previous automated enrichment); (iii) their relevance and adequacy to the purpose of the processing, which also covers the quantity of the data, in line with the principle of minimization.

The sum of these elements represents the first fundamental factor for a human-centric and risk-based AI: the quality of the data.
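By way of illustration only, the following sketch shows how such intake checks might be expressed in code; every name, field and threshold here is an assumption made for the example, not a reference implementation.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical intake record; all field names are illustrative.
@dataclass
class SourceRecord:
    payload: dict              # the data itself
    provenance: Optional[str]  # documented origin of the data
    enriched: bool             # True if produced by prior automated enrichment

# Fields deemed strictly necessary for the stated purpose (assumption:
# in practice this list would come from a documented legal assessment).
ALLOWED_FIELDS = {"entity_name", "jurisdiction", "ownership_share"}

def admit(record: SourceRecord) -> Optional[dict]:
    """Admit a record only if (i) its provenance is traceable, (ii) it is
    genuine, i.e. not the product of prior automated enrichment, and
    (iii) it is trimmed to purpose-bound fields (minimization)."""
    if not record.provenance:   # (i) provenance
        return None
    if record.enriched:         # (ii) genuineness
        return None
    # (iii) relevance and minimization
    return {k: v for k, v in record.payload.items() if k in ALLOWED_FIELDS}
```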

However, even data quality, which is in any case complex to achieve and guarantee, is not by itself sufficient to reduce the risk of over-profiling, even indirectly, the habits and behavior of individuals, producing an intrusion into private life that is often irrelevant and/or disproportionate to the purpose of fighting crime.

AI systems should therefore be instructed by humans to recognize all those data correlations and analytical or statistical enrichments that are not relevant and strictly necessary to achieve the purpose of the processing, and to automatically segregate or eliminate them, so as to ensure constitutional and regulatory hygiene.
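A minimal sketch of such a purpose-bound filter, with invented column names and assuming the whitelist of necessary features has been fixed in advance by a human legal review, might look like this:

```python
import pandas as pd

# Assumption: a human review has whitelisted the features that are
# strictly necessary for the stated purpose of the processing.
NECESSARY = {"transaction_amount", "counterparty_risk", "cash_intensity"}

# Assumption: known proxies for private life, segregated regardless.
SENSITIVE_PROXIES = {"home_address", "night_activity_pattern"}

def purpose_bound(df: pd.DataFrame) -> pd.DataFrame:
    """Keep only whitelisted, non-sensitive columns so that correlations
    irrelevant to the purpose never reach the model."""
    keep = [c for c in df.columns if c in NECESSARY - SENSITIVE_PROXIES]
    return df[keep]
```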

The last and perhaps most important issue, one involving all operators, will be moral and intellectual independence from machine processing, especially when that processing is the antechamber of a decision that may have legal effects on individuals.

This point of attention involves both so-called white-box systems and, even more so, so-called black-box systems.

With respect to the latter, the logic by which the algorithm turns a given input into a given output is neither intelligible nor explainable (if only for reasons related to third parties’ intellectual property rights). This evidently places the operator in a position of objective subservience to the machine’s processing, and creates the danger that its result, presumed correct by default, is used without any effective critical control, with the consequence that any prejudice or discrimination it generates can no longer be cured and spreads like a snowball.

But, albeit to a lesser extent, even so-called white-box systems can produce this distortion of people’s fundamental rights if the objective comprehensibility and explainability of the algorithm is not matched by the subjective comprehension of the operators who, lacking sufficient technical knowledge, would find themselves in the same position of subservience as in the black-box example.
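To make the distinction concrete, here is a purely illustrative scikit-learn sketch (the toy data and feature names are invented): a shallow decision tree is a white-box model whose full decision logic can be printed as rules that an operator can read, audit and contest, whereas a deep ensemble or neural network offers no such direct reading.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Invented toy data: an anomaly score, a count of alerts, a binary label.
X = [[0.1, 3], [0.9, 1], [0.4, 2], [0.8, 5], [0.2, 1], [0.7, 4]]
y = [0, 1, 0, 1, 0, 1]

# White-box model: its complete decision logic is inspectable.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Prints human-readable if/else rules over the named features,
# which a non-technical operator can verify against the case file.
print(export_text(tree, feature_names=["score", "alerts"]))
```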

Worth a brief mention in this regard is the judgment of the European Court of Justice (First Chamber) of 7 December 2023 in Case C-634/21 (SCHUFA Holding AG; ECLI:EU:C:2023:957), which, faced with an action concerning credit risk scoring brought by a consumer (but it could just as well have been criminal risk scoring), identifies the machine as the de facto decision-maker as opposed to the formal decision-maker (the credit institution).

In particular, what the Luxembourg court focuses on is whether, in the case at hand, the right not to be subject to a decision based solely on automated processing (Article 22(1) GDPR) was respected.

In concrete terms, the way in which the decision-making process is structured is of essential importance. In the present case, that process normally comprises several steps, such as profiling, the calculation of the score, and the actual decision on the granting of credit. The essential question is whether the process is designed in such a way that the scoring agency’s calculation of the score predetermines the financial institution’s decision to grant or refuse credit.

If the scoring is carried out without any human intervention that could, where appropriate, verify its result and the correctness of the decision to be taken vis-à-vis the person applying for credit, it is logical to consider that the scoring itself constitutes the decision referred to in Article 22(1) of the GDPR[2].
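A minimal sketch of the opposite design, in which the score is only an input to a human decision rather than the decision itself (the threshold and names are assumptions made for illustration), could be:

```python
from typing import Callable

# Assumption: scores above this threshold indicate an adverse outcome.
ADVERSE_THRESHOLD = 0.7

def decide(score: float, human_review: Callable[[float], bool]) -> str:
    """The automated score never predetermines an adverse decision:
    any adverse indication is escalated to a human reviewer, who decides
    on the full file and may override the machine (cf. the Article 22(3)
    GDPR safeguards: human intervention, being heard, contesting)."""
    if score <= ADVERSE_THRESHOLD:
        return "granted"
    # Meaningful human intervention, not a rubber stamp:
    return "denied" if human_review(score) else "granted"
```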

The decisive factor is the impact that the decision has on the person concerned. Given that a negative score alone can significantly restrict the data subject in the exercise of his or her freedoms, or stigmatize him or her in society, it seems justified to qualify it as a decision wherever a financial institution attaches fundamental importance to it in the decision-making process.

According to the referring court, ‘what ultimately determines whether and how the third-party data controller (i.e. the credit institution) will enter into a contract with the data subject is the scoring score calculated by the credit rating agency on the basis of automated processing’. The referring court also explains that, although the third party is not obliged to make its decision on the basis of the scoring score, the fact remains that it ‘usually does so extensively’.

This is precisely the effect of subservience to the machine that must be avoided, especially when the machine’s processing, if not properly controlled and verified, can adversely affect the exercise of an individual’s fundamental rights and freedoms.

The same concerns are shared by Europol[3], which in a recent publication devoted ample space to ethical and social issues in AI for law enforcement (e.g. data bias and fairness; privacy and surveillance; accountability and transparency; the black-box issue; human rights and discrimination) as well as to the fundamental issue of balancing benefits and restrictions (e.g. addressing concerns of bias and discrimination; safeguarding privacy and data protection). Likewise, within the framework of the DATACROS III project, experts from different disciplines will come together to review the ever-evolving AI environment, producing actionable insights in the form of a white paper and discussing the ethical implementation of AI in the investigation of financial and organized crime.

It is necessary to adopt a flexible mental posture, one that adapts to incessant technological evolution with the curiosity and humility of a neophyte combined with the rigor and diligence required, so that the machine, exploiting human laziness and the illusion of unattainable precision, does not take over the decision-making of those in charge of corporate or public crime prevention and enforcement, but rather enhances their capacity to make the right choice with greater depth of knowledge, and faster, than was previously possible.



[1] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonized rules on artificial intelligence; Article 3(1), definition of ‘AI system’: ‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

[2] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data; Article 22 – Automated individual decision-making, including profiling: 1. The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her. 2. Paragraph 1 shall not apply if the decision: (a) is necessary for entering into, or performance of, a contract between the data subject and a data controller; (b) is authorized by Union or Member State law to which the controller is subject and which also lays down suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests; or (c) is based on the data subject’s explicit consent. 3. In the cases referred to in points (a) and (c) of paragraph 2, the data controller shall implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision. 4. Decisions referred to in paragraph 2 shall not be based on special categories of personal data referred to in Article 9(1), unless point (a) or (g) of Article 9(2) applies and suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests are in place.

[3] EUROPOL, AI and Policing: The Benefits and Challenges of Artificial Intelligence for Law Enforcement, An Observatory Report from the Europol Innovation Lab, https://www.europol.europa.eu/publication-events/main-reports/ai-and-policing