Błażej Kuźniacki (ACTL, University of Amsterdam; PwC Poland)

Kamil Tyliński (Mishcon de Reya LLP; UCL School of Management)

Every new method or procedure involves not only potential benefits but also risks of unwanted and sometimes serious consequences. It is therefore crucial to identify the risks prior to implementing new methods or procedures, so that the dangers stemming from such implementation are reduced. With AI being a powerful tool capable of completely changing the legal industry and the way tax law is practised, this contribution presents an overview of the most serious legal risks of integrating AI methods into tax law. The purpose of this piece is not only to succinctly discuss and analyse the biggest legal risks stemming from the integration of AI into tax law, but also to suggest means to mitigate such risks and to open the discussion around this topic.

A general outlook on the legal risks related to the integration of AI models into law

The major sources of legal risks stemming from the integration of AI into tax law are the General Data Protection Regulation (GDPR)[1] – the right to explanation (Articles 12, 14 and 15) and the right to human intervention (Article 22) – and the European Convention on Human Rights (ECHR) – the right to a fair trial (Article 6) and the prohibition of discrimination in conjunction with the protection of property (Article 14 in conjunction with Article 1 of the additional Protocol No. 1 to the ECHR).[2] Although the basis for the above rights may vary among jurisdictions, such rights generally create an internationally recognised paradigm of legal constraints for the application of AI in the field of (tax) law.

The right to explanation

The right to explanation concerns the transparency of the AI model. If an automated decision of the tax authorities is made with an AI model, they must be able to explain to the taxpayers how this decision was reached, by providing information comprehensive enough for the taxpayers to act on it to contest the decision, correct inaccuracies, or request erasure.[3] The use of advanced and complicated AI models such as neural networks is not yet transparent enough for them to be used as the only tool making tax decisions without human oversight. Consequently, instead of neural networks, it is recommended in such circumstances to use AI models which can be explained easily, such as decision trees, versions of ensemble learning such as Boosting or Stacking, and Beam-search-like algorithms, so that it is possible to provide meaningful information about the logic involved in the AI model. An explanation of the tax results should be presented in a way that can be understood by humans – the factors that led the AI model to the decision to apply or not to apply tax law. This allows humans to challenge such decisions even before they are officially issued. Approaches that control system behaviour support this by switching the emphasis from the computation rules to the control of the decision rules (Accountable Algorithms).
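
By way of illustration, the following sketch (in Python, using scikit-learn) shows how a shallow decision tree exposes its learned rules in human-readable form. The feature names and the toy data are invented for this example and do not reflect any actual tax administration's model; it is a minimal sketch of the "explainable by construction" approach, not a definitive implementation.

```python
# A minimal sketch of an explainable-by-construction model: a shallow decision
# tree whose learned rules can be printed in plain language. Feature names and
# data below are hypothetical, for illustration only.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical features per arrangement: [commercial_substance_score,
# tax_saving_ratio, holding_period_years]; label 1 = tax advantage denied.
X = [
    [0.9, 0.1, 5.0],
    [0.2, 0.8, 0.5],
    [0.7, 0.3, 3.0],
    [0.1, 0.9, 0.2],
]
y = [0, 1, 0, 1]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text renders the decision rules as nested if/else conditions, which
# is the kind of "meaningful information about the logic involved" that the
# right to explanation calls for.
print(export_text(tree, feature_names=[
    "commercial_substance_score", "tax_saving_ratio", "holding_period_years",
]))
```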

The right to human intervention

The right to human intervention (Article 22 of the GDPR) is a legal tool to contest decisions that rely on data processed through automatic means such as an AI model (Enslaving the Algorithm). The GDPR formulates this right by giving a data subject (e.g. a taxpayer) the right to contest a decision if it affects rights or interests legitimately held by them (e.g. a tax advantage) and if it is based solely on the automated processing of personal data regarding the data subject. As aptly pointed out by scholars, the phrase “based solely on…” in Article 22(1) of the GDPR should be understood broadly; otherwise the right to human intervention will not be effective enough to prevent AI models from leading to unintentional harm. Taxpayers must therefore have the right to challenge an automated decision of an AI model under tax law and have it reviewed by a human (typically a tax advisor). To exercise this right effectively, however, the taxpayer must first know how the AI model reached the decision. This reveals an interplay between the right to explanation and the right to human intervention – the latter cannot effectively be used without the former.

The right to a fair trial

The right to a fair trial under Article 6 of the ECHR applies throughout the entire tax procedure as an expression of a fundamental right of the taxpayer (cf. the judgments of the European Court of Human Rights (ECtHR) in the Ravon and Imbrioscia cases).[4] In respect of the integration of AI into tax law, the following two elements of the right to a fair trial are most relevant: (i) the minimum guarantees of equality of arms and (ii) the right of defence. They mean that the taxpayers must be allowed to effectively review the information on which the tax authorities base their decisions (cf. the ECtHR's judgments in the Matyjek and Moiseyev cases). For example, the taxpayer should be entitled to know the legal factors relevant to deciding on an application of the tax law and the logic behind the AI model which prompts the authorities to reach a given tax decision, in order to be able to fully understand how the tax decision was reached. Otherwise, the taxpayers’ ability to counter-argue and deliver counter-evidence to the tax authorities will be frustrated (cf. the ECtHR’s judgment in the Mattoccia case). Likewise, the fair balance between the tax authorities and the taxpayers before a court will be tilted in favour of the former. So here again, similarly to the legal constraints following from the GDPR, the major risk for the integration of AI into tax law is the lack of sufficient explainability.

The prohibition of discrimination in conjunction with protection of property

Finally, the prohibition of discrimination in conjunction with the protection of property under the ECHR requires the tax authorities to deliver tax decisions in a non-discriminatory fashion, i.e. not to treat taxpayers differently without objective and reasonable justification. A difference in treatment of taxpayers is permissible insofar as it appears both suited to realising the legitimate aim pursued and necessary (cf. the ECtHR’s judgment in the Schmidt case). In respect of tax law, it would be prohibited to treat taxpayers differently based solely on an attribute or set of attributes that are irrelevant to the application of concrete tax provisions, such as race or gender (cf. Economic Models of (Algorithmic) Discrimination).

Such prohibited discriminatory tax treatment may, however, follow from undesired results of AI models, since they have a tendency to suffer from bias as a consequence of the imbalanced data used to train them. In the light of the many subjective terms under tax law and a plethora of ambiguous borderline (taxable vs non-taxable) situations, there is a risk of the model producing incorrect classifications, potentially because the available training data (tax matters) were not randomised in a way that allows the model to generalise, resulting in a lack of robustness. With this in mind, another challenge emerges – gathering substantial data that does not embed any bias and is capable of yielding a robust and reliable model.
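
As a minimal sketch of the kind of pre-deployment checks this implies, the following Python snippet (using pandas, with invented column names and toy data) inspects the label balance of a training set and the denial rate per group. Neither check is exhaustive, but both would flag the imbalance and disparity problems described above.

```python
# A minimal sketch of two pre-deployment data checks. All column names and
# values are hypothetical, for illustration only.
import pandas as pd

train = pd.DataFrame({
    "jurisdiction":     ["A", "A", "B", "B", "B", "A"],
    "advantage_denied": [1, 1, 0, 0, 0, 1],
})

# Check 1: label balance. A heavily skewed label distribution suggests the
# model may fail to generalise to the minority class of borderline cases.
print(train["advantage_denied"].value_counts(normalize=True))

# Check 2: denial rate per group. A large gap without objective and
# reasonable justification is a red flag under the prohibition of
# discrimination.
print(train.groupby("jurisdiction")["advantage_denied"].mean())
```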

Deliberate discriminatory tax outcomes under the AI model

Furthermore, taking into account the considerable discretionary power of tax authorities under many tax provisions, in particular GAAR-type provisions (e.g. the general anti-avoidance rule based on Article 6 of the Anti-Tax Avoidance Directive), and the various fiscal agendas among jurisdictions, it is not inconceivable that a government may use data-driven approaches to deliberately ensure discriminatory tax outcomes under an AI model applied to tax law. In other words, a government may consciously push for AI models aimed against taxpayers from some jurisdictions, treating them worse (more denials of a tax advantage) than others (fewer or no denials of a tax advantage), irrespective of the scrutiny of the relevant criteria under the GAAR (or other similar tax provisions) for denying the tax advantages. For example, a smaller jurisdiction may wish to attract foreign direct investment from certain larger economies. To this end, its government may ensure that AI models favour taxpayers from larger countries, so that a denial of a tax advantage does not take place under the GAAR in respect of the privileged taxpayers, irrespective of the relevant criteria for denying the benefits being met under the GAAR.[5] Other taxpayers, by contrast, may be treated in accordance with the relevant criteria under the GAAR or even more harshly.

Keep humans in and over the AI loop!

A wise approach when modelling AI for tax law is to keep human subject-matter experts in the loop and not to rely on the models exclusively, in order to mitigate the major legal risks. Indeed, there may be countless situations in which the models are prone to make incorrect judgements, especially in the constantly evolving tax legal environment. The main objective is that the methods used remain explainable, so that the results can be readily supported by qualitative measures as well as by the facts considered when making a decision. Another important aspect is transparency, without which any legal system malfunctions. Taxpayers and tax authorities need to be provided with sufficient information with regard to the matters considered and have to be aware of any automation and methods that can impact their situation (AI Now).
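
A minimal sketch of such a human-in-the-loop gate is set out below: only high-confidence model outputs are processed automatically, while everything else is escalated to a human expert. The confidence threshold and the review function are hypothetical design choices, not a prescribed standard.

```python
# A minimal sketch of a human-in-the-loop gate. The threshold value and the
# human_review callback are hypothetical design choices for illustration.
from typing import Callable

REVIEW_THRESHOLD = 0.9  # assumption: below this confidence, a human decides

def decide(confidence: float, model_decision: str,
           human_review: Callable[[], str]) -> str:
    """Return the model's decision only when it is confident enough;
    otherwise defer to a human reviewer and record that this happened."""
    if confidence >= REVIEW_THRESHOLD:
        return f"{model_decision} (automated, confidence={confidence:.2f})"
    return f"{human_review()} (human-reviewed, confidence={confidence:.2f})"

# Toy usage: the borderline case falls below the threshold and is escalated.
print(decide(0.95, "grant tax advantage", lambda: "deny tax advantage"))
print(decide(0.60, "grant tax advantage", lambda: "deny tax advantage"))
```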

Omitting any of the above recommendations may lead to serious implications for administrative bodies, as tax decisions reached via non-explainable/non-transparent AI models will infringe the right to explanation and the right to human intervention under the GDPR, as well as the right to a fair trial under the ECHR. Such tax decisions are also bound, in the most optimistic scenario, to be rejected by the courts, very often accompanied by substantial damages, as shown by recent orders in similar cases all over the world (Litigating Algorithms).

Nevertheless, the story does not end here, since the mitigation of another major legal risk of integrating AI into tax law, namely the one stemming from the prohibition of discrimination in conjunction with the protection of property under the ECHR, seems to be the most challenging. It would be very difficult to find data that is completely unbiased towards certain taxpayers and their arrangements so as to avoid undesired discriminatory outcomes. Even though this could be overcome by human experts in the loop, the governments of some jurisdictions may wittingly aim to design their AI models to discriminate against one group of taxpayers and arrangements in favour of others if this best serves their fiscal and commercial agendas. This follows from the lack of agreement among jurisdictions in respect of the desired effects of tax rules represented by algorithms, e.g. whether a GAAR should or should not be applied exclusively to prevent abuse of tax law and be predictable enough to allow taxpayers to assess the risk of their tax optimisation schemes in advance (the calibration and re-calibration of AI algorithms).

* This piece largely follows from a sub-chapter of the contribution by B. Kuźniacki & K. Tyliński, Identifying the Potential and Risks of Integration of AI to Taxation: The Case of General Anti-Avoidance Rule, in G. Aviv, P. D’Agostino and C. Piovesan (eds.), Leading Legal Disruption: Artificial Intelligence and a Toolkit for Lawyers and the Law, Thomson Reuters Canada (forthcoming 2021). The insights contributed to this publication by Błażej Kuźniacki are to be attributed to the ACTL research project ‘Designing the tax system for a Cashless, Platform-based and Technology-driven Society’ (CPT); for more information see www.actl.uva.nl under the CPT project.

[1] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (Text with EEA relevance), Official Journal L 119, 4.5.2016, pp. 1–88. The California Consumer Privacy Act (CCPA) can be considered a US (California) equivalent of the GDPR – a state statute intended to enhance privacy rights and consumer protection for residents of California, United States, which was signed into law on June 28, 2018 and amends Part 4 of Division 3 of the California Civil Code (Assembly Bill No. 375).

[2] The full text of the ECHR and the additional protocols is available at: https://www.echr.coe.int/Documents/Convention_ENG.pdf.

[3] See A. Bal, Ruled by Algorithms: The Use of ‘Black Box’ Models in Tax Law, Tax Notes International, September 16, 2019, pp. 1162-1163. Examples of the information that should be given to individuals are provided in: Article 29 Data Protection Working Party, Guidelines on Automated Individual Decision-Making and Profiling for the Purposes of Regulation 2016/679, February 6, 2018.

[4] See more in G. Maisto, The Impact of the European Convention on Human Rights on Tax Procedures and Sanctions with Special Reference to Tax Treaties and the EU Arbitration Convention, in G. Kofler et al. (eds.), Human Rights and Taxation in Europe and the World, IBFD, 2010, p. 376.

[5] Cf. B. Kuźniacki, Introduction of the Principal Purpose Test and Discretionary Benefits Provisions into Singapore’s Tax Treaties: Not as Black as It Is Painted, 24 Asia-Pacific Tax Bulletin, 2018, No. 2, sec. 4.
