Buffalo Law Review

Machine learning and other artificial intelligence (AI) systems are changing our world in profound, exponentially rapid, and likely irreversible ways. Although AI may be harnessed for great good, it is capable of doing, and is doing, great harm at scale to people, communities, societies, and democratic institutions. The dearth of AI governance leaves AI’s potentially existential risks unchecked. Whether sounding an urgent alarm or merely jumping on the bandwagon, law scholars, law students, and lawyers at the bar are contributing volumes of AI policy and legislative proposals, commentaries, doctrinal theories, and calls to corporate and international organizations for ethical AI leadership. Unfortunately, erroneous, incomplete, and overly simplistic treatments of AI technology undermine the utility of a significant portion of that literature. Moreover, many of those treatments are piecemeal, and the resulting gaps produce barriers to a proper legal understanding of AI.

Profound concerns exist about AI and the actual and potential crises of societal, democratic, and individual harm that it causes or may cause in the future. On the whole, the legal community is not currently equal to the task of addressing those concerns: despite ethical mandates for diligence and competence, it lacks sufficient AI knowledge and technological competence. As a result, law and policy debates and the actions that follow may be fundamentally flawed, or may produce devastating unintended consequences, because they rest on erroneous, uninformed, or misconceived understandings of AI technologies, inputs, and processes. Like the blind men encountering the elephant in the ancient Jain parable, the wise ones may each conceive of only a fraction of the AI creature, and some more or less blindly.

Now more than ever, lawyers need to be able to see around critically important corners. The general lack of understanding about AI technology robs the legal profession of that foresight. This state of affairs also raises significant ethical concerns. Worse, it undermines lawyers’ power, authority, and legitimacy to bring forward truly valid, meaningful ideas and solutions to prevent AI from becoming humanity’s apex predator.

This Article responds with several descriptive and theoretical contributions. As to its descriptive contributions, it aims to correct and augment the record about AI, particularly machine learning and its underlying technologies and processes. It endeavors to present a concise, accessible, foundational, yet sufficiently comprehensive single-source explanation. The Article draws extensively from the scientific and technical literatures and undertakes an important interdisciplinary translational process by which to map the AI technical lexicon to legal terms of art and constructions in patent and other cases. Because their understanding is foundational, the Article drills down on three principal AI inputs: data, including data curation; statistical models; and algorithms. It then engages in illustrative issue-spotting within these AI factual frames, sketching out some of the many legal implications associated with those vital understandings.

Toward its theoretical contributions, the Article presents two conceptual sortings of AI and introduces a systems- and process-engineering-inspired taxonomy of AI. First, it categorizes AI by the degree of human involvement in, and, conversely, the degree of AI autonomy in, AI-mediated decision-making. Second, it conceptualizes AI as being static or dynamic. Those distinctions are vital to AI’s potential for harm, to meaningful accountability, and, ultimately, to the proper prioritization of AI governance efforts. Third, the Article briefly introduces a taxonomy that conceptualizes AI as a human-machine enterprise made up of a series of processes. By perceiving “the whole of the AI elephant,” the role of human decision-making and its limits may be understood, and the human-machine enterprise that is AI and its constituent processes may be deconstructed, comprehended, and framed for subsequent scholarship, doctrinal and procedural analyses, and law and policy developments. With these contributions, the Article hopes to help inform and empower lawyers to improve the security, justness, and well-being of people in an increasingly algorithmic world.