
Exploring the Promises of Transformer-Based LMs for the Representation of Normative Claims in the Legal Domain


In this article, we explore the potential of transformer-based language models (LMs) to correctly represent normative statements in the legal domain, taking tax law as our use case. In our experiment, we use a variety of LMs as bases for both word- and sentence-based clusterers, which are then evaluated on a small, expert-compiled test set consisting of real-world samples from tax law research literature that can be clearly assigned to one of four normative theories. The results of the experiment show that clusterers based on Sentence-BERT embeddings deliver the most promising results. Building on this main experiment, we make a first attempt at using the best-performing models in a bootstrapping loop to build classifiers that map normative claims to one of these four normative theories.
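The abstract describes clustering sentence embeddings of normative statements; the minimal Python sketch below illustrates that general pipeline. The model name, the example sentences, and the use of k-means are illustrative assumptions and not the authors' exact setup.

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

# Hypothetical stand-ins for normative claims drawn from tax law literature.
sentences = [
    "Taxes should be levied according to the taxpayer's ability to pay.",
    "Tax burdens ought to correspond to the benefits received from the state.",
    "Taxation is justified as an instrument of redistributive justice.",
    "The sole purpose of taxation is the efficient financing of public goods.",
]

# Encode the sentences with a pre-trained Sentence-BERT model
# (model choice is an assumption, not taken from the paper).
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(sentences)

# Cluster the embeddings; k=4 mirrors the four normative theories.
# In practice one would cluster many more sentences than shown here.
kmeans = KMeans(n_clusters=4, random_state=0, n_init=10)
labels = kmeans.fit_predict(embeddings)

for sentence, label in zip(sentences, labels):
    print(label, sentence)
```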

Reto Gubelmann, Siegfried Handschuh, Peter Hongler

26 Aug 2021

Publication type
Scientific article
Language
English