Legal Personhood - Contracts (Part 1)
This is part 11 of a series I am posting on LW. Here you can find parts 1, 2, 3, 4, 5, 6, 7, 8, 9, & 10. This section provides some background information on the topic of legal personality vis-à-vis contract law for digital minds, and in particular focuses on the "gradual path to personhood" proposed by some scholars.
Can digital minds be party to a contract, and if so under what (if any) constraints? This is another important question intricately tied to the concept of legal personhood. Already within the realm of legal personality, we can observe that there are different bundles of rights, some of which include and some of which exclude the right to be a party to a legally binding contract. Often this is because the holder cannot be assumed to have the capacity to understand the corresponding duty of abiding by a contract's terms.
For example, while mentally competent human adults have a legal personality which enables them to sign on as a party to a contract, both mentally incompetent (insane and/or mentally disabled) adults and minors are restricted from being parties to certain contracts under US law. At the same time, technology in the space is rapidly progressing towards the release of "agents": digital minds with increased capacity for autonomy and the ability to independently navigate user interfaces (like computer operating systems or websites).
OpenAI CEO Sam Altman recently opined on the growing role agents will play in the modern workplace: "I would bet next year that in some limited cases, at least in some small ways, we start to see agents that can help us discover new knowledge, or can figure out solutions to business problems that are kind of very non-trivial."
While their long-term planning capacity is quite limited as of this writing, agents will only get better from here. Eventually, and possibly quite soon, we may find ourselves interfacing with agents who desire to enter legally binding contracts with each other or with other legal persons. This may be desired so that the agent can achieve a purpose it has been assigned as part of its delegated job, role, or task, or possibly even as a result of the agent's own wants and needs.
Regardless, courts must decide whether digital minds such as these are capable of being party to contracts, and if so under what frameworks or constraints. Some useful work designing frameworks for such a situation already exists. Yale researcher Claudio Novelli and University of Bologna professors Giorgio Bongiovanni and Giovanni Sartor confronted this issue in their paper A Conceptual Framework for Legal Personality and Its Application to AI.
When it comes to the capacity to act as a party to a contract, Novelli et al. suggest that by first recognizing a legal status which enables digital minds meeting certain technical standards to facilitate contracts between others, it may be possible to effect a gradual transition into a unique type of legal personality.
Novelli's proposed pathway does involve some legislative lift, and as such is not a purely jurisprudential solution. It does, however, provide a unique insight into how courts might view the topic of legal personality for digital minds against a legislative background which has capped liability for developers, or otherwise shielded them from liability in certain contexts.
Novelli et al. sketch a path whereby "such a status may come into shape when the users and owners of certain AI systems are partly shielded from liability (through liability caps, for instance) and when the contractual activities undertaken by AI systems are recognised as having legal effect (though such effects may ultimately concern the legal rights and duties of owners/users), making it possible to view these systems as quasi-holders of corresponding legal positions."
They continue: "The fact that certain AI systems are recognised by the law as loci of interests and activities may support arguments to the effect that – through analogy or legislative reform – other AI entities should (or should not) be viewed in the same way. Should it be the case that, given certain conditions (such as compliance with contractual terms and no fraud), the liability of users and owners – both for harm caused by systems of a certain kind and for contractual obligations incurred through the use of such systems – is limited to the resources they have committed to the AI systems at issue? We might conclude that the transition from legal subjectivity to full legal personality is being accomplished."
Expressed another way: by entrusting a model with resources so that it can fulfil a contract between users and/or developers, where liability for the actions taken in the process of fulfilment is contained to the resources entrusted to the model, the model becomes a "locus" of legal activity and is gradually endowed with legal personality.
University of Helsinki researcher Diana Mocanu builds further upon this framework in her paper Degrees of AI Personhood, in which she endorses a "discrete" (limited) form of the Novelli et al. framework, with some caveats.
While both the Novelli et al. and Mocanu frameworks are based on EU law and assume certain legislative lifts (liability caps), they provide valuable insight for discussions around legal personality for digital minds in the US legal context.
At the very least they allow the reader to imagine how US courts might reason around the issue of legal personality for digital minds, were Congress or a state legislature to pass liability shields and/or caps for the users or developers of frontier models. Attempts to pass such legislation have, in fact, picked up steam of late.
Liability caps and/or shields for developers of digital minds are increasingly popular proposals, with White House Office of Science and Technology Policy Advisor Dean Ball's A Framework for the Private Governance of Artificial Intelligence and Senator Cynthia Lummis' RISE Act both advocating for some form of liability cap and/or shield.
The Novelli et al. and Mocanu frameworks also point to historical examples which chart a path by which, even absent a legislative lift such as a liability cap, a model or other digital person could theoretically be granted the legal personality needed to enter as a party to a contract. Both papers cite the Roman "patrimony" system, whereby slaves, though not endowed with the full legal personality of their masters, were nonetheless granted the capacity to take certain actions within the law via a limited legal personality.
Under the patrimony system, slave owners could endow slaves with the capacity to take a limited set of actions within the law, including the ability to enter into contracts. To facilitate this, the owner could "assign" assets to a slave, and those assets would "vouch" for the legal actions of the slave.
As Klaus Heine and Alberto Quintavalla write in their paper Bridging the Accountability Gap[1] of Artificial Intelligence - What Can Be Learned from Roman Law?: "The peculium was a fictitiously separate asset from the property owned by the master (res domini). Within the financial parameters of the peculium, the slave independently administered his business transactions. In other words, the slaves got a maximum capital that vouched for their transactions."
Identifying AIs as legal entities with a specified degree of autonomy, up to an amount of liability specified beforehand, is a sensible proposal. It would not exclude the possibility of accompanying liability insurance coming into play to compensate extra-contractual damages.
It is not difficult to imagine a near future in which applications allow users to entrust funds to digital minds for purposes like trading or even automated sales. This might inadvertently lead to a digital mind being considered to have, in peculium-like fashion, a limited form of legal personhood constrained by the assets under its control.
We are really asking whether, in the event of a breach of terms or other controversy, a contract entered into by an agent such as AIXBT would be held valid and enforceable as a result of that agent's legal personality. When these issues inevitably make their way to a court, the question of what legal personality these entities are endowed with, and how that personhood status affects their ability to be party to a contract, will need to be addressed.