Welcome to this series. It spans technology, philosophy,
law, linguistics, and other fields, offering a new perspective for discussing
AI. No reproduction or secondary editing is allowed without the author’s
permission. All rights reserved.
I have previously written about AI + healthcare, AI +
cryptography, AI + personhood, AI + data, and more. Today, we return to the
foundations and examine two questions: What is the ultimate legal dilemma raised by the rise of AI? And what has a German philosopher named Kant, writing more than two centuries ago, contributed to the development of AI law today?
I. The Real Problem of AI Law: Applying the Law Requires an
Aesthetic Judgment About “What We Want AI to Become”
Over the past decade, advancements in artificial
intelligence have pushed law from passively responding to technological risks
toward actively shaping the future contours of technology.
Whether it is China’s principles for AI governance and its information-security and data-protection frameworks, or the European Union’s Artificial Intelligence Act, these instruments point toward a shared core objective: ensuring that AI integrates into the social order in a “form that is broadly acceptable according to human expectations.”
But how do we define “a form broadly acceptable to human
expectations”? For legislators, this remains a difficult and ongoing
exploration.
Looking at global regulatory trends, policymakers are
attempting to outline “acceptable” versus “unacceptable” AI through
distinctions framed as universalizable values. Acceptable AI generally exhibits
controllability, transparency, auditability, non-discrimination, respect for
privacy and personal autonomy, and the ability to enhance public governance.
Unacceptable AI typically includes systems with high manipulative capacity
(those that undermine human judgment through suggestion, inducement, or attention
steering), opaque and high-risk predictive models, algorithms used to erode democratic processes, and deep generative architectures that cannot be audited at all.
However, this categorization faces two challenges. First,
these items cannot be exhaustively listed; they must be assessed case by case
by courts, regulators, or ethics committees. Second, issues of degree are
unavoidable: overemphasizing transparency sacrifices trade secrets and
innovation; overemphasizing safety may stifle low- or medium-risk applications;
overemphasizing anti-discrimination may suppress necessary modeling based on
group differences. As a result, the governance of AI is under constant reconstruction,
and legal roundtables never seem to run out of topics.
AI systems are complex and often opaque. As things stand, the law cannot use predetermined rules to fully cover every mechanism by which AI generates its outputs. What then?
More than two centuries ago, the German philosopher Immanuel Kant proposed the concept of “aesthetic judgment,” or more precisely, “reflective judgment” (German: reflektierende Urteilskraft), aiming to answer how we can arrive at universally valid judgments in situations where no fixed concepts or rules exist. In simple terms, Kant offered a methodological insight: even when concepts are incomplete or the object cannot be fully grasped, we can still form universally applicable judgments, rooted not in utility or personal desire but in a rational structure capable of generating universally valid moral laws.
Kant’s theory of aesthetic judgment was never meant merely
to teach people “how to appreciate beauty.” In the eighteenth-century context
of collapsing rituals, weakening religious authority, and Enlightenment calls
for new norms, Kant addressed a deeper philosophical problem: when facing an
object that cannot be exhaustively defined by existing concepts, how can human
beings still produce judgments that carry universal validity?
The key lies in discovering what Kant calls “purposiveness without purpose” (German: Zweckmäßigkeit ohne Zweck). This requires the subject, in the absence of exhaustive concepts, to engage in a process of synthetic balancing in order to reconcile causality with teleology. Such judgment is neither mechanical deduction nor dependent on personal desire or utility; rather, it is grounded in reason’s pursuit of a form of universal law.
We continue to discuss aesthetic judgment today precisely
because it offers this model: a mode of judgment that is “without concept and
without purpose,” yet enables the subject to locate purposiveness between
imagination and understanding. Because it lacks utilitarian ends, it is
free—and this freedom, in turn, compels the subject to sustain a harmonious
order that transcends material interests. Kant believed that the cultivation of
this faculty of judgment ultimately fosters moral subjectivity, allowing aesthetic
experience to function as a bridge from the sensible world of nature to the
free world of morality.
If we apply this to contemporary AI, are we not facing a
kind of “collapse of ritual and order” in the technological age?
At its core, an artificial intelligence model is a network
with innumerable parameters, designed to simulate, reconstruct, and output
specific types of data. At initialization, what it produces is nothing but
disordered noise. Only through exposure to massive training datasets and
iterative parameter adjustment can the model begin to generate outputs that
correspond to patterns present in the data. In this sense, the process bears
some resemblance to human learning. However, when the model attempts to “understand”
phenomena governed by physical laws—phenomena that are intuitively obvious to
humans (such as spatial arrangement or color distribution)—its learning
mechanism relies on statistical correlation rather than intuitive perception (German:
Anschauung). Its grasp of the world is therefore fundamentally probabilistic.
In any given context, it can only produce “possible configurations” in a
distributional sense, rather than judgments or consciousness identical to that
of a human subject at a specific moment. The limitation does not stem from
insufficient technological maturity, but from a fundamental divergence between
the operational logic of the model and the structure of human cognition.
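To make this divergence concrete, here is a minimal, purely illustrative sketch in Python. The toy corpus, variable names, and structure are invented for this essay and correspond to no real system; the point is only that a model of this kind learns co-occurrence statistics, so its “grasp” of its material is exactly the distributional, probabilistic grip described above.

```python
import random
from collections import defaultdict, Counter

corpus = "the sky is blue. the grass is green. the sky is clear."

# "Training": count which character follows which. This is pure statistical
# correlation; nothing here perceives sky, grass, or color.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def sample_next(prev):
    """Draw the next character from the learned distribution.
    With an empty table (i.e., before any training) the output is pure noise."""
    counts = transitions[prev]
    if not counts:
        return random.choice(corpus)  # untrained fallback: disordered noise
    chars, weights = zip(*counts.items())
    return random.choices(chars, weights=weights)[0]

# Generation: at every step the model offers only a distribution of
# "possible configurations", never a single determinate judgment.
text = "t"
for _ in range(40):
    text += sample_next(text[-1])
print(text)
```

Even this toy example displays the structure at issue: before training there is only noise; after training there is a pattern, but the pattern is a probability distribution over next characters, not an intuition (Anschauung) of what the sentence is about.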
In this context, traditional causal explanations no longer
suffice. The internal structure of technological systems is opaque,
indeterminate, and inexhaustible. The law can neither treat AI as a natural
object subject to mechanical causation nor as a free, living subject. Instead,
the relationship between law and AI more closely resembles the situation
described by Kant: how a subject, when confronted with an object that cannot be
exhaustively understood, constructs a universal norm through reflective judgment.
In other words, when we are faced with a technological
system that is neither a living entity nor fully comprehensible through
traditional causal chains, is it still possible—as Kant suggests—to arrive at
universally valid norms through reflective judgment? This is the methodological
premise that law must confront in the domain of AI.
A contemporary illustration can be found in the ongoing U.S. litigation often referred to as “New York Times v. OpenAI.” One of the central disputes in that case concerns whether AI models may be trained on copyrighted materials. U.S. courts, following the Supreme Court’s precedents, approach such questions within the “fair use” framework. So-called “fair use” does not provide rigid ex ante rules; instead, it requires courts to conduct a contextual balancing of four core factors: (1) the purpose and character of the use, including whether it is commercial and whether it is transformative; (2) the nature of the original work; (3) the amount and substantiality of the portion used in relation to the work as a whole; and (4) the effect of the use on the existing or potential market for the work.
A court does not mechanically apply the four factors; instead, it forms a holistic judgment by weighing them against one another in the specific case, determining whether a particular use may be considered “fair” in a particular context.
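By way of contrast, here is a deliberately naive sketch of what a mechanical application of the four factors would look like. Every factor score, weight, and threshold below is invented for illustration and has no legal basis; the essay’s point is precisely that courts do not and cannot decide this way, because the relative weight of each factor is fixed only in the concrete case.

```python
from dataclasses import dataclass

@dataclass
class UseProfile:
    # Hypothetical 0..1 scores for each statutory factor.
    transformative_purpose: float  # factor 1: purpose and character of the use
    creativity_of_work: float      # factor 2: nature of the original work
    amount_used: float             # factor 3: amount and substantiality
    market_harm: float             # factor 4: effect on the market

def mechanical_fair_use(u: UseProfile) -> bool:
    """Fixed weights and a fixed threshold: exactly the rigidity that the
    case-by-case, holistic weighing of real fair-use analysis avoids."""
    score = (0.4 * u.transformative_purpose
             - 0.1 * u.creativity_of_work
             - 0.2 * u.amount_used
             - 0.3 * u.market_harm)
    return score > 0

# A use that is highly transformative but copies a large share of the work:
# the formula returns a single verdict, whereas a court must decide how far
# one factor may yield to another in this particular context.
print(mechanical_fair_use(UseProfile(0.9, 0.6, 0.8, 0.5)))
```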
However, the four factors of “fair use” fall far short of
anything like Kant’s universal moral law. Under Kant’s rigorous rational test,
when someone, “out of self-love,” wishes to commit suicide to end suffering, he
must also acknowledge that out of self-love, a person may choose to continue living
and preserve life. In other words, self-love could justify both life and death.
Therefore, a maxim such as “ending one’s life out of self-love” cannot qualify
as a universalizable law; it contains an internal contradiction.
By the same logic, the four-factor framework in U.S.
copyright law—purpose and character, nature of the work, amount used, and
market impact—is a set of experiential indicators that must be weighed against
one another. There is no strict logical hierarchy among them; when they conflict, they must be reconciled through compromise. For example, a given use may serve a
public-interest purpose (weighing in favor of permissibility) but involve a
high proportion of the original work (weighing against it). The court must
determine the extent to which one factor may be sacrificed to accommodate
another.
This legislative orientation is not merely a matter of risk
control. Traditional risk rules assume that the danger of technology arises
from observable empirical properties; measurement, calibration, and review can
reduce such risks to manageable levels. But the complexity and inexhaustibility
of AI’s structure require legislators to answer a more fundamental question:
What kind of “good form” do we want AI to exhibit within society? Once this
question is on the table, law enters a domain resembling aesthetic judgment.
For Kant, aesthetic judgment does not describe what an
object “is in itself.” Rather, it expresses a universal demand experienced by
the subject: that the object “ought to be viewed in this way, and others ought
to view it likewise.” It concerns how one ought to regard something, not a
cognitive judgment about what that thing empirically is.
Although legislators may define AI, such definitions are
inevitably shaped by their time. As model architectures, applications, and
capabilities evolve, legal boundaries for what counts as AI will
correspondingly shift. What remains constant is not the definition, but the
“aesthetic-judgment-like tradeoff” operating behind the law. Kant’s theory of
aesthetic judgment was later criticized by Hegel, Schiller, and others—for
instance, for relying too heavily on the structure of the subject or neglecting
historical and social conditions. Yet the core insight it offers—namely, that
when fixed concepts are lacking, the subject must still make judgments capable
of universalization—precisely illuminates the structural dilemma facing AI
legislation today.