2026-04-10

When We Talk About AI Across Disciplines—AI, Kant, and Legislation (1)

 

Welcome to this series. It spans technology, philosophy, law, linguistics, and other fields, offering a new perspective for discussing AI. No reproduction or secondary editing is allowed without the author’s permission. All rights reserved.

I have previously written about AI + healthcare, AI + cryptography, AI + personhood, AI + data, and more. Today, we return to the foundations and examine two questions: What is the ultimate legal dilemma brought forth by the rise of AI? And what contribution has the German philosopher Immanuel Kant, writing more than two centuries ago, made to the development of AI law?

What kind of AI do we hope to coexist with in the long term?
What exactly is the bottleneck in current AI law roundtables?
What standards should guide future aesthetic judgments of AI?

I. The Real Problem of AI Law: Applying the Law Requires an Aesthetic Judgment About “What We Want AI to Become”

Over the past decade, advancements in artificial intelligence have pushed law from passively responding to technological risks toward actively shaping the future contours of technology.

Whether it is China’s principles for AI governance and its information-security and data-protection frameworks, or the European Union’s Artificial Intelligence Act, all point toward a shared core objective: ensuring that AI integrates into social order in a “form that is broadly acceptable according to human expectations.”

But how do we define “a form broadly acceptable to human expectations”? For legislators, this remains a difficult and ongoing exploration.

Looking at global regulatory trends, policymakers are attempting to outline “acceptable” versus “unacceptable” AI through distinctions framed as universalizable values. Acceptable AI generally exhibits controllability, transparency, auditability, non-discrimination, respect for privacy and personal autonomy, and the ability to enhance public governance. Unacceptable AI typically includes systems with high manipulative capacity (those that undermine human judgment through suggestion, inducement, or attention steering), opaque and high-risk predictive models, algorithms used to erode democratic processes, and fully non-auditable deep-generation architectures.

However, this categorization faces two challenges. First, these items cannot be exhaustively listed; they must be assessed case by case by courts, regulators, or ethics committees. Second, issues of degree are unavoidable: overemphasizing transparency sacrifices trade secrets and innovation; overemphasizing safety may stifle low- or medium-risk applications; overemphasizing anti-discrimination may suppress necessary modeling based on group differences. As a result, the governance of AI is under constant reconstruction, and legal roundtables never seem to run out of topics.

AI systems are complex and often opaque. As things stand, the law cannot fully cover all AI-generation mechanisms using predetermined rules. What then?

More than two centuries ago, the German philosopher Immanuel Kant proposed the concept of "aesthetic judgment," or more precisely, "reflective judgment" (German: reflektierende Urteilskraft), aiming to answer how we can arrive at universally valid judgments in situations where no fixed concepts or rules exist. In simple terms, Kant offered a methodological insight: even when concepts are incomplete or the object cannot be fully grasped, we can still form universally applicable judgments — rooted not in utility or personal desire but in a rational structure capable of generating universally valid moral laws.

Kant’s theory of aesthetic judgment was never meant merely to teach people “how to appreciate beauty.” In the eighteenth-century context of collapsing rituals, weakening religious authority, and Enlightenment calls for new norms, Kant addressed a deeper philosophical problem: when facing an object that cannot be exhaustively defined by existing concepts, how can human beings still produce judgments that carry universal validity?

The key, for Kant, lies in discovering "purposiveness without purpose" (German: Zweckmäßigkeit ohne Zweck). It requires the subject, in the absence of exhaustive concepts, to engage in a process of synthetic balancing in order to reconcile causality with teleology. Such judgment is neither mechanical deduction nor dependent on personal desire or utility; rather, it is grounded in reason's pursuit of a form of universal law.

We continue to discuss aesthetic judgment today precisely because it offers this model: a mode of judgment that is “without concept and without purpose,” yet enables the subject to locate purposiveness between imagination and understanding. Because it lacks utilitarian ends, it is free—and this freedom, in turn, compels the subject to sustain a harmonious order that transcends material interests. Kant believed that the cultivation of this faculty of judgment ultimately fosters moral subjectivity, allowing aesthetic experience to function as a bridge from the sensible world of nature to the free world of morality.

If we apply this to contemporary AI, are we not facing a kind of “collapse of ritual and order” in the technological age?

At its core, an artificial intelligence model is a network with innumerable parameters, designed to simulate, reconstruct, and output specific types of data. At initialization, what it produces is nothing but disordered noise. Only through exposure to massive training datasets and iterative parameter adjustment can the model begin to generate outputs that correspond to patterns present in the data. In this sense, the process bears some resemblance to human learning. However, when the model attempts to “understand” phenomena governed by physical laws—phenomena that are intuitively obvious to humans (such as spatial arrangement or color distribution)—its learning mechanism relies on statistical correlation rather than intuitive perception (German: Anschauung). Its grasp of the world is therefore fundamentally probabilistic. In any given context, it can only produce “possible configurations” in a distributional sense, rather than judgments or consciousness identical to that of a human subject at a specific moment. The limitation does not stem from insufficient technological maturity, but from a fundamental divergence between the operational logic of the model and the structure of human cognition.
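The distinction between statistical correlation and intuitive perception can be made concrete with a toy sketch. The following plain-Python example (the "model," its task, and all numbers are invented for illustration) trains a single logistic unit: at initialization its outputs are arbitrary, and after gradient updates it has merely absorbed a correlation present in the training data. Crucially, its output is always a probability, a "possible configuration" in a distributional sense, never a categorical judgment:

```python
import math
import random

random.seed(0)

# Toy "model": a single logistic unit over two features.
# At initialization, its outputs are effectively noise.
w = [random.uniform(-1, 1) for _ in range(2)]
b = random.uniform(-1, 1)

def predict(x):
    """Return a probability, never a categorical 'judgment'."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))

# Training data encoding a simple pattern: label 1 iff x0 > x1.
data = [[random.random(), random.random()] for _ in range(200)]
labeled = [(x, 1.0 if x[0] > x[1] else 0.0) for x in data]

# Stochastic gradient descent on log loss: the unit absorbs a
# statistical correlation between inputs and labels, nothing more.
lr = 0.5
for _ in range(2000):
    for x, y in labeled:
        p = predict(x)
        g = p - y  # dLoss/dz for log loss
        for i in range(2):
            w[i] -= lr * g * x[i]
        b -= lr * g

# After training, outputs remain probabilities: distributional
# "possible configurations," not certainties.
print(predict([0.9, 0.1]))  # close to 1
print(predict([0.1, 0.9]))  # close to 0
```

The sketch also shows where the analogy to human learning breaks down: the unit never "sees" that one number is larger than another; it only shifts weights until its output distribution matches the labels.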

In this context, traditional causal explanations no longer suffice. The internal structure of technological systems is opaque, indeterminate, and inexhaustible. The law can neither treat AI as a natural object subject to mechanical causation nor as a free, living subject. Instead, the relationship between law and AI more closely resembles the situation described by Kant: how a subject, when confronted with an object that cannot be exhaustively understood, constructs a universal norm through reflective judgment.

In other words, when we are faced with a technological system that is neither a living entity nor fully comprehensible through traditional causal chains, is it still possible—as Kant suggests—to arrive at universally valid norms through reflective judgment? This is the methodological premise that law must confront in the domain of AI.

A contemporary illustration can be found in the ongoing U.S. litigation often referred to as "The New York Times v. OpenAI." One of the central disputes in that case concerns whether AI models may be trained on copyrighted materials. U.S. courts approach such questions within the statutory "fair use" framework (17 U.S.C. § 107). So-called "fair use" does not provide rigid ex ante rules; instead, it requires courts to conduct a contextual balancing of four core factors: (1) the purpose and character of the use, including whether it is commercial and whether it is transformative; (2) the nature of the original work; (3) the amount and substantiality of the portion used in relation to the work as a whole; and (4) the effect of the use on the existing or potential market for the work.

A court’s decision does not mechanically apply the four factors; instead, it forms a holistic judgment by weighing these factors against each other in the specific case, determining whether a particular use may be considered “fair” in a particular context.

However, the four factors of “fair use” fall far short of anything like Kant’s universal moral law. Under Kant’s rigorous rational test, when someone, “out of self-love,” wishes to commit suicide to end suffering, he must also acknowledge that out of self-love, a person may choose to continue living and preserve life. In other words, self-love could justify both life and death. Therefore, a maxim such as “ending one’s life out of self-love” cannot qualify as a universalizable law; it contains an internal contradiction.

By the same logic, the four-factor framework in U.S. copyright law (purpose and character, nature of the work, amount used, and market impact) is a set of experiential indicators with no strict logical hierarchy among them; when they conflict, they must be weighed against one another. For example, a given use may serve a public-interest purpose (weighing in favor of permissibility) but involve a high proportion of the original work (weighing against it). The court must determine the extent to which one factor may be sacrificed to accommodate another.
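To see why such weighing resists mechanization, imagine what a naive scoring rule would look like. The sketch below (the function name, the scores, and the averaging rule are all hypothetical, not anything found in statute or case law) reduces each factor to a number in [-1, 1] and averages them, which is precisely what courts do not do:

```python
# A deliberately naive sketch: the four fair-use factors reduced to
# scores in [-1, 1], where positive favors fair use. Every name and
# number here is hypothetical; real courts weigh the factors
# holistically, not arithmetically.

def fair_use_leaning(purpose, nature, amount, market_effect):
    """Return a crude overall leaning, with no hierarchy of factors."""
    factors = {
        "purpose_and_character": purpose,
        "nature_of_work": nature,
        "amount_used": amount,
        "market_effect": market_effect,
    }
    score = sum(factors.values()) / len(factors)
    return "leans fair" if score > 0 else "leans unfair"

# The tradeoff from the text: a public-interest purpose (positive)
# against heavy copying and some market harm (negative).
print(fair_use_leaning(purpose=0.8, nature=0.2,
                       amount=-0.9, market_effect=-0.3))
```

Because the factors have no fixed exchange rate, any such formula smuggles in the very judgment it claims to replace; the outcome depends entirely on the arbitrary scores. That is the sense in which the court's holistic weighing resembles reflective judgment rather than calculation.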

This legislative orientation is not merely a matter of risk control. Traditional risk rules assume that the danger of technology arises from observable empirical properties; measurement, calibration, and review can reduce such risks to manageable levels. But the complexity and inexhaustibility of AI’s structure require legislators to answer a more fundamental question: What kind of “good form” do we want AI to exhibit within society? Once this question is on the table, law enters a domain resembling aesthetic judgment.

For Kant, aesthetic judgment does not describe what an object “is in itself.” Rather, it expresses a universal demand experienced by the subject: that the object “ought to be viewed in this way, and others ought to view it likewise.” It concerns how one ought to regard something, not a cognitive judgment about what that thing empirically is.

Although legislators may define AI, such definitions are inevitably shaped by their time. As model architectures, applications, and capabilities evolve, legal boundaries for what counts as AI will correspondingly shift. What remains constant is not the definition, but the “aesthetic-judgment-like tradeoff” operating behind the law. Kant’s theory of aesthetic judgment was later criticized by Hegel, Schiller, and others—for instance, for relying too heavily on the structure of the subject or neglecting historical and social conditions. Yet the core insight it offers—namely, that when fixed concepts are lacking, the subject must still make judgments capable of universalization—precisely illuminates the structural dilemma facing AI legislation today.


2025-10-30

Before Your Startup Crosses Borders, Don’t Lose Your Name: The Essential Brand Protection 101 for Global Entrepreneurs

In the early days of a startup, most founders are obsessed with product design, fundraising, and market strategy—but they often forget one of their most valuable intangible assets: the brand itself.

This year, a Chinese AI startup named DeepSeek fell into a classic “brand protection trap” in the United States. Before it had time to register its U.S. trademark, someone else filed for the exact same name.

Here’s the catch: although U.S. trademark law is nominally grounded in "first to use," the U.S. Patent and Trademark Office (USPTO) does not examine applications for bad faith or "riding on a trend." As long as the paperwork meets formal requirements, the application proceeds to publication. If the original owner doesn’t file an opposition in time, the mark can be officially registered under someone else’s name.

Once that happens, reclaiming your trademark becomes an uphill legal battle—costly, slow, and distracting. For a fast-growing company, this kind of surprise can completely disrupt its business rhythm.

Tesla once experienced a similar nightmare when entering the Chinese market. Years before Tesla officially launched in China, a local company had already registered the “TESLA” trademark. When Elon Musk’s team tried to use the name, they found it wasn’t legally theirs. After lengthy negotiations, Tesla reportedly paid tens of millions of RMB to settle and reclaim its name—a painful and expensive lesson.


Changing Attitudes Toward “Bad-Faith” Filings

In recent years, Chinese authorities have taken a stronger stance against malicious trademark registrations. Following public scandals and international disputes, the China National Intellectual Property Administration (CNIPA) and the courts have stepped up enforcement.

Now, if the applicant clearly acted in bad faith—for instance, registering a well-known foreign or domestic brand with obvious intent to profit—the trademark office can reject or later invalidate the mark.

The U.S., however, takes a very different approach: it’s a self-help system. You must actively protect your rights. The USPTO won’t step in on your behalf if someone else registers your brand. Even if you’ve already used the mark, you must formally oppose or cancel the competing application yourself.

Adding to the challenge, U.S. law allows Intent-to-Use (ITU) applications—meaning someone can file first and show actual use later. If you haven’t yet entered the market, that person’s filing could take priority.


The Real Lesson: Your Company Has an ID, But Your Brand Has a Soul

Registering a company gives you a legal identity.
Registering a trademark gives your business a name in the eyes of the world.

Think of it this way: your company is the legal shell—it signs contracts, pays taxes, and hires staff. But your brand is what customers remember and trust.

Apple, Coca-Cola, and Tesla all derive immense value not from their factories or equipment, but from the power of their names and logos.

If your company fails, you can start a new one. But if your brand name is taken by someone else, continuing to use it could be infringement—leading to lawsuits, forced rebranding, or massive damages. For startups, that can be fatal.


Common “Brand Protection Traps”

  1. Registering only in one country
    Many founders register their mark in China and assume that’s enough. Later, when expanding overseas, they discover their name has already been taken.

  2. Registering only one class
    Trademarks are categorized by goods and services. You might register your app under Class 9 (software), but someone else can still register the same name under Class 42 (SaaS services).

  3. Registering too late
    Some founders wait until after multiple funding rounds. By then, brand recognition is high—and trademark squatters are already circling.

  4. Improper ownership
    Some founders register under an individual’s name or an agency’s name, not the company’s. Later, when they try to transfer it back, they face ransom-level fees.


Practical Tips for Entrepreneurs

  • File early – Apply as soon as you name your company or product. Cover your key markets (China, the U.S., EU).

  • File broadly – Don’t limit yourself to one class; protect your brand in all relevant categories.

  • Secure your domain name – Register your .com or other domain early to avoid conflicts.

  • Use the Madrid System – For companies going global, the WIPO’s Madrid System allows multi-country registration through one application.

  • Check ownership – Ensure all trademarks are registered under your company’s name, not an agent or individual.


Beyond Trademarks: Build a True “Moat”

  • Patents – File early for core technologies.

  • Copyrights – Protect your code, design, and written works; formal registration strengthens your legal position.

  • Trade Secrets – For formulas or algorithms, use NDAs and internal controls.

Together, these protections form the long-term defensive wall around your business. Without them, even the best ideas can be copied—and surpassed—overnight.


In short:
Registering your company is like getting an ID.
Registering your brand is like owning your name.

Don’t let someone else take it first.
