Ethics and Practice in Modern IP Law: AI, Confidentiality, and Conflicts of Interest
- Farzan Fallahpour
- Apr 14
- 4 min read
In the rapidly evolving landscape of 2026, the intersection of intellectual property (IP) law and emerging technology has created a paradigm shift in legal ethics. For senior practitioners and leading firms, staying ahead of these changes is not merely a matter of operational efficiency; it is a critical component of professional credibility and future-oriented leadership. Mastering the ethical nuances of artificial intelligence, digital confidentiality, and complex conflict management positions a lawyer as a sophisticated guardian of client interests in a digital-first world.
This article draws on recent guidance from the American Bar Association (ABA) and the United States Patent and Trademark Office (USPTO) to outline three pillars of modern IP ethics.
Note: Ethical rules and regulatory frameworks vary by jurisdiction. This post is general information and not legal advice.

Quick Summary
- Modern IP ethics increasingly depends on how lawyers use AI, not whether they use it.
- Client confidentiality faces new risks in digital workflows, especially with self-learning GenAI tools.
- Conflicts of interest in IP can be technical and subject-matter specific, requiring strong screening systems (“ethical walls”).
1) The Ethical Use of AI by IP Practitioners
The legal industry has integrated various forms of AI for years, such as spam filters and fraud detection. However, the advent of Generative AI (GenAI) has introduced unprecedented ethical challenges.
The Duty of Technological Competence
According to ABA Formal Opinion 512, lawyers are required to maintain technological competence, which includes a reasonable understanding of the benefits and risks of the GenAI tools they employ. Practitioners do not need to be AI experts, but they must understand a tool’s capabilities and limitations. This duty is not static and requires continuous updates as technology evolves.
Oversight and the “Human in the Loop”
A core takeaway from this guidance is that ethical duties cannot be delegated to software. Key points include:
- Verification of Accuracy: Lawyers have an affirmative duty to review and verify AI-generated content. Blind reliance does not satisfy the “reasonable inquiry” standard.
- Hallucinations and Misstatements: AI “hallucinations” (fictional citations or facts) have led to sanctions where attorneys failed to catch errors before filing.
- Candor Toward the Tribunal: Submitting AI-assisted documents to the USPTO carries duties comparable to appearing before a court: the Duty of Candor requires ensuring that statements are true and that citations are verified.
Disclosure Obligations
There is no general requirement to disclose AI use to the USPTO. A duty to disclose arises, however, when the use is material to patentability. For example, if an AI system drafts claims without a significant contribution from a human inventor, that fact must be disclosed.
2) Client Confidentiality in Digital Workflows
In modern IP practice, clients entrust firms with highly sensitive information, including trade secrets and pre-patent technical data. The stakes are high: industry reports put the average cost of a data breach for professional services firms at $4.56 million.
The Risks of GenAI and Client Data
The use of GenAI tools can jeopardize confidentiality, particularly with “self-learning” systems. Many models use user inputs for further training, meaning sensitive information could inadvertently filter into outputs provided to other parties.
Under ABA Model Rule 1.6 and 37 C.F.R. § 11.106, practitioners must:
- Make reasonable efforts to prevent unauthorized access to client information
- Vet vendors carefully to ensure client data is not used for model training or shared with third parties
- Obtain informed consent before inputting sensitive client data into self-learning AI tools
National Security and Export Controls
An additional risk is easy to overlook: inputting technical data into an AI tool hosted on servers outside the United States may constitute an illegal export of controlled technology. Practitioners must confirm where a tool’s servers are hosted before uploading sensitive technical specifications.
Cybersecurity Best Practices
To meet these ethical obligations, a “defense-in-depth” security posture is recommended:
- Encryption of client data, both at rest and in transit
- Two-factor authentication (2FA) as a standard “reasonable precaution”
- Access controls built on least-privilege principles
3) Conflicts of Interest in IP Representation
As firms grow and lateral hiring increases, the risk of subject-matter conflicts in IP representation becomes a significant ethical hurdle.
Subject Matter vs. Economic Adversity
A “subject matter conflict” arises when a firm represents two clients pursuing patents on the same or nearly identical technology. Courts have generally treated representing competitors in the same broad field as permissible “economic adversity,” but representing them on the same invention can lead to serious ethical violations and disqualification.
Building “Ethical Walls”
To mitigate risks, firms implement ethical walls (also called “Chinese walls”) to screen specific lawyers or staff from conflicting matters. Modern technology supports this through:
- Role-based permissions restricting access to files, notes, and even calendar events
- Automated screening tools that prevent participation in conflicting cases
- Quarantines banning even casual discussion of sequestered matters
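At its core, an automated ethical-wall check is a simple access-control test: before a matter is opened to a user, the system verifies that the user is not on that matter’s screened list. The sketch below illustrates the idea in Python; the class, field, and user names are hypothetical and do not reflect any particular document-management vendor’s API.

```python
# Minimal sketch of an automated ethical-wall (screening) check.
# All names and structures here are illustrative assumptions,
# not any document-management system's actual API.

from dataclasses import dataclass, field


@dataclass
class Matter:
    matter_id: str
    client: str
    # Users "behind the wall" who must be screened from this matter.
    screened_users: set[str] = field(default_factory=set)


def can_access(matter: Matter, user: str) -> bool:
    """Deny access to anyone screened from the matter (least privilege)."""
    return user not in matter.screened_users


# Example: a lateral hire who previously worked on a competitor's
# filing for the same technology is screened from the new matter.
matter = Matter("2026-0017", "Acme Robotics",
                screened_users={"lateral_hire_01"})

assert can_access(matter, "associate_02")        # unscreened: allowed
assert not can_access(matter, "lateral_hire_01")  # screened: blocked
```

In practice the same membership test would gate files, notes, and calendar events alike, with the screened list maintained centrally so a single conflicts determination propagates everywhere at once.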
Training and Documentation
Teams should be trained to recognize and document potential competitor conflicts. This includes screening not only lateral attorney hires but also freelance paralegals who may have worked for opposing counsel on similar technical matters.
Conclusion: Ethical Vigilance as a Marker of Seniority
In the modern era, ethics and technology are inseparable. A firm’s ability to leverage AI while maintaining rigorous oversight, protect confidentiality across global digital workflows, and proactively manage complex subject-matter conflicts is a strong marker of a senior, credible practice.
Establishing firm-wide AI policies, vetting digital vendors with a security-first mindset, and deploying robust ethical walls allows practitioners not only to comply, but to lead the profession into the future.

FAQ
Do lawyers have to understand GenAI tools to use them ethically?
Yes, to a reasonable degree. ABA Formal Opinion 512 requires lawyers to maintain technological competence, including a reasonable understanding of the benefits, risks, capabilities, and limitations of the GenAI tools they use.
Are lawyers responsible for AI-generated errors?
Yes. Lawyers must review and verify AI-generated content, and hallucinations and misstatements that went uncaught before filing have already led to sanctions.
Why can GenAI create confidentiality risks?
Self-learning AI systems may use user inputs for further training, creating a risk that sensitive client information filters into outputs provided to other parties.
What is a “subject matter conflict” in IP?
Representing clients pursuing patents on the same or nearly identical technology, which can lead to serious ethical violations and disqualification.



