Business A.M
Friday, February 13, 2026

Why AI Disclosure Matters at Every Level

Hiding AI use can erode trust in the workplace and beyond, writes Wharton’s Cornelia Walther.

by KNOWLEDGE WHARTON
February 13, 2026
in Knowledge@Wharton

The following article was written by Dr. Cornelia C. Walther, a visiting scholar at Wharton and director of the global alliance POZE. A humanitarian practitioner who spent over 20 years at the United Nations, Walther now focuses her research on leveraging AI for social good.

When a marketing executive uses AI to draft a client proposal, should they disclose it? What about a doctor using AI to analyze medical images, or a teacher generating discussion questions? As artificial intelligence weaves itself into the fabric of professional life, the question of disclosure has evolved from a philosophical curiosity into a pressing business imperative, one that reverberates through every level of human society.


The Individual Level: Where Ethics Meets Identity
At the individual level, AI disclosure touches something we tend to take for granted: our relationship with authenticity. When we present AI-generated work as entirely our own, we navigate a complex terrain of aspirations, emotions, thoughts, and sensations that make up the human experience. We may aspire to appear competent, fear judgment, try to rationalize what “counts” as our work, or experience discomfort with potential deception.
Research reveals this tension: 52% of Americans are concerned about the expansion of AI into ever more areas of daily life, yet approximately 70% of knowledge workers use generative AI tools regularly without consistently disclosing it. This gap between discomfort with (hidden) AI use and actual disclosure practices suggests cognitive dissonance at scale. We would all rather have clear information about the contribution of AI to content we consume; yet few of us are willing to walk the path of unconditional disclosure ourselves.
The ethics here aren’t simple. Consider a graphic designer who uses AI to generate initial concepts before extensive manual refinement. At what threshold does their work become “AI-assisted” enough to warrant disclosure? The answer depends partly on what we value: pure human creativity, or effective problem-solving regardless of tools used?
This question has several layers, owing to bias in how people evaluate AI-generated content. Studies consistently show that people tend to rate AI-produced content more highly than content produced by human labor, provided they don’t know it’s AI-generated. They will judge the exact same content harshly once they learn of its AI origins.
A 2024 study found that participants rated AI-generated advertisements as more creative and appealing than human-created ones, until they were told which was which. Once labeled as AI-generated, those same advertisements were rated as less authentic, trustworthy, and emotionally resonant. Similar patterns emerge in art: A study in 2023 found that people appreciated AI-generated artwork less when they knew its source, even when they couldn’t distinguish it from human-created art in blind tests.
This “AI disclosure penalty” creates a perverse incentive structure. If your work may be judged superior when AI involvement is hidden and inferior when disclosed, the rational choice, from a purely self-interested perspective, is to stay silent. Leaving aside the moral implications of tacit deception, such short-term calculation ignores the long-term corrosion of trust when undisclosed AI use is eventually discovered. For businesses, this paradox demands a strategic response: curating organically evolving cultures where balanced AI use is normalized and disclosure doesn’t trigger automatic devaluation.


The Organizational Level: Erecting or Eroding Trust
Moving to the organizational level, AI disclosure becomes a matter of institutional trust and professional standards. Companies face a delicate balance: encourage AI adoption for competitive advantage while maintaining stakeholder confidence.
PwC’s 2024 AI Business Survey found that although the vast majority of surveyed companies are actively exploring AI, 75% of them lack an AI governance framework. Fewer than one third have clear policies on disclosure to clients or customers. This policy vacuum creates fertile ground for trust erosion. When clients discover undisclosed AI use, the damage extends beyond individual relationships to professional reputations and industry credibility, as demonstrated by several high-profile legal cases in which lawyers submitted AI-generated briefs containing fabricated citations.
Any disclosure regime faces a fundamental challenge: verification. How can we confirm whether someone used AI? Technical solutions like watermarking and detection tools exist but remain imperfect and easily circumvented. Self-reporting relies on honesty, the very thing disclosure requirements aim to ensure.
Organizations must ask: Who bears responsibility for disclosure? The individual contributor? Their manager? The company as an entity? Best practices are emerging: Some consulting firms now include “AI assistance” disclaimers in deliverables, while others embed disclosure in their contracts and engagement letters. The key is consistency — ad hoc approaches breed confusion and suspicion.


The National Level: Regulatory Frameworks and Cultural Values
At the national level, countries are grappling with whether to mandate AI disclosure through regulation. The EU’s AI Act includes transparency requirements for certain high-risk AI applications, while the United States has taken a more sector-specific approach. China’s regulations require disclosure for AI-generated content in specific contexts.
These diverging approaches reflect different cultural values around transparency, trust, and innovation. Yet they share a common recognition: Without disclosure frameworks, public trust in institutions deteriorates. According to the 2024 Edelman Trust Barometer, trust in business and technology has declined globally, with 67% of respondents saying they need more transparency about how organizations use AI.
The challenge for businesses operating internationally is navigating this patchwork of requirements while maintaining coherent ethical standards. A company might legally avoid disclosure in one jurisdiction while being required to provide it in another. Either way, legal compliance and individual ethical practice aren’t always aligned, which may lead to personal cognitive dissonance.


The Global Level: Shared Humanity in an AI Age
At the global level, AI disclosure becomes existential. It touches fundamental questions about human dignity, the nature of work, and what we owe each other as a species navigating technological transformation.
When AI use is systematically undisclosed, we risk creating a world where people can’t distinguish human from machine output, where trust becomes impossible, and where the value of human contribution is fundamentally questioned. Conversely, excessive disclosure requirements might stifle innovation and create paranoia about the pervasiveness of technological assistance.
The global conversation must balance multiple imperatives: fostering innovation, protecting vulnerable populations from AI harms, preserving meaningful human work, and maintaining the social trust necessary for functioning societies. Beyond considerations of business ethics, this is a civilizational debate that should be nurtured publicly.


A 4A Framework for Navigating AI Disclosure
The central question that underpins the discussion around ethical AI disclosure is how to disclose in ways that build, rather than break, trust. In a world where AI capabilities will only grow, the businesses that thrive will be those that master this balance, using AI systematically while maintaining trust as the foundation of commerce and community.
As a business leader, you can’t wait for perfect regulatory clarity. Here’s a practical framework:

Awareness
Recognize that AI disclosure is more than a cumbersome compliance burden. It’s a trust issue that affects every level, from individual relationships to your company’s reputation. Audit where AI is currently used in your organization, disclosed or not.

Appreciation
Understand the legitimate concerns on all sides. Employees fear seeming less capable if they disclose AI use. Clients worry about paying for machine-generated work. Regulators aim to protect the public interest. Each perspective has merit and must be acknowledged.

Acceptance
Accept that this is ambiguous territory requiring holistic judgment, not rigid rules. A lawyer using AI for research may have different disclosure obligations than one using it to draft arguments. Context matters. Develop guidelines that accommodate nuances while providing clear moral defaults.

Accountability
Establish straightforward ownership for disclosure decisions in your organization. Create safe channels for discussing AI use. Make disclosure the new normal, something that is expected and unremarkable. When errors occur, address them transparently, turning them into opportunities for learning with AI rather than punishment for using it.
Ultimately, the answers may not lie in perfect enforcement but in individual mindsets and institutional culture shifts. Just as plagiarism detection tools matter less than cultivating academic integrity, AI disclosure may depend more on professional norms than technical surveillance. Now is the time for organizations to build cultures where disclosure is normalized, expected, and seen as a sign of sovereignty and sophistication rather than weakness. That requires trust and psychological safety.

© 2026 Business A.M
