Responsible Reporting on the Usage of Claude Code or Other AI Systems. #11

@ViridianAStar

Section 1 Initial Concern

Though not a critical issue, I recommend adding a disclaimer that Claude Code was used in the development process, since AI-assisted code can introduce security vulnerabilities. Such a disclosure could affect adoption by some organizations; however, failing to disclose and then having an incident occur would likely drive away far more of your user base.
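As a rough sketch of what that disclosure could look like (the wording, the README heading, and the Assisted-by commit trailer below are illustrative assumptions on my part, not an existing convention of this project):

    AI-Assisted Development
    Parts of this project were developed with the assistance of Claude Code.
    All AI-assisted changes are reviewed by a human maintainer before merging.

An optional commit trailer, in the spirit of the existing Co-authored-by convention, could mark individual changes:

    Assisted-by: Claude Code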

Section 2 Legislative Reasoning

If this touched something like Fedora, there would be a contribution policy to follow, which you can find here: Fedora AI Contribution Policy. This kind of policy is an important part of the process, as it ensures transparency. Additionally, while I am not aware of any such requirement being law today, legislation addressing the disclosure of AI usage is likely to appear in various jurisdictions. There is, in fact, already the EU AI Act which, as far as I am aware, has been adopted but is not yet fully in force. While I do not know the full breadth of that legislation, I would hazard a guess that you may want to disclose preemptively.

Section 3 Justification and Failure Modes

To restate the justification for this recommendation: failing to be transparent hurts you far more if an incident occurs than disclosing up front does, and that generally holds true.

Granted, there is a modern art to AI disclosure as its usage grows; to quote a professor of mine who discusses the failure modes of poor disclosure in an AI-positive marketplace of ideas:

While labels are meant to inform users about AI-generated content, they can fail in predictable ways if not designed thoughtfully. The following are four common failure modes:

  • The first failure mode is banner blindness. When a label becomes common, users learn to tune it out. That is not a moral flaw. It is normal cognition in an attention economy.
  • The second failure mode is inconsistency. A disclosure that changes wording, placement, or strength across platforms forces users to relearn meaning repeatedly. People rarely do. A transparency system that requires repeated relearning is a system designed to fail.
  • The third failure mode is false reassurance. When some content is labeled and other content is not, users may infer that unlabeled content is authentic, vetted, or “real.” That inference is risky because it turns a transparency tool into an implied authenticity claim. The social consequence is not only deception, but the gradual erosion of shared trust.
  • The fourth failure mode is accessibility exclusion. Icon-only labels, vague wording, or low-contrast designs can leave behind people who rely on screen readers, have low vision, or need clearer language. Accessibility is not a “nice to have” feature. It is part of whether transparency serves the whole public.

Source: AI Disclosure Labels Risk Becoming Digital Background Noise

Section 4 Why This Is an Issue

It would be foolish to claim that Dragonfly is not an innovative piece of software, and it is one I would personally like to see grow and evolve. That said, I would hate to see Dragonfly take reputational hits and potentially fall out of favor because of a future incident. This is why I am advocating for responsible disclosure of the usage of AI.
