AI Policy

We recognize that the laws and ethics around AI usage are a matter of intense and heartfelt debate. We use AI for our projects, but we strive to be transparent and ethical in its use.

I. We encourage healthy AI skepticism

We prefer to weigh individual arguments on their merits rather than on their convenience to political movements, on either side. We stop frequently to consider each argument calmly.

Many problems with AI come down to conflicts between pre-AI expectations and practices on one hand, and post-AI realities on the other. We accept that these problems are real and in need of solutions. But we also accept that many pre-AI expectations and practices are now unreasonable, and that many were flawed compromises struck by the powerful, wrapped in false appeals to fairness even as they crowded small creators out of markets. The emergence of AI forced a reckoning, but pre-AI expectations and practices were already in desperate need of rework.

The most convincing argument against AI, we find, is that powerful producers of media will combine it with their disproportionate leverage over how and where media is created and shared. We expect, with good reason, that they will use the pursuit of efficiency as cover to feed us all cheap robot shit instead of empowering humans to create more value for humans. Enshittification is a threat to humanism and a threat to art, and we draw a firm line here.

II. We distinguish work for robots from work for humans

Robots (of which AI is a species) are really useful for many things. AI is especially good at finding, organizing, and summarizing information quickly. But AI sucks at understanding human context, particularly humanity beyond the abstract, and at recognizing what is truly of value. This is not merely a matter of improving the simulation; AI is fundamentally limited in its scope of awareness and application. AI (and its suppliers) are incapable of owning risk. Like Mr. Meeseeks, AI is bound and defined by the scope of its given task, and then vanishes. These limits constrain how AI can (and should) be used, and they are also why AI sucks at art. Good art is about context, risk, and value.

III. We build transparent processes with Humanist objectives

The best disinfectant is sunlight, so we prefer transparency not as a concession but as a tool to inoculate our work against corruption by unclear purpose and other seemingly necessary evils. This applies to our processes in general, but particularly to AI, because the risk of corruption is especially high and the technology especially opaque.

A strict definition of Humanism is hard to pin down, because it necessarily embraces so many contexts and viewpoints in its application. It can be fairly described as the attitude that humans should invest value in humans – in all their inconvenient, inefficient, beautiful variety – and that no further appeal to higher powers or objectives is necessary. Practically, this means a strong distinction between humanity and its tools, and it requires the discipline to pursue value above convenience and immediate profit.

IV. We share our best practices

Our best practices are under continual refinement.

  • We use AI to find, share, and challenge ideas in brainstorming.
  • We use AI to tidy things up and move things around.
  • We use AI to establish a starting point for revision.
  • We refuse to charge money for AI-generated content. Tying profit directly to an AI content farm is a temptation that leads quickly to enshittification.
  • We treat AI-generated content as temporary. We label it clearly wherever it is used, so that it can be replaced.
  • We replace AI-generated content with content driven by human intent that enriches the lives of humans.
  • We encourage creators who work with us to follow their own vision, agreeing on a common style where needed. We trust them to create content that is essentially human, but we don’t tell them what tools to use. What counts as “human” art cannot be planned or policed, and we do not attempt to do so.