There would be no user voting at all. Instead, one AI handles every upvote and downvote according to guidance written by the subreddit moderator(s).
For example, it might assign 20 votes to one post and -5 votes to another. (Of course, this would require Reddit to build a feature that lets such a voting AI cast weighted votes.)
The key part is that the voting guidance is public. Anyone can read the rules that explain how the AI is supposed to vote. For example, the AI might be instructed to reward originality, clarity, kindness, strong evidence, or creative thinking, and to downvote low-effort posts, repetition, hostility, or bad-faith arguments.
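To make the idea concrete, here is a minimal sketch of what "public guidance" could look like as data: a rubric of weighted criteria, combined into one signed vote total. Everything here is hypothetical (the criteria names, the weights, and the assumption that some model produces a 0.0-1.0 score per criterion); it is not a real Reddit feature or API.

```python
# Hypothetical public voting rubric: criterion -> weight.
# Positive weights reward a quality; negative weights penalize it.
GUIDANCE = {
    "originality": 3,
    "clarity": 2,
    "kindness": 1,
    "strong_evidence": 3,
    "low_effort": -4,
    "hostility": -5,
}

def net_votes(criterion_scores):
    """Combine per-criterion scores in [0.0, 1.0] (e.g. produced by a
    language model judging the post) into one signed vote total,
    using the publicly listed weights."""
    total = sum(GUIDANCE[name] * score
                for name, score in criterion_scores.items())
    return round(total)

# A post judged fully original, fairly clear, and slightly low-effort:
print(net_votes({"originality": 1.0, "clarity": 0.8, "low_effort": 0.25}))
# prints 4
```

Because the rubric is plain data, publishing it, diffing it, and keeping a changelog of weight changes would all be straightforward.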
Why this could be interesting:
* It removes mob dynamics, karma farming, and timing effects. Visibility depends on meeting the stated values, not popularity.
* The subreddit develops a very coherent culture. People learn to write for the AI instead of constantly reminding other humans to “read the rules.”
* Posting becomes a kind of skill. You are not chasing vibes; you are demonstrating that you understood and followed the principles.
* The advice itself becomes part of the experiment. Users can debate whether the AI’s guidance is good, flawed, biased, or incomplete.
Moderators could update the guidance over time and keep a changelog explaining why priorities shifted. There could even be meta threads where users suggest amendments, even if mods keep final control.
What do you think of this idea?