Now that we have an AI policy (thanks to @mattip) that prohibits undisclosed and otherwise irresponsible AI use, we'll need a reliable method for detecting PRs that go against this. In many cases it can be pretty obvious that something is written by AI, e.g. excessive use of emdashes (—) and words like "streamline", "facilitate", "comprehensive"; phrases like "A key takeaway is ..." and "This underscores the importance of ..."; predictable markdown formatting; repetitive phrasing; etc.
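To make the idea concrete, the markers above could be turned into a very naive heuristic scorer along these lines. This is purely an illustrative sketch, not the proposed tool and not a validated detector; the marker list and threshold are assumptions for the example, and any real detector would need far more care to avoid false positives:

```python
import re

# Illustrative marker patterns, loosely based on the examples above.
# Stems (e.g. "streamlin") are used so that "streamline(s)/streamlined" all match.
AI_MARKERS = [
    "\u2014",                                  # em-dash
    r"\bstreamlin",
    r"\bfacilitat",
    r"\bcomprehensive\b",
    r"a key takeaway is",
    r"this underscores the importance of",
]


def slop_score(text: str) -> int:
    """Count how many marker patterns occur in the text (case-insensitive)."""
    lowered = text.lower()
    return sum(1 for pat in AI_MARKERS if re.search(pat, lowered))


def looks_sloppy(text: str, threshold: int = 2) -> bool:
    """Flag text for human review when several markers co-occur.

    The threshold of 2 is an arbitrary assumption for this sketch.
    """
    return slop_score(text) >= threshold
```

A bot could run something like `looks_sloppy(pr_description)` and attach a review label when it returns `True`, leaving the final judgment to a human. Counting co-occurring markers rather than flagging on any single one matters here: a lone em-dash or the word "comprehensive" is perfectly normal in human writing.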
But in some cases it's not so clear. Some authors consider it an insult to be asked whether they've used AI when they haven't. That, at least for me, makes me hesitant to ask the question in cases where I'm unsure whether something is AI-generated (not because I like insulting contributors or something).
I think it would help if we started using a tool to assist with this, e.g. one that adds a specific label when it suspects slop. We could consider https://github.com/peakoss/anti-slop, which at first glance seems pretty solid, is highly configurable, and is actively maintained.