Add an AI and Autonomous code contribution policy. #328
freakboy3742 wants to merge 9 commits into main from
Conversation
I've incorporated updates reflecting the feedback that has been given to date. Barring significant additional feedback, my current plan is to put this to the core team for endorsement towards the end of this week.
gpshead
left a comment
A pile of comments, not necessarily in linear thought order. Read them all to understand my thinking.
General theme: the AI_POLICY doc is a bit long. I push back against things that are going to discriminate against people based on tool use. Forced disclosure can drive some genuinely interested contributors away, as it sets off a passively hostile "I'm gonna be bullied if I use AI" tone. Voluntary but encouraged disclosure is meaningful.
We're entering a world where some people perceived as easy victims who use AI may be targeted and harassed (or worse) because of it. Forced disclosure policies mean that honest people who are targeted get driven underground and/or out of communities - exactly as the trolls want. This can hurt people and projects.
Focus on what you want out of contributions - your contributor guidelines should already be covering that. The best AI policies are basically a TL;DR saying to respect the contributor guidelines, with a focus on respecting maintainer time and attention, and a reminder that abuse of that ends in closures and excommunication.
When you have rules, explain the why rather than the what. Rules that are just hurdles with no communally justifiable reason are some combination of passive-aggressive gestures, virtue signals (probably not communicating what you think), or pointless sign-not-a-cop roadblocks that will be ignored or fake-complied with.
Limit words. Brief policies get read; long policies are less likely to be. What else can be trimmed?
(Ironically... a model might do a good job here; I clearly didn't use one, as I spent far too long formulating my replies. I wrote too much and predict inconsistency, incoherence, and misedits across my comments 😅)
@gpshead Thanks for these comments - they're definitely helpful.
That's definitely a fair criticism.
As noted inline, any bullying behavior would trigger BeeWare's CoC, so that shouldn't be a concern. The real motivation for requiring declaration is legal advice (from an actual lawyer) that suggested prudence is the best path. If Anthropic's lawyers are willing to go on record with legal advice to the contrary, I'd love to hear that.

Frankly, this has always been the weak point in OSS contribution - getting actual lawyers to make actual statements about what is needed. Is a CLA needed? Why or why not? I'd dearly like to have reasoned legal opinions - even if they're from companies with vested interests. It's taken 30+ years to get to a place where there's anything close to a common legal understanding of what OSS license compliance means in practice; I'd very much like to get clarity on what best practice means for AI contributions before I'm 80 :-)
One reason I liked the terms of service callout is that some AI service providers' Terms of Service include indemnification clauses for their customers. But it'd take a lawyer to understand the implications. OSS is, as usual, the underexplored legal frontier.
kattni
left a comment
This looks good to me; however, I caught a few issues on my read-through. Suggested changes inline.
Co-authored-by: Kattni <kattni@kattni.com>
kattni
left a comment
Looks good. Thanks for writing this up.
This is not what I'm worried about. Forced disclosure feeds into the stalking of people who work in public on the internet, enabling actions in spaces far beyond any project's CoC.
The only change I'm suggesting concerns that word; I'm trying to avoid the spread of policies aimed at outing people.
I'm not @freakboy3742 or any core team member, but responding to this as a contributor:
For this type of thing, the current code of conduct has a clause added to the standard CC3 text:
In my opinion (as a contributor who is, IMO, relatively more sensitive to wording), patterned stalking of AI users in the BeeWare community based on their AI usage does represent a potential risk to the BeeWare community, so I think BeeWare would have the right to take necessary actions here.
Frankly, I (as a contributor) would personally want BeeWare to use that word. As for another middle ground that makes sense IMO: perhaps some wording like
I think we have very different concerns here. As I've indicated previously, I am operating on the advice of an actual lawyer, who has indicated that it is advisable to know where content has come from, if only on the basis that more information about provenance is never bad. In that context, anything but a "requires" framing makes the effort pointless.

You appear to be concerned about the possibility of someone declaring in a BeeWare PR that they use AI, and then being "stalked" outside a BeeWare space on the basis of that usage. While I can't categorically deny that this could occur, it strikes me as a bit of a straw man in service of "normalising" AI usage, rather than a problem that actually exists. I completely agree that some anti-AI folk are quite vocal in their opposition - but I haven't seen that opposition rise to the level where individual contributors need to live in fear of being personally targeted outside the spaces where they're contributing.

If you can present plausible evidence that this is happening - and is happening at a scale that warrants a response - then we might need to revisit this. However, absent that evidence, a "requires" clause doesn't seem an unreasonable ask.
For posterity: Linux kernel development has also put its hat in the ring.
Thanks for engaging with me on this, and for understanding. That's a practical stance.
This is the first step in adding an AI policy for BeeWare: adding the actual policy document.
Once ratified, links to this document will be added to the contribution guide.
It includes an updated pull request template, adding a checkbox for declaring AI tooling and a prompt for describing that usage.
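As a rough illustration only (the exact wording, file location, and checkbox text are hypothetical, not taken from this PR), a template addition along these lines in `.github/PULL_REQUEST_TEMPLATE.md` would cover both the declaration checkbox and the usage prompt:

```markdown
<!-- Hypothetical sketch of the AI declaration section of a PR template.
     The actual wording adopted by the project may differ. -->

## AI tooling declaration

- [ ] I used AI tooling (e.g., a code assistant or chatbot) in preparing this contribution.

If you checked the box above, briefly describe which tools you used and how
(for example: "autocomplete suggestions from a code assistant" or
"drafted the docstrings with a chatbot, then edited them by hand"):
```

A contributor who used no AI tooling would leave the checkbox unticked and the description blank, so the template adds essentially no friction for that case.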
It also includes an update to the contribution guide that can be used as a template for other projects. This is a significant change to the contribution guide in this repository - the current version has a number of dead links. It replaces that content with references to the current contribution guide on the website. When rolled out to other projects, this content can be used as-is, or can have references to that project's contribution guide (for Briefcase, Toga, etc.).
Submitted in draft form to allow discussion and ratification by the core team.
PR Checklist: