Reads okay for the most part. But I like how the same point about AI as a feature shows up in some more serious real-life projects. There, we frame it as “Rust makes it harder for a ‘contributor’ to sneak in LLM-generated crap”.
In what situation are you accepting contributions that you haven’t vetted thoroughly enough to detect crap code? I’ve seen a lot of crap from developers that’s as bad as or worse than LLM-generated crap, so there’s no way I’ll ever accept contributions to an important system without thoroughly vetting them, unless they’re from one of the very few people I trust implicitly.
Well, no matter how thoroughly you vet, it’s always good to have a tool to back you up.
For example, we once got a pull request that was purely AI-generated, but I couldn’t tell that right away. So I skimmed it to make sure no malicious code was part of it, then handed it to the CI runner. That failed pretty much immediately during the compile check, which made it obvious that the pull request author had never tried to compile it.
At that point I could stop wasting my time on that pull request, rather than trying to debug why it wasn’t working or having to vet it more thoroughly…
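To give a flavour of what that looks like, here’s a hypothetical sketch (not the actual code from that PR) of the kind of plausible-looking Rust that a compile check throws out immediately:

```rust
// Hypothetical illustration: reads fine at a glance, but does not compile.
fn count_errors(log: &str) -> (usize, usize) {
    let lines: Vec<String> = log.lines().map(String::from).collect();

    let errors = lines.into_iter().filter(|l| l.contains("ERROR")).count();
    // `lines` was consumed by the first `into_iter()`, so `cargo check`
    // rejects this with error[E0382]: use of moved value: `lines`.
    let warnings = lines.into_iter().filter(|l| l.contains("WARN")).count();

    (errors, warnings)
}

fn main() {
    println!("{:?}", count_errors("ERROR: disk full\nWARN: retrying"));
}
```

Anyone who had run the build even once would have hit that error, so a failing compile check is a pretty reliable signal that nobody did.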
I thoroughly agree: you should always have CI tools to ensure a contribution builds, passes tests, and meets whatever formatting and/or linting standards the team sets. I was specifically responding to “Rust makes it harder for a ‘contributor’ to sneak in LLM-generated crap”. If I get a contribution from an untrusted party, I will start with the assumption that it’s utter garbage, buggy, broken, and malicious, and review it until I’m convinced it’s not. Not because I assume the dev is bad, but because it’s safer to assume the code is garbage. If I get a contribution from a trusted party (e.g. a member of the dev team/employee/whatever), I will review the code carefully, though not with as much paranoia. I don’t particularly care if my teammates are using LLMs, but submitting code they don’t understand is a great way to get ejected from the “trusted contributors” group, and if they’re an employee it’s a good way to get fired if they keep doing it after being warned not to.
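For a Cargo project, the minimum gate I have in mind is roughly the sketch below: an xtask-style script that runs the build, test, formatting, and lint checks and fails fast. The exact commands and flags are assumptions about a typical setup, not anyone’s actual CI config.

```rust
use std::process::Command;

/// Run one `cargo` invocation and report whether it succeeded.
fn gate(args: &[&str]) -> bool {
    println!("==> cargo {}", args.join(" "));
    Command::new("cargo")
        .args(args)
        .status()
        .map(|status| status.success())
        .unwrap_or(false)
}

fn main() {
    // Build, tests, formatting, lints: stop at the first failure.
    let ok = gate(&["check", "--all-targets"])
        && gate(&["test"])
        && gate(&["fmt", "--", "--check"])
        && gate(&["clippy", "--", "-D", "warnings"]);

    if !ok {
        eprintln!("pre-merge gate failed");
        std::process::exit(1);
    }
    println!("all gates passed");
}
```

None of that replaces review; it just means review time isn’t spent on things a machine can reject.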
if nothing else, it should be easier with Rust because:
a) That fucking syntax is probably more legible to an AI than to a human (sue me, Rust absolutists)
b) The language has more safety barriers, which makes using AI safer by association (see the sketch below)
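To make b) concrete, here’s a small sketch (my own example, assuming nothing about any particular project) of what those barriers buy you when generated code does something sloppy like indexing past the end of a buffer:

```rust
fn main() {
    let readings = vec![1.2_f64, 3.4, 5.6];

    // Checked access: "the index might be out of range" is part of the
    // return type, so the missing case has to be handled somewhere.
    match readings.get(10) {
        Some(v) => println!("reading 10 = {v}"),
        None => println!("no reading at index 10"),
    }

    // Plain indexing can still be wrong, but a bad index means a
    // deterministic panic with a message, not silently reading whatever
    // sits past the end of the allocation.
    let first = readings[0];
    println!("first reading = {first}");
}
```

It doesn’t make generated logic correct, but it narrows the blast radius of the mistakes that slip through.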