No doubt. git rebase is like a very sharp knife. In the right hands, it can accomplish great things, but in the wrong hands, it can also spell disaster.
As someone who HAS used it a fair amount, I generally don’t even recommend it to people unless they’re already VERY comfortable with the rest of git and ideally have some sense of how it works internally.
Yeah it is something people should take time to learn. I do think its “dangers” are pretty overstated, though, especially if you always do git rebase --interactive, since if anything goes wrong, you can easily get out with git rebase --abort.
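For what it’s worth, the whole round trip is pretty low-stakes; something like this (the branch name is just a placeholder):

    git rebase --interactive origin/master   # pick / squash / reword commits as needed
    # ...and if the rebase goes sideways partway through:
    git rebase --abort                       # puts the branch back exactly where it started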
In general there’s a pretty weird fear that you can fuck up git to the point at which you can’t recover. Basically the only time that’s actually true is if you somehow lose uncommitted work in your working tree. But if you’ve actually committed everything (and you should always commit everything before trying any destructive operations), you can pretty much always get back to where you were. Commits are never actually lost; they sit in the object database and the reflog until garbage collection eventually prunes them, which by default takes weeks.
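If you ever do think a commit has vanished, the reflog is the escape hatch; roughly (the SHA and branch name here are made up):

    git reflog                   # find the SHA the branch pointed at before the mishap
    git branch rescue abc1234    # hypothetical SHA; park it on a new branch
    git reset --hard abc1234     # or move the current branch straight back to it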
You can get in some pretty serious messes, though. Any workflow that involves force-pushing or rebasing has the potential for data loss, either in a literally destructive way or in a “seriously, my keys must be somewhere but I have no idea where” kind of way.
When most people talk about rebase (for example) being reversible, what they’re usually saying is “you can always reverse the operation in the reflog.” Well, yes, but the reflog is local, so if Alice messes something up with her rebase-and-force-push and realizes she destroyed some of Bob’s changes, Alice can’t recover Bob’s changes from her own machine; she needs to collaborate with Bob to recover them.
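Concretely, the recovery usually happens from Bob’s clone, not Alice’s; something along these lines (branch name hypothetical):

    # on Bob's machine, where the clobbered commits still exist
    git reflog show feature-x           # or just use his still-intact local branch tip
    git push --force origin feature-x   # after agreeing with Alice, put his version back on the remote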
Pretty much everything that can act as a git remote (GitHub, GitLab, etc.) records the activity on a branch and makes it easy to see what the commit SHA was before a force push.
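Whether you can fetch a bare SHA depends on how the server is configured, but at least on GitHub the pre-force-push commits are usually still fetchable for a while (SHA and branch name made up):

    git fetch origin abc1234       # the old tip shown in the PR / branch activity feed
    git branch restored abc1234    # pin it to a branch so gc can't touch it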
But it’s a pretty moot point, since no one who argues in favor of rebasing is suggesting you use it on shared branches. That’s not what it’s for. It’s for your own feature branches as you work, in which case there is indeed very little risk of any kind of loss.
If “we work in a way that only one person can commit to a feature”, you may be missing the point of collaborative distributed development.
No, you divide work so that the majority of it can be done in isolation and in parallel. Testing components together, if necessary, is done on integration branches as needed (which you don’t rebase, of course). Branches and MRs should be small and short-lived, with merges into master happening frequently. Collaboration largely occurs through developers frequently branching off a shared main branch that gets continuously updated.
Trunk-based development is the industry-standard practice at this point, and for good reason. It’s friendlier for CI/CD and devops, allows changes to be tested in isolation before merging, and so on.
Ah, you’ve never worked somewhere where people regularly rebase and force-push to master. Lucky :)
I have no issue with rebasing on a local branch that no other repository knows about yet. I think that’s great. As soon as the code leaves local though, things proceed at least to “exercise caution.” If the branch is actively shared (like master, or a release branch if that’s a thing, or a branch where people are collaborating), IMO rebasing is more of a footgun than it’s worth.
You can mitigate that with good processes and well-informed engineers, but that’s kinda true of all sorts of dubious ideas.
Pushing to master in general is disabled by policy on the forge itself at every place I’ve worked. That’s pretty standard practice. There’s no good reason to leave the ability to push to master on.
There’s no reason to avoid force-pushing a rebased version of your local feature branch to the remote version of your feature branch, since no one else should be touching that branch. I literally do this at least once a day, sometimes more. It’s a good practice that empowers you to craft a high-quality set of commits before merging into master. Doing this avoids the countless garbage “fix typo” commits (and spurious merge commits) that you’d have otherwise, making reviews easier and giving you a higher-quality, more useful history after merge.
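For the curious, that daily loop looks roughly like this (branch names are placeholders; --force-with-lease is just a slightly safer habit than a bare --force, since it refuses to clobber anything you haven’t fetched):

    git fetch origin
    git rebase --interactive origin/master          # squash the "fix typo" noise, reorder, reword
    git push --force-with-lease origin my-feature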
Why should no one be touching it? You’re basically forcing manually communicated sync/checkpoints on a system that was designed to ameliorate those bottlenecks.