Git Rebase for the Terrified
brethorsting.com
212 points by aaronbrethorst 6 days ago
I have been contributing code for 10+ years, and I have worked on teams that did rebase and others that did not.
Not once have I ever debugged a problem that benefited from rebase vs merge. Fundamentally, I do not debug off git history. Not once has git history helped debug outside of looking at the blame + offending PR and diff.
Can someone tell me when they were fixing a problem and they were glad that they rebased? Because I can't.
I’ve worked on a code base that was about 15 years old and had gone through many team changes. It was a tricky domain with lots of complicated business logic. When making changes to the code, the commit history was often the only way to figure out if certain behavior was intended and why it was implemented this way. Documentation about how the product should behave often lacked the level of detail. I was certainly always thankful when a dev that was long gone from the team had written commit messages that communicated the intent behind a change.
Debugging from git history is a separate question from merge vs rebase. Debugging from history can be done with non-rebased merges, with rebased merges, and with squashed commits, without any noticeable difference. Pass `--first-parent` to git-log and git-bisect in the first two cases and it's virtually identical.
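For anyone who hasn't tried it, a sketch of what that looks like (the tag name is a placeholder):

```shell
# With a merge-based history, --first-parent collapses each merged
# branch into a single step, so the log reads like a squashed history:
git log --first-parent --oneline main

# git bisect accepts the same flag (Git 2.29+), so each bisection step
# lands on a merge commit rather than somewhere inside a feature branch:
git bisect start --first-parent
git bisect bad HEAD
git bisect good v1.0   # placeholder for any known-good tag or commit
```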
My preference for rebasing comes from delivering stacked PRs: when you're working on a chain of individually reviewable changes, every commit is a clean, atomic, deliverable patch. git-format-patch works well with this model. GitHub is a pain to use this way but you can do it with some extra scripts and setting a custom "base" branch.
The reason in that scenario to prefer rebasing over "merging in master" is that every merge from master into the head of your stack is a stake in the ground: you can't push changes to parent commits anymore. But the whole point of stacked diffs is that I want to be able to identify different issues while I work, which belong to different changes. I want to clean things up as I go, without bothering reviewers with irrelevant changes. "Oh this README could use a rewrite; let me fix that and push it all the way up the chain into its own little commit," or "Actually now that I'm here, let me update dependencies and ensure we're on latest before I apply my changes". IME, an ideal PR is 90% refactors and "prefactors" which don't change semantics, all the way up to "implemented functionality behind a feature flag", and 10% actual changes which change the semantics. Having an editable history that you can "keep bringing with you" is indispensable.
Debugging history isn't really related. Other than that this workflow allows you to create a history of very small, easily testable, easily reviewable, easily revertible commits, which makes debugging easier. But that's a downstream effect.
> Debugging from git history is a separate question from merge vs rebase.
But the main benefit proponents of rebase cite is keeping the history clean, which also makes it easier to pinpoint an offending commit.
Personally, a clean commit history was never something that made my job easier.
> Other than that this workflow allows you to create a history of very small, easily testable, easily reviewable, easily revertible commits, which makes debugging easier. But that's a downstream effect.
I would agree that it is important for commits to go from working state to working state as you are working on a task, but this is an argument for atomic commits, not about commit history.
> Personally, a clean commit history was never something that made my job easier.
How do you define "clean"? I've certainly been aided by commit messages that help me identify likely places to investigate further, and hindered by commit messages that lack utility.
we're in the minority I think. I always find it easier to just debug a problem from first principles instead of assuming that it worked at some point and then someone broke it. Often that assumption is wrong, and often the search for the bad commit is more lengthy and less informative than the normal experimental process. I certainly admit that there are cases where the test is easily reproducible and bisect just spits out the answer, but that's a seductive win. I certainly wouldn't start by reading the commit log and rewinding history until I at least had a general idea of the source of the problem, and it wasn't immediately obvious what to try next to get more information.
if you look at it as an investment in understanding the code base more than just closing the ticket as soon as possible, then the "let's see what's really going on here" approach makes more sense.
I have worked on several codebases where it was enforced that the commit be rebased off of whatever the main branch was, all units of work squashed to a single commit, and only "working" code be checked into the main branch. This gives you a really good linear history, and when you're disciplined about writing good final commit messages and tagging them to a ticket, it means bisecting to find challenging bugs later becomes tractable, as each commit nominally should work and should be ready to deploy for testing. I've personally solved a number of challenging regressions this way.
I think one should be allowed to push commits that don't build or pass tests provided a) they are marked as such (so you can skip them sooner when bisecting) and b) the HEAD commit after each push does build and pass tests.
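One way to wire that up (the `WIP` subject prefix here is just a convention I'm assuming, not anything git enforces): a `git bisect run` script can exit with code 125 to tell git a commit can't be tested and should be skipped:

```shell
#!/bin/sh
# test.sh - used as: git bisect run ./test.sh
# Exit code 125 is reserved by git bisect for "cannot test this commit";
# bisect will skip it and try a neighboring commit instead.
if git log -1 --format=%s | grep -q '^WIP'; then
    exit 125
fi
make test   # 0 = good commit, any other non-125 exit = bad commit
```

(`make test` stands in for whatever your project's test command is.)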
>each commit nominally should work
Except it can be the result of 10 squashed commits.
Which is the entire point of it. Why should I look at ten commits when I can look at one and get the same exact data? Why should I pollute my production history with what is likely a bunch of debugging commits? The branch is a scratchpad, you should feel empowered within your own branch; rebase allows you to be lazy in the development cycle while presenting a nice clean set of changes at the end of it.
You can split your work in multiple commits and at the same time drop/squash debugging or wip changes. The result allows you to go into much better detail than a PR description.
Most of those 10 squashed commits likely had commit comments like: "Cleanup based on PR feedback." etc.
I can give you an example of when I am glad I rebased. There have been many times I have been working on a feature that was going to take some time to finish. In that case my general workflow is to rebase against main every day or two. It lets me keep track of changes and handle conflicts early and makes the eventual merge much simpler. As for debugging I’ve never personally had to do this, but I imagine git bisect would probably work better with rebased, squashed commits.
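For reference, that workflow is just (assuming `origin/main` is the integration branch):

```shell
# On the feature branch, every day or two:
git fetch origin
git rebase origin/main        # replay your commits on top of latest main
# resolve any conflicts now, while the context is fresh, then:
git push --force-with-lease   # rewritten history needs a (safe) force push
```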
> I can give you an example of when I am glad I rebased
I think the question was about situations where you were glad to rebase, when you could have merged instead
They kind of spoke to it. Rebasing to bring in changes from main to a feature branch which is a bit longer running keeps all your changes together.
All the commits for your feature get popped on top the commits you brought in from main. When you are putting together your PR you can more easily squash your commits together and fix up your commit history before putting it out for review.
It is a preference thing for sure but I fall into the atomic, self contained, commits camp and rebase workflows make that much cleaner in my opinion. I have worked with both on large teams and I like rebase more but each have their own tradeoffs
You can bring in changes and address conflicts early with merge too, I believe that's GP's point.
Yes but specifically with a rebase merge the commits aren’t interleaved with the commits brought in from mainline like they are with a merge commit.
EDIT: I may have read more into GPs post but on teams that I have been on that used merge commits we did this flow as well where we merged from main before a PR. Resolving conflicts in the feature branch. So that workflow isn’t unique to using rebase.
But using rebase to do this lets you later more easily rewrite history to cleanup the commits for the feature development.
i don't think i have ever even looked at what order the commits are, i only care about the diff vs the target branch when reviewing
especially since every developer has a different idea of what a commit should be, with there being no clear right answer
I used hg (mercurial) before git. Every time I see someone make an argument like yours I think "only because git's merge/branch model is bad and so you need hacks to make it acceptable".
Git won, which is why I've been using it for more than 10 years, but that doesn't mean it was ever best. It was just most popular, and so the rest of the ecosystem makes it worth accepting the flaws (code review tools and CI systems both have much better git support - these are two critical things that will work against you if you use anything else).
Not only is git not the best, but one of the central value props of coding agents and chatbots used for programming is not having to use git in order to interact with free code.
I do the same except with merge. I don't see how rebase makes it any better.
> Fundamentally, I do not debug off git history.
Are you saying that you've never used git bisect? If that's the case, I think you're missing out.
Bisect is one of those things where if you're on a certain kind of project, it's really useful, and if you're not on that kind of project you never need it.
If the contributor count is high enough (or you're otherwise in a role for which "contribution" is primarily adjusting others' code), or the behaviors that get reported in bugs are specific and testable, then bisect is invaluable.
If you're in a project where buggy behavior wasn't introduced so much as grew (e.g. the behavior evolved A -> B -> C -> D -> E over time and a bug is reported due to undesirable interactions between released/valuable features in A, C, and E), then bisecting to find "when did this start" won't tell you that much useful. If you often have to write bespoke test scripts to run in bisect (e.g. because "test for presence of bug" is a process that involves restarting/orchestrating lots of services and/or debugging by interacting with a GUI), then you have to balance the time spent writing those with the time it'd take for you to figure out the causal commit by hand. If you're in a project where you're personally familiar with roughly what was released when, or where the release process/community is well-connected, it's often better to promote practices like "ask in Slack/the mailing list whether anyone has made changes to ___ recently, whoever pipes up will help you debug" rather than "everyone should be really good at bisect". Those aren't mutually exclusive, but they both do take work to install in a community and thus have an opportunity cost.
This and many other perennial discussions about Git (including TFA) have a common cause: people assume that criticisms/recommendations for how to use Git as a release coordinator/member of a disconnected team of volunteers apply to people who use Git who are members of small, tightly-coupled teams of collaborators (e.g. working on closed-source software).
From what I can tell the vast majority of developers don't use git bisect and never will.
FWIW, having squashed merge commits also reduces the relevance of bisect. It can still be useful but it’s not necessarily as critical of a tool.
This. This is why small commits are nice. If you do that you might as well rebase. If you squash you lose.
Git bisect is a wonder, especially combined with its ability to potentially do the success/fail testing on its own (with the help of some command you provide).
It is a tragedy that more people don't know about it.
I manage a maintained fork and periodically rebase our changes on top of upstream.
In this case, rebasing is nice because our changes stay in a contiguous block at the top (vs merging which would interleave them), so it's easy for me and others to see exactly where our fork diverges.
Doesn’t that mean you have to fix all the merge conflicts introduced by your commits on every rebase though?
if you don't have a merge from main into the branch further down, then git only bothers you about the most recently introduced conflicts --- the ones you'd have to resolve anyhow, and it remembers how you've resolved those.
Git has gotten pretty smart about that recently: once you resolve a conflict, if you get the same conflict again it automatically resolves it the same way. Works for both rebase and merge.
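The feature is rerere ("reuse recorded resolution"); it's off by default and enabled with:

```shell
# Record every conflict resolution, and replay it automatically if the
# exact same conflict shows up again (in a later rebase, merge, etc.):
git config --global rerere.enabled true
# Optionally also stage the replayed resolution for you:
git config --global rerere.autoUpdate true
```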
What is the argument here? Does it still hold if you s/rebase/comments/g ?
Of course a readable code history aids in debugging. Just as comments and indentation do. None of these are technically necessary, but still a good idea.
Of course running the rebase command doesn't guarantee a readable commit history, but it's hard to craft commits without it. Each and every commit on linux-kernel has been rebased probably a dozen times.
Isn't it more of a style decision? If your team rebases, practices clean code discipline, and treats excellence as a habit rather than an enforcement, the signals visibly emit in `status`.
Most committers don't really understand remotes, much less rebasing.
Any time I'm doing anything remotely to do with merging, I use 'git diff' or 'git difftool'.
If I diff against master, I see changes in 300+ files, when I've only changed 5 (because other people have changed 300+ files.)
> Fundamentally, I do not debug off git history.
Me neither. The usual argument I hear against rebase is that it destroys history. Since I don't debug off git history, I'm quite happy to destroy it, and get back to diffing my 5-file changes against (current) master.
Whenever I go spelunking in the history I always find clean, linear history, with small commits (where possible) to be much easier to understand and search than merges.
This may be outdated because git’s defaults have improved a lot over the years. When I first used git on a team was in 2011. As I recall, there were various commands like git log -p that would show nothing for a merge commit. So without extra knowledge of the git flags you would not find what you were looking for if it was in a side path of the merge history. This caused a lot of confusion at times. We switched to a rebase approach because linear history is easier for people to use.
To answer your question directly, if somewhat glibly, I’m glad I rebased every time I go looking for something in the history because I don’t have to think about the history as a graph. It’s easier.
More to your point, there are times when blame on a line does not show the culprit. If you move code, or do anything else to that line, then you have to keep searching. Sometimes it’s easier to look at the entire patch history of a file. If there is a way to repeatedly/recursively blame on a line, that’s cool and I’d love to know about it.
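There is something close to a recursive blame: `git log -L` follows the history of a line range (or a function) through every commit that touched it, and `git blame` has flags to see through moves and copies (paths and names below are placeholders):

```shell
# Full patch history of lines 10-20 of a file, followed across edits:
git log -L 10,20:src/billing.c

# Follow a function by name instead of a line range:
git log -L :calculate_total:src/billing.c

# Blame, ignoring whitespace (-w) and detecting code moved or copied
# from other files (each extra -C looks harder):
git blame -w -C -C -C src/billing.c
```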
I now manage two junior engineers and I insist that they squash and rebase their work. I've seen what happens if they don't. The merges get tangled and crazy, they include stuff from other branches they didn't mean to, etc. The squash/rebase flow has been a way to make them responsible for what they put into the history, in a way that is simple enough that they got up to speed and own it.
Do you ever use git bisect?
I like to keep a linear history mainly so I don't have to think very hard about tools like that.
I've used git bisect on a repo whose commit graph is at least 20-wide at some points. In the two cases I used it, it identified the individual commit. I didn't think very hard about it. It was the first time I used bisect. Maybe I got lucky.
In fact, for searching how a file got to the state it is I prefer that when PRs are merged, they are merged and not rebased. I want the commit shas to be the same.
Rebasing on main loses provenance.
If you want a clean history, do it in the PR, before merging it. That way the PR is the single unit of work.
Merging a PR with rebase doesn't lose provenance. You can just keep all the commits in the PR branch. But even if you squash the branch into a single commit and merge (which these tools automate and many people do), it still doesn't lose provenance. The provenance is the PR itself. The PR is connected to a work item in the ticketing system. The git history preserves all the relevant info.
The provenance that is lost is the original base.
No, the original base is in the commit history. It's just not relevant any more after rebase. It's like your individual keystrokes before a commit are not relevant any more after a commit. They're not lost provenance.
> That way the PR is the single unit of work.
Well if I have a diff of the PR with just the changes, then the PR is already a "unit of work," regardless of merge or rebase, right?
The main benefit I've found is when there is work happening concurrently in multiple feature branches at once (e.g. by different people). Rebase-merging greatly simplifies dealing with merge conflicts as you only have a simple diff against a single branch to deal with. The more work you have in progress at once the more important this becomes.
Once we had a slowdown in our application that went unaddressed for a couple of months. Using git bisect to binary search across a bunch of different commits and run a perf test, every commit being a "good" historical commit allowed that to be much easier, and I found the offending commit fast.
Ok, I see. This is a use case I did not think about. Worthy of a blog post, I think.
Besides testing for a perf slow down, any other use cases for git bisect + rebase?
This "debate" is just insane.
Any workflow that has a review process uses rebase. FULL STOP.
If you don't have your code reviewed and you push code to a shared repo, fine, don't use rebase if you don't want to.
If you haven't used git bisect to find a regression, you should try it.
You can write a test (outside of source control) and run `git bisect good` on a good commit and `git bisect bad` on bad one and it'll do a binary search (it's up to you to rerun your test each time and tell git whether that's a good or a bad commit). Rather quickly, it'll point you to the commit that caused the regression.
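A typical session, for anyone who hasn't tried it (the tag and script names are placeholders):

```shell
git bisect start
git bisect bad                  # current HEAD exhibits the regression
git bisect good v2.3            # placeholder: any commit known to be fine
# git now checks out a midpoint; test it, report, repeat:
git bisect good                 # ...or: git bisect bad
# Or hand the loop to git entirely: any command whose exit code is
# 0 for good / non-zero for bad drives the whole binary search:
git bisect run ./test.sh
git bisect reset                # return to where you started
```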
If you rebase, that commit will be part of a block of commits all from the same author, all correlated with the same feature (and likely in the same PR). Now you know who you need to talk to about it. If you merge, you can still start with the author of the commit that git bisect found, but it's more likely to be interleaved with nearby commits in such a way that when it was under development, it had a different predecessor. That's a recipe for bugs that get found later than they otherwise would've.
If you're not using git history to debug, you're probably not really aware of which problems would've turned out differently if the history was handled differently. If you do, you'll catch cases where one author or another would've caught the bug before merging to main, had they bothered to rebase, but instead the bug was only visible after both authors thought they were done.
> it's up to you to rerun your test each time and tell git whether that's a good or a bad commit
not true. You can use
git bisect run script-command arguments
where script-command is a ... script that will test the result of the build.

I have often been happy to have a clean linear history when asking myself things like "does build X.Y.Z include this buggy change I found in commit abcdefg?". With a history full of merges, where a commit from 1st of January might be merged only on the 20th of July, this gets MUCH harder to answer.
This is especially true if you have multiple repos and builds from each one, such that you can't just checkout the commit for build X.Y.Z and easily check if the code contains that or not (you'd have to track through dependency builds, checkout those other dependencies, possibly repeat for multiple levels). If the date of a commit always reflects the date it made it into the common branch, a quick git log can tell you the basic info a lot of the time.
I use `git rebase` all the time, along with `git add -p`, `git diff` and other tools. It helps me to maintain logical commit history.
- Reshuffle commits into a more logical order.
- Edit commit subjects if I notice a mistake.
- Squash (merge) commits. Often, for whatever reason pieces of a fix end up in separate commits and it's useful to collect and merge them.
I'd like to make every commit perfect the first time but I haven't managed to do that yet. git-rebase really helps me clean things up before pushing or merging a branch.
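The workhorse for that cleanup is interactive rebase; a sketch (hashes and messages invented):

```shell
# Rewrite everything on this branch since it diverged from main:
git rebase -i main
# git opens the commit list in your editor, oldest first:
#
#   pick a1b2c3d Add config file parser
#   fixup 9e8f7a6 oops, handle empty files     <- melt into previous, drop msg
#   reword 5c4d3e2 Add --verbose flag          <- stop to fix the message
#
# Reordering the lines reorders the commits; deleting a line drops one.
```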
Two very useful use cases for rebase: 1) rewrite history to organize the change list clearly. 2) stacked pull requests of arbitrary depth.
You’ve never run a bisect to identify which commit introduced a specific behavior?
This is when I’ve found it most useful. Having commits merged instead of squashed narrows down and highlights the root problem.
It’s a rare enough situation I don’t push for merge commits over squashed rebases because it’s not worth it, but when I have had to bisect and the commits are merged instead of squashed it is very very useful.
Those commit authors are who I noted as clear thinkers and have tracked over my career to great benefit.
This is the post that made jj click for me, and not coincidentally it is about a rebase operation that feels complicated in git but trivial in jj.
Allow me (today) to be that person to propose checking out Jujutsu instead [0]. Not only does it have a superpower of atomic commits (reviewers will love you, peers will hate 8 small PRs that are chained together ;-)) but it's also more consistent than git and works perfectly well as a drop-in replacement.
In fact, I've been using Jujutsu for ~2 years as a drop-in and nobody complained (outside of the 8 small PRs chained together). Git is great as a backend, but Jujutsu shines as a frontend.
You don't have to chain 8 PRs together, Github tries really hard to hide this from you but you can in fact review one commit at a time, which means you don't need to have a stack of 8 PRs that cascade into each other.
Yup, that's what my team does. It works wonderfully, and it fits well with Github's "large PR" mindset imo. It could be a bit better in the Github UI, but so can most things. I vastly prefer it to individually reviewing tons of PRs.
The funny thing about this debate for me is that i find it comes down to the committer. If the committer makes small commits in a stacked PR, where each commit is a logical unit of work, part of the "story" being told about the overall change, then i don't personally find it's that useful to stack them. The committer did the hard part, they wrote the story of changes in a logical, easy to parse manner.
If the story is a mess, where the commits are huge or out of logical order, etc - then it doesn't matter much in my view.. the PR(s) sucks either way.
I find stacked PRs to be a workflow solution to what to me is a UI problem.
You do if you find yourself in a team where PRs are squash-merged. :-(
Does that happen on merge or before PR creation? I thought the setting only applied it when you hit the merge button, so you'd still have commits prior to the merge. Though that won't help if someone pre-squashes them :s
> reviewers will love you, peers will hate 8 small PRs that are chained together
My peers are my reviewers...
The key thing to point out is that jujutsu is a rebase-based workflow, and no one who uses jujutsu ever worries about rebasing (they may not even be aware of it). It's a good demonstration of a tool that got rebase right, unlike git.
Pre-jujutsu, I never rebased unless my team required it. Now I do it all the time.
Pre-jj, I never had linear history, unless the team required it. Now most of my projects have linear history.
A better UI makes a huge difference.
Also been using Jujutsu for about 2 years. I feel like I have learned so much about how git actually works by simply not using git.
Does it support submodules yet? That was the thing that stopped me using it last time I checked.
Submodules are cursed. I feel bad for you that you have to work in a repo that uses them.
What is the problem with submodules? I like to use them because it means the code I need from another repo remains the same until I update it. No unexpected breaking changes.
Using a package manager to share code between repos has worked far better for me than submodules.
This comment sums up the issues better than I could: https://news.ycombinator.com/item?id=31792396
Not natively, but you can still use the regular git commands to update them, and it works.
+1
`jj` is the only tool that makes me use `rebase` personally. Before, I saw it as a punishment imposed by my team's wishes :)
And `jj undo`, so nothing is terrifying.
And if there's a major problem, `jj op log` + `jj op restore` fix it. This is the major superpower of jj: before, I needed to nuke the git repo on bad rebases (no chance in hell I could figure out how to undo a massive 20+ step bad rebase).
well, git reflog is that. Annoying, yes, but we have LLMs so I don't actually need to remember all the command syntax exactly, like back in 2019.
Is git reflog that bad to use? It just lists a bunch of commit hashes. Find the one you want and hard reset to it.
It's the one I keep going to LLM for, others like rebase are muscle memory at this point.
I think I'd love to use Jujutsu, but I enjoy Magit (for Emacs) too much to entertain the thought of switching :/.
Besides, Magit rebasing is also pretty sweet.
I used to think like this, but then I realized: jj-mode.el exists[0] and you can still use Magit since it's still a git repo underneath. Seriously, don't let this hold you back.
How do you handle publishing the stack?
It depends on what you're publishing to, but works with most other tools by using a bookmark for each publish target.
There’s tooling like https://github.com/LucioFranco/jj-spr for managing stacks of PRs, but for stacks of 2 or 3 it’s not too bad to do it manually.
I avoid rebase like the plague (perhaps because of my early experiences with it). I used to get continuous conflicts for the same commits again and again, and the store-and-replay kinda helped with it but not always. Merge always worked for me (once I resolve conflicts, that's the end of it). Now I always merge main into my feature branch and then merge it back to main when ready. Does it pollute the history? Maybe, but I've never looked. It does not matter to our team.
I think the callout to squash first will be helpful (if your many commits aren't good info themselves).
Perhaps. But you can see the DX of rebase is abysmal compared to merge: squash, rerere, force push, remembering to push to remote before rebasing, more coordination if multiple people are working on the feature branch, etc.
I still prefer merge. It's simple and gets out of my way as long as I don't care about purity of history.
I wish rebase was taught as the default - I blame the older inferior version control software. It’s honestly easier to reason about a rebase than a merge since it’s so linear.
Understanding of local versus origin branch is also missing or mystical to a lot of people and it’s what gives you confidence to mess around and find things out
The end result of a git rebase is arguably superior. However, I don't do it, because the process of running git rebase is a complete hassle. git merge is one-shot, whereas git rebase replays commits one-by-one.
Replaying commits one-by-one is like a history quiz. It forces me to remember what was going on a week ago when I did commit #23 out of 45. I'm grateful that git stores that history for me when I need it, but I don't want it to force me to interact with the history. I've long since expelled it from my brain, so that I can focus on the current state of the codebase. "5 commits ago, did you mean to do that, or can we take this other change?" I don't care, I don't want to think about it.
Of course, this issue can be reduced by the "squash first, then rebase" approach. Or judicious use of "git commit --amend --no-edit" to reduce the number of commits in my branch, therefore making the rebase less of a hassle. That's fine. But what if I didn't do that? I don't want my tools to judge me for my workflow. A user-friendly tool should non-judgmentally accommodate whatever convenient workflow I adopted in the past.
If Git says, "oops, you screwed up by creating 50 lazy commits, now you need to put in 20 minutes figuring out how to cleverly combine them into 3 commits, before you can pull from main!" then I'm going to respond, "screw you, I will do the next-best easier alternative". I don't have time for the judgement.
> "oops, you screwed up by creating 50 lazy commits, now you need to put in 20 minutes figuring out how to cleverly combine them into 3 commits, before you can pull from main!"
You can also just squash them into 1, which will always work with no effort.
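Concretely, the zero-cleverness version (assuming `main` is the target branch):

```shell
# Move the branch pointer back to where it forked from main, keeping
# every change staged, then commit the lot as one commit:
git reset --soft "$(git merge-base main HEAD)"
git commit -m "Whole feature as a single commit"
```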
Then rebase is not your problem, but all your other practices: long-lived feature branches with lots of unorganized commits with low cohesion.
Sometimes it's ok to work like this, but asking git not to be judgmental is like saying your roomba should accommodate you by not asking you to empty its dust bag.
> Long lived feature branches
I always do long lived feature branches, and rarely have issues. When I hear people complain about it, I question their workflow/competence.
Lots of commits is good. The thing I liked about mercurial is you could squash, while still keeping the individual commits. And this is also why I like jj - you get to keep the individual commits while eliminating the noise it produces.
Lots of commits isn't inherently bad. Git is.
You can make long lived feature branches work with rebase, you just have to regularly rebase along the way.
I had a branch that lived for more than a year and ended up with 800+ commits on it. I rebased along the way, and predictably the final merge was smooth and easy.
I don’t see how rebase frequency changes the problem of getting conflicts with some random commit within your long-lived branch, when doing a rebase.
I rebase often myself, but I don’t understand the logic here.
1) because git rerere remembers the resolutions to the ..
2) small conflicts when rebasing the long lived branch on the main branch
if instead I delayed any rebasing until the long lived branch was done, I'd have no idea of the scale of the conflicts, and the task could be very, very different.
Granted, in some cases there would be no or very few conflicts, and then both approaches (long-lived branch with or without rebases along the way) would be similar.
If you do a single rebase at the end, there is nothing to remember, you just get the same accumulated conflicts you also collectively get with frequent rebases. Hence I don’t understand the benefit of the latter in terms of avoiding conflicts.
You don't see a difference between dealing with conflicts within a few days of you doing the work that led to them (or someone else), and doing them all at once, perhaps months later?
While it is a bit of a pain, it can be made a lot easier with the --keep-base option. This article is a great example https://adamj.eu/tech/2022/03/25/how-to-squash-and-rebase-a-... of how to make rebasing with merge conflicts significantly easier. Like you said though, it's not super user-friendly but at least there are options out there.
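The shape of that approach, roughly (the `--keep-base` flag needs Git 2.24+):

```shell
# Step 1: squash the branch in place. --keep-base rewrites the commits
# without moving the branch to a newer base, so no conflicts yet:
git rebase -i --keep-base main      # mark all but the first commit "fixup"

# Step 2: a single commit now moves to the tip of main, so every
# conflict is presented exactly once:
git rebase main
```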
This seems crazy to me as a self-admitted addict of “git commit --amend --no-edit && git push --force-with-lease”.
I don’t think the tool is judgmental. It’s finicky. It requires more from its user than most tools do. Including bending over to make your workflow compliant with its needs.
I don't mind rebasing a single commit, but I hate it when people rebase a list of commits, because that makes commits which never existed before, have probably never been tested, and generally never will be.
I've had failures while git bisecting, hitting commits that clearly never compiled, because I'm probably the first person to ever check them out.
Sometimes it feels like the least-bad alternative.
e.g. I'm currently working on a substantial framework upgrade to a project - I've pulled every dependency/blocker out that could be done on its own and made separate PRs for them, but I'm still left with a number of logically independent commits that by their nature will not compile on their own. I could squash e.g. "Update core framework", "Fix for new syntax rules" and "Update to async methods without locking", but I don't know that reviewers and future code readers are better served by that.
In Mercurial you could keep those hidden in a phase for future reference. In Jujutsu you can keep them in a local set without pushing upstream. The only unfortunate thing with Jujutsu is that, because it is trying to be a git overlay, you lose state that a Mercurial clone on another machine would have.
I wonder how relevant and feasible this workflow would be: https://graydon2.dreamwidth.org/1597.html
Where you have two repositories, one "polished" where every commit always passes, and another for messier dev history.
It seems to me the "Not Rocket Science" invariant is upheld if you just require all PRs to be fast-forward changes. Which I guess is an argument in support of rebase, but a clean merge counts too. If the test suite passes on the PR branch, it'll pass on main, because that's what main will be afterward. Ideally you don't even test the same commit hash twice.
If you have expensive e2e tests, then you might want to keep a 'latest' tag on main that's only updated when those pass.
Funnily enough, in all my years of using git, this thread is the first time I've encountered merge. It sounds easier, I suppose, but I don't really have a problem with rebase and will likely just continue as is.
Rebase your local history, merge collaborative work. It helps to just relabel rebase as "rewrite history". That makes it more clear that it's generally not acceptable to force push your rewritten history upstream. I've seen people trying to force push their changes and overwrite the remote history. If you need to force push, you probably messed up. Maybe OK on your own pull request branches assuming nobody else is working on them. But otherwise a bad idea.
I tend to rebase my unpushed local changes on top of upstream changes. That's why rebase exists. So you can rewrite your changes on top of upstream changes and keep life simple for consumers of your changes when they get merged. It's a courtesy to them. When merging upstream changes gets complicated (lots of conflicts), falling back to merging gives you more flexibility to fix things.
The resulting pull requests might get a bit ugly if you merge a lot. One solution is squash merging when you finally merge your pull request. This has as the downside that you lose a lot of history and context. The other solution is to just accept that not all change is linear and that there's nothing wrong with merging. I tend to bias to that.
If your changes are substantial, conflict resolution caused by your changes tends to be a lot easier for others if they get lots of small commits, a few of which may conflict, rather than one enormous one that has lots of conflicts. That's a good reason to avoid squash merges. Interactive rebasing is something I usually find too tedious to bother with, but some people really like it, and it can be a good middle ground.
It's not that one is better than the other. It's really about how you collaborate with others. These tools exist because in large OSS projects, like Linux, where they have to deal with a lot of contributions, they want to give contributors the tools they need to provide very clean, easy to merge contributions. That includes things like rewriting history for clarity and ensuring the history is nice and linear.
Maybe I'm old, but I still think a repository should be a repository: sitting on a server somewhere, receiving clean commits with well written messages, running CI. And a local copy should be a local copy: sitting on my machine, allowing me to make changes willy-nilly, and then clean them up for review and commit. That's just a different set of operations. There's no reason a local copy should have the exact same implementation as a repository, git made a wrong turn in this, let's just admit it.
> And a local copy should be a local copy: sitting on my machine, allowing me to make changes willy-nilly, and then clean them up for review and commit.
That's exactly what Git is. You have your own local copy that you can mess about with and it's only when you sync with the remote that anyone else sees it.
I agree but I think git got the distributed (ie all nodes the same) part right. I also think what you say doesn't take it far enough.
I think it should be possible to assign different instances of the repository different "roles" and have the tooling assist with that. For example. A "clean" instance that will only ever contain fully working commits and can be used in conjunction with production and debugging. And various "local" instances - per feature, per developer, or per something else - that might be duplicated across any number of devices.
You can DIY this using raw git with tags, a bit of overhead, and discipline. Or the github "pull" model facilitates it well. But either you're doing extra work or you're using an external service. It would be nice if instead it was natively supported.
This might seem silly and unnecessary but consider how you handle security sensitive branches or company internal (proprietary) versus FOSS releases. In the latter case consider the difficulty of collaborating with the community across the divide.
> I still think a repository should be a repository: sitting on a server somewhere, receiving clean commits with well written messages, running CI. And a local copy should be a local copy: sitting on my machine, allowing me to make changes willy-nilly, and then clean them up for review and commit
This is one way to see things and work and git supports that workflow. Higher-level tooling tailored for this view (like GitHub) is plentiful.
> There's no reason a local copy should have the exact same implementation as a repository
...Except to also support the many git users who are different from you and in different context. Bending gits API to your preferences would make it less useful, harder to use, or not even suitable at all for many others.
> git made a wrong turn in this, let's just admit it.
Nope. I prefer my VCS decentralized and flexible, thank you very much. SVN and Perforce are still there for you.
Besides, it's objectively wrong to call it "a wrong turn" if you consider the context in which git was born and got early traction: sharing patches over e-mail. That is what git was built for. Had it been built your way (first-class concepts coupled to p2p email), your workflow would most likely not be supported and GitHub would not exist.
If you are really as old as you imply, you are showing your lack of history more than your age.
If this was the main strategy used even for public/shared branches, then everyone would have to deal with changing, conflicting histories all the time.
I've heard people say before that it is easier to reason about a linear history, but I can't think of a situation where this would let me solve a problem more easily. All I can think of is a lot of downsides. Can you give an example where it helps?
Oh, that's why. I barely used any CVS before Git, so I was always puzzled about the "weird" opinions on this topic. I'm still puzzled by the fact that some people seem to reject entirely the idea of rewriting history - even locally before you have pushed/published it anywhere.
Sometimes people look sort of "superstitious" to me about Git. I believe this is caused by learning Git through web front-ends such as Github, GitLab, Gitea etc., that don't tell you the entire truth; desktop GUI clients also let the users only see Git through their own, more-or-less narrow "window".
TBH, sometimes Git can behave in ways you don't expect, like seeing conflicts when you thought there wouldn't be (but up to now never things like choosing the "wrong" version when doing merges, something I did fear when I started using it a ~decade ago).
However one usually finds an explanation after the fact. Something I've learned is that Git is usually right, and forcing it to do things is a good recipe to mess things up badly.
I've had recent interns who've struggled with rebase, and they've never known anything but Git. Never understood why that is, given they seem OK with basic commits and branching. I would agree that rebase is easier to reason about than merging, yet I'm still needing to give what feels like a class on it.
The fact that people have a harder time understanding rebase is evidence that rebase is harder to reason about. Whether you update your understanding based on that evidence is up to you. If I have to pick between merge and rebase, I would generally pick merge. It seems to cause fewer conflicts with long-lived branches. Commits maintain their identity, so each one has to be conflict-resolved at most once.
However, even better for me (and my team) is squash on PR resolve.
IMO it's one of those things where rebase is at first less intuitive but once you get it is a lot simpler & easier to reason about. In contrast merging at first seems more straightforward but is actually less so.
That's not a value judgement in either direction; both initially-simpler and long-term-simpler have their merits.
git rebase squash as a single commit on a single main branch is the one true way.
I know a lot of people want to maintain the history of each PR, but you won't need it in your VCS.
You should always be able to roll back main to a real state. Having incremental commits between two working stages creates more confusion during incidents.
If you need to consult the work history of transient commits, that can live in your code review software with all the other metadata (such as review comments and diagrams/figures) that never make it into source control.
Merging merge requests as merge commits (rather than fast-forwarding them) gives the same granularity in the main branch, while preserving the option to have bisect dive inside the original MR to actually find the change that made the interesting change in behavior.
I wish github created automation for this flow like they have for other variants.
But they have, with pull requests. When you merge a pull request it is done via the "subtree" merge strategy, which preserves partial commits and also does not flatten them.
Hard disagreement.
https://0x5.uk/2021/03/15/github-rebase-and-squash-considere...
> I know a lot of people want to maintain the history of each PR, but you won't need it in your VCS.
I strongly disagree. Losing this discourages swarming on issues and makes bisect worse.
> You should always be able to roll back main to a real state. Having incremental commits between two working stages creates more confusion during incidents.
If you only use merge commits this shouldn't be any more difficult. You just need to make sure you specify that you want to use the first parent when doing reverts.
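As a sketch (repo, branch, and file names invented), reverting a whole merged branch via its merge commit looks like:

```shell
# Throwaway repo: merge a feature branch with a merge commit.
cd "$(mktemp -d)" && git init -q -b main demo && cd demo
git config user.email you@example.com && git config user.name "You"
echo base > file.txt && git add file.txt && git commit -qm "base"
git switch -q -c feature
echo extra >> file.txt && git commit -aqm "feature work"
git switch -q main
git merge -q --no-ff -m "Merge feature" feature

# Revert the entire feature in one step: -m 1 names the first parent
# (main as it was before the merge) as the mainline state to return to.
git revert --no-edit -m 1 HEAD
```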
This is one of the few hills I will die on. After working on a team that used Phabricator for a few years and going back to GitHub when I joined a new company, it really does make life so much nicer to just rebase -> squash -> commit a single PR to `main`
What was stopping you from squash -> merge -> push two new changesets to `main`? Isn't your objection actually to the specifics of the workflow that was mandated by your employer as opposed to anything inherent to merge itself?
> You should always be able to roll back main to a real state.
Well there's your problem. Why are you assuming there are non-working commits in the history with a merge based workflow? If you really need to make an incremental commit at a point where the build is broken you can always squash prior to merge. There's no reason to conflate "non-working commits" and "merge based workflow".
Why go out of the way to obfuscate the pathway the development process took? Depending on the complexity of the task the merge operation itself can introduce its own bugs as incompatible changes to the source get reconciled. It's useful to be able to examine each finished feature in isolation and then again after the merge.
> with all the other metadata (such as review comments and diagrams/figures) that never make it into source control.
I hate that all of that is omitted. It can be invaluable when debugging. More generally I personally think the tools we have are still extremely subpar compared to what they could be.
> I know a lot of people want to maintain the history of each PR, but you won't need it in your VCS.
Having worked on a maintenance team for years, this is just wrong. You don't know what someone will or won't need in the future. Those individual commits have had extra context that have been a massive help for me all sorts of times.
I'm fine with manually squashing individual "fix typo"-style commits, but just squashing the entire branch removes too much.
Disagree!
If those commits were ready for production, they would have been merged. ;)
Don't put a commit on main unless I can roll back to it.
When your PR build takes more than an hour you'll think twice before creating multiple PRs for multiple related commits (e.g. refactoring+feature) when working on a single issue.
I completely agree. It also forces better commit messages, because "maintaining the history of each PR" is forced into prose written by the person responsible for the code instead of hand-waving it away into "just check the commits" -- no thanks.
I never understood why rebase is such a staple in the git world. For me, "losing" historical data, like which branch my work was done on, is a real issue.
In the same vein, it's a real pain point that commits don't record which branch they were created on as metadata. It's always a mess to find which commits were made for which global feature/bugfix in a global gitflow process...
I'll probably look into auto-suffixing my commit messages with the current branch name, but that will only work for me, not for any contributors...
Ideally you only rebase your own commits on your own feature branch, just before merging. Having a clean commit history before merging makes the main branch/trunk more readable.
Also (and especially), it makes it way easier to revert a single feature if all the commits relevant to that feature are already grouped.
For your issue about not knowing which branch the commits are from: that's why I love merge commits and the tree representation (I personally use 'tig', but git log also has a tree representation, and GUI tools always have it too).
Sounds like you'd be a fan of Fossil (https://fossil-scm.org). See for instance: https://fossil-scm.org/home/doc/trunk/www/fossil-v-git.wiki#...
Let me expand on this with a link to the article "Rebase Considered Harmful" [0].
I also prefer Fossil to Git whenever possible, especially for small or personal projects.
> Surely a better approach is to record the complete ancestry of every check-in but then fix the tool to show a "clean" history in those instances where a simplified display is desirable and edifying
From your link. The actual issue that people ought to be discussing in this comment section imo.
THIS is the hill I will die on.
Why do we advocate destroying information/data about the dev process when in reality we need to solve a UI/display issue?
The amount of times in the last 15ish years I've solved something by looking back at the history and piecing together what happened (eg. refactor from A to B as part of a PR, then tweak B to eventually become C before getting it merged, but where there are important details that only resulted because of B, and you don't realize they are important until 2 years later) is high enough that I consider it very poor practice to remove the intermediate commits that actually track the software development process.
Isn't this just `--first-parent`? I think that should probably be the default in git. Maybe the only way this will happen is with a new SCM.
But the git authors are adamant that there's no convention for linearity, and somehow extended that to why there shouldn't be a "theirs" merge strategy to mirror "ours" (writing it out it makes even less sense, since "theirs" is what you'd want in a first-parent-linear repo, not "ours").
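Until that happens, the flag has to be passed explicitly; a small sketch with an invented throwaway repo to show the difference:

```shell
# Throwaway repo with one merged branch containing two wip commits.
cd "$(mktemp -d)" && git init -q -b main demo && cd demo
git config user.email you@example.com && git config user.name "You"
echo base > f && git add f && git commit -qm "base"
git switch -q -c feature
echo a > a && git add a && git commit -qm "wip 1"
echo b > b && git add b && git commit -qm "wip 2"
git switch -q main
git merge -q --no-ff -m "Merge feature" feature

# The full history shows every wip commit; the first-parent view walks
# the mainline only, so each merged branch collapses to its merge commit.
# (git bisect start --first-parent bisects at the same granularity.)
git log --oneline main | wc -l                 # 4 commits
git log --first-parent --oneline main | wc -l  # 2 commits
```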
Which branch your work was done on is noise, not signal. There is absolutely zero signal lost by rebasing, and it prunes a lot of noise. If your branch somehow carries information, that information should be in your commit message.
I disagree, without this info, I can't easily tell if any commit is part of a feature or is a simple hotfix. I need to rely on the commiter to include the info in the commit message, which is almost always not the case.
But you are still relying on them to name the branch in such a way it encodes that info. It is unclear why this is superior to messages in commits.
It's worse than that: the branch name is lost after a merge. That "merge branch xyz" is simply the default commit message. So it doesn't matter what you do, commit messages are all you have!
Nothing stops you from doing both rebase and merge commits.
Except perhaps crappy GUI options in GitHub. I really wish they added that option as a button.
Every commit message starts with the ticket number of whatever issue tracking system you're using. If you're not using issue tracking with a system large enough for multiple devs, you've got a much bigger problem.
It's just that "gitflow" is unnecessarily complex (for most applications). With rebase you can work more or less as with "patches" and a single master, like many projects did in the '90s, just much more comfortably and securely.
My default pull is ff-only. I don't like merging or rebasing by default.
When working in a short-lived branch, I like to rebase. I usually get no conflicts, or easy-to-solve ones. I like my small and numerous commits stacked on top of the current develop, regardless of whether we squash or not.
For long-lived branches (and technically for hard merges, though I've been using rerere more and more) merge is a better option.
What kills bisect, IMO, is large commits or commits with multiple subjects/goals. That's the reason I don't like squashed PRs.
Rebase is easy and not terrifying. Here's a 1k word article on how to do it correctly.
Or just do a merge and move on with your life.
If you want to have a linear history on main, either always rebase your branch onto main, or merge but only accept squashed commits onto main.
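The first option, sketched entirely locally (a plain `main` branch stands in for `origin/main` here; repo and file names are invented):

```shell
# Throwaway repo: main moves ahead while a feature branch is in flight.
cd "$(mktemp -d)" && git init -q -b main demo && cd demo
git config user.email you@example.com && git config user.name "You"
echo base > f && git add f && git commit -qm "base"
git switch -q -c feature
echo feat > g && git add g && git commit -qm "feature work"
git switch -q main
echo more > h && git add h && git commit -qm "main moved on"

# Replay the feature commits onto the new tip of main; afterwards the
# history is linear and a fast-forward merge into main is possible.
git switch -q feature
git rebase -q main
```

With a real remote you would `git fetch` first, rebase onto `origin/main`, and then update the PR branch with `git push --force-with-lease`.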
It's funny because I learned git on the job and we exclusively used rebase when I was learning my git fundamentals. I wouldn't say merging scares be, but it's never a tool a reach for.
> Here's a 1k word article
lol if 1k words is "not easy" for you, i think you have bigger problems than merge vs rebase.
PSA: I’m not terrified of rebase, yet it’s good to know this:
https://docs.github.com/en/get-started/using-git/about-git-r...
> Warning - Because changing your commit history can make things difficult for everyone else using the repository, it's considered bad practice to rebase commits when you've already pushed to a repository.
A similar warning is in Atlassian docs.
I think a large part of this is about how a branch is expected to be used.
Branches that people are expected to track (i.e. pull from or merge into their regularly) should never rebase/force-push.
Branches that are short-lived or only exist to represent some state can do so quite often.
Also branches that are write-only by a single person by consensus. E.g. "personal" PR branches that are not supposed to be modified by anyone but owner.
It is this, plus more:
- the web tooling must react properly to this (as GH does mostly)
- comments done at the commit level are complicated to track
- and given the reach of tools like GH, people shooting themselves in the foot with this (even experienced ones) most likely generates a decent support load for those tools' teams
Is there a reason why that recommendation cannot be changed to "don't ever force push unless you are certain no one else has fetched this branch"?
Well that's a distinction which makes sense in theory but is not realistic in practice for most projects with multiple contributors.
Over at ardour.org, we've never had an issue with this practice. Two core full time devs, dozens of others.
"fetched this branch" needs to include "started reviewing the PR", and probably other cases; it does mean switching modes for devs who usually rebase privately.
> The response is often hesitation or outright fear. I get it. Rebase has a reputation for destroying work, and the warnings you see online don’t help.
The best method for getting over the terror of destructive operations in git, when I first learned it, was literally "cp -r $original-repo $new-test-repo && go-to-town". Don't know what will happen when you run `git checkout -- $file` or whatever? Copy the entire directory, run the command, look at what happens, then decide if you want to run that in your "real" repository.
Sounds stupid maybe, but if it works, it works. Been using git for something like a decade now, and I'm no longer afraid of destructive git operations :)
> cp -r $original-repo $new-test-repo
This is almost exactly what git does, except it's a million times faster. Every commit is one of those copies, and you can instantly jump to any one of them using git checkout.
If you like this mental model, you'll feel right at home with git. You will love git reflog.
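A small sketch of that safety net, using a throwaway repo and invented file names:

```shell
# Throwaway repo with two commits.
cd "$(mktemp -d)" && git init -q -b main demo && cd demo
git config user.email you@example.com && git config user.name "You"
echo one > f && git add f && git commit -qm "first"
echo two >> f && git commit -aqm "second"

# "Destroy" the second commit with a hard reset...
git reset -q --hard HEAD~1

# ...then undo the reset: the reflog (see `git reflog`) records every
# position HEAD held, so HEAD@{1} is where we were just before it.
git reset -q --hard "HEAD@{1}"
```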
One step further, while staying within the spirit of the tool, would be to git clone your repository locally.
And one step further still: just create a new branch to deal with the rebase/merge.
Yes, there are many UX pain points in using git, but it also has the great benefit of extremely cheap and fast branching for experimentation.
Yeah, that works for normal "let's try out what happens when I do this", but it can get messy, depending on what you're trying out. That's why I always recommend beginners literally "cp -r" the entire directory instead, git repository included, so they feel freer to experiment and aren't afraid of losing anything.
I guess it's actually more of a mental "divider" than anything; it tends to relax people more when they can literally see that their old stuff is still there, and I think git branches can "scare" people in that way.
Granted, this is about people very new to git, not people who understands what is/isn't destructive, and just because a file isn't on disk doesn't mean git doesn't know exactly what it is.
> Granted, this is about people very new to git, not people who understands what is/isn't destructive, and just because a file isn't on disk doesn't mean git doesn't know exactly what it is.
I've been using git almost exclusively since 2012 and feel very comfortable with everything it does and where the sharp edges are. Despite that, I still regularly use the cp -r method when doing something even remotely risky. The reason being, that I don't want to have to spend time unwinding git if I mess something up. I have the understanding and capability of doing so, but it's way easier to just cp -r and then rm -rf && cp -r again if I encounter something unexpected.
Two examples situations where I do this:
1. If I'm rebasing or merging with commits that have a moderate to high risk of merge conflicts that could be complicated. I might get 75% through and then hit that one commit where there's a dozen spots of merge conflict and it isn't straightforwardly clear which one I want (usually because I didn't write them). It's usually a lot easier to just rm -rf the copy and start over in a clean cp -r after looking through the PR details or asking the person who wrote the code, etc.
2. If there are uncommitted files in the repo that I don't want to lose. I routinely slap personal helper scripts or Makefiles or other things on top of repos to ease my workflow, and those don't ever get committed. If they are non-trivial then I usually try to keep a copy of them somewhere else in case I need to restore, but I'm not always super disciplined about that. The cp -r method helps a lot.
There are more scenarios but those are the big two that come to mind.
In my experience some of the trickiest situations are around gitignore file updates, crlf conversion, case [in]sensitivity, etc., where clones and branches are less useful as a testing ground.
Whoa. Well, if it really works for you. The thing is, git has practically zero "destructive" commands; you can almost always (unless you've run the garbage collector aggressively) return to the previous state of anything committed to it. `git reflog` is a good starting point.
I think I've seen someone build a user-friendlier `git undo` front-end for it.
I expanded on it more here: https://news.ycombinator.com/item?id=46601600
TL;DR is: people feel safer when they can see that their original work is safe. While just making a new branch and playing around there is safe in 99% of cases, people are more willing to experiment when you isolate what they want to keep.
The fastest way to eliminate fear is to practice. I had the team go through it one day. They didn't get a choice. I locked us on a screen share until everyone was comfortable with how rebasing works. The call lasted maybe 90 minutes. You just have to decide one day that you (or the team) will master this shit, spend a few hours doing it, and move on.
Rebase is a super power but there are a few ground rules to follow that can make it go a lot better. Doing things across many smaller commits can make rebase less painful downstream. One of the most important things is to learn that sometimes a rebase is not really feasible. This isn't a sign that your tools are lacking. This is a sign that you've perhaps deviated so far that you need to reevaluate your organization of labor.
One of the many things I like about fossil is the 'undo' command [0].
Also, since you can choose to keep the fossil repo in a separate directory, that's an additional space saver.
Why copy anything at all? You just need to preserve the head, and that's it. It's as simple as `git branch bkp`.
In order for that to work you need some level of confidence that rebase doesn't mess with your branch. Rebase has a reputation for "rewriting history".
Been using git since 2008, and this looks more intimidating than helpful. git rebase takes a list of patches and applies them one by one on top of a given "base" commit. And that's it. It's not all this complicated git command soup. It's just patches. No objects, no sha1s, no metadata, no branches, just literal textual diffs applied in order. It's the dumbest and also one of the most powerful things about git if you care at all about a readable history. If you don't, that's fine, but in some circles, e.g. most (if not all) open source projects, patch management and history hygiene is a very important part of good collaboration.
Github is not Git but I find the Squash and Merge functionality on Github's Pull Request system means I no longer need to worry about rebasing or squashing my commits locally before rebasing.
At work though it is still encouraged to rebase, and I have sometimes forgotten to squash and then had to abort, or just suck it up and resolve conflicts from my many local commits.
Couldn't agree more. Squash merges to main ONLY.
That way, I don't care if your branch contains 100 commits or 1 commit. I don't need to worry about commit messages like:
- fix 1
- fix 2
- dfljfdlkfdj
- does it work now?
Do whatever you want with your commits on your feature branch. Just make sure the title of your PR is clean and follows our formatting. Git history is always well formatted and linear.
It's the ideal solution.
This
Rebase only makes sense if you're making huge PRs where you need to break them down into smaller commits to have them make sense.
If you keep your PRs small, squashing it works well enough, and is far less work and more consistent in teams.
Expecting your team to carefully group their commits and have good commit messages for each is a lot of unnecessary extra work.
Squash is not Github specific and is part of git:
git merge --squash
Right, but they are referring to configuration on a GitHub repository that can make squash merge automatic for all pull request merges.
e.g. When clicking the big green "Merge pull request" button, it will automatically squash and merge the PR branch in.
So then I don't need to remind or wait for contributors to do a squash merge before merging in their changes. (Or worse, forget to squash merge and then I need to fix up main).
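For anyone who wants the same result without the button, a sketch of the manual equivalent (repo, branch names, and the PR number are all invented):

```shell
# Throwaway repo: a feature branch with two messy commits.
cd "$(mktemp -d)" && git init -q -b main demo && cd demo
git config user.email you@example.com && git config user.name "You"
echo base > f && git add f && git commit -qm "base"
git switch -q -c feature
echo wip > g && git add g && git commit -qm "fix 1"
echo wip2 > h && git add h && git commit -qm "does it work now?"

# Squash-merge: stage the branch's combined diff on main as one pending
# change (no commit is created), then commit it with one clean message.
git switch -q main
git merge -q --squash feature
git commit -qm "Add feature X (#123)"
```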
git rebase conflict resolution is a lot less scary with the zdiff3 merge.conflictStyle option.
Also incremental rebasing with mergify/git-imerge/git-mergify-rebase/etc is really helpful for long-lived branches that aren't merged upstream.
https://github.com/brooksdavis/mergify https://github.com/mhagger/git-imerge https://github.com/CTSRD-CHERI/git-mergify-rebase https://gist.github.com/nicowilliams/ea2fa2b445c2db50d2ee650...
I also love git-absorb for automatic fixups of a commit stack.
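For reference, the conflict-style option mentioned above is set like this (shown per-repository in a scratch repo; add `--global` to apply it everywhere):

```shell
# Throwaway repo for demonstration purposes.
cd "$(mktemp -d)" && git init -q demo && cd demo

# zdiff3 adds the (deduplicated) common-ancestor text between the
# conflict markers, so you can see what each side changed relative to it.
git config merge.conflictStyle zdiff3
```

Note that `zdiff3` needs a reasonably recent Git; older versions only understand `diff3`.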
Rebasing replays your commits on top of the current main branch, as if you’d just created your branch today. The result is a clean, linear history that’s easier to review and bisect when tracking down bugs.
The article discusses why contributors should rebase their feature branches (pull request).
The reason they give is for clean git history on main.
The more important reason is to ensure the PR branch actually works when merged into current main. If I add my change onto main, does it then build, pass all tests, etc.? What if my PR branch is old, and new commits have been added onto main that I don't have in my PR branch? Then I can merge and break main. That's why you need to update your PR branch to include the newer commits from main (and the "update" could be a rebase or a merge from main or possibly something else).
The downside of requiring contributors to rebase their PR branch is that (1) people are confused by rebase, and (2) if your repository has many contributors and frequent merges into main, contributors will need to frequently rebase their PR branch, and after each rebase the PR checks need to re-run, which can be time consuming.
My preference with Github is to squash merge into main[1] to keep clean git history on main. And to use merge queue[2], which effectively creates a temp branch of main+PR, runs your CI checks, and then the PR merge succeeds into main only if checks pass on the temp branch. This approach keeps super clean history on main, where every commit includes a specific PR number, and more importantly minimizes friction for contributors by reducing frequent PR rebases on large/busy repos. And it ensures main is never broken (as far as your CI checks can catch issues). There's also basically no downside for very small repos either.
1. https://docs.github.com/en/repositories/configuring-branches...
2. https://docs.github.com/en/repositories/configuring-branches...
Why would you rebase onto master locally in a team environment?
The way to do this is pull requests on the remote.
I see no need to ever rebase manually, just merge on your branch and always fast-forward squash merge (only sane default) with GitHub/GitLab/whatever.
Squash merges are a hacky solution to the git bisect problem that was solved correctly by --first-parent 20 years ago. There are fully employed software developers working on important stuff who were literally never alive in a world where squash merges were needed.
Don't erase history. Branch to a feature branch, develop in as many commits as you need, then merge to main, always creating a merge commit. Oftentimes, those commit messages that you're erasing with a squash are the most useful documentation in the entire project.
Using git history as documentation is hacky. A majority of feature-branch commit messages aren't useful ("fix test case X", "fix typo", etc.), especially when you are accepting external contributions. If I wanted to use git history as a form of documentation (I don't; I want real documentation pages), I'd want the history curated into meaningful commits with descriptive commit messages, and squash merging is a great way to achieve that. Git bisect is not the only thing I do with git history, after all.
And if I'm using GitHub/Gitlab, I have pull requests that I can look back on which basically retain everything I want from a feature branch and more (like peer review discussion, links to passing CI tests, etc). Using the Github squash merge approach, every commit in the main branch refers back to a pull request, which makes this super nice.
I don't see how this is not the first answer. What tools does everybody else work with??
> Before rebasing, push your current work to your remote fork. This gives you a backup you can recover from if anything goes wrong
I don't follow this. Just abort the rebase?
I have lost work ~irretrievably via rebase.
I was working on a local branch, periodically rebasing it to master. All was well, my git history was beautiful etc.
Then down the line I realised something was off. Code that should have been there wasn't. In the end I concluded some automatic commit application while rebasing gobbled up my branch changes. Or frankly, I don't even entirely know what happened (this is my best guess), all I know is, suddenly it wasn't there.
No big deal, right? It's VCS. Just go back in time and get a snapshot of what the repo looked like 2 weeks ago. Ah. Except rebase.
I like a clean linear history as much as the next guy, but in the end I concluded that the only real value of a git repo is telling the truth and keeping the full history of WTF really happened.
You could say I was holding it wrong, that if you just follow this one weird old trick doctors hate, rebase is fine. Maybe. But not rebasing and having a few more squiggles in my git history is a small price to pay for the peace of mind that my code change history is really, really all there.
Nowadays, if something leaves me with a chance that I cannot recreate the repo history at any point in time, I don't bother. Squash commits and keeping the branch around forever are OK in my book, for example. And I always merge with --no-ff. If a commit was never on master, it shouldn't show up in it.
> Just go back in time and get a snapshot of what the repo looked like 2 weeks ago. Ah. Except rebase.
This is false.
Any googling of "git undo rebase" will immediately point out that the git reflog stores all rebase history for convenient undoing.
Shockingly, git being a VCS has version control for the... versions of things you create in it, no matter if via merge or rebase or cherry-pick or whatever. You can of course undo all of that.
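The undo described above is short. A minimal sketch in a throwaway repo, with a `git reset --hard` standing in for the rebase gone wrong (a real botched rebase leaves the same kind of reflog trail):

```shell
# Recovering a branch's old tip via the reflog, in a throwaway repo.
set -e
tmp=$(mktemp -d); cd "$tmp"; git init -q
git config user.email you@example.com
git config user.name You
git checkout -qb main
echo a > f; git add f; git commit -qm "base"
git checkout -qb topic
echo b > g; git add g; git commit -qm "topic work"
git reset -q --hard main       # stand-in for a rebase that went wrong
git reflog topic               # the old tip is still recorded here
git reset -q --hard topic@{1}  # jump back to the pre-reset state
test -f g                      # the "lost" work is back
```

For an actual rebase, `git reflog` on the branch shows an entry per rewritten commit plus the starting point, so the same `git reset --hard <old-tip>` applies.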
Up to a point - they are garbage collected, right?
And anyway, I don't want to dig this deep in git internals. I just want my true history.
Another way of looking at it is that given real history, you can always represent it more cleanly. But without it you can never really piece together what happened.
The reflog is not a git internal -- it is your local repository's "true history", including all operations that you ran.
The `git log` history that you push is just that curated specific view into what you did that you wish to share with others outside of your own local repository.
The reflog is to git what Ctrl+Z is to Microsoft Word. Saying you don't want to use the reflog to undo a rebase is a bit like saying you don't want to use Ctrl+Z to undo mistakes in Word.
(Of course the reflog is a bit more powerful of an undo tool than Ctrl+Z, as the reflog is append-only, so undoing something doesn't lose you the newer state, you can "undo the undo", while in Word, pressing Ctrl+Z and then typing something loses the tail of the history you undid.)
Indeed, like for Word, the undo history expires after a configurable time. The default is 90 days for reachable changes and 30 days for unreachable changes, which is usually enough to notice whether one messed up one's history and lost work. You can also set it to never expire.
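The expiry settings mentioned above are ordinary git config keys. A small sketch in a throwaway repo (so the change doesn't touch a real one):

```shell
# Making reflog entries never expire; defaults are shown in the comments.
set -e
tmp=$(mktemp -d); cd "$tmp"; git init -q
git config gc.reflogExpire never             # default: 90 days (reachable entries)
git config gc.reflogExpireUnreachable never  # default: 30 days (unreachable entries)
git config gc.reflogExpire                   # prints: never
```

Set these with `--global` if you want the policy for all your repositories.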
It is fine for people to prefer merge over rebase histories to share the history of parallel work (if in turn they can live with the many drawbacks of not having linear history).
But it is misleading to suggest that rebase is more likely to lose work from interacting with it. Git is /designed/ to not lose any of your work on the history -- no matter the operation -- via the reflog.
But it's at best much harder to find stuff in the reflog than to simply use git's history browsing tools. "What's the state of my never-rebased branch at time X" is a trivial question to answer. Undoing a rebase, at best, involves some hard resets or juggling commit hashes.
None of it is impossible, but IMHO it's a lot of excitement of the wrong kind for essentially no reward.
> "What's the state of my never-rebased branch at time X" is a trivial question to answer.
Yes, but only because of reflog.
git log will also do the job, even if you never checked out the branch in this workspace.
>the worst case scenario for a rebase gone wrong is that you delete your local clone and start over.
Wouldn't it be enough to simply back up the branch (eg, git checkout -b current-branch-backup)? Or is there still a way to mess up the backup as well?
Yeah, deleting your local clone and starting over should normally not be necessary, unless you really mess things up badly.
The "local backup branch" is not really needed either because you can still reference `origin/your-branch` even after you messed up a rebase of `your-branch` locally.
Even if you force-pushed and overwrote `origin/your-branch` it's most likely still possible to get back to the original state of things using `git reflog`.
For amateurs at Git, recovery branches/tags are probably easier to switch back to than digging through reflog. Particularly if you're interacting with Git via some GUI that hides reflog away as some advanced feature.
Nice, git rerere would have saved me in the past.
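For anyone else discovering rerere here: it is off by default and enabled with one config setting, after which git records each conflict resolution and replays it when the identical conflict reappears (e.g. across repeated rebases of the same branch). Sketch in a throwaway repo:

```shell
# Enabling rerere ("reuse recorded resolution") for a repository.
set -e
tmp=$(mktemp -d); cd "$tmp"; git init -q
git config rerere.enabled true
# Recorded resolutions live under .git/rr-cache and are applied
# automatically the next time the same conflict shows up.
git config rerere.enabled   # prints: true
```

Use `git config --global rerere.enabled true` to turn it on everywhere.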
> I always use VS Code for this step. Its merge conflict UI is the clearest I’ve found: it shows “Accept Current Change,” “Accept Incoming Change,” “Accept Both Changes,” and “Compare Changes” buttons right above each conflict.
I still get confused by vscode’s changing the terms used by Git. «Current» vs «incoming» are not clear, and can be understood to mean two different things.
- Is “current” what is on the branch I am rebasing on? Or is it my code? (It’s my code)
- Is “incoming” the code I’m adding to the repo? Or is it what i am rebasing on to? (Again, the latter is correct)
I find that many tools are trying to make Git easier to understand, but changing the terms is not so helpful. Since different tools seldom change to the same words, it just clutters any attempts to search for coherent information.
Git's "ours"/"theirs" terminology is often confusing to newcomers, especially when from a certain (incorrect, but fairly common) point of view their meaning may appear to be swapped between merge and rebase. I think in an attempt to make the terminology less confusing UIs tend to reinvent it, but they always fail miserably, ending up with the same problem, just with slightly different words.
This constant reinvention makes the situation even worse, because now the terminology is not only confusing, but also inconsistent across different tools.
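The "swap" can be seen directly in a throwaway repo: during a rebase, the "ours"/HEAD/"current" side of a conflict is the branch you rebased onto, while "theirs"/"incoming" is your own commit. Branch and file names below are illustrative.

```shell
# Demonstrating which side is "ours" during a rebase conflict.
set -e
tmp=$(mktemp -d); cd "$tmp"; git init -q
git config user.email you@example.com
git config user.name You
git checkout -qb main
echo original > f; git add f; git commit -qm "base"
git checkout -qb topic
echo topic-version > f; git commit -qam "topic change"
git checkout -q main
echo main-version > f; git commit -qam "main change"
git checkout -q topic
git rebase main >/dev/null 2>&1 || true   # conflicts on f
cat f
# <<<<<<< HEAD                 <- "ours"/"current": main's version!
# main-version
# =======
# topic-version
# >>>>>>> ... (topic change)   <- "theirs"/"incoming": your own commit
git rebase --abort
```

The reason is mechanical: rebase checks out the upstream branch and cherry-picks your commits onto it, so during each step HEAD is on the upstream side.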
We use SVN at work and it's a nightmare there too, "mine" and "theirs" and whatnot. I frequently end up looking at historical versions just to verify which is which.
If I have a merge conflict I typically have to be very conscious about what was done in both versions, to make sure the combination works.
I wish for "working copy" and "from commit 1234 (branch xyz)" or something informative, rather than confusing catch-all terms.
Please tell me you are using Git-SVN or Hg-SVN. Using bare SVN as a client hasn't been necessary in over a decade.
Using SmartSVN which makes life a fair bit better but still keeps this confusing terminology.
We'll be migrating to Git this year though so.
For reference, the codebase is over 20 years old, and includes binary dependencies like libraries. Makes it easy to compile old versions when needed, not so easy on the repository size...
That terminology is identical in git, likely inspired by cvs and svn, so that bit probably won't improve.
It's inherently confusing to juggle different trees, and clearly you need some terminology for it. At least this one has become a bit of a standard.
Main reason is we have relatively few merge conflicts despite merging a lot. So I always forget between instances.
I think even presenting them as options makes it even more confusing to newcomers. Usually I find that neither is correct and there's a change on both sides I need to manually merge (so I don't even pay attention to the terminology), but I've seen co-workers just blindly choose their changes because it's familiar looking then get confused when it doesn't work right.
For merges, "current" is the branch you are on. For rebases, it helps to see them as a series of cherry-picks: "current" is the branch you would be on while doing the cherry-pick equivalent to this step of the rebase.
Maintaining linear history is arguably more work. But excessively non-linear history can be so confusing to reason over.
Linear history is like reality: One past and many potential futures. With non-linear history, your past depends on "where you are".
----- M -----+--- P
            /
----- D ---+
Say I'm at commit P (for present). I got married at commit M and got a dog at commit D. So I got married first and got a dog later, right? But if I go back in time to commit D where I got the dog, our marriage is not in my past anymore?! Now my wife is sneezing all the time. Maybe she has a dog allergy. I go back in time to commit D but can't reproduce the issue. Guess the dog can't be the problem.
> So I got married first and got a dog later, right?
No. In one reality, you got married with no dog, and in another reality you got a dog and didn't marry. Then you merged those two realities into P.
Going "back in time to commit D" is already incorrect phrasing, because you're implying linear history where one does not exist. It's more like you're switching to an alternate past.
The point is that it's harder to reason over.
I don't really agree that it's harder to reason over, in the sense of understanding the consequences, but I do agree that a linear history is superior for troubleshooting. Just like another comment pointed out, single squashed commits onto a main branch make it easier to troubleshoot because you go from a working state to a non-working state between exactly two commits.
There are other tricky timing issues with staging/prod parallel branching models too: the most recent merge (to prod) contains older content, so time slips. Maybe for most people it's obvious, but it caused me confusion a few times when comparing various docker images.
> the most recent merge (to prod) contains older content
Can't that also happen with a rebase? Isn't it an (all too easy to make) error any time you have conflicting changes from two different branches that you have to resolve? Or have I misunderstood your scenario?
You omitted the merge commit. M is taken so let's go with R. You jump back to M to confirm that the symptoms really don't predate the marriage. Then you jump to R to reproduce and track down the underlying cause of the bad interaction.
Had you simply rebased you would have lost the ability to separate the initial working implementation of D from the modifications required to reconcile it with M (and possibly others that predate it). At least, unless you still happen to have a copy of your pre-rebase history lying around but I prefer not to depend on happenstance.
> Had you simply rebased you would have lost the ability to separate the initial working implementation of D from the modifications required to reconcile it with M
I'd say: cleaning that up is an advantage. Why keep that around? It wouldn't be necessary if there was no update on the main branch in the meantime. With rebase you just pretend you started working after that update on main.
For the reason I stated that you quoted right there. Separating the potentially quite large set of changes of the initial (properly working) feature from the (hopefully not too large) set of changes reconciling that feature with the other (apparently incompatible in this example) feature. It provides yet another option for filtering the irrelevant from the relevant thus could prove quite useful at times.
Recall that the entire premise is that there's a bug (the allergy). So at some point a while back something went wrong and the developer didn't notice. Our goal is to pick up the pieces in this not-so-ideal situation.
What's the advantage of "cleaning up" here? Why pretend anything? In this context there shouldn't be a noticeable downside to having a few extra kilobytes of data hanging around. If you feel compelled to "clean up" in this scenario I'd argue that's a sign you should be refactoring your tools to be more ergonomic.
It might be worthwhile to consider the question, why have history in the first place? Why not periodically GC anything other than the N most recent commits behind the head of each branch and tag?
Until you commit to trunk, you haven't gotten the dog. You're proposing getting a dog. The action of getting the dog happens at merge to trunk. That's when history is created.
That's also because there are multiple concerns being presented as the same exposed output through a common feature. Having one branch that provides a linear, logical overview of the project's feature progression is not incompatible with having many other branches with all kinds of messes going back and forth, merging and forking each other, and so on.
In my experience, when there is a bug, it's often quicker to fix it without looking at past commits, even when a regression occurs. If it's not obvious from the current state of the code, asking whoever touched that part last will generally be a better shortcut, because there is so much more in that person's mind than in the whole git history.
Yes, logs and commit history can bring the "aha" insight, and on some rare occasions it's nice to have git bisect at hand.
Maybe that’s just me, and the pinnacle of best engineers will always trust the source tree as most important source of information and starting point to move forward. :)
I don't understand why I would push my local branch to origin. This will (potentially) require multiple push -f. I prefer to share the work in a state I consider complete.
As an alternative, just create a new branch! `git branch savepoint-pre-rebase`. That's all. This is extremely cheap (just copy a reference to a commit) and you are free to play all you want.
You are a little more paranoid? `git switch -c test-rebase` and work on the new branch.
> the worst case scenario for a rebase gone wrong is that you delete your local clone and start over. That’s it. Your remote fork still exists.
This is absolute nonsense. You commit your work, and make a "backup" branch pointing at the same commit as your branch. The worst case is you reset back to your backup.
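The backup-branch recipe described above, sketched in a throwaway repo. A `git reset --hard` to main stands in for a rebase gone wrong; branch names are illustrative.

```shell
# Backup branch before a risky history rewrite, in a throwaway repo.
set -e
tmp=$(mktemp -d); cd "$tmp"; git init -q
git config user.email you@example.com
git config user.name You
git checkout -qb main
echo a > f; git add f; git commit -qm "base"
git checkout -qb feature
echo b > g; git add g; git commit -qm "feature work"
git branch backup          # cheap: just another pointer to this commit
git reset -q --hard main   # stand-in for a rebase that went wrong
git reset -q --hard backup # worst case: reset back to the backup
test -f g                  # the work is intact
```

When you're happy with the rewritten branch, delete the pointer with `git branch -D backup`.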
> Rebase has a reputation for destroying work, and the warnings you see online don’t help.
Everyone using git needs to accept the following. Say it out loud if you have to: no command in git can ever modify or delete a commit.
After a botched rebase your old work is one simple reset away using the reflog. Then you can have another go or reach out for help.
When things get messy I use Sublime Merge with two tabs, one with the code that's open in VS Code and one with the same project but different branch/commit. It works well on Linux. I've managed to make it work with Windows + WSL but I don't recommend it.
Coding agents are excellent at solving merge and rebase conflicts according to natural language instruction.
Just do a normal merge, then squash all your commits into one using rebase; then a rebase onto a branch is easy.
In about 12 years of using git (jj user now) I almost never rebased through the CLI, but I found shuffling branches around in a GUI pretty intuitive. I liked GitUp[0], which gave me undo way before jj existed.
The common view that a Git GUI is a crutch is very wrong, even pernicious. To me it is the CLI that is a disruptive mediation, whereas in a GUI you can see and manipulate the DAG directly.
Obligatory jj plug: before jj, I would have agreed with the top comment[1] that rebasing was mostly unnecessary, even though I was doing it in GitUp pretty frequently — I didn't think of it as rebasing because it was so natural. Now that I use jj I see that the cost-benefit analysis around git rebase is dominated by the fact that both rebasing and conflict resolution in git are a pain in the ass, which means the benefit has to be very high to compensate. In jj they cost much less, so the neatness benefit can be quite small and still be worth it. Add on the fact that Claude Code can handle it all for you and the cost is down to zero.
[0]: https://gitup.co/
was fully expecting:
mv folder folder-old
git clone git@github/folder
git rebase appears to be a random number generator to me. I've got two PRs in flight, and I realize that one branch requires something I did in the other branch. I go merge branch one, then rebase branch two off main. Half the time: success, get on with my day. The other half of the time: anarchy, madness, dogs and cats living together.
I remain terrified.
TIL ppl are afraid of rebase lmao
i hope ai agents read this