I don't know how many committers have been on the average project I've worked on, but it's probably 25+, and I've worked on several with 50+ - and I don't know how you'd even make Git work at that sort of scale. Obviously people do actually do this, so I assume it must work somehow; I just don't see how it's going to work particularly well.
The larger projects I've worked on have typically used Perforce, though some of the older ones used Alien Brain (which is pretty terrible). The check in/check out workflow, which is the same in each case, is basically what makes it all work once you get past a handful of people. Simply being able to see who else is working on an (almost certainly perfectly cleanly mergeable) file that you're contemplating modifying is a big time-saver.
(I've used SVN, at a much smaller scale. It has similar Lock/Unlock functionality, which is a good start, but its performance in general seemed to be really bad: locking a few hundred files might take a full minute, for example. Meanwhile, Perforce will check out 1.9 gazillion files in ten seconds - and that's only because it takes nine seconds to change NTFS permissions for that many files.)
> I don't know how many committers have been on the average project I've worked on, but it's probably 25+, and I've worked on several with 50+ - and I don't know how you'd even make Git work at that sort of scale.
Well, I actually don't understand how you could make it NOT work :) You obviously have to split branches per project/sub-project, with different repositories for different apps. You have to find the branching model that works for you; it doesn't always work with a dev branch (we don't do that - we have bug, feature, release and master branches).
SVN is so far out of this league that I don't even try to understand why people use it.
When you've got a lot of people, you've got a lot of changes - that's the long and the short of it. This is one thing the check in/check out model (as exemplified by Perforce, among others) is really good for managing. When you go to check out a file, you find out straight away if someone else has it checked out.
If you're just going to make a quick tweak, you'll probably risk it. Either they check it in first, and you've got a very minor merge, or you do it first, and they've got a similar minor merge. Not a big deal, in either case. (And when your gamble doesn't pan out, tough luck. No sympathy from anybody. You knew the risks.)
But, if you're going to make a very large, sweeping change, you'll probably be a bit more cautious. And that's fine: you can go over and talk to them, or message them, or email them, or whatever, to find out what they're doing, and coordinate your modifications appropriately.
I've literally never once found this less than somewhat useful. It's, like, the source control analogue of static typing: a huge pain in the arse if you're not used to it, but, if you've seen it play out, it's a mystery how anybody gets any work done in its absence.
(Of course, if you use git, maybe you can just email/Slack/etc. everybody on the team before you go to edit a file, just in case, and then wait for anybody to mail you back before proceeding... well, I don't deny that would work, assuming everybody checks their mails/Slack/etc. regularly enough. After all, I hear people get work done in dynamically typed languages too! But just think how much better things could be, if the version control system could look after this for you!)
I don't understand the hatred for perforce. It works really well. The times I need an offline branch to work on and keep history of commits are very rare.
I moved from git to perforce when I switched companies, and even though I actually really like git and consider myself reasonably proficient, I don't mind perforce.
My one real pain point with it isn't so bad, but I dislike how perforce tends to work at a file level instead of a commit level. It's hard for me to make several related changes which all touch the same files - say, a series of patches that all refactor the same area of code, but which I'd like to review and discuss separately, and potentially revert some or all of.
It's hard to manage this with shelves, because perforce can't handle unshelving the same file in two different CLs. I could submit all the changes to a separate stream, but perforce streams just don't usually work well for us, and it's still hard to experiment by constantly making and rolling back changes.
I guess I'm probably only used to this workflow because I have experience with git, but this is the time when I really miss the granularity of a git commit (and I'm doing a pretty gigantic refactor right now... so it's hitting me quite hard).
I recently had to do something similar with an ancient SVN repo that had to stay in SVN.
I simply started a git repo in the same base directory as the SVN repo, and did my work in there. Every time I merged a branch back to master I committed to SVN's ^/branches/dev. Just add `.svn` to `.gitignore` and `.git*` to the SVN prop ignore.
You _will_ want to merge from upstream (whatever Perforce's equivalent of `svn up` or `git pull` is) often; I was merging from upstream before every SVN commit. SVN mostly forces you to do this anyway - `svn status --show-updates` is a huge help here, though I don't know whether Perforce has a similar feature.
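For what it's worth, the overlay setup described above can be sketched roughly like this. It's a hypothetical sketch, not the commenter's exact commands: the paths, commit message, and identity are made up, and the SVN-side steps are left as comments because they need a real SVN working copy to run against.

```shell
set -e
repo=$(mktemp -d)                 # stand-in for the existing SVN working copy
cd "$repo"
git init -q                       # overlay a git repo in the same base directory
printf '.svn/\n' > .gitignore     # keep SVN's metadata out of git
git add .gitignore
git -c user.email=dev@example.com -c user.name=dev \
    commit -qm "overlay git on SVN checkout"
# svn propset svn:ignore '.git*' .   # keep git's metadata out of SVN
# ...work on git branches; after each merge back to master:
# svn update                         # merge from upstream first
# svn commit -m "sync from local git master"
```

From here you do day-to-day work entirely in git, and SVN only ever sees the merged result.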
same. i mean, it could be that perforce has great visual tools and people prefer complicated, esoteric cli tools. it has the perception of being more hardcore.
perforce's APIs are actually pretty good as well. they aren't documented that well, but they are easy enough to build some complicated tools with.
Hmm... I have to say the APIs and command line tooling is not where Perforce shines ;)
I found the APIs generally a disaster, and rapidly gave up on them. It was much easier to just run p4.exe and scrape the output. But... oh god. That's not saying much. The command line client was shit too. It eventually proved possible to get everything I wanted, but the data was never quite in the right format, the available queries were never quite suitable, and the results were never quite normalized enough. In the end I had to run p4.exe several times, discard a lot of redundant data along the way, and then cross-reference the multiple sets of results afterwards to get what I wanted.
(One thing I had hopes for initially was p4's supposedly Python pickle-friendly output. But this was no good either: from memory, p4 produces multiple pickles' worth of data in one go, but Python will only read one pickle at a time, so you couldn't actually read it in from Python without a bit of work. It really made me wonder whether anybody actually tested any of this stuff. It felt like the thing had been coded by a bunch of Martians, working from a description of the process written by people who'd never used Python and weren't even programmers in the first place.)
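The "bit of work" here is basically just looping until the stream runs dry. A minimal sketch - the `read_all` helper and the sample records are mine, not anything p4 produces, and the stream is simulated rather than real p4 output. (For what it's worth, the structured-output flag in later p4 versions is `p4 -G`, which emits Python marshal objects rather than pickles; the same loop works there with `marshal.load`.)

```python
import io
import pickle


def read_all(stream):
    """Read consecutive pickled objects from one stream until EOF."""
    objects = []
    while True:
        try:
            objects.append(pickle.load(stream))
        except EOFError:  # pickle.load raises EOFError when the stream is exhausted
            return objects


# Simulate a stream containing several pickles back to back, the way
# the comment above describes p4 writing its output.
buf = io.BytesIO()
for record in [{"depotFile": "//depot/a.c"}, {"depotFile": "//depot/b.c"}]:
    pickle.dump(record, buf)
buf.seek(0)

records = read_all(buf)  # recovers both records, not just the first
```

Not hard, but exactly the kind of thing you'd hope the tool's own documentation would mention.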