This is the question. Or, better, to github or not to github.
Once upon a time, github was a bad hosting site, because the site code is not free, and we should rather have preferred gitorious. I did. But then gitorious closed shop, and I had to go to the various projects (hosted elsewhere) that had submodules and make a commit to change the link. Not many projects, I admit, but still an unpleasant operation. Besides, all past history is now broken because of the dangling submodule link. I'm able to bisect anyway, but will my users be able to? And this problem is replicated for all repo owners. Not nice.
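For the record, repointing a submodule after its host dies looks roughly like this sketch, with throwaway repos standing in for the real ones (all names are illustrative):

```shell
# Sketch: repoint a submodule at a new host after the old one disappears.
# Throwaway repos stand in for the real ones; all names are illustrative.
set -e
export GIT_AUTHOR_NAME=me GIT_AUTHOR_EMAIL=me@example.org
export GIT_COMMITTER_NAME=me GIT_COMMITTER_EMAIL=me@example.org
work=$(mktemp -d)

# A stand-in for the dependency that used to live on gitorious:
git init -q "$work/dep"
git -C "$work/dep" commit -q --allow-empty -m "dep history"

# A superproject carrying it as a submodule:
git init -q "$work/super"
cd "$work/super"
git -c protocol.file.allow=always submodule --quiet add "$work/dep" dep
git commit -q -m "add dep submodule"

# The host disappears: rewrite the recorded URL and propagate it.
git config --file .gitmodules submodule.dep.url "$work/dep-new-home"
git submodule sync -- dep
git commit -q -am "dep: move submodule to new host"

# Older commits still record the dead URL, which is exactly the
# bisect problem described above.
```

The fix only helps from the fixing commit onward; everything before it stays broken.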
So, besides self-hosting (unfeasible for whole-kernel repos) I moved to github. Well, if I use it only as a git repo, why should I care that the code (which I do not use) is not free? Maybe because I contribute visibility to that specific unfree provider, but they were "friendly" guys.
Now, they are microsoft. Same people. Same site. Different owner, different money flow. Shall I (we) change attitude? Most smart people say no, that nothing changed. I'm aware the new owner is not worse than most other companies -- but they are the same ones who wanted to kill us out of the market, before turning into friends who would still love it if we disappeared.
So, I feel a little uneasy, and I'm now wondering where to push my yet-unpushed projects (while keeping previous stuff on github for several reasons -- mostly link-rusting issues).
How does the free software community feel in this respect?
thanks /alessandro
With remote git repository hosting we have many options.
You could self-host gogs or gitlab, or use one of the many public instances of these, e.g. notabug.org or 0xacab.org. Or just host good old cgit somewhere safe. Or indeed keep using github as a place/mirror to put code. But with repository hosting we have a lot of choice - in the end I suppose it depends on what you want to put there and what you want to be public.
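As a minimal sketch of the self-hosting route: a bare repository reachable over ssh is all git itself needs (server and project names are illustrative; a local path stands in for the server here):

```shell
# Minimal self-hosting sketch: a bare repository as the shared origin.
# A local path stands in for ssh://yourserver/srv/git; names are illustrative.
set -e
export GIT_AUTHOR_NAME=me GIT_AUTHOR_EMAIL=me@example.org
export GIT_COMMITTER_NAME=me GIT_COMMITTER_EMAIL=me@example.org
work=$(mktemp -d)

# On a real server this would be: ssh yourserver git init --bare /srv/git/myproject.git
git init --bare -q "$work/myproject.git"

# Locally: point your repo at it and publish.
git init -q "$work/checkout"
cd "$work/checkout"
echo "hello" > README
git add README
git commit -q -m "initial commit"
git remote add origin "$work/myproject.git"
git push -q -u origin HEAD
```

Web frontends like cgit or gitweb can then be layered on top of the same bare repository.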
Best wishes.
Alessandro Rubini:
> This is the question. Or, better, to github or not to github.
> [...]
thanks /alessandro

_______________________________________________
Discussion mailing list
Discussion@lists.fsfe.org
https://lists.fsfe.org/mailman/listinfo/discussion
This mailing list is covered by the FSFE's Code of Conduct. All participants are kindly asked to be excellent to each other: https://fsfe.org/about/codeofconduct
A lot of people use Gitlab precisely because you can self-host a Gitlab instance - but if you use them as a service provider, it's easy to leave.

On Mon, 27 Aug 2018 at 22:47, Duncan dguthrie@posteo.net wrote:
> With remote git repository hosting we have many options.
> [...]
Hello,
Two cents from a non-qualified non-programmer here.
On Mon, Aug 27, 2018 at 10:54:15PM +0100, David Gerard wrote:
A lot of people use Gitlab precisely because you can self-host a Gitlab instance - but if you use them as a service provider, it's easy to leave.
The beauty of git is that it is decentralized and doesn't even need something like Gitlab to develop software collaboratively. Alessandro summed up the situation already.
What I see as the crucial part is the "social" component. I'm afraid this somewhat derails Alessandro's intended discussion as my point totally ignores "who" the current owner of github is.
If you have a project and are looking for more developers to join it, you need some kind of visibility so potential developers become aware of you. In that sense, github serves as a social network, and its current position is close to amazon's or ebay's - and that, I suspect, is why they even bothered to buy it.
Sure, you can set up your own ebay-like website or join one of the existing alternatives, but if you really need to get rid of your old stuff by the end of the week, your best bet is wherever the most eyes will see your offer.
So the gap we may need to close is how potential developers (and employers) get to know your project, and its code that might be hosted on a Raspberry Pi in your in-laws' living room.
[...] not worse than most other companies -- but they are the same ones who wanted to kill us out of the market, before turning into friends who still would love if we disappeared.
Doesn't matter to me. They could have turned by 180° and LOVE us! I don't want us to rely on "good faith" that this may be the case. We need a decentralized infrastructure that makes their intentions irrelevant. It would be like them taking part in torrents: file transfers would be faster if they joined, but still possible if they didn't.
Greetings,
Guido
Am Dienstag 28 August 2018 01:19:54 schrieb Guido Arnold:
We need a decentralized infrastructure that makes their intentions irrelevant.
This is the direction where I believe we must head.
However while doing so, as always in a long struggle, we need to be pragmatic. So each of us shall take steps as they can.
Github is
* central by design,
* a service that earns money with proprietary software development,
* profiting from being the biggest (aka network effects),
* based on proprietary software,
* also offering (and pushing) only one software configuration management product (git).
So if we all want to have a good choice, we need to work against those effects. Possible ways to act against some aspects:
* Go to the competition, e.g. Bitbucket (professional, proprietary, but offers hg), or Gitlab (neo-proprietary, no hg).
* Pay for your service (so others can make money with service to you). E.g. if you are a company, try phacility (a service based on Free Software, offering a choice of SCMs, but expensive).
* Self-host if you can (e.g. we self-host hg and wald.intevation.de based on old Fusion forge).
* Use hg or other trackers if you can.
* Advertise your software in more general places (e.g. pypi or npm).
* Learn to use other SCMs, trackers, build systems and so on, which is good for your understanding as well (because you learn what is concept and what is just github sugar making you mentally fat. >:)) Embrace contributing to Free Software with other tools.
But if there is a Free Software product developed on github, for pragmatic reasons you can collaborate there.
Doing some of the other stuff may be a bit uncomfortable at first, but there are a few pluses for you personally or your company over the mid term. And for everybody: having a choice a few years down the line.
Best Regards, Bernhard
On 28.08.2018 09:29, Bernhard E. Reiter wrote:
Am Dienstag 28 August 2018 01:19:54 schrieb Guido Arnold:
We need a decentralized infrastructure that makes their intentions irrelevant.
This is the direction where I believe we must head.
Just want to throw in a relevant link to a project combining ActivityPub and git: https://github.com/forgefed/forgefed
On 08/28/2018 01:19 AM, Guido Arnold wrote:
What I see as the crucial part is the "social" component. I'm afraid this somewhat derails Alessandro's intended discussion as my point totally ignores "who" the current owner of github is.
If you have a project and are looking for more developers to join it, you need some kind of visibility so potential developers become aware of you. In that sense, github serves as a social network, and its current position is close to amazon's or ebay's - and that, I suspect, is why they even bothered to buy it.
Yes, Github has become the "Facebook" or "Google" of free software code hosting - nearly everybody uses it, and many of the biggest projects have moved to it.
This is not all bad. In my own company, we used to use our own, self-hosted Git server (we still do, for some things) which we access over SSH. This means that even though our software was always Free Software, it wasn't publicly available. Over the years (since 2011, but gaining momentum since 2014) we've been moving everything to public repositories on Github, so now our code is also publicly available, which is a good thing. (Though it's not /that/ important with regard to the question of it being Free Software or not.)
I'd say, though, that my experience is that the "social media" aspect of Github is not as important as e.g. on YouTube or eBay. People find your software if they hear of it somewhere, in distro repositories, through clients, co-workers, mailing lists, forums, etc., and it's not so important where it's hosted. PyPI and CPAN (for Python and Perl) are more important, I think, but also not really social media.
A thing that /is/ nice, though, and that makes it very irritating that Github isn't free software, is the pull request and code review functionality. After using it, it's hard to go back to inspecting diffs in terminal windows.
Now, following Microsoft's acquisition, we're considering moving to a self-hosted Gitlab server. And I hope more people will do that. I think this centralization of having one site for search, one for selling stuff, one for code, one for social interaction, etc., is the sickness of the age - and one that very much promotes the proprietary business model. So my immediate hope on hearing about Microsoft's acquisition was that this would mean Github decaying and the hosting splintering - but not in two, three or even five new pieces, but in a million little pieces. As you say, self-hosting and decentralization is the best thing we can hope for - and that is also our best hope for avoiding these giants' proprietary software and all-pervasive surveillance, to which we're becoming all too used.
Best Carsten
Hi all,
sorry for being late in the discussion but I still want to comment on one point:
On Tue, 28 Aug 2018 15:11:06 +0200 Carsten Agger wrote:
I'd say, though, that my experience is that the "social media" aspect of Github is not as important as e.g. on YouTube or eBay. People find your software if they hear of it somewhere, in distro repositories, through clients, co-workers, mailing lists, forums, etc., and it's not so important where it's hosted. PyPI and CPAN (for Python and Perl) are more important, I think, but also not really social media.
Couldn't we argue the same way if we were discussing "traditional homepage vs Facebook page"?
Sure, with every search engine you can find a homepage (almost) as fast as a Facebook page. Leaving aside that Facebook often makes it easier to find a specific person, because you can search specific properties like name, city,... But if Facebook becomes the de-facto standard for people's personal home pages, then many people will only search on Facebook; if they don't find you, you don't exist for them. Facebook has reached this state for a large number of internet users already.
All this is more or less true for code hosting with Github these days as well.
A thing that /is/ nice, though, and that makes it very irritating that Github isn't free software, is the pull request and code review functionality. After using it, it's hard to go back to inspecting diffs in terminal windows.
I think the "value added" functionality is what many "old school hackers" underestimate. Sure, you can have a plain and stupid VCS, a separate bug tracker, a separate project management tool, a separate wiki,... In this case it might be easy to move to another VCS if you have to, ignoring for a moment that you will break all the links from the bug tracker, etc. And sure, people can send their patches just by email.
But I think that's no longer the workflow and integration which will invite many new contributors to join your project. I recently read a blog post from a Gnome developer who wrote how amazing it is that they have been getting so many new contributors since they moved from their ageing git repository and bugzilla to Gitlab.
Another part of the social lock-in effect is that if you host your own project on Github, the chance is high that the 3rd-party libs you are using are also on Github. This way you can easily collaborate: mention the developer of a 3rd-party lib in your bug tracker if you need some feedback, send back a pull request, etc.
I regularly read about projects that decide against moving away from Github, exactly for all these reasons.
Cheers, Björn
On Mon, Aug 27, 2018 at 11:36 PM Alessandro Rubini rubini@gnudd.com wrote:
So, besides self-hosting (unfeasible for whole-kernel repos) I moved to github. Well, if I use it only as a git repo, why should I care that the code (which I do not use) is not free?
If you're not using the other pieces of github (issues, wiki, projects, etc), I don't see why you should care that the code distributing the content of your repository is non-free. The underlying git tool is free, your data and repository history are distributed because of git's nature, and you own them too. It would be different if you used the other tools, because those are proprietary and it is hard to move data out of them.
Maybe because I contribute visibility to that specific unfree provider,
You sure would contribute to its network effect... a hard choice to make here. When OpenStack moved off Launchpad/bzr, for example, they decided to self-host git+gerrit but used GitHub exactly to help the discoverability of the project.
but they were "friendly" guys.
Were they really? They didn't have a clear understanding of what free software/open source was, at least at the beginning, when they happily promoted 'forking' of any project hosted there, even those without a proper license.
Now, they are microsoft. Same people. Same site. Different owner, different money-flow. Shall I (we) change attitude? Most smart people say no, that nothing changed.
I'm in the "nothing changed" camp: they were not friendly before, and neither are the new owners. To be clear, they were (and are) not hostile either.
How does the free software community feel in this respect?
You need a larger sample :)
/stef
Hi Alessandro,
Alessandro Rubini rubini@gnudd.com writes:
How does the free software community feel in this respect?
There is this:
https://www.gnu.org/software/repo-criteria-evaluation.html https://www.gnu.org/software/repo-criteria.en.html
HTH,
On Wed, Aug 29, 2018 at 07:04:05AM +0200, Bastien wrote:
Hi Alessandro,
Alessandro Rubini rubini@gnudd.com writes:
How does the free software community feel in this respect?
There is this:
https://www.gnu.org/software/repo-criteria-evaluation.html https://www.gnu.org/software/repo-criteria.en.html
This is missing from the table: https://notabug.org/ Based on Gogs, which was recently forked into Gitea: https://gitea.io
--strk;
Am Mittwoch 29 August 2018 07:04:05 schrieb Bastien:
https://www.gnu.org/software/repo-criteria-evaluation.html https://www.gnu.org/software/repo-criteria.en.html
Not taking new developments into account, though, as the last evaluation is from 2016-04-13.
Thanks to all who replied. Let me comment a little on these recommendations, suggested by Bastien:
https://www.gnu.org/software/repo-criteria-evaluation.html https://www.gnu.org/software/repo-criteria.en.html
To which Bernhard (and Sandro) noted:
Not taking new developments into account, though, as the last evaluation is from 2016-04-13.
I know about this initiative. I'm pretty sure I also commented (in private) on an early draft of the criteria.
But the criteria (and the evaluation) are more about where to host a GNU package. Sure, the FSF suggests following them in general, but the repeated focus on javascript, for example, shows a bias that for many users is not relevant.
My main concerns, in the initial post, were about visibility and preservation of links over time (because of submodules, for example). Maybe the right path, as suggested by Harald and others, is to try to self-host. As a second choice, accept that nothing changed at github, which is not different from other providers, and use it as a data hosting facility (especially if we just use git and ignore the extra features). Also, using github (or gitlab, or both) as a backup is good anyway.
And yes, I mainly use the command line and git-format-patch/git-am for code exchange. I agree with Ion that we'd benefit from "better" tools for local management of the workflow, to remove some of the pressure to rely on service providers, but I personally am cmdline-minded so I can't help there.
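For readers who haven't used it, the patch-by-mail round trip looks roughly like this sketch, with two throwaway repos standing in for author and maintainer (repo and file names are illustrative):

```shell
# The patch-by-mail round trip with git-format-patch/git-am.
# Two throwaway repos stand in for author and maintainer; names illustrative.
set -e
export GIT_AUTHOR_NAME=me GIT_AUTHOR_EMAIL=me@example.org
export GIT_COMMITTER_NAME=me GIT_COMMITTER_EMAIL=me@example.org
work=$(mktemp -d)

# Author side: turn the last commit into a mailable patch file.
git init -q "$work/author"
cd "$work/author"
echo "fix" > driver.c
git add driver.c
git commit -q -m "driver: fix a bug"
git format-patch -1 -o "$work/outgoing" HEAD   # in real life: git send-email

# Maintainer side: apply it, preserving authorship and message.
git init -q "$work/maint"
cd "$work/maint"
git commit -q --allow-empty -m "base"
git am -q "$work/outgoing"/0001-*.patch
git log -1 --format=%s   # prints: driver: fix a bug
```

No hosting provider is involved at any point; the mbox file travels by mail.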
But I have a question for Bernhard, who says, among other things I agree with:
- Use hg or other trackers if you can.
why? It's already oh so difficult to get people to make decent commits in git, where at least I can point to all the world doing that...
thank you all /alessandro
On Friday 31. August 2018 13.03.22 Alessandro Rubini wrote:
But I have a question for Bernhard, who says, among other things I agree with:
- Use hg or other trackers if you can.
why? It's already oh so difficult to get people to make decent commits in git, where at least I can point to all the world doing that...
I might answer that as someone else who prefers Mercurial.
Firstly, it is a capable tool for doing distributed version control whose performance has easily been good enough for what most people need it for, including things like the Linux kernel, which is usually the vehicle used to discredit solutions other than Git.
It employs a conceptual model that is powerful enough for what most people need it for. Here we ignore random people on Stack Overflow who exclaim things about Git being a "powerful object database" when asked to justify some arcane incantation required to do what might have been straightforward with other solutions.
It has had a decent user interface from day one, as far as I can tell. Meanwhile, I recall advice on adopting Git which involved the Cogito front-end, now long since merged in with the actual Git interface, I guess. (Developers find a Subversion-style command interface comforting: who knew?!)
There is interoperability with Git that is presumably acceptable given that people managed to migrate their stuff to Git after being lobbied to move to Git(Hub).
It is actively maintained. I may not like the nature of some of the contributing organisations, but I cannot dispute that there are organisations who would not want to see it go away.
All of the above are merely things that do not disqualify solutions like Mercurial, and from personal experience I could probably suggest tools like Bazaar (the "NG" variant, of course, given that Canonical pretended that the original Bazaar never existed) with caveats about the last point, although I imagine that someone still maintains that, too, maybe just not Canonical any more.
But possibly far more important than the above, which is only stated to undermine claims of "not good enough", is that we all benefit from choice. I could probably find some advantages of Mercurial, too, but I would be satisfied with just giving everything a fair hearing.
Paul
Am Freitag 31 August 2018 13:03:22 schrieb Alessandro Rubini:
- Use hg or other trackers if you can.
why? It's already oh so difficult to get people to make decent commits in git, where at least I can point to all the world doing that...
* Because having a choice is good. If git is to become the only competitive SCM system, it is like putting all eggs in one basket. If there is anything that you do not like about git and you have not at least fed alternatives, you don't have a choice.
* You improve your skills, because now you can understand what is concept (distributed SCM) and what is implementation (git command), making you a better software engineer.
* It is better for innovation. Mercurial SCM has some areas where you could consider it better than git. Maybe not on average, but for some groups or projects it is the better choice.
* To limit the network effects which drive people to help github earn money with proprietary services and hosting proprietary software development. Which in turn gives them more money to outrun the Free Software competition, which ...
It is the same argument for trackers. Now you may just be 10% less efficient when using hg instead of git, because there is less tool support and there are fewer books. In two years you may be 50% less efficient. So today the investment is bearable; it may be too late in 2 years.
Of course you could be 10% better going for hg, when using its better features or becoming a better developer. :) Here is a story where Facebook used Mercurial in 2013/2014 to get an advantage: https://code.fb.com/core-data/scaling-mercurial-at-facebook/ (Facebook is also on github; in 2015 they were syncing a lot from Mercurial. I haven't followed the further development, but the point is an example of the principle.)
Best Regards, Bernhard
On Fri Aug 31 11:03:22 UTC 2018, Alessandro Rubini wrote:
- Use hg or other trackers if you can.
why? It's already oh so difficult to get people to make decent commits in git, where at least I can point to all the world doing that...
Bernhard gave some good reasons but I can think of another:
* To follow the cultural norms of the communities you are part of.
My impression is that a number of projects started to use Mercurial because git was initially linked very closely to development of the Linux kernel. So, projects associated with other operating systems may have adopted Mercurial as a strategic move. Also, Mercurial was seen as having better cross-platform support in the beginning.
I'm thinking of Symbian and the OpenSolaris derivatives. Inferno also went via Google Code (svn) to Bitbucket (hg).
Using Mercurial instead of git is also a bit like using another kernel instead of Linux. It seems unnecessary to use something else when you already have something that works, but it's useful to have working options in case you find yourself using a device without a Linux port but with FreeBSD support, for example.
David
Using Mercurial instead of git is also a bit like using another kernel instead of Linux. It seems unnecessary to use something else when you already have something that works, but it's useful to have working options in case you find yourself using a device without a Linux port but with FreeBSD support, for example.
True. Having two options instead of one is always good.
Today I read some (most?) documents on the project's site, and I see that it's very similar, but on the flip side it looks like interactive rebases are not as easy as they are with git, and I really use them a lot (I write several features and test them all together, so I often squash my fixes into the original commit before pushing).
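For reference, the git-side squash workflow I mean is roughly the following sketch (a throwaway repo; file names and messages are illustrative):

```shell
# Squashing a fix into an earlier commit before pushing, the git way.
# A throwaway repo; file names and messages are illustrative.
set -e
export GIT_AUTHOR_NAME=me GIT_AUTHOR_EMAIL=me@example.org
export GIT_COMMITTER_NAME=me GIT_COMMITTER_EMAIL=me@example.org
cd "$(mktemp -d)"

git init -q .
git commit -q --allow-empty -m "base"
echo "feature" > f.c
git add f.c
git commit -q -m "add feature"
target=$(git rev-parse HEAD)

# Later: a fix that logically belongs in the "add feature" commit.
echo "feature, fixed" > f.c
git add f.c
git commit -q --fixup="$target"

# Fold it back in; GIT_SEQUENCE_EDITOR=true accepts the generated plan as-is.
GIT_SEQUENCE_EDITOR=true git rebase -i --autosquash "$target"^
# History is now just "base" and "add feature", with the fix folded in.
```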
Also, I don't much like the data model (which is why, I think, changing the whole history is not as easy as with git).
Thank you none the less, it was interesting reading.
/alessandro
Am Mittwoch 05 September 2018 21:44:20 schrieb Alessandro Rubini:
Having two options instead of one is always good.
Yes, this is why my post was about the effect chains if we use and support one tool or several ones. Because in a lot of situations others are already completely booked on one tool, I'm using all opportunities to use the other, because they come less often. Just like I am always trying to use https://iridiumbrowser.de/ instead of Chrome and Edge if I can. And a GNU system instead of Windows. LineageOS-microg over Vendor Android/Linux. And so on. It is not a major hassle for me, I'll just keep an eye open for more Free Software opportunities.
But back to Mercurial SCM (aka hg from https://www.mercurial-scm.org/):
Today I read some (most?) documents on the project's site, and I see that it's very similar,
Thanks for giving hg a look. In my experience it is a sound option and comes with power comparable to git's.
but on the flip side it looks like interactive rebases are not as easy as they are with git, and I really use them a lot (I write several features and test them all together, so I often squash my fixes in the original commit before pushing).
With due respect: here you can see how your style of working was shaped by the tool. In my company (where we use hg and git) we've already lost code because an interactive rebase can lead to data loss, thus breaking the mental model some have of an SCM: that each configuration can be reconstructed once it has been checked in. Still, I believe interactive rebases can be useful, and you've probably found the extensions that allow them in hg,
e.g. see the discussion at https://stackoverflow.com/questions/1725607/can-i-squash-commits-in-mercuria...
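For completeness, a minimal sketch of how that looks in hg, assuming a reasonably recent Mercurial: the needed extensions ship with hg but are disabled by default, so in your hgrc you enable them first:

```ini
# In ~/.hgrc (or the repo's .hg/hgrc): enable the bundled
# history-editing extensions, which are off by default.
[extensions]
histedit =
rebase =
```

Then `hg histedit <rev>` opens an editor with a plan where "fold" (or "roll") squashes a changeset into the one above it, much like squash/fixup lines in git's interactive rebase.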
Also, I don't like much the data model (which is why, I think, changing the whole history is not as easy as with git).
Thank you none the less, it was interesting reading.
Thanks for considering hg, a technically diverse "ecosystem" is much more resilient against all sorts of "problems". :)
Best Regards, Bernhard
On Wed Sep 5 19:44:20 UTC 2018, Alessandro Rubini wrote:
Today I read some (most?) documents on the project's site, and I see that it's very similar, but on the flip side it looks like interactive rebases are not as easy as they are with git, and I really use them a lot (I write several features and test them all together, so I often squash my fixes in the original commit before pushing).
Yes, I think there's a compromise between flexibility and simplicity. Mercurial seems to be focused more on simplicity and ease of use, but that might make certain tasks difficult to achieve depending on your workflow.
Also, I don't like much the data model (which is why, I think, changing the whole history is not as easy as with git).
I think that is regarded as a feature in Mercurial. History rewriting may be a useful feature in git, but it could have limited use if your repositories are already public. When Mercurial and git were evaluated at a former employer, the ability to rewrite history was counted as an advantage for git, despite the problem that it would have been very difficult to justify using it on the company's public repositories.
Still, it's useful to have the option to do it, especially for private repos.
Thank you none the less, it was interesting reading.
You're welcome.
David
On 09/06/2018 06:42 AM, David Boddie wrote:
> On Wed Sep 5 19:44:20 UTC 2018, Alessandro Rubini wrote:
> [...]
On the topic of history rewrite, I'd argue that allowing it on a private (read: development) repository provides better commits and less chance of losing work. It allows the developer to incrementally commit small, incomplete, possibly even wrong changes, then decide how they should be packaged and layered before attempting a merge. Without this capability, our programmers would tend to keep a massive chunk of unstaged changes locally, then submit the entire mess for review once it was working properly. History rewrite allows the developer to verify a multi-week, multi-layer, self-dependent modification and still be able to split it apart into logical, incremental chunks with relative ease.
I can't imagine working without this feature. The lack of that feature in other source control systems might explain the relatively poor commit quality we have observed on those systems (or from people trained on those systems) over time -- their commits tend to be very large, doing way too much and touching too many files. Needless to say, this causes a massive headache if/when the patch introduces a regression.
- -- Timothy Pearson Raptor Engineering +1 (415) 727-8645 (direct line) +1 (512) 690-0200 (switchboard) https://www.raptorengineering.com
On Thursday 6. September 2018 12.21.17 Timothy Pearson wrote:
On the topic of history rewrite, I'd argue that allowing it on a private (read: development) repository provides better commits and less chance of losing work. It allows the developer to incrementally commit small, incomplete, possibly even wrong changes, then decide how they should be packaged and layered before attempting a merge. Without this capability, our programmers would tend to keep a massive chunk of unstaged changes locally, then submit the entire mess for review once it was working properly. History rewrite allows the developer to verify a multi-week, multi-layer, self-dependent modification and still be able to split it apart into logical, incremental chunks with relative ease.
But are they sharing the changes they commit with colleagues before rewriting the history?
I can't imagine working without this feature. The lack of that feature on other source control systems might explain the relatively poor commit quality we have observed on those systems (or from people trained on those systems) over time -- their commits tend to be very large, doing way too much and touching too many files. Needless to say, this causes a massive headache if/when the patch introduces a regression.
I think it depends more on the workflow than on the features of the revision control system. Of course, those people used to non-distributed systems may be in the habit of batching their commits for several unrelated issues because they are used to a centralised model where committing a change involves sharing it with everyone else.
I think you could do something similar to git's workflow with Mercurial, but it wouldn't be exactly the same.
David
On 09/06/2018 02:46 PM, David Boddie wrote:
On Thursday 6. September 2018 12.21.17 Timothy Pearson wrote:
On the topic of history rewrite, I'd argue that allowing it on a private (read: development) repository provides better commits and less chance of losing work. It allows the developer to incrementally commit small, incomplete, possibly even wrong changes, then decide how they should be packaged and layered before attempting a merge. Without this capability, our programmers would tend to keep a massive chunk of unstaged changes locally, then submit the entire mess for review once it was working properly. History rewrite allows the developer to verify a multi-week, multi-layer, self-dependent modification and still be able to split it apart into logical, incremental chunks with relative ease.
But are they sharing the changes they commit with colleagues before rewriting the history?
No, and I should have pointed that out. When you start sharing, all bets are off and history should not be rewritten; at the same time, though, you're not likely to be sharing broken / nonfunctional code that's still in the middle of a rewrite. At the very least, you would be expected to clean up your own mess a bit before trying to engage a colleague for assistance, to avoid wasting time all around, and even then it would be in some kind of WIP branch that would be deleted later on.
I can't imagine working without this feature. The lack of that feature on other source control systems might explain the relatively poor commit quality we have observed on those systems (or from people trained on those systems) over time -- their commits tend to be very large, doing way too much and touching too many files. Needless to say, this causes a massive headache if/when the patch introduces a regression.
I think it depends more on the workflow than on the features of the revision control system. Of course, those people used to non-distributed systems may be in the habit of batching their commits for several unrelated issues because they are used to a centralised model where committing a change involves sharing it with everyone else.
I think you could do something similar to git's workflow with Mercurial, but it wouldn't be exactly the same.
As long as the general class of functionality is present, that's fine. The inability to stage and rework a commit stack, though, is going to push people more toward the monolithic / batched commit mode, from what I've seen.
David _______________________________________________ Discussion mailing list Discussion@lists.fsfe.org https://lists.fsfe.org/mailman/listinfo/discussion
This mailing list is covered by the FSFE's Code of Conduct. All participants are kindly asked to be excellent to each other: https://fsfe.org/about/codeofconduct
Am Donnerstag 06 September 2018 22:40:49 schrieb Timothy Pearson:
I think you could do something similar to git's workflow with Mercurial, but it wouldn't be exactly the same.
As long as the general class of functionality is present, that's fine.
https://www.mercurial-scm.org/wiki/HisteditExtension
History editing plugin for Mercurial, heavily inspired by git rebase --interactive.
(shipped with hg since v2.3 (2012-08) it just needs to be enabled)
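For reference, enabling the bundled extension is just a two-line addition to your ~/.hgrc (or a repository's .hg/hgrc):

```
[extensions]
histedit =
```

After that, `hg histedit` presents a todo list of draft changesets that can be reordered, folded or dropped, much like git's interactive rebase todo list.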
Mercurial also has a concept called "phases", where hg tracks which changes have already been published, to help avoid conflicts with shared repos. https://www.mercurial-scm.org/wiki/Phases
One other feature of hg that I personally like is the ability to push and pull from a clone via ssh. So I can use a development virtual machine with lower security requirements from a regular machine. (If you know how to do this easily with git, I'd appreciate a hint to a tutorial as personal mail. It must be possible somehow.)
Regular machine (more rights) (R) ----> dev machine (D)
There is no direct way from D to get to R. I ssh onto D, work there, then commit, and pull the work back to R (initiated from R). I inspect the code on R and push it into the public repo. So if someone subverts D, they can only change code (which gets inspected on R), but they do not get R's privileges.
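As a hint toward the git version of this setup: git speaks ssh natively, so the trusted machine R can simply add D's working clone as a remote with an ssh:// URL and fetch from it; nothing on D ever needs credentials for R. A runnable sketch below uses two local directories to stand in for the two machines (all names and paths are invented; with a real VM the remote URL would be something like ssh://user@dev-vm/home/user/project):

```shell
# "D": the low-privilege dev clone where the work happens.
work=$(mktemp -d)
git init -q "$work/dev-clone"
cd "$work/dev-clone"
git config user.email dev@example.com
git config user.name Dev
git checkout -qb feature
echo 'new code' > driver.c
git add driver.c
git commit -qm 'work done on D'

# "R": the trusted machine. With a real VM the remote URL would be
# ssh://user@dev-vm/... -- git handles ssh:// URLs natively.
git init -q "$work/trusted"
cd "$work/trusted"
git config user.email r@example.com
git config user.name R
git remote add dev "$work/dev-clone"
git fetch -q dev                      # R pulls; D never pushes to R
git log dev/feature                   # inspect the work on R
git checkout -qb review dev/feature   # code is now on R, ready to vet and push
```

Because R initiates the fetch, the trust direction matches the diagram above: D never gets R's privileges.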
Best Regards, Bernhard
Without derailing too much this wonderful discussion, I would like to make a comment on lost commits/code.
IMHO git is not a backup solution, it's a version control system. Sometimes we forget this simple but important thing, so frequent commits (even on a locally cloned repo or branch) are really useful; then copy/merge/rebase/rewrite/whatever when ready for master.
Evaggelos Balaskas https://www.linkedin.com/in/evaggelosbalaskas
Hi all,
Am Donnerstag, den 06.09.2018, 12:21 -0500 schrieb Timothy Pearson:
On the topic of history rewrite, I'd argue that allowing it on a private (read: development) repository provides better commits and less chance of losing work. It allows the developer to incrementally commit small, incomplete, possibly even wrong changes, then decide how they should be packaged and layered before attempting a merge. Without this capability, our programmers would tend to keep a massive chunk of unstaged changes locally, then submit the entire mess for review once it was working properly. History rewrite allows the developer to verify a multi-week, multi-layer, self-dependent modification and still be able to split it apart into logical, incremental chunks with relative ease.
I can't imagine working without this feature.
I much prefer a proper review system like Gerrit for that. You submit code which will be broken (for sure), but you have the chance of checking it manually and automatically _before_ committing to the repository. Also, no code will be lost if your dev machine dies during a week-long develop/rewrite cycle (did you consider that possibility?). Additionally, you can use it to keep your repo in an always-fast-forward state with linear, easy-to-follow history. Better explanation than I can give: https://sandofsky.com/blog/git-workflow.html
My three pluses:
+ Review/Tests: Everything not tested will break, so don't let untested code into the repo at all
+ Small changes instead of one big monster commit
+ Linearity (when using rebase)
Best wishes Michael
I won't cite any past message because I lost some of them during an accidental removal of the emails on my computer, so forgive me if this was already said by someone else.
Personally, I don't think we need to dive down into completely different VCS software just because a VCS repository provider decided to go evil. We can still use Git but push people not to use GitHub, not because of the recent Microsoft acquisition, but due to a problem we have been facing many months before that, which is even worse than the acquisition: non-free software being forced on the people who visit the GitHub website, through client-side JavaScript[1].
GNU Savannah provides Git repository hosting too, and is powered by GNU Savane, hosting software that you can use on your own[2]. There is an ongoing effort by the Peers Community to make VCS hosting software called Vervis[3]. Besides, there is hosting software called Kallithea[4] (partly maintained by the Software Freedom Conservancy, which has the plus of already providing correctly-marked free/libre client-side JavaScript, and also has many features to allow commits to be made from the web browser), and there is Pagure[5] (which I don't know how good or bad it is in terms of software freedom for the end-user/website-visitor). All of these, as far as I know, work with Git.
[1] https://www.gnu.org/software/repo-criteria-evaluation.html#GitHub
[2] https://savannah.gnu.org/p/administration
[3] https://peers.community/#projects
[4] https://kallithea-scm.org/
[5] https://pagure.io/
--
- Contact page: https://libreplanet.org/wiki/User:Adfeno#vCard
- Free software activist (not to be confused with freeware). Evaluator of software and website freedom.
- Contributions page: https://libreplanet.org/wiki/User:Adfeno#Contribs
- For office use and work, please send files in the international OpenDocument/ODF 1.2 standard (ISO/IEC 26300-1:2015 and related): .odt/.ods/.odp/.odg. LibreOffice is the recommended office suite for editing such files.
- For other file formats, see: https://libreplanet.org/wiki/User:Adfeno#Arquivos
- Like my work? Hire me or donate something to me! https://libreplanet.org/wiki/User:Adfeno#Suporte
- Use standardised federated social communications, where the "social" remains provider-independent. #DeleteWhatsApp. Use #XMPP (https://libreplanet.org/wiki/XMPP.pt), #DeleteFacebook #DeleteInstagram #DeleteTwitter #DeleteYouTube. Use #ActivityPub via #Mastodon (https://joinmastodon.org/).
- #DeleteNetflix #CancelNetflix. Avoid #DRM: https://www.defectivebydesign.org/
On Saturday 8. September 2018 10.36.08 Adonay Felipe Nogueira wrote:
I won't cite any past message because I lost some of them during an accidental removal of the emails on my computer, so forgive me if this was already said by someone else.
Having just done a distribution upgrade and with Akonadi now playing around with my mail, hopefully not losing any, I sympathise with you in this situation.
Personally, I don't think we need to dive down into completely different VCS software just because a VCS repository provider decided to go evil. We can still use Git but push people not to use GitHub, not because of the recent Microsoft acquisition, but due to a problem we have been facing many months before that, which is even worse than the acquisition: non-free software being forced on the people who visit the GitHub website, through client-side JavaScript[1].
Indeed, there are plenty of issues with the Web and the proliferation of obligatory non-free scripts. While upgrading my system, I found myself passing the time by using a single-board computer with a CPU running at "only" 1.2GHz, but also with 1GB RAM, which is actually what my normal, ancient, 3GHz Intel machine has. The network performance might not be great on this single-board machine, though.
One thing I noticed was that the /etc/hosts-based blocking I normally do might well make a difference, because I was suddenly needing to watch Firefox pull down content from various "ad markets" in slow motion. Such things really make the Web unusable for certain kinds of devices, and I start to think that we might need to make a case for a Web that is not so wasteful and intrusive, with the Web having seemingly taken over the role of Windows as the thing that causes people to junk their hardware every 18 months (or however often it is now).
GNU Savannah provides Git repository hosting too, and is powered by GNU Savane, hosting software that you can use on your own[2]. There is an ongoing effort by the Peers Community to make VCS hosting software called Vervis[3]. Besides, there is hosting software called Kallithea[4] (partly maintained by the Software Freedom Conservancy, which has the plus of already providing correctly-marked free/libre client-side JavaScript, and also has many features to allow commits to be made from the web browser), and there is Pagure[5] (which I don't know how good or bad it is in terms of software freedom for the end-user/website-visitor). All of these, as far as I know, work with Git.
Kallithea is certainly interesting, and not only does it work with Mercurial, too, but it provides an example of forking a project to uphold Free Software values. Unfortunately, it seems to need more resources behind it, illustrating the problems that I and others have found ourselves mentioning rather a lot in the recent past.
Paul
[1] https://www.gnu.org/software/repo-criteria-evaluation.html#GitHub
[2] https://savannah.gnu.org/p/administration
[3] https://peers.community/#projects
Am Samstag 08 September 2018 15:36:08 schrieb Adonay Felipe Nogueira:
Personally, I don't think we need to dive down into completely different VCS software just because a VCS repository provider decided to go evil.
The argument is about how to counter the network effect that makes it easier and easier for a leading provider to pull further ahead of others. And it is about what is the more sustainable choice.
The money that Github is earning by helping to develop proprietary software allows it to innovate fast and turn the user experience of git into the user experience of git-hub. If many people are socialised with it, they want that user experience wherever they go. This is why even using Bitbucket as a proprietary competitor does something good to keep the competition open. And using hg, or services that allow hg, also helps weaken the network-effect cycle a bit, while strengthening the chances of competition.
GNU Savannah provides Git repository hosting too, and is powered by GNU Savane, hosting software that you can use on your own[2].
It is very good that these Free Software products and services exist. The problem is that many developers now believe them to be too far behind (in features, available add-ons and user experience) compared to github/bitbucket/gitlab.
Even Allura (which is the Free Software that runs Sourceforge) is considered by many to not play in the same league.
For instance, Savane is a continuation of the old sourceforge code variant; a newer one is http://fusionforge.org/ which was used by Debian and others (see the list at https://en.wikipedia.org/wiki/GForge#FusionForge). But Debian is moving off Fusionforge. The listed organisations (which include my company) have failed to finance and organise a steady development of Fusionforge so it could keep up. Think about https://en.wikipedia.org/wiki/Gna! which shut down a while ago.
If we (as Free Software people) want a first class Free Software product and a number of service providers offering to run it for us, we need to make sure professionals can earn serious money with it.
Regards, Bernhard
On Mon, Sep 10, 2018 at 2:25 AM Bernhard E. Reiter bernhard@fsfe.org wrote:
The argument is about how to counter the network effect that makes it easier and easier for a leading provider to pull further ahead of others. And it is about what is the more sustainable choice.
One thing that I really appreciated OpenStack for is not succumbing to that network effect. Gerrit+cgit proved to be a superior system when it comes to massive amounts of patches/code reviews per hour, with hundreds of reviewers involved at any given time.
OpenStack Foundation also led the way for issue tracking and task management by developing a new tool called Storyboard. It's a modern, API-first, multi-project task and issue manager. You can see it in action on https://storyboard.openstack.org
Check the README http://git.openstack.org/cgit/openstack-infra/storyboard/tree/README.rst
Hello Bernhard.
Personally, I don't think we need to dive down into completely different VCS software just because a VCS repository provider decided to go evil.
The argument is about how to counter the network effect that makes it easier and easier for a leading provider to pull further ahead of others. And it is about what is the more sustainable choice.
By the phrasing "leading provider" I assume is meant a company that won't share its software innovations with the rest of the community so that all can benefit. The company gets the upper hand and can impose deals like what you write about Github helping proprietary software companies in their business.
If this is the case then I would suggest separating out the parts that give Github and others the upper hand, and then seeing how those parts can be countered using free software and free innovation.
Github seems to offer the inter-connectedness that would be lacking in many of the other alternatives. You can click a button to fork a project, you can start "watching" certain projects and get email notifications, you can download the sources without needing Git, and every project adopts the Markdown interface, so the README is presented as if it were a web page.
I am at a loss right now for suggestions or methods or ideas to counter Github, but it sounds easy enough to get going with a few people. It might just take a week's discussion for a good project to spring up later.
The money that Github is earning by helping to develop proprietary software allows it to innovate fast and turn the user experience of git into the user experience of git-hub. If many people are socialised with it, they want that user experience wherever they go. This is why even using Bitbucket as a proprietary competitor does something good to keep the competition open. And using hg, or services that allow hg, also helps weaken the network-effect cycle a bit, while strengthening the chances of competition.
It's really bad that a company thrives on a thing like "open source" while using that income to fund "closed source". I wonder if even the open source camp would approve of that as a thing to further improvements.
GNU Savannah provides Git repository hosting too, and is powered by GNU Savane, hosting software that you can use on your own[2].
It is very good that these Free Software products and services exist. The problem is that many developers now believe them to be too far behind (in features, available add-ons and user experience) compared to github/bitbucket/gitlab.
Even Allura (which is the Free Software that runs Sourceforge) is considered by many to not play in the same league.
For instance, Savane is a continuation of the old sourceforge code variant; a newer one is http://fusionforge.org/ which was used by Debian and others (see the list at https://en.wikipedia.org/wiki/GForge#FusionForge). But Debian is moving off Fusionforge. The listed organisations (which include my company) have failed to finance and organise a steady development of Fusionforge so it could keep up. Think about https://en.wikipedia.org/wiki/Gna! which shut down a while ago.
I thought I would add to the list two more Git providers that I use in parallel:
https://www.tuxfamily.org/ https://pagure.io/
They seem to be alright. I'm not sure about how Pagure is doing with their stance on proprietary software but Red Hat is usually a nice company in that regard and would prefer the repos to contain free software projects only.
If we (as Free Software people) want a first class Free Software product and a number of service providers offering to run it for us, we need to make sure professionals can earn serious money with it.
I could share an idea I have had about the financing and the "poor free software developer not getting paid". It's not a very complicated idea and it would work if enough people have the initiative to do so.
The idea is to make an economic funding platform. The platform itself only mediates, economically, between the two parties: users and developers. With developers signed up and ready to develop, and users signed up and leaving opinions, bug reports, requests and other issues, the process is a matter of transmitting funds. What would be possible right away is to transfer funds using traditional means, such as physical transportation or digital transmission using anonymous payments. I see a problem with the platform being popular right away; the platform is going to gain traction slowly if implemented right and developed further. The starting position needs to calmly work out the means. I have thought about a few, such as a service that comes to a user's home and takes the discussion, as well as the payment, from them, which is then transferred to the developer of said software together with all the demands attached to that funding.
Work it out good and it will work good.
Kind regards, Andreas
Hi Andreas,
Am Donnerstag 13 September 2018 17:05:41 schrieb Andreas Nilsson:
By the phrasing "leading provider" I assume is meant a company that won't share its software innovations with the rest of the community so that all can benefit.
yes.
If this is the case then I would suggest separating out the parts that give Github and others the upper hand and then seeing how the
This seems the logical step to take; however, it is hard to do successfully as far as I can tell. It takes many little things and a dedicated team with a lot of time on its hands, which basically means professionals. And then we get to the question of funding and a "business" model, which of course could be a non-profit "business" model. It would have to be stable for years.
It's really bad that a company thrives on a thing like "open source" while using that income to fund "closed source". I wonder if even the open source camp would approve of that as a thing to further improvements.
Many people accept the comfort coming from innovations funded with non-free software or coming with non-free products. I cannot blame them in principle, as it is a personal decision how far out from the mainstream someone is willing to go. Hopefully we can point out ways in which each person can support Free Software with little effort, and we should always offer the next steps for everyone, no matter where they stand.
I could share an idea I have had about the financing and the "poor free software developer not getting paid".
Most Free Software is developed by people who are paid for doing it already; the more relevant the FS product, the more this is true. The question is: who pays those developers and makes sure the interests of the organisation are considered? For the famous kernel, there are some basic statistics
https://lwn.net/Articles/760690/ 4.18 https://lwn.net/Articles/756031/ 4.17 https://lwn.net/Articles/750054/ 4.16 https://lwn.net/Articles/742672/ 4.15
where you see companies like Intel, Redhat, AMD, IBM and Google appear often. It is major companies and their customers that drive the main lines of development. My conclusion is that we need funding models for the IT interests of small organisations or private people to be successful with Free Software. Customer demand and funding can help a lot. Fortunately, a number of companies are trying to create products with lots of Free Software, so the available number of offerings is growing.
The idea is to make an economic funding platform. The platform itself only mediates, economically, between the two parties: users and developers.
This has been tried a number of times in the past and hasn't worked out well. What could help would be a system for micropayments that is easy and has low transaction costs. Another approach would be to have organisations that distribute small amounts of money (e.g. GNU system distributors would be in a good position to do so). A key point is people's willingness to pay for something, even if they are not forced to.
Best Regards, Bernhard
On Friday 14. September 2018 09.06.50 Bernhard E. Reiter wrote:
Am Donnerstag 13 September 2018 17:05:41 schrieb Andreas Nilsson:
The idea is to make an economic funding platform. The platform itself only mediates, economically, between the two parties: users and developers.
This has been tried a number of times in the past and hasn't worked out well.
It is worth noting that there are a number of recurring obstacles. Gratipay was attempting to pioneer consolidated payments between individuals, but there is a very narrow path that needs to be navigated to avoid being considered as some kind of financial institution that holds other people's money whilst also processing payments (or delegating that processing) in a way that does not overwhelm the actual payments with processing fees.
Liberapay took over where Gratipay left off, but their payment processor has discontinued its relationship with Liberapay, apparently claiming that they didn't think that Liberapay was fulfilling its obligations:
https://liberapay.com/ https://github.com/liberapay/liberapay.com/issues/1171
There seems to have been an account linked to an organisation which cannot be legally serviced by the payment provider, and I guess the blame fell on Liberapay for that. (This is from a quick perusal of public information about this, so it may not be completely accurate.)
Other organisations exist, but it isn't completely clear how they operate in a similar position without getting hassled about financial industry/crime regulations. Here are some examples:
https://www.bountysource.com/ https://tidelift.com/ https://opencollective.com/
Open Collective seems like it promotes a model that I may have advocated before, emphasising collectives (which Gratipay and Liberapay both support). If you have an organisation like the Python Software Foundation, to take an example I tend to use, then one could envisage establishing a presence on such a platform to solicit funding for Python development, where those giving money might have reason to assume that their money is going to the right people due to the presence of an entity they know and trust. Meanwhile, those doing the work would presumably be able to receive payments in a properly-regulated way.
Then again, I am inclined to think that such platforms tend to favour transactional work, often underpriced, that is viewed as fashionable amongst the relentless promotion of the "gig" economy (hence the venture funding for some of the companies above). Instead, I think that structures to fund Free Software should enable developers to actually draw a salary, not have people speculatively do work in order to compete for payouts.
Maybe what is really needed is some kind of virtual organisation for Free Software, maybe some kind of consulting organisation. Naturally, there are non-virtual organisations of this nature already, but the bottlenecks are getting hired by them and for those businesses to be able to hire people. And recruitment is still largely performed using pre-digital techniques (if you ignore the superficial use of digital tools).
What could help would be a system for micropayments that is easy and has low transaction costs. Another approach would be to have a organisations that distribute small amounts of money (e.g. GNU system distributors would be in a good position to do so.) A key point is peoples willingness to pay for something, even if they are not force to.
Micropayments with low transaction costs is like the Holy Grail of payments, though. But the matter of persuading people to pay for stuff is worth further thought, and there was a blog article about that recently:
http://think-innovation.com/blog/should-you-donate-to-open-source-software/
One thing I ought to mention is the need for solutions that use real money as opposed to today's favourite cryptocurrency. When looking for creative solutions, there always appears to be someone wanting to sweep everything off the table to further their "cypherpunk" anarchist pipedream.
People need genuine solutions that do not involve financial speculation, legal uncertainty, and exposure to criminal schemes. A crucial aspect of funding Free Software is exactly that of giving people certainty so that they can focus on what they actually want to do.
Paul
Am Samstag 15 September 2018 17:20:26 schrieb Paul Boddie:
On Friday 14. September 2018 09.06.50 Bernhard E. Reiter wrote:
It is worth noting that there are a number of recurring obstacles.
.. with micropayments and financial regulations being among them, yes! I'll have to take more time to read through the interesting links you provided.
Then again, I am inclined to think that such platforms tend to favour transactional work, often underpriced, that is viewed as fashionable amongst the relentless promotion of the "gig" economy (hence the venture funding for some of the companies above). Instead, I think that structures to fund Free Software should enable developers to actually draw a salary, not have people speculatively do work in order to compete for payouts.
If I understand you correctly, you believe they fund more "marketing" and less "development", whereas sometimes good quiet engineering would need to be funded. From a bird's-eye view I'd agree with this. The challenge, though, is to find out which kind of engineering work is worth what.
Micropayments with low transaction costs is like the Holy Grail of payments, though.
It looks doable, though, if a major bank would back it (in a traditional sense, without distributed ledger technology).
But the matter of persuading people to pay for stuff is worth further thought, and there was a blog article about that recently:
http://think-innovation.com/blog/should-you-donate-to-open-source-software/
This article is interesting. My rule of thumb for how much to pay is:
* 10% of the cost of a license for a comparable proprietary product
* or 1% of the revenue for each business topic/unit that depends on Free Software (for my company that is 1% of 100%, but some companies may depend less on Free Software).
One thing I ought to mention is the need for solutions that use real money as opposed to today's favourite cryptocurrency. When looking for creative solutions there always appears to be someone wanting to sweep everything off the table to further their "cipherpunk" anarchist pipedream.
Well said.
People need genuine solutions that do not involve financial speculation, legal uncertainty, and exposure to criminal schemes. A crucial aspect of funding Free Software is exactly that of giving people certainty so that they can focus on what they actually want to do.
Fine again, except for the last part. It is not the desire of the Free Software engineer that should drive the direction of funds, but the needs of the users. There can be a wide difference between the three things:
a) what people want to do
b) what people are good at
c) what others need
To be successful in my eyes, a funding model would need to make sure that mainly c) and b) are matched.
Best Regards, Bernhard
On Monday 17. September 2018 12.42.18 Bernhard E. Reiter wrote:
Am Samstag 15 September 2018 17:20:26 schrieb Paul Boddie:
Then again, I am inclined to think that such platforms tend to favour transactional work, often underpriced, that is viewed as fashionable amongst the relentless promotion of the "gig" economy (hence the venture funding for some of the companies above). Instead, I think that structures to fund Free Software should enable developers to actually draw a salary, not have people speculatively do work in order to compete for payouts.
If I do understand you correctly, you believe they fund more "marketing" and less "development". Whereas sometimes good quiet engineering would need to be funded. In a bird's view I'd agree on this. The challenge - though - is to find out which kind of engineering work is worth what.
There are at least three forms of funding platforms that tend to see some Free Software development activity:
* Bounty funding (such as Bountysource)
* Ongoing funding (such as Liberapay)
* Crowdfunding campaign (such as Crowd Supply and IndieGoGo)
I think these platforms tend to favour work that can be readily advertised, packaged and sold, but of course not all work that is worth doing will fit into this form.
Crowdfunding is most definitely something that has a marketing component, and this means that people have to make a pitch to others to convince them of the work's value, but software is still seen as intangible and less valuable than hardware. With hardware, you hopefully get some goodies at the end of the campaign that are exclusively yours and that people outside the campaign do not get. With software, you share the rewards with others, and also with everyone else if it is Free Software. People might wonder, as usual, whether they shouldn't just leave it to others to put up the money.
There are also a few common problems with ongoing funding and crowdfunding. One is that some people feel that they have to be highly responsive to their backers in order to keep them happy. This ends up with people burning out because they are effectively having to do the work plus an "always on" public relations job. YouTube millionaires have this problem, apparently, so I am not sure it can be solved easily with more money:
https://www.theguardian.com/technology/2018/sep/08/youtube-stars-burnout-fun...
(The other problem is that people fail to communicate, particularly if things do not go well, but that is another story.)
Bounty funding should get around this cultural problem of people expecting "always on" responsiveness, but the work favours things that are either "odd jobs" that don't pay very well or are things that would normally be put out as a contract for bidding. I noted in my blog article about funding platforms...
https://blogs.fsfe.org/pboddie/?p=1620
...that while the sums involved for some bounties are considerably better than the usual "tip jar" amounts, the rather informal arrangements around getting the work done, combined with people competing speculatively (as opposed to collaborating), means that these large sums go unclaimed. Three years on and the IBM people are still waiting for their LuaJIT port (or dragging their feet on accepting any submitted work):
https://www.bountysource.com/issues/25924774-enable-implement-ppc64-le-linux...
Ongoing funding potentially promotes a more sustainable way of funding people, at least if we can ignore those cultural issues around keeping the audience happy, because a developer can potentially prioritise their work appropriately and not feel that they have to dance to everybody else's tune all the time. But the problem then is to persuade people that the work is worth supporting. Some might claim that a successfully funded person could just be working for a company, but that requires a sole employer to put up that person's entire salary.
[...]
People need genuine solutions that do not involve financial speculation, legal uncertainty, and exposure to criminal schemes. A crucial aspect of funding Free Software is exactly that of giving people certainty so that they can focus on what they actually want to do.
Fine again, except for the last part. It is not the desire of the Free Software engineer that should drive the directions of funds, but the needs of the users. There can be a wide difference between the three things:
a) what people want to do
b) what people are good at
c) what others need
To be successful in my eyes, a funding model would need to make sure that mainly c) and b) are matched.
It is true that you cannot just have people doing exactly what they want and expecting to get paid for it, no matter what it is. (Well, actually you can: it is called art.) Then again, by "what they actually want to do" I meant the work of writing software, as opposed to things like meddling with dubious financial instruments, executing foreign exchange transactions, drumming up business on a speculative basis, and so on.
But you also cannot leave it to the whims of "the market" to decide, either, because you end up in precisely the situation where only the "cool", marketable projects get funding. And those projects may only address the fad of the day and leave nothing for others to build upon. Worse, for Free Software, they may also exploit other software projects to deliver their product.
Paul
Hello Paul.
There are at least three forms of funding platforms that tend to see some Free Software development activity:
- Bounty funding (such as Bountysource)
- Ongoing funding (such as Liberapay)
- Crowdfunding campaign (such as Crowd Supply and IndieGoGo)
I think these platforms tend to favour work that can be readily advertised, packaged and sold, but of course not all work that is worth doing will fit into this form.
Crowdfunding is most definitely something that has a marketing component, and this means that people have to make a pitch to others to convince them of the work's value, but software is still seen as intangible and less valuable than hardware. With hardware, you hopefully get some goodies at the end of the campaign that are exclusively yours and that people outside the campaign do not get. With software, you share the rewards with others, and also with everyone else if it is Free Software. People might wonder, as usual, whether they shouldn't just leave it to others to put up the money.
There are also a few common problems with ongoing funding and crowdfunding. One is that some people feel that they have to be highly responsive to their backers in order to keep them happy. This ends up with people burning out because they are effectively having to do the work plus an "always on" public relations job. YouTube millionaires have this problem, apparently, so I am not sure it can be solved easily with more money:
https://www.theguardian.com/technology/2018/sep/08/youtube-stars-burnout-fun...
(The other problem is that people fail to communicate, particularly if things do not go well, but that is another story.)
Yes. Burning out over a deal that is taking place and relations that are just forming sounds unnecessary. I have read about the three different forms and systems of payment that exist today, and I am going to look at crowdfunding as I write now. Crowdfunding sounds like the most stress-inducing of the three, given that there are many funders giving small amounts, each with their own demands on the project at hand.
When I think about crowdfunding I have stuff like "smaller hobby projects" in mind, such as a design for an electric circuit that can do a funky thing. I don't have insight into the crowdfunding projects of today.
With the lack of a middleman, or a team to handle communication between the funders and the fundees, there is bound to be a stressful extra task endured by the fundee, be it a person, a team or a company.
My guess is that what is lacking is something called a "demand specification" in Swedish (directly translated). When a customer (here, a funder) has an idea for a product, they think in terms of a finished product, as front-end and concrete as possible.
Their ideas could be that they want the software to have certain GUI components here and there, while clicking on them should execute a task described in non-coding terms.
What I'm trying to say is that the human component for tasks like this is probably going to be lacking on the crowdfunding platforms. It could and should be looked into whether volunteers, organizations or even for-profit companies could pitch in here.
Bounty funding should get around this cultural problem of people expecting "always on" responsiveness, but the work favours things that are either "odd jobs" that don't pay very well or are things that would normally be put out as a contract for bidding. I noted in my blog article about funding platforms...
https://blogs.fsfe.org/pboddie/?p=1620
...that while the sums involved for some bounties are considerably better than the usual "tip jar" amounts, the rather informal arrangements around getting the work done, combined with people competing speculatively (as opposed to collaborating), means that these large sums go unclaimed. Three years on and the IBM people are still waiting for their LuaJIT port (or dragging their feet on accepting any submitted work):
https://www.bountysource.com/issues/25924774-enable-implement-ppc64-le-linux...
Sounds like an auction of developmental work. I don't know much about it. It's an interesting approach.
Could be used for things that don't have a clear value or usage; let the bidders decide how much it is worth to them. I wouldn't use it for critical projects.
Ongoing funding potentially promotes a more sustainable way of funding people, at least if we can ignore those cultural issues around keeping the audience happy, because a developer can potentially prioritise their work appropriately and not feel that they have to dance to everybody else's tune all the time. But the problem then is to persuade people that the work is worth supporting. Some might claim that a successfully funded person could just be working for a company, but that requires a sole employer to put up that person's entire salary.
Most funding in this fashion will probably be from organizations and companies that are supportive of a particular software approach.
When I consider recurring payments in my own budget, they always concern either state financial institutions demanding money or a cause. There is a third option, which is to fund your own good; I doubt much software on these sites does that, unless it takes a scientific approach. Say there is software for research on cancer: I could set up a recurring payment for that.
It is true that you cannot just have people doing exactly what they want and expecting to get paid for it, no matter what it is. (Well, actually you can: it is called art.) Then again, by "what they actually want to do" I meant the work of writing software, as opposed to things like meddling with dubious financial instruments, executing foreign exchange transactions, drumming up business on a speculative basis, and so on.
I have a hard time following here.
But you also cannot leave it to the whims of "the market" to decide, either, because you end up in precisely the situation where only the "cool", marketable projects get funding. And those projects may only address the fad of the day and leave nothing for others to build upon. Worse, for Free Software, they may also exploit other software projects to deliver their product.
I agree that the best approach is to blend various methods. Auction payments sound much like the market is in control of almost everything, while recurring payments sound more geared towards good software. Almost two opposites here. Crowdfunding is special, I don't know if you can talk about a market in that case, in the classical sense of it.
Andreas.
--
Developer of Chilling Spree, a Quakeworld(tm) server modification licensed under GNU GPLv2.
Download area: http://download.tuxfamily.org/cspree/.
Member of FSFE. Fellow no 00 000 3948.
Composer of CC BY licensed works.
Download and streaming hosted at https://goblinrefuge.com/mediagoblin/u/andreas/.
GnuPG fingerprint: 579A 7871 2B40 5331 487D 465F B122 68DC 6FEF D814
On Tuesday 18. September 2018 18.25.26 Andreas Nilsson wrote:
When I think about crowdfunding I have stuff like "smaller hobby projects" in mind, such as a design for an electric circuit that can do a funky thing. I don't have insight into the crowdfunding projects of today.
I only tend to keep up with things on Crowd Supply. The other platforms seem to have a poor reputation. There have been pretty big campaigns for hardware, even ones that have delivered the anticipated product, so not Ubuntu Edge but things like Novena (open hardware) or Gemini (proprietary but "alternative" hardware).
For software, I don't see as much going on. I have mentioned Mailpile and Roundcube Next before, and here there is a difference in outcome. Mailpile struggled along with a fairly meagre sum (for what they wanted to do), then found some Bitcoin from donations under the sofa cushion (metaphorically) and managed to ramp up again, meaning that they may be close to some kind of release.
Roundcube Next raised substantially more money than Mailpile, but this money hasn't been spent and seems to have been resting in an account in Switzerland for three years waiting for people to spend it. There was some brief activity on GitHub a year or so ago, but nothing since. The backers seem to be resigned to using "classic" Roundcube or other things, if the comments on the campaign page are any indication.
With the lack of a middleman, or a team to handle communication between the funders and the fundees, there is bound to be a stressful extra task endured by the fundee, be it a person, a team or a company.
Often, these kinds of campaigns are trying to squeeze out the last drops of efficiency in the economy, being low-priced, low-margin efforts. There isn't going to be any extra money to pay people for publicity, although I believe there are companies who offer promotional services, perhaps directed at getting campaigns funded more than anything else.
And lack of communication erodes trust. There are campaigns where every last setback is described, potentially leaving those responsible open to criticism, but even so, people are a lot more receptive to such tales of apparent failure and will forgive delays and even non-delivery if they feel that something took place and that the effort was genuine. When nothing is said, people start to get suspicious about nothing being done, even if the same efforts are being made, and they are less likely to be as forgiving with similar outcomes.
My guess is that what is lacking is something called a "demand specification" in Swedish (directly translated). When a customer (here, a funder) has an idea for a product, they think in terms of a finished product, as front-end and concrete as possible.
Their ideas could be that they want the software to have certain GUI components here and there, while clicking on them should execute a task described in non-coding terms.
What I'm trying to say is that the human component for tasks like this is probably going to be lacking on the crowdfunding platforms. It could and should be looked into whether volunteers, organizations or even for-profit companies could pitch in here.
Certainly, the notion of pitching an idea and getting people to back that idea with their money is not particularly compatible with the kind of iterative design that contributes to the production of good software. And another pitfall that might be even more likely with software is that of the campaign creator overselling what they intend to achieve.
[Bounty funding]
Sounds like an auction of developmental work. I don't know much about it. It's an interesting approach.
Actually, I saw some presentation by someone from Mozilla about their pet project for funding projects that involved "futures". That would be similar to auctioning in certain ways, but it all sounded ghastly and yet another potentially exploitative application of financial industry practices.
We can be sure that as soon as any auction concept gets applied to work, it will result in people underbidding to get the opportunity to work. This is, of course, familiar from any observation of procurement processes where companies offer to do work for unsustainable sums and then cut corners or exploit their workers to make the numbers add up.
[...]
It is true that you cannot just have people doing exactly what they want and expecting to get paid for it, no matter what it is. (Well, actually you can: it is called art.) Then again, by "what they actually want to do" I meant the work of writing software, as opposed to things like meddling with dubious financial instruments, executing foreign exchange transactions, drumming up business on a speculative basis, and so on.
I have a hard time following here.
Really, this was just me observing that where work needs doing, people want to get paid for actually doing the work and not being distracted with other activities that are unrelated to the work getting done.
So, when someone claims that everyone can get paid in today's cryptocurrency, it sounds great to them, but this would mean that people would need to deal with setting up their cryptowallet (or whatever), figure out how to exchange cryptomoney into real money, and all sorts of things that nobody should have to be troubled with.
I agree that the best approach is to blend various methods. Auction payments sound much like the market is in control of almost everything, while recurring payments sound more geared towards good software. Almost two opposites here. Crowdfunding is special, I don't know if you can talk about a market in that case, in the classical sense of it.
Thanks for sharing your thoughts on this!
Paul
Am Montag 17 September 2018 15:14:16 Paul Boddie wrote:
... a lot again, I'll still have to read it in more detail. :)
There are at least three forms of funding platforms that tend to see some Free Software development activity:
- Bounty funding
- Ongoing funding
- Crowdfunding campaign
Ongoing funding potentially promotes a more sustainable way of funding people,
this is the most interesting to me, because each software component needs some basic maintenance, including running the necessary infrastructure (from the human side). Here are examples I know of where continuous funding is working in a fine way:
* https://www.patreon.com/evanyou for the work on vuejs.org, a competitor to Google's Angular and Facebook's React that keeps the interests of more people in mind, such as a progressive learning experience and modularity
* https://wiki.debian.org/LTS/Funding via Freexian at https://www.freexian.com/services/debian-lts.html
(Transparency: my company supports both initiatives)
at least if we can ignore those cultural issues around keeping the audience happy, because a developer can potentially prioritise their work appropriately and not feel that they have to dance to everybody else's tune all the time. But the problem then is to persuade people that the work is worth supporting.
You make it sound like a bad thing; there I disagree: I believe it is good that it is necessary to show that the work is good. As a software designer, like any professional in any other business, you face some competition, and if you cannot show relevance, your work may turn out not to be that relevant after all.
What I dislike, and where I agree with you, is when decisions are taken based on marketing material alone, or on personal experiences or relationships, often without taking the mid- and long-term goals of one's own organisation into account.
The antidote here is education and explanation. Paying for Free Software, even when it is not mandatory, is still in the best interest of its users. (Potential advertisement:) This is why I do it, personally and with my company.
Best Regards, Bernhard
On Wednesday 19. September 2018 09.48.06 Bernhard E. Reiter wrote:
Am Montag 17 September 2018 15:14:16 Paul Boddie wrote:
at least if we can ignore those cultural issues around keeping the audience happy, because a developer can potentially prioritise their work appropriately and not feel that they have to dance to everybody else's tune all the time.
You make it sound like a bad thing; there I disagree: I believe it is good that it is necessary to show that the work is good.
The key element is "all the time". Clearly, if the work is not "art" then it must be relevant to others. But people also need to be able to exercise discretion about how they do certain aspects of the work and the direction in which the work is going.
What we have seen in various Free Software projects is that the developers refuse to listen to the users, accusing them of not understanding "design", "usability", "the big picture", having "entitlement", and so on. That is not acceptable and has resulted in substantial dissatisfaction (and has arguably set Free Software adoption back by many years in certain cases).
But at the same time I recognise that the developers ought to be in a better position to make technical and strategic decisions and should be allowed to do so. There needs to be a constructive conversation between users and developers to allow each side to recognise and exercise their responsibilities.
Paul
Hi Andreas,
On Thu, 13 Sep 2018 17:05:41 +0200 Andreas Nilsson wrote:
If this is the case, then I would suggest separating out the parts that give Github and others the upper hand, and then seeing how those parts can be countered using free software and free innovation.
I wrote a blog post back in 2016 where I tried to find out what makes Github special and what we would need to challenge it.
https://www.schiessle.org/articles/2016/02/12/the-next-generation-of-code-ho...
Github seems to provide the inter-connectedness that is lacking in many of the other alternatives. You can click a button to fork a project, you can start "watching" certain projects and get email notifications, you can download the sources without needing Git, and you have a Markdown interface that every project adopts, so that the README is presented as if it were a web page.
I think this part is already available as Free Software. You can have the same easy and intuitive workflow with Gitlab, Gitea and others.
What stops people from moving to these Free Software solutions, in my experience, is the social lock-in effect, which is stronger than many people think, as I tried to explain both in the blog post and in a mail I wrote earlier in this thread.
That's why I think the challenge is to turn these existing free solutions, which currently create many small islands, into a decentralised and federated solution, so that you can share pull requests, issues, mentions, ... across instances seamlessly.
Gitlab is already discussing this [1] and there are other initiatives [2], although all of them seem to be at a really early stage of discussion.
[1] https://gitlab.com/gitlab-org/gitlab-ce/issues/4013
[2] https://github.com/forgefed/forgefed
Cheers, Björn
Am Mittwoch 29 August 2018 09:22:57 schrieb Ion Savin:
I don't see why UI tools wouldn't make this process as friendly as GitLab/GitHub MR/PRs but I don't think they exist yet.
Have you checked out what the latest versions of Allura, Phabricator and Gerrit do to solve this use case?
(I am seriously interested as there are many developments and good comparisons are rare.)
Best Regards, Bernhard
On 29/08/2018 11:23, Bernhard Reiter wrote:
Am Mittwoch 29 August 2018 09:22:57 schrieb Ion Savin:
I don't see why UI tools wouldn't make this process as friendly as GitLab/GitHub MR/PRs but I don't think they exist yet.
Have you checked out what the latest versions of Allura, Phabricator and Gerrit do to solve this use case?
I haven't looked at any of them.
I was looking for something really specific to the email workflow: a desktop app you can point at a mailbox and which is capable of providing a friendly interface for reviewing the patches.
I guess most people who used GH PRs would object to this workflow: https://github.com/git/git/blob/master/Documentation/SubmittingPatches
But with some supporting tools (even for triggering a build somewhere) it might not be so bad.
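For the curious, the core of that email workflow is only a couple of commands. A minimal sketch (the branch name, file and list address are placeholders, and actually sending requires SMTP details in your git configuration):

```shell
set -e
# Set up a throwaway repository for demonstration purposes.
dir=$(mktemp -d) && cd "$dir"
git init -q demo && cd demo
git -c user.name=Demo -c user.email=demo@example.org \
    commit -q --allow-empty -m "Initial commit"
base=$(git symbolic-ref --short HEAD)   # master or main, depending on git version

# Do the work on a topic branch, one logical change per commit.
git checkout -q -b topic
echo 'hello' > README
git add README
git -c user.name=Demo -c user.email=demo@example.org \
    commit -q -m "Add README"

# Turn each commit on the branch into a mail-ready file.
git format-patch "$base"   # writes 0001-Add-README.patch

# Sending is then a single command (needs SMTP configured in ~/.gitconfig):
# git send-email --to=devel@example.org 0001-Add-README.patch
```

The reviewer applies such a mail with `git am`, which is what a mailbox-oriented review tool would wrap with a friendlier interface.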
Regards, Ion Savin
Hi Alessandro,
On Mon, Aug 27, 2018 at 11:35:21PM +0200, Alessandro Rubini wrote:
So, besides self-hosting (unfeasible for whole-kernel repos)
A remark for non-kernel-hackers: The linux.git tree is so large that you need a serious amount of RAM (and I/O bandwidth, and CPU) to host git repositories with it. However, IMHO, hardware capacity has been growing much quicker than the number of Linux kernel commits, so it is more feasible these days to self-host that, at least with git-daemon + cgit.
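For illustration, a read-only git-daemon export of this kind needs very little configuration; this is a sketch with placeholder paths, not a hardened setup:

```shell
# Serve every bare repository under /srv/git over the git:// protocol.
# Only repositories containing a git-daemon-export-ok file are exported.
touch /srv/git/linux.git/git-daemon-export-ok
git daemon --base-path=/srv/git --reuseaddr --detach

# Clients can then clone anonymously:
#   git clone git://yourhost.example.org/linux.git
```

cgit is then a separate CGI program, served by a web server and pointed at the same directory of repositories.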
Also, more to Alessandro: Why not host your kernel tree[s?] on git.kernel.org?
So, I feel a little uneasy, and I'm now wondering where to push my yet-unpushed projects (while keeping previous stuff on github for several reasons -- mostly link-rusting issues).
Self-hosting small projects is absolutely feasible, and there are tons of FOSS projects that are self-hosting.
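As a concrete illustration of how little is needed, here is a sketch of self-hosting a small project with nothing but a bare repository; the paths are placeholders, and over the network the clone URL would be an ssh:// address on your own machine rather than a local path:

```shell
set -e
# "Server" side: a bare repository is all the hosting there is.
# In practice this would live under e.g. /srv/git on the host.
dir=$(mktemp -d) && cd "$dir"
mkdir -p srv
git init -q --bare srv/myproject.git

# "Client" side: clone it, commit, push back.
# Over the network the URL would be ssh://user@host/srv/git/myproject.git.
git clone -q srv/myproject.git work
cd work
echo 'My project' > README
git add README
git -c user.name=Demo -c user.email=demo@example.org \
    commit -q -m "Add README"
git push -q origin HEAD
```

Adding anonymous read access via git-daemon or a cgit web view on top of the same directory is then an independent, optional step.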
How does the free software community feels in this respect?
At osmocom.org, we self-host everything, whether it's redmine, mailman, git, cgit, gerrit, jenkins, ... and use github only for public mirrors.
I don't think "gitlab" or whatever "social git" site around the repositories is needed unless you're involved in projects where you expect plenty of contributions from developers who are not familiar with git send-email. And if a project grows to that size, I have found gerrit much more reasonable in terms of structured code review.
For some personal stuff, I also still run a separate git instance.
For people less inclined to spend their days maintaining their own infrastructure, I would suggest simply asking any of the existing FOSS projects or entities whether they could host a repo there, whether freedesktop.org or kernel.org, Debian/SPI, ...
But I would actually agree that there is a gap that needs to be filled: community-hosted, not-for-profit FOSS project infrastructure, funded by membership fees, operated by volunteers, with a legal entity/structure in place that will make sure they don't just re-brand, get bought, go out of business or depend on a single individual or corporation. I've already found this missing for simple mailing-list hosting. The same applies for git, gitlab, gerrit, jenkins, redmine, trac, patchwork, etc.
Regards, Harald