> Can someone who doesn't like symbols help me understand the downsides of them?
I wish I had been clearer in my talk but I only had 30 minutes and wanted to cover other topics. Here is a more comprehensive argument against symbols in Ruby:
In every instance where you use a literal symbol in your Ruby source code, you could replace it with the equivalent string (i.e. the result of calling Symbol#to_s on it) without changing the semantics of your program. Symbols exist purely as a performance optimization. Specifically, the optimization is: instead of allocating new memory every time a literal string is used, look up that symbol in a hash table, which can be done in constant time. There is also a memory savings from not having to re-allocate memory for existing symbols. As of Ruby 2.1.0, both of these benefits are redundant: you can get the same performance benefits by using frozen strings instead of symbols.
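To make the interchangeability claim concrete, here are a couple of illustrative cases (the `User` class is just a stand-in):

```ruby
class User
  attr_accessor :name       # same effect as attr_accessor "name"
end

user = User.new
user.send(:name=, "Matz")   # same as user.send("name=", "Matz")
user.respond_to?(:name)     # => true, same as user.respond_to?("name")
```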
Since this is now true, symbols have become a vestigial type. Their main function is maintaining backward compatibility with existing code. Here is a short benchmark:
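In sketch form, using the standard Benchmark module (exact timings will vary with your machine and Ruby version):

```ruby
require 'benchmark'

N = 1_000_000

Benchmark.bmbm do |x|
  x.report('symbol')        { N.times { :foo } }          # interned once, looked up N times
  x.report('frozen string') { N.times { 'foo'.freeze } }  # deduplicated by the VM in 2.1+
  x.report('string')        { N.times { 'foo' } }         # allocates a new object every iteration
end
```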
There are a few things to take away from this benchmark:
1. Symbols and frozen strings offer identical performance, as I claim above.
2. Allocating a million strings takes about twice as long as allocating one string, putting it into a hash table, and looking it up a million times.
3. You can allocate a million strings on your 2015 computer in about a tenth of a second.
If you’ve optimized your code to the point where string allocation is your bottleneck and you still need it to run faster, you probably shouldn’t be using Ruby.
With respect to memory consumption, at the time when Matz began working on Ruby, most laptops had 8 megabytes of memory. Today, I am typing this on a laptop with 8 gigabytes. Servers have terabytes. I’m not arguing that we shouldn’t be worried about memory consumption. I’m just pointing out that it is literally 1,000 times less important than it was when Ruby was designed.
Ruby was designed to be a high-level language, meaning that the programmer should be able to think about the program in human terms and not have to think about low-level computer concerns, like managing memory. This is why Ruby has a garbage collector. It trades off some memory efficiency and performance to make it easier for the programmer. New programmers don’t need to understand or perform memory management. They don’t need to know what memory is. They don’t even need to know that the garbage collector exists (let alone what it does or how it does it). This makes the language much easier to learn and allows programmers to be more productive, faster.
Symbols require the programmer to understand and think about memory all the time. This adds conceptual overhead, making the language harder to learn, and forcing programmers to make the following decision over and over again: Should I use a symbol or a string? The answer to this question is almost certainly inconsequential but, in the aggregate, it has consumed hours upon hours of my (and your) valuable time.
This has culminated in libraries and classes like Hashie, ActiveSupport’s HashWithIndifferentAccess, and extlib’s Mash, which exist to abstract away the difference between symbols and strings. If you search GitHub for "def stringify_keys" or "def symbolize_keys", you will find over 15,000 Ruby implementations (or copies) of these methods to convert back and forth between symbols and strings. Why? Because the vast majority of the time it doesn’t matter. Programmers just want to consistently use one or the other.
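For reference, the typical implementation in those search results is only a few lines of boilerplate, duplicated across thousands of codebases. A representative sketch:

```ruby
def symbolize_keys(hash)
  hash.each_with_object({}) { |(key, value), result| result[key.to_sym] = value }
end

def stringify_keys(hash)
  hash.each_with_object({}) { |(key, value), result| result[key.to_s] = value }
end

symbolize_keys('name' => 'Matz')  # => {:name=>"Matz"}
stringify_keys(name: 'Matz')      # => {"name"=>"Matz"}
```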
Beyond questions of language design, symbols aren’t merely a harmless, vestigial appendage to Ruby. They have been a denial-of-service attack vector (e.g. CVE-2014-0082), since they weren’t garbage collected until Ruby 2.2. Now that they are garbage collected, their behavior is even closer to that of a frozen string. So, tell me: Why do we need symbols, again?
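For context, the vulnerable pattern generally looked something like this (an illustrative sketch, not the actual CVE code):

```ruby
# Before Ruby 2.2, symbols were never garbage collected, so converting
# attacker-controlled strings into symbols grew the symbol table forever.
def handle(params)
  params.each_key { |key| key.to_sym }  # every unique key is interned permanently
end

handle("a" => 1, "b" => 2)
```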
I should mention, I’d be okay with :foo being syntactic sugar for a frozen string, as long as :foo == "foo" is true. This would go a long way toward making existing code backward compatible (of course, this would cause some other code to break, so—like everything—it’s a tradeoff).
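To illustrate what would change (and why some existing code would break):

```ruby
:foo == "foo"      # => false today; true if :foo became sugar for a frozen "foo"
{ foo: 1 }["foo"]  # => nil today; presumably 1 under this proposal, which is the point
```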
I’m curious how Europe’s tech hubs (specifically London, Paris, and Berlin) compare to U.S. cities.
I suspect Berlin would compare quite favorably, despite the relatively high German tax rate. I moved there 18 months ago and currently pay €500 ($565) for a one-bedroom apartment in a nice part of town (Prenzlauer Berg). In San Francisco, I paid 5X that for a studio apartment in Hayes Valley.
I would say Berlin would look good: the cost of living there is decent, mainly due to reasonable rental costs and good transport, and demand and job mobility have improved a lot there. Dublin also wouldn't rate too badly. London would probably look more like New York.
I think the problem with Berlin is that salaries are low, while prices are growing pretty quickly.
From my experience, this is what makes Berlin less attractive than people think (at the financial level):
- Low engineer/developer salaries compared to other German cities.
- Rent prices are growing a LOT faster than salaries; my last apartment outside of the city center (the "Ring") cost almost 900€ a month, plus electricity.
- Public transportation is not that good (especially in winter) and is quite expensive relative to the average salary (about 80€ a month if you live in the AB zones, 100€ for ABC).
- High taxes, though this is a common problem in Germany.
- Food is cheap, but not cheaper than in other German cities with better average salaries.
- If you own a car, prepare for extremely high insurance and taxes.
Obviously, Berlin has its own perks. Besides the well-known ones, like the nightlife and the openness of the people, it has really huge demand for engineers and a vibrant startup scene.
Glassdoor suggests "software developers" or "software engineers" in London earn around $60k, which would put it far below the salaries listed here.
In the absence of a COL index for the UK, we can use the "London living wage", supposedly a baseline for a minimum acceptable standard of living[1], which translates to £18k (a little under $30k).
The tax paid on the amount earned in excess of this living wage is around £7k (<$11k).
That would leave around $19k of savings or "discretionary spending", actually slightly better than NYC and Portland despite the low salary, but based on what is almost certainly a much less generous view of essential costs of living.
[1] Since it assumes dependents, it's quite a generous minimum standard; indeed, excluding tax payments from the equation, I've never spent more than it whilst living in London on a substantially higher income. But it's also almost certainly less generous than the COL assumptions for the US cities.
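The back-of-envelope version, using the rounded dollar figures above:

```ruby
salary      = 60_000  # USD, the Glassdoor figure for a London developer
living_wage = 30_000  # USD, the ~£18k London living wage
tax         = 11_000  # USD, the ~£7k tax on earnings above the living wage

salary - living_wage - tax  # => 19_000 USD of "discretionary spending"
```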
Interesting. Average rent in London is around £1,000/month, plus £44 council tax (e.g. Wandsworth), plus a monthly travelcard at £120, gas and electricity £45, broadband/phone £16, mobile phone £15. That leaves about £60/week for food, clothes, etc. If you do manage to find a studio flat for £700/month, that brings it up to £130/week, so you could actually go to a pub sometimes.
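Those weekly figures roughly check out if you assume the ~£18k living wage from the comment above, i.e. about £1,500 coming in per month:

```ruby
income   = 18_000 / 12.0                    # ~£1,500/month, the living-wage assumption
expenses = 1_000 + 44 + 120 + 45 + 16 + 15  # rent, council tax, travelcard, energy, broadband, mobile

(income - expenses) / (52 / 12.0)           # => ~£60/week left over
(income - (expenses - 300)) / (52 / 12.0)   # => ~£130/week with a £700 studio instead
```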
I paid 65 quid for a monthly bus pass (I didn't have to use the subway on a typical day), and the rent was £65 a week plus electricity for a bedsit (3rd zone, just outside of the 2nd, Streatham Hill) - kind of a shithole, but I've seen worse. That was around 2010.
London is phenomenally expensive. Rent, tax, national insurance and transport consume most people's disposable income - even in well-paid professional jobs.
You certainly work on tons of great open source stuff, and I don't want to diminish the value of that at all, but for somebody else to hold up your streak as a shining example of the commit-every-day mindset is a bit in the grey area, as far as I'm concerned.
I've not seen anybody disagree with that. But running `bundle update && git commit -a && git push` falls into what I, for myself, consider a grey area on the "I've committed today" scale. I say this as someone who also tries to maintain a GitHub streak.
You, sferik, and every other GitHub user are totally free to disagree. I don't own the streak system.
It's gotta be done and someone has to do it. Whether it counts as coding or not, it's still a process that someone has to focus on and take time out of their day to complete.
If there were no financial incentive, I would probably merge it, but now I'm questioning the motivations of the committer. If I merge this, it means there will be less money for someone else who makes a more significant contribution in the future.
Personally, I prefer Gittip’s model to incentivize open-source contributions, if only because it doesn’t create this kind of noise for project maintainers.
I don't know. I find Gittip's model mostly rewards popular coders.
Whereas with tip4commit, rewards are not influenced by popularity. I do see how fluff pull requests could drain the pool. One needs to ask: Is fixing a typo worth $4? Is it worth my time to do it? Maybe I should just click that merge button.
In my opinion, Bountysource is the best reward system: it gives context, popularity is a boon, and it shouldn't generate fluff.
Gittip is certainly going to help more popular coders at first. But that's true with any platform in its early growth phase.
When Gittip is better established, anyone who wants to should be able to put a "Gittip button" on their project page. When Gittip itself has more name recognition, an unknown coder who creates a widely-used project should be able to rapidly pick up meaningful support through Gittip.
Not only that, Gittip's project pages allow funds to be distributed to the entire team. So even if you're not well-known, if your project is, you can still get some.
You don’t know that the PR was submitted with any knowledge of tip4commit. Would you accept the commit normally? Do you want the very existence of tip4commit to modify your project management? My advice: act as if tip4commit doesn’t exist; they’re just providing a service on the side which you shouldn’t need to worry about.
People have an economic incentive to pick the low-hanging fruit, but eventually there will be none left.
Personally, I wouldn't mind accepting something like that for my own open source project. It takes time to proofread, and I appreciate spotless grammar.
IMO, if this commit is useful, accept it. It will reduce the next reward by only 1%. Sooner or later, all the simple ways to get tips will be exhausted. The project will be a bit closer to perfection. And maybe you will have one more happy developer.
While I love to get documentation updates, the comma change and the semicolon change are ridiculous, and the comma change is actually less correct in this context. The doc re-org isn't a bad thing, though.
But I also suspect that once lots of projects have this feature, and the maintainers establish that they won't merge non-useful commits like grammar nitpickery (though good grammar has value; we merge stuff that fixes grammar in our docs, and I wouldn't mind having people get paid for it), this will become less common. And reputation still matters. I wouldn't want to be "that guy" who takes advantage of a simplistic system to get paid (and I wouldn't want to hire that guy for substantial work).
Hey there, that last commit is mine. I won't be getting a tip for it, I just thought it was a nice change. The first thing I usually look for with a tool like this is how to install it, so I moved the Ruby installation steps down to the bottom so as not to bury what I thought was the important bit in the Installation section. Poorly-timed, if anything.
What if this kind of routine trivial fix is exactly the sort that needs to be incentivized to actually happen? This is perhaps an extreme example, but it's also the sort of thing that is both common and vaguely unappealing about open source projects. Everyone's excited to do the big important next feature or blatant bug fix, but no one really wants to do the janitorial cleanup.
What if the merger were able to modify the tip amount?
A project could have a standard of, say, 0.5% for typo fixes, 1% for documentation, 2% for refactoring which improves Code Climate or coverage statistics...
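Sketched out, a project's schedule might look something like this (the categories and numbers are just examples):

```ruby
# Hypothetical per-project tip rates the merger could apply:
TIP_RATES = {
  typo_fix:      0.005,  # 0.5% of the pool
  documentation: 0.01,   # 1%
  refactoring:   0.02    # 2%, e.g. when Code Climate or coverage improves
}

def tip_amount(pool, category)
  pool * TIP_RATES.fetch(category, 0.01)  # default to 1% for anything unlisted
end

tip_amount(400.0, :typo_fix)  # => 2.0 dollars from a $400 pool
```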
Hm... Do you mean allowing the merger to specify the value of the commit? E.g., by default it could be 1%, but the merger could make it smaller if the commit is not very valuable. Sounds like a good idea, thanks!
Interesting idea. Here's a twist: what if the committer could set the value of their commit instead? Social pressure would dictate that most people would not overreach, and those who make small but useful commits could acknowledge their minor nature in advance. This is essentially a kind of honour system.
That's actually a good idea; you'd probably get fairly reasonable results that way. The downside: using those tags at all makes it unambiguously clear that you care about the tip4commit tips.
One plausible possibility that might help: in the block that normally contains Signed-off-by and Reviewed-by, add a tag for marking a commit as "minor" or "trivial", in the sense used by the Linux kernel's trivial@ patches or the GNU project's threshold for changes accepted without copyright assignment.
And as a quick hack, scale the tip by log(diffstat) with a sensible upper bound.
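A sketch of what that quick hack might look like (names and numbers are illustrative):

```ruby
# Scale the tip by the log of the diffstat, capped so huge diffs don't drain the pool.
def scaled_tip(base_tip, lines_changed, cap: base_tip * 5)
  [base_tip * Math.log(1 + lines_changed), cap].min
end

scaled_tip(1.0, 2)    # => ~1.10 for a one-line typo fix
scaled_tip(1.0, 500)  # => 5.0, hits the cap
```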
I have submitted similar pull requests to open source projects with no financial incentive. I don't really see the problem, as long as it's judged to be a "legitimate" correction.
What you are describing is direct response advertising.
There is another type of advertising called branding. Budweiser doesn’t spend $25 million per year advertising during the Super Bowl because they expect you to see the advertisement and immediately order a beer at the bar. Instead, Budweiser is trying to tell you a story about their brand to evoke an emotional response that will be stored deep in your amygdala such that every time you go to buy beer in the future you will reflexively choose Budweiser.
Google is better for direct response advertising; Twitter is better for branding.
Yes, that's the point. As people spend more of their time and attention on social media relative to other media, where will those branding budgets go? Not to Google (well, actually, that's why they bought YouTube...but not to Google Search).
Twitter is one of the few outlets where brands can effectively communicate their message online. That's the theory, anyway. You can argue that Twitter is a bad medium for branding but arguing that it's a bad medium for direct response advertising is missing the point.
Watch out, guys, Twitter is doing something different.
On the web, there is far more money in direct response right now, because it's the easiest to measure and the platforms are well ingrained in all our systems.
To say that brand-based advertising isn't required or has no place is contrary to everything we know about advertising. You don't see the Coke logo 50 times a day for nothing.