I think you are misattributing his snark. In his regular column he has a recurring section called "Blockchain blockchain blockchain", such as in [1], in which he covers how the finance industry itself is trying to apply blockchains to everything. It's the new fad; they're at the Peak of Inflated Expectations [2]. So while it may be true that some bankers are afraid that Bitcoin is going to replace them (I have no opinion on that), the author is highlighting yet another case where blockchain is supposed to easily solve a difficult distributed-ledger problem.
Analytics is what Kudu was designed for (it's not just marketing), so some tradeoffs were made. You'll get the biggest bang for your buck if your use cases are heavy on inserts and big scans with selective filters. Other use cases will perform OK, or just meh, versus other storage engines. Also note that Kudu is still pre-1.0.
Thanks for the explanation. We're currently using Elasticsearch for ad-hoc aggregations (e.g. counts per month, filtered by a few dimensions) in real time. Indexes are some tens of millions of items per year per column; not much by big-data standards, but enough that we need a whole bunch of nodes, and small enough that even wide queries (like getting a total for the entire year) finish in a couple of seconds. A large part of ES's speed comes from being able to perform the aggregation function locally on each node that has the shard, in parallel, rather than pulling the full data back to the client first. Is this the kind of use case where Kudu would perform well? How does Kudu handle data locality? Would you actually run queries like these, or would you precompute periodic rollups?
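For concreteness, here's roughly what such an aggregation looks like on the ES side: a minimal sketch using the standard ES aggregation DSL, where the field names (`region`, `status`, `created_at`) and values are made up for illustration.

```python
def monthly_counts_query(region, status):
    """Build an Elasticsearch request body: filter by two dimensions,
    then bucket document counts per calendar month.

    Field names here are hypothetical; substitute your own mapping."""
    return {
        "size": 0,  # skip the hits, we only want the aggregation buckets
        "query": {
            "bool": {
                "filter": [
                    {"term": {"region": region}},
                    {"term": {"status": status}},
                ]
            }
        },
        "aggs": {
            "per_month": {
                "date_histogram": {
                    "field": "created_at",
                    "interval": "month",
                }
            }
        },
    }

# Sent via the search API, each data node computes its shard's buckets
# locally and only the merged buckets travel back to the coordinator.
body = monthly_counts_query("eu-west", "active")
```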
You seem to have a pretty typical use case that we're targeting. One thing to understand about Kudu is that it doesn't run queries, it only stores the data. You can use Impala or Drill; they'll figure out the locality, apply the aggregations properly, and push the filters down to Kudu.
Did you initially pick ES over systems like Impala because of the lack of real-time inserts/updates when used with HDFS?
Thanks, that's helpful. We picked ES for several reasons. We're not a Java shop, and the Hadoop ecosystem is heavily biased towards JVM languages.
Secondly, ES is easy to deploy and manage. Being on the JVM, it admittedly has a considerable RAM footprint, but at least it's just one daemon per node. With anything related to Hadoop, it seems you have this cascade of JVM processes that inevitably need management. And lots and lots of RAM.
Thirdly, as you point out it's easy to do real-time writes.
Well, TBH you do get to pick which Hadoop-related components need to run; HDFS's DataNode itself is happy with just a bit of RAM. I do understand the concern though.
You're probably happy with what you have in prod but if you get some time to try out Kudu feel free to drop by our Slack channel for a chat! http://getkudu.io/community.html
The issue isn't that MongoDB is eventually consistent, it's that the documentation claims that in some cases it's strictly consistent[1] while Kyle found that:
"MongoDB, even at the strongest consistency levels, allows reads to see old values of documents or even values that never should have been written."
I didn't say Eventual Consistency. I said Occasional Consistency. MongoDB has hard Occasional Consistency. Indeed, it is the most occasionally consistent database I know of. I once wrote a few million records into Mongo. It was consistent before the write, but never again after.
Great for sub-linear time algorithms! At that point, all my algorithms ran at less than O(n) on the size of the data I had written in.
From a business perspective, Occasional Consistency is also a very nice property if you are storing audit data for certain types of organizations. It gives complete plausible deniability about rule compliance.
Will it be the end of eat24's weekend coupons? When they started they were regularly giving 3-4 bucks per coupon. Now, I can't remember the last time it wasn't $2, apart from those times when they teamed up with other companies like Paypal and gave away $10. I'm actually surprised it went on for so long, seems like an expensive way to help retention.
I talked to them once to get pricing and they wanted 12% of every order. I'm sure handing out coupons regularly isn't costing them much, and it's worth more in the ordering volume it boosts.
That along with: "Many of you use Direct Messages to reach the people and brands you’re only connected to on Twitter."
So the blog post's target audience seems to be their own users ("you experience/use"), but spoken in advertiser-speak. It's a not-so-subtle reminder that the user is the product.
It's the other way around: the guy who did the translation without context wrote "jour de merde", and I saved them from releasing that in their phone calendar app during my one-day job, paid in cash.
I wonder why Google translated "huis-clos" to "camera" in the header text. Best I could find is that Sartre's Huis Clos was originally called "in camera" which means: http://en.wiktionary.org/wiki/in_camera.
So my guess is that they are actual EFL India employees who are flown in for some time, probably on a tourist visa (I've seen that elsewhere), but still paid their normal salary while they work here.
The truly exceptional situation here is that their salary happens to be so low it's below the minimum salary (edit for clarification: exceptional in that it also happens with employees whose salary is above the minimum, but then the article would seem a lot less sensational).
> The truly exceptional situation here is that their salary happens to be so low it's below the minimum salary.
Not in India it's not. In 120 hours at $1.21 per hour, these workers earned almost 9,000 rupees. While $1.21 doesn't sound like a lot here in the US, in India about 100 rupees will buy you a nice meal out. 400-600 rupees will buy you a hotel room for the night, etc. The dollar has a lot more buying power than the rupee.
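To make the arithmetic above explicit (the INR/USD rate here is an assumption, roughly the ~62-to-1 rate of that period, not a figure from the article):

```python
HOURS = 120
USD_PER_HOUR = 1.21
INR_PER_USD = 62  # assumed exchange rate, roughly the rate at the time

total_usd = HOURS * USD_PER_HOUR      # $145.20 over the 120 hours
total_inr = total_usd * INR_PER_USD   # ~9,000 rupees

# At ~100 INR for a nice meal out, that's on the order of 90 meals.
meals = total_inr / 100
```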
This is just to say that, for a contract worker hired in India, these wages are "acceptable". I think it's likely the company didn't understand that US law dictates that, while the Indian workers are in the US, they must abide by US minimum wage laws (which would mean a substantial salary increase for these workers).
I think it's more likely that they didn't fully understand the law than that there was malicious intent. The workers were, after all, temporary and were not going to be staying in the US permanently. This would fit with the symbolic fine of $3,500 plus wages due.
> I think it's more likely that they didn't fully understand the law than that there was malicious intent. The workers were, after all, temporary and were not going to be staying in the US permanently.
Oh yeah, we agree there. What I meant is that, of all the times companies brought employees over here to work for a while, this time it happened that their salary was below the minimum. If they were bringing people over from, say, Canada or Germany, they'd probably be paid well above the minimum.
While that may be true, both the Canadian dollar and the German (well, EU) euro have a lot more buying power than the rupee, and are more in line with the US dollar. 1 euro as of today is $1.26 USD. So someone getting paid 25 euros an hour in Germany would get $31.50 USD per hour here.
In India, some googling of job postings has led me to believe the average software engineer makes about 500,000 Rupees a year. That's only $8,173 USD a year. (A nice 2 bedroom apartment is about $80-$90 USD a month in India)
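Sanity-checking the conversions in the last two paragraphs (the EUR rate is the one quoted above; the INR rate is simply the one implied by the 500,000-rupee ≈ $8,173 claim, not live data):

```python
EUR_TO_USD = 1.26                 # quoted rate
INR_TO_USD = 8173 / 500_000       # rate implied by the salary figures above

german_hourly_usd = 25 * EUR_TO_USD        # $31.50/hour
indian_salary_usd = 500_000 * INR_TO_USD   # $8,173/year
```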
Economies are totally different, which is why there appears to be a large gap in compensation.
For those interested, a good source for a lot of publicly reported crashes, accidents, incidents and whatnot is The Aviation Herald. Here's the list of all their recorded crashes sorted by occurrence date (toggle the icons next to "Filter" to see more kinds of events): http://avherald.com/h?list=&opt=7681
1. https://www.bloomberg.com/view/articles/2017-01-31/mirror-tr...
2. https://upload.wikimedia.org/wikipedia/commons/thumb/9/94/Ga...