Ok so we have a state actor with unlimited resources attempting to manipulate our elections. Our defense consists of programmers and sysadmins working for the local county, making sub-$40k per year in salary (no offense). The election software runs on Windows 95 with an Access database backend. I am not kidding.
Of course we've already been hacked. I personally think the 2016 election had to be hacked, given that no one, not even Trump, thought he would get close to winning. I know it sounds like conspiracy-theory bullshit, but come on, they already got Hillary's emails from the DNC. How hard could it be to target a certain number of counties in key states?
Certainly there are other techies out there who agree with me?
I'm somewhat of the opinion that something that overt was unnecessary, so while it definitely could have happened, I'm not sure it did.
"<Nation X> directly manipulated votes" is one thing, and very blatantly illegal. "<Nation X> convinced a bunch of people to vote against their interests and divide the US" is another, and while unethical, dishonest, and certainly not good, it's not clear to me that it's actually illegal. Besides, if you convince them once, they'll probably stick with the flawed information and voting pattern, which is a better return on investment.
I'm not from the US though, so... maybe I'm not clear on how easy it would be for people to collaborate and cover something like that up.
Well, nothing in there suggests direct manipulation of voting machines, which was the core point.
I think that somewhat proves my point - it's not clear to me that 'posting comments online' or 'paying news outlets to write specific stories' are, or should actually be, crimes. More directly stealing documents from election networks is a crime - and that was caught.
I think it would be more strange if our elections were not hacked--given how lax our election security is, plus how much there is for nefarious actors to gain. Maybe not hacked to the point of outright ballot stuffing, certainly not 100% of polling places are hacked, but I think contemporary election rigging is a more subtle, multi-pronged effort. The baffling way we've structured our elections makes it so only a few key areas need to be the focus of effort for attackers. We've really made it easy for them.
In any case, the likelihood that our democracy is intact and working as intended is virtually zero. There's obviously social engineering in play, if you count that as hacking. I think it counts; it's a way for a small minority to steer democracy in a way contrary to the legitimate wishes of the populace. Combined with other shady techniques, no single cause is enough to say for sure our elections are stolen. But all together I think it qualifies as rigged.
Go vote anyway, though. At least make it more difficult to spoof by increasing legit turnout. I really hope we have good participation this year.
Guessing or using social engineering to get into a campaign manager's email account is easier than executing a hack across hundreds of counties in a different country.
Palin's email account was hacked in 2008... do we think that election was hacked as well?
It didn't even need to be all the counties. Just the intersection set of (most vulnerable, least funded, key electoral battlegrounds) to run up the numbers slightly over the state margin of error.
Relying on pre-election polling is not a safe way to predict an election outcome, and has been badly wrong on quite a lot of occasions. Relying on pre-election punditry is even worse, since it's not only inadvertently biased but actively spun to promote certain outcomes. And relying on anecdote or acquaintance reports... sample bias aside, if you had friends or coworkers who voted for Trump, would they have admitted that openly?
Slightly better, we can look at exit polling to see how people claim they voted. At a glance, 2016 exit polling suggested far higher totals for Clinton than we saw, which is suspicious. But exit polling has systematic biases, and simple percentages have been increasingly unhelpful for several successive elections. The Kerry v. Bush exit polls, for instance, skewed towards Kerry in 28 states and Bush in 4. [1] In particular, 2016 exit polls massively oversampled college graduates in an election where a major factor was non-college Democrats breaking right. [2] Fortunately, we get enough demographic data on exit polls and vote counts to correct for some of this stuff in hindsight. It's not perfect, but what we find is that corrected exit data looks far more like the actual vote count than naive exit polls did. [3]
If we want to look for evidence of direct vote manipulation, a surprising election outcome isn't sufficient. There are much more specific things we should expect to see. Broadly, I can think of two dichotomies here: digital attacks (probably systematic and likely foreign) vs. physical attacks (probably local and domestic), and widespread versus swing-state manipulation.
For digital attacks of any kind, we'd expect to see a discrepancy in outcomes between networked electronic votes and other (non-networked or analog) votes. Fortunately, Jill Stein's recount initiative offered useful data: we have both paper and digital vote counts in Wisconsin and Michigan. The counts don't match precisely, of course, but studies found no evidence of any skew towards one candidate. Vote totals also don't appear to have had meaningful digital/paper discrepancies or departures from voter roll counts, which precludes other attacks like generating extra votes in red-leaning districts. [4]
For physical-access hacks, hard evidence would be much less obvious but statistical evidence would be much more obvious, unless we invent a conspiracy capable of traceless, coordinated physical action across the country. This is the sort of thing we did see in 2000, with thousands of votes from specific districts and even specific polling places vanishing. But the unexpected Trump vote counts were widespread and geographically consistent. Trump voters overall mirrored normal Republican demographics, but geographic results and exit poll counts alike show that the biggest changes were a rightward shift among white non-college Democrats, who are heavily overrepresented in the states which unexpectedly broke for Trump. Forging this sort of neat demographic shift ought to be basically impossible, because you couldn't plan the appropriate adjustments until after you had nationwide vote counts in hand. [5]
The hard part of manipulating a US election wouldn't be screwing up a few Symantec machines, but doing enough to change the outcomes without leaving extensive evidence of what was done. It's not enough to say that the outcome was a surprise compared to predictions or that Russia has good hackers; we'd at minimum need to see some kind of actual gap between the results and what people put in ballot boxes and told pollsters they'd chosen. I haven't seen anyone make a decent case for that.
To be honest, Voter ID addresses a made-up problem. Search Wikipedia for 'Voter ID laws in the United States'.
Regardless of your political persuasion, I suggest you read a bit about how these voting machines are built. Assuming you are in tech, you will be horrified. Do you feel comfortable knowing your voting machine was written in VBA with an Excel spreadsheet as the backing data store? This is the level of incompetence we are dealing with. The integrity of our elections is an incredibly important issue for both parties, and this was a problem before Trump, Russia, and the 2016 election.
I also suggest you read up on the Stuxnet virus if you want to see what a determined state actor can create given unlimited resources.
The book 'Cod' by the same author is also a really good, if not better, book. I'm always impressed when someone can make a good story out of a seemingly boring topic.
It's important to remember what drives this: employers often like to think their problems are 'big data' and, by god, they need the over-engineered solution. Your peers who interview you will toss your resume in the trash if you are not buzzword-compliant. Hate the game, not the player.
I solved my problem by getting serious massage therapy. For a long time I thought I had carpal tunnel, but the real problem was super-tight muscles in my back and shoulder. After one painful massage, the burning, tingling and pain in my hand and wrist were gone. Outside of getting an occasional massage, the answer was a regular routine of weightlifting.
I've come to the conclusion that the problem in tech is that all the people doing the work are in their early twenties and have no idea what they are doing. Once they get some experience they are quickly promoted to the CTO position. Rinse and repeat.
What we have here is a classic DBMS problem, and no one at Movio seems to know how to deal with that. Instead of migrating from MySQL to something serious (Postgres), they moved to some columnar DB no one has heard of. Never mind that Postgres, a reasonably priced DBA, and a little thought put into their data model/queries could probably handle all their issues.
Sorry for the snark, cheers on a successful product.
Can't help but also think "WTF are they doing there..." - we're doing exactly the same (user segmentation, targeting and campaign execution for cinema & movie users, disclaimer: we're more or less their only competitor, albeit indirect), but our solution is running at ~30k/year total at 10 times their user base. No magic in there, just good architecture and solid Computer Science. Boring technology (Go/Redshift/Postgres/S3).
The only thing I'd fully agree on is that using Go saved us a lot of resources as well. It's an awesome choice for stuff like this that needs to be reasonably performant while also being simple, understandable and reasonably fast to build.
Well, a problem in tech, certainly. An alternative possibility (and another problem in tech): some mid-level developer could have figured out the problem at the start, but because of artificial time pressure to deliver, they didn't have the time to, and so went with the first bad idea that popped into their head without taking the necessary time to evaluate it.
The fact that this had to happen in a hackathon suggests a typical disconnect between management and development (and probably poor prioritization by management). Development knew this was a problem and how to fix it (evidenced by the fact that they fixed it), but it took removing management (aka a hackathon) to give development the space to fix it. And now the company pats itself on the back for having the vision to host a hackathon, instead of structuring and prioritizing correctly in the first place so this would just get fixed on the clock.
I do think the author's takeaway about the value of simplicity and pragmatism is on point, but that applies not just to code but to management as well.
It is worth noting that on their website, their management team doesn't include a CTO, even though their main product is basically a software solution. They do have a few salespeople represented, though, so management might not be great tech-wise.
The CTO has moved to another company with a not-so-shiny tech stack. As far as I know, they were the main adopter of Go and other solutions to replace Java and then Scala. Perhaps Movio has not yet found the best fit for the company.
I think it is not necessarily the age but the mindset of focusing on solutions instead of understanding the problem first.
It often goes like this:
Oh snap, we encountered a problem! Let's find a tool, framework, or language that promises to solve a similar-sounding problem.
Now we have a problem with a layer of abstraction on top. Soon to be two problems.
Let's find a tool, framework, or language to solve both of them ...
It is a spaghetti-at-the-wall approach, where you just throw a bunch of things at your problem hoping that something sticks. And who cares how long it sticks.
Secondly, as a developer, I think dedicated DB experts are way underrated in start-ups. Sure, your fullstack devs can cobble together some tables, changing them 15 times a day to accommodate business requests and slapping indexes on everything that gets slow. That is also the way to get into trouble once you scale, and instead of reflecting on why that is, people reach for the bowl of pasta.
I was no different when just starting out. I thought my biggest strength was how quickly I could come up with easy "solutions" for any problem the company had. It took me years to realize how silly an approach this is.
>It often goes like this: Oh snap, we encountered a problem! Let's find a tool, framework, or language that promises to solve a similar-sounding problem. Now we have a problem with a layer of abstraction on top. Soon to be two problems. Let's find a tool, framework, or language to solve both of them ...
is absolutely real. I've actually seen it happen, both in projects I was on and in ones I've heard or read about.
Yeah, my opinions on this are "slightly" influenced by the Rich Hickey talks "Simple Made Easy" and "Hammock Driven Development" :)
The difficulty I find is identifying the moment to leave the hammock again in a startup environment. To what degree do you need to understand a problem before you take action? If you try to understand it 100%, you'll never get anything out there.
But I'm already very happy that I was able to convince the business side of the company of the approach in a brief talk about it, and they now refer to "the hammock" themselves :)
>The difficulty I find is identifying the moment to leave the hammock again in a startup environment. To what degree do you need to understand a problem before you take action? If you try to understand it 100%, you'll never get anything out there.
Agreed. The problem, though (and I'm painting with a broad brush here), is that the erring tends to be much more on the side of not trying to understand the problem much, or at all, before jumping into action. I think a lot of it is due to peer pressure and wanting to be "seen" by peers and bosses (and VCs) to be doing stuff, as opposed to really getting things done better in the medium term, even if in the short term it looks like you are not acting but "only" thinking or analyzing or designing stuff. Hence my comment in that post I linked to, about "we have to ship next week". All too common - been there, seen a good amount of that. In fact, this subthread between HN user jacquesm and me just recently is basically about the same point, although described in different words:
>But I'm already very happy that I was able to convince the business side of the company of the approach in a brief talk about it, and they now refer to "the hammock" themselves :)
Somewhat related: I've seen this problem exacerbated by the presence of "architects" who don't seem to have implemented running systems in a long time, and especially have no experience running newer technologies - or only limited experience, which breaks down at scale. Not saying this applies to all software architects, but I've seen this often enough.
E.g., I remember using a dedicated Jenkins environment to run continuous, scheduled integration tests for my service. When the architect found out, he immediately sent me links to software packages that are dedicated to running continuous tests. I asked whether he had any experience running these new packages and if he would be willing to set them up/maintain them... radio silence.
Some time ago, I thought it was <easy> to write code to do things. By now, I mostly ponder how I put things into postgres/kafka|rabbitmq|../memcache|redis|.../elasticsearch/neo4j so I can reduce everything to good queries into these systems.
You're right. I started using MySQL in 2000 when it was still a toy. It's an outdated bias :) I was more flabbergasted that they would choose 'InfiniDB' when there are so many other great options out there.
Well, MySQL still doesn't implement the SQL standard from 1999 (20 years ago), because it's missing common table expressions. Although the next release will support them, thankfully. Sorry, I just had that axe to grind.
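For anyone wondering what's being missed: a common table expression is just a named subquery introduced with WITH. A minimal sketch with made-up table/column names - this runs on Postgres today, and on MySQL only once the 8.0 WITH support lands:

```sql
-- "big_customers" is a CTE: a named, one-off view that lives
-- only for the duration of this query
WITH big_customers AS (
  SELECT customer_id, SUM(total) AS spent
  FROM orders
  GROUP BY customer_id
  HAVING SUM(total) > 1000
)
SELECT c.name, b.spent
FROM customers c
JOIN big_customers b ON b.customer_id = c.id;
```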
I disagree with your assertion that MySQL /is/ a serious database.
The questions I usually ask myself when evaluating database solutions are:
* Does it accept invalid data?
* Does it change data on error?
* Does the query planner change drastically between minor versions?
* How strong is transaction isolation? Can I create constraints, columns or tables in a transaction?
* Does it scale vertically above ~40 CPU threads and 1M IOPS?
For MySQL, every one of these questions comes out the wrong way. You could argue the value of some of them, but a lot of them highlight architectural or development-process shortcomings.
Not the OP, but one example of accepting invalid data is MySQL defaulting values to NULL/0/0000-00-00 when no value is assigned and the column has no default - unless one is in strict mode.
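A minimal sketch of that behavior (the users_demo table is made up; exact results depend on version and sql_mode):

```sql
SET sql_mode = '';  -- non-strict mode

CREATE TABLE users_demo (
  name       VARCHAR(50) NOT NULL,
  created_at DATETIME    NOT NULL  -- note: no DEFAULT on either column
);

-- Insert a row supplying no values at all: a warning, not an error.
-- name silently becomes '' and created_at becomes '0000-00-00 00:00:00'.
INSERT INTO users_demo () VALUES ();
```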
I appreciate that it might sound like that to someone who hasn't used MySQL in production for 10+ years.
To start with, this is still true today: https://vimeo.com/43536445 Despite being 6 years old, strict mode is still required.
Anything prior to MySQL 5.7 will accept "0000-00-00 00:00:00" as a valid date; 5.7 will not (which is sane), but this means migrating from 5.6 -> 5.7 just got a little harder.
In fact it wouldn't validate /any/ date, so it would assume every year was a leap year and February always had 29 days.
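To illustrate, a sketch against an old MySQL with the permissive settings described (dates_demo is a made-up table):

```sql
SET sql_mode = 'ALLOW_INVALID_DATES';

CREATE TABLE dates_demo (d DATE);

-- February 30th does not exist, but this INSERT succeeds anyway;
-- only the rough ranges (month 1-12, day 1-31) are checked.
INSERT INTO dates_demo VALUES ('2015-02-30');

SELECT * FROM dates_demo;  -- returns 2015-02-30
```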
Regarding performance:
This is what I found from my own experience: I was given the task of testing the limits of MySQL. MySQL was the chosen technology and I was not involved in making that decision, so - whatever.
We were given 10 servers with 40 cores (~2014-2015), 128G of DDR3/ECC and 8 SATA SSDs in RAID-0, with 1G of RAID cache for write-back.
We managed to get MySQL to bottleneck pretty quickly. Our queries involved a lot of binary data, so we should have been raw-IOPS bound, but we weren't - we were memory bound. So we replaced the memory allocator with a faster one (jemalloc) and got a 30% performance improvement. We suspected that the kernel sockets implementation was slowing us down, so we compiled a custom "fastsockets" Linux kernel. The improvement was around 4%, but we were still bottlenecked on memory. After doing a full trace of what MySQL was doing, we saw that InnoDB was spinning on a lock quite a lot.
I asked if we could try other SQL solutions (MSSQL/PostgreSQL). PostgreSQL was chosen first because we could just install it - no license and no OS change... It was twice as fast as the optimised MySQL installation, out of the box, with a stock CentOS 6 kernel.
We never even bothered testing MSSQL because PostgreSQL met our performance targets; we were now IOPS bound.
--
More anecdatum:
Regarding data consistency: we (tried to) migrate to PostgreSQL for performance reasons in 2014 (my previous company), and failed because MySQL had been corrupting our data very slowly and silently for many years (corrupting meaning not honouring NOT NULL, not honouring type safety, allowing invalid dates, inserting data on error) - so much so that actually reimporting the output of `mysqldump` would not work.
Isn't it? I thought that today()-(2018 years, 4 months and 10 days) would be approximately that date? Maybe you prefer +0000 vs just 0000?
'ISO 8601 prescribes, as a minimum, a four-digit year [YYYY] to avoid the year 2000 problem. It therefore represents years from 0000 to 9999, year 0000 being equal to 1 BC and all others AD. However, years prior to 1583 are not automatically allowed by the standard. Instead "values in the range [0000] through [1582] shall only be used by mutual agreement of the partners in information interchange."
To represent years before 0000 or after 9999, the standard also permits the expansion of the year representation but only by prior agreement between the sender and the receiver.[19] An expanded year representation [±YYYYY] must have an agreed-upon number of extra year digits beyond the four-digit minimum, and it must be prefixed with a + or − sign[20] instead of the more common AD/BC (or CE/BCE) notation; by convention 1 BC is labelled +0000, 2 BC is labeled −0001, and so on.'
I think a lot of what you have written can be solved software-side. A good database should be no excuse for bad code.
I do not think MySQL is technical debt, as for 80% of startups moving to a different solution is cheap and non-problematic. LAMP is good enough and the quickest/cheapest option for the majority of tech companies.
I'm being perfectly fair in being critical of software which claims to be doing those things.
You can solve issues in your application if you know there will be issues like these; knowing the pitfalls and drawbacks of a technology is certainly noble - but if you do, then why not choose something that follows the principle of least surprise? (There might be reasons.)
I would never claim that you should move everything off MySQL if you use it. However, if you care about data consistency, ensure that you change the defaults, engage strict mode, and ensure that your application has no bugs in handling data.
This is actually hard to do correctly; it's development overhead you shouldn't have to care about. Just choose something that has sane error conditions and the problem vanishes.
Considering many, many of the world's largest tech companies use MySQL or MySQL compatible databases, it's rather absurd to say that MySQL isn't a serious database. Regardless of whether it matches yours or someone else's personal list of capabilities.
To be perfectly fair with you, you can make bad choices and still get something useful done.
Most companies are not alive "because they chose mysql over something else" they're alive because they have "good enough" tech to get the job done. The job that they're trying to accomplish is the thing that makes them successful.
Uber isn't super huge because it used a specific database technology. It's huge because it's good at marketing and it's providing some value to people.
If it got the work done at a reasonable cost and performed reasonably well (i.e. it served the purpose it was meant to serve), how “bad” a choice could it have been?
I'm reminded of an article about zombie companies I read recently: they're companies which are inefficient/poorly-managed/poorly-executing, but due to market/regulatory inefficiencies they're not dead yet. Companies which use MySQL are in a similar situation: they're not doing as well as they could be, and all other things being equal they ought to be put out of business by their competitors — but all other things are rarely equal.
Still, if you are making choices for yourself, you don't choose mediocrity and hope to muddle through: you choose excellence. Choosing MySQL isn't choosing excellence.
“Excellence” is a poor criterion for comparative analysis because it is (a) subjective and (b) unquantifiable.
Do you have objective or quantifiable data and references upon which your opinion is based, _and_ is universally applicable to any arbitrary problem that a SQL database might be an appropriate solution for?
If it costs development time because they need to be extra careful about not sending queries that ERROR and 'wipe' data.
If it silently corrupts data over years and gets discovered much later. (As was the case with my previous company, an e-commerce retailer that lost large chunks of order history)
Are those problems still unresolved in MySQL today? How do you know that similar or worse problems did not exist in alternative solutions at the time it was implemented?
MySQL has been making strides to fix these kinds of issues ever since the Oracle acquisition, for sure.
> How do you know that similar or worse problems did not exist in alternative solutions at the time it was implemented?
Because I've been working on database solutions for over 10 years. There are problems in other software, but I consider data loss to be worse than any of them. For example, the autovacuum in PostgreSQL 8.3 and before was mostly garbage, which ended up bloating highly transactional databases. But deleting data when you fail a constraint is worse.
I have 15 years of experience and can build a decent, clean system using "boring" technologies. But all the decent paid work where I live is maintaining big balls of mud with tech that was obviously peak hype when it was chosen, with nothing done according to best practices, because that would have required sticking with a tech and learning it properly. It's quite frustrating.
Then we have the interview process, where people expect me to give up my weekend for their coding test and can't even be bothered to give feedback afterwards. Or some ridiculous algorithmic nonsense that has no relevance to the job. Getting bored of it all.
If I'm ever the CEO of a company/startup, the one criterion I'd set is that either I decide on all the technologies we use, or there is no CTO, so I make those decisions.
And that technology criterion could be summed up in one sentence: use something boring. No hyped programming languages/DBs/tools allowed.
Of course, some would argue you could still be doing it wrong even while using old tech/languages/tools. Well, yes, but there you have a sea of resources and expertise to ask for help, instead of spending energy and time figuring it out alone.
Of course, if your company is all about tech innovation, AI, or something cutting-edge, then surely you will have to try something new. But 80% of those startups aren't.
Quoting Dan McKinley's "choose boring technology" [cbt]:
> Embrace Boredom.
> Let's say every company gets about three innovation tokens. You can spend these however you want, but the supply is fixed for a long while. You might get a few more after you achieve a certain level of stability and maturity, but the general tendency is to overestimate the contents of your wallet. Clearly this model is approximate, but I think it helps.
> If you choose to write your website in NodeJS, you just spent one of your innovation tokens. If you choose to use MongoDB, you just spent one of your innovation tokens. If you choose to use service discovery tech that's existed for a year or less, you just spent one of your innovation tokens. If you choose to write your own database, oh god, you're in trouble.
> Any of those choices might be sensible if you're a javascript consultancy, or a database company. But you're probably not. You're probably working for a company that is at least ostensibly rethinking global commerce or reinventing payments on the web or pursuing some other suitably epic mission. In that context, devoting any of your limited attention to innovating ssh is an excellent way to fail. Or at best, delay success.
I'm a fan of taking a 'one new technology' approach. When I'm building something new, I get to choose zero or one new technologies to play with, depending on whether I want to get shit done or learn something new.
By choosing at most one new thing, you can better control for how your stack should work and how you expect it to respond to certain unexpected circumstances, which means you should be able to more effectively solve issues as they crop up than you'd be able to if you were using multiple new technologies.
I agree with the main idea: working with hyped technologies is not a solution and you can build most of the things out there with boring technology.
But then ... you have to find, attract and hire good developers. That's already difficult; adding an extra layer of 'boring technology' will make this task even more challenging.
"In June 1970, E. F. Codd of IBM Research published a paper [1] defining the relational data model and
introducing the concept of data independence. Codd's thesis was that queries should be expressed in terms
of high-level, nonprocedural concepts that are independent of physical representation."
The key, the whole key, and nothing but the key so help me Codd.
Also said as... "In Codd we trust."
If none of these DB jokes mean anything to you, take a DB concepts class at a university. There's a lot of great research going back 50 years, and you can learn a great deal about why things are the way they are (relational algebra and tuple calculus). And before changing anything for something you think may be better, you should fully understand what you are giving up.
This talks a lot about 5.5 and mentions that 5.6 is “due out soon”. The current release series is 5.7. How much of this is outdated and how much has stayed the same?
"Some statements cannot be rolled back. In general, these include data definition language (DDL) statements, such as those that create or drop databases, those that create, drop, or alter tables or stored routines." [0]
I like the part "We used it because it was there. Please hire some fucking software developers and go back to writing elevator pitches and flirting with Y Combinator."
Well, you should school those fools at Google, Facebook, Twitter, Pinterest, Amazon... and tell them how they are wasting time with their toy MySQL databases.
> those fools at Google, Facebook, Twitter, Pinterest, Amazon
... have dedicated hundreds of engineers and millions of dollars to nothing more than keeping MySQL up, running, and not crapping the bed every time someone looks at it funny. If you can afford that resource expenditure, by all means go nuts with MySQL. Most companies can't and would be far better served by something which doesn't need that amount of handholding to serve its basic purpose.
There is one thing that bugs me about all the talk of Postgres' superiority: why haven't these companies switched to PostgreSQL? Surely they weren't all too far invested in MySQL before a "more knowledgeable" DBA came along saying PostgreSQL is better.
Following on from that, I suspect a lot of large companies use MySQL because they always have, not because it's actually any good. For example, Basecamp used MySQL while I was there, but I never met a single Sysadmin there who would use it over Postgres if they were to start a new project.
PHP was built for the web and has been successful at that job. It is easy to use because the core developers have made some good design choices for the task at hand. For example no threads, stateless requests, core functionality focused on outputting HTML, etc.
MySQL and PHP are good. They do the job they were designed for in a cost-effective way, and of course that means there will be trade-offs.
PHP is crap. It's actively hard to write good code in it. Not good code like SOLID or pretty code that's self-documenting - it's hard to write code that's not going to break in unique and interesting ways.
Sure, you can knock up a contact form in it really quickly, but that ease of use hides significant dangers.
This might have been true 10 years ago with versions like PHP 4. But remember that many companies, including Facebook, have invested a lot into PHP. With the newest version of PHP, what you said no longer applies: there's a type system, and OOP features like traits, class inheritance, etc.
> Surely they weren't all too far invested in mysql
I think by the time Postgres sorted itself out into a more user/admin-friendly system (which is still fairly recently, really), MySQL had pretty much conquered the "quick and easy" mindshare and was deeply embedded almost everywhere.
And if you've spent millions of dollars architecting your systems such that MySQL's flaws aren't killer issues, there's very little financial benefit to switching, I guess.
MySQL for years was far easier to install, configure and run than PostgreSQL, especially for features like replication, which was much better than the other options. Big companies use these databases more like simple key/value systems than complex relational schemas, so strong replication and operational simplicity were favored over the rich featureset of Postgres.
Eventually Postgres caught up in most things, and the delay was in some part because of implementing those features "correctly" and with more thought, but it's still a delay that hurt the uptake in the early days.
You are right. It is a solid DB. I was more pointing out that if you have to move off MySQL, there are excellent options other than adopting a new columnar datastore.
Can you demonstrate that Postgres is significantly faster than MySQL on average? I highly doubt it. The problem is in the way the data model was architected and implemented.
PG isn't usually faster than MySQL on a naive database (which is 90% of the databases in the world, I suppose).
Most E-Shops will be fine running MySQL or MariaDB.
The one thing PG excels at, however, is that you can tune it much more to your workload; it allows tuning for the workload much more finely than MySQL/MariaDB. That, and the ability to extend PG arbitrarily (try adding native functions to MySQL without recompiling) via the C FFI it offers. You can write and define your own index methods that let you use an index that is perfect for the workload, or you can add a new data type to support a new input with validation.
You can sink a lot of work into getting the most out of a PG database; MySQL, not so much. But again, for most people MySQL will provide the same (or even better) performance than PG. (I still trust PG over MySQL after MySQL nulled out all entries of a table with only NOT NULL columns after a nasty crash.)
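A small sketch of what that extensibility looks like in practice, using the stock pg_trgm extension (the films table is made up for illustration):

```sql
-- Load a bundled extension that adds trigram similarity operators
CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- Build a GIN index using the operator class the extension provides
CREATE INDEX films_title_trgm ON films USING gin (title gin_trgm_ops);

-- The planner can now serve fuzzy matches like this from the index
SELECT title FROM films WHERE title % 'godfather';
```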
Apples to oranges. You need to compare PostgreSQL with other databases that don't take shortcuts around ACID for performance (for example, DDL forcing an implicit commit in transactions).
Seriously? You don't need to do DDL operations in day-to-day use, but you'll need to when you're doing development work.
For example, say you had a "color" column on a table, and for a new feature you're now adding the ability to have multiple colors. You're going to create a new column, create a new table, populate that table, and drop the old column. If anything fails during that process, you'd like to be able to roll back.
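A sketch of that migration under transactional DDL (works in Postgres; the items/item_colors names are illustrative):

```sql
BEGIN;

CREATE TABLE item_colors (
  item_id INTEGER REFERENCES items(id),
  color   TEXT NOT NULL
);

-- Backfill from the old single-color column
INSERT INTO item_colors (item_id, color)
  SELECT id, color FROM items WHERE color IS NOT NULL;

ALTER TABLE items DROP COLUMN color;

COMMIT;  -- if anything above fails, the whole migration rolls back, DDL included
```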
There is a concept of a "forward-compatible change". Basically, you don't do things that will break your software.
For example, you don't add a NOT NULL column unless you can give it a good DEFAULT value to make it work.
Also, don't drop columns until the software is ready for it, etc.
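As a sketch, the NOT NULL case looks like this (table/column invented for illustration):

```sql
-- Old application code that doesn't know about "status" keeps working,
-- because the DEFAULT fills the column in on its behalf.
ALTER TABLE users ADD COLUMN status VARCHAR(16) NOT NULL DEFAULT 'active';
```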
If you have a decent ORM, it will compare your "how it needs to be" SQL schema with the "how it is" schema. Then it will generate appropriate "ALTER TABLE ....", "CREATE INDEX", etc. statements. Note that this is automated and you never need to type SQL statements to achieve that.
Altogether, in the last XXX years, I have not really needed to do a rollback on a DDL statement.
To be fair, we used to die at 40. Now you can get a CS degree at 21 or 22, and you still "have no idea what you are doing". Maybe it's just that technology is really complicated, the world is really complicated, and everything is changing fast.
I don't mean to be conformist, but it's easy to forget some things are actually hard when you are very clever, or old enough to forget what it was like when you were still learning too.
But at the same time it's easier than ever to find information on nearly everything. Asking experts also is easier than ever before. I don't think it's feasible to dismiss GP's claim with yours.
Sorry if it came across like that. Not trying to dismiss his claim, just trying to give a wider perspective. Similarly, in the same way you are right saying it's easier to find information on nearly everything nowadays, it might also be relevant to remember that we still have limited time and attention spans.
I barely ever see an engineer who knows everything that happens from top to bottom on any platform. Very few companies are forced to get to know their stack in depth; usually they throw more money at the problem.
> I've come to the conclusion that the problem in tech is that all the people doing the work are in their early twenties and have no idea what they are doing.
Not that I disagree, but to be fair, I've seen plenty of tech ignorance with experienced and older engineers as well that has been pretty crippling.
Agree. I'm not sure this is an age thing per se: the field is just so big and apparently (but perhaps less so in reality) in a constant state of tech churn that it is hard to anchor to consistent, proven techniques and practices.
This should be the top comment. You can tell from the article that the WSJ is trying really, really hard to push the 'intolerant liberals' angle. From what I can tell, the only person cited who really left due to politics was Thiel.
The question is how will power be distributed between the owners of the automated plants and the 'subsistence' class? Will there be a true meritocracy, or will the children of the plant owners get all the best jobs? Judging from the way third world economies work, I would not assume the best. This I think is the reason for fear and protectionism.
In the US, one party promises free education, free or reduced-cost health care to the majority of the population, etc., etc.
This party is still not in power. I don't think it's a foregone conclusion that people vote themselves bread and circuses, which is I think your primary point.
On another front, I find it hard to believe that UBI, if it ever comes to the US, will be set anywhere above the poverty level. In that context, anyone who voted to take money from people in poverty and use it to build roads would probably be laughed out of office, and rightly so.
The Democratic party does not credibly propose any of those things. Hillary Clinton explicitly rebuked Bernie Sanders' campaign promises with regards to health and education.
We have not yet seen a candidate running on a socialist platform coupled with even mediocre or passable media support. Every time a socialist-leaning candidate pops up, moneyed interests work really, really hard against them. However, it's getting better and better (see: Corbyn in the UK).
It'll come. People will vote themselves, not bread and circuses, but health and education. And democracy and markets, as opposing decision-making mechanisms, will clash.
> I don't think its a foregone conclusion that people vote themselves bread and circuses, which is I think your primary point.
Or just populism in general - and I think "fear" is a stronger voter-pull than "want": fear of Mexican immigrants committing crimes, taking your job, fear of increased healthcare costs, etc.
As the majority of Americans do have health coverage, it's a given that if 51% of the population want cheaper coverage - even if it means the other 49% will lose coverage - then that's going to happen.
Is it really that simplistic? I would guess many wouldn't vote for cheaper coverage for them if it means that their parents/children/close friends lose coverage.
(Which is why politicians are so busy denying that anyone will lose coverage under their plans.)
Given the history of trying to build public support for moving UBI from $0 to >$0, probably the former, even if you switch the non-UBI factors, after wealthy interests who aren't on the benefitting side of redistribution get their propaganda in.
“People will always vote for the candidate that promises to increase public benefits they receive” is an attractive myth that doesn't actually play out in practice; in the US, it doesn't even play out in practice in primary elections within the major party most favorably inclined to public benefit programs.
" wealthy interests who aren't on the benefitting side of redistribution"
I don't think anybody wants to live in a society where 99% of population are suffering from hunger and lack of basic necessities. There will be no safety in such a society.
But there is always the possibility of robot bodyguards!!!
I would assume that's because most people right now picture UBI as benefitting only lazy people etc. Once there is a UBI, they'll realize that they're also getting paid, and they'll be more likely to vote against people who want to cut it.
> Once there is a UBI, they'll realize that they're also getting paid
As with existing benefit programs, people will also realize that they are paying. And, as with benefit programs now, even people who are likely to benefit far more from the “being paid” part more than they lose to the “paying” part will often be prone to, and be encouraged by slick propaganda from moneyed interests to, identify with and vote for the interests of those who are on the other side of that equation instead of their own.
I think it would be important to tie UBI to key economic indicators and put any direct adjustments under the control of economists. We will need to limit politicians' control of the purse strings.
> I think it would be important to tie UBI to key economic indicators
Better, just tie it to a defined relationship to a particular revenue stream; if it's capital-income-derived revenue, it's a good theoretical tie to the purpose of compensating for the displacement of labor in industry, and it's less subject to distortion than most economic indicators.
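To make that concrete, here's one possible formalization of "a defined relationship to a particular revenue stream"; every symbol below is my own illustrative assumption, not something from the comment above:

```latex
% B_t : per-person UBI payment in period t
% R_t : revenue from capital-income taxation in period t
% N_t : eligible population
% \alpha : fixed statutory share of that revenue stream
B_t = \frac{\alpha \, R_t}{N_t}
```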
Then wouldn't you end up with a class of economists who are in a revolving door situation with financial institutions (who stand to benefit from networking with those controlling the purse strings), the same way we now have politicians converting to lobbyists (who benefit from networking with those controlling the political decision making)?
> The bad guys in David Weber's Honor Harrington novels are based on this idea.
Pedantically, the obvious “bad guys” in the earliest few, not the (so far, at least) principal bad guys of the whole series, who it turns out were also manipulating those obvious, early ones.
> It's an interesting take on what might happen.
It's kind of a shallow throwaway regurgitation of a standard argument without deep exploration or novel insight (which is okay, because it's not like Havenite society under the People's Republic is all that central a focus of any of the books, so shallow cartoonish broad strokes are fine.)
The bad guys in Baen published novels are just whoever the author doesn't like, usually anyone who isn't a Republican.
Though David Weber has his own pro-monarchy pro-libertarian ironically-not-racist thing going on, he just assigns random evil acts to random characters coincidentally named after his opponents, like having the Progressive Party actually be into human trafficking.
(When SF authors get old they all develop terrible opinions a while before anyone notices. Like Larry Niven wrote a book about the Green Party causing an ice age by stopping climate change and Arthur C Clarke became a pedophile. Heinlein, of course, started out that way.)
People don't necessarily want handouts; they perhaps just want to live in an environment where they can sustain themselves through their own effort. Perhaps this is part of the reason HRC lost in November.
Centrists like HRC don't believe in giving you things for free, they want to watch you jump through hoops made of paperwork first. It's the TurboTax lobby at work.
What sort of corruption is possible when everything is done by machines? The greatest argument for privatized enterprise is efficiency, but how does that argument hold up with factories that are massively automated? Why are owners of such an enterprise needed at all?
If everything is automated, then such companies need no owner, as the point of ownership is to make sure the work gets done properly. It will be done properly no matter what the owners do. Why are they entitled to anything more than the average person?
I would think in the future you'd have shareholders - people who own the machines - that employ a few workers who are managed by machines. And those shareholders could be just about anybody, or even governments... who take the profit or revenue the enterprise generates and distribute it to the people.
I am talking about something like this sort of business, which does not change or grow much. It's a commodity. It can just be produced by pretty much anyone.
If we reach that point I would prefer to see people vote for government to construct new automated production facilities and provide their benefits to the people. Rather like how I think that voters should be able to vote for a government owned power plant, sewage treatment plant, library, broadband provider, etc. but I don't think they should be able to vote to seize the nearest privately owned book store for conversion to a public library.
If you build public facilities in parallel there's no unjust seizure of existing assets. (Some people will complain that it's unfair to allow the government to do anything at all that could reduce the profitability of existing private assets, but I don't think that those people make up a prohibitive share of the population.)
With the amount of unused capital that is available in the system these days, anyone should be able to buy machines and build their own factory. It's just that those assets would not be growth assets. In fact, if anything, I would think investors would not be interested in investing in that sort of enterprise, as the returns are nominal - so it makes sense for governments to run these sorts of things like utilities.
Wouldn't such a society be predicated on the idea that everyone has a basic income - and these fully automated factories with no (state?) ownership would be relegated to producing the basic consumables that come along with a society that provides all the basics for survival and hygiene and, ideally, health (mental and physical)?
As such, a society would find value in the skills of the populace which produce things that have a "human" value for having been created by a human.
Further, it would seem that the overall population would drop significantly. Especially as automation technology iterates, machines will care for basic human needs, and will handle maintenance and production of other machines to keep the system going.
Will AI manage the overall resource supply chain?
It would be interesting to see a critically thought-out matrix of all the roles which could be done by machine/automation/AI vs. those which must require a human.
What about "soft" skills required to run a civilization; politics and law for example.
While politics is fundamentally required to ensure the stability of an economy and society such that humans can survive in an ordered world, it is clearly exploitable, and shouldn't we be attempting to remove as much human cruft from that process as possible - while ensuring that human empathy remains? Machines cannot have empathy. (At what point do we trust "programmed empathy" in AI?)
---
While there are all these efforts in ML to get machines to "see", say, cats in a picture, are there any efforts to teach an AI to discern emotion in any given scenario?
Then ultimately, an AI will use all this to interpret intent...