
AMP is not about fast websites, a better experience for users or publishers, or a trusted platform.

It's about Google hosting your site and capturing data from it. It's a shitty idea and I can't understand why people use it.

Yes, make your website fast, please. There are plenty of guidelines and tests for that, but you don't need AMP for it.


This was beautiful:

>Every step in a software project is solving problems. You don't know how long it will take to solve the problem. You don't know how many new problems you might run into along the way.


I like the idea of a tool like this, but I'm not sure it's a better process than having migration scripts. And I dislike migration scripts.

To me, the perfect process would be, using an MVC web framework as an example, to generate a temp schema from the models and run a schema diff tool like this one to update the destination database. That would eliminate migration scripts, but would probably slow down the db sync process once your db starts getting bigger.
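
For what it's worth, Alembic's autogenerate implements roughly this idea for SQLAlchemy models: hand it a live connection plus your in-code metadata and it emits the diff. A minimal sketch (the users table and connection URL are made-up examples):

    # Sketch: diff a live DB schema against in-code models (SQLAlchemy + Alembic).
    from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String
    from alembic.migration import MigrationContext
    from alembic.autogenerate import compare_metadata

    metadata = MetaData()
    Table("users", metadata,
          Column("id", Integer, primary_key=True),
          Column("email", String(255), nullable=False))

    engine = create_engine("postgresql://localhost/mydb")
    with engine.connect() as conn:
        ctx = MigrationContext.configure(conn)
        # Each diff entry describes a change needed to make the DB match the
        # models, e.g. ('add_table', ...) or ('add_column', ...).
        for diff in compare_metadata(ctx, metadata):
            print(diff)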

Now, if you're not using an ORM, I think it's a bad idea to use a tool like this to update a production db from development. It works for a single dev, but not for a team project, since you'd need a common dev schema as the source for the diff, and that would get clogged quickly.

It's still a nice tool to find and update small inconsistencies between different environments. Nice work!


I’ve been using a couple of Python tools wrapped in two shell scripts to achieve what I think works well:

migration-capture.sh (using https://github.com/mmatuson/SchemaSync) creates a temporary db using a base schema SQL file, applies previously created migrations, and then creates up/down migration scripts by comparing that to a db that’s been modified manually or by an MVC model tool, etc.

migration-apply.sh (using https://github.com/gabfl/dbschema) applies the “up” scripts that haven’t been applied previously OR runs the “down” scripts corresponding to “up” scripts that have been run previously but are not found in the deployed scripts (i.e., rolling back a deployment automatically runs the down scripts for migrations that were in the rolled-back version).
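
The core of that apply/rollback logic is small enough to sketch. This is just an illustration of the idea, not how dbschema actually implements it; the tracking table and directory layout are made up:

    # Rough sketch: apply pending "up" scripts, roll back applied migrations
    # whose files are no longer deployed. Not dbschema's real code.
    import os, sqlite3

    conn = sqlite3.connect("app.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS applied_migrations
                    (name TEXT PRIMARY KEY, down_sql TEXT)""")

    deployed = sorted(f[:-7] for f in os.listdir("migrations")
                      if f.endswith("_up.sql"))
    applied = [r[0] for r in conn.execute(
        "SELECT name FROM applied_migrations ORDER BY name")]

    # Roll back (newest first) migrations applied earlier but absent from this
    # deploy, using the down SQL captured when they were applied.
    for name in reversed([m for m in applied if m not in deployed]):
        (down_sql,) = conn.execute(
            "SELECT down_sql FROM applied_migrations WHERE name = ?",
            (name,)).fetchone()
        conn.executescript(down_sql)
        conn.execute("DELETE FROM applied_migrations WHERE name = ?", (name,))

    # Apply new migrations oldest first, remembering their down scripts.
    for name in [m for m in deployed if m not in applied]:
        conn.executescript(open(f"migrations/{name}_up.sql").read())
        down = open(f"migrations/{name}_down.sql").read()
        conn.execute("INSERT INTO applied_migrations VALUES (?, ?)", (name, down))
    conn.commit()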

Right now my only concern/goal for improvement with the setup is to replace the Python tool the apply script relies on with a pure shell solution, so it’s one less dependency to install on production machines.

Edit: spelling typo

Edit2: Wtf is a bass schema.


I'm working with Postgres using Marten, and it lets you do exactly this. I have a console app that builds a model in memory, compares it to a database, and generates a migration script - it works really well.

I recently tried to replicate this with SQL Server and Entity Framework Core, but trying to reverse engineer a database to a model seems impossible. I can get close, but there are always some things it gets wrong that need manual fixing... at which point I may as well write the whole damn script by hand anyway. *sigh* I wish the EF team would just add real support for reverse engineering databases.


Like it or not, that reinforces the beauty of the GPL. I'm positive that, if it weren't for it, Linux wouldn't be in its position today. I actually prefer the LGPL, as it doesn't spread through, or "contaminate", the rest of your code base.


I've asked myself that many times before, and I'd say it's because it's not an easy task and you often need to get things done to meet deadlines instead of developing a general solution that, although it could save time in the future, might not help much in the short run.

Having said that, the Django admin is the best example of that. I'm a huge fan of it and tend to use it on every project that fits a CRUD workflow, mainly business apps.
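
For anyone who hasn't used it: registering a model is all it takes to get list, search, and create/edit/delete screens. A tiny sketch (the Invoice model is a made-up example):

    # admin.py -- registering a model gives you list, filter, search and
    # create/edit/delete views for free. Invoice is a made-up example model.
    from django.contrib import admin
    from .models import Invoice

    @admin.register(Invoice)
    class InvoiceAdmin(admin.ModelAdmin):
        list_display = ("number", "customer", "total", "issued_at")
        list_filter = ("issued_at",)
        search_fields = ("number", "customer__name")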

To give an idea of how much work goes into building something like that, it took five devs a year to build its first version, not to mention adding new features and improving it.


Django admin sounds like a very useful tool.


Why use GraphQL? Why not simply expose one endpoint that takes a SQL query as a parameter, queries the db, and returns the result set as JSON?

Yes, you'd have to set up db security to allow those db commands to run only selects and to restrict which tables are allowed to be queried.
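
A minimal sketch of that endpoint, assuming Flask and psycopg2, with a read-only role doing the enforcement at the database layer (the role name and table grants are assumptions):

    # Sketch: one endpoint that runs client-supplied SELECTs under a
    # read-only role. Flask/psycopg2 and the "readonly" role are assumptions.
    from flask import Flask, request, jsonify
    import psycopg2
    import psycopg2.extras

    app = Flask(__name__)

    @app.route("/query", methods=["POST"])
    def query():
        sql = request.get_json()["sql"]
        # The "readonly" role is GRANTed SELECT on whitelisted tables only,
        # so anything else fails at the database layer.
        conn = psycopg2.connect("dbname=app user=readonly")
        try:
            with conn.cursor(cursor_factory=psycopg2.extras.RealDictCursor) as cur:
                cur.execute(sql)
                return jsonify(cur.fetchall())
        finally:
            conn.close()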

How can using a new JSON query language, without any server libraries targeting relational databases, be better than that?

How much work would be involved in implementing this on the server side, and for what, exactly? To allow frontend devs to query without exposing separate endpoints?

Client-server applications should not run business code on the client. The only code that should run on the client is the UI's.

Sorry, this is simply wrong.


What happens when your backend isn't SQL? Then you'll need to accept a different query language as well. Wait - maybe those could be abstracted into a single query language? Ah, it's GraphQL!

To your other points:

* no matter what you do to expose an API you have to set up security

* there are already server-side solutions that help enormously with implementing your own endpoint (see the sketch after this list)

* I don't see the logic behind your client-server opinion. You seem to say that the server should be defining the data that the client needs.
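
To illustrate the second bullet, here's about the smallest possible GraphQL schema, using the graphene library (the User type and in-memory data are made-up stand-ins for a real data source):

    # Minimal GraphQL schema with graphene; USERS is a made-up stand-in
    # for a real data source.
    import graphene

    USERS = {1: {"name": "Ada", "email": "ada@example.com"}}

    class User(graphene.ObjectType):
        name = graphene.String()
        email = graphene.String()

    class Query(graphene.ObjectType):
        user = graphene.Field(User, id=graphene.Int(required=True))

        def resolve_user(self, info, id):
            return USERS.get(id)

    schema = graphene.Schema(query=Query)
    result = schema.execute('{ user(id: 1) { name email } }')
    print(result.data)  # {'user': {'name': 'Ada', 'email': 'ada@example.com'}}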


Regarding the client-server point, yes, that's what I'm saying. And it's not a matter of opinion, but of decades of computer engineering best practices.

Your app's UX will depend on how much code runs on the client. It can vary from a dumb terminal, like classic web applications, where everything runs on the server, to smart or rich clients, where some code does run on the client but is always related to the UI. In a client-server application, your app's logic should always run on the server.

I've worked on desktop client-server apps where the application logic was on the client and the server was mainly the database. It works, but you might end up with clients of different versions trying to access the DB, and you don't want that.

That's what server validation means: you should never trust clients. You shouldn't build a web app, which is a client-server app, entirely on the client. It's like best practices in reverse. It will bite you.
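
To make the validation point concrete, here's a toy sketch of the rule "whatever the client checks, the server re-checks" (Flask and the specific rules are assumptions):

    # The client may validate the form, but the server enforces the rules
    # anyway. Flask and the discount rule are illustrative assumptions.
    from flask import Flask, request, abort, jsonify

    app = Flask(__name__)

    @app.route("/orders", methods=["POST"])
    def create_order():
        order = request.get_json()
        # Never trust client-computed values: re-check everything here.
        if not isinstance(order.get("quantity"), int) or order["quantity"] < 1:
            abort(400, "quantity must be a positive integer")
        if order.get("discount", 0) > 0.2:
            abort(400, "discount exceeds the allowed maximum")
        return jsonify(status="accepted"), 201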

Regarding having something other than SQL on the server, how much effort would be necessary to build a GraphQL server library that abstracts whatever you have there? Would you build something and release it to your clients at that level of maturity? And for what?


Hype and lack of critical thinking absolutely had their role, but mostly it's because it's so much easier than a relational db.

Relational dbs are hard. You have schemas and query planners, and the truth is, you have to take care of every single query. Every query counts.

The same query can take 30 seconds or 30 milliseconds to run, depending on whether it uses an index. You think it's ok for it to take 2 or 3 seconds, until it hits production and halts the server.
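
You can watch the planner make that choice. A quick sketch using SQLite's EXPLAIN QUERY PLAN (table and column names are made up):

    # Same query, two plans: full scan vs. index search (SQLite syntax).
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT)")

    plan = conn.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42").fetchall()
    print(plan)  # e.g. 'SCAN orders' -- reads every row

    conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
    plan = conn.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42").fetchall()
    print(plan)  # e.g. 'SEARCH orders USING INDEX idx_orders_customer'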

Mongo doesn't have any of that. Just load and dump JSON back and forth, which is perfect for fast prototyping.

Who cares about data integrity or technical debt?


It has most of that. It's not that RDBMSs are hard and schemaless solutions are easier.

Mongo has query planners, and the same thinking you do in SQL applies to Mongo; the edge cases are different. For example, you can't have composite indices on multiple arrays. The same principles we consider when tuning our RDB apply to Mongo; normalisation comes in different forms and levels, and the same applies to denormalisation. Just because there are/were "no joins" doesn't mean that everything goes into a single collection.
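
In code the discipline looks almost identical to the SQL side. A small PyMongo sketch (collection and field names are made up):

    # Mongo has a planner too: create an index, then inspect the chosen plan.
    # Collection and field names are made-up examples.
    from pymongo import MongoClient, ASCENDING

    db = MongoClient()["shop"]
    db.orders.create_index([("customer_id", ASCENDING), ("created_at", ASCENDING)])

    plan = db.orders.find({"customer_id": 42}).explain()
    # "IXSCAN" means the index is used; "COLLSCAN" would mean a full scan.
    print(plan["queryPlanner"]["winningPlan"])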

I run some Mongo deployments, and I take as much care in understanding performance as I do with our SQL deployments. Databases are hard for those who are unwilling to open up the manual and read. It's not only SQL.


GitLab's CI runs on every branch push, which I dislike, since you can push a branch that isn't ready yet just for backup purposes. It can be configured, though, to automatically deploy only to staging while production deployment is manually triggered, and it can also run only on specific branches, like staging and master.

I was after a way to integrate deployment with task status updates, so that once we updated a task's status to Staging, for example, it would trigger a job to merge that task's feature branch into staging and deploy it. Once it got tested and approved, we would update the task status to Approved, which would trigger a master merge and a push to production. To me, this would be the perfect solution.

Unfortunately, that's not how it works. It might be achievable through GitLab's web hooks and API, but I don't think it has a way to add a custom field to a task to store the related git branches, and it only has an open/close status. We could use labels instead and parse the task description to extract the branch info.
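
A sketch of what the webhook-plus-API route could look like: a tiny receiver that reacts to a "Staging" label and calls GitLab's merge-request merge endpoint. The label name, project ID, token, and the way the MR is located are all assumptions:

    # Sketch: react to a GitLab issue webhook and merge the linked MR when
    # the "Staging" label is applied. Names and token are assumptions.
    import requests
    from flask import Flask, request

    app = Flask(__name__)
    GITLAB = "https://gitlab.example.com/api/v4"
    HEADERS = {"PRIVATE-TOKEN": "your-api-token"}

    @app.route("/webhook", methods=["POST"])
    def webhook():
        event = request.get_json()
        labels = [l["title"] for l in event.get("labels", [])]
        if event.get("object_kind") == "issue" and "Staging" in labels:
            project = event["project"]["id"]
            # Assumes the MR IID is recoverable from the issue (e.g. parsed
            # from its description); hardcoded here for illustration.
            mr_iid = 123
            requests.put(
                f"{GITLAB}/projects/{project}/merge_requests/{mr_iid}/merge",
                headers=HEADERS)
        return "", 204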

What I've done in the past, and it worked great, is use git hooks for deployment. That handles deployment's heavy lifting, but I'd still like to automate branch merging by linking it to task status.

Has anyone done that? Is it a good idea? Are there any caveats?

Any tips or ideas would be highly appreciated. Thanks!


>GitLab's CI runs on every branch push, which I dislike, since you can push a branch that isn't ready yet just for backup purposes.

What's the downside of deploying a review app for something that's "not ready"? It's (presumably) just deploying to a development-only environment of some kind anyway. I'd also argue that you should be pushing almost as regularly as you commit. It increases the opportunity for discussion of what you are working on and it's also useful to know "does this still successfully deploy?"

Here's an excerpt of a .gitlab-ci.yml file from one of my projects: https://gist.github.com/kelchm/fc8ca06dd3e44a42fa640ed7a8e84...

Basically, the functionality is as follows:

1. Any branch other than 'master' automatically deploys to a development environment (AKA a 'Review App'). Multiple review apps can exist in the development environment at once.

2. Merges into master are automatically deployed to staging.

3. Tags which match the form '/^v.*$/' are considered production releases and result in a manual deploy job being created.

It's a really simple and powerful workflow.


It's been a while since I've been on GitLab, but you used to be able to add `[ci skip]` to a commit message and CI wouldn't run. It does litter your git history with CI information, though.


I love it!

I owned an Atrix with a Lapdock, which was the first device to embrace this concept. Unfortunately, it was poorly handled by Motorola, in the sense that you had to hack it to enable a full Linux desktop instead of what was basically a Chromebook.

What really made me give up on it was that it never got updated, so you were stuck with Android 2. It was pretty useful even as a workstation and saved me when I used it exclusively for a couple of weeks while my laptop was being repaired.

I was already thinking about trying something like that again by getting a phone with HDMI output, and I'll definitely take a shot at this! Good luck to them!


I too had the Atrix, the Lapdock and the multimedia dock. It was a good device, just hampered by the lack of RAM, the dual-core processor (which was a bit sluggish), and the crippled Linux desktop; I too hacked around with it to be able to install all manner of useful tools (compilers, editors, office packages), and it was useful. The Lapdock had a good screen and was conveniently, incredibly thin. I did miss a backlit keyboard though.

In any case it was years ahead of its time.

Sold all the components individually though - I think someone used the Lapdock with a Raspberry Pi in the end.


I still have my Atrix. I remember hacking it to run a full Linux desktop via the dock instead of Motorola's official limited selection. Motorola was truly ahead of their time... it is unfortunate for them that smartphones weren't quite fast enough back then to run the desktop Firefox browser, though.


I was one of the other Atrix people. I never got the full dock because I didn't think they'd ever continue with the concept. A shame they didn't; having more than one port on a phone might be nice.


Woah I remember the Atrix! Man I had totally forgotten about that.


Or you could do the same thing with a USB-C phone. USB-C was meant to act as a single-cord solution (charging, video, peripherals) for phones and laptops already.


Build tests against your app's public interface. On a web app, that would be your controllers or API.

That will give you good coverage while avoiding unit tests that are too simple to be useful.
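
In Django terms, that kind of test looks like this (the endpoint and payload are made-up examples):

    # Test against the public interface (an API endpoint), not internal
    # helpers. The /api/orders/ endpoint and payload are made-up examples.
    from django.test import TestCase

    class OrderApiTests(TestCase):
        def test_rejects_negative_quantity(self):
            resp = self.client.post("/api/orders/",
                                    {"sku": "ABC-1", "quantity": -1})
            self.assertEqual(resp.status_code, 400)

        def test_creates_order(self):
            resp = self.client.post("/api/orders/",
                                    {"sku": "ABC-1", "quantity": 2})
            self.assertEqual(resp.status_code, 201)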

It's really hard to foresee all possible input variations and business logic validations, but that doesn't mean your test suite is useless.

It just means it will grow every time you find a new bug, and you're guaranteed that one won't happen again...


On a small app that is a good idea. However, on a massive app you need to break things down more. I work on a project with more than 10 million lines of code; there is no way any human can understand everything in detail. As a result, I need to test at a smaller scale, which means not my public interfaces, but my interfaces to other people's code.

