
Abstractions need proper names and meaning. In this example the "abstracted" version would be a lot simpler if the functions were named 'doB()', 'doC()' etc., and it is still hard to make sense of, because A, B, C, D, E as an example doesn't really carry any obvious meaning, just an order. But if the main function describes a process that must happen in a specific order, then by all means keep the code in a single function if that makes it clearer.


Agreed. I personally find it very difficult when abstractions are named after the design pattern rather than the purpose. E.g.:

  class B
  import com.acme.b.BRepository
  class BImpl extends B
  BFactory.getInstance(STANDARD_B)
  b.process(); // this is all we want, doB()


Monadic error handling is quite nice for domain-specific errors, but for exceptional situations which are not supposed to happen, you still need some sort of exception system. And even with domain error conditions, exceptions have a nice property of saving a stack trace, which can make error hunting a bit simpler.
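Roughly, in TypeScript terms (a minimal sketch; the Result type and the function names are made up for illustration, not from any particular library):

  type Result<T, E> = { ok: true; value: T } | { ok: false; error: E };

  // Domain error: expected in normal flow, so the caller handles it explicitly.
  function parseAge(input: string): Result<number, string> {
    const n = Number(input);
    if (!Number.isInteger(n) || n < 0) {
      return { ok: false, error: `invalid age: ${input}` };
    }
    return { ok: true, value: n };
  }

  // Exceptional situation: a broken invariant that should never happen;
  // throwing keeps the stack trace for the error hunt.
  function requirePositive(n: number): number {
    if (n <= 0) throw new Error(`expected a positive number, got ${n}`);
    return n;
  }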


Exceptional situations as in crashes, like writing to 0x0 address?

Otherwise no, there is really no good reason for having some orthogonal value-returning system that can jump up the stack until it's caught (if ever).

Many situations that are often considered exceptional are really not: cannot connect to server, no such file or directory, cannot bind to port...


Any situation that prevents functionality from working and should not happen in normal flow is exceptional.

Including such connection failures or file open failures. They may have to be handled, but not at a cost to the hot path.

You cannot typically just "eat" such an error with default behaviour and expect whatever relied on it to work properly.


In Finland we call it "kissanhäntä" (cat's tail) or "miukumauku". "Miu" and "mau" are actually cat's meows, so "miukumauku" is kind of "a meow, a meow" expressed with two different words.

Nowadays it's usually called "ät-merkki", as in "at sign", "ät" being how "at" is spelled in Finnish.


Unity is the most popular engine at the moment. Saying that it is the best is going a bit far. I actually think that Unity is one of the best 2D engines available. For 3D the tools are really broken for production use, and Unity is quite unresponsive about fixing them. And the renderer quality is a bit of a far cry from CryEngine or Unreal Engine, but is getting there bit by bit. Except that those two are also progressing, mostly by doing actual new research in the field.

Still, new 2D engines are welcome, as with Unity you are still working against the whole 3D package even in a 2D game.


This is a great example of a modern-looking website that actually performs the way one would expect given today's computers and browser optimizations. Too often it seems more important for developers to use the coolest new technology, even when it actually hurts the end users.

React is all the hype right now. And it is a nice system for creating dynamic single-page apps. But there is little reason to use it for static websites, which could be cached easily, instead of re-rendering the site on the client with JavaScript. And instead of having an automatically cached static website, we use local storage and the like to get something resembling a cache.

I would love to see some mechanical sympathy in modern web development, instead of cargo culting all the new tech.


You can use React to render static websites. You get the nice component model and the speed of static content. You can take a look at the react native[1][2] website for an example.

[1] http://facebook.github.io/react-native/

[2] https://github.com/facebook/react-native/blob/master/website...


That's a pretty cool way to structure websites: https://github.com/facebook/react-native/blob/master/website...

The chain of build steps is quite complex though... but hey, if it works...


We've got our entire marketing site statically rendered. Looking forward to React 15; it should get rid of lots of "data-reactid"s, which should shave a bit off the size of the rendered HTML.


You can actually do that at the moment with ReactDOMServer.renderToStaticMarkup(), which doesn't create any react-ids (but can't be used later by React on the client)
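For reference, a minimal sketch of that (using React.createElement directly so there's no JSX build step; the page component itself is made up):

  import * as React from 'react';
  import * as ReactDOMServer from 'react-dom/server';

  // A plain component rendered once, at build time or on the server.
  const Page = () =>
    React.createElement('html', null,
      React.createElement('body', null,
        React.createElement('h1', null, 'Hello, static world')));

  // Plain HTML with no data-reactid attributes; React can't re-attach to it later on the client.
  const html = ReactDOMServer.renderToStaticMarkup(React.createElement(Page));
  console.log('<!DOCTYPE html>' + html);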


There's also such a thing as cargo cult criticism... when dismissing stuff as "hypes" and "trends" without really grasping the reasons behind the enthusiasm... for example, how one of the big advantages of React is that it can render on the server in a very convenient way.


True. I shouldn't have been that dismissive about React. My comment was really related to the explosion in popularity of client side rendered static websites. And React happens to be one of the more popular ways to implement them.


Well in this case the coolest new technology seems to be AMP, the use case for which is not so plain to anyone who knows how to make a modern-looking website that actually performs the way one would expect given today's computers and browser optimizations.


Mechanical sympathy implies that you tailor for very specific use cases.

For example, my blog: most visitors read a single article and leave. Optimization: serve the page as one big blob if possible. Inline the CSS. Avoid images and widgets. Social buttons are implemented as CSS-styled links without the JavaScript-iframe-img stuff.
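A rough sketch of what that looks like, assuming a plain Node server (the markup and the cache header are just examples):

  import * as http from 'http';

  // One self-contained blob: markup with inlined CSS, and a plain link
  // instead of a JavaScript/iframe share widget.
  const page = '<!DOCTYPE html><html><head><style>' +
    'body{font:18px/1.5 serif;max-width:40em;margin:2em auto}' +
    '</style></head><body><article><h1>Post title</h1>' +
    '<p>Article text goes here.</p></article>' +
    '<a href="https://twitter.com/intent/tweet">Share</a></body></html>';

  http.createServer((req, res) => {
    res.writeHead(200, { 'Content-Type': 'text/html', 'Cache-Control': 'public, max-age=3600' });
    res.end(page);
  }).listen(8080);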


> React is all the hype right now.

React is old school, the cool kids use VueJS nowadays.


There are women in the calendar.


Node.js is in an interesting position. There are a lot of libraries, and new ones are coming out at breakneck speed. It's also a way for frontend developers to transition to backend tasks. And it gets a lot of mindshare at the moment, along with MongoDB and microservices.

But in my opinion a lot of the Node.js ecosystem is mismarketed. Many developers doing backend services with Node.js actually think it's the fastest thing available, even though multiple benchmarks, e.g. TechEmpower, show that it really isn't. And even more people seem to think it is a way to do simple parallelism, so they won't have to understand threads and locking, which are genuinely complicated. But as many have said here, Node.js does not support threads, or parallelism without running multiple separate processes. Which can be fine if you don't have any shared state between your processes.

And with no in-process parallelism, it is quite easy to block the event loop by running anything that is CPU bound rather than IO bound. This can be a loop that runs too long, too much math, or even parsing a large JSON string without using streams. All of these block the event loop, which means no requests go through that process while one request is parsing JSON.
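A small sketch of that failure mode (the endpoint and the loop size here are made up):

  import * as http from 'http';

  http.createServer((req, res) => {
    if (req.url === '/report') {
      // CPU-bound work on the event loop: while this loop runs, *no* other
      // request handled by this process makes progress, not even a health check.
      let total = 0;
      for (let i = 0; i < 1e9; i++) total += i;
      res.end(String(total));
    } else {
      res.end('ok'); // fast path, but stuck waiting behind the loop above
    }
  }).listen(3000);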

And even though there are a lot of libraries and frameworks for it, the quality is often really, really bad. As in "invalid MD5 algorithm" bad. But there are also some gems, such as Bluebird for promises, which makes callback hell easier to handle.

You will also face immature debugging, profiling and static analysis support. You barely get any refactoring help from your tools, although IntelliJ IDEA does quite a good job with basic refactoring and debugging. And you will have to spend time on odd bugs where nothing shows up in the logs on a crash, or on stuck processes when something has gone really wrong in the code, with no way of knowing (unless you have DTrace) where the code is stuck.

But there is stuff Node.js seems to excel at. It is really quick for creating a simple REST service, and the feedback loop is fast as the services restart almost immediately (at least when you don't use all the latest ES6 transpilers). And if you want to create isomorphic applications, where the server can render a JavaScript site on behalf of the browser for the first request, or even successive requests for mobile use, there is no better platform than Node.js. And if you know that you will only do IO-bound work, nothing CPU bound, you can still use any library available, whereas in Java or Python, for example, you would have to find specific libraries that support your chosen async IO framework.

I would use Node.js between the browser and a backend server written in a more robust ecosystem such as the JVM: Node.js gets the data from the backend and does its magic with isomorphic React for the client.


There is another side of extra resource use that I don't really see addressed except in the mobile space: ecology.

Even though my computer can run all applications without a hitch, it is still very wasteful to constantly burn CPU power because of technology choices or plain laziness. As an example, Spotify and Slack are the two applications that seem to use the most CPU after Chrome. Combined they seem to hover around 5-15% of total CPU (on a two-year-old i7). When there is a lot of traffic in Slack I have seen it using 15-20% by itself, with multiple processes running and memory use going above 200 MB.

Both applications work smoothly, but should they really use that many resources? A chat application? A music player? With modern CPUs I would expect them to be at the bottom of the process list when sorted by CPU usage. I used IRC on my Pentium 75 MHz and it ran fine. When simple applications are made so poorly that they use this many resources, what is the worldwide impact of that power use? And what about the users that don't have powerful and expensive CPUs?


It's not that they have poor algorithms or whatever, it's that they're part native, to handle the desktop interaction, and the rest is a bunch of HTML/CSS/JavaScript running in an embedded browser. At least that's how Spotify works.

The problem is that our tools for making multiplatform native GUIs suck so badly that we'd rather just embed an entire web browser into everything.


There are some okay-ish cross-platform frameworks, such as Qt and even JavaFX. One of the main complaints about cross-platform GUIs has been that they don't work like native applications. But for some reason nobody cares when the app works like a single-page web app, which in many cases is a lot worse than even a plain old Swing app, which at least supports right-click properly.

I think the main reason node-webkit and whatnot are popular is that web developers are moving into native app development. It's really easy to get started that way, and you can even share code with your web app, whereas something like Qt has a really huge learning curve for programmers transitioning from JavaScript.

About poor algorithms: I actually worked on optimizing a well-known web browser for a couple of years, and most of the stuff we did was there to compensate for really bad JavaScript code. Even though it seems gluttonous to embed a web browser in applications, and even insecure, it doesn't have to be as bad as it is, especially with a simple application like Spotify. This is going on a bit of a tangent, but every frontend programmer should at least learn how the browser actually works; a nice site for that is http://jankfree.org/


> There are some okay-ish cross-platform frameworks, such as Qt and even JavaFX. One of the main complaints about cross-platform GUIs has been that they don't work like native applications. But for some reason nobody cares when the app works like a single-page web app, which in many cases is a lot worse than even a plain old Swing app, which at least supports right-click properly.

There's more to non-nativeness than just look and feel, which the more mature cross-platform GUI toolkits can ape fairly well. Another concern is accessibility for people with disabilities, e.g. blind people using screen readers. Qt, for example, is kind of accessible on Windows, Linux, and Mac, but not at all on mobile platforms. Not sure about the status of JavaFX. At least a single-page rich web app can be made accessible using the ARIA extensions to HTML, and if you use one of the big four web rendering engines, you can be sure they've done the hard work of implementing the underlying OS accessibility APIs well. Of course, many (most?) web developers don't implement ARIA for their custom UIs.
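For the unfamiliar, a tiny sketch of what that ARIA work means for a custom control (a hand-rolled toggle button; the attribute choices follow the usual toggle-button pattern, but the example itself is made up):

  // A styled div is invisible to screen readers by default; role, tabindex,
  // aria-pressed and keyboard handling restore the semantics a native button gets for free.
  const toggle = document.createElement('div');
  toggle.textContent = 'Mute';
  toggle.setAttribute('role', 'button');
  toggle.setAttribute('tabindex', '0');
  toggle.setAttribute('aria-pressed', 'false');

  function flip(): void {
    const pressed = toggle.getAttribute('aria-pressed') === 'true';
    toggle.setAttribute('aria-pressed', String(!pressed));
  }

  toggle.addEventListener('click', flip);
  toggle.addEventListener('keydown', (e) => {
    if (e.key === 'Enter' || e.key === ' ') flip();
  });
  document.body.appendChild(toggle);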


Accessibility is a point I didn't think about at all. Thanks for reminding me, it's really something that is all too often forgotten. JavaFX supports ARIA and all standard controls have accessibility built-in. But I have no expertise to actually comment on the quality of accessibility features in JavaFX.


> One of the main complaints about cross-platform GUIs has been that they don't work like native applications.

wxWidgets applications work like native applications because they are in fact using the native toolkits, and some successful applications, like Audacity, are written using wxWidgets.

And it runs fast, as you would expect from C++. Why it is not used more fits perfectly with the content of this article.


Hmm. Java went through this with AWT and Swing. You can have "identical on all platforms, nonnative, missing some native features and look&feel" OR "native features, look and feel, but different across platforms". Embedded browsers are closest to the former.

It sounds like the solution (for Spotify at least) is better per-platform methods of creating a GUI around web services. Microsoft have sort of had a go at this.


The solution is to promote the browser to a full VM container, with both low level and high level APIs for graphics, sound, networking, and so on.

Browsers are already closer to this than most people realise. By the time you've included WebSockets, WebGL, Web Audio, and a bunch of other stuff, you're 90% of the way to a useful OS.

The problem is that the current state of the web "OS" is a random collection of ad hoc APIs hacked together for a completely different job, and it's very badly designed for what it's trying to do now.

In a perfect world the FOSS world would collaborate to thrash out a new spec, and also design a new browser to follow the spec which was backwards compatible with standard HTML etc, but also included new and more efficient APIs for much faster performance.

Something like this has already happened on servers - see also, containerised VMs running RoR or Node or whatever you want - and it's only a matter of time before it happens in browsers too.

The problem is that the current plan seems to be to build a VM layer on top of the existing DOM/js/etc layer, which will kill performance even further. It should really replace it and emulate it.


I'm conflicted on this.

On the one hand, I wonder how much of the wasted CPU cycles can be attributed to carelessness that could easily be avoided, or caught early, if we developers deliberately used underpowered hardware for our own machines. Speaking for myself, my main workstation, where I also do most of the testing on my desktop apps, has a Core i7-4770 processor and 32 GB of RAM. Maybe if I used something more modest, like an ultrabook with only 8 GB of RAM, I'd be more likely to notice when I'm carelessly writing inefficient code.

On the other hand, as others have argued both on this thread and elsewhere, there's a trade-off between machine efficiency and developer productivity. We may argue that it's wrong to waste machine resources when the machines in question don't belong to us, but then again, developer productivity means we can crank out more features that drive sales and make users happy.


A very good point. We should have a metric that establishes how much extra carbon is released into the environment by CPU-cycle-wasting crap like Atom and ever more bloated versions of Word. Clippy could pop up: "I can see you would like to release an extra ton of carbon into the atmos, why not upgrade Word now!"


Yeah, and then we could see how small the differences would actually be.

A 15" Macbook Pro has a 99.5Wh battery and lasts about 8h; that's 12.5W. A program that decreases battery life by 20% would only mean an increase of 2-3W. Even if the machine and program ran 24/7 and all the energy came from coal (2.5W around the clock is roughly 22kWh a year, or about 22kg of CO2 at roughly 1kg per kWh for coal power), that would mean an increase of less than 0.03 tonnes of CO2 per year. For comparison, the average carbon footprint of a US citizen is about 20 tonnes per year.


Apple has started doing something about energy efficiency, probably because most of their computers are laptops, where energy use is quite important. OS X tracks energy use per application, somehow calculating how much power a single application uses (CPU + GPU, if I remember correctly). But I don't think they do anything with that info at the moment.

I guess it would be a nice incentive for developers if OS X could notify you that an app in the background is using a lot of energy at the moment, maybe even with a quit button when on battery power. At least I wouldn't want my app to end up in that kind of popup.


As students we learn about the CPU/memory trade-off. In the real world the trade-off is between engineering time, CPU and memory. Given how much money we make, the business case for shifting the trade-off towards CPU time often doesn't come close to existing. Take Slack: they are adding tons of new users, their users love them, and the company has a high valuation. Focusing on getting that CPU percentage down to nothing would have torpedoed the company.


It seems that usually when moving from paper to digital forms, the original form is just reimplemented to be filled out on a computer. Rethinking the actual process might instead reduce the interaction required from a person, letting an automated system infer information that would otherwise have to be filled in manually.

In Finland, when you fill in your tax forms online, the form comes prefilled with numbers calculated from your tax info from the previous year. If there are no changes in your salary or benefits, you can just accept the form and it is done, without typing anything.


I've seen this first-hand. All they want is the same paper workflow they had before, but with less paper, even if the old workflow is bloated and/or nonsensical.

Edit: A colleague phrased this brilliantly before: "We have these machines that can do literally anything we want them to, and instead we're using them as a poor imitation of paper"


>It seems that usually when moving from paper to digital forms, the original form is just reimplemented to be filled out on a computer.

In Germany, as a business owner I'm required to file taxes electronically. But then in the last step I still have to print out some of the forms and send them via snail mail to the German tax office.

And don't get me started about registering a new company. It takes weeks and you need to visit a notary. (Coincidentally last week I created a UK Ltd to hold some intellectual property. It took 20 minutes and I paid the fee via PayPal. And the next day everything was ready to go.)

There are really different schools of thought when it comes to administration. And you can't just slap an electronic form on top of an over-regulated dinosaur and automagically become a modern & agile institution.


This is probably because they haven't found a way yet to put a stamp on a digital form ;)


Really? Scammers could do that ten years ago.

I assumed Germany was more organized than that.


Yes. In an ideal world, for example, founding a run-of-the-mill company could be as simple as spinning up a DO box or creating a PayPal account: set up billing and contact details, check a few options, click, and done.

In reality, even with e-signatures and what not it's far from that. You submit multiple documents to multiple government offices, processing takes days, and there is plenty of printing, mailing and scanning going on behind the scenes.

Problem is, governments have little incentive to improve UX, as they face no competition. You either put up with the bureaucracy and stupid big forms, or... well, there's no other option.


Tactile controls are something that most phones are missing. But there also seems to be a problem of culture. The games developed for mobile phones are targeted at mainstream casual markets. And there is a free-as-in-beer culture among mobile gamers. When the customer won't pay a proper price for a game, it is too much of a risk to create a large one, which is why developers target simple casual games and try to nickel-and-dime with targeted psychological tricks.

I enjoy more complex games, even on mobile platforms such as the Nintendo 3DS. My most played games are Fire Emblem Awakening, Etrian Odyssey IV, Devil Survivor Overclocked and Monster Hunter 3 U. The first three wouldn't even require tactile controls, but I don't think any mobile gamer in the current culture would buy them for 30 to 40 dollars.

With dedicated game consoles the culture of actually paying for a good large game is still alive and well. And I think that is the reason why developers make games for them. If the day comes when free-to-play casual games are the only mobile games available, it will be a sad day for me.

