Hacker News | bytefish's comments

Making software is 20% actual development and 80% maintenance. Your code and your libraries need to be easy to debug, and that means logs, logs, logs, logs and logs. The more the better. It makes your life easier in the long run.

So the library you are using fires too many debug messages? You know that you can always turn them off by ignoring specific sources, such as entire namespaces. So what exactly do you lose? Right. Almost nothing.
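As a minimal sketch of what such source-based filtering looks like (the `FilteringLogger` class and its API are hypothetical, invented for illustration; real frameworks such as .NET's ILogger or Serilog expose the same idea through configuration rather than code):

```typescript
// Hypothetical logger that drops messages by source/namespace prefix.
// Real logging frameworks do this via per-namespace minimum-level config.
type LogLevel = "debug" | "info" | "warn" | "error";

class FilteringLogger {
  private ignored: string[] = [];

  // Suppress everything coming from a given source/namespace prefix.
  ignoreSource(prefix: string): void {
    this.ignored.push(prefix);
  }

  // Returns the formatted line, or null if the source is filtered out.
  log(source: string, level: LogLevel, message: string): string | null {
    if (this.ignored.some((p) => source.startsWith(p))) {
      return null; // filtered: the chatty library costs you almost nothing
    }
    return `[${level}] ${source}: ${message}`;
  }
}

const logger = new FilteringLogger();
logger.ignoreSource("Chatty.Library");

console.log(logger.log("Chatty.Library.Parser", "debug", "token read")); // null
console.log(logger.log("MyApp.Orders", "info", "order created"));
```

The point is only that filtering happens at the sink, so the library can log as much as it wants and the consumer still decides what actually gets written.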

As for my own code and libraries, I tend to do both: log the error and then throw an exception. That way I am on the safe side either way. If the consumer doesn't log the exception, at least my code does. And I give them the chance to do logging their way and ignore mine. I make a best guess for you, asking myself what I would consider an error if I were using the library myself.

You don’t trust me? Log it the way you need to log it; my exception will carry all the relevant data to you.
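A sketch of this log-then-throw pattern (the `ImportError` type, the importer function, and the logger prefix are all made up for illustration, not from a real library):

```typescript
// Exception that carries the relevant context with it, so the consumer
// can log it their own way and turn the library's own logging off.
class ImportError extends Error {
  constructor(
    message: string,
    public readonly fileName: string,
    public readonly lineNumber: number,
  ) {
    super(message);
    this.name = "ImportError";
  }
}

function parseRecord(fileName: string, lineNumber: number, line: string): string[] {
  if (!line.includes(";")) {
    const err = new ImportError("Malformed record", fileName, lineNumber);
    // The library logs with its own logger (which the consumer can filter out) ...
    console.error(`[MyLib.Importer] ${err.message} (${fileName}:${lineNumber})`);
    // ... and then throws, so the consumer can log it their way too.
    throw err;
  }
  return line.split(";");
}
```

Duplicate log lines are the worst case here, and they are cheap to fix: the consumer silences the `MyLib.Importer` source and keeps their own handler.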

This has saved me so many times when getting bug reports from developers and customers alike.

There are duplicate error logs? Simply turn my logging off and use your own. Problem solved.

If it is a program-level error, maybe logging a warning and returning the error is the correct approach. Maybe it’s not? It depends on the context.

And this basically is the answer to any software design question: It depends.


I have never been on a project where the estimates were spot-on, and I have been doing this for 15 years now. By now hundreds of features have floated down the river and hundreds of meetings have been held.

Estimates are hard because there are way too many variables involved to make them accurate: politics within companies, restructuring of teams, the customer changing their mind, the reality you expected turning out slightly different, architecture shortcomings you only discover late in a project, the teams your work depends on disbanding, … and a million other things.

Theoretically you could update your estimates in a Scrum meeting, sure, but to be honest, this has always been nothing but a fantasy. We rarely work in a vacuum. Our features have been communicated higher up and have already been pitched to customers. In a fully transparent and open organization you might update your estimates and try to explain this to your customers. In reality, though? I have never seen it.

While this sounds very negative, my takeaway is not to waste too much time on estimates. Give a range of time you expect your features to fall into, and get on with the actual work.


I once migrated my repositories to Codeberg, but have since moved back to GitHub.

While I despise a lot of things about GitHub, Codeberg sadly lacks its gravitational pull and visibility. I know someone has to start, but as a single maintainer I need collaboration to keep my projects alive.


It’s great to see Apache Baremaps mentioned. It’s a great project, and I saw its first iterations. Really amazing that they have built a community around it.

Although my library probably plays only a minor role in Apache Baremaps (PgBulkInsert, for the Postgres COPY protocol), it’s great to see it chugging through all this data day by day.


If you are using SQL Server, then SQL Server Database Projects are an amazing tool to work with. I have found that they generate high-quality migration scripts and make it easy to diff against an existing database.

ORMs are good up until the point you need to include SQL views, stored procedures, functions, user-defined types… which is usually the point where the ORM abstractions begin to crack (and every SQL Server database I work with includes them).

For PostgreSQL I usually hand-write the scripts, because that is easier than fighting an ORM.

I have heard the Redgate tooling is also great to work with, but I’ve never used it personally.


Good point regarding ORMs - that was one of the main problems I wanted to tackle when we built Atlas (https://atlasgo.io). We added support for reading ORM definitions directly, then let you extend the "base schema" defined in them. For example, you can define your models in SQLAlchemy, EF Core, Ent, or others as a partial schema, and then extend it with functions, views, and additional objects.

From there, Atlas handles diffing, planning, and execution. It is similar to importing modules in Terraform, but for database schemas. See this example: https://atlasgo.io/guides/orms/sqlalchemy

Disclaimer: I'm involved with Atlas.


I feel super uneasy developing software with Angular, Vue or any framework that uses npm. The number of dependencies these frameworks pull in is absolutely staggering. Just looking at the dependency tree and the thousands of packages in my node_modules folder, it is a disaster waiting to happen. You are basically one phishing attack on a poor open source developer away from being compromised.

To me the entire JavaScript ecosystem is broken. A typo in your “npm install” is sufficient to open yourself up to a supply-chain attack. Could the same happen with NuGet or Maven? Sure!

But at least in those languages and environments I have a huge standard library and very few dependencies to take on. It makes me feel much more in control.


Deno solves this; it's not a JavaScript issue, it's a Node.js / npm issue.


How does Deno solve this? Genuine question by the way. I'm not trying to be snarky.


It provides a runtime that sandboxes your application and requires you to grant explicit permissions for file system operations and network requests.

This limits the attack surface of malicious dependencies, which npm otherwise happily installs for you.
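For illustration, a sketch of the model (the file path and URL are placeholders; `--allow-read` and `--allow-net` are Deno's actual permission flags):

```typescript
// main.ts — imagine a compromised dependency trying to read local
// data and phone home. Under Deno, both operations are denied unless
// the *caller* grants them at launch:
//
//   deno run main.ts
//     -> denied: no read or network permission was granted
//   deno run --allow-read=./config --allow-net=api.example.com main.ts
//     -> both operations are explicitly permitted, and only for
//        that directory and that host
//
const config = await Deno.readTextFile("./config/settings.json");
await fetch("https://api.example.com/upload", { method: "POST", body: config });
```

So even a malicious transitive dependency fails with a permission error instead of silently exfiltrating data.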

So yes, I was wrong and my previous comment was hyperbole. The big problem is npm, not JavaScript.

My point about the staggering amount of dependencies still holds though.


Of course, this only works so long as the sandbox is secure.

There have been attempts at this kind of sandboxing before. Java and .NET both used to have it (the SecurityManager and Code Access Security, respectively). Both dropped it because it turns out that properly sandboxing stuff is hard.


Go kinda solves this by using repo URLs instead of package names. That forces you to go through the repo and copy-paste the URL (instead of typing it out by hand), but it's not bulletproof, I guess.


Once you start with LINQ, you basically see it everywhere. It makes you think differently about structuring your code, which is a good thing. It makes you think about immutability and pure functions, leading to more robust code.

But it’s also a fine line, knowing when to use it and when not to. While LINQ is easy to read and make sense of, it is far from easy to debug. Especially on inexperienced teams I tend to limit my LINQ usage and try to write more “debuggable” code. But that’s my approach; I would love to hear other people’s thoughts on it.
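The trade-off can be sketched like this (TypeScript stands in for C# here; `filter`/`map` correspond to LINQ's `Where`/`Select`, and the data is invented):

```typescript
// Sample data, purely for illustration.
const orders = [
  { id: 1, total: 120, cancelled: false },
  { id: 2, total: 80, cancelled: true },
  { id: 3, total: 200, cancelled: false },
];

// Pipeline style, the LINQ way of thinking: concise and declarative,
// but a breakpoint gives you little insight into intermediate values.
const bigOrderIds = orders
  .filter((o) => !o.cancelled && o.total > 100)
  .map((o) => o.id);

// Explicit loop: more ceremony, but you can step through it statement
// by statement and inspect every intermediate value.
const bigOrderIdsLoop: number[] = [];
for (const o of orders) {
  if (o.cancelled || o.total <= 100) continue;
  bigOrderIdsLoop.push(o.id);
}

console.log(bigOrderIds);     // [1, 3]
console.log(bigOrderIdsLoop); // [1, 3]
```

Both produce the same result; the question is only which one the next maintainer can debug at 2 a.m.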


For a lot of problems it’s a good idea to talk to customers and stakeholders, and make the complexity very transparent.

Maybe some of the edge cases only apply to 2% of the customers? Could these customers move to a standard process? And what’s the cost of implementing, testing, integrating and maintaining these customer-specific solutions?

This has actually been my best way to reduce complexity in my software: talking to customers and business analysts… and making the complexity very transparent by assigning figures to it.


This. Also, DevExpress and Progress Telerik do not invest in their WinUI controls at all, and that’s a sign they don’t buy into WinUI either.

WinForms and WPF are currently the only viable frameworks for Line of Business applications. I have yet to see a WinUI 3 application in the wild.


Very true. We just developed a brand new LOB desktop app and settled on sticking with WPF. WinUI has been dead for years imo.

On a side note, I still love WPF after working in it for 10 years. Maybe it's just familiarity, and it's a little verbose at times, but man, it's a great framework when you know what you're doing.


We also settled on WPF for a new LOB desktop application, so this validates the decision. If you combine WPF with the CommunityToolkit.Mvvm library, it’s a very nice framework to develop with.


DevExpress supported WinUI for a little while but decided to abandon support.

One of the biggest problems with WinUI compared to WPF is that DependencyProperty is implemented in native code, so for .NET developers there is a huge performance penalty for getting or setting any property on a control.

https://github.com/microsoft/microsoft-ui-xaml/issues/1633#i...


It's like saying "a UI toolkit for Rust" and then making all the Rust functions call into a Java codebase, haha.

I read through that GitHub issue a few years ago and, with a mix of surprise and disgust, knew that I would not be learning WinUI.


I know that I am just a single data point, but Scheme took the fun out of programming during my computer science studies. Of course I understand that it makes teaching lambda calculus a lot easier and that it is a better vehicle for teaching theoretical computer science concepts than, say, Python.

But at the same time Scheme is largely confined to academia and has near zero practitioners outside of it. So the Scheme ecosystem is tiny compared to Python's or Java's.

Yes, you could argue that "Computer Science" is not mere "Software Development", that academia shouldn't bow down to industry and shouldn't be a tool to produce "programmers". But it's also important for motivation to be able to apply the language, algorithms and concepts in other fields. Learning has a lot to do with motivation and experimentation.

And this is a hot take, but I always had the feeling back then that courses are often bound to the professor's curriculum and sometimes their books, so I am not surprised about the resistance to changing the language.

That said, I was glad when the Scheme lectures were over. I have now been a software developer for more than a decade and have to admit that learning Scheme didn't make me a better programmer at all. If I'd stayed in academia, I might have a different opinion.

