
Indeed! I would buy it if it were smaller (e.g., 4.7")


I think the technical product manager role is the worst of both worlds:

- not enough tech: so you don't get to work on the nice and interesting stuff

- it has its management part: so you get to work on the boring stuff


> I was promoted to tech lead 2 months ago

Did you get a salary increase as well? And if so, would they decrease your salary if you go back to IC? If you keep your salary increase, that's a good deal. If there was no salary increase -> that wasn't a promotion! If they decrease your pay -> find another job


> I would rather not maintain a system that was built on quicksand, where dependencies cannot be upgraded without breaking anything.

To each their own. I prefer to maintain a bad system because:

- I can make it better

- If something doesn't work as expected it's because of the current state of the system, not because of my lack of ability

On the other hand, I don't really like to maintain very good systems (crafted by very intelligent people) because:

- There's little I can do to make them better (I'm a regular Joe)

- If something breaks it's because of my ability as a programmer (all the shame on me)

So, it's like playing in two different leagues (but the paycheck is more or less the same, so that's nice).


This is an interesting perspective which I'm inclined to disagree with. There's little pleasure to be found in having to deal with a system that broke because it was badly designed or implemented, although I guess it means you've got a reasonably secure job for the time being. Being able to gradually refactor it can be fun sometimes I guess, but I'd still rather not have to.

Your second category is more interesting to me - you're interpreting a system that is hard to understand and work on as one made by super intelligent people. I would interpret that as a system that was badly designed, unless you're doing some new and revolutionary thing (you're probably not). A system that has been designed in such a way that only someone with deep knowledge of the thought process can work on it has been designed badly. I know this because I have in the past designed many such systems. Coming back to them a few years later, even I hated myself for it, so I'm deeply sympathetic to the people who had to work on them who weren't me. Thankfully, in most cases I got to task a few people with ripping out the system and replacing it with something better.


"you're interpreting a system is hard to understand and work on as being made by super intelligent people"

I read it as the opposite. GP says that if a system is good, it needs no improvement, so there's no fun in refactoring and redesign.

And those good systems are easy to understand and work on, so when something breaks you can't blame it on the design. You can only blame yourself.


I also enjoy improving bad software, high five.

But funny: I was trying to think of "good" systems that I ever worked on, but drew a blank. It can't be that I only worked on bad code, right? Maybe this is one of those "when everyone around you is an asshole..." situations!

But now that I actually think deeper about it, the reason I don't remember doing a lot of work in good systems is because I barely had to touch them. They just worked, scaled fine, required very little maintenance.

And on those good systems, building new features was painless: they were always super simple and super familiar to newcomers (using default framework features instead of fancy libraries), because they never deviated from the norm. Things would also pretty much never break because there were failsafes (both in code/infra/linters/etc and in process, like code review).

At my previous job, the other person working on our backend was the CTO, who worked part-time and had lots of CTO responsibilities. I remember spending about 20 hours tops in the span of 2 years on that backend. It was THAT good.


> At my previous job, the other person working on our backend was the CTO, who worked part-time and had lots of CTO responsibilities. I remember spending about 20 hours tops in the span of 2 years on that backend. It was THAT good.

It might be "cargo culting" but I am curious what properties of that good system were true?


It was familiar, because it used a popular framework in the “vanilla” way, the way the author of the framework recommends. So even a junior dev would be able to do stuff on their first day.

There were very few optional third-party libraries or smarty-pants patterns. If it wasn’t necessary, it wasn’t imported.

Some database views were used instead of complex ORM queries. Sounds trivial but saves a lot of time debugging.
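Roughly the idea, sketched with Python’s stdlib sqlite3 purely for illustration (the schema and view are invented; the real system did this in whatever database/framework it actually used):

  import sqlite3

  conn = sqlite3.connect(":memory:")
  conn.executescript("""
      CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
      CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);

      -- The join/aggregation lives in the database as a view...
      CREATE VIEW customer_totals AS
          SELECT c.id AS customer_id, c.name AS name, SUM(o.total) AS lifetime_total
          FROM customers c JOIN orders o ON o.customer_id = c.id
          GROUP BY c.id, c.name;
  """)

  # ...so application code reads it like a plain table instead of rebuilding
  # the same multi-join ORM query in several places.
  rows = conn.execute("SELECT name, lifetime_total FROM customer_totals").fetchall()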

Control flow was so predictable that I rarely debugged. Honestly for a lot of features I just did TDD without much exploration at all, even on the first uses.

Features were super well isolated and decoupled. If there was some strange, awkward, cross-cutting concern between two distant parts of the domain, it was decoupled using async events rather than having domain-model-#1 call domain-model-#2. So any weird interaction between distant parts was well documented in a specific “events” folder.
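A toy sketch of that event-based decoupling in Python (the event name, payload and handler are invented; the real system would have used its framework’s own event mechanism):

  import asyncio
  from collections import defaultdict

  # Tiny in-process event bus: the publisher never imports the subscriber.
  _subscribers = defaultdict(list)

  def subscribe(event_name, handler):
      _subscribers[event_name].append(handler)

  async def publish(event_name, payload):
      # Fan out to every handler registered for this event name.
      await asyncio.gather(*(handler(payload) for handler in _subscribers[event_name]))

  # Domain model #2 reacts to the event instead of being called directly.
  async def grant_loyalty_points(event):
      print(f"granting points to customer {event['customer_id']}")

  subscribe("order_placed", grant_loyalty_points)

  # Domain model #1 only announces what happened.
  asyncio.run(publish("order_placed", {"customer_id": 42, "total": 99.0}))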

Dependencies were very up to date and everything was simple, so there were very few issues updating the framework.

Most important: the test suite was comprehensive and very fast.


There's a bit of selection bias. Developers are more willing to stay working on good systems for much longer, so there are fewer job openings for other developers to work on them. Hence, most job openings are to work on crappy systems.


You are 100% correct, but my observation was more about how I actually got to work on good projects and enjoyed them; I just don't remember much about them because they barely needed my intervention.

Naturally I'm not counting the stuff I built myself: I definitely spent a lot of time on them and they were a breeze to maintain, but I won't classify them as good or bad, since the one thing I'm sure of is that I'm biased about their quality ;)


I'm not defending fraud: if fraud is committed, then you pay for it (either economically or via prison). On the other hand, having thousands of people on the internet call you every name in the book because of a mistake is something else. I don't know, I definitely wouldn't like to be judged by Twitter users if I made a mistake in my life (that's why I don't have Twitter/IG/etc. It's so scary to be judged in 0.5 seconds by thousands of people all over the world).

The best thing he could have done, though, is apologize and handle it legally.


a mistake is an accident... this probably required more effort than actually finding women to speak at the conference... it was willful and harmful


This feels strange (maybe I got it wrong).

It's probably because Amazon has all the money in the world to develop their products, but I would expect a new product to launch with some minimal but strong features to attract customers. Q offers 40+ built-in connectors from day zero. Like, I can imagine the engineers/managers working on connector number 25: "Man, we have already implemented 24 connectors and we don't even know if the product will be a success or not...".


I see "connector" and hear "you'll spend weeks trying to get this feature to work before you give up and write it in Python in an afternoon."


15 connectors was the bare minimum. Brian, for example, has 37 connectors, okay. And a terrific smile.


> This year our team is hiring for 3 new people instead of 4

Nah. We'll be able to do more with less... but the amount of work needed will increase, hence more people will be needed as well. Same old story. Compilers didn't get masses of people fired.


I don't think anyone can claim certainty on this either way.

My view is that, in the past, the increases in productivity came during a period of exponential growth of the software industry, and that growth could absorb the additional productivity. But exponential growth doesn't last forever, and if you have a period of a declining / flat / slowly growing software field, a significant enough productivity improvement from tools like LLMs can reduce the overall demand for software development.


> Outsourcing the work to a country with low wages. Even nowadays you can get highly skilled workers for like $50 a month in some African countries.

But this is not the norm. In order for company X to hire people from a different country, one of the following is necessary:

- company X hires contractors/freelancers. The vast majority of people out there in IT work as employees, not as contractors

- company X needs a branch in that country. This is rare

- company X hires in that country via an intermediary company. Not as rare, but usually it's not worth it

Not even in Europe do companies hire people from other countries that easily. It's more common now after covid, yes, but it's definitely not "let's hire developers from country X 50% cheaper!"


What you say is true, but the amount of "grunt work" is not constant over the years. In fact, I think the amount of "grunt work" in the tech industry is just growing and not shrinking; I think the following loop is quite obvious:

- amount of current grunt work: X

- new tech Z appears that makes X be reduced to 0.1X

- at the same time Z enables new ways of doing things. Some things become grunt work because they are a byproduct of Z

- amount of current grunt work: Y (where Y ~= X)

- ...

If technological progress had stopped in the 2000s, then all the grunt work (originating in the 90s) would be essentially zero today. New tech brings both automation and new grunt work. I don't think we will live in a society where there's practically no grunt work.

The most recent example is AI: there are AI tools that generate sound, images, video and text... but if you want to create a differentiating product/experience, you need to do the grunt work of combining all the available tools (ChatGPT, Stable Diffusion, etc.)


>If technological progress had stopped in the 2000s, then all the grunt work (originating in the 90s) would be essentially zero today.

If you wanted to build a simple database application in the 1990s, Delphi, VB6 or MS-Access were most of what you needed to get it done. The UI was drag and drop, and the database was SQL but you almost never touched it; mostly it was wiring up events with a few lines of code.

The work was commodified out of the way! Domain experts routinely built crude-looking but functional programs that got the job done. It was an awesome time to be a programmer: you just had to refactor an already working system, fix a few glitches, and document everything properly, and everyone was happy.

Then everyone decided that all programs had to work on Steve Jobs' magic slab of glass in a web browser connected through janky Internet, and all that progress was lost. 8(


Are all of those proprietary products? I can't speak to your experience, but given that Linux was created in 1991, it seems like, from another angle, you're bemoaning the rise of OSS and the web.

I'm just a web developer who learned everything from online resources. So I think we are both biased, on different ends of the spectrum.


Open source is great, Lazarus does a pretty good job of replacing Delphi.

Microsoft went insane with .NET so VB6 was killed in the process.

Access automatically handled table relationships, building queries and seeing them as SQL, and the report engine was pretty good. Thanks to ODBC, you could use the same database across all of them, or hook up to a real SQL server when it came time to scale up.
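For what it's worth, that ODBC property still works today. A rough Python sketch with pyodbc, assuming the relevant ODBC drivers are installed (the driver names, paths and table are just examples):

  import pyodbc

  # The same query code runs against an Access file or a SQL Server instance;
  # only the connection string changes (both strings below are examples).
  ACCESS = r"Driver={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=C:\data\app.accdb"
  SQLSERVER = ("Driver={ODBC Driver 17 for SQL Server};"
               "Server=dbhost;Database=app;Trusted_Connection=yes")

  conn = pyodbc.connect(ACCESS)  # swap in SQLSERVER when it's time to scale up
  for row in conn.execute("SELECT id, name FROM customers"):
      print(row.id, row.name)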

What's missing these days is the desktop and a stable GUI API. Windows apps from the 1990s still work, because they are distributed as binaries. Most source code from back then will not compile now because too many things have changed.

I love Open Source, but it doesn't solve everything.


> Microsoft went insane with .NET so VB6 was killed in the process.

I'd love to hear more about this perspective or any links to get more of it.

I did a (very) little hobby VB6 and loved it. Never made the switch to .NET at that time (I was young, it was a hobby).

Having recently worked through part of a .NET book, I was pretty impressed by how far MS took it (although it seems extremely mind-numbing). Obviously it took a long time and had false starts, but MS stuck with it. On a personal level, I am very opposed to the entire model in an ideological sense, but it does seem to make a lot of business sense for MS, and it seems to cover a lot of cases for a lot of businesses.

So, was Microsoft's insanity with .NET just the obsession part, or doing things poorly for a while, until eventually getting it "righter", or is the insanity still pretty apparent?

I really would love to learn more about the historical-technical aspects of this specific comment quote, from VB6 to the modern day, because it fits my experience perfectly, but I've had second thoughts about the position more recently. The more specifics the better.


The insanity was to abandon the advantage they had with VB/COM, in order to challenge Java on its own ground. They threw away the baby with the bathwater. The C# pivot also slowed down their desktop efforts pretty dramatically, doubling the blow.

They were lucky Sun squandered the opportunity they had engineered with Java, focusing on the hardware side and missing the boat on browser, virtualization and services. If Sun had bought Netscape and then focused on building something like Azure, instead of fighting the inevitable commoditization of server hardware, they would have eaten Ballmer's lunch.


Disclaimer: I am not a .Net programmer, so these are just my thoughts and impressions as someone on the outside who followed the development from a distance.

I think a lot of the focus on .Net was driven by MS and Ballmer's fear of Java. At the time, almost all desktop computers were running Windows 9x/2k. If 3rd-party applications were developed with cross-platform Java, customers would no longer be locked in to Windows.

First they tried the famous embrace/extend/extinguish approach by creating a Windows-specific version of Java. Sun fought back, and MS decided to push .Net instead.

It seemed to me that the initial strategy was to claim .Net was cross-platform, but focus more on Windows and let open source projects like Mono be their cross-platform "alibi". They changed strategies after a while, and now I guess the cross-platform support is more real.


> Windows apps from the 1990s still work, because they are distributed as binaries.

Only if you have the right libraries, and runtimes, and OS interfaces, and even if you have all that, oh no, it's a MIPS binary and you don't live in 1996!

Any proprietary API exists precisely as long as the owner says it does. Open standards don't suffer from that malady.


>Only if you have the right libraries, and runtimes

That generally only happens with .NET-based programs on Windows systems. You always need some .NET 2, 3, 3.5, 4, 4.5, etc., runtime.


Totally agree. There is no backward compatibility with the .NET runtime - if your application is built/linked against a given version, it won't work with any other version of .NET.


That's simply not true. The newest .NET 8 does not need the assemblies you reference to target .NET 8 - as long as the TFM is any version of 'netstandardx.x', 'netcoreappx.x' or 'net5'+ it will work.

You can even make proxy projects that target netstandard2.0 but reference .NET Framework, and with certain compat shims the code will just run on .NET 8 unless it relies on some breaking changes (which mostly have to do with platform-specific behavior; there have been no breaking changes to the language itself since, I think, C# 1 or 2, some 20-odd years ago).

As for the runtime itself - the application can restrict itself from being run by a newer version of the runtime, but otherwise you absolutely can do so. The lightweight executable that just loads the runtime and executes the startup assembly may complain, but just try it - build a console app with a 'net5.0' target and then run it with the latest SDK with 'dotnet run mynet5app.dll' - it will work.


I think the point is that the Access and Lotus Notes tooling was somewhat ubiquitous in largish corporations.

The experience of this tooling was: make a change and it was in production. It was incredibly simple and productive to work with, given the needs of the time.

There were also plenty of opportunities to make a mess, but I don't think that has really changed.

Learning was not difficult, you just had to be prepared to spend time and some money on books and courses.

It is not a tooling set you would want to go back to for a bunch of different reasons but it worked well for the time.


> It is not a tooling set you would want to go back to for a bunch of different reasons but it worked well for the time.

I remember using Lotus Domino at one of my first jobs. There were all sorts of things I hated about it. But you could have a database - like the company’s mail database. And define views on that database (e.g. looking at your inbox, or a single email). And the views would replicate to a copy of that database living on all of your users’ computers. And so would the data they needed access to. It was so great - like, instead of making a website, you just defined the view based on the data itself and the data replicated behind the scenes without you needing to write any code to make that happen. (At least that’s how I understood it. I was pretty junior at the time.)

Programming for the web feels terrible in comparison. Every feature needs manual changes to the database. And the backend APIs. And the browser code. And and and. It’s a bad joke.

Commodification has a problem: for awkward teenagers to make the same fries every day, we have to ossify the process of making fries. But making good software needs us to work at both the level of this specific feature and the level of wanting more velocity for the 10 other similar features we’re implementing. Balancing those needs is hard! And most people seem content to give up on making the tooling better, and end up using whatever libraries to build web apps. And the tools we have are worse in oh so many ways compared to Lotus Domino decades ago.

I wonder what the original Lotus Notes designers think of web development. I think they’d hold it in very low regard.


Right!!

10/20/x years ago we didn't have DevOps, CloudOps, CloudFinOps, CloudSecOps, IaC experts, Cloud Architects, Cloud transformation experts, Observability architects, SREs, plus all the permutations of roles around "data" that didn't exist discretely, etc etc etc.


We did not have web scale products, which enabled new possibilities. E-mailing documents and collaborating offline sucked.


> I think the amount of "grunt work" in the tech industry is just growing and not shrinking...

Not sure, but isn't this just another way of saying that the tech industry keeps growing?


I'm not sure what the parent post meant exactly, but I do agree there is tons of grunt work -- I've seen big-name SV companies where large parts of the workflow include steps like "and then every hour you need to do something in a slow UI that can't be automated" to keep vital systems working. I would say that's really grunt work, and there are even people in such companies whose only task is doing such grunt work. Truly, I've been told by clients I work with that they have entire double-digit-sized teams whose members' only responsibility is to reboot VMs that breach specific resource thresholds -- easily automated and even built into most hypervisors, but for whatever reason these tech giants opted for a human to do it. The only semi-reasonable explanation I got from one client was that their infrastructure team got outsourced and they laid off the only people who knew how to use the automation tooling. It's a dumb reason for sure, but at least I can understand why they opted for the manual grunt work.
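To be clear about what "easily automated" means here, the whole job is roughly the following. This is a hypothetical sketch: the hypervisor client and its methods (get_vms, cpu_percent, mem_percent, reboot) are invented placeholders, and the thresholds are made-up policy numbers.

  # Made-up policy thresholds for illustration.
  CPU_THRESHOLD = 95.0   # percent
  MEM_THRESHOLD = 90.0   # percent

  def reboot_hot_vms(hypervisor):
      """Reboot any VM that breaches the resource thresholds.

      `hypervisor`, `get_vms()`, `cpu_percent()`, `mem_percent()` and `reboot()`
      are placeholders for whatever API your hypervisor actually exposes.
      """
      for vm in hypervisor.get_vms():
          if vm.cpu_percent() > CPU_THRESHOLD or vm.mem_percent() > MEM_THRESHOLD:
              print(f"rebooting {vm.name}")
              vm.reboot()

  # Run it on a schedule (cron, or a sleep loop calling a placeholder
  # connect_to_hypervisor()) and the double-digit-sized team becomes one small script.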

Similarly, keep in mind a lot of this grunt work is just to satisfy some reporting requirement from somewhere -- some person(s) in the company want to see at least X% uptime or Y LOC every day, so you get people writing a lot of yak-shaving code that basically does nothing except satisfy the metrics or ensure that the uptime % always looks good (i.e., they don't fix the cause of the downtime entirely, they just get the endpoint that is checked to determine uptime working well enough so it reports to the monitoring system, and they leave it at that).


If it's the amount of grunt work to solve the same problem, it just means the ecosystem keeps getting worse.

Which, IMO, is quite obvious.


We are inventing the problems of tomorrow by solving the problems of today, and people tend to be the constraint.

Managing complexity to the point where a fixed team can operate the software.


I'm a total newcomer to this, but I would like to learn and start making "music". They advertise:

> MAKE USE OF THE CURATED SELECTION OF DRUMS, BASS AND KEYS THAT COME PRE-LOADED ON YOUR K.O.II.

Question: would I be stuck with their pre-loaded curated selection, or is there any way I can "upload" extra drum/bass/keys samples and use them?


From what I see in their docs [0], you can either record samples directly on the hardware, or use a web tool [1] that uses Web MIDI (!) to update them. Guessing from the Web MIDI use, it should probably be easy to create open tools or even load that page offline to update its samples.

  [0]: https://teenage.engineering/guides/ep-133/functions#10.1-sample
  [1]: https://teenage.engineering/apps/ep-sample-tool


There is software included that allows you to add your own samples (audio files).


Yes, you'd be able to use whatever samples you want; transfer from the computer ("drag and drop samples using the sample tool") or record them yourself via the input.


It has 64 MB for samples; I think you can record via the line-in + microphone or with the companion software.

