
Congratulations on building this! I certainly agree that there are a lot of sites that force you to upload, ask for your email, and sometimes even add a watermark to the image, amongst other unknown things.

Although at first glance, I can tell you that there's a lot of text on the site and it's a bit too cramped. From my perspective, tools like these should get out of the way, and the UX should be self-explanatory for an image "conversion" tool. Basically, just a box to select or drag-and-drop images, plus a few user inputs such as the output quality and format. That's about it. A single line at the top explaining what the tool does (and that it is local) should be good enough.

Also, the title says "PNG to JPG converter," but the rest of the site claims it can convert quite a lot more than just those formats. You could change that to something like "ImageConverter - Convert images between formats, locally", and get rid of the multiple pages, turning it into a single page with all the possible output options.

As a sidenote, I've been using Mazanoke for this: https://github.com/civilblur/mazanoke. It's not my project, just something I happened to stumble upon a while ago, but it's similar to your project and works exactly like you would want it to.

From my testing, the rest of it works great. Good luck!


Hey, thanks a lot for the thoughtful feedback — really appreciate you taking the time to write this!

Totally agree with you on the UX point. I also believe these tools should be almost invisible — just drag, drop, adjust a few settings, and done. I initially added more text to make the privacy aspect clear (since many users don’t realize it’s 100% local), but you’re right — that could be simplified and better communicated with a single line. I’m already working on a cleaner layout with fewer distractions and a clearer “drop zone.”

Good catch on the title too! I started with “PNG to JPG converter” for SEO reasons, but as the app expanded to support multiple formats (WebP, PNG, JPG, etc.), that label became outdated. I really like your suggested phrasing — something like “ImageConverter — Convert images between formats, locally” is a lot clearer and more accurate.

And thanks for sharing Mazanoke — hadn’t seen that before! Love that it follows the same “everything local” approach. I’ll take a closer look; maybe I can learn a few UX tricks from it as well.

Appreciate the kind words, and thanks for testing it out! If you have any other UX ideas or thoughts on layout simplification, I'd love to hear them.


I've been working on an encrypted environment variables management tool for teams, called kiln[1]. I know tools like age and SOPS exist, but this partly came about because of the lack of good UX around the encryption part, especially for team-based workflows. I aim to continue building kiln as a developer-first experience, making it seamless to integrate into a large team's workflows.

The idea came to me when we were trying to find ways to manage Terraform secrets: CI vars were a no-go because people sometimes want to deploy locally for testing, and tools like Vault have honestly been a pain to manage, well, for us at least. So I have been building this tool where the variables are encrypted with `age`, with RBAC around them, and an entire development workflow (running ad-hoc commands, exporting, templating, etc.) that can easily be integrated into any CI/CD pipeline alongside local development. We're using this and storing the encrypted secrets in Git now, so everything is version-controlled and can be found in a single place.
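To give a rough idea of the day-to-day flow (a simplified sketch; `kiln run` and `kiln rekey` are the actual subcommands, but the file names and exact invocations here are placeholders):

    # kiln.toml and the encrypted env files live in the repo, version-controlled
    git add kiln.toml secrets/production.env
    git commit -m "Add encrypted Terraform secrets"

    # run an ad-hoc command with the decrypted variables injected into its environment
    kiln run -- terraform plan

    # onboarding a teammate: add their SSH public key to kiln.toml, then re-encrypt
    kiln rekey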

Do give it a try. I am open to any questions or suggestions! Interested to know what people think of this. Thanks!

[1]: https://kiln.sh


Seems to be a very cool project, especially for teams. I like the access control mechanism, and the `kiln run` command is great!


Yes, SOPS does have `exec-env`, which does the same thing, kind of. Judging by one of the issues, it currently lacks support for the POSIX way of running commands: https://github.com/getsops/sops/issues/1469; you cannot add a `--` to tell sops that everything after it is the command, so you end up having to quote everything. Other things I found lacking: with SOPS, adding a new team member means manually updating .sops.yaml, re-encrypting all files, and managing PGP/age keys. With kiln, you just add their SSH key to kiln.toml and run `kiln rekey`.
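To make the quoting difference concrete (the sops example follows its `exec-env FILE COMMAND` form; the kiln invocation is a simplified sketch):

    # sops: the whole command has to be passed as a single quoted string
    sops exec-env secrets.env 'terraform plan -var-file="prod.tfvars"'

    # kiln: everything after -- is the command itself, no extra quoting layer
    kiln run -- terraform plan -var-file=prod.tfvars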

kiln also lets you have different access controls per environment file (devs get staging, only ops get production) without separate .sops.yaml configs, automatically discovers keys from SSH agent/~/.kiln/, and has built-in template rendering and export formats for different tools. You could definitely build similar workflows with SOPS + scripts, or any other tool, but kiln packages these common patterns into a single tool with better UX for teams.

Think of kiln as "opinionated SOPS", focused specifically on environment variables rather than general file encryption.


Well, technically SOPS/age are both encryption tools first. Both of them are excellent, mind you. But they lack a good user experience, SOPS especially, around handling keys in a multi-user environment, and consequently around the overall developer workflow. They also offer a lot more than just secure access to environment variables, which is the specific problem kiln is trying to solve.

At first, I did consider using them instead of building my own tool on top of age. But our requirements were far beyond just encrypting and decrypting files in a single environment.

What kiln adds here is role-based access control: you can define multiple files and the users/groups who should be able to access them. It also adds to the developer workflow: you can run commands directly through kiln with the variables injected into the command's shell environment, and you can render templates from all the kiln-encrypted files you have access to.

You could say it's a wrapper over age, but one that adds functionality for seamlessly sharing developer workflows and environments, all from a single place. It's Git-friendly and primarily aims for your secrets to travel along with the code, so all deployments can be done offline (as an alternative to something like Infisical or Vault). I've tried to make it as simple as possible for anyone on the team to adopt.

The best way for me to put it is that you should just try it out; I'm sure it'll be helpful in a lot of ways. If you have any more questions, I'm happy to answer them!


I'm working on Damon[1], a Nomad Events stream operator that automates cluster operations and eliminates repetitive DevOps tasks. It's a lightweight Go binary that monitors the Nomad events stream and triggers actions based on configurable providers.

A few examples of what it can currently do:

- Automated data backup: Listens for Nomad job events and spawns auxiliary jobs to back up data from services like PostgreSQL or Redis to your storage backend, based on job meta tags. The provider isn't limited to backups: users can define their own job and ACL templates and the tags to match, so it can potentially run anything based on job registration and deregistration events.

- Cross-namespace service discovery: Provides a lightweight DNS server that acts as a single source of truth for services across all namespaces, solving Nomad's limitation of namespace-bound services. Works as a drop-in resolver for HAProxy, Nginx, etc.

- Event-driven task execution: Allows defining custom actions triggered by specific Nomad events; perfect for file transfers, notifications, or kicking off dependent processes without manual intervention. This provider takes a user-defined shell script and executes it as a Nomad job based on any Nomad event trigger the user defines in the configuration.

Damon uses a provider-based architecture, making it extensible for different use cases. You can define your own providers with custom tags, job templates, and event triggers. There's also go-plugin support (though not recommended for production) for runtime extension.
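To make this a bit more concrete: the auxiliary job a provider spawns (the backup one, for instance) can be as simple as a small shell script handed to Nomad (purely illustrative; the environment variable names below are placeholders, not Damon's actual interface):

    #!/bin/sh
    # dump the tagged service's PostgreSQL database and ship it to object storage
    # TARGET_DB and BACKUP_BUCKET are placeholders for whatever the job template injects
    pg_dump "$TARGET_DB" | gzip > "/tmp/backup-$(date +%F).sql.gz"
    aws s3 cp "/tmp/backup-$(date +%F).sql.gz" "s3://$BACKUP_BUCKET/"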

I built this to eliminate the mundane operational tasks our team kept putting off. It's already saving us significant time and reducing gruntwork in our clusters.

Check out the repository[1] if you're interested in automating your Nomad operations. I'd love to hear your thoughts or answer any questions about implementation or potential use cases!

[1]: https://github.com/Thunderbottom/damon


We recently migrated from Matomo to Umami at work after hitting scaling issues with Matomo, even after implementing various MySQL optimizations and archiving reports through cron at a decent interval. Even the most basic tasks, like loading the dashboard, were painfully slow (before you comment on the resource usage: our instances were quite large and the load was alright).

Surprisingly, Umami has been handling our traffic volume without breaking a sweat on much smaller instances. I suspect PostgreSQL's superior handling of concurrent writes plays a big role here compared to MySQL/MariaDB. Except for the team/user management, everything feels much nicer on Umami.

Shameless plug: As part of the migration, I also took the opportunity to learn some Rust by writing a small utility that uses the Umami API to generate daily/weekly analytics reports and sends them via email[1]. Pretty happy with how it turned out, though I'm still learning Rust so any feedback or suggestions for improvement are welcome!
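For context, the core of the utility is just wrapping a couple of Umami API calls and formatting the results into an email, roughly something like this (simplified; the endpoint paths and parameters are from the Umami API docs as I remember them, so double-check them against your version):

    # authenticate and grab a bearer token
    curl -s -X POST "$UMAMI_HOST/api/auth/login" \
        -H 'Content-Type: application/json' \
        -d '{"username": "admin", "password": "..."}'

    # pull aggregate stats for a website over the reporting window (unix-millisecond timestamps)
    curl -s "$UMAMI_HOST/api/websites/$WEBSITE_ID/stats?startAt=$START_AT&endAt=$END_AT" \
        -H "Authorization: Bearer $TOKEN"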

[1]: https://github.com/Thunderbottom/umami-alerts


I am also curious about the traffic amount and server specs.

In my experience, MySQL still runs very well until you have 10-20m rows (on a single machine, say 8 vCPUs and 32GB RAM); after that it gets trickier to get instant responses.


We had huge servers, with the database and the application itself running on separate instances. IIRC, we had a 32-core, 64GB instance just for the DB itself, which we doubled when we started adding more sites to our configuration, and it still wasn't enough. As for the numbers, our site(s) get heavy traffic every day, in the millions daily, since we are a stock broker.

You’re right about MySQL performing alright for 10-20m rows, but from our perspective those numbers are not that big for a company this size.


> our site(s) get heavy traffic every day, in the millions daily

Yeah, it's hard to run aggregate queries on MySQL once you are talking about hundreds of millions of rows, or billions. Even then, if the server has a modern CPU, enough RAM to store the entire DB, and NVMe storage, it's still OK-ish with the right indexes and optimized queries.

Thanks for sharing!


Could you describe the load and the server/DB specs a bit? I'm using Plausible right now and I wonder how it would hold up with similar specs.


We had separate database and app instances; the DB instance had 32 cores and 64GB of memory, which we doubled to keep up with our requirements. We have tens of millions of visits daily, and our database was close to ~300GB within the first few months.

For Plausible, I believe that since it runs on Postgres, scaling should not be a problem as long as you scale the resources with it.


For my platform, I found those optimization tips to work quite well: https://docs.uxwizz.com/installation/optimization-tips/mysql...


In all honesty, these optimizations are quite basic. We already used MariaDB instead of MySQL itself. The other things listed in the post are standard across all our databases, well, except for deleting data to speed up the database.


Have you also considered Percona MySQL server? I think they say they have the best performance (but I haven't tested their implementation yet).


No, unfortunately our company's policies and external regulatory compliance require us to host all data within the country itself, and to run it on infrastructure that is easily auditable. So, as a company policy, all our internal services are open source and self-hosted.


Umami also supports MySQL, and I don't remember there being much of a difference between Postgres and MySQL as the backend.

Things would hopefully be even better once ClickHouse support lands.


Hi, I'm from the team at Zerodha. It seems that the well-known URL for the laravel repository points to the GitHub page[1] instead of the URL for the raw code[2]. Changing the well-known URL to the raw code path should fix this issue. I'll pass this feedback on to the team to improve the experience, thanks!

[1]: https://github.com/nativephp/laravel/blob/main/.well-known/f...

[2]: https://raw.githubusercontent.com/nativephp/laravel/refs/hea...


Sure, but that's not how it's demonstrated in the docs...

I've updated it, but that also doesn't work AND fails validation because:

"projects[0].repositoryUrl.url and `projects[0].repositoryUrl.wellKnown` hostnames do not match"


Apologies for my previous misleading comment. We have figured out the issue, and a fix is being implemented right now to handle URLs for GitHub repositories and suffix them with `?raw=true` to automatically fetch the raw file. The fix should be live soon.

In the meantime, if you would like to submit the project, you could use the following URL, which redirects to the `raw.githubusercontent.com` page: https://github.com/nativephp/laravel/raw/main/.well-known/fu...
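If you'd like to double-check the redirect yourself before submitting, a quick header check shows where GitHub sends the request (the angle-bracket parts are placeholders for the repository path):

    # -I fetches headers only; the Location header should point at raw.githubusercontent.com
    curl -sI "https://github.com/<owner>/<repo>/raw/<branch>/<path-to-manifest>" | grep -i '^location:'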

I hope this helps!


Awesome! It worked! Thanks so much.

Looking forward to being listed and considered for funding :)


Cheers! The issue has been fixed now and is live. Thanks a lot for your feedback :)


Hi, I'm from the team at Zerodha.

> Instead of structuring this as a fund, why not a non-profit?

Thank you, this is an interesting suggestion. While a non-profit structure could potentially increase donations, implementing this globally would be extremely complex.

This is particularly because tax laws vary by country, which would require us to be registered as a non-profit in most, if not all, countries and to comply with each jurisdiction. The administrative overhead and legal complexity of managing a truly international non-profit outweigh the benefits at our current scale. We appreciate the idea and will keep it in mind as we grow, but for now we're focusing on efficiently directing funds to FOSS projects through our current model.


Thank you! These are very cool.

> you can save some bandwidth in exchange for recording quality by using high compression of Speex algo

How about using Opus[0]? The comparison chart[1] shows that Opus is supposedly significantly better even at lower bitrates.

[0]: https://www.opus-codec.org/

[1]: https://www.opus-codec.org/comparison/


Actually, I encoded the files at 4 kbps, but Opus only goes down to 6 kbps?


http://www.rowetel.com/wordpress/?page_id=452 is a bit more exotic but should sound way better even though it caps out at 3.2 kbps.


Impressive, Codec 2 can go down to 0.7 kbps! http://www.rowetel.com/downloads/codec2/hts1a_700c.wav


Codec 2 has even lower bitrate modes than Speex or Opus.


Wow, looks very good! It seems it may have even better support in players. I should try it out.


I am sorry, I usually don't do this but I guess the person you're replying to is the creator of Tildes.

