There was zero evidence they were close to a nuke. In fact, they've been alleged to be weeks away from a nuke for over 20 years. And the accusations come from the ones with the illegal nukes themselves!
This was true 3 years ago but not generally the case anymore. There's been significant advancements to move away from trusted setups and the speedups with current methods are quickly approaching viability.
Even if that's true: Okay, so why "UNLICENSED" instead of "CC0-1.0"? Especially for a trivial example? Or I guess MIT or something if you really care about something at the same level as hello world?
noob question, i'm currently adding telemetry to my backend.
I was at first implementing otel throughout my api, but ran into some minor headaches and a lot of boilerplate. I shopped around a bit and saw that Sentry has a lot of nice integrations everywhere, and seems to have all the same features (metrics, traces, error reporting). I'm considering just using Sentry for the backend and frontend and other pieces as well.
Curious if anyone has thoughts on this. Assuming Sentry can fulfill our requirements, the only thing that really concerns me is vendor lock-in. But I'm wondering what other people think.
>I was at first implementing otel throughout my api, but ran into some minor headaches and a lot of boilerplate
OTel also has numerous integrations: https://opentelemetry.io/ecosystem/registry/. In contrast, Sentry lacks traditional metrics and other capabilities that OTel offers. IIRC, Sentry experimented with "DDM" (Delightful Developer Metrics), but this feature was deprecated and removed while still in alpha/beta.
Sentry excels at error tracking and provides excellent browser integration. This might be sufficient for your needs, but if you're looking for the comprehensive observability features that OpenTelemetry provides, you'd likely need a full observability platform.
Think of otel as just a standard data format for the logs/traces/metrics your backend(s) emit, plus some open source libraries for dealing with that data. You can pipe it straight to an observability vendor that accepts these formats (pretty much everyone does: Datadog, Stackdriver, etc.), or you can simply write the data to a database and wire up your own dashboards on top of it (e.g. Grafana).
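To make the "it's just a data format" point concrete, here is a hand-rolled sketch (stdlib only, not the real opentelemetry-sdk) of a dict shaped loosely like an OTLP span. Field names follow the OTLP JSON convention, but this is illustrative only; real code would let the SDK and an exporter build and ship these records.

```python
import json
import time
import uuid

def make_span(name, start_ns, end_ns, attributes):
    # Loosely OTLP-shaped span record; real spans carry more fields
    # (parent span id, status, resource attributes, etc.).
    return {
        "traceId": uuid.uuid4().hex,        # 128-bit trace id as hex
        "spanId": uuid.uuid4().hex[:16],    # 64-bit span id as hex
        "name": name,
        "startTimeUnixNano": start_ns,
        "endTimeUnixNano": end_ns,
        "attributes": [
            {"key": k, "value": {"stringValue": str(v)}}
            for k, v in attributes.items()
        ],
    }

start = time.time_ns()
span = make_span(
    "GET /users",
    start,
    start + 5_000_000,  # pretend the request took 5 ms
    {"http.method": "GET", "http.status_code": 200},
)
# In practice an exporter would encode this (protobuf or JSON) and POST
# it to a collector or vendor endpoint; here we just print it.
print(json.dumps(span, indent=2))
```

Since it's just structured data, nothing stops you from writing these records to your own database and querying them directly.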
Otel can take a little while to understand because, like many standards, it's designed by committee, and the code and documentation reflect that. LLMs can help, but the last time I asked them about otel they constantly gave me code that was out of date with the latest otel libraries.
I'd say "track errors first" [0] and focus on APM later (if at all). If you're worried about Sentry's lock-in, know that there are API-compatible drop-in replacements[1][2] though they are less feature-complete on the APM/observability side.
Ops type here: Otel is great, but if your metrics aren't there yet, please fix that first. In particular, consider just importing prometheus_client and going from there.
Prometheus is dead easy to run, Grafana understands it, and anything involving alerting/monitoring from logs is a bad idea for future you. I PROMISE YOU, PLEASE DON'T!
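For context, here is a stdlib-only sketch of the Prometheus text exposition format that prometheus_client generates for you. The `Counter` class below is a toy stand-in, not the real library API; in real code you'd do `from prometheus_client import Counter, start_http_server` and let the library serve `/metrics` for Prometheus to scrape.

```python
class Counter:
    """Toy stand-in for prometheus_client.Counter, for illustration only."""

    def __init__(self, name, help_text):
        self.name = name
        self.help_text = help_text
        self.value = 0.0

    def inc(self, amount=1.0):
        # Counters only go up; Prometheus computes rates from the deltas.
        self.value += amount

    def expose(self):
        # Roughly what Prometheus scrapes from /metrics for this series.
        return (
            f"# HELP {self.name} {self.help_text}\n"
            f"# TYPE {self.name} counter\n"
            f"{self.name} {self.value}\n"
        )

errors = Counter("critical_errors_total", "Count of critical errors.")
errors.inc()
errors.inc()
print(errors.expose())
```

The point of the comment above is that this is all a metric is: a named number you bump in code and expose on an endpoint, which is far cheaper and more stable to alert on than parsing log lines.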
It’s trivial to alter or remove log lines without realizing that it affects some alerting or monitoring somewhere.
That’s why there are dedicated monitoring and alerting systems to start with.
If you need an artifact from your system, it should be tested. We test our logs and many types of metrics; we've had too many incidents from logs or metrics changing and no longer triggering alerts. I never got to build out my alert test bed that exercises all known alerts in prod, verifying they continue to work.
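The "test your alerts" idea above can be sketched as follows. Everything here is hypothetical and invented for illustration: the alert condition lives in a plain function, so an ordinary unit test can assert that the inputs which caused past incidents still fire it after a refactor.

```python
import re

# Hypothetical log-based alert rule: fires on error/critical log lines.
ERROR_PATTERN = re.compile(r"level=(error|critical)")

def log_alert_fires(log_line: str) -> bool:
    return ERROR_PATTERN.search(log_line) is not None

# Hypothetical metric-based alert rule: fires above an error-rate threshold.
def metric_alert_fires(error_rate_per_min: float, threshold: float = 10.0) -> bool:
    return error_rate_per_min > threshold

# These assertions are the regression tests the comment describes: if
# someone rewords the log line or changes the rule, they fail loudly
# instead of silently breaking an alert in prod.
assert log_alert_fires("ts=... level=critical msg=payment failed")
assert not log_alert_fires("ts=... level=info msg=ok")
assert metric_alert_fires(12.0)
assert not metric_alert_fires(3.0)
```

A full alert test bed would drive synthetic traffic through the real pipeline, but even this level of unit coverage catches the "log line changed, alert went quiet" failure mode.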
Biggest one: the sample rate is much higher (every log line), and this can cause problems if a service goes haywire and starts spewing logs everywhere. Logging pipelines tend to be very rigid as well, for various reasons. Metrics are easier to handle: you can scale back the sample rate, drop certain metrics, or spin up additional Prometheus instances.
The logging format becomes very rigid, and if the company goes multi-language this can be problematic, since different languages behave differently. Is this exception something we care about or not? So we throw more code at it in an attempt to get log-based alerting into a state that doesn't drive everyone crazy, when if we were just doing `rate(critical_errors[5m]) > 10` in Prometheus, we'd be all set!
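For readers unfamiliar with PromQL, `rate(critical_errors[5m]) > 10` computes the per-second increase of a counter over a 5-minute window and compares it to a threshold. A rough Python sketch of that semantics (sample data invented, and ignoring counter resets, which real Prometheus handles):

```python
def rate_per_second(samples):
    """Per-second rate of a monotonically increasing counter.

    samples: list of (unix_timestamp, counter_value), oldest first,
    covering the lookback window (e.g. the last 5 minutes).
    """
    (t0, v0) = samples[0]
    (t1, v1) = samples[-1]
    return (v1 - v0) / (t1 - t0)

# Counter went from 0 to 3600 over 300 seconds -> 12 errors/sec.
samples = [(0, 0.0), (300, 3600.0)]
alert = rate_per_second(samples) > 10
print(alert)  # True: 12/sec exceeds the threshold of 10
```

The appeal over log-based alerting is that this one expression works the same regardless of which language or service incremented the counter.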
Sentry isn’t really a full observability platform. It’s for error reporting only (annotated with traces and logs). It turns out that for most projects, this is sufficient. Can’t comment on the vendor lock-in part.
You can run your own Sentry server (or at least you could the last time I worked with it). But as others have noted, Sentry is not going to provide the same functionality as OTel.
The word "can" is doing a lot of work in your comment, based on the now horrific number of moving parts[1] and I think David has even said the self-hosting story isn't a priority for them. Also, don't overlook the license, if your shop is sensitive to non-FOSS licensing terms
Next and RSCs have become some of the most frustrating things I've worked with on the frontend. Dealing with FE is already annoying enough, but then you have to wrestle with Next's magic, with vendor lock-in to Vercel to top it off.
Team is trying out Tanstack router + vite this week. Excited to build a regular ass CSA.
It's fine if you know them well. The unclear boundaries between client and server components, and the unintentional complexity they bring, are just frustrating to work with. I will gladly take a CSA any day.
But don't let a random internet stranger deter you. If it works for you, go for it.
URCL is sending me down a rabbit hole. I haven't looked super deeply yet, but the most hilarious timeline would be one where an IR built for Minecraft becomes a viable compilation target for real languages.