Interesting, I hadn't heard of Highlight.io. Thanks for sharing your code!
How does your session replay feature compare to LogRocket (which I've always found quite impressive)? (Side note: I'm missing proper screenshots and/or videos on your website.)
Regarding your pricing: Am I seeing this correctly that, in contrast to LogRocket, you're not charging ridiculous amounts for team seats? This is what drove us (web shop, 30M user sessions / month, ~150 developers) away: For 100k sessions/month and 100 user seats they wanted $30k/year of which $21k were supposed to be just for the user seats. ( ._.)
Now, we didn't expect 100 of our developers to use the tool regularly (as in, every day), but since LogRocket told us accounts could not be shared or rotated, and we expected many devs to access LogRocket at least every now and then, that price was beyond unacceptable.
> How does your session replay feature compare to LogRocket
Forgot to reply to this. We've actually had quite a few customers switch away from LogRocket because of our support for things like Canvas recording (https://www.highlight.io/docs/general/product-features/sessi...). From a session replay perspective, we have feature parity, i.e. we report everything in the devtools and even capture network requests/responses. We also support backend error monitoring and pairing those errors with network requests (sketched below), which is something LogRocket doesn't do.
Overall, LogRocket is a very comparable tool, but as we mature, we're looking to become a more generalized monitoring tool (with logging, traces, etc.).
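To make the request/error pairing concrete: the general pattern (a simplified sketch, not our exact implementation; the x-request-id header name and the logging are made up for illustration) is to tag every request on the client with an ID that the backend echoes into its own error logs, so the two sides can be joined later.

    // Rough sketch of the general request-correlation pattern, not Highlight's actual protocol.
    // The "x-request-id" header name is made up for illustration.
    async function tracedFetch(input: RequestInfo, init: RequestInit = {}): Promise<Response> {
      // Tag every outgoing request with a unique ID the backend can echo into its error logs.
      const requestId = crypto.randomUUID();
      const headers = new Headers(init.headers);
      headers.set("x-request-id", requestId);

      const response = await fetch(input, { ...init, headers });

      if (!response.ok) {
        // A replay tool that records network activity can surface this ID next to the session,
        // and a backend error tagged with the same ID completes the pairing.
        console.error(`request ${requestId} failed with status ${response.status}`);
      }
      return response;
    }

    // Usage: const res = await tracedFetch("/api/checkout", { method: "POST" });

On the server side, a middleware would read the same header and attach it to any error it reports; that shared ID is what lets a replayed session line up with the backend failure.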
Thanks so much for your detailed response! This does sound very interesting!
If I may ask a few more questions, out of pure curiosity: What are, in your experience, the biggest challenges when it comes to frontend error monitoring from the point of view of a company / a developer using it? How can these be addressed? And what are good criteria to evaluate frontend monitoring services by these days?
(Obviously, you'll be biased here but that's alright. :))
EDIT: I should add: Obviously I've got some experience when it comes to frontend monitoring (though not a ton) and could – in theory – partly answer these questions myself. However, sometimes I wonder whether the problems we face in our day-to-day are the "correct" problems. (I.e. is everyone else struggling, too? How do other people solve their problems? How do other people choose their frontend monitoring solution?) Hence my curiosity.
I think we see "frontend monitoring" in two different buckets: "metrics-oriented" and "incident-oriented". Metrics are about things like Lighthouse scores, optimizing your frontend bundle, etc., which tend to only become relevant for larger companies, as only at that point do these things affect conversions to a significant degree. Incidents, however, are more about one-off issues happening in your web app, and that's what we're focusing on at highlight.io (at least right now). Examples include customer support issues, bugs that affect a user's experience, etc.
Both are important for larger companies, but for smaller companies, the only thing we see as relevant is frontend "incidents".
So to answer your question, I'd ask yourself: what types of issues are you trying to track down? Does that make sense?
My question was more about the error/incident monitoring part. For us, the biggest challenges in this area have been:
1) Grouping errors correctly. Often, identical errors are not grouped together because the browser messes up the stack trace, because error messages are not completely identical (browser-dependent), or because the stack trace (line numbers etc.) has changed slightly from one version to another. So we often need to group errors by hand (something like the rough sketch below).
2) Identifying what the impact of an error is. Is it critical? Is it enough if we look into it next week? Is it relevant at all? Add to that that our web shop sees a significant number of automated interactions by resellers, so errors caused by broken browser extensions, their bots doing crazy things, etc. have been quite common and often need to be ignored by hand.
I really hope session replays are going to help us with 2).
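To illustrate what grouping by hand amounts to for 1), the normalization we end up approximating looks roughly like this (a simplified sketch; the regexes, frame count, and extension check are illustrative, not what any particular tool does):

    // Rough sketch of the normalization we approximate by hand; the regexes and the
    // number of frames kept are illustrative, not taken from any particular tool.
    function fingerprint(error: Error): string {
      // Normalize the message: strip URLs, quoted values, and numbers that vary per user/browser.
      const message = (error.message ?? "")
        .replace(/https?:\/\/\S+/g, "<url>")
        .replace(/'[^']*'|"[^"]*"/g, "<value>")
        .replace(/\d+/g, "<n>")
        .toLowerCase();

      // Keep only function names from the top few frames; ignore file/line/column,
      // which shift between deploys and differ between browsers.
      const frames = (error.stack ?? "")
        .split("\n")
        .slice(1, 4)
        .map((line) => {
          const match = line.match(/at\s+([^\s(]+)\s*\(/); // V8-style frames; Firefox/Safari format differs
          return match ? match[1] : "<anonymous>";
        });

      return `${error.name}:${message}|${frames.join(">")}`;
    }

    // Errors whose frames point into a browser extension are usually noise for us.
    function isLikelyExtensionError(error: Error): boolean {
      return /(chrome-extension|moz-extension|safari-web-extension):\/\//.test(error.stack ?? "");
    }

Dropping anything that isLikelyExtensionError flags would also take care of a chunk of the broken-extension and reseller-bot noise from 2).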
In my experience, the biggest problem we have is that it's hard to figure out what exactly to log. There's sensitive information in the system that we are not supposed to store outside designated locations.
Then there is session storage itself. Do we log every request? Do we log even very large responses?
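To make that concrete, what I have in mind is a scrub step like the following before anything gets recorded (a simplified sketch; the key list and size cap are made up, not from any real system):

    // Sketch of a scrub step before a request/response body is handed to any recording tool.
    // The key list and the size cap are made up for illustration.
    const SENSITIVE_KEYS = new Set(["password", "token", "authorization", "ssn", "creditcard"]);
    const MAX_BODY_BYTES = 16 * 1024; // cap very large responses instead of storing them whole

    function scrub(value: unknown): unknown {
      if (Array.isArray(value)) return value.map(scrub);
      if (value !== null && typeof value === "object") {
        return Object.fromEntries(
          Object.entries(value as Record<string, unknown>).map(([key, val]) =>
            SENSITIVE_KEYS.has(key.toLowerCase()) ? [key, "[REDACTED]"] : [key, scrub(val)]
          )
        );
      }
      return value;
    }

    function prepareForRecording(rawBody: string): string {
      let serialized: string;
      try {
        serialized = JSON.stringify(scrub(JSON.parse(rawBody)));
      } catch {
        serialized = "[unparseable body omitted]"; // don't record bodies we can't inspect
      }
      return serialized.length > MAX_BODY_BYTES
        ? serialized.slice(0, MAX_BODY_BYTES) + "…[truncated]"
        : serialized;
    }

Even with a scrub step like this, we'd still have to decide per endpoint whether to record the body at all.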
Yep, we don't charge for seats at all, just usage. In fact, we're likely going to be adding a lower tier so that individual developers can use the tool (at about $50/month). With the direction we're going with the product, we see ourselves having several "products" long term (error monitoring, session replay, logging, etc.), so there's a lot of value in making smaller teams happy w/ pricing, even as they grow.
And if you ever give us a try, definitely send feedback our way; we have a discord community up at https://highlight.io/community.