Hacker News | stedolan's comments

The way to handle stack overflow gracefully on Linux is to check in your signal handler whether the faulting address was between the beginning of the stack and the current stack pointer. You can read the stack pointer register from the signal handler's third parameter.

This is also how the kernel grows the stack. When there's a fault, it compares the faulting address to the stack pointer register. This way, big frames don't confuse the automatic growing. (On Windows, by contrast, stack growth is detected using a single 4k guard page, so compilers must be careful with big frames and insert strided accesses.)


Thanks!

1. I think your example shows sed/awk's failings with JSON data :) I don't want to write a JSON parser by hand every time I want to pull a field out of an object, and parsing recursive structures with regexes is never a good plan.

2. It reads JSON items from stdin into memory, one at a time. So if the input is a single giant JSON value, it all goes into memory, but if it's a series of whitespace-separated values they'll be processed sequentially.

It's cat-friendly: if you do

    cat a | jq foo; cat b | jq foo
then it's the same as doing

    cat a b | jq foo


1. But those are general statements. Opinions. What I mean is give me a specific case. A specific example, a specific block of JSON and a specific task. Once I have that, then I can ask myself "Is this something I would ever need to do or that I have to do on a regular basis?"

Sometimes I need to write one-off filters. There is just no getting around it. I have to choose a utility that gives maximum flexibility and is not too verbose; I don't like to type. Lots of people like Perl and similar scripting languages for writing one-off filters. But Perl, _out of the box_, is not a line-by-line filter: unlike sed/awk, it needs more memory. That brings us to #2.

2. If I understand correctly, jq is reading the entire JSON block into memory. This is what separates your program and so many other sed/awk "replacements" from sed and awk, the programs they purport to "replace". sed/awk don't read entire files into memory, they operate line-by-line and use a reasonably small buffer. Any sed/awk "replacement" would have to match that functionality. Given that sed/awk don't read in an entire structure (JSON, XML, etc.) before processing it, they are ideal for memory constrained environments. (As long as you don't overfill their buffers, which rarely happens in my experience.)

Anyway, so far I like this program. Best JSON filter I've seen yet (because I can hack on the lexer and parser you provided).

Well done.


They exist now!


(author here) That bug was fixed a while ago, but I forgot to upload new binaries. Try again, they should be working now.


(author here) I haven't tried to build on a mac in a while, bison there seems to support fewer options (must be an older version).

For now, I've just checked in the autogenerated parser, so that bison won't have to run when you build master. git-pull and try again :)


Works great now, thank you :)

