Hacker News | seeyebe's comments

Didn't need the complexity of regexes for this, and C++17 strikes a good balance between features and availability.


You’re right. What I’m working on is meant to complement that, especially early on. The goal isn’t to replace judgment or deep dives, but to surface patterns that can guide where to look: areas with a lot of churn, untouched files, contributors active in a specific part of the code, etc.

It’s still early, but I’d like to evolve it to make those insights more actionable, maybe even link recent PRs, show how files evolved, or highlight ownership boundaries. Feedback like yours helps shape that, so thanks again.


Good point, and I appreciate the heads-up. Naming is tricky. I'll definitely consider renaming, or at least making the distinction clear in the README.


Thanks for the thoughtful question. The tool doesn’t aim to declare what’s “important,” but rather to highlight patterns, like hotspots, dormant code, or contributor trends, that can guide refactoring, onboarding, or even just curiosity. For some workflows (e.g. legacy cleanup, team handover, bug tracking), that context can be quite valuable.


> The tool doesn’t aim to declare what’s “important,” but rather to highlight patterns

I guess my question then is why should someone care about these patterns that are explicitly not what's "important"?

You say things like

"can guide refactoring, onboarding"

and

"For some workflows (e.g. legacy cleanup, team handover, bug tracking), that context can be quite valuable."

But those are vague hand-wavy statements that don't explain themselves. I don't understand why it would be valuable for those tasks, and I could use some explanation of what concrete problem is solved by looking at these details.


I tried the tool and would like to use it to track team KPIs such as 'Commit regularly in small increments' with the JSON export it provides. Or to track pairing and mobbing. Currently, we use a script that goes through the commits and searches for >1 authors.
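The pairing check described above (commits with more than one author) usually comes down to counting Git's conventional Co-authored-by trailers in each commit message. A minimal sketch of that check in C (illustrative only; the commenter's actual script and the tool's JSON schema are not shown here):

```c
#include <string.h>

/* Count "Co-authored-by:" trailers in a commit message body.
 * A count of one or more suggests the commit was paired/mobbed,
 * per the Git trailer convention used by GitHub and others. */
int count_coauthors(const char *msg) {
    const char *trailer = "Co-authored-by:";
    int n = 0;
    const char *p = msg;
    while ((p = strstr(p, trailer)) != NULL) {
        n++;
        p += strlen(trailer);
    }
    return n;
}
```

Piping full commit bodies (e.g. from `git log --format=%B`) through a check like this is one way to flag paired commits without a full parser.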


(I posted a version of this earlier, but this is a proper “Show HN” with updates and full context.)


Ah yeah, that changed recently: you can now use the TUI and it will fetch anyway.

Good shout on the cache folder. Right now it just lives locally (.gmap), so yes, adding it to .gitignore is the way to go for now. I’ve been thinking about better ways to handle it, maybe an XDG-compatible path or something configurable. If a better idea comes up, I’ll definitely switch to it.

Thanks for trying it out!


Thanks! Great question. Everything is an excellent tool, but it works differently: it uses a pre-built index, while rq scans the file system in real time.

To benchmark, you can use hyperfine like this:

hyperfine 'rq C:\ ".png" --glob' 'es.exe .png' 'fd --type f png'

This compares rq, Everything CLI (es.exe), and fd.


Windows search is slow and painful for developers. Even tools like Everything or PowerShell's Get-ChildItem can crawl on huge directories. I wanted a CLI tool that feels instant, so I built rq, an open-source file search utility in modern C17 that leverages parallel directory traversal.

rq is typically 3–7x faster than common alternatives, supports filters (size, date, type, regex, glob), and streams results as text or JSON. It’s designed for scripting and developer workflows.
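As a rough illustration of the glob filter mentioned above, here is a minimal matcher handling `*` and `?` (a sketch of the technique only; rq's actual implementation may differ and likely handles more cases):

```c
#include <stdbool.h>

/* Minimal glob matcher: '*' matches any run of characters (including
 * none), '?' matches exactly one character. Classic recursive approach:
 * on '*', try matching the rest of the pattern at every suffix of s. */
bool glob_match(const char *pat, const char *s) {
    if (*pat == '\0') return *s == '\0';
    if (*pat == '*') {
        for (;; s++) {
            if (glob_match(pat + 1, s)) return true;
            if (*s == '\0') return false;
        }
    }
    if (*s == '\0') return false;
    if (*pat == '?' || *pat == *s) return glob_match(pat + 1, s + 1);
    return false;
}
```

So a pattern like `*.png` is checked against each file name during traversal rather than against an index.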

Why not just use existing tools?

• Windows Explorer: slow, not scriptable
• PowerShell: Get-ChildItem is painfully slow
• Everything: fast, but GUI-first
• ripgrep: amazing, but focused on text content search

rq focuses on metadata-based search for files and directories on Windows.

Core challenges:

1. Handling Windows quirks like MAX_PATH and Unicode (rq uses \\?\ paths and UTF-8 internally)
2. Efficient parallel traversal without burning CPUs
3. Adding powerful filters without making everything slow
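The MAX_PATH workaround in point 1 amounts to prefixing absolute paths with `\\?\` before handing them to the wide-character Win32 calls, which lifts the 260-character limit. A minimal sketch of that step as plain C string handling (illustrative; not rq's actual code, which also has to convert to UTF-16):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Prepend the Windows extended-length prefix "\\?\" to an absolute
 * path so wide Win32 file APIs accept paths longer than MAX_PATH.
 * Returns a heap-allocated string the caller must free, or NULL on
 * allocation failure. */
char *to_extended_path(const char *path) {
    const char *prefix = "\\\\?\\";   /* the four characters \ \ ? \ */
    if (strncmp(path, prefix, 4) == 0) {   /* already prefixed: copy as-is */
        char *copy = malloc(strlen(path) + 1);
        if (copy) strcpy(copy, path);
        return copy;
    }
    size_t n = strlen(prefix) + strlen(path) + 1;
    char *out = malloc(n);
    if (out) snprintf(out, n, "%s%s", prefix, path);
    return out;
}
```

So `C:\data` becomes `\\?\C:\data` before any file API sees it.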

Design highlights:

• Custom thread pool built on the Windows Thread Pool API
• Directory traversal is fully parallel: each worker processes directories and queues subdirectories
• CLI filters: size, extension, date, regex, glob, file type
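The traversal design above is a work-queue pattern: pop a directory, process it, enqueue its subdirectories. A single-threaded sketch over a toy tree (the names, fixed-size queue, and adjacency table are all illustrative, not rq's data structures; rq runs several workers popping from a shared queue instead of this one loop):

```c
#include <stdio.h>

#define MAXQ 16

/* Toy directory tree as an adjacency table: each entry lists up to
 * three child indices into nodes[], with -1 meaning "no child". */
typedef struct {
    const char *name;
    int children[3];
} Dir;

static const Dir nodes[] = {
    {"C:\\",        {1, 2, -1}},
    {"C:\\src",     {3, -1, -1}},
    {"C:\\docs",    {-1, -1, -1}},
    {"C:\\src\\rq", {-1, -1, -1}},
};

/* Queue-driven traversal; returns the number of directories visited. */
int traverse(int root) {
    int queue[MAXQ], head = 0, tail = 0, visited = 0;
    queue[tail++] = root;
    while (head < tail) {              /* pop a directory ...            */
        int d = queue[head++];
        visited++;                     /* ... "process" it ...           */
        for (int i = 0; i < 3; i++)    /* ... and enqueue subdirectories */
            if (nodes[d].children[i] != -1)
                queue[tail++] = nodes[d].children[i];
    }
    return visited;
}
```

With multiple workers, the same structure needs only a lock (or lock-free queue) around push/pop, which is what makes the traversal parallelize cleanly.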

Example usage:

  rq D:\ "*.png" --glob --min 1M --after 2024-01-01 --threads 8

Performance benchmark (1.2M files on NVMe SSD, Windows 11, Ryzen 7):

• rq (8 threads): 3.2s
• Everything CLI: 6.8s
• PowerShell Get-ChildItem: ~40s

Lessons learned:

• C17 is still great for high-performance tools
• Windows APIs are powerful but painful for paths and Unicode
• Thread pools beat raw threads for scalability
• Avoid syscalls in the hot path for speed

Source: https://github.com/seeyebe/rq


Great suggestion, thanks! snub already skips common dirs like node_modules, .git, .svn, and __pycache__ by default (you can turn that off with --no-skip), but an explicit -x/--exclude flag for user-supplied patterns would be even more useful.


Yep! Binaries are up on GitHub under Releases. Let me know if you hit any issues.

https://github.com/seeyebe/snub/releases/tag/v0.3.0

