Hacker News | beechwood's comments

c'mon


I just built this yesterday, so 0. Want to be my first?


I assume this means the reviews under "Trusted by Listing Agents" are AI generated as well?


No, I had a couple of agents give it a spin. I am letting them use it for free in exchange for feedback.


My apologies, yes this is for the American market.


Hello HN, I’m a solo developer building tools for real estate workflows. I built OfferGridAI after watching listing agents repeatedly struggle with the same problem during hot markets.

When a property gets multiple offers, each offer usually comes in as a 10–20 page PDF. Under tight time pressure, agents have to manually dig through each document and rebuild a spreadsheet to compare things like price, net to seller, contingencies, financing, closing timeline, escalation clauses, etc. It’s not conceptually hard, but it’s stressful, time-consuming, and easy to miss details buried deep in the PDFs.

I wanted a way to make that moment less chaotic.

The idea: Upload multiple offer PDFs → extract the key terms → generate a clean, side-by-side comparison grid that’s easy to walk through with a seller.

Instead of just dumping text, the tool normalizes the information into comparable fields (price vs net, contingencies, financing strength, days to close) and adds a short summary highlighting tradeoffs (e.g. highest price vs highest certainty to close).
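The grid-building step described here could be sketched roughly like this (the schema and field names are hypothetical illustrations, not OfferGridAI's actual data model; real values would come from an upstream extraction step):

```python
from dataclasses import dataclass

# Hypothetical normalized schema for one offer.
@dataclass
class Offer:
    buyer: str
    price: float            # gross offer price
    seller_credits: float   # concessions that reduce net to seller
    financing: str          # "cash", "conventional", "FHA", ...
    days_to_close: int
    contingencies: list

    @property
    def net_to_seller(self) -> float:
        return self.price - self.seller_credits

def comparison_grid(offers):
    """Build a side-by-side grid: one column per offer, one row per field."""
    rows = [
        ("Price", lambda o: f"${o.price:,.0f}"),
        ("Net to seller", lambda o: f"${o.net_to_seller:,.0f}"),
        ("Financing", lambda o: o.financing),
        ("Days to close", lambda o: str(o.days_to_close)),
        ("Contingencies", lambda o: ", ".join(o.contingencies) or "none"),
    ]
    header = [""] + [o.buyer for o in offers]
    return [header] + [[label] + [fmt(o) for o in offers] for label, fmt in rows]

offers = [
    Offer("Buyer A", 510_000, 10_000, "FHA", 45, ["inspection", "appraisal"]),
    Offer("Buyer B", 495_000, 0, "cash", 14, []),
]
for row in comparison_grid(offers):
    print(" | ".join(f"{cell:<22}" for cell in row))
```

Once the terms are in comparable columns, the "highest price vs highest certainty" tradeoff (e.g. Buyer A's bigger gross vs Buyer B's cash offer with no contingencies) is visible at a glance.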

What it focuses on:

Structured extraction of common purchase-agreement terms

Normalizing offers so sellers can compare apples to apples

Surfacing risk factors (financing type, contingencies, timeline)

Producing a seller-ready grid rather than raw AI output

What it intentionally does not do:

Make decisions for agents or sellers

Replace professional judgment

Integrate with MLS or transaction management systems (at least for now)

The goal is to be a fast decision-support tool for a very specific, high-pressure moment.

I’m early and still refining the scope, especially around:

Which fields matter most in practice

How to communicate “risk” without over-claiming

How tolerant users are of “best effort” extraction vs perfection

I’d love feedback from anyone who’s worked with complex PDFs, document comparison, or decision-support tools under time pressure, or from anyone who’s built vertical SaaS in heavily regulated industries.

Happy to answer questions and learn from the community.


For covering the risk of mistakes I suggest considering ways of "visually quoting" the documents.

If the summary says "closing timeline: X" but there's an icon I can click that pops open an overlay with a visual cropped screenshot of that part of the original PDF - maybe even with a red circle around that detail - I can trust those summaries a whole lot more.

Gemini 2.5 has image bounding box and masking features that can help with this (sadly missing from Gemini 3.)


Oh, I didn’t know about the visual bounding boxes. This is super cool!

Quick question: are you talking about this feature?

https://docs.cloud.google.com/vertex-ai/generative-ai/docs/b...

Because it’s just using structured responses, shouldn’t it be doable with Gemini 3? (We are using Gemini 3 for some docs processing and its visual understanding is just incredible.)


No I'm talking about the image segmentation feature: https://simonwillison.net/2025/Apr/18/gemini-image-segmentat...

But the bounding box stuff might work well enough in Gemini 3 to handle this case as well.


Hmm, so that post also links back to segmentation done via structured outputs? (Though here it’s not even enforcing the structure.)

https://ai.google.dev/gemini-api/docs/image-understanding#se...


It's not supported by Gemini 3: https://ai.google.dev/gemini-api/docs/gemini-3#migrating_fro...

> Image segmentation: Image segmentation capabilities (returning pixel-level masks for objects) are not supported in Gemini 3 Pro or Gemini 3 Flash. For workloads requiring native image segmentation, we recommend continuing to utilize Gemini 2.5 Flash with thinking turned off or Gemini Robotics-ER 1.5.


Ok, gotcha. I think this is doable. Show the excerpt from the original document so the user has confidence the data is correct.

Thank you for the feedback.


> each offer usually comes in as a 10–20 page PDF.

When we sold our vacation home, we had multiple offers, but I seem to recall the offer letters being one-pagers. Does offer letter length vary by region?


Both houses we’ve bought have essentially been one-pagers for the core details.

The rest of the document has been a semi-standard contract (used by the real estate agent associations).


Yes, I think it varies by state.


Tangential question:

I've never owned a home and would like to try to buy one in the next year or two. There doesn't seem to be much in the way of APIs/software tools that let you analyze historical data and prices of listings in specific areas.

How can I get my hands on the right information to make sure I don't get ripped off?


Others mentioned county data. If you can get that, you can build something like I did for DC -- https://colab.research.google.com/drive/1Kep_9j_PN_SxX85PYHE...


In the US at least, your county should have an assessor that's responsible for tracking property values for tax purposes. How accessible the data is will probably vary from county to county, and there's no common API for that, but it's a start.
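Once you have such an export, a minimal stdlib-only analysis might look like this (the column names and sample rows are made up; every county's format will differ):

```python
import csv
import io
import statistics
from collections import defaultdict

# Hypothetical county-assessor export of recorded sales.
SAMPLE = """\
sale_date,neighborhood,sale_price
2022-03-01,Petworth,610000
2022-07-15,Petworth,655000
2023-05-02,Petworth,640000
2023-09-20,Petworth,700000
"""

def median_price_by_year(csv_text, neighborhood):
    """Group recorded sales by year and take the median, a crude but
    robust baseline for 'what do homes here actually sell for'."""
    by_year = defaultdict(list)
    for row in csv.DictReader(io.StringIO(csv_text)):
        if row["neighborhood"] == neighborhood:
            by_year[row["sale_date"][:4]].append(int(row["sale_price"]))
    return {year: statistics.median(p) for year, p in sorted(by_year.items())}

print(median_price_by_year(SAMPLE, "Petworth"))
```

Comparing a listing's asking price against a few years of these medians is one cheap sanity check against overpaying.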


Not sure, maybe check on reddit in one of the real estate subs.


This is a recruiter-facing tool that compares a candidate’s resume with their LinkedIn profile PDF (uploaded by the recruiter) and flags mismatches in job titles, dates, roles, skills, and timelines.

I would love some feedback.
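A toy sketch of the date-mismatch part of such a check (the field names are illustrative; a real tool would extract these values from the two PDFs):

```python
from datetime import date

def flag_date_mismatches(resume_jobs, linkedin_jobs, tolerance_days=31):
    """Flag roles whose start/end dates disagree by more than roughly a
    month, matching jobs on the employer name."""
    linkedin = {j["employer"]: j for j in linkedin_jobs}
    flags = []
    for job in resume_jobs:
        other = linkedin.get(job["employer"])
        if other is None:
            flags.append((job["employer"], "missing from LinkedIn"))
            continue
        for field in ("start", "end"):
            delta = abs((job[field] - other[field]).days)
            if delta > tolerance_days:
                flags.append((job["employer"], f"{field} dates differ by {delta} days"))
    return flags

resume = [{"employer": "Acme", "start": date(2019, 1, 1), "end": date(2022, 6, 1)}]
linkedin_profile = [{"employer": "Acme", "start": date(2019, 1, 15), "end": date(2021, 12, 1)}]
print(flag_date_mismatches(resume, linkedin_profile))
```

The tolerance window matters: small rounding differences ("Jan 2019" vs "Feb 2019") are normal, so flagging only multi-month gaps keeps false positives down.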


So it is going to flag my resume when I tailor it to a job?

My LinkedIn profile is generic. It gives a fairly broad overview of what I've done. When I apply to a job, I get specific about how my experience matches the posted job description. While dates will still line up, titles and skills won't. All my roles have worn multiple hats, and I'll adapt the title and description to highlight the hat that matches the job. It would be a false expectation for the resulting resume to fully match LinkedIn.


I respectfully disagree. Your LinkedIn job history should match what your resume says (generally speaking).


I don't think you can find older kids channels in YouTube kids.


You can share any video with kids from your main profile. But I haven't tried sharing adult-only or age-restricted content, so I don't know where that is allowed.


YouTube kids does not have Minecraft building channels, at least not ones that my kids watch. That's why I created this.


The idea is that you trust the channel that you add, so the assumption is that every video on that channel is safe.


YouTube kids doesn't have the channels they watch. AFAIK the creator has to designate their videos as kid friendly. Not all creators do that.


Not all channels my kids watch are on YouTube kids.

They watch Minecraft building channels.

