Great analysis, but I think you're missing the forest for the trees here. The real issue isn't about "understanding project history" - it's about signal-to-noise ratio, plain and simple.
`raw_anon_1111` nailed it with the context rot reference. After working with LLMs daily for the past year, I've found that garbage in = garbage out, consistently. It's like working with that brilliant junior dev who can't see the big picture through all the implementation details.
You wouldn't dump your entire git history into a code review, would you? So why would you feed it to an LLM? `ManlyBread`'s "poison the context" is exactly right. Every token spent on explaining dead ends or reverted commits is a token wasted.
The solution isn't more data - it's better data. What we need are tools that create concise, high-signal context packages. Architecture diagrams, clean code, and clear requirements. Not the messy sausage-making that got us there.
This isn't just theory - I cut API costs by 40% when I started curating prompts instead of just dumping everything into context. The attention window is precious - use it wisely.
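To make that concrete, here's roughly what my curation step looks like. This is a minimal sketch, not my exact tooling: the file list, the budget number, and the tiktoken encoding are stand-ins for whatever your model and pricing actually are.

```python
# Sketch of a "context package" builder: whole files only, no git history,
# and a hard token budget so bloat fails loudly instead of silently costing money.
from pathlib import Path

import tiktoken  # OpenAI's tokenizer library; substitute your model's counter

TOKEN_BUDGET = 4_000  # made-up number; set it to whatever your wallet allows
enc = tiktoken.get_encoding("cl100k_base")

def build_context(goal: str, paths: list[str]) -> str:
    """Assemble a high-signal package: one goal, a few hand-picked files."""
    parts = [f"Goal: {goal}"]
    for p in paths:
        parts.append(f"--- {p} ---\n{Path(p).read_text()}")
    package = "\n\n".join(parts)
    n = len(enc.encode(package))
    if n > TOKEN_BUDGET:
        raise ValueError(f"{n} tokens > budget of {TOKEN_BUDGET}: curate harder")
    return package
```

The specific numbers don't matter; what matters is that the budget check forces you to decide what actually earns a place in the window.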
This is exactly my experience. AI tools have completely transformed my relationship with code.
On one hand, they've eliminated the boilerplate I've hated for years. No more googling obscure syntax or writing the same utility functions for the nth time. There's a real joy in focusing purely on the creative aspects again.
But there's a catch. My role has shifted from writing code to managing the AI. It's like being the manager of a brilliant intern with zero memory. My day is now this constant cycle:
1. Crafting the perfect context window to prevent hallucinations
2. Engineering the right prompt
3. Context switching while waiting for responses
4. Painstakingly reviewing the output for subtle but critical errors
So has it killed my interest in programming? Partially. The craftsman's satisfaction of writing code has diminished. But it's sparked a new obsession: building better tooling. How do we reduce this cognitive load? How do we make AI-assisted development more structured and less chaotic?
I'm wondering if others feel the same - has your passion just moved up the abstraction stack like mine has?
I’ve been in the exact same spiral—new tool drops every Tuesday, I install it, it feels cool for twenty minutes, then I’m back to copy-pasting code like it’s 2019. The thing that finally broke the cycle for me was admitting that the tools weren’t the problem; my process was.
So I stopped reading launch posts and started eavesdropping. In practice that looks like:
- Keeping a muted Discord tab pinned for one competitor tool. I skim #feature-requests once a day, not for ideas, but to see which promises still aren't being kept.
- Sorting Reddit threads by “controversial.” The bitter, down-voted rants are where the real friction lives.
- On Show HN, I scroll straight to the third-level comments. That’s where the folks who actually tried the thing roast it in detail.
Those three habits give me a short, evergreen list of “this is still broken” problems. Everything else is noise.
From that list I distilled three rules I actually stick to:
1. *Context on purpose.* Before I ask the model for anything, I spend 90 seconds writing a tiny “context manifest”: file paths, line ranges, and a one-sentence goal (there's a sketch of this after the list). Sounds fussy, but it kills the “oh crap I forgot utils.py” loop.
2. *Tokens are cash.* I run with a token counter always visible. If I’m about to ship 1,200 tokens for a 3-line fix, I stop and pare it down like I’m on a 1990s data plan. The constraint hurts for a week, then it becomes a game.
3. *One-screen flow.* Editor left, prompt box right, diff viewer bottom. No browser tabs, no terminal hopping. Alt-tab was costing me more mental RAM than the actual coding.
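Here's a bare-bones version of rules 1 and 2 together. Everything in it is illustrative: the manifest shape, paths, and line ranges are made up, and tiktoken is just the counter I happen to use.

```python
# A minimal "context manifest": exactly which lines go to the model, and why.
from pathlib import Path

import tiktoken  # any token counter works; this is OpenAI's

enc = tiktoken.get_encoding("cl100k_base")

manifest = {
    "goal": "fix off-by-one in pagination",   # the one-sentence goal
    "files": [
        ("src/utils.py", 40, 75),             # (path, start, end), 1-indexed
        ("src/views.py", 1, 30),
    ],
}

def render(m: dict) -> str:
    """Turn a manifest into the exact text that gets sent."""
    parts = [f"Goal: {m['goal']}"]
    for path, start, end in m["files"]:
        lines = Path(path).read_text().splitlines()[start - 1 : end]
        parts.append(f"# {path}:{start}-{end}\n" + "\n".join(lines))
    return "\n\n".join(parts)

prompt = render(manifest)
print(f"{len(enc.encode(prompt))} tokens")  # rule 2: keep the count in your face
```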
It’s not sexy, and it definitely isn’t “AI-native,” but it’s the first workflow that hasn’t crumbled after a month. Maybe it’ll help you too.
Same here bro, confirming this from Jakarta. It's a mess. My group chats were blowing up yesterday when WARP and Twitter suddenly went down. Felt like they pulled the plug right when everyone needed info on the protests.
Be very careful with the random free VPNs being shared around on WhatsApp right now; many of them could be honeypots.
Like others have said, the most reliable long-term fix is rolling your own. I've had a cheap VPS in Singapore for years for moments just like this. The latency is low and it's been rock solid. I'm using v2ray with a simple setup, and it's been working fine because it just looks like normal web traffic to my ISP (Indihome). The guides posted in the top comment are excellent starting points.
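For anyone going this route, a client config in the spirit of what I'm running looks roughly like this. Treat it as a sketch: the address, UUID, and WebSocket path are placeholders you'd replace with your own, and your server-side config has to mirror it. (v2ray's config loader tolerates the // comments.)

```json
{
  "inbounds": [
    // local SOCKS proxy that your browser and apps point at
    { "listen": "127.0.0.1", "port": 1080, "protocol": "socks", "settings": { "udp": true } }
  ],
  "outbounds": [
    {
      // VMess over WebSocket+TLS: to the ISP it looks like an ordinary HTTPS connection
      "protocol": "vmess",
      "settings": {
        "vnext": [{
          "address": "your-vps.example.com",
          "port": 443,
          "users": [{ "id": "REPLACE-WITH-YOUR-UUID", "security": "auto" }]
        }]
      },
      "streamSettings": {
        "network": "ws",
        "security": "tls",
        "wsSettings": { "path": "/ws" }
      }
    }
  ]
}
```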
For my less technical friends, I've been helping them set up ProtonVPN. Their 'Stealth' protocol seems to be holding up for now, but who knows for how long. The hardest part is getting this info to people who aren't tech-savvy.