Curious if anyone can comment on how Cursor compares to the current version of Github Copilot.
I've been using Cursor for many months now. The biggest feature it had that I wanted when I first used it was searching your own repository. It indexes all of your code in a vector DB so that it can then use RAG to make suggestions against your own codebase. That was the "killer feature" for me - I don't get a ton of value from inline code completions, but I get LOTS of value if I can ask "Is there a utility function in this repo that does XYZ?" when working in a large codebase with lots of developers.
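For anyone unfamiliar with that approach, here's a toy sketch of the general index-then-retrieve idea. This is not Cursor's actual pipeline; embed() is just a bag-of-words stand-in for a real embedding model, and the chunks are made up:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: a bag-of-words "vector".
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# "Index" each chunk of the codebase; the real tools chunk source files and
# store proper code embeddings in a vector DB.
chunks = {
    "utils/strings.py:slugify": "def slugify(title) convert a title into a url safe slug",
    "utils/dates.py:parse_iso": "def parse_iso(s) parse an ISO 8601 timestamp string",
}
index = {name: embed(text) for name, text in chunks.items()}

# At question time, embed the question and pull the closest chunks into the
# prompt along with the question (the RAG part).
question = "is there a utility function that makes a url safe slug?"
best = max(index, key=lambda name: cosine(index[name], embed(question)))
print(best)  # -> utils/strings.py:slugify
```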
Does anyone know if Copilot offers this now? I thought I had read a while ago that they added it, but a quick search just now brought up some relatively recent posts that said they still don't have it.
Not that many people use Copilot Chat, anecdotally. We've focused on codebase chat when building Cody (https://cody.dev), since we can use a lot of the code search stuff we've built before. It's hard to build, esp. cross-repo where you can't just rely on what's accessible to your editor. You can try it on the web if you sign up and then go to any repository's sidebar on https://sourcegraph.com/search, or get it for VS Code/JetBrains/etc.
What about a URL-defined Ollama endpoint? Personally, I run open-webui on an outward-facing Pi on its own VLAN that connects to an internal machine running Ollama. This is so that there is a fallback to the OpenAI API if that machine is down.
Yeah, use the settings below in your VS Code settings to point at a different Ollama URL (here it's localhost:11434, but change apiEndpoint, model, and tokens to whatever you need).
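Something roughly like this; I'm writing it from memory, so treat the cody.dev.models setting name, the "ollama" provider value, and the model/tokens values as approximate and double-check them against the Cody docs:

```json
{
  "cody.dev.models": [
    {
      "provider": "ollama",
      "model": "llama3:instruct",
      "tokens": 8192,
      "apiEndpoint": "http://localhost:11434"
    }
  ]
}
```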
We should add an easier way to just change the Ollama URL from localhost, so you can see all the Ollama models listed as you can when it's available on localhost. Added to our TODO list!
When I tried Cody around half a year ago it only used Ollama for tab completion while chat still used proprietary APIs (or the other way around). Did that change by now, so you can prevent any API calls to third parties in the Cody config?
Yes, Cody can use Ollama for both chat and autocomplete. See https://sourcegraph.com/docs/cody/clients/install-vscode#sup.... This lets you use Cody fully offline, but it doesn't /prevent/ API calls to third parties; you are still able to select online models like Claude 3.5 Sonnet.
I have a WIP PR right now (like literally coding on it right now) making Cody support strict offline mode better (i.e., not even showing online models if you choose to be offline): https://github.com/sourcegraph/cody/pull/5221.
I find Copilot most useful as a kind of very good autocomplete. One way to think about it is: if I know exactly what the next line should be (e.g. I've done X transformation on dataset A, and I need to do the same transformation on dataset B...), then Copilot excels at filling that new line in for me.
I never ask it to create new ideas from scratch; it just isn't good at that, let alone designing interfaces or figuring out the right data structure.
For example, a common pattern might be typing: "if (x := load_data(...)) is None:", then Copilot will create a reasonable next line given context (in some parts of the codebase, return None, in other parts, raise ValueError; it sees the type annotation of the function so it usually knows which one).
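To make that concrete, here's roughly what that looks like; load_data and Dataset are hypothetical stand-ins, just to show the shape of the pattern and the two typical completions:

```python
from typing import Optional

class Dataset:
    """Hypothetical placeholder type, just for the sketch."""

def load_data(path: str) -> Optional[Dataset]:
    """Hypothetical loader; returns None when nothing is found."""
    return None

def get_dataset(path: str) -> Dataset:
    # You type the condition...
    if (x := load_data(path)) is None:
        # ...and Copilot fills in the next line; with a non-Optional
        # return annotation it tends to suggest raising.
        raise ValueError(f"no data at {path}")
    return x

def maybe_get_dataset(path: str) -> Optional[Dataset]:
    if (x := load_data(path)) is None:
        # With an Optional return annotation it tends to suggest returning None.
        return None
    return x
```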
>>But then way more often it’s like having someone looking over my shoulder, telling me what they think I want to do, disrupting my thoughts.
The right way to use AI agents for coding is to write the solution in plain English and let the AI implement it for you. Note that when you do this, you can't expect it to write full functions for you. You have to dictate small, easy-to-implement steps.
Split this string based on spaces and pick the 3rd element
Iterate over this and remove all new lines
This sort of thing - see the sketch below.
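Roughly what those two atomic instructions turn into (sample and lines are made-up inputs, just to show the granularity):

```python
sample = "alpha beta gamma delta"
# "Split this string based on spaces and pick the 3rd element"
third = sample.split(" ")[2]  # -> "gamma"

lines = ["first\n", "second\n", "third"]
# "Iterate over this and remove all new lines"
cleaned = [line.replace("\n", "") for line in lines]  # -> ["first", "second", "third"]

print(third, cleaned)
```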
The more atomic you can make each instruction, the better.
Don't outsource your thinking to AI.
The way to go about AI is 'Here is the solution, write the code for it'. Not 'Here is the problem, invent a solution for it'.
You must be doing the thinking; the AI must be doing the manual work of writing code. Don't let the AI do the thinking while you fix its bugs.
Depends on how easily and quickly you can touch type, though. I agree with you that not many people are good at putting their thoughts into English. In fact, even in the Googling era, the bulk of people never really benefitted from it all that much, because composing small, atomic questions and building upwards (the Socratic method) is not something that comes naturally to a lot of people.
This is true of programmers too. I realised that a massive chunk of programmers didn't like, or even get, books like The Little Schemer, which quite literally deals with this sort of workflow.
If you are one of those people, you are especially cursed in the AI coding era. People who can compose small, atomic questions and build upwards stand to benefit disproportionately in these times. This is a workflow you have to get used to.
On a tangent, I was watching some YouTube videos on how to think like a chess grandmaster. Sure, there is a lot of theory and a knowledge base they draw on. But what stands out is that they have a strong internal monologue, and in many videos they speak it aloud: how do I want to deal with this piece, what if I move that piece, from what angles can the opponent attack this piece, can X, Y, or Z happen in the next few moves, etc.
This sort of thinking is called progressive thinking. It involves making the most minimal atomic change to a thing, building an outcome tree out of it, and then judging the best way forward given all the options.
This strong internal monologue is something I have seen some legendary coders have as well. Like in English.
I guess some things you just have to learn, adapt to, and improve at. It's a new way/workflow of working, and I find it liberating to learn new methods of working and thinking.
You just need a key binding to turn it on when you want it. Obviously it would be very difficult to write thoughtful code if you leave it on all the time.
Moving from a push to a pull based approach is an absolute must. It's the difference between having a novice dev constantly in your ear telling you what they think should go next, vs. an assistant you trust to fill in scoped-out, well-defined logic for you.
But it is a junior dev and changing the UX won't fix that. That's why the proactive, non-blocking, easy-to-dismiss design is so effective. The cost of failure is extremely low. The moment you switch to a UX where you have to push a button and wait, the acceptable failure rate goes way down.
I don’t want to need to read through a bunch of junior dev trash when I’m going about my day to day tasks. If there’s something I am working on that I can clearly define well enough for a novice to be able to implement it, great, if not, I’ll do it myself.
Back in the free Copilot beta, I disabled automatic completions because they were like intrusive thoughts. Even then, with on-demand completion I felt my code-writing skills withering away as I'd instead ask Copilot to generate blocks of code I didn't have to think through myself.
Yeah, I'm not seeing how a souped-up VS Code plugin needs, or is going to return, a $60M investment, TBH. Not a slight against the product, it's decent, but I think that round is about an order of magnitude too large for the scale of the app.
Teams like Cursor remind me of the importance of excellent execution.
To many, a product like this was almost obvious, especially after GitHub Copilot gave us a glimpse of what an AI-powered coding experience could feel like. And there have been many attempts to do this right. But this team got the hundreds, if not thousands, of product/engineering micro-decisions right, and they seem to move quite fast.
I stopped using Cursor months ago and don't think I will be coming back. For one, the software drains my laptop battery. I like the ability to reference files with @filename, but claude.ai and aider have replaced the need for Cursor.
Real big fan of Continue for VS Code with Claude; @ reference files or add snippets for specific context, and you can disable indexing + tab autocomplete to save on resources
Congrats to the Cursor team; their offering just seemed too closed-box for me. Continue, being OSS, was a nice breath of fresh air.
For a different user experience from the tools in the list, I am building 16x Prompt, a standalone desktop app aimed at streamlining a semi-automated AI coding workflow.
It is not as well known as the others, but I do have close to 2000 users.
ALSO - every one of these tools needs to state explicitly, in large letters: "This only works if you have a paid account for whichever AI you want to use."
So the prompt wrapper is "free" - but you can't do much with free these days, and you can't seem to use these wrappers without a paid API token for an AI.
---
Make it work natively with InstantDB's InstaML/InstaQL.
Both ChatGPT and Claude struggle with the actual correct formatting of schemas for InstantDB - but if your thingy can wrangle them into doing better... And throw in this guy's idea of using GitHub Gists as a scratchpad for the AI to lean on in your workflow, outside of the utterly broken Projects/Folders in ChatGPT and Claude.
(I'll give yours a try, but I've been heavily iterating with Claude on Python, FastAPI, Instant, Postgres, etc. - and it runs out of context so damn fast (Pro account, even), and the memory in project files and artifacts is useless.)
So on your free account, 10 prompts a day is not enough... though I'll give it a real try before I judge :-)
My guess is they'll want to become the AI-first developer tools company and branch out into all kinds of different dev products (CI/CD, testing, etc.) à la GitHub. Obviously, enterprise is where the real money is.
My napkin math is your valuation is usually 5 x raise amount and 10x annual revenue.
So valuation is 5*60m = 300m
And expected annual revenue is 30m.
At $40/month, that works out to roughly 62,500 paying subscribers. So I am guessing their pitch is that with the VC money they will get to that number and well beyond before the next funding round.
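Spelled out (the 5x and 10x multiples are just rules of thumb, not anything Cursor has published):

```python
raise_amount = 60_000_000
valuation = 5 * raise_amount            # ~$300M by the "5x the raise" heuristic
implied_arr = valuation / 10            # ~$30M ARR at a 10x revenue multiple
paying_users = implied_arr / (40 * 12)  # ~62,500 subscribers at $40/month
print(f"${valuation/1e6:.0f}M valuation, ${implied_arr/1e6:.0f}M ARR, ~{paying_users:,.0f} paying users")
```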
Reality is more like the founders got to cash out "something" for their troubles and their ability to sell the dream to others. Who knows, maybe they will hit it out of the park before the next round.
The linked post said they got funding from Andreessen Horowitz, Thrive Capital, OpenAI, Jeff Dean, Noam Brown, and the founders of Stripe, GitHub, Ramp, Perplexity, and OpenAI.
But there are many others like Accel and Caffeinated Capital
So the funding is usually for "future" revenue, i.e. to hit this goal. IMO, if they had this revenue they'd be aiming for 10x that, with a much higher valuation. VC funding is all about a growth story. If you can keep selling the vision, you never have to worry about revenues or hitting them. There's even a name for such schemes!
They either run out of funding (and so are forced into layoffs, liquidation, etc.), or they have to find someone who can bail them out on unfavorable terms (read: dilution), or they have to be really, really good at storytelling.
There are 23 million SWEs today; presumably they argue that number will climb, both due to the growth of software in general and due to their tool lowering the bar to writing code.
23 million * $40 * 12 => ~$11B per year in revenue; at a ~10-12x revenue multiple, the company could grow to be worth $100B+.
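Written out, with the same assumptions (every SWE paying full price, a ~10x revenue multiple):

```python
swes = 23_000_000
arpu_per_year = 40 * 12                  # $480/yr if every SWE paid full price
revenue = swes * arpu_per_year           # ~$11B per year
enterprise_value = revenue * 10          # ~$110B at a 10x revenue multiple
print(f"${revenue/1e9:.1f}B revenue -> ${enterprise_value/1e9:.0f}B enterprise value")
```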
So assuming you feel like you're getting a fair price for it today, then there's plenty of room for your money to grow.
I don't know how many developers want to give up their bespoke IDEs. Moving away from JetBrains products (even given all their warts) would be a hard sell for me. I also don't know how many developers even pay for an IDE.
So you're assuming every single developer will pay $40 a month for Cursor??? And I say this as someone who does pay $20 a month for Cursor.
I also think it's quite the stretch to say the number of SWEs will grow meaningfully due to AI, especially when it seems like half the time you hear a pitch about how it will make software developers obsolete.
To be clear, I can easily understand how $60 million is a good investment, but I chuckle at the $100B+ valuation.
Yeah of course they’re fantasy numbers. It’s called a TAM and it’s how VCs get comfortable with putting $60 million into a company like Cursor. I made no claim as to what the company is actually worth.
And right, some people claim AI will reduce the number of SWEs. If you're Cursor, you're probably arguing the opposite.
I was turned off by this too, but the fact that it's compatible with all VSCode extensions makes migrating a non-event. fwiw, here's their explanation [1]:
> Why Not an Extension?
> As a standalone application, Cursor has more control over the UI of the editor, enabling greater AI integration. Some of our features, like Cursor Tab and CMD-K, are not possible as plugins to existing coding environments.
Considering Zed introduced AI features and it's so much faster than Cursor or VS Code, I think the competition will be much harder than what they sold to investors.
Cursor had the vector indexing going on but it doesn't work very well in my experience. Oftentimes it doesn't find stuff and I have to manually search anyway.
Cursor is still pretty good as an editor; I've been using it for a while and even the free plan is pretty good value. LLMs are still pretty bad at coding (Claude, GPT, it doesn't matter), so even cursor-small is almost always enough for the kind of task you would offload to an LLM.
Zed's approach of building the context manually with files and text and then asking for stuff is way more direct and less "magic". It works consistently.
I think Cursor is the only player that forked VS Code. I've been using Cursor for a week and I really like it. It's able to autocomplete/correct code in multiple places at the same time; other Copilot-style extensions in VS Code were not able to do this. But I assume Microsoft will quickly add this feature to VS Code and others will catch up soon. I don't know if it's a true advantage to stay a fork of VS Code.
The fork is what put me off, so I use Codeium, which seems to do an excellent job. It has saved me a lot of time, especially on "convert from React to HTML" or "use this field list to define a table" type tasks - the more autocomplete-y end of things rather than the intelligent end.
The practical issue with a fork for me is that .NET won't debug on a fork, but that is probably a niche issue overall.
I was about to try it, but it seems they're not selling me just an extension for my existing work environment; it's a whole new code editor. I don't think it has all the features I need from my current IntelliJ-based editors (Android Studio, PhpStorm and PyCharm).
I wonder how many people are willing to give up their current IDE just for its AI code suggestions.