At $DAYJOBSTARTUP, we do hackathons twice a year. At the most recent one, an engineer sat down with a designer and set him up with Cursor. The designer looked like a kid in a candy shop; he was so excited to be able to rapidly prototype with natural language instead of clicking around in Figma for hours.

A month later, he comes back to the engineering team with a 10k-line "index.html" file asking "How do I hand this off?" (he was definitely smart enough to know that just passing that file to us was not gonna fly). We decided to copy the designs into Figma for the handoff, both because that was how we already did design/engineering handoffs and because creating high-fidelity designs (e.g., "this color from our design system" and "this standard spacing value") isn't in Cursor's wheelhouse.

We're probably going to spend more time working on a better setup for him. At the very least he should be working against our codebase and components and colors and design tokens. But I'm very curious to see where it goes from here.


This is doable. I set something similar up at scale. Figma Variables/Tokens -> Token Studio (Style Dictionary, essentially) -> Our Component library <-> Storybook -> MCP

Atomic component system, good page level template coverage, great prop support.
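
For anyone wanting to replicate the middle of that pipeline: Token Studio exports the Figma variables as JSON, and Style Dictionary turns them into whatever the component library consumes. A minimal sketch of the two files involved (token names and paths are made up, not our actual setup):

    # tokens/color.json, as exported from Token Studio
    { "color": { "brand": { "primary": { "value": "#0055ff", "type": "color" } } } }

    # config.json for Style Dictionary
    {
      "source": ["tokens/**/*.json"],
      "platforms": {
        "css": {
          "transformGroup": "css",
          "buildPath": "dist/css/",
          "files": [{ "destination": "variables.css", "format": "css/variables" }]
        }
      }
    }

Then `style-dictionary build --config config.json` emits a variables.css the component library imports, and Storybook just renders components against those variables.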

Neither the LLM consumer nor the designer is allowed to write directly back to the library. Those changes need to go through a governance process to prevent drift, as there are multiple product teams consuming it and we still don't have a reliable way to make sure Figma and the component library are always 1:1. Maybe that's feasible in a company with a single designer.

So while this setup is arguably more fleshed out than what you have, it still requires multiple humans in the loop.

Sure, there are a billion Medium articles about how it can be done with tokens, but it's much messier at any kind of scale.


Haha I did the same with our product manager and designers. One of our designers just got her first (tiny) PR merged this week.

I am somewhat fearful of having created a monster, but at the same time I think it’s good to knock down barriers to knowledge and learning. All else equal, I think a designer or PM with some exposure to code is better than one without.

What I’m fearful of are 10k line PRs and pressure from product to “just ship it.” Past a certain threshold a PR will be really tough to review, to the point that it would be preferable for an engineer to have handled it from the start.

I think we will need deeper integration between figma and the codebase/storybook. Shared color palette definitions, integration of storybook components with figma components, stuff like that.

The Figma MCP that you can use to hand over to your agent and simply say “implement this” is already pretty impressive.
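
For anyone who hasn't tried it: the Dev Mode MCP server runs inside the Figma desktop app, and hooking an agent up to it is a single client config entry. A sketch, with the caveat that the local URL is an assumption based on what Figma's docs have listed (it has been http://127.0.0.1:3845/sse; check the current docs if it doesn't connect):

    {
      "mcpServers": {
        "figma": { "url": "http://127.0.0.1:3845/sse" }
      }
    }

Select a frame in Figma, tell the agent “implement this”, and it pulls the node structure and tokens through the server.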


Why not just give him a branch? I've found underestimating "non-technical" people a folly in the AI era. They can easily boot up projects with agentic AI assistance.


Same here. Really curious where this leads. Firstly, I feel that the speed and complexity increase that comes with agents can only be dealt with by people adept both in the domain and in AI tools generally.

Basically, to really leverage this, I think just knowing Figma perfectly from before, or being a newbie who knows Claude Code perfectly, isn't gonna cut it.

Building things is fast, but building something that's gonna stick is gonna be more difficult now that you have so many options.

The game has changed.


packets udp bar walk a into


ok this might sound crazy, but at first I completely missed the joke because my brain automatically rearranged the entire sentence into "udp packets walked into a bar". I wonder how that works psychologically.


You don't read a word at a time... every typical line of text is taken in with 2 or 3 eye focal points and the meaning of each group of words is determined in a single chunk. https://en.wikipedia.org/wiki/Saccade#Reading


> I completely missed the joke because my brain automatically rearranged the entire sentence into "udp packets walked into a bar".

Same here.


2 FORTH PROGRAMMERS BAR INTO WALK


Many of these package managers get invoked countless times per day (e.g., in CI to prepare an environment and run tests, while spinning up new dev/AI agent environments, etc).


Does the package manager take a significant amount of time compared to setting up containers, running tests, etc.? (Genuine question, I’m on holiday and can’t look up real stats for myself right now)


Anecdotally, unless I'm doing something really dumb in my Dockerfile (recently I found a recursive `chown` that was taking 20m+ to finish, grr), installing dependencies is the longest step of the build. It's also the most failure-prone (due to transient network issues).
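
In case it saves someone the same debugging session: both problems usually come down to layer ordering plus `COPY --chown` instead of a recursive chown after the fact. A rough sketch (paths and user names are illustrative):

    # Dependency layer: only invalidated when the lockfile changes,
    # so most builds reuse the cached gems
    COPY Gemfile Gemfile.lock ./
    RUN bundle install

    # --chown sets ownership at copy time, avoiding a separate
    # recursive `chown` pass over the whole tree afterwards
    COPY --chown=app:app . /srv/app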


Yes, but if your CI isn't terrible, you have the dependencies cached, so that subsequent runs are almost instant, and more importantly, you don't have a hard dependency on a third-party service.

The reason for speeding up bundler isn't CI, it's newcomer experience. `bundle install` is the overwhelming majority of the duration of `rails new`.


> Yes, but if your CI isn't terrible, you have the dependencies cached, so that subsequent runs are almost instant, and more importantly, you don't have a hard dependency on a third-party service.

I’d wager the majority of CI usage fits your bill of “terrible”. No provider offers OOTB caching in my experience, and I’ve worked with several: in-house systems, Jenkins, TeamCity, GHA, Buildkite.


GHA with the `setup-ruby` action will cache gems.

Buildkite can be used in tons of different ways, but it's common to use it with docker and build a docker image with a layer dedicated to the gems (e.g. COPY Gemfile Gemfile.lock ./; RUN bundle install), effectively caching dependencies.
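
Concretely, the GHA side is a single documented option on the action, nothing custom (sketch of the relevant steps):

    steps:
      - uses: actions/checkout@v4
      - uses: ruby/setup-ruby@v1
        with:
          # runs `bundle install` and caches gems, keyed on Gemfile.lock
          bundler-cache: true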


> GHA with the `setup-ruby` action will cache gems.

Caching is a great word - it only means what we want it to mean. My experience with GHA default caches is that it’s absolutely dog slow.

> Buildkite can be used in tons of different ways, but it's common to use it with docker and build a docker image with a layer dedicated to the gems (e.g. COPY Gemfile Gemfile.lock; RUN bundle install), effectively caching dependencies.

The only way docker caching works is if you have a persistent host. That’s certainly not most setups. It can be done, but if you have that, running in docker doesn’t gain you much at all; you’d see the same caching speed-up if you just ran it on the host machine directly.


> My experience with GHA default caches is that it’s absolutely dog slow.

GHA is definitely far from the best, but it works: e.g., 1.4 seconds to restore 27 dependencies https://github.com/redis-rb/redis-client/actions/runs/205191...

> The only way docker caching works is if you have a persistent host.

You can pull the cache when the build host spawns, but yes, if you want to build efficiently, you can't use ephemeral builders.

But overall that discussion isn't very interesting because Buildkite is more a kit to build a CI than a CI, so it's on you to figure out caching.

So I'll just reiterate my main point: a CI system must provide a workable caching mechanism if it wants to be both snappy and reliable.

I've worked for over a decade on one of the biggest Rails applications in existence, and restoring the 800-ish gems from cache was a matter of a handful of seconds. And when rubygems.org had to yank a critical gem for copyright reasons [0], we continued building and shipping without disruption while other companies with bad CIs were all sitting ducks for multiple days.

[0] https://github.com/rails/marcel/issues/23


> So I'll just reiterate my main point: a CI system must provide a workable caching mechanism if it wants to be both snappy and reliable.

The problem is that none of the providers really do this out of the box. GHA kind of does it, but unless you run the runners yourself you’re still pulling it from somewhere remote.

> I've worked for over a decade on one of the biggest Rails application in existence, and restoring the 800ish gems from cache was a matter of a handful of seconds.

I kind of suspected as much - the vast majority of orgs don’t have a team of people who can run that kind of system. Most places with 10-20 devs (which was roughly the size of the team that ran the builds at our last org) have some sort of script running on cheap-as-hell runners; they’re not running mirrors or baking base images on dependency changes.


> none of the providers really do this out of the box

CircleCI does. And I'm sure many others.


> My experience with GHA default caches is that it’s absolutely dog slow.

For reference, oven-sh/setup-bun opted to install dependencies from scratch over using GHA caching since the latter was somehow slower.

https://github.com/oven-sh/setup-bun/issues/14#issuecomment-...


This is what I came to say. We pre-cache dependencies into an approved baseline image, and we cache approved and scanned dependencies locally with Nexus and Lifecycle.


Starbucks is airport drink/food for me. Being able to order as I enter the TSA line and pick it up on the way to the gate is unmatched convenience, and the coffee options at airports generally aren't great.


Of course! It helped your friend realize what kind of person you are and hopefully spurred them to find better friends who possess actual human empathy.


Making an HTTP request and dealing with JSON data is a weed-out question at best. Not sure if you are interpreting the grandparent comment as actually having them write a JSON parser, but I don't think that's what they meant.


I either had exactly that come up in an interview recently, or it wasn't clear to me that I was allowed to use encoding/json to parse the JSON and then work with the result. I happened to bomb that part of the interview spectacularly, because I haven't written a parser for a complex structure in years; every language I've used for such tasks ships with proper, optimized libraries for it.
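
For what it's worth, the encoding/json version of the whole exercise is only a handful of lines, which is exactly why it works as a weed-out question (the endpoint and fields below are hypothetical):

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
    )

    // User mirrors only the fields we care about; unknown fields are ignored.
    type User struct {
        Name  string `json:"name"`
        Email string `json:"email"`
    }

    func main() {
        resp, err := http.Get("https://example.com/api/user") // hypothetical endpoint
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        var u User
        if err := json.NewDecoder(resp.Body).Decode(&u); err != nil {
            panic(err)
        }
        fmt.Println(u.Name, u.Email)
    }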


I don’t want to disparage your work, OP, but micro-optimizing text files is often not very effective, especially after gzip. I’m curious to see if there’s any noticeable difference for a representative gzipped CSS file (since these assets are almost always served from a CDN with compression).
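
If you want to check that on your own assets, comparing the gzipped sizes is a one-liner (filenames are placeholders; -6 approximates a typical CDN compression level):

    gzip -6 -c original.css | wc -c
    gzip -6 -c optimized.css | wc -c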


Thanks! I hate it!


I highly, highly, highly recommend the book “Two Wheels Good: The History and Mystery of the Bicycle.” It explores the bicycle's origins, its social implications across cultures (it often became a symbol of perversion due to its association with women’s liberation), and even the modern-day e-bike movement. 12/10 book, very well written too.


Just ordered a copy on your suggestion. Looking forward to reading it!


Thanks for the recommendation, interesting one!


I'll have to check this out


I'll second this.


Based on the article, it sounds like this doesn't activate a device's microphone at all. If it did, most (all?) browsers would give a pop-up requesting permission for that.

