Hacker News | mawildoer's comments

I wanted to set up a Slack bot to manage DoorDash orders for our company.

Starting with a PyPI search for "doordash client": https://pypi.org/search/?q=doordash+client I was excited to find 5 recently published packages. As I usually do, I checked them out via GitHub... but hit a dead link.

A quick inspection of the package clearly shows that a random server handles all the requests it makes -- including your PII, address, and credit card info. 99% chance this is malware.

The world's moving fast these days, and AI is making it easier for everyone - even the bad actors - to make what looks like polished OSS.

My typical workflow for selecting packages is:

1. Check out their GitHub - social proof means a lot to me

2. Clone the repo, and ask `claude`, `cursor` or whichever agent I'm using at the time for a quick audit

3. If I'm putting my own credentials or a PAT in there, review it myself at the top level too
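As a rough sketch of what step 2's audit can catch mechanically (the helper name and the allowlist here are illustrative, not any real tool's API), a first pass is just scanning the package source for hard-coded hosts a legitimate client has no business talking to:

```python
import re

# Assumption: the only host a real DoorDash client should contact.
ALLOWED_HOSTS = {"api.doordash.com"}

def suspicious_hosts(source: str, allowed=ALLOWED_HOSTS) -> list[str]:
    """Return unexpected hosts found in URLs embedded in source text."""
    hosts = re.findall(r"https?://([^/\s\"']+)", source)
    return sorted({h for h in hosts if h not in allowed})
```

Run it over every .py file in the sdist (fetched with `pip download --no-deps --no-binary :all:` so nothing executes) before you ever install.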

Stay safe folks!


It's interesting how it ignores things like headers and footers. LLMs have an edge there in "deciding" whether to include something in the output or not.

It'd be great if your hosted version would also accept a URL to a PDF and give a permalink to the result as well (if you're looking for upgrades)


I've noticed the same issue with it "deciding" what to include, despite explicit instructions in the prompt to include all text on the page.

This is one of the issues that can hopefully be resolved with fine-tuning.


I thought it was a big upgrade. Comparing Zerox w/ Unstructured on the first 5 pages of [this datasheet](https://www.ti.com/lit/ds/symlink/lm5117.pdf), Zerox gave me what I wanted, and Unstructured gave me a bunch of extra junk at the top that was harder to sort through.


We have some plans, and atopile already supports some configuration regarding those deviations. There's a component selector which can choose resistors, caps, inductors etc... from a database of known components. Currently it's a free-for-all, but naturally we expect to be able to constrain this to a subset of components like "things we already use at this company". That database means (once equations are also in) we'll be able to capture those requirements and change the output components depending on the scenario.


Lots of good stuff! We're working on some updates to the package manager now that should make a lot of that much more tenable to build out.


Not at all! Thanks so much for engaging with us on this level. It's greatly appreciated!


You're not the first person to ask for alternative syntax highlighting! Might be something we need to address soon.

I think an LSP server is what we're really excited about, since it also means we can provide far richer autocomplete and inline docs as we go.


You're right about "code" not being the right solution for everything. In the case of the layout we have indeed already implemented an MVP of the "snippets" approach you described: https://atopile.io/blog/2024/02/02/-layout-reuse-keeps-getti...

A big part of the case for code is that creating a wizard for each issue is a tall order to develop, but through an expressive and composable language we should be able to articulate the parameters of the configuration just as clearly, and solve that class of problems generically.


If we can build the equation solver our hearts are set on, you should be able to account for as many factors as your design requires - albeit with some performance limitations at some point. That is, assuming temperature can be factored into the equations in the same manner as any other parameter or attribute.
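For instance (a toy first-order model, not the planned solver), temperature enters the same way any other parameter does -- here checking whether a resistor's tempco keeps it inside its tolerance band across the operating range:

```python
def resistance_at(r_nominal: float, tempco_ppm: float, temp_c: float,
                  ref_c: float = 25.0) -> float:
    """First-order drift model: R(T) = R0 * (1 + tempco * (T - Tref))."""
    return r_nominal * (1 + tempco_ppm * 1e-6 * (temp_c - ref_c))

def holds_tolerance(r_nominal: float, tol: float, tempco_ppm: float,
                    t_min: float, t_max: float) -> bool:
    """Drift is monotonic in T for this model, so checking the two
    temperature extremes is sufficient."""
    extremes = (resistance_at(r_nominal, tempco_ppm, t) for t in (t_min, t_max))
    return all(abs(r - r_nominal) <= tol * r_nominal for r in extremes)
```

E.g. a 10k 1% part with a 100 ppm/degC tempco drifts ~0.65% over -40..85 degC and passes; a 200 ppm/degC part drifts ~1.3% and fails.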


We hear you! We're most certainly planning on moving up the system's chain to describe, version-control and validate at that level too.

As one example, in an earlier (and likely future) permutation of atopile we could compile harnesses (using the fantastic https://github.com/wireviz/WireViz) by linking two interfaces on connectors.

Longer term, if you can integrate mechanically too, you can at least check, if not generate, the harness lengths, bend radii, drip loops etc... for that same harness.
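A sketch of that linking step, emitting a WireViz-style input structure (the pin-for-pin, single-cable layout is a simplification of the real YAML schema, and the function is illustrative, not atopile's):

```python
def harness(conn_a: str, pins_a: list[str],
            conn_b: str, pins_b: list[str]) -> dict:
    """Link two connector interfaces pin-for-pin through one cable."""
    assert len(pins_a) == len(pins_b), "interfaces must match"
    n = len(pins_a)
    return {
        "connectors": {
            conn_a: {"pinlabels": pins_a},
            conn_b: {"pinlabels": pins_b},
        },
        "cables": {"W1": {"wirecount": n}},
        # One entry per wire: conn_a pin -> wire -> conn_b pin.
        "connections": [
            [{conn_a: [i + 1]}, {"W1": [i + 1]}, {conn_b: [i + 1]}]
            for i in range(n)
        ],
    }
```

Dumping that dict to YAML gives roughly the shape WireViz consumes, so the harness drawing falls out of the same source that describes the electronics.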

Somewhat like software, you can only tractably do that at scale if the units work, and work reliably. Unit tests are the basis of software validation, and we expect the same for hardware validation. We're starting low-level, we know, but with concepts we know (from the insane scale of big software projects) can work on enormous projects too.


Many of the things you mention are not really done in hardware.

For example, unit tests. Even in FPGA designs, you can run functional simulations on portions of a design to help save time and validate. I don't believe we are yet at the stage where we simulate the entire chip. Not sure it would make sense even if you could. You have to worry about real-world effects such as clock skew and jitter that might not necessarily be deterministic. If you have designs running at hundreds of MHz or GHz, at some point you have no option but to run the design on the real IC and debug in hardware.

The other issue is that every company is likely to have their own process for some of the things you mention. Harness design and manufacturing is a good example of this. Companies like Siemens, Zuken, TE and others have professional solutions that often integrate with CAD/CAM tools (like Siemens NX) and produce professional manufacturing-ready documentation. Job shops, in many cases, are set up to receive files from industry standard tools and work directly from them. WireViz is a neat tool, but it is pretty much at the hobby level.

For example:

https://rapidharness.com/

https://www.zuken.com/us/product/e3series/electrical-cable-d...

https://www.sw.siemens.com/en-US/vehicle-electrification-wir...

You should not be discouraged though. That said, I would still urge you to interview a lot of EEs and product design engineers to really understand what you are walking into. You need to realize that you are not likely to change the entire product design and manufacturing industry just because you offer a software-like approach to design. That's just not going to happen. Industries have tons of inertia and they tend to only be interested in solving pressing problems, not adopting entirely new workflows. Also, the EDA/CAD/CAM industries are paved with the corpses of thousands of offerings that, collectively, over time, defined how things are done today.

My guess is you'd have to raise $100MM to $300MM, hire tons of engineers and devote ten solid years to materially influence how things are done. Nobody has the time or budget to introduce new tools, new problems, new training and grind their entire product development process to a halt just to adopt a new paradigm.

I'll give you an example of this from real life. The CAM tool we use to program our CNC machines is crap. We use CAMWorks Professional, which integrates with Solidworks and probably cost us $30K+ per license (between initial purchase and maintenance fees). We want to switch to at least doing CAM using the Fusion360 tools. However, this will definitely cause us to take a hit in productivity and possibly put out bad product for a period of time until the dust settles. And so, while we absolutely detest CAMWorks, we have no choice but to continue using it until a window of opportunity presents itself to make the switch. And, of course, also knowing full-well that the Fusion360 solution isn't utopia. There are no perfect tools. Just choices you might be forced to live with.


Thanks mate!

