abhishekbasu's comments | Hacker News

Developing LogiModel AI (https://www.logimodel.com/), an agentic supply chain optimization engine that forecasts demand, optimizes operations, and simulates scenarios to reduce costs while keeping your network reliable and customers satisfied. It integrates with your ERP and document repositories to learn your business context, then acts as an intelligent decision engine, freeing you to focus on strategy while it handles execution.

Always interested in the possibilities of LLMs interfacing with MIP solvers.


Also, this was a quick fun project for a multi-agent flow https://flickfeast.party/

Big market. Good luck with that. How will it be hosted? Who will implement it?

Would you follow the Red Hat business model? https://gemini.google.com/share/2825b8ff67d6

Part open / part paid closed-source? Fully open and charge on consulting/customization?

How will you stop LLMs from recreating it?


Hosted on-premise for enterprise, in a VPC for mid-market customers. For enterprise customers, an FDE (forward-deployed engineer) model makes the most sense for a product like this; they would assist with implementation and training. For mid-market customers, I'm still exploring options to make it cost-effective.

For domain-specific optimization, the value is in the solver integrations, specific constraints that form the seed, and the modular simulation that powers the visuals. The software is monetized, not the services around it.

> How will you stop LLMs from recreating it?

Having worked in this space for a while now, I think there are two ways to ensure reliability (the real moat here): first, going deep into five or six problems that are complex enough that out-of-the-box solutions/simple prompting don't work well; second, tightly coupling a simulator to provide rapid feedback, which actually helps with change management and solves the "people" problem when optimizing operations.


Sounds interesting. Are you not going to use the .ai domain? I also like asco.1 and asc.oo

Anyhow, is the scope centered around intra-firm supply chains? What happens if one firm suggests to one of their partners that they use the service... can you provide additional features if they work on the same platform? Perhaps they could agree on information to share with each other? Maybe you could have crypto payments tied in somewhere for inter-firm trade, eventually working toward a LogiModel marketplace - but I presume that's not really in the vision.

Anyway, best of luck with it.


Thank you for your kind words! Your marketplace idea definitely sounds interesting to me; even if not for payments, it would provide a way to improve visibility through easier data exchange on a common platform.

You're right, it should be (3, 0) with an optimal objective value of 6.


This was a great read to start the new year! Having worked extensively with mixed integer programs, I always find it a bit disheartening to see them not used enough for everyday decision-making. One of my goals this year is to create a layer that makes it easier to formulate MIPs and test them via plain-text input; this would hopefully increase adoption through a lower barrier to entry.
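As a rough illustration of what that layer might emit, here's a minimal sketch (assuming the PuLP library; the problem and numbers are made up) of the model a plain-text prompt like "maximize 2x + y subject to x + y <= 4 and x <= 3, with x and y non-negative integers" could be translated into:

    from pulp import LpProblem, LpMaximize, LpVariable, value

    prob = LpProblem("toy_mip", LpMaximize)
    x = LpVariable("x", lowBound=0, cat="Integer")
    y = LpVariable("y", lowBound=0, cat="Integer")

    prob += 2 * x + y   # objective
    prob += x + y <= 4  # constraint 1
    prob += x <= 3      # constraint 2

    prob.solve()
    print(x.value(), y.value(), value(prob.objective))  # expect x=3, y=1, objective 7

Testing would then mostly be comparing the solver's output against hand-checked cases like this one.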


Great product, and congratulations on the launch. Who is the target user vs. customer? On the surface, and I may be wrong here, this feels like an LLM layered on top of a typical AutoML stack, e.g. TPOT or Caret. Is that the correct mental model for a tool like this? And if so, do you foresee it facing a problem similar to the one those tools faced in broader adoption at companies?


I think "agents layered on top of AutoML" is a reasonable simple mental model for Plexe's model building capabilities, but it also masks some important qualitative differences between Plexe and traditional AutoML tools:

1. AutoML tools work on clean data. Data preparation requires an understanding of business context, the ability to reason about the data in that context, and then producing code for the required data transformations. Since this process cannot be automated with "templated" pipelines, teams using AutoML still have to do the hardest - and arguably most important - part of the data science job themselves.

2. AutoML tools use "templated" models for regression, classification, etc, which may not result in as good a "task-data-model fit" as the sort of purpose-written ML code a data scientist or ML engineer might produce.

3. AutoML tools still require a working understanding of data science technicalities. They automate the running of ML training experiments, but not the task of deciding what to do in the first place, or the task of understanding whether what was done actually fits the task.

With this in mind, we've seen that most ML teams don't find traditional AutoML tools useful (they only automate the "easy" part), while software teams don't find them accessible (data science knowledge is still required).

Plexe addresses both of these issues: the agents' reasoning capabilities enable it to work with messy data (as long as you provide business context) and to ENTIRELY abstract the deeper technicalities of building custom models that fit the task and the data. We believe this makes Plexe both useful to ML teams and accessible to non-ML teams.

Does this line up with your experience of AutoML tools?



I've always had the impression that mathematical programming, especially mixed integer programming/integer programming, is largely "unknown" outside of core engineering and operations research. It's an excellent framework for solving a whole host of problems that arise in business and elsewhere, which are instead solved using suboptimal (hah) heuristics.

Okay, maybe I was a bit harsh, but it definitely doesn't pop up as often as deep learning and statistical machine learning. For those who wish to get deeper into this, I highly recommend Optimization over Integers by Bertsimas and Weismantel.


Oh yeah, there are whole subfields of engineering that the current crop of AI deep learning engineers are mostly unfamiliar with. I've been able to find places where I can make significant advances on the state of the art in AI through incorporation of concepts from decision theory, control theory, process engineering, constraint optimization, etc.


The amusing ones, to me, are the people who know of the techniques but are convinced they can't be applied.

Obviously, not everything will be easy to map onto a classic optimization problem, and you may have a heuristic approach that works better for a given problem. But the general solvers out there have come a long, long way.
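For a concrete sense of how far the general solvers have come: even the default open-source stack handles textbook formulations out of the box. A small sketch, assuming scipy >= 1.9 (which ships a MILP interface backed by HiGHS) and made-up knapsack numbers:

    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    values = np.array([10, 13, 7, 4])   # item values (made up)
    weights = np.array([5, 7, 3, 2])    # item weights
    capacity = 10

    res = milp(
        c=-values,  # milp minimizes, so negate to maximize total value
        constraints=LinearConstraint(weights.reshape(1, -1), -np.inf, capacity),
        integrality=np.ones_like(values),  # all decision variables integer
        bounds=Bounds(0, 1),               # 0/1 "take the item" variables
    )
    print(res.x, -res.fun)  # chosen items and total value (here items 1, 3, 4 for 21)

The heuristic-vs-exact tradeoff only really starts to bite once the instances get large or the structure gets ugly.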


Interesting. Deepseek R1:

You prefer elegant, high-level solutions that are intuitive and accessible to other developers. You likely favor functional programming, clear abstractions, and code that reads like prose.

Abstract ↔ Concrete: +4 Abstract
Human ↔ Computer Friendly: +9 Human-Friendly


+1 for Pyxel, recently used it for a couple of hours to create a tiny project [1] and loved how intuitive it was.

[1] https://github.com/abhishekbasu/minesweeper
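For anyone who hasn't tried it, the API surface you need for something this small is tiny. A rough skeleton (not taken from the repo above, and the init arguments may vary a bit between Pyxel versions):

    import pyxel

    class App:
        def __init__(self):
            pyxel.init(160, 120, title="demo")  # small retro-resolution window
            self.x = 0
            pyxel.run(self.update, self.draw)   # Pyxel owns the game loop

        def update(self):
            if pyxel.btnp(pyxel.KEY_Q):         # quit on Q
                pyxel.quit()
            self.x = (self.x + 1) % pyxel.width

        def draw(self):
            pyxel.cls(0)                        # clear screen to color 0
            pyxel.rect(self.x, 56, 8, 8, 9)     # draw a moving 8x8 square

    App()

Everything else (mouse input, tilemaps, sound) follows the same update/draw pattern.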


This was fun. I saw a post on Pyxel a couple of days ago, and decided to write mine in Python using it.

https://github.com/abhishekbasu/minesweeper


There are [1, 2], and AFAICT they rely on gnuplot [3].

[1] https://github.com/nschloe/termplotlib

[2] https://github.com/dkogan/gnuplotlib

[3] http://www.gnuplot.info/
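If it helps, basic termplotlib usage looks roughly like this (from memory of its README; it shells out to gnuplot under the hood, so gnuplot has to be installed):

    import numpy as np
    import termplotlib as tpl

    x = np.linspace(0, 2 * np.pi, 100)
    y = np.sin(x)

    fig = tpl.figure()
    fig.plot(x, y, label="sin(x)", width=60, height=20)  # ASCII line plot
    fig.show()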


The last time I used gnuplot was around 25 years ago. And it is still around!


A few years ago, you had to recompile it to add sixel support on Debian, so I provided https://github.com/csdvrx/sixel-gnuplot

Now it's included by default, IIRC.

