I'm not sure we're understanding each other. When I asked "How does it solve the problem of sharing code across service boundaries?", I meant how does your language or architecture enable you to share code across separately deployed artefacts, for example across two microservices.
There are three approaches that I'm aware of for solving this problem: 1) copy/paste the shared code between the two repositories, 2) freeze the code into a library that both services depend on, 3) keep both services in one repository, extract the shared code into a module or component, both services depend on the shared module, deploy the services as separate artefacts.
1) is bad for obvious reasons, 2) adds unnecessary friction to the development process, and 3) is how Polylith solves it.
I was wondering if you've come across another way to achieve 3), or perhaps a fourth approach?
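To make approach (3) concrete, here is a sketch of what such a monorepo typically looks like on disk (file and directory names are hypothetical; the exact layout depends on the build tool):

```
my-product/
├── components/
│   └── logging/        # shared module, used by both services
├── services/
│   ├── service-a/      # depends on components/logging
│   └── service-b/      # depends on components/logging
└── build configuration # produces two separately deployed artefacts
```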
I am a vendor: I develop software products from scratch. Sometimes I "hire" myself, as I have a couple of products that make me money simply by my owning and maintaining them.
The pattern goes like this.
1) A client hires me to develop a product. After the initial phases I determine what parts (code) of other products I can salvage for reuse. At that point I either copy those parts, or get the latest version if they're third-party libraries, which at that point is basically the same thing. So between different projects I always use approach (1). You're free to laugh at me. I've been developing for 40 years, and this approach saves me from countless headaches related to "purism". Unless there is a really, really big reason, I do not want to change something in a piece of shared code and then test countless permutations in unrelated projects. Thanks, but no thanks. Also, once a product reaches maturity, I usually transfer it to the client.
2) Stage 2: working on a single product. Even though I avoid "microservices" as deployables like the plague, a product might still consist of a few physical executables/services. In this case the "components" are shared as code, and the code uses interfaces (or a simulation of them, in the case of JavaScript for example) when I feel I need to abstract some "component" so that I can replace it. So this is your (2), and it poses zero friction for me, unlike what you claim.
The types of products I develop range from firmware, to game-like native applications with device control, accelerated graphics and multimedia, to enterprise backends, etc. I have much bigger things to deal with than nitpicking over whether the concept of a component behind an interface / library / etc. poses a mental or maintenance challenge for me (hint: it doesn't).
Thank you for explaining how you work, and I would never laugh at a developer for copy/pasting code. We've all done it!
I'm still intrigued to understand exactly how you share code across services. Let's say that you've written a piece of code for logging, which you want to use in both service A and service B. Do you package it up in a library? If so, doesn't that mean you have to place that library in a repository, so both services can access it? Doesn't that mean that if you want to make changes to the logging code that you now have to publish a new version of the library, and remember to update both services to depend on the new version?
That's the friction I'm talking about.
With Polylith, the logging code would live in a component that's directly accessible to all the other components in the system. That's because Polylith lets us work with all our components as if they're a monolith (even if we chose to deploy them as multiple services). This means that when we update the logging component, there's zero friction to update any impacted components in the services.
If the change only affects the logging component's implementation (and not its interface) then no other components need to be updated, and we can just redeploy the system. If it's a breaking change to the interface, then we can immediately fix the impacted components within our monolithic development environment. If the change is a refactor of the logging component's interface, then the other components will be automatically updated by our refactoring tool!
Hopefully that explains how Polylith solves this challenge so elegantly.
I package it mostly as code. For example, in C++ that would be a header file "xxx.h" as the interface and "xxx.cpp" as the implementation. If I only change the implementation, there is no need for me to touch anything else. The build system will figure out which services (executables) need to be rebuilt and relinked. It will then build, deploy, run tests, etc. while I sit and pick my nose.