Hacker News | dti's comments

> trying to hide distribution

The paper unfortunately omits the fact that, in reality, you have to pass a context object in your RPC calls, so there is no ambiguity about whether you are calling a potentially remote object.

It's in the example on the project home page: https://serviceweaver.dev/

  // The "RPC" handler
  func (adder) Add(_ context.Context, x, y int) (int, error) {
      return x + y, nil
  }

  // The call-site
  var adder Adder = ... // See documentation
  sum, err := adder.Add(ctx, 1, 2)


~2.5x when not co-located isn't bad either; and the beauty of it is that the programming model lets the runtime system perform these relocations, based on the application profile.


Regarding the comparison with RMI, the authors did mention it:

> Java RMI use a programming model similar to ours but suffered from a number of technical and organizational issues [58] and don’t fully address C1-C5 either

Apart from that, it looks like Java RMI allowed remote objects to return other remote objects, rather than only immutable values. With that you could abuse it by making a call to one java.rmi.Remote object, getting another java.rmi.Remote object in response, passing it around, and then finding a totally different subsystem suddenly making RPCs (however, such abuse would probably be easy to spot in a code review, as it requires a modification to the remote object interface).

---

The authors also acknowledge that it doesn't solve the fundamental challenges of distributed computing:

> our proposal does not solve fundamental challenges of distributed systems [53, 68, 76]. Application developers still need to be aware that components may fail or experience high latency

I think, at least for latencies, their platform could occasionally inject latency into some percentage of calls and verify whether any alerts fire, if there is a fear that components become dependent on a certain deployment shape (within a cluster).
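
The idea could be sketched as a wrapper that randomly delays a fraction of calls; `injectLatency` is a hypothetical name for illustration, not a Service Weaver API:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// injectLatency wraps a call and, with probability p, delays it by d to
// simulate the component having been placed remotely.
func injectLatency(p float64, d time.Duration, call func() (int, error)) (int, error) {
	if rand.Float64() < p {
		time.Sleep(d)
	}
	return call()
}

func main() {
	// With p = 1 the delay always fires, simulating a forced remote hop.
	sum, err := injectLatency(1.0, time.Millisecond, func() (int, error) {
		return 1 + 2, nil
	})
	fmt.Println(sum, err) // 3 <nil>
}
```

Running a small percentage of traffic through such a wrapper in a canary would surface components that silently depend on co-location.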


> Apart from that, it looks like Java RMI allowed remote objects returning other remote objects, rather than only immutable values.

Actually, this is a very powerful concept, as it allows one to achieve a high level of reuse. Jini (https://jan.newmarch.name/java/jini/tutorial/Jini.html) made mobile objects the core idea of its architecture.

One can see how powerful the approach is by looking, for example, at Jini's concepts of a Lease and the LeaseRenewalService:

A server program can register an object (a client-side implementation of a service) in a ServiceRegistrar (to make it discoverable and downloadable). Registration is lease-based, so the server has to renew it periodically. But renewal can be delegated to a LeaseRenewalService that does it on the server's behalf, so that the server can go to sleep (i.e., not use any server machine resources).

All of the above happens without any party having a-priori knowledge of the code that needs to be present at the use site: code is downloaded automatically on demand, and the only thing common to client and service is a Java interface.


Looks interesting.

> The call to hello.Greet looks like a regular method call

That’s a departure from how components interact in boq, an internal and widely used production platform that has _some_ of the features from the paper. There, component interfaces _are_ RPC interfaces (e.g., Stubby / gRPC + protocol buffers), and interaction between components is possible exclusively through those interfaces. Hence it’s very explicit at the call site that an RPC is being made (which could happen to execute locally, with all the standard RPC functionality: context and deadline propagation, etc.).

RPCs looking like regular method calls sounds a bit scary (easy to miss in code reviews); I wonder if enforced naming conventions + IDE + code review tool support would be enough.

Edit: it seems to require passing a context object, so readers won't confuse it with a local call (from https://serviceweaver.dev/):

  sum, err := adder.Add(ctx, 1, 2)

---

Also, the paper claims that most benefits come from a non-versioned serialization format:

> Most of the performance benefits of our prototype come from its use of a custom serialization format designed for non-versioned data exchange [...]

However, I don’t understand why local RPC calls have to serialize protocol buffer messages — can’t they already pass them as-is to the local handler?

(disclaimer: a googler, no internal knowledge on ServiceWeaver)


> However, I don’t understand why local RPC calls have to serialize protocol buffer messages — can’t they already pass them as-is to the local handler?

I didn't read the paper in enough detail to know the answer to this, but mightn't this enable different implementation languages for different components? In my experience, it's difficult to accomplish that reliably without using a language-agnostic serialization format (like proto).

Even if that's the goal, it seems like a handler could determine whether it could elide the serialization depending on the implementation details of the components.
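
As a sketch of that elision (hypothetical `stub`/`AddRequest` names, with JSON standing in for proto), the generated stub could make a locality check before marshaling:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// AddRequest stands in for a protobuf message in this sketch.
type AddRequest struct{ X, Y int }

// stub dispatches either in-process or over a (simulated) wire.
type stub struct {
	local func(AddRequest) int // non-nil when the component is co-located
}

func (s *stub) Add(req AddRequest) int {
	if s.local != nil {
		return s.local(req) // same process: pass the message as-is
	}
	// Remote: marshal, "send", and unmarshal on the other side.
	b, _ := json.Marshal(req)
	var decoded AddRequest
	_ = json.Unmarshal(b, &decoded)
	return decoded.X + decoded.Y
}

func main() {
	localStub := &stub{local: func(r AddRequest) int { return r.X + r.Y }}
	remoteStub := &stub{}
	fmt.Println(localStub.Add(AddRequest{1, 2}), remoteStub.Add(AddRequest{3, 4})) // 3 7
}
```

The caller's code is identical in both cases; only the stub knows whether serialization happened.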


I see I may have been unclear: I was surprised they don't use protobufs (which one should be able to pass as-is, without serialization, to a locally-deployed component), but apparently using a custom optimized format for non-local calls is the primary motivation (not local calls with gRPC requiring serialization; that shouldn't be the case).

However, now that I think again about the serialization format choice, it may limit the size of monoliths (in terms of the number of people / teams contributing). As the number of contributors grows, the likelihood of bugs in a binary grows, teams adopt more elaborate qualification processes, and they come to rely much more on binary rollbacks as a remedy for bugs discovered in prod. They may then institute policies like requiring all changes to be protected by a feature flag (aka an experiment).

If a non-versioned serialization format is used, the platform cannot possibly roll back a single component. However, versioned serialization won't be enough on its own to support per-component rollbacks: it at least requires independent component qualification (where each component is tested against "stable" versions of the other components) plus rollback testing, to make rollbacks from A2 -> B2 to A2 -> B1 safe.
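
A toy illustration of why a non-versioned (tag-less, positional) format couples rollbacks; this is not Service Weaver's actual format, just the general failure mode:

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// encodeV2 writes fields positionally with no tags; v2 of a component has
// appended a new field compared to v1.
func encodeV2(x, y, extra int32) []byte {
	buf := new(bytes.Buffer)
	binary.Write(buf, binary.LittleEndian, []int32{x, y, extra})
	return buf.Bytes()
}

// decodeV1 is the reader from the rolled-back v1 binary: it understands
// only the first two fields and is left with bytes it can't interpret.
func decodeV1(b []byte) (x, y int32, trailing int) {
	r := bytes.NewReader(b)
	binary.Read(r, binary.LittleEndian, &x)
	binary.Read(r, binary.LittleEndian, &y)
	return x, y, r.Len()
}

func main() {
	x, y, left := decodeV1(encodeV2(1, 2, 99))
	fmt.Println(x, y, left) // 1 2 4 — four unexplained trailing bytes
}
```

With tagged, versioned formats (like proto) the v1 reader would simply skip the unknown field; without tags, both sides have to be deployed and rolled back in lockstep.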

I wonder if it's an explicit design choice — i.e., whether Service Weaver supports monoliths only up to a certain organizational size (after which you should split into separate Service Weaver apps)?


In your experience, how does this kind of approach behave with asynchronous dependencies?

Let's say you start from a codebase with four portions (call them services, modules, whatever): A, B, C, D. A sends a (synchronous) remote procedure call to B, which sends a message over a message bus that is also used by C and D. C and D do not talk to each other except over the bus.

It sounds like this approach would identify the remote call dependency between A and B (which could be split into different deployment units), but not the message bus usage. Or, at least, it can't identify who is subscribing to a topic where B pushes its events.

As a result, you would get two deployment modules:

- A

- B and "everything else"

Which doesn't sound right.

Am I missing something?
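
A minimal sketch of why the bus edges are invisible: the B -> C and B -> D dependencies exist only through topic strings at runtime, so nothing in the static call graph connects them (toy in-process bus, hypothetical names):

```go
package main

import "fmt"

// bus is a minimal in-process publish/subscribe hub keyed by topic string.
type bus struct{ subs map[string][]func(string) }

func newBus() *bus { return &bus{subs: map[string][]func(string){}} }

func (b *bus) subscribe(topic string, fn func(string)) {
	b.subs[topic] = append(b.subs[topic], fn)
}

func (b *bus) publish(topic, msg string) {
	for _, fn := range b.subs[topic] {
		fn(msg)
	}
}

func main() {
	b := newBus()
	var got []string
	// Components C and D: their only link to B is the topic name "orders".
	b.subscribe("orders", func(m string) { got = append(got, "C saw "+m) })
	b.subscribe("orders", func(m string) { got = append(got, "D saw "+m) })
	// Component B: publishes without knowing who listens.
	b.publish("orders", "evt1")
	fmt.Println(got) // [C saw evt1 D saw evt1]
}
```

Any placement analysis that only follows typed call edges sees B depending on the bus, not on C or D, which is exactly the lumping problem described above.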


> local RPC calls have to serialize protocol buffer messages

They don't have to; they artificially added that constraint to make the benchmarks fairer.

5ms protobuf, 2ms custom, 0.4ms in-process


> RPCs looking like regular method calls sound a bit scary (easy to miss in code reviews);

CORBA had the same issue. A call could take 10us or 10s, with no way for the user to tell. This was of course widely considered a huge design flaw.


They're claiming that the runtime will figure it out for you.

It wouldn't be too surprising if this is the sort of thing that an optimizing compiler or query planner could do better than a human.

If not, you're probably at the scale where performance regressions are caught and rolled back at early phases of rollout.

Like most magic, it's either going to make things 100x better or 100x worse, depending on how leaky the abstraction is at its current state of maturity.


I'm going to (or at least I should) design my application logic very differently if I know in advance that a call might take a while or time out completely. If I'm not offered that info at development time, it's just going to turn into a terrible mess in production. Ain't nothing any framework can do about it if the language itself lacks the semantics to express the developer's constraints.


> it will have no chance in h* to last through the day

My previous Xperia Compact, which has about the same dimensions as the Mini, easily survived for a couple of days when new.

> iPhone Mini has been weak in battery department [1]

The article says "solid battery life", which matches my experience with 13 mini.


> are easy to use one-handed without dropping

Not necessarily.

I bought an iPhone Mini expecting that it'd be comfortable to use in a single hand like my previous Xperia phone of exactly the same width.

Unfortunately, that is not the case: the iPhone's screen is very close to the bottom edge, and to switch apps you need to move your thumb to the very edge and then swipe up, which is rather uncomfortable (or requires holding the phone very low, in which case you can't reach the top of the screen without changing your grip). Similarly, the keyboard is rather low and uncomfortable to use from the otherwise most natural single-handed grip.


You can try it yourself, e.g., the instance the Android team uses: https://cs.android.com/


Oh, I didn't know this existed. The syntax seems to be on par with the internal one; I couldn't find any info on what's driving it.


I also don't know how search works there, but the cross-reference functionality is powered by the open-source Kythe project: https://kythe.io/


There is a series of JEPs that add a similar mechanism, class data sharing, to OpenJDK:

- https://openjdk.java.net/jeps/310

- https://openjdk.java.net/jeps/341

- https://openjdk.java.net/jeps/350


Is Semmle, which offers the CodeQL language and the LGTM service and was recently acquired by GitHub, doing a similar thing (https://semmle.com/)? If so, how does Semgrep compare to CodeQL?

Edit: There is a help entry: https://semgrep.dev/docs/faq/#how-is-semgrep-different-from-...

