
Objects, as implemented today, are mostly a scoping mechanism to package data with functions that work on it. But that wasn't entirely what Kay was proposing. He wanted objects to "send messages" to each other, as if they were nodes in a distributed system. Hence the "message" terminology. Discrete-event simulators really work that way, but not much else does. OOP as commonly practiced is two-way: class functions return values.

(I got this early view when I had a tour of PARC in 1975, and Kay explained his thinking.)



to a significant extent you can do #doesNotUnderstand: in python, ruby, clos, or spidermonkey javascript†, which was the extent to which that kind of completely dynamic message sending was implemented in smalltalk-76. (smalltalk-72 was more radical.) you can think of the synchronous implementation of message sends as dynamically-dispatched subroutine calls (already more or less present in smalltalk-72) either as a helpful convenience or as a fatal compromise of kay's pure actors-like model
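
a minimal sketch of what i mean, in python, using only the standard __getattr__ hook (the class and the fake 'withdraw' message are made up for illustration):

    # a proxy that "does not understand" any message and handles it dynamically,
    # roughly analogous to smalltalk's #doesNotUnderstand:
    class MessageEater:
        def __getattr__(self, selector):
            # called only when normal attribute lookup fails,
            # i.e. for any "message" this object has no method for
            def handler(*args, **kwargs):
                print(f"got message {selector!r} with args {args}")
            return handler

    MessageEater().withdraw(42)   # prints: got message 'withdraw' with args (42,)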

it's true that not many systems really depart from that tradition and go fully asynchronous: only erlang, stackless python, orleans, golang, current ecmascript with promises or web workers, python with asyncio, backends connected together with kafka or rabbitmq or ømq, and a few others. and for hysterical raisins their asynchronous tasks aren't called 'objects'. but i don't think it's really true that that style of programming is entirely limited to discrete event simulators!
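
for the fully asynchronous style, a rough sketch using nothing but asyncio queues (the 'account' task is invented for illustration); each task owns its state and reacts to messages instead of returning values to the sender:

    import asyncio

    async def account(inbox):
        # an "object" that owns its state and only reacts to messages
        balance = 0
        while True:
            msg, amount = await inbox.get()
            if msg == 'deposit':
                balance += amount
            elif msg == 'print':
                print('balance is', balance)
            elif msg == 'stop':
                return

    async def main():
        inbox = asyncio.Queue()
        task = asyncio.create_task(account(inbox))
        await inbox.put(('deposit', 100))   # fire-and-forget message sends
        await inbox.put(('print', None))
        await inbox.put(('stop', None))
        await task

    asyncio.run(main())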

______

† __getattribute__ or __getattr__, method_missing, no-applicable-method, and __noSuchMethod__ respectively


"not many systems"

"only erlang, stackless python, orleans, golang, current ecmascript with promises or web workers, python with asyncio, backends connected together with kafka or rabbitmq or ømq, and a few others."

That's a lot.

I do agree with both that "objects" as used in programming languages are a very limited concept and not quite what Kay had in mind.

Kay Objects really go beyond even all of the examples quoted above, plentiful as they are.

Take a bank, for example. Your system may communicate with a bank by using a Stripe API or by sending an ACH file to process some transactions. The bank may take the transaction and process it, returning only a response, in a somewhat functional request-response fashion. But it might also, of its own volition, send its own messages to the originator, for example a chargeback. It may even send messages unrelated to any specific transaction, like a request for documentation.

From a technical standpoint, any API that requires a callback address probably does so because the provider needs to send its own messages back. In that case there is a bidirectional channel of communication, and we are talking about Kay objects.
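
For example, a minimal callback receiver (a sketch using Flask; the /callbacks route and the payload fields are invented for illustration, not any particular bank's API):

    from flask import Flask, request

    app = Flask(__name__)

    # the bank "object", of its own volition, POSTs messages here:
    # chargebacks, documentation requests, etc.
    @app.route("/callbacks", methods=["POST"])
    def handle_bank_message():
        event = request.get_json()
        if event.get("type") == "chargeback":
            ...  # update our own internal state in response
        return "", 204  # acknowledge receipt; no meaningful return value

    if __name__ == "__main__":
        app.run(port=8080)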

A feature of this interpretation of Kay Objects is that they are not necessarily computer systems. A bank is a juristic entity, and its barriers of communication are human as well: it has NDAs and contracts, which are not unlike code. It protects its internal state and data, and has specific channels of communication.


> I do agree with both that "objects" as used in programming languages are a very limited concept and not quite what Kay had in mind.

Yes. Kay was trying to envision a sort of object oriented nanoservices architecture, decades too early to build it. Arguably, CORBA came close to that. You create a local proxy object which has a relationship with a non-local object, and talk to the proxy to get the remote object to do things.
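
Python's standard library still ships a much humbler descendant of that idea, xmlrpc; a sketch with a made-up get_balance method, run as two separate processes:

    # --- server.py: exposes an object's behaviour over the wire ---
    from xmlrpc.server import SimpleXMLRPCServer

    server = SimpleXMLRPCServer(("localhost", 8000))
    server.register_function(lambda account_id: 100, "get_balance")
    server.serve_forever()

    # --- client.py: a local proxy with a relationship to the remote object ---
    from xmlrpc.client import ServerProxy

    remote = ServerProxy("http://localhost:8000")
    print(remote.get_balance("acct-42"))   # looks like a method call, travels as a message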

Interestingly, there's a modern architecture for distributed multiplayer games which works that way - M2, from Improbable. In-game objects talk to other objects, some of which are on different machines. The overhead and network traffic within the server farm are very high, because there's so much communication going on. It's only cost-effective for special events. But it does work.


Don't you think Kay Objects are very present in distributed microservices architectures? Services provide APIs as the only way to interact. Some even require consumers to register their own servers for callbacks and to implement callback endpoints.

Without going much further, client-server architecture already presents characteristics of Kay objects, if only because the physical separation requires limiting the control server and client have over each other, for security reasons.

Multitenancy of machines also advanced Kay Objects in parallel, driven by security concerns: first OS processes and then stricter virtual machines enforced the independence of these objects and allowed communication only through strict long-range protocols, like TCP in the case of VMs.

I feel Kay pushed for objects at the application level, and this was largely redundant with operating-system-level concepts like scheduling and user/kernel memory protection. Threads and containers proved that there is a need for more tightly controlled scheduling and resource sharing, but in general Kay's objects nowadays just rely on strong encapsulation mechanisms at the OS layer, so objects usually communicate via network protocols as if they were on separate machines altogether; they truly are separate physical objects running independently.
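
A crude sketch of that situation with nothing but standard-library sockets (the address and wire format are invented, and the receiving side is not shown; each side would run in its own process, container, or VM):

    import socket

    # "object" A sends a message to "object" B over TCP; the only thing
    # crossing the encapsulation boundary is bytes on a socket
    with socket.create_connection(("10.0.0.2", 9000)) as conn:
        conn.sendall(b"deposit 100\n")
        reply = conn.recv(1024)   # B decides what, if anything, to say back
        print(reply.decode())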

It is important to consider Kay's ideas in the context of their time: preemptive scheduling was a young concept, and processes back then did not have much protection against memory accesses. Of course, the scarcity of resources (compute, memory) back then was also a factor pushing for application-level encapsulation, but nowadays we can just spin up virtual machines and throw metal into some datacenters. There is a surplus of hardware, so there is no incentive to replicate and optimize hypervisors, and those mechanisms don't move to the application layer at all. It turns out all of those security features are really important in guaranteeing encapsulation: you don't even have to worry about whether there is a bug leaking state, because that is treated as a security concern, and the barriers are designed to resist skilled attackers, so random bugs are much less likely to break encapsulation.

Application-level objects are still very much used, to my knowledge, in simulation software including games, where it would be unreasonable and unnecessary to spin up a VM for each butterfly in a simulated world. But it turns out that in business, Kay Objects are usually assigned to a programmer or to a team of programmers, so there are rarely situations where a programmer is in charge of more than one object and has to play a dissociated god designing and controlling many entities; when we do, we inevitably suffer from an identity crisis. And we use harder abstractions like processes or servers anyway. There is no need to fit multiple Kay objects into a single process; that usually creates way too many objects. It is desirable to assign some cost and administrative overhead to object creation to avoid Programmer Ego Death.
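
A sketch of that in-process style, with made-up entity names, where one process hosts thousands of lightweight objects and a plain queue carries the messages:

    from collections import deque

    class Butterfly:
        def __init__(self, world):
            self.world = world
            self.x = 0

        def receive(self, msg):
            if msg == "gust_of_wind":
                self.x += 1
                self.world.send(self, "moved")   # entities only talk via messages

    class World:
        def __init__(self):
            self.mailbox = deque()
            self.butterflies = [Butterfly(self) for _ in range(10_000)]

        def send(self, sender, msg):
            self.mailbox.append((sender, msg))

        def tick(self):
            for b in self.butterflies:
                b.receive("gust_of_wind")
            print(len(self.mailbox), "messages this tick")

    World().tick()   # no VM or process per object, just cheap in-process objects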




