
A few years ago Peterpaul developed a lightweight object-oriented system on top of C that was really pleasant to use[0].

No need to pass in the object explicitly, etc.

Doesn't have the greatest documentation, but has a full test suite (e.g., [1][2]).

[0] https://github.com/peterpaul/co2

[1] https://github.com/peterpaul/co2/blob/master/carbon/test/pas...

[2] https://github.com/peterpaul/co2/blob/master/carbon/test/pas...


For people wondering what it looks like without the syntactic sugar of carbon, look here [0]. As far as I can see, there's no support for parametric polymorphism.

0. https://github.com/peterpaul/co2/tree/master/examples/my-obj...


Doesn't look much different from GLib, the base of the GTK implementation (and of other things in GNOME, the GNU Network _Object_ Model Environment).


Objects yes, classes and inheritance no. Just interfaces please.


I feel like Vala tries to fit in this niche too.


In most B2B cases you really don’t want to self host authentication. Really.

There are plenty of identity providers out there who will worry about hashing passwords, resetting them, 2FA, etc. Most client businesses already have identities via one of those for all their employees (read: users of your APIs or apps).

Unfortunately, nearly all of the open source solutions out there do exactly what you said: they start with (required) self-hosted authentication. Not helpful.

What’s more relevant to businesses is authorization using existing IdPs (shameless plug: https://github.com/DMGT-TECH/the-usher-server)


I believe it’s important to offer people a choice.

Some prefer self-hosting, while others opt for SaaS—it really depends on their specific needs. If you require data residency and complete control, self-hosting is the way to go. On the other hand, if you want a hands-off operational experience, SaaS makes more sense.



IMHO (from the viewpoint of a neuroscientist) the biological inspiration is quite measured and restrained in his work…

The problem he was proposing we solve is computing with heterogeneous “machines”. This doesn’t preclude the regimented organization you are favoring, above.

Please see my other comment on call-by-meaning.


Actors solves a very different problem. Alan Kay was talking about enabling computing across heterogeneous systems.


What about actors makes that impossible?


At Alan Kay’s Viewpoints Research Institute, the problem was phrased in a more concrete form and a solution was provided — “Call by Meaning”[0].

The most succinct way I have found to state the problem is: “For example, getting the length of a string object varies significantly from one language to another... size(), count, strlen(), len(), .length, .length(), etc. How can one communicate with a computer -- or how can two computers communicate with each other -- at scale, without a common language?” [1]

The call-by-meaning solution is to refer to functions (processes, etc) not by their name, but by what they do. VPRI provided an example implementation in JavaScript[0]. I re-implemented this -- a bit more cleanly, IMHO -- in Objective C[1].

[0] http://www.vpri.org/pdf/tr2014003_callbymeaning.pdf

[1] https://github.com/plaurent/call-by-meaning-objc?tab=readme-...
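As a toy sketch of the core idea (my own illustration, far simpler than VPRI's knowledge-base queries, with hypothetical `register`/`find` helpers): functions are indexed by semantic facts about what they do, and callers look them up by the facts they need rather than by name.

```python
# Toy call-by-meaning registry (hypothetical sketch, not VPRI's API):
# each function is registered under a set of semantic facts, and lookup
# matches on required facts instead of on a name.
registry = []

def register(fn, meaning):
    registry.append((frozenset(meaning), fn))

def find(required):
    # Return every registered function whose meaning covers the request.
    return [fn for meaning, fn in registry if frozenset(required) <= meaning]

register(len, {"measures", "collection-size", "returns-integer"})
register(max, {"selects", "largest-element"})

# Ask for "something that measures collection size" -- no name needed:
measure = find({"measures", "collection-size"})[0]
n = measure("hello")   # -> 5
```

The real system matches descriptions against a knowledge base with much richer inference; exact tag-subset matching is only the simplest possible stand-in.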


> The call-by-meaning solution is to refer to functions (processes, etc) not by their name, but by what they do.

This seems like call by an even longer, more difficult to use name.

And it would seem to rely on a common language to describe functions/methods, which clearly we don't have or everyone would use the same names for things that do the same thing already.


Think about it. A “meaning” in this usage is definitely not a longer name.


From the doc you linked we have

    var clock = K.find(
      "(and
       (variableRepresents that thing)
       (typeBehaviorCapable-DeviceUsed thing
       (MeasuringFn Time-Quantity)))")

So if I want a clock, instead of using the name system.timer, I now need to know this much longer name. Maybe you think I can reason about the parts of this name, but it's just a longer string with funny syntax. And it's only really useful if we all agree on the language of description; if we had such a common language, we wouldn't have the problem this is trying to address in the first place.

If you've got an example of a real system using this where it's actually better than searching docs, or learning what the language of today uses to measure the size in bytes and the size in codepoints and the size in glyphs, please link to that. But this feels like yet another thing where if everyone agrees about the ontology, everything would be easier, but there's no way everyone would agree, and there's not even an example ontology.


The difference between a descriptor and a name is that there is one name, but infinitely many descriptors.


I find this super interesting! The first thing that comes to mind reading the demo code is, perhaps against the purpose, to canonicalize the lookup examples, which in turn suggests that the examples could be expressed by type expressions alone. Which makes me think of a type system that embeds a generalized set of algebraic operations, so that the adder function is simply one that returns the type Number + Number. Those could be semantic operations, beyond the basic mathematical ones, of course. Anyway, just thinking out loud.


Thanks for the pointer!

"Call by meaning" sounds exactly like LLMs with tool-calling. The LLM is the component that has "common-sense understanding" of which tool to invoke when, based purely on natural language understanding of each tool's description and signature.


There exists the Universal (Function) Approximation Theorem for neural networks — which states that they can represent/encode any function to a desired level of accuracy[0].

However there does not exist a theorem stating that those approximations can be learned (or how).

[0] https://en.m.wikipedia.org/wiki/Universal_approximation_theo...
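As a concrete (if simplistic) illustration, here is a pure-Python sketch assuming threshold activations: a one-hidden-layer "network" of Heaviside step units reduces to a piecewise-constant interpolant, whose worst-case error shrinks as units are added. Target and unit count are illustrative choices.

```python
import math

def step_approximator(f, a, b, n):
    # One hidden layer of n threshold ("Heaviside") units:
    #   h(x) = sum_i c_i * H(x - t_i)
    # which is exactly a piecewise-constant interpolant of f on [a, b].
    ts = [a + i * (b - a) / n for i in range(n + 1)]
    cs = [f(ts[0])] + [f(ts[i]) - f(ts[i - 1]) for i in range(1, n + 1)]
    def h(x):
        return sum(c for t, c in zip(ts, cs) if x >= t)
    return h

h = step_approximator(math.sin, 0.0, math.pi, 1000)
xs = [i * math.pi / 777 for i in range(778)]
err = max(abs(math.sin(x) - h(x)) for x in xs)
# For a Lipschitz target, err shrinks roughly like (b - a) / n.
```

This only demonstrates representability, which is the theorem's whole content; it says nothing about whether gradient-based training would find such weights.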


People throw that proof around all the time, but all it does is show that a neural net is equivalent to a lookup table, and a lookup table with enough memory can approximate any function. It's miles away from explaining how real-world, useful neural nets, like conv-nets, transformers, LSTMs, etc., actually work.


FYI, there are actually many algorithms, going back further than neural networks, that have been proven to be universal function approximators. Neural networks are certainly not the only ones, and not the first. Quite a few of them are actually much more appropriate than a neural network in many cases.


What other algorithms can do this and which situations would they be more useful than neural networks?


The Taylor series dates to 1715. The Fourier series dates to the 1820s.

Both are universal function approximators and both can be learned via gradient descent.

For the case where the function you want to learn actually is polynomial or periodic (respectively), these are better than neural networks.
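As a sketch of the "learned via gradient descent" claim, here is plain-Python batch gradient descent fitting truncated Fourier coefficients to a periodic target (the target, series order, learning rate, and epoch count are my own illustrative choices):

```python
import math

# Truncated Fourier series  f(x) ≈ a[0] + Σ_k (a[k] cos kx + b[k] sin kx),
# with coefficients learned by gradient descent on mean squared error.
K = 5
def target(x):
    return math.sin(x) + 0.5 * math.cos(2 * x)

xs = [2 * math.pi * i / 200 for i in range(200)]
ys = [target(x) for x in xs]

a = [0.0] * (K + 1)   # cosine coefficients (a[0] is the constant term)
b = [0.0] * (K + 1)   # sine coefficients (b[0] is unused)
lr = 0.1
for epoch in range(200):
    ga = [0.0] * (K + 1)
    gb = [0.0] * (K + 1)
    for x, y in zip(xs, ys):
        pred = a[0] + sum(a[k] * math.cos(k * x) + b[k] * math.sin(k * x)
                          for k in range(1, K + 1))
        err = pred - y
        ga[0] += err
        for k in range(1, K + 1):
            ga[k] += err * math.cos(k * x)
            gb[k] += err * math.sin(k * x)
    for k in range(K + 1):
        a[k] -= lr * ga[k] / len(xs)
        b[k] -= lr * gb[k] / len(xs)
# Training recovers the target's coefficients: b[1] ≈ 1.0, a[2] ≈ 0.5.
```

Because the sampled sinusoids are orthogonal over a full period, each coefficient converges independently here; that clean behavior is exactly what deep nets lack.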


For your interest, Taylor Series are not universal function approximators - the Taylor Series around 0 for

f(x) = e^(-1/x^2) if x != 0 else 0

is identically zero (all partial derivatives are 0 at 0) but the function is clearly not identically zero. So the radius of convergence for this Taylor series is infinite but it only equals the approximated function at one point.

I'm sure there are some conditions you can put on f to make the Taylor Series a UFA but it's been quite a while since I did any real analysis so I have forgotten!

Doesn't detract from the overall point though that there are UFAs that are not neural nets. I should say that I don't know what the precise definition of a UFA really is, but I assume you have to have more than equality at one point.
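The counterexample is easy to check numerically (a minimal sketch):

```python
import math

def f(x):
    # Smooth everywhere; every derivative at 0 exists and equals 0.
    return math.exp(-1.0 / (x * x)) if x != 0 else 0.0

# All Maclaurin coefficients are 0, so the Taylor series at 0 is the
# zero function -- yet f itself is nonzero away from the origin:
taylor_prediction = 0.0
actual = f(0.5)   # e^{-4} ≈ 0.0183
```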


Taylor series work on differentiable intervals. You specifically chose a function and interval where this is not true. Of course it will not be a good approximation.


I'm pretty sure the UFA theorems for neural networks wouldn't apply to that function either: https://en.wikipedia.org/wiki/Universal_approximation_theore...

Generally, they assume the function to be approximated is continuous.


This area is covered by non-parametric statistics more generally. There are many other methods to non-parametrically estimate functions (that satisfy some regularity conditions). Tree-based methods are one family of such methods, and the consensus still seems to be that they perform better than neural networks on tabular data. For example:

https://arxiv.org/abs/2106.03253



Newton's method approximates square roots. It's useful if you want to approximate something like that without pulling in the computational power required by a NN.
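A minimal sketch of the idea:

```python
def newton_sqrt(a, tol=1e-12):
    # Newton's method on f(x) = x*x - a gives the update x <- (x + a/x)/2.
    # Assumes a > 0; any positive starting guess converges.
    x = a if a >= 1.0 else 1.0
    while abs(x * x - a) > tol * max(a, 1.0):
        x = (x + a / x) / 2.0
    return x

root = newton_sqrt(2.0)   # ≈ 1.41421356...
```

Convergence is quadratic near the root, so only a handful of iterations are needed.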


I think the problem to solve is more like: given a set of inputs and outputs, find a function that gives the expected output for each input [1]. This is like Newton's method at a higher order ;-). One can find such a tool in Squeak or Pharo Smalltalk, IIRC.

[1] https://stackoverflow.com/questions/1539286/create-a-functio...


By definition, that’s not a “universal” function approximator.


Newton's method is related to universal function approximation in the same way a natural oil seep is related to a modern IC engine...


Not any function, though. There are restrictions on the type of functions the "universal" approximation theorem applies to. Interestingly, the theorem is about a single-layer network; in practice, that does not work as well as having many layers.


They can model only continuous functions, more specifically any continuous function on a compact subset of ℝⁿ. They can approximate such functions to an arbitrary level of accuracy, given sufficiently many neurons.


Makes you wonder what is meant by learning...


Learning is using observations to create/update a model that makes predictions which are more accurate than chance. At some point the model ends up having generalizability beyond the domain.


If you think about it, for embodied agents symbol grounding isn’t really the “problem”.

Rather, embodied agents start with reference and indices. The hard problem is actually ungrounding — which takes work — to eventually get to things that approach what people typically think of “symbols”.


It's metaphors all the way down, until you hit sensory grounding, space and time.

Discrete objects give integer arithmetic. Correspondence gives equality. Spatio-temporal behavior gives basic logic: concurrent AND, choice OR, inside/outside, under/over, up/down, more/less... Properties and behaviors cluster to give categories in a context. Action frames give role bindings for actors...

It's Lakoff&Johnson all the way down.


In an anthropological framework, social capital can be built up and exchanged for other forms of capital (including economic capital). There are even brokers who have the role of facilitating these transactions[0].

[0] https://bertrandlaurent.substack.com/p/monday-converting-soc...


The Precursor should tick your boxes, and with an FPGA-based SOC.

https://www.crowdsupply.com/sutajio-kosagi/precursor/updates...


By bunnie too, so you know it'll be good :)


I love the price on that.

