I only clone data using JSON.parse(JSON.stringify(someThing)), and thus avoid all the mess that comes with trying to clone anything other than primitive types.
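A minimal sketch of that technique, showing both what it copies and what it silently drops (the helper name `cloneJSON` is just for illustration):

```javascript
// Deep-clone plain data via a JSON round-trip.
// Works for trees of objects/arrays of strings, numbers, booleans, and null;
// drops functions, undefined, and Symbols, and turns Dates into strings.
const cloneJSON = (value) => JSON.parse(JSON.stringify(value));

const original = { user: "ada", scores: [1, 2, 3], meta: { active: true } };
const copy = cloneJSON(original);

copy.scores.push(4);
console.log(original.scores.length); // 3 — the copy is fully detached
```

The trade-off is exactly as stated: perfectly fine for data, useless for anything with behavior.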
Although ISO date strings can be fairly trivial to read and manipulate with RegExp or a custom parser, depending on the complexity of the task, I would recommend using something like iso-fns.
It's too bad this library (and approach) never took off, but it's there to use nevertheless.
Even without iso-fns, it's not as if a mid-level developer can't figure out how to write a function to perform a specific operation on an ISO date string.
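As a sketch of what such a function might look like (this helper and its assumption of a UTC "Z" timestamp are my own, not from iso-fns):

```javascript
// Hypothetical example: add a number of days to an ISO 8601 timestamp string.
// Assumes a UTC ("Z") timestamp; Date is used only internally for the math,
// and the result comes back out as a plain ISO string.
function addDaysISO(iso, days) {
  const d = new Date(iso);
  d.setUTCDate(d.getUTCDate() + days);
  return d.toISOString();
}

console.log(addDaysISO("2024-03-01T12:00:00.000Z", 2));
// "2024-03-03T12:00:00.000Z"
```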
After over 10 years of web development, I do think this is the best approach I know of, better even than storing UNIX time. Even if there were no existing libraries to help with manipulating ISO strings, I would gladly take the inconvenience in exchange for the rest of the advantages.
With ISO strings, you don't get any behavioral quirks from Date; they are totally compatible with JSON; you can store the time zone along with the time; there is nothing to stop you from changing the time zone while leaving all the other values intact; ISO strings also support durations.
Basically, they support most of what a software developer will be expected to do with times and dates, but without any behavior or assumptions about the client time zone that introduce bugs like with Date. `<input type="datetime-local">` also uses ISO strings out of the box, which can be really convenient, and values from `<input type="date">` can easily be extended into a full ISO string.
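Extending a date-input value is just string building; a small sketch (the default time and offset here are assumptions — pick whatever your application needs):

```javascript
// Turn an <input type="date"> value ("YYYY-MM-DD") into a full ISO 8601
// timestamp by appending a time-of-day and an offset designator.
function dateInputToISO(dateValue, time = "00:00:00", offset = "Z") {
  return `${dateValue}T${time}${offset}`;
}

console.log(dateInputToISO("2024-06-15"));
// "2024-06-15T00:00:00Z"
```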
Emphasis on _data_. I don't have a need to clone complex objects (what the article is trying to do). The data I need to clone is usually small, so the performance cost is negligible and not worth adding another dependency to the project.
A lot of the problems described in this (very detailed, nice work!) comparison are not really problems so much as design decisions. Having a clone operation that copies non-enumerable properties is, depending on what you're doing, a bad thing. They're not enumerable for a reason.
Similarly, cloning things like getters means you're potentially copying over closures, which means your new cloned object now has references to its source and interacting with it may mutate the source. This is a pretty serious hazard that (IMO) justifies not cloning getters/setters, at least by default.
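The hazard is easy to reproduce with a small sketch (the clone here is built by copying property descriptors, which is one way a cloner might copy getters):

```javascript
// A getter that closes over its source object. Copying the getter onto a
// "clone" keeps a live reference back to the source.
const source = { items: [] };
Object.defineProperty(source, "first", {
  get() { return source.items[0]; }, // closure over `source`
  enumerable: true,
});

// A clone that copies descriptors, getters included:
const clone = Object.defineProperties(
  {},
  Object.getOwnPropertyDescriptors(source)
);
clone.items = ["only-in-clone"];

source.items.push("only-in-source");
console.log(clone.first); // "only-in-source" — the clone still reads the source
```

Mutating the source changes what the "clone" reports, which is exactly the kind of spooky action a clone is supposed to rule out.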
Cloning freeze/seal status would also lower the usability of your cloning API, because if you wanted to make a non-frozen copy of a frozen object, you couldn't use the clone API.
Exactly this. It was a joy reading this article because it shows how deep the rabbit hole goes - but in 99% of cases I would still choose cloneJSON, because a) no external dependency, b) I (think I) know what it does and c) I don't care about anything that is not data and I don't care about BigInt and Symbol.
There is a reason none of the libraries tested got it "right" - nobody needs it. Or if they do, they just write their own implementation.
Right out of the gate they mention that the first approach listed doesn't clone Symbols... and then they treat this as a flaw.
I haven't used Symbol in JS much, but I was under the impression that it's _supposed_ to be a sorta "interned" value (in the sense that there exists exactly one instance in existence of each Symbol value)...which would mean that cloning them isn't a concept that makes sense.
My loose awareness of this particular JS type is leaving me quite un-confident in this understanding though, so I'd love for someone to correct me where I'm wrong so I can learn more!
I don't think it's a flaw: a symbol is unique, so cloning it wouldn't be possible, and creating a new one would compound the problem by not only removing the original symbol but also adding a new one.
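The uniqueness is easy to see in a few lines:

```javascript
// Every Symbol() call produces a brand-new, unique value; even identical
// descriptions don't make two symbols equal.
const a = Symbol("id");
const b = Symbol("id");
console.log(a === b); // false

// The global registry (Symbol.for) is the one place symbols are shared:
console.log(Symbol.for("id") === Symbol.for("id")); // true
```

So a "cloned" symbol would by definition be a different symbol, defeating the point of symbol-keyed properties.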
This article has a pretty good overview of what symbols are, usually a corner case, but good to know -
I've been programming JavaScript for ~10 years now (wtf) and I've yet to encounter a case where I needed to "deep clone" an object. It has always been a code smell indicative of a need for some other structural refactoring.
It may be rare that you need to deep clone, but doing so can be a good idea nonetheless. Sometimes I want a function to not be able to mutate the original object that informs it, in which case it will receive a deep clone or return a brand new object. I find this can make complex code easier to comprehend because I can have greater confidence that return values are new and functions aren't causing side effects. Of course you can never guarantee this in JavaScript, but it's worth making a best attempt IMO.
I'm coming from a C++ and functional perspective, but this seems very counterintuitive to me. Can you explain why they're bad? My assumption was that it's easier to reason about objects the less they share.
If you're reaching for deep cloning, your objects are too big. You're passing too much data between functions. The one exception is "actual" data which is (de-)serializable to/from JSON without any special cases (getters, functions, etc). But if it's an object that was programmatically constructed, then it can probably be made smaller.
If your object is "deep" because of real nested dependencies, then you should ask why your user would need to mutate it in the first place. If they want to change its internal structure then this is indicative of a code smell because your object is too big and the user cannot accomplish their goal by calling functions exposed by your API.
There was a phase in the JS ecosystem a few years ago where everything absolutely had to be immutable. This was, IMO, a horrible side effect of react developers optimizing their props for diffing during reconciliation. Libraries like immer and redux produced a lot of this zealotry.
I understand the purism appeal of "immutable everything," and I won't say it has no application, but it has been dogmatically over-applied to the point that you have people googling how to deep clone their config object with a nested function assignment because they're scared of passing a mutable object, and oh my god what if my user mutates it?
But in React you never need to deep clone the object. The same principles apply to all frameworks of this breed, by the way; Vue, Angular, etc. only update what you're updating. You are supposed to make a new reference for the object, but its fields keep the old references, except for the field you want to update. If you deep clone the whole thing, performance is actually worse because you're doing unnecessary updates. You can use the spread operator to easily make a new object reference, and nowadays you have immerjs to do the grunt work for you.
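A minimal sketch of that spread pattern — new references only along the changed path, old references everywhere else:

```javascript
// Update one field while reusing the old references for everything else
// (the shape React/Vue reconciliation expects), instead of deep cloning.
const state = { user: { name: "ada" }, settings: { theme: "dark" } };

const next = { ...state, settings: { ...state.settings, theme: "light" } };

console.log(next.user === state.user);         // true  — unchanged branch reused
console.log(next.settings === state.settings); // false — only this branch is new
```

Reference-equality checks on the unchanged branches are what lets the framework skip re-rendering them.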
But if you are doing a more functional style, you would be copying the reference, NOT the object. Functional style means you need LESS deep copying. See immerjs.
Also, your objects won't be that complicated in the first place; they would be treated as plain data objects, like structs or records, and you would not use those advanced object features (they're not needed).
But I just want to point out that cloning isn’t an end in itself.
The article includes a lot of judgements but it’s all context-free. Whether a clone method is good or not depends on its suitability for a purpose. E.g., whether you want non-enumerable properties cloned (if you care at all) really depends on your specific uses. Not to mention performance is very often a concern and that isn’t covered here.
Right, because the spec is not IEEE 754. JSON numbers are not floats. They're numbers with as many digits as you want. It's up to the serializer/deserializer to decide how to handle them.
The clear intention of the JSON spec is that JSON numbers should deserialise to doubles, and this is how every decoder that I've ever seen handles it.
The first decoder was the JS eval() function, so JSON was clearly intended to be a subset of JS. Ambiguity in the spec is not a licence to deviate from JS semantics, it just means that the spec is poorly written.
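That doubles behaviour is easy to demonstrate in the JS engine itself:

```javascript
// JavaScript's JSON.parse decodes numbers as IEEE 754 doubles, so integers
// beyond 2^53 silently lose precision during decoding.
const decoded = JSON.parse("9007199254740993"); // 2^53 + 1
console.log(decoded);                    // 9007199254740992 — rounded
console.log(Number.isSafeInteger(decoded)); // false
```

The JSON text is a perfectly legal number, but the decoder's choice of representation throws the last digit away.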
It was intentionally written to be very simple and achieved that goal. If your opinion is that that’s a poor choice, sure. But that’s an opinion, not a fact. Specifying a specific float format would cripple an interchange format and I think that would be a mistake. The intention here is to allow each origin and destination to decide how to fit the generic data into their representation formats.
The spec is not ambiguous. And a spec is a spec is a spec. You can implement it or not. But you can’t just decide “Enh… they didn’t mean it like that so it’s wrong to satisfy the spec. You should really satisfy it wrongly.”
> In other languages like Java, each class is expected to implement its own clone method
This is incorrect. There is an `Object.clone()` in Java, but it's not implemented by default for most types, implementing it is fraught with complexity, and the standard book of Java advice strongly recommends avoiding it. If you indicate that your type supports cloning (using the marker interface `Cloneable`), you still only get a shallow copy by default.
Cloning functions seems like a bad idea even if it's "handled" by the cloning logic. An easy footgun: you've got a function on an object that references the object itself. You clone the whole object. The cloned object has its own copy of the function, but its closure still references the old object.
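A small sketch of that footgun (a shallow copy is used here for brevity, but a deep cloner that copies the function reference hits the same issue):

```javascript
// A method created with a closure over its original object. Any clone that
// copies the function reference still points at the source object.
function makeCounter() {
  const counter = { count: 0 };
  counter.increment = () => { counter.count += 1; }; // closes over `counter`
  return counter;
}

const original = makeCounter();
const cloned = { ...original };

cloned.increment();
console.log(cloned.count);   // 0 — the clone was not updated
console.log(original.count); // 1 — the source was mutated instead
```

Calling the "clone's" method mutates the source, which is about the worst thing a clone can do.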
Personally I just don't try to clone anything that isn't JSON-valid data, and I usually use the stringify/parse strategy. It's slow in a hot loop, but it's fast enough for the rare situations where I need to fully clone something. Most of the time, I can just use readonly types and avoid defensively cloning the whole tree. When I want to replace part of an immutable object, destructuring works fine.
As far as I know there is no way in JavaScript to copy a function with its closure, so fundamentally writing a perfect cloning algorithm in JavaScript is not possible.
By closure, do you mean the values of free variables at the time of the copy?
If so, the inability to do a perfect copy is applicable to regular functions at the top level, too. (For such functions, other global variables are free variables.)
Considering this, I think this is why copying a function is generally defined as just taking a reference to the function that can later be used to invoke it. It does not mean each copy gets its own snapshot of the function's free variables at the instant of the copy; separate invocations of the multiple copies (i.e., references) of the function may affect the shared closure environment.
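A small sketch of that shared-environment behaviour:

```javascript
// "Copying" a function in JS copies a reference; every copy shares the same
// closure environment rather than getting its own snapshot of it.
function makeTally() {
  let total = 0;
  return function add(n) { total += n; return total; };
}

const add = makeTally();
const addCopy = add; // a copy of the reference, not of the closure

add(5);
console.log(addCopy(1)); // 6 — both "copies" see the same `total`
```

Since `total` lives in one closure environment, no copy of the function can be invoked without potentially affecting every other copy.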