It’s even boring to technical people. It reduces intelligence to a multidimensional optimization problem. Now intelligence just involves all kinds of mechanical ways to fill out the weights of a neural network. I used to be more interested; upon learning more about it, I am less motivated.
I've had exactly the same experience. Data science is mostly cleaning data in the first place, and the 10% that isn't is just fiddling with knobs (hyperparameter optimization) to get the model to work.
But man, I can't argue with the incredible results it creates. Perhaps that's why people do it: for the ends, not the means.
I work in a data science team and I think that motivates a lot of my colleagues. Delivering a product and looking at the massive impact it has on the business is very satisfying.
What's described there is pattern prediction, which is a part of intelligence, but there's much more to discover and invent. Even within the 'optimization' task there are huge differences in the leaps from NNs to DNNs and from DNNs to AlphaGo/Zero. The details are what make it interesting.
If we were to understand exactly how the brain operates and learns, we'd see that it too has solved, or is solving, just an optimization problem, but that doesn't make it uninteresting.
I'm sure this is due to my beginner status in ML/DL, but I'm really disappointed by how far removed deep learning seems from the things I enjoyed most in statistical learning.
I enjoy the creative challenge of applying domain knowledge when building (for example) linear or Bayesian regressions. In contrast, DL seems like a whole bunch of hyperparameter tuning and curve plotting. Curious to see whether this assessment seems correct to those more experienced...
Technical is quite a broad term. There are quite a few challenges in designing and engineering large ML data pipelines, from both a technical and a business perspective. But I agree it's a specific problem that is arguably boring to a lot of people. Some people have more fun in the modelling part, others in the engineering. Personally, I'm more into the engineering than into creating the actual model.
Isn't that like saying, "I used to marvel at nature, that it has elements such as fire, water, ice, and amazing living things. Then I discovered it is all made of atoms interacting... I am now less motivated." ?
I mean, the fire, the water, the ice, the wonder of life and intelligence are still there. You just gained a new foundational view. Now you can understand and manipulate better what you already knew; maybe now you've learned about plasma, or even extremely advanced and mysterious phenomena like Bose-Einstein condensates or superfluidity. The old wonders are still there, and you've gained new ones.
I'm not going to claim complete cognitive equivalence (or even preference) between the two states of mind, but it is a bit like childhood: firmly believing in Santa Claus, or Wizards or whatever can be exciting, perhaps more exciting than knowing they are myths; but growing up and understanding they are mythical brings new opportunities, capabilities, and even new mysteries you could not reach before (buying and building whatever you want, vast amounts of knowledge, understanding more about technology and society, etc.). It's the adults that keep us alive and well, that make decisions for us and for society at large. So perhaps (although I'm not entirely convinced by the cumulative argument) truth is a sacrifice, but it is one well worth bearing, at least for me. I am deeply interested in how intelligence works, in how "the sausage is made" (at least for certain highly useful sausages that compose the fundamentals of the world).
Even more, understanding is above all a responsibility, if not for all of us, at least for some of us, or hopefully in one way or another for most of us.
I can't recommend Feynman on Beauty highly enough (this argument is largely inspired by it):
In the same vein, intelligence to me used to be a black box: you got input from the world, some kind of wondrous magic happened, and then you got talking kids, scientists, artists, and so on. I still view it as wondrous, but now I understand that the foundation is apparently a network-like structure with functional relationships that change and adapt to previously seen information in order to explain it, and that there are a number of interesting phenomena and internal structures (going well beyond the simple idea of 'parameter tuning') that can be formalized -- essentially the architecture of the brain (or better, 'a brain').
To give an example, there have been formalizations of Curiosity, i.e. Artificial Curiosity, and I consider it essential for an agent interacting independently in the world or in a learning environment (part of the larger problem of motivation). How amazing is it to formalize and understand something so profound and fundamental to our being as Curiosity? I felt the same way about Information theory years ago. How amazing is it that we've built robots (in virtual environments), and it works -- they're curious and learn the environment without external stimulus?
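To make that concrete, here's a toy sketch (my own illustration, not any published formulation) of the common prediction-error view of curiosity: the agent is intrinsically rewarded for visiting states it cannot yet predict.

```typescript
// Toy prediction-error curiosity (hypothetical, minimal sketch):
// the agent keeps a running prediction for each state's observation
// and receives intrinsic reward equal to its surprise (absolute error).
type State = number;

class CuriousAgent {
  private predictions = new Map<State, number>();

  // Observe a state and return the intrinsic reward (prediction error).
  observe(state: State, observation: number): number {
    const predicted = this.predictions.get(state) ?? 0;
    const surprise = Math.abs(observation - predicted);
    // Nudge the predictor toward the observation (simple update rule).
    this.predictions.set(state, predicted + 0.5 * (observation - predicted));
    return surprise;
  }
}

const agent = new CuriousAgent();
const first = agent.observe(0, 10); // novel state: high surprise (10)
const later = agent.observe(0, 10); // familiar state: lower surprise (5)
```

Repeated visits drive the prediction error toward zero, so the intrinsic reward naturally shifts toward unexplored states -- the agent 'gets bored' of what it already understands.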
Above considerations aside, I find that amazing, beautiful, awesome.
There's another related concept I came up with thinking about this discussion (which I've had with friends as well): 'freedom of utility'.
The basic idea is: forget about what you think is beautiful or motivational. Suppose you could choose what to be motivated by. Would you choose to be motivated by superficial mystery, or by deep knowledge of how things are? Should you choose to find beautiful just the surface of the flower, or also the wonders of how it works, its structure as a system, its connections to evolution and the theory of color and so on -- all of which could turn out to be useful one way or another? If you could choose, would you choose to be exclusively motivated by the immediate external appearance, or by the depth and myriad of relationships as well?
Unfortunately, (unlike AI systems we could design) I don't think we have complete control of our motivation -- our evolutionary biases are strong. But I'm also fairly certain much of our aesthetic sense can be shaped by culture and rational ideals. If I hadn't heard Feynman, watched so many wonderful documentaries (and e.g. Mythbusters) and many popularizers of science, perhaps I wouldn't see this beauty so much as I do -- and I'm grateful for it, because I want to see this beauty, I want to be motivated to learn about the world, and to improve it in a way.
> Isn't that like saying, "I used to marvel at nature, that it has elements such as fire, water, ice, and amazing living things. Then I discovered it is all made of atoms interacting... I am now less motivated." ?
Yes it is exactly what I'm saying. I'm less interested because of this. I could turn it around and also say that with your extremely positive attitude you can look at a piece of dog shit and make it look "amazing." Think about it. That dog shit is made out of a scaffold of living bacteria like a mini-civilization or ecosystem! Each unit of bacteria in this ecosystem is in itself a complex machine constructed out of molecules! Isn't the universe such an interesting place!!!!!
The history of that piece of shit stretches back through millions of years of evolutionary history. That history is etched into our DNA, your DNA and that of every living thing on earth!!! All of humanity shares common ancestors with the bacteria in that piece of shit, and everything is interconnected through the tree of life!!! We can go deeper, because every atom in that DNA molecule has a history of its own, on a scale that is off the charts. Each atom was once part of a star and was once part of the big bang! We, you and I, are made out of star material! When I think about all of this I'm just in awe!!!! wowowow. Not.
I'm honestly just not interested in a piece of shit. It's boring and I can't explain why, but hopefully the example above will help you understand where I'm coming from.
You see, there are people out there that legitimately, professionally study poop for a living. I read a book (well, part of it) Gorillas in the Mist, by Dian Fossey, and there is an appendix on parasites, mostly using fecal analysis. Literally, a chapter on poop and worms. Reading it without prejudice, I found it extremely interesting.
Should we just say 'ewww', 'dog shit is boring, no one should study it', or should we give it the benefit of the doubt? What makes something interesting? I'm sure you could study poop and parasites for years -- they tell you about the diet of an animal without your having to follow it day and night, and in human poop they reveal parasites that may be a health concern.
Should we, as a society, forsake all study of poop by deeming it boring? Are those people that study poop, and don't find it boring, wrong? Or maybe they secretly go about their job finding it extremely boring? I doubt it.
> Yes it is exactly what I'm saying. I'm less interested because of this.
I think you're falling victim to reductionism. I meant my example literally: because everything is just atoms, should everything be boring (as intelligence supposedly is, being just parameter adjustment)? I suppose you don't find literally everything boring, despite literally everything being just interacting atoms.
You could have this reductionist attitude on anything really:
"Once I found out mathematics is just manipulating symbols, I am less motivated." "Once I found out math is just deriving from axioms, I am less motivated." "Once I found out life is just a bunch of organisms fighting for survival, I am less motivated."
Does it really make sense to be less motivated, is the subject matter really boring, or are you just taking a reductionist argument and replacing the nuance and complexity and beauty of the real thing with a reductionist model (that doesn't really tell us much about how it works)?
Going even further: forget about machine learning. You can formulate physics so that Nature, everything, is locally minimizing (optimizing) a high-dimensional energy function. Literally everything in the Universe is parameter tuning! Oh no, everything is boring! :p
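For what it's worth, that 'everything is locally minimizing an energy function' picture can be sketched in a few lines (a toy example of mine, not from the thread): gradient descent tuning a single parameter down a quadratic energy.

```typescript
// Toy energy minimization: locally minimize E(x) = (x - 3)^2 by
// repeatedly stepping against the gradient dE/dx = 2(x - 3).
const dE = (x: number): number => 2 * (x - 3);

let x = 0; // the single 'parameter' being tuned
const learningRate = 0.1;
for (let step = 0; step < 100; step++) {
  x -= learningRate * dE(x); // move downhill on the energy surface
}
// x ends up arbitrarily close to the minimizer x = 3
```

Whether the parameter is one number or a billion neural-network weights, the loop is structurally the same -- which is precisely why 'it's just parameter tuning' tells you so little about what makes any particular system interesting.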
To me, then, there are three pillars of what makes something interesting:
1) It is useful;
2) It has breadth of knowledge (i.e. it's not a trivial matter you can learn in one sitting);
3) It has structure (i.e. it's not just rote memorization)
If I pointed someone at a perfectly uniform white wall and, with an extremely positive attitude, he declared "Amazing!" and spent hours going "Look how white the white is... what purity, I will stand here all day contemplating different aspects of the whiteness", I'd think he's, well, a bit different. But it's not difficult to articulate why we think that.
Another point of confusion is that we're not all in the same situation. Each person has a set of skills and background knowledge such that, for that individual, a subject can seem more or less useful, more or less related to everything he knows (and thus more or less structured, connected, rich), and more or less aligned with his skills. It's perfectly acceptable to declare something not interesting to him, but not plain boring, universally uninteresting.
I cannot advance much further without talking about the specifics of intelligence: do you know learning theory (PAC learning, etc.), reinforcement learning, all the interesting mathematical structures (e.g. in convnets, GANs, Wasserstein GANs), cognitive psychology, neurobiology? I think my argument is easy because in this case 'intelligence' is so vastly broad, reaching most areas of math, engineering and science, that I doubt someone making a serious effort could still blankly classify it as uninteresting (unless you literally do find everything uninteresting... in which case you should be a bit worried, I'm serious).
And, as in every field, in practice one would not sit around every day thinking in abstract terms about 'intelligence' -- you would be trying to solve specific problems, e.g. what kind of neural architecture could be used to solve a specific problem, what kind of data augmentation you could contribute, or more advanced problems like the internal architecture of a robot.
Thank you for the opportunity to lay out these thoughts.
:)
(Please read my other comment as well, and I have a few things to add w.r.t. hyper-specialization)
When I said you're the guy who can see the bright side of dog shit, I was startlingly accurate. You're that one guy people call "excessively positive."
Every time your brain sees something related to "science" it automatically dumps a gallon of dopamine into the happy center of your brain giving you euphoria equivalent to a line of heroin.
I wonder what's your positive spin on the holocaust? There's actual science that came out of that event.
This doesn’t affect the result. They are comparing people with children. If you didn’t have children, or aborted a child, it’s the same as not getting married: you simply aren’t part of the study, because you are irrelevant to it for the same reason that a kangaroo is.
A disproportionate number of parents with first-born daughters over sons is inconsequential, because they are measuring the percentage of divorces within each population, not the total number of divorces for each population.
The logic she uses here is really far-fetched and unlikely to be true, so you really need statistical causal links in order to say anything substantial. Her disclosure also reveals a possible bias: she may not have the ability to admit that she herself was the causal factor in her own parents' divorce.
In the article, it specifically says that the matter she found is, in fact, unseen baryonic matter, and not dark matter. She found a method for detecting it even though it's cold and not emitting any EM waves.
I don’t see any disagreement that some type checker could catch this unintended behaviour. Many popular languages have checkers that would. The question here appears to be whether TypeScript’s type checker could do it without other consequences that are considered unacceptable.
Read carefully. The distinction here is that the type checker must allow for intended behavior within JavaScript while checking for an error.
The type checking I am talking about is not a sum type. It is not that the function can take two different possible types. It's the fact that the parameter function can resolve to two different types depending on usage. It has (<arity 1 or 2>), not (<arity 1>) or (<arity 2>), if you catch my meaning... Or, in other words, the concrete type is not evaluated when you pass the function as a parameter, but only when it is called with a certain number of arguments... which is not something the type checkers I know about look for.
The fundamental problem with the example under discussion seems to be that while the behaviour might not be intended by the programmer, it is working as specified as far as the language is concerned and changing that specification to make the unwanted behaviour fail a type check could have additional and unwanted side effects.
Perhaps I’m not correctly understanding your idea around arity as part of the function types, but so far it’s not obvious to me how what I think you’re describing helps to resolve that contradiction. Are you suggesting a way the type system could be changed without causing those additional, unwanted side effects?
Do you by any chance have a more rigorous definition or even a formal semantics for your proposed arity types that you could share, so the rest of us can understand exactly what you’re proposing here?
> The fundamental problem with the example under discussion seems to be that while the behaviour might not be intended by the programmer, it is working as specified as far as the language is concerned and changing that specification to make the unwanted behaviour fail a type check could have additional and unwanted side effects.
You don't need to change the behavior of the program. You can change the type checker to catch the unwanted error.
>Perhaps I’m not correctly understanding your idea around arity as part of the function types, but so far it’s not obvious to me how what I think you’re describing helps to resolve that contradiction. Are you suggesting a way the type system could be changed without causing those additional, unwanted side effects?
It's not formalized anywhere to my knowledge, and I'm not willing to go through the rigor of doing so in the comments. But it can easily be explained.
Simply put: what is the type signature of a function that can accept either one argument or two? I've never seen this specified in any formal language.
To fix this specific issue you want the type signature here to specify only certain functions with a fixed arity.
When some external library updates a function that previously had arity 1 to <arity 1 or 2>, that could be thought of as a type change that should trigger a type error.
Right now the type checker recognizes F(a) and F(a, b=c) (where b is a parameter with default value c that can be optionally overridden) as functions with matching types.
F(a) == F(a, b=c)
F(a,b) == F(a, b=c) <-----(F(a,b) in this case is a function where b is NOT optional)
F(a) != F(a, b)
From the example above you can see the type checker's notion of equivalence lacks transitivity (a == c and b == c, and yet a != b), because the type of a function with an optional parameter is not really well defined or thought out.
This is exactly the problem the author is describing. The type checker assumes that when the library changed F(a) to F(a, b=c) that the types are still equivalent, but this breaks transitivity so it's a bad choice and will lead to strange errors because programmers assume transitivity is a given.
You don't see this problem in other type checkers because JavaScript is weird in the sense that you can call a function of arity 1 with 5 arguments.
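The classic runtime demonstration of this looseness is passing parseInt directly to map: map calls its callback with (value, index, array), and since arity mismatches are silently allowed, parseInt receives the element index as its radix. TypeScript accepts this without complaint:

```typescript
// map calls its callback with (value, index, array); parseInt's
// signature is (string, radix?), so the index becomes the radix.
const results = ["10", "10", "10"].map(parseInt);
// parseInt("10", 0) -> 10  (radix 0 falls back to base 10)
// parseInt("10", 1) -> NaN (radix 1 is invalid)
// parseInt("10", 2) -> 2   (binary)

// Pinning the arity explicitly restores the intended behavior:
const fixed = ["10", "10", "10"].map(s => parseInt(s, 10)); // [10, 10, 10]
```

This compiles because a function with fewer parameters is considered assignable where a callback with more parameters is expected -- exactly the looseness being discussed.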
The true nature of type checking is basically that of a method for hindering you. The set of all correct programs is much smaller than the set of all programs that exist, so anything that hinders a programmer from operating in the bigger parent set, outside the set of correct programs, is a good and impressive thing.
What’s going on here is a type checking issue. JavaScript and TypeScript are a little too loose. The map method takes a function of <arity 3, 2, or 1>, so if a library changes a function from <arity 1> to <arity 2 or 1> you should get a type error, but the type checker is too loose. It’s subtle.
Basically, a type of <arity 3, 2, or 1> should only type check against <arity 3> or <arity 2> or <arity 1>; it should not allow <arity 2 or 1>. You see what’s going on here? Subtle.
This is indeed as much a type checker problem as it is a problem of defining what type correctness is. The definition above is simply one way of defining type correctness that fits our intuition of what is correct for this given situation, so take what I wrote with a grain of salt. There could be situations where the current definition of type correctness in TypeScript is more correct than the definition I provided.
Our intuition is complex, and if you think long and hard enough you may be able to come up with a formal definition of type correctness that perfectly fits our intuition and therefore elegantly unifies TypeScript's looser definition of correctness with my own stricter one.
Beware though, often human intuition can be contradictory. This means that a formalization of our intuitive notions of type correctness will also be contradictory and therefore unusable. In other words there may not be a way to type check for this issue while maintaining the convenience of the status quo.
Intuitively I think it’s possible, you just need special syntax to tell the type checker whether to use my stricter definition or the original looser definition that’s in use now.
Also, I’m not sure whether any existing type checker handles this case (I don’t know of one). So I believe this is more than just a JavaScript issue.
> Beware though, often human intuition can be contradictory. This means that a formalization of our intuitive notions of type correctness will also be contradictory and therefore unusable.
It is my very strong suspicion that in almost all (if not all) of the similar cases in programming methodology, there have been not just arguments on both sides, but implementations and real-world lessons on both sides.
A veeeery minor example: I’m as sure that someone designed and deployed systems that specifically raised errors when you tried to pass fewer than the required number of params as I am that other people (or even the same people) specifically designed and implemented systems where params were optional (most likely because they hated having to specify empty params every time).
>It's neither a conspiracy, nor a scheme, nor a world of wealthy indulgence
But it is a crime the company is responsible for, one that caused many people to die and become addicted to opioids.
You think Kim Jong Un walks among the people he starves to death on a daily basis in North Korea? No. He's too high up. He's in his palace in the capital and he doesn't even see the real state of his country. He just hears about it in reports and goes to do his day to day job like any other person in a company disconnected from the consequences of their actions.
Does this mean Kim Jong Un is not guilty? No. Not. at. all.
It's very possible for everyone to be guilty. We don't live in a world where if something isn't "useful" it isn't true.
Your statement ends up reading sort of like "Is every Nazi guilty of the holocaust?" Technically maybe not, but that doesn't change the fact of the matter that, overall, all Nazis are guilty.
You can't run away from this with some garbage statement of "Not a very useful perspective." This incident literally killed an amount of people that is equivalent to a genocide.
Imagine if you were a Nazi and you said that. If you were just a mere guard at one of these concentration camps could you say what you just said to me to a victim who lived through the atrocity? Think about what you should say to the parents of a man/woman who died from an opioid overdose. Literally, I think you're unaware of the magnitude of the crime that was committed here.
I'm not unaware; I have been very personally affected by the opioid crisis. And I don't think the pharma companies are responsible. I haven't met a single opiate addict (and I've met far too many for one lifetime) who blames the pharma companies.
And I know people who have been affected by big pharma. Anecdotal evidence doesn't fly in the face of a journalistic documentary. There are tons of docs on the crisis, and the blame is squarely on big pharma.
I had the same perspective on this. I spent some time building up a 2d graphics library from zero, primarily for drawing very simple UIs on resource-constrained devices.
I initially tried using D3D/Win32 APIs to perform the high-level drawing operations for me, but found that for the scope of functionality that I required, these interfaces were far too heavy-handed. These also have lots of platform-specific requirements and mountains of arcane frustrations to fight off.
I didn't need to interface with some complex geometry or shader pipeline in hardware. Raytracing is hilariously out of scope. I really just needed a dead simple way to draw basic primitives into a 2d array of RGB bytes and then get that to the display device as quickly as possible. What I ended up with is something that isn't capable of very much, but can run on any platform and without a dedicated GPU. I also feel this was a much better learning experience than if I had slammed my head into the D3D/OpenGL/et al. wall.
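As a rough sketch of what such a minimal interface can look like (my own illustration in TypeScript, not the parent's actual library), the whole thing reduces to a byte buffer plus a couple of primitives:

```typescript
// Minimal software framebuffer sketch: a 2D array of RGB bytes
// plus a couple of drawing primitives. No GPU, no platform APIs.
class Framebuffer {
  readonly pixels: Uint8Array; // 3 bytes (R, G, B) per pixel, row-major

  constructor(readonly width: number, readonly height: number) {
    this.pixels = new Uint8Array(width * height * 3);
  }

  setPixel(x: number, y: number, r: number, g: number, b: number): void {
    if (x < 0 || y < 0 || x >= this.width || y >= this.height) return; // clip
    const i = (y * this.width + x) * 3;
    this.pixels[i] = r;
    this.pixels[i + 1] = g;
    this.pixels[i + 2] = b;
  }

  // Bresenham line: integer adds and compares only, which suits
  // resource-constrained devices without floating-point hardware.
  drawLine(x0: number, y0: number, x1: number, y1: number,
           r: number, g: number, b: number): void {
    const dx = Math.abs(x1 - x0);
    const dy = -Math.abs(y1 - y0);
    const sx = x0 < x1 ? 1 : -1;
    const sy = y0 < y1 ? 1 : -1;
    let err = dx + dy;
    for (;;) {
      this.setPixel(x0, y0, r, g, b);
      if (x0 === x1 && y0 === y1) break;
      const e2 = 2 * err;
      if (e2 >= dy) { err += dy; x0 += sx; }
      if (e2 <= dx) { err += dx; y0 += sy; }
    }
  }
}

const fb = new Framebuffer(64, 64);
fb.drawLine(0, 0, 63, 63, 255, 0, 0); // red diagonal across the buffer
```

From there, blitting `pixels` to the display is the only platform-specific step, which keeps the drawing code itself portable.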
This is true, but DirectX 11, Metal and modern OpenGL (without cutting edge extensions) are still very accessible to novices, not to mention that you can transfer knowledge between the three of them, so there's little cost in learning a second/third API.
Vulkan and DX12 however are the work of the devil.
Vulkan is soulless, like uefi: all functions have more-or-less the same interface. But I wouldn't say it's the work of the devil. Its greatest sin is boilerplate.
I understand that Vulkan adds a lot of complexity, but OpenGL and Direct3D are surprisingly finite. Once you learn the graphics pipeline, it’s not too hard to begin drawing things: init window and device, fill vertex and index buffers, load textures, load vertex and pixel shaders, set each of these things as the active thing, draw. Of course you can also explore the extents of all these entities until the sun engulfs the earth. Even though I’ve been at it for 30 years, I realized long ago I would never keep up with, learn, and implement every notable technique even if I did nothing else for the rest of my life. But drawing nice-looking, animated 3D things is very tractable.