Hacker News | latently's comments

"On the other hand, Brooks doesn't show any indication he knows what Musk is talking about"

Not quite true: "Tell me, what behavior do you want to change, Elon?"


The brain is a dynamic system and (some) neural networks are also dynamic systems, and a three-layer neural network can learn to approximate any continuous function. Thus, a neural network can approximate brain function arbitrarily well given time and space. Whether that simulation is conscious is another story.
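To make the approximation claim concrete, here is a minimal sketch (my own toy example, numpy only, all parameters hypothetical) of a three-layer network — input, one tanh hidden layer, output — fitting a smooth target function by plain gradient descent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: a smooth 1-D function the network should approximate.
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

# One hidden layer of tanh units (the "three layers": input, hidden, output).
H = 32
W1 = rng.normal(0, 1.0, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, (H, 1)); b2 = np.zeros(1)

lr = 0.05
for step in range(5000):
    h = np.tanh(x @ W1 + b1)              # hidden activations
    pred = h @ W2 + b2                    # network output
    err = pred - y
    # Backpropagate through both layers.
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)      # tanh'(z) = 1 - tanh(z)^2
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2))
print(mse)  # far below the ~0.5 variance of the target
```

The universal approximation theorem only guarantees that *some* width suffices for any tolerance; it says nothing about how hard that configuration is to find, which is the point the replies below pick up.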

The Computational Cognitive Neuroscience Lab has been studying this topic for decades and has an online textbook here:

http://grey.colorado.edu/CompCogNeuro

The "emergent" deep learning simulator is focused on using these kinds of models to model the brain:

http://grey.colorado.edu/emergent


That's about as interesting as saying that a Taylor series can approximate any analytic function arbitrarily well given time and space. Or that a lookup table can approximate any function arbitrarily well given time and space: see also the Chinese room argument.
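The Taylor series analogy is easy to make concrete (a toy illustration of mine, not from the comment): partial sums of the series for e^x converge on the true value as you add terms, which is exactly the "arbitrarily well given time and space" guarantee.

```python
import math

def taylor_exp(x, n_terms):
    # Partial sum of the Taylor series for e^x around 0: sum x^k / k!.
    return sum(x ** k / math.factorial(k) for k in range(n_terms))

# Error of the partial sums at x = 1 shrinks rapidly with more terms.
errs = [abs(taylor_exp(1.0, n) - math.e) for n in (2, 5, 10)]
print(errs)
```

The existence of a convergent series tells you nothing about whether a search procedure will find the right coefficients, which is the learnability question raised next.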

The first question is whether that neural network is learnable. Sure, some configuration of neurons may exist. But is it possible, given enough time and space, to discover what that configuration is from a set of inputs and outputs?

The second question is whether "enough time and space" means "beyond the lifetime and resources of anyone alive," in which case it seems perfectly reasonable to me to call it a limitation. I generally want my software to work within my lifetime.


I like your comment. The real question is whether they are conscious.

The analogy between deep neural networks and the brain has proven to be very fruitful. Other analogies may as well. See our upcoming paper for more info.

https://grey.colorado.edu/mediawiki/sites/mingus/images/3/3a...


I think a lot of people end up conflating being alive with being conscious. Is a tree conscious? Is a self-driving car conscious?

If we use the definition "Aware of its surroundings, responding and acting towards a certain goal" then a lot of things fit that definition.

When an AI plays Atari games, learns from them, and plays at a human level, I would call it conscious. It's not a human-level conscious agent, but conscious nonetheless.


Consciousness has a specific meaning - https://en.wikipedia.org/wiki/Qualia


Latently | Deep Learning | Boulder, CO | REMOTE

Have some time on your hands and interested in implementing scientific papers for a stealth-mode deep learning startup? Contact brian@latent.ly. More info: https://goo.gl/HhvxLO


I'd downvote you if I could. Partly because this isn't a job, and mainly because you're dishonest.

1. Why are you doing this?

This project increases the talent pool for AI/ML, which benefits both engineers and companies. Additionally, by abstracting implementations of the publicly available literature into libraries, we can more easily see what is patented and what is not, and discover prior art that can be used to invalidate patents. It also helps new inventors know which inventions are OK to incorporate into new work and which they will need permission to use.

Yeah. That's it. Go troll for free labour somewhere else.


That's not how it works.


An interesting technicality from the post and paper is that the measure of causal information (mutual information between the initial and final state) bears some resemblance to the Lyapunov exponent as it is used to measure whether a system is on the edge of chaos. When the exponent is zero (IIRC) the system does not diverge exponentially when the initial conditions are changed slightly; the system is said to be on the edge of chaos and to have good generalization ability. Anywhere else, the system is either damped or chaotic, and you don't expect "interesting" stuff, such as higher-order "causal" effects, to happen there. (Seriously though, why are people so obsessed with causality when there is almost never just one "causal" description? Let it go!)
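The damped/chaotic distinction can be sketched with the classic logistic map (my own illustrative example, not from the paper): the Lyapunov exponent is the average log of the map's local stretching factor, negative when perturbations die out, positive when they diverge, and zero at the edge of chaos.

```python
import numpy as np

def lyapunov(r, n=10000, x0=0.2):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x)
    by averaging log|f'(x)| = log|r*(1 - 2x)| along the orbit."""
    x = x0
    acc = 0.0
    for _ in range(n):
        x = r * x * (1 - x)
        acc += np.log(abs(r * (1 - 2 * x)))
    return acc / n

print(lyapunov(2.5))  # damped regime: negative exponent
print(lyapunov(4.0))  # chaotic regime: positive exponent
```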


Because sometimes there is, and when you know it, you can control or at least predict the process. Sometimes this makes you mad dosh.


Is fitness emergent, or fundamental? If you didn't discuss this, you should add another year to your PhD!


"Fitness" is simply the name we give to the likelihood that an organism will have multiple generations of descendants.

This value of likelihood is an emergent phenomenon given an environment and an organism.


That's not deep enough for my tastes.


The word explore is actually great in a data analysis context. The notions of exploratory vs. confirmatory analysis are widely used, and exploratory means exactly what your students think it means. Just make sure they don't explore all of the data at once; otherwise they will have to go collect more so that they can confirm what they found while exploring.
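The "don't explore all of the data" advice amounts to a hold-out split made up front (a minimal sketch with made-up data; the variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(size=(1000, 5))  # stand-in dataset (hypothetical)

# Split once, before looking: explore freely on one half, confirm on the other.
idx = rng.permutation(len(data))
explore, confirm = data[idx[:500]], data[idx[500:]]

# Any hypothesis generated while poking at `explore`...
hypothesis_mean = explore[:, 0].mean()
# ...is then tested on data it has never touched.
confirm_mean = confirm[:, 0].mean()
print(hypothesis_mean, confirm_mean)
```

The point of splitting first is that findings from the exploratory half can be checked on untouched data instead of requiring a fresh collection round.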


Latently | Deep Learning Engineers | Boulder, CO | Remote

If you have some time on your hands and are interested in gaining industry deep learning experience on a cluster of NVIDIA P100s, click here: https://goo.gl/HhvxLO


"However, these models are largely big black-boxes. There are a lot of things we don’t understand about them."

This describes geometry as well as it describes deep learning.


Err, no. A model is a "black box" if the only things we have are the input and the output, with little intuition about how the model produces the output from the input. We have spent at least a couple of thousand years studying geometry; we know geometry quite well.

Let me demonstrate with a stupidly simple geometric model.

Suppose (for the sake of argument) that we have simple image input, consisting only of simple solid geometric shapes. Say, solid 2-D circles of one color on a background of a different color.

From high school geometry, we know everything there is to know about a circle once we know its location in the x-y plane and its radius. We could easily come up with a parametric model for fitting circles to pixel images of circular objects. (For example, we could minimize the 2-norm difference between the data image and the image corresponding to a set of circles [x_i, y_i, r_i], i = 1..n.) This kind of descriptive parametric model would be particularly easy to understand: the model structure consists of nothing but representations of circles! (Of course, it wouldn't be a particularly interesting model; it would apply only to simple images consisting of circles.)
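A toy version of that parametric fit (my own sketch, with made-up image size and parameters; a coarse grid search stands in for a proper optimizer) minimizes the 2-norm between the data image and a rendered circle over (x, y, r):

```python
import numpy as np

def render(cx, cy, r, size=32):
    # Binary image of a solid circle centred at (cx, cy) with radius r.
    yy, xx = np.mgrid[:size, :size]
    return ((xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2).astype(float)

# "Data" image: one circle with known parameters, plus a little noise.
rng = np.random.default_rng(0)
data = render(14, 17, 6) + rng.normal(0, 0.05, (32, 32))

# Fit by minimizing the 2-norm between data and the rendered model
# over a coarse grid of (x, y, r) -- the parametric model from the text.
best, best_err = None, np.inf
for cx in range(8, 24):
    for cy in range(8, 24):
        for r in range(3, 10):
            err = np.linalg.norm(data - render(cx, cy, r))
            if err < best_err:
                best, best_err = (cx, cy, r), err

print(best)
```

Every number in the fitted model is directly interpretable as a circle's position or radius, which is exactly the transparency the comment is contrasting with a trained network.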

Alternatively, we could work out the mathematics a bit more and come up with something like the Hough transform to find circular shapes. Still nothing mysterious about it: https://en.wikipedia.org/wiki/Circle_Hough_Transform
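The Hough idea fits in a few lines (a toy, fixed-radius version of my own; the real transform also searches over radii): each edge point votes for every centre that could have produced it, and the accumulator peak is the detected centre.

```python
import numpy as np

def hough_center(edge_points, radius, size=32):
    # Each edge point votes for every candidate centre at the given radius.
    acc = np.zeros((size, size))
    thetas = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    for (x, y) in edge_points:
        for t in thetas:
            cx = int(round(x - radius * np.cos(t)))
            cy = int(round(y - radius * np.sin(t)))
            if 0 <= cx < size and 0 <= cy < size:
                acc[cy, cx] += 1
    # Accumulator peak = most-voted centre, as (row, col) = (cy, cx).
    return np.unravel_index(acc.argmax(), acc.shape)

# Synthetic edge points on a circle of radius 6 centred at (15, 12).
ts = np.linspace(0, 2 * np.pi, 40, endpoint=False)
pts = [(15 + 6 * np.cos(t), 12 + 6 * np.sin(t)) for t in ts]
centre = hough_center(pts, 6)
print(centre)  # peak lands at (or next to) the true centre
```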

However, my point is: we could also train a neural network to find circles in the images of our example. It might be good at it. However, understanding how the circle representations are encoded in the final trained network certainly would not be as easy as in our nice parametric model.

Some realistic applications of "simple" geometric models would be active contours / snakes ( https://en.wikipedia.org/wiki/Snake_(computer_vision) ) or (stretching the meaning of the word 'geometry') the various traditional edge detection algorithms that have been around a long time.

Or read the post, in which the author describes how they used a projective geometry model to account for camera positions and orientations, and for stereoscopic images. We know how the geometry of stereoscopic vision works: we don't need to waste resources training a network to learn an inscrutable model of it.

Deep learning is useful when we need models for things complicated enough that we don't know how to model them. (For example, a model that tells us "is there a dog in this image?")


In my opinion you are overconfident in the foundations of mathematics. Like deep learning models, math works. Why and how does it work? That's open to interpretation in both cases, and in both cases we don't have a complete understanding. It is that lack of complete understanding that makes something a black box.


You have NO idea what you're talking about, man. Don't believe the hype. Deep learning has ALWAYS been popular and a very active field, since the mid-80s.


This is an excellent syllabus by Professor Dave Touretzky, a pioneer in deep learning. He started the Connectionists mailing list and was heavily involved in the early days of NIPS.

Sign up here: https://mailman.srv.cs.cmu.edu/mailman/listinfo/connectionis...

For anyone who thinks there was ever a pause in deep learning progress, the Connectionists archives beg to differ!

