sthlm's comments | Hacker News

From the article:

> “A bank mistakenly making such a large transfer shows its controls aren’t working adequately, and it’s embarrassing,” said Dieter Hein, an analyst at Fairesearch

Also:

> The error should have been caught by an internal fail-safe system known as a "bear-trap,"

The issue is not that someone said "let's not have any controls", but that the controls failed.

Additionally, yes, ranking all transactions and double-checking the big ones should probably be part of a fail-safe, but I assume that even more harm could be done by a larger number of minor issues that go unnoticed over a longer period of time.
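
For illustration only, a minimal Python sketch of the kind of check described above: rank outgoing transfers and hold the unusually large ones for a second sign-off. The threshold factor and field names are hypothetical, not anything from the article.

    # Hypothetical "rank and double-check" fail-safe: any transfer far larger
    # than the typical payment is held for manual review instead of being
    # released automatically.
    from statistics import median

    def transfers_needing_review(transfers, factor=100):
        """Return transfers whose amount exceeds `factor` times the median."""
        typical = median(t["amount"] for t in transfers)
        return [t for t in transfers if t["amount"] > factor * typical]

    payments = [{"id": 1, "amount": 12000},
                {"id": 2, "amount": 30000},
                {"id": 3, "amount": 5000000000}]
    print(transfers_needing_review(payments))  # only the outlier is flagged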


I can attest to that: my newborn at the time screamed into my ear unexpectedly loudly, and my hearing was affected for a few days.


This is the best "visualization" / "explanation" of the possibilities and limits of AI that I've seen.

I can show this to someone and say:

1. The software can recognize a feather, as long as it looks similar to what it thinks a feather looks like.

2. The software can't recognize a feather if it's never seen a feather like that. It's not a sentient being.

This is good, because most examples focus on point #1 and -- if enough marketing is involved -- don't go enough into point #2.

People read news articles like "X can recognize cats in a picture with Y certainty!" and are quick to assume that this "AI" can make sense of a picture and understand it, when all it does is apply certain methods for a certain use case.

This does a much better job by letting people write (or draw) their own test cases and figure out the limits intuitively.


> 1. The software can recognize a feather, as long as it looks similar to what it thinks a feather looks like.

I was prompted to draw a hurricane. I drew something that looked like the typical hurricane doodle used on news reports.

The software didn't recognize it.

When the game was over and I was able to look at all of the doodles that were used to train the software to recognize a hurricane ... the majority of them instead looked like tornadoes!

So maybe we should more precisely say:

1. The software can recognize a feather, as long as it looks similar to what the humans who contributed its training set think a feather looks like.


My hurricane was just terrible. I ended up with a scribbled mess because I got that in the first set or two, didn't really have a plan and drew components of a hurricane as I remembered them.

I'm also ashamed to admit I drew some less than ideal stuff due to forgetting details on things and then panicking because of the timer. Like the spots on a panda's face for some odd reason.

Hopefully my drawings were treated as outliers.


Apparently most players of this game didn't see the "carrier" part in "aircraft carrier" and just drew airplanes. Probably because of the time constraint.


Or maybe they mistook it for "carrier aircraft", as in "cargo plane".


Which is actually a pretty big win. After all you could also say this:

1. The person can recognize a feather as long as it looks similar to what the other people who contributed to its learning think a feather looks like.


I was asked to draw: brush. I drew a hair brush. It was trained to see "brush" as a bunch of circles or trees.


Are you sure that wasn't "bush"?


...or something for sweeping the floor


> When the game was over and I was able to look at all of the doodles that were used to train the software to recognize a hurricane ... the majority of them instead looked like tornadoes!

Idiocracy was prophetic -- except it missed the aspect that "Idiocracy" would first manifest on the Internet.


The premise of Idiocracy is actually false; IQs have been rising over time.


Citation needed. IQs are defined such that the average IQ is always 100. https://en.wikipedia.org/wiki/Intelligence_quotient



So the internet is really just revealing more of the idiocy that's always been there.


You are really stretching to find a way to feel superior to people.


Alas, if only I had to stretch. Basically, there's rampant prejudice and anti-intellectualism from all points on the political spectrum. The response to the enabling of trolling by anonymity is an upsurge of authoritarianism by (of all people) many on the Left. The Right? Not much better.

If I had a dollar for every time someone pattern-matched me or a phrase I wrote, jumped to conclusions about my ideas or internal emotional state, then even insisted I was lying when I tried to disabuse them of the notion -- I'd have a whole lot of dollars. (Hint: if you start sniffing around trying to justify that they're right, I haven't left you sufficient evidence and you're probably also doing that.)

Apolitical people? Mostly just as bad.


It exhibits gender disparities very nicely too.

https://twitter.com/OdaRygh/status/798872670221856768


Yes, an absolutely classic example of implicit biases in training sets.

On the one hand, the network should eventually learn to classify high heels as shoes.

On the other, when these classification systems actually get used, they're always at some arbitrary point in their training, so you can't just wait for "all the biases to go away."


Erm... high heels are not the only kind of shoes women wear. They're not even the most common kind of shoes women wear. Pointing to this as a 'gender disparity experience' is showing your own bias. Yes, high heels are shoes and it should learn to recognise them, but most women don't actually wear them most of the time.


Makes sense to me. The training data set focuses on generic, gender-neutral shoe examples instead of highly gender-specific ones.


There is another take on this issue: it's not that the shoes are gender-neutral, it's that "male" is neutral. This essay explores that take: http://www.tarshi.net/inplainspeak/marked-women-unmarked-men...


Sit in a shopping centre, movie theater lobby, or even just out on the street. Watch the shoes of the women as they stroll by[1], and you'll find very few of the ridiculously high heels that are pictured in that tweet. Holding up that tweet's shoe as the typical women's shoe is a laughably erroneous stereotype.

[1] Not just the young fashionistas that specifically dress up, but every woman.


I don't think that's true for shoes. The male equivalent to high heels would be dress shoes, and women wearing male dress shoes would be weird and unusual. The examples shown appear to be casual or athletic shoes, which are indeed neutral.


You might be able to explain it, but it still shows that it's wrong. (Though I disagree that these shoes are gender neutral; only ~5% of the shoes in my household look like "gender-neutral" shoes, and they're all mine)


There's their problem: they had Al Bundy train the AI. How else do you get from shoe to whale in only three pictures, with one involving food?


Can we please keep gender identity discussions off Hacker News?


The comment was pointing out a specific example about how an AI miscategorized something because of a small sample-size in data, something that has been shown to be often the result of unintended biases in the training set, and you say that we're just talking about "gender identities"??

This is the kind of thing AI researchers write papers on (source: AI MSc), not some SJW topic, yet you saw the word "gender" and assumed it didn't belong?


Humans usually can't do your 2. either. In some cases, people may be able to recognize things based on descriptions alone, but those are typically simple combinations of known entities.

For recognizing relatively simple entities, are there advantages humans still have over neural nets (assuming the same scope of knowledge)?


Definitely my 3 y/o can recognize a cat in an abstract drawing of a cat that is unlike any cat he has seen before.


Humans are great at learning abstraction from concrete examples. That's also what deep learning does, and it's a big reason for its success. I'd guess that some neural net architectures can do the same with your cat example (perhaps with adaptation). Can any expert weigh in?

An idea: We can also run several cat photos through image processing algorithms to filter out details. The output would be outlines similar to the drawings in the Google Quickdraw app. We put those through the app to generalize (perhaps the app needs some training with a few categories of objects, not necessarily animals). Voila! Software can now recognize drawings based on photo examples.
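
To make the image-processing step concrete, here's a rough sketch using OpenCV's Canny edge detector; the thresholds and file names are placeholders, and a real pipeline would need more cleanup to look like Quickdraw doodles:

    # Rough sketch: reduce a photo to a doodle-like outline via edge detection.
    import cv2

    def photo_to_outline(path, low=50, high=150):
        img = cv2.imread(path)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        blurred = cv2.GaussianBlur(gray, (5, 5), 0)   # suppress fine texture
        edges = cv2.Canny(blurred, low, high)         # white edges on black
        return cv2.bitwise_not(edges)                 # black "pen strokes" on white

    cv2.imwrite("cat_outline.png", photo_to_outline("cat_photo.jpg"))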


> Humans are great at learning abstraction

Of course, there's severe bias here, in the sense that what we consider abstraction is by definition "human shaped" abstraction

If multiple humans try to "abstract" a cat, the overlap in underlying processes will be pretty big, making it more likely that we can recognise each other's abstractions.


> Of course, there's severe bias here, in the sense that what we consider abstraction is by definition "human shaped" abstraction

I can read the words here, but I don't understand the meaning.

We abstract to find a common set of features in things that are supposed to be the same but that are not present in things that are not supposed to be the same. Grouping these features then produces higher level abstractions, and so on.

Where would the bias be?

Even if the features differ, the process is the same.

And even the features are often the same. If you reverse a DCNN to see what it uses to classify things as "cats", expect to see whiskers and fur.


Think of Bugs Bunny. He looks nothing like a real rabbit, yet humans recognise him as a rabbit (presumably) because we look at the characteristics that separate him from a normal human, then compare those characteristics with our list of things with those characteristics (long ears, big feet, eats carrots) and get a rabbit. If he'd been made to look like a rabbit-octopus hybrid instead of rabbit-human, we may have struggled more.

Computers don't look at things from a human perspective; they're still good at abstraction, just different to human abstraction. i.e. there's a human bias in there.

That's OK though; the objective is to make a computer that sees things the way people do; so it's a bias we want.

However the issue isn't that the computer's not a sentient being and therefore can't abstract things it's never seen before; only that the algorithm hasn't been written to sufficiently take account of human bias.


I think the word you're looking for is "familiarity", insofar as it describes a particularly efficient means of recognition. E.g. humans have become pretty good at identifying cats.

I don't see a fundamental difference between biological and electronic neural nets, so please take the following with a physicalist grain of salt. Imho, precisely because NNs will be fed nothing other than the reality (physical or virtual) we live in, they should gradually develop the same familiarity as humans have; i.e. nothing more and nothing less than the elements of our lives/civs. Visually, lots of cats, lots of cars, mountains and coasts; functionally, all the tasks we accomplish daily, like driving or cooking or cleaning.

I don't really think you can hard-code "human bias", as it's an emergent property of our biology: too complex (we don't really understand much of it; imho you're bound to miss the mark and induce subjective biases), and somewhat contradictory to how NNs are supposed to evolve (thinking long term here). Basically, I don't think it would be practical or cost-efficient to induce too many perturbations in deep learning; better to work on refining the process itself. Think of plants: you can tweak the growing all you want, but the root deciding factors lie in genetics (their potential, and in understanding how to maximize it).

I realize another wording is that we should apply sound evolutionary (Darwin etc.) principles in "growing" AI at large. Because AI and humans share the same environment, we should see converging "intelligence" (skills, familiarity, etc). It's a quite fascinating time from an ontological perspective.


It's interesting to think about what the limits of an AI that doesn't have a full human experience are. I think you're probably right that machine vision will be competitive with human vision. It's already much better in specialized areas.

General purpose machine translation is harder, for instance. Brute force algorithms have gotten decent, but aren't in the same ballpark as humans (though professional translation services now often work by correcting a machine translation). However, MT systems trained on a specific domain do much better (medical or legal docs, etc).

What would be the hardest task for machines that's trivial for humans? Maybe deciding if a joke is funny or not?


Perhaps not the hardest, but one where there's tons of room for improvement: the Story Cloze Test [1] is a test involving very simple, five-sentence stories, where you pick the ending that makes sense out of two endings.

A literate human scores 100% on this test. No computer system so far scores better than 60%. (And remember that random guessing gets 50%.)

[1] http://cs.rochester.edu/nlp/rocstories/


Interesting study; whilst it's possible to guess which ending is expected as correct, the alternate could be easily argued. For example, in the case of Jim's getting a new credit card, I recall during my uni days many students took that exact approach to debt...


Good point; I'd not considered whether the human imprint would be down to familiarity (individual's) or in-built through evolution (inherited familiarity); likely a combination of both. In fact, I recently read that chimpanzees raised by humans are believed to identify as human rather than chimp; so individual familiarity does seem stronger.

The book "We Are All Completely Beside Ourselves" is fiction, but refers to findings from real studies.


Hmm, I always assumed Bugs Bunny was a hare.



You implicitly (and I think without realising) presume objectivity + complete knowledge in the observer.

Human perception is heavily biased towards features that had evolutionary advantages, and limited by whatever technical flaws our eyes/brains/etc have. That's a selection bias in our perception of information, in our processing of said information, and therefore in the abstractions that result from it.


I agree with what you say, but it doesn't support your earlier statements.

I presume it's possible that the limitations of our visual system mean we may miss powerful features and hence the ability to build some more powerful abstractions. (I didn't even argue this, just pointed out the process is the same even if the features differ.)

But I don't see how this supports your original claim of bias, which was: "If multiple humans try to "abstract" a cat, the overlap in underlying processes will be pretty big, making it more likely that we can recognize each other's abstractions."

If humans are good at recognizing each other's abstractions, that's a validation that low-pass (for lack of a better term) filtering the features due to humans' physical design still creates very good abstractions and classifiers. That is to say, if anything you're confirming that humans are designed in a way that makes the abstractions they can make maximally useful.


"you're confirming that humans are designed in a way that makes the abstractions they can make maximally useful."

... to other humans.


What's the meaning of this?

Are you arguing that the classifications themselves are biased?


That's exactly what I and others have been arguing. Now to be clear: it's not that these classifications are wrong, just that out of all possible classifications we could have found, we will most likely find the ones that fit the human perspective of the world.

Think of the Turing test and its criticisms; it kind of has the same issues.

PS: I've upvoted every comment of yours; asking questions like this should be encouraged :)


Thanks for taking the time to explain your argument!


Confirmed; thanks vanderZwan.


Classifications are also dependent on the capabilities of the language they are expressed in: https://en.wikipedia.org/wiki/Linguistic_relativity


> still creates very good abstractions and classifiers.

My point is that "good" and "bad" are not objective here, but depend on human use-cases.

Now to be clear: I'm not disagreeing with you! These are good abstractions, for humans. It lets us communicate concepts easily, which is great! But it might not be the best abstraction in every circumstance.

For example, I recall reading an article that said that AI is better at spotting breast cancer from photos (which is essentially interpreting abstract blobs as cancer or not). The main reason seems to be that it is not held back by the human biases in perception.


Cats are probably a particularly unfortunate example to use in comparing abstraction-forming capabilities, as given our history it's highly likely that we come supplied with some dedicated cat recognition circuitry.


Humans have a bit of an advantage on two levels here. First, we know what a cat looks like. Not a video or a picture or a drawing, but an actual cat. That gives us a solid frame of reference. "That is definitely a cat. That drawing looks kind of like what I know a cat to look like, so it's a drawing of a cat." The closest a computer can get is "This drawing has quite a bit in common with these other drawings, and apparently these other drawings are cats. So this is probably a cat too."

Second, when we look at a picture of a cat, we're looking at a human's interpretation of what a cat looks like. If we asked a computer to draw a cat, it might look nothing like a cat to us, but another computer could look at it and go "Oh sure, that's a cat." I seem to recall Google did a thing with this a while ago, where they effectively created a feedback loop in a neural net - feeding its own drawing back into itself. As I recall, the result looked like the computer had done way too much LSD.


Basically: you are right.


Can you sketch an example of such a drawing? I'm having a hard time imagining something that looks enough like a cat to be recognized as such but unlike any cat a three-year-old has ever seen before.



I'd say that misses both my criteria: it looks just like lots of cat drawings any three-year-old has been exposed to, and it also seems like an image Google would have no trouble recognizing as a cat.



Again, I think first-world children over the age of 3 have been exposed to plenty of drawings like that, and also, Google can recognize it as a cat anyway -- in fact, it even knows which cat; do an Image Search and you'll see, "Best guess for this image: garfield meme"


Your criteria were "looks enough like a cat to be recognized as one, but unlike any cat that a 3-year-old would have seen before".

Google doesn't recognize it as a feline, it recognizes it as Garfield.


I doubt that webpage is as smart as a 3-year-old.


Do you really think any reasonable person is going to mistake this couch for a cat?

https://rocknrollnerd.github.io/assets/article_images/2015-0...

The software does:

https://rocknrollnerd.github.io/ml/2015/05/27/leopard-sofa.h...

Sure, you can fool a human. But there are things AI is missing that would be embarrassing if a human made the same mistake. It's hard to say, based on anecdotes like this, how big that gap is, but it's there.


>Humans usually can't do your 2.

I think we do. We see a building we've never seen before and we know it's a building because it has certain features that we use to classify it as a building. The examples aren't scarce.

I also think a good indicator of us doing it is the use of "y" and "ish" and "sort".

As for sthlm's point 2:

>2. The software can't recognize a feather if it's never seen a feather like that. It's not a sentient being.

This is Asimo in 2009:

https://youtu.be/6rqO5eiP7_k?t=5m24s


When it comes to abstraction from a simple rendering – no shading, no sense of depth, no discernible dimensions – it's hard to extrapolate features.

I feel there is an immense difference between recognizing simple sketches and deriving what an object is based on extended characteristics.

The video you linked furthers that by showing that ASIMO was using three-dimensional observation to calculate certain features and ascertain what that object was.


The abstract drawings benefit a lot from the limited selection and the huge implicit context.

If you'd give these doodles to people who are not Western males, it'd do a lot worse. Someone already pointed out it doesn't recognize women's shoes.


Humans frequently misrecognise sketches too.


If you've ever played pictionary you'll see the level of abstraction we can manage is remarkable too.


Familiarity with teammates may factor into that as well, partially from having unspoken frames of reference to infer from.

It is unmistakable how much the difficulty level ramps up when you're paired with those of an unlike-nature to you. Sometimes that level of abstraction is taken way outside of generic context clues.


It is, but we've also had decades of practice. What scares me the most about AI isn't how advanced computers can become but how slowly we learn in comparison.


Actually, what I had in mind when I mentioned that was playing a game I coined "foot pictionary" (we've also played "blind pictionary") with kids ages ~6 to 10yo.

We use very generic "words" (eg egg, tree, bike, cloud, plate).

When you're using your foot to draw you really have to distill down to the essence of the item. Yes there is a deal of guessing but in some way the image (however unlike the object) has to have some element of the Platonic nature, if you will, of the object being drawn.

Fun!


You're just wrong on this one. Humans can recognise a lot of things that aren't in the form that they're used to. It's seen a lot of research in psychology.

As for advantages over neural nets, one of the primary ones is that humans can recognise things from unusual angles much more easily. When I tried QuickDraw and doodled things from non-stereotyped angles (like a three-quarter view of a car rather than the usual 2D side view), it had no idea.

The dalmatian optical illusion[1] is another example of the human ability to pick out patterns and assign them to certain objects. Neural nets have different abilities, and are sometimes better at picking out different sorts of patterns than humans.

[1] http://cdn.theatlantic.com/assets/media/img/posts/2014/05/Pe...


> 2. The software can't recognize a feather if it's never seen a feather like that. It's not a sentient being.

Why did this word "sentient" sneak into your comment? I don't see what "sentience" has to do with what you just described; it's just a more sophisticated form of pattern matching.

"See, it can't do this! It's not self-aware!" is almost never the correct answer, because whatever thing it is you want to do will probably be solved in the future with more of the same techniques. Just about the only thing "sentience" or self-awareness is good for is an entity's private experience, which you wouldn't ever be able to see anyway.


I think sthlm's thinking that people as in:

>People read news articles like "X can recognize cats...

may assume sentience when it's not there


I don't think "sentience" is a sufficiently precise term to enable us to judge whether it's there or not.


"Doesn't look like anything to me"


Highly relevant to your two-point example:

http://rocknrollnerd.github.io/ml/2015/05/27/leopard-sofa.ht...


>The software can't recognize a feather if it's never seen a feather like that. It's not a sentient being

Like humans brains?

>are quick to assume that this "AI" can make sense of a picture and understand it, when all it does is apply certain methods for a certain use case.

Like human brains?


No, not at all. If you only showed it a bunch of pickup trucks in various colors, it would be really good at identifying pickup trucks. But if you then showed it a Prius, or a motorcycle, it would have no idea that it was looking at a vehicle. A human brain wouldn't have much trouble with that, though, because it associates more information with the vehicle idea than just statistical similarity to previously seen shapes, and can extrapolate without having direct previous experience with the object being seen.


If you showed a small child 10 pictures of pickup trucks and told them "These are cars" then showed them a motorcycle and said "What is this?" what do you expect to happen?

Remember, this child has never been on the road, never driven a car, never had the mechanics of locomotion taught to them. All they know is that objects that are longer than they are tall with a flat bed on one side and wheels on the bottom are classified as cars.

Once the child (or machine) has more information to associate with the 'vehicle idea' it can call on this information when it sees shapes that are also associated with the 'vehicle idea' in order to extrapolate without having direct previous experience with that object being seen.


Trucks are generally not classified as cars, nor are motorcycles. These are all types of vehicles, per my original terminology. I actually did a similar experiment with my friend's daughter (3 years old) and she was able to figure it out just fine. Humans are generally able to extrapolate that things with wheels move, and if they have a seat, it's meant for someone to sit on, while it's moving. Hence a vehicle. It's this level of conceptual understanding and "how would this thing work" thinking that ML lacks in comparison to human brains. People use more than just sight recognition to identify new objects, while current ML models do not.


Maybe some current implementations lack the ability to make these connections, but it is in no way even a small stretch to conceive of a machine that understands "Wheels are for moving", "Seats are for passengers", "Things that have both wheels and seats are probably vehicles".

So when that machine learning algorithm recognizes wheels in a picture and recognizes seats in the same picture, it searches for results that include both wheels and seats.

The human brain does not inject any magic into this process.


It sort of does, though. Let's say we train an ML implementation so that it can recognize things with wheels and seats as vehicles. Now we show it a hovercraft. What will it do? How about a helicopter? All the human brain needs is a single example of people getting in or on something, and it transporting them from point A to point B in order to infer that the thing is a vehicle of some sort. This is because we are able to infer purpose of an object even if we have never seen it before. ML is just statistics - it implies no meaning or comprehension whatsoever beyond "thing A is statistically most like thing B I have seen before". There's an important difference between recognition and understanding, and current ML techniques are solidly in the former camp.


The often forgotten difference between ML and humans is that we learn from stereoscopic video streams, not from a bunch of static pictures. There's a lot more information in a few seconds of watching cars on the road than in a thousand pictures of different cars. We get to see the 3D picture (we have dedicated circuits for that), hear 3D audio and perceive temporal data. We correlate all that and many more data sources to form categories.

ML trained on bunch of static pictures is like humans dealing with those abstract geometrical riddles that are used on IQ tests. They're difficult for us, because they're not related to our normal, everyday experience.


Neural networks can learn new categories of things like that with about 5 examples. They are already outperforming humans on some tests. https://news.ycombinator.com/item?id=11737640


Not exactly: if you've never seen a particular kind of feather before, you may not recognize it at first sight, but most certainly you'll sit, examine it and eventually acknowledge it's a feather -- the neural networks we're using aren't prepared to do this kind of analysis yet.


Drawing your triangle upside down is enough to hit the limits!


#2 applies to humans as well. For example if I show a human something that looks and has all the properties of a car, the human will think I am showing him a car even if the thing I am showing him is actually called a feather.

Any neural net, artificial or not, can only recognize things as long as it looks similar to what it thinks the thing should look like.


I can recommend perusing the source code. It's well written and documented. Such projects always put a smile on my face, since they demonstrate how far we've come in some respects:

- We've learned how to write and produce reusable, easily digestible code

- Python gives us a language that is concise, readable

- Frameworks like OpenCV let us do incredible things

- Open Source allows us to share it and collaborate*

- Platforms like Github facilitate the entire experience

10 years ago it took forever to configure my webcam on my Linux machine. I had to scour mailing lists and custom web sites to download various versions of kernel patches. Now my webcam is built-in and it takes 2-3 commands to take something off of Github and have fun.

* Of course Open Source is not new, but today it really seems like "Open Source won".


What about this magic string: "slouching_alert(QString, QString)"? I don't know Python. Does it work by reflection? I assume these are some kind of bindings to an underlying native API or something? It doesn't look very maintainable...


Qt uses slots and signals to control messaging between threads (e.g. the UI and the workers).

The Python binding to Qt is good, but it is auto-generated (from C++) and some of it shows up as non-Pythonic mechanisms or conventions.
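
For a concrete picture, here is roughly what that old-style mechanism looks like in PyQt4. The signal signature matches the string quoted above; the class and handler names are illustrative, not taken from Slouchy's code:

    # Old-style PyQt4 signals: the signature travels as a string, and Qt matches
    # emitter and receiver on that string at runtime, so a typo simply means the
    # connection silently never fires.
    from PyQt4 import QtCore

    class Worker(QtCore.QObject):
        def check_posture(self):
            # ... detect slouching, then notify any connected listeners:
            self.emit(QtCore.SIGNAL("slouching_alert(QString, QString)"),
                      "Posture", "You are slouching!")

    def on_alert(title, message):
        print(title, message)

    worker = Worker()
    QtCore.QObject.connect(worker,
                           QtCore.SIGNAL("slouching_alert(QString, QString)"),
                           on_alert)
    worker.check_posture()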


Newer versions of PyQt have replaced the magic strings with a system that's much more Pythonic, using decorators and OOP.

I guess the author just wanted to keep backwards compatibility with old versions (or isn't aware of the change).

Edit: If anyone's interested, here's the new syntax: http://pyqt.sourceforge.net/Docs/PyQt4/new_style_signals_slo...
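
For comparison, a rough sketch of the same connection with the new-style API; the class and handler names are made up to mirror the sketch above, not taken from Slouchy:

    # New-style PyQt signals and slots: the signal is declared as a class
    # attribute and connected with a plain method call, so mistakes fail loudly.
    from PyQt4.QtCore import QObject, pyqtSignal, pyqtSlot

    class Worker(QObject):
        slouching_alert = pyqtSignal(str, str)    # no magic string

        def check_posture(self):
            self.slouching_alert.emit("Posture", "You are slouching!")

    class Window(QObject):
        @pyqtSlot(str, str)
        def on_alert(self, title, message):
            print(title, message)

    worker, window = Worker(), Window()
    worker.slouching_alert.connect(window.on_alert)
    worker.check_posture()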


Thanks for this. I had no idea a different mechanism existed.


is PySide2 also getting that new system?



Nothing to do with Python, it looks like a Qt thing. Not much the author can do about that.


Hey there sthlm. Slouchy author here.

I just wanted to say thank you for your comments. It really put a smile on my face to see someone liked my actual code.


In Javascript, many unicode characters are allowed [0], so háćḱéŕŃéẃś is a valid variable name [1].

Note: The number of іllэБіъlэVаѓіаъlэИамэѕ [2] used in your production code is inversely proportional to the number of friends you'll make in the maintenance team.

[0] https://mathiasbynens.be/notes/javascript-identifiers

[1] https://mothereff.in/js-variables#h%C3%A1%C4%87%E1%B8%B1%C3%...

[2] http://www.panix.com/~eli/unicode/convert.cgi?text=illegible...


I had quite a lot of fun defining 汉字 variable names in C#. Though definitely not something to put into production code of course...


The Yahoo Directory was Yahoo's first product.

This is a stark reminder that very few things last, and maybe a point to reflect on how Yahoo succeeded and failed to pivot in other directions.

Can you imagine Google terminating Search? Facebook terminating their home page? What else would they do and would they be successful?

In a historical context, back in the day directories were a big deal, it took a long time for search engines to become powerful enough to rival the usability. A piece of the old web is gone now.


The original "school networks" of Facebook are long gone; their vestigial remains are insufficient to track down fellow alumni. The newsfeed, like the Yahoo portal content, was a later addition.


> rival the usability

Not really their utility! Search by keywords/phrases sorted by popularity and date does not fully replace a good directory, which, in contrast, can be a list curated by human experts in ways that keywords/phrases cannot do more than just begin to characterize.

Rough, old proof of this point: Why the Yellow Pages, with their categories and ads, were so much more useful than just the white pages. E.g., the ads gave a lot of details beyond just the keywords used in the alphabetical lookup. So, do the alphabetical lookup and then look at the ads for more details. The larger ads in effect were a curation.

And there are other examples of directories.

Am I the only one who thinks this way?


Graphs are a poor visual representation for a lot of data sets. In cases where they provide significant benefits (think maps, dependency structures, routing), graph layouts are hard problems.

Generic algorithms are great for large networks but computationally intensive. Small network diagrams with an explicit message often have to be manually curated to make sense.

That said, this is a great library, especially since it works so well in the browser. I'm looking forward to future development.


I am a huge proponent of one-page websites for a variety of scenarios. I find that browsing existing examples is one of the best ways to get inspiration and ideas for what is possible.

It's also interesting to browse the source to get a feel for what is and isn't easily possible with different frameworks and how some things are implemented.


I had the following experience, which I considered quite useful.

I was traveling and in the meantime let a friend stay at my apartment. For this, I moved all of my belongings to the basement. After I returned, I shared my apartment with another friend. It was small and quite full, so I only took back the bare necessities from the basement. At some point I noticed that I had taken all that I needed from the basement, and that the rest was mostly unnecessary. So every once in a while I go to the basement to take out some things and throw them away.

The good thing about this approach was the following: Don't go through your stuff and think what you can throw away. Assume that you want to throw everything away and pick what you want to keep.

It's much easier to decide to keep something than to decide to throw something away.


I thought the whole budget was rather low compared to the total value that is to be created.

Sure, 100k for 1 language sounds odd, given that 200k was enough not only for 2 languages, but also the entire framework. Still, the overall price point is more than fair.

You have to take into consideration that Light Table aims to be specific; adding further languages will likely require more effort than merely adjusting an IDE a little bit.


Of course, the more popular the language, the higher the value. $200k for a Clojure IDE is certainly overvalued for the number of people using it, but $300k is way undervalued for Python. Not sure about JavaScript; obviously it's popular, but it's unclear whether this IDE supports anything but niche (non-browser) use.


Depending on how its Clojure support is implemented, LightTable may be usable for ClojureScript projects, and thus allow developers to target browsers and NodeJS (and other JS runtimes).

Do a search for "Pluggable Backend Infrastructure for ClojureScript, and Development of a Lua backend" and you should turn up a Google Summer of Code 2012 project which seeks to broaden the scope of the ClojureScript compiler. If that project is successful, and if LightTable supports ClojureScript, the IDE's reach may be greatly expanded in the relatively near term.


Sure, but you still need to use Clojure, which is very unlikely to be of use to the average programmer.


I somewhat doubt he's targeting the average programmer.


I'm not saying 300k is a lot of money for LightTable. Sure, it sounds fair.

I'm just saying they claim to be "highly extensible" and yet, support for a new language requires a development effort of 100k.

It's sad since I mostly code in Ruby or Scala; these languages won't be supported, and I don't think anyone is going to make a 100k development effort to support them.


The fill-out-the-code-flow stuff is going to be -much- harder to do for python than for clojure, I suspect.

Just because it's highly extensible doesn't mean that beating the other language's VM into doing what's needed isn't going to require a bunch of effort.


It's not as if we were purchasing one language. It's a stretch goal, with Python as the reward. The goal and the difficulty of implementation are not related.

