Hacker News | new | past | comments | ask | show | jobs | submit | benl's comments

Your explanation of finding a surface to separate good reasoning traces from bad reasoning traces in a high dimensional space worked as a great framing of the problem. It seems though that the surface will be fractal - the distance between a good trace and a bad trace could be arbitrarily small. If so then the work required to find and compute better and better surfaces will grow arbitrarily large. I wonder if there is a rigorous way to determine if the surface is fractal or not.
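One rough way to probe that question is box counting: tile the space with an s-by-s grid, count how many cells N(s) the separating surface touches, and see how the exponent log N(s) / log s behaves as s grows. An exponent near the expected dimension suggests a smooth surface; a persistently higher exponent suggests fractal-like roughness. A minimal sketch, using a made-up 1-D boundary curve in the unit square rather than any real model's decision surface:

```python
import math

# Count the cells of an s-by-s grid that a boundary curve y = f(x) touches.
# For a smooth curve N(s) grows roughly linearly in s (exponent near 1);
# a fractal boundary would show a higher exponent.
def boxes_touched(f, s, samples_per_cell=10):
    boxes = set()
    steps = s * samples_per_cell  # sample finer than the grid
    for i in range(steps + 1):
        x = i / steps
        y = min(max(f(x), 0.0), 1.0)
        boxes.add((min(int(x * s), s - 1), min(int(y * s), s - 1)))
    return len(boxes)

# A deliberately smooth stand-in boundary, not a real good/bad-trace surface.
smooth = lambda x: 0.5 + 0.25 * math.sin(2 * math.pi * x)
for s in (8, 16, 32):
    n = boxes_touched(smooth, s)
    print(s, n, round(math.log(n) / math.log(s), 2))  # exponent stays near 1
```

This only gives an empirical exponent at finite scales, so it can hint at fractal structure but can't rigorously settle it.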


> Sydney

> Venom

> Fury

> Riley

"My name is Legion: for we are many"


Darkness is the absence of light. In this usage light would represent a moral agent, and so darkness is its absence - either no morality or no agency or both.


True Due Date is great, thank you for making it!

My wife is pregnant and, because the nearest maternity unit is 1hr45mins drive away, we're going to rent a place near it around the due date. This just gave me a confidence boost about what dates to be there. Thank you!


Thanks for the feedback!

Best wishes for a healthy pregnancy and delivery for mom and baby!


If you strip out the AGI hype then this just sounds like OpenAI is now moving to monetizing their tech. This makes sense for them but probably not for the philanthropists who originally backed them.

Sadly for them, AGI is metaphysically impossible - this will be realized eventually but a lot of waste and possibly harm will happen first.

We are not just super sophisticated machines, so the fact that we can think doesn’t tell us anything about what’s possible for machines. But philosophy does - and it tells us you can’t get mind from matter, no matter what configuration you put it in.


I'm a believer that we are super sophisticated molecular machines, embodied in matter.

Can you provide some material that supports your claim that AGI is metaphysically impossible - I always like hearing from people with views opposite to myself.


I'm skeptical his claims are substantive. As with all things in philosophy there are competing and supporting theories, and on this age-old question underlying AGI I doubt the field is as conclusive as he believes.


I used to be a believer of the theory that we are super sophisticated machines. When I read some of the philosophy on the subject I changed my mind. I now believe there must be some immaterial component to our minds.

There’s a lot to read out there on this subject, but I found expositions of the philosophy of Aristotle and Aquinas to be the clearest and most convincing for me. Lots of different books and articles exist on them both - pick one that sounds like it suits your style of understanding.


> I used to be a believer of the theory that we are super sophisticated machines. When I read some of the philosophy on the subject I changed my mind.

What philosophy? Be specific

> I now believe there must be some immaterial component to our minds.

What specific points or ideas made you believe that?

> There’s a lot to read out there on this subject

So provide some examples, be as specific as possible

> but I found expositions of the philosophy of Aristotle and Aquinas to be the clearest and most convincing for me

These two wrote a lot on many subjects, can you be specific on the points that convinced you that we are not super sophisticated machines. Don't vaguely point at a couple of authors, we are talking about a very specific idea.

> Lots of different books and articles exist on them both - pick one that sounds like it suits your style of understanding.

If there's lots then cite some examples, or better yet, rather than vaguely pointing at a book, (which is only marginally more useful than vaguely pointing at an author) let's discuss the specific ideas exactly.


I found “Aristotle for Everybody” by Mortimer J. Adler to be really great. The topic of the immateriality of the intellect is covered in the last few chapters, but the rest of it is great stuff too.


Sounds like @benl has been afflicted with the "Cartesian wound". Such dualistic thinking and ideas like free will are ~hard for us to work through. But perhaps the more important, and immediately tractable, question @benl brings up is what our approach should be: should we make an AGI, or better IA - Intelligence Augmentation?

~hard: Daniel Dennett, "From Bacteria to Bach and Back"


You might be more familiar with the field than me, but my understanding is that Dennett's position is not well thought of in the fields of philosophy of mind and metaphysics. At the very least there are very good cases made that unpick his position very carefully. They're not all Cartesian views - I grasp the Aristotelian views best myself.


Thanks, I will keep studying. In the meantime my actions will veer more towards IA than AI.


> But philosophy does - and it tells us you can’t get mind from matter, no matter what configuration you put it in.

Curious - do you think humans have mind? because if so we are very much matter and if not well that's an interesting thought as well.


That’s right - we have minds therefore we must be more than just matter.

I used to think the opposite, but reading the philosophy on the subject changed my mind. There are a lot of different takes on the topic, but what most added up for me was the philosophy of Aristotle and Aquinas. There are many great expositions of their work out there.


AGI in the sense of robots that can do the jobs people can, design better robots and so on would be a game changer in itself. You can leave it to philosophers to argue over whether they have true feelings.


But, not even matter is "very much matter".

I'm a quantum maximalist: the brain is just the antenna, receiving and broadcasting. Attention itself cuts (slices) through the quantum soup, and as a result, these mind-forms appear.


A framing: can we make a rock think?

I don't know the answer, but that some people think they do upsets me. I definitely think we should try, but right now mostly what we do is make a rock DO, so I'm not seeing the leap yet.


I would ask for evidence to support your claim, but I think Newton’s Flaming Laser Sword probably applies in this case.


Well, machine is a name for a stance of analysis; there are no machines in the real world (which is not to say that there are no mechanical linkages), only in our minds.

FWIW, consciousness has no properties and so cannot be studied scientifically.

However, consciousness can be explored experientially, i.e. two conscious beings can merge and experience self as one being. (See Charles Tart's experiment with mutual hypnosis.)


Yes, I used to hold that view too. But actually it turns out that the null hypothesis is that mind is at least partly immaterial, because all attempts to demonstrate the opposite philosophically are fraught with difficulty. I’ve found that the thought of Aristotle and Aquinas, when explained by modern philosophers, best explains to me why that’s the case.


> ... because all attempts to demonstrate the opposite philosophically are fraught with difficulty.

Can you give at least a rough sketch or gist of the argument you are referring to?


I’ll try because you asked me to, but I think I’ll do a bad job. You’ll get a much better understanding by reading on the topics of philosophy of mind and metaphysics. Here goes, though:

1. Purely immaterial things exist. Think of mathematics or the laws of logic or physics - these things exist as ideas or concepts, not arrangements of matter.

2. Some abstract concepts cannot be embodied in matter at all. For example, you can make a shoe, you can draw a shoe, but you can’t draw shoe-ness. You can understand and reason about what makes something a shoe in the abstract, but you can only make or draw an individual shoe.

3. The mind contains these purely immaterial things when we think about and reason about them.

4. If we can use the abstract concepts, but the abstract concepts can’t be embodied in matter, then the mind must be at least partly immaterial in order for the concepts to be in our mind.

I hope that helps, but please don’t rely on my exposition of the case - a real philosopher would do it justice.


The Crown of Thorns is kept in the treasury at Notre Dame and was due to be displayed all day this Friday for Good Friday. How it ended up there is an interesting tour of European history in itself. Let's hope that it has been saved.


It's rather disingenuous of AI researchers to complain of overhype when they are the ones claiming that their tech should be used to drive cars and hence, as we've seen, kill people.

AI winter will be caused, once again, by the failure of the technology to do what the researchers and practitioners claim it can do. This time, tragically, with fatalities.


It seems reasonable enough to argue both that 1. AI should take over certain human roles like driving, which causes millions of fatalities due to human error, and 2. it's silly to frame every new step in AI as part of a grand road to SkyNet. The first is proposing AI for a discrete task, the second is extending this way way out to consciousness or something.


Yes, but it's my argument that claim 1 is incorrect and overhype. AI cannot drive better than humans, and that was a hubristic claim.


> "Non-empirical statements are meaningless"

This is a non-empirical statement, given that you probably don't believe that you can demonstrate the truth of it empirically.

Putting it another way, perhaps you might agree with the following statement?

"All legitimate knowledge is gained empirically."

But how do you know this? Did you reach this conclusion empirically?

So there must be some things that you know through non-empirical means.


> But how do you know this? Did you reach this conclusion empirically?

Yes I did, that was my point. I haven't solved and am not claiming to have solved the problem of induction - the generalisation from "a bunch of empirical knowledge turns out to be valuable/effective/legitimate and all the supposed non-empirical knowledge I've seen turns out not to be valuable/effective/legitimate" to "all valuable/effective/legitimate knowledge is empirical" rests on potentially shaky ground. But that's a problem that already exists when making ordinary, object-level generalisations about the universe; it doesn't render the conclusion any weaker than ordinary scientific conclusions.


That sounds like you're saying something like this:

"I believe empiricism is true because empiricism seems to be true."

We strive to live our lives based on reason, so we should look for ways to understand the world that go beyond a circular argument.

Such lines of thinking exist. They have been well argued and debated and have much going for them. Plenty of places to start learning about them, but maybe start with Aristotle.


> We strive to live our lives based on reason

I don't think we do. Reason is a means to an end, not a goal in itself.

> so we should look for ways to understand the world that go beyond a circular argument.

I don't see it as circular, but even if it were, my point is it's impossible to do better: all of us accept everyday common sense before we can even begin to argue technical philosophy, and if we're willing to set it aside then there are infinitely many self-consistent things we could think and no reason to prefer one over another. So no amount of sophistry will ever get you away from having to believe in everyday common sense.

> Such lines of thinking exist. They have been well argued and debated and have much going for them. Plenty of places to start learning about them, but maybe start with Aristotle.

Please. You're dismissing rather than engaging. If you're not willing to actually contribute to the discussion then don't post at all.


I'm sorry you thought I was being dismissive. I felt I had reached the limit of my own persuasiveness on the question and wanted to point you to somewhere better than me.

One final point I will try to make is that in thinking about how we know things, there's no suggestion that we need to set aside common sense. It's about starting with common sense and then seeing what we can add to it.


That's only an empirical generalization if you can cash out "valuable/effective/legitimate" in genuinely empirical terms (at minimum, in terms of observer-independent observations free from value judgments).


> That's only an empirical generalization if you can cash out "valuable/effective/legitimate" in genuinely empirical terms (at minimum, in terms of observer-independent observations free from value judgments).

I can cash it out empirically as "generates accurate empirical predictions and suggests fruitful avenues for future investigation" (fruitful in the sense of ultimately leading to more detailed and accurate empirical predictions). That the measure of a theory is the accuracy of its predictions is of course a subjective human position (there are an infinity of possible measures on which to evaluate theories, and a priori no reason to prefer one over another), but again that's (a cautious Neurath's boat extension of) the common-sense way that we all evaluate theories in practice in everyday settings.


No, that's not even close to cashing out the generalization in empirical terms. To do this you'd need to specify exactly which observations would confirm or disconfirm it. Without the parenthesized parts, your gloss of the generalization remains vague and value-laden. With the parenthesized parts it is virtually tautological, since it's in the nature of empirical knowledge to generate accurate empirical predictions. It's surely not news to anyone that if forms of knowledge which lead to detailed empirical predictions are superior to other forms of knowledge, then empirical knowledge is superior to other forms of knowledge.

What you really seem to want to do, then, is argue from the nature of empirical knowledge itself to the conclusion that it is better than other methods of gaining knowledge. But that requires rational argument to back up the italicized statement above, not (just) an inductive generalization. And then we come back to the problem that it is impossible to find suitable premises for such an argument which can themselves be known empirically.

(For reference, the generalization we're talking about here is that "a bunch of empirical knowledge turns out to be valuable/effective/legitimate and all the supposed non-empirical knowledge I've seen turns out not to be valuable/effective/legitimate".)


> But that requires rational argument to back up the italicized statement above, not (just) an inductive generalization.

Why? Everyone evaluates ordinary, everyday knowledge in terms of its empirical predictions, so everyone seems to accept the italicised statement in practice, even if they'd argue for some sophisticated alternative in the abstract.


Hi, I'm Ben, CTO of Academia.

Everyone who works at Academia would love it if we were able to make advanced search free.

When we first decided to build a premium account, we also made the decision to not take anything out of the free account. Strange as it may seem, the free account never had full-text search because we couldn't justify the cost of building it (full-text search of 20MM PDFs at our traffic levels is expensive to operate). We built it for the premium account because people asked for it in our initial research - and we would love to be able to eventually move it into the free account.

On the team we all agree that we want to keep building premium features in order to make the platform sustainable. The author of this article takes the view that advanced search is not a feature that should be paid for. My view is that we intend to keep building features until we have something that is worthy of his support. The support of the academics who use and enjoy the platform, both in free and paid accounts, is what will keep it around and growing for the long term.


Why don't you just figure out how much it's gonna cost you to implement, publish that, and ask for the money? I am 100% on your side, but I'm not interested in a bunch of organizational doublespeak; just say you can't afford it and you need $600k or whatever to do this, because (budget breakdown). Life's too short to waste time parsing business emo messaging.

How much money do you need? Publish your revenue model and stuff so people can help you with it.


This is honestly a terrible idea. It will get posted to HN, and the comments will be a flood of "why do you need that much to do this, I could do it for $5 using list of this week's vaporware buzzwords that won't survive to the end of the month".


Who cares? If the problem is money then the next question is how much. It doesn't require universal agreement.

Look, what this boils down to is that Academia.edu is a private venture: people are investing in it in the hope of being able to make a profit later. Here the firm says they want to bring as much information as possible to as wide an audience as possible, but need to raise additional money to add this (rather obvious) functionality. How much? That's a secret, because revealing it might reduce the profit the investors are hoping to get out of it.

I'm very much in favor of the stated goal of disrupting the existing academic publishing/cataloging oligopolies. But if that's really the priority, then stop being so secretive. And if profitability and becoming the new monopolistic incumbent is really the priority, then stop bullshitting me with feel-good mission statements and just cold-call more rich people until someone writes a check.

I know I'm pressing things very bluntly, but I don't think the habit of corporate doublespeak that has become the norm in society is actually doing anyone much good, including the people engaging in it. Nobody really wants to get up in the morning and spend their day bullshitting people with cliches, that doesn't create value for anyone.


Why is accountability a bad idea?

Charities do that all the time. Some are well funded exactly for this reason. Most people understand good things cost money when thinking long term.


Accountability to well-informed people who understand the tradeoffs involved in building sustainable real-world infrastructure is great.

HN is not an audience of that kind of people.


It is when people want to launch things and get lots of buzz - then HN is great. But when people express criticism, suddenly the user community is a bunch of angry peasants to be kept at arm's length.

I've gotta say I'm getting real tired of entrepreneurs that want to be everybody's friend when they're getting their exciting new venture off the ground but are too cool to discuss the nuances of business with anyone outside the VC bubble whenever they run into a PR problem.

That's a general statement, not directed at the academia.edu team. It's a sad reality that a lot of what passes for entrepreneurship today involves telling users, employees, and investors that they're each the most important group so as to make as many people as possible happy, right up to the point where a conflict of interest emerges and then trying to obscure the fact of its existence with platitudes.


I'm not an entrepreneur and don't necessarily want to be HN's friend.


Are you saying that HN commenter response is a good barometer for business ideas? A free and well-diversified focus group?


Bioinformatician here. I appreciate that you need to make money somehow. I do find this a little hard to believe, though:

> full-text search of 20MM PDFs at our traffic levels is expensive to operate

Assuming you convert them to text once, index them, and put them in a standard FTS engine, I'd guess it is on the order of 100GB-1TB of text (max), plus some more for the index (basing these estimates on my experience text mining PubMed Central and MEDLINE). So it can all fit on a pretty standard server. Maybe at 100 req/s it would take a few servers. Yes, you'd want replication.

The number of servers required to get good latency FTS is the part of this that I'm least familiar with. Anyone have a ballpark, given these estimates, on what kind of hardware would be required? (I could easily be wrong, and indeed this is very expensive. If so, I'd be curious about ballpark numbers)
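The arithmetic behind that guess can be sketched directly. All figures here are assumptions for illustration (average extracted text per paper and index overhead are guesses, not Academia's actual numbers):

```python
# Back-of-envelope sizing of a full-text corpus of 20MM PDFs.
DOCS = 20_000_000        # 20MM papers
AVG_TEXT_BYTES = 25_000  # ~25 KB of extracted text per paper (assumption)
INDEX_OVERHEAD = 0.5     # inverted index at ~50% of corpus size (rule of thumb)

corpus_gb = DOCS * AVG_TEXT_BYTES / 1e9
index_gb = corpus_gb * INDEX_OVERHEAD
total_gb = corpus_gb + index_gb
print(f"corpus: {corpus_gb:.0f} GB, index: {index_gb:.0f} GB, total: {total_gb:.0f} GB")
```

With those assumptions the whole thing lands around 750 GB, comfortably inside the 100GB-1TB range quoted above, i.e. a handful of commodity servers.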


I maintain a P2P on-premise FTS search engine, though this one indexes many types of text (plain, HTML, PDF, DOC).

One 8c server (running about 10 workers) can handle 8 to 10qps, depending on the depth required. This is on an index of 20 million documents. If the number of workers is constant, doubling the index will halve the qps. 2 million docs take about 50GB of disk space (20 million = 500GB, 1TB with redundancy).

It's better to go with SSD arrays here, since the random-IOPS demands are much higher than for other workloads. This can skyrocket cost.

So for this (our) system, it could be as cheap as $1k for the hardware, e.g. using the Foxconn Purus cloud server: http://www.bargainhardware.co.uk/cheap-e5-2600-lga2011-sixte...
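Those scaling figures (about 9 qps per node at 20 million docs, with per-node qps halving each time the index doubles) give a quick cluster-sizing rule. A sketch using exactly the numbers quoted above, which are this thread's estimates rather than measurements:

```python
import math

# Estimate node count for a target query rate, assuming per-node throughput
# halves each time the index size doubles past the baseline.
def nodes_needed(target_qps, docs, base_qps=9.0, base_docs=20_000_000):
    doublings = max(math.log2(docs / base_docs), 0)
    per_node = base_qps / (2 ** doublings)
    return math.ceil(target_qps / per_node)

print(nodes_needed(100, 20_000_000))  # nodes for 100 qps at the baseline index
print(nodes_needed(100, 40_000_000))  # roughly double after the index doubles
```

So at these assumed rates, even 100 qps against a doubled index stays in the low tens of nodes.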


Thanks so much. Fascinating. I've made many of these types of app but never to "web scale". Bioinformatics apps are a bit niche, and we take the view that our non-paying academic "customers" can wait however long it takes to finish the query, in the unlikely event that there is high load.

Just to make sure I understand correctly, that's about $1-2K of one time cost per 10qps (w/o SSD, and not counting power and maintenance, etc)? When I first saw "cloud server", I thought that was a per-month rental cost, but the link is for actual in-house hardware. If this is even close to correct, my suspicions seem confirmed.

Except for one thing. I have no idea how many qps a site like Academia would have. 100qps was completely out of my ass, but it seemed hard to imagine it being any more than 1-2 orders of magnitude higher, at most. Any guess on that?


Yeah, in my example, the $1k is for one server (cluster node). These servers (Purus, Quanta etc.) are commonly used to rapidly build enterprise clouds (you usually buy them by the rack). It's the closest thing you get to plugging a network cable into a bunch of Xeons. The cost of one system breaking is negligible. This is not counting the colo costs.

You can also do this with virtual public clouds (AWS, linode, GCP et al), but you'll of course pay a premium for the infrastructure. This might be worth it though, because you can now scale within seconds to handle qps bursts. Usually, latency can be lowered by going baremetal (see e.g. Algolia).

Academia should be able to handle more qps than our system, because the queries are really trivial in comparison. With decent caching, an 8c should be able to do 50 to 80qps. That's what I get from a few experiments when I switch my test cluster into restricted mode (basically just substring search).

Of course I can only speak from my experience, not how this can be applied to Academia's existing infrastructure. Testing large search engine deployments can be really, really frustrating.


Hi, what do you mean by P2P FTS? Can you please elaborate? Solr, Elastic, some other custom thing you wrote? Language? I'm curious...


This feature isn't going to be effective for you without some rethinking. People can simply search on Google: site:academia.edu "Potterheads" and retrieve all the results.

Full-text search is freely provided by Google. Judging by how many people caught on to using a similar trick with WSJ, it will start to become a very popular way of interacting with the site.
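For the record, the workaround described above is just a site-restricted query string. A trivial illustration of constructing it (the search term is the example from this thread):

```python
from urllib.parse import urlencode

# Build a Google search URL restricted to one site, quoting the phrase.
query = 'site:academia.edu "Potterheads"'
url = "https://www.google.com/search?" + urlencode({"q": query})
print(url)
```

A browser extension doing this automatically would only need to splice the current search box contents into that query.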


You're overestimating the average Google user's google-fu. Most people aren't aware of the 'site:' functionality.


Fair, but I also wouldn't be surprised if someone just builds a browser extension to do this automatically on academia.edu.

Also, that sounds like an assumption. Do you have anything concrete to back that up? Assumptions are often broken in painful ways for businesses. You might be surprised how many academics are motivated to learn about Google's advanced search and tell their friends about it if it saves them money.


I think it would be a little easier to accept (and probably easy to implement) if, in addition to title search, authors and keywords were also searchable. A search for my PhD advisor's last name results in 0 papers, though I uploaded a few papers that he and I coauthored.


That's a good idea. It's already possible to search for author names separately and then find the papers on their profile (if they have one), and it's also possible to search for a research interest tag and find the papers tagged with it. It would make sense to unify those with the title search.


> he and I coauthored

…which means that you wrote them and he, ah, "co-authored" them?


Yes, I did write most of the text, but without his guidance on the research and on what to write about more in depth, there would have been no article. He's been the best advisor. Don't you dare say bad things about him! :-)


Nice to hear!


If you have enough experience to guide me wrt what the interesting questions in a field are and how to make potential answers to those questions "meaningful", then I'm happy to write and have you as a co-author.


Hi Ben.

Will Academia ever stop impersonating people in e-mail? I receive e-mails all the time purporting to be actually sent by academic colleagues, which are instead form messages sent by Academia.edu.

I know that you're not sending these messages with permission. I know that my dead co-author is not giving you permission to send unsettling e-mails from him.


Hi - Please forward me a copy of one of those emails (to my first name at academia.edu).

We only ever send emails from users in response to a request from them to do so (e.g. they send a message or invite a co-author). Please send me the email and I will look into it.


Okay... this turns out to have been a false accusation. I'm very sorry. The company that does this is ResearchGate, not Academia.

Academia's e-mails have appropriate From: lines, and don't appear to be sent from beyond the grave, and you deserve credit for that.


> full-text search of 20MM PDFs at our traffic levels is expensive to operate

Why not do a free full-text search on the abstract or first couple of pages (or few hundred words)?

This might make the premium search upgrade more subtle but I wonder if it could help the majority (?) of legitimate searches get the hit they expected/hoped for.

... Perhaps this would appease the haters?
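Mechanically, that free-tier idea is just truncating each document before indexing. A sketch, where the word cutoff is an arbitrary illustrative number, not a real Academia parameter:

```python
# Keep only the first N words of a paper's extracted text for the free index.
def free_tier_text(full_text, max_words=500):
    words = full_text.split()
    return " ".join(words[:max_words])

print(free_tier_text("one two three four", max_words=3))  # "one two three"
```

The premium index would then cover the full text, while the free one still catches abstract and introduction hits.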


That's a really good idea.

As Richard mentions in another thread, search is not actually the primary discovery mechanism on Academia. The social features (the news feed, bookmarks, sharing, recommendations) are the primary discovery mechanism. These are all free and we want to keep them that way.


Huh? Lucene can index that many documents no problem.
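The indexing/query split Lucene provides can be shown in miniature with SQLite's FTS5 module as a stand-in (this is a toy, not how Academia or Lucene would actually be deployed, and it requires an SQLite build with FTS5 enabled, which stock CPython usually has):

```python
import sqlite3

# Build a tiny in-memory full-text index and query it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE papers USING fts5(title, body)")
conn.executemany(
    "INSERT INTO papers (title, body) VALUES (?, ?)",
    [
        ("On Potterheads", "A study of fan communities and Potterheads online."),
        ("Quantum Widgets", "Nothing about fandoms here."),
    ],
)
hits = conn.execute(
    "SELECT title FROM papers WHERE papers MATCH ?", ("potterheads",)
).fetchall()
print(hits)  # only the matching paper's title
```

The hard part at 20MM documents isn't the indexing itself but replication, latency, and random-IOPS cost, which is presumably where "expensive to operate" comes from.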


> On the team we all agree that we want to keep building premium features in order to make the platform sustainable.

Is there any concern that paywalling features will reduce the number of users on the site?


Yes, we don't want that to happen. That's one reason (but not the only reason) that we haven't paywalled any of the existing free features.


Academia.edu (San Francisco, CA) - Software Engineers, Designers, Data Scientists

At Academia.edu, we're trying to accelerate the scientific process by changing the way scientific research is shared. Right now publication is slow, un-innovative, and expensive - we want it to be fast, innovative, and free. We're a platform where scientists upload and share their research directly, and track metrics on the impact of their work. These metrics help when they apply for jobs and grants.

A leading climate scientist in Germany told us "Academia.edu shows the impact of your work that is not covered by Web of Science and citation indexes of that sort. With Web of Science you only learn how many people have quoted what. But with Academia.edu I can see what is viewed, what is actually read or not. Here I learn something additional, something I would not know otherwise.”

5.1 million academics have signed up to Academia.edu, and around 1 million join every 2 months.

We are a 10 person team based in downtown San Francisco. We just raised $11 million from Khosla Ventures, Spark Capital, and True Ventures. We are looking for full-stack engineers, designers and data scientists. Technologies we use include Ruby, Rails, CoffeeScript, Backbone, Postgres, Mongo and Varnish.

Bijan Sabet from Spark Capital writes "We believe open science is really important. We believe Academia.edu is going to have a profound impact on the world."

There are some core values that define the Academia.edu culture. Since everyone in Academia.edu is involved in running the company, we look for these values in everyone we hire:

* Being a do-er - "The credit belongs to the man who is actually in the arena" - Theodore Roosevelt

* Being driven - "Look at a day when you are supremely satisfied at the end. It's not a day when you lounge around doing nothing; it's when you've had everything to do, and you've done it." - Lord Acton

* Having equanimity - "First they ignore you, then they laugh at you, then they fight you, then you win." - Gandhi

* Having humility - knowing what you know, what you don't know, and where you just have an instinct

* Being motivated to open up and accelerate science

See the founder, Richard Price, on Bloomberg TV: http://www.bloomberg.com/video/academia-edu-scientific-resea... Read the coverage of our recent funding round: http://venturebeat.com/2013/09/26/meet-academia-edu-a-startu...

To learn more, take a look at http://academia.edu/hiring, and then email Ben Lund at ben [ at ] academia.edu.


To Whom It May Concern,

My name is Sam Sudore. I am a seasoned technical professional with over 20 years of management and business experience. I live in Seattle and conduct a lot of business in San Francisco. I represent a small but talented group of Ruby and Java developers based right at the border of Mexico. We have helped a lot of companies achieve their goals while keeping their expenses in check. If you are interested, we would love the opportunity to discuss your development needs to see if we may be of service. Give me a call anytime if you would like to discuss this further.

Regards, Sam Sudore 425-471-3133

