Hacker News | new | past | comments | ask | show | jobs | submit | sfrank2147's comments | login

What does it even mean for consciousness to be an illusion? An illusion implies that someone is being fooled by consciousness, but who is the someone being fooled if there is no underlying true self?


Human brains are able to create certain mental abstractions. There's good evidence for why this would be useful (think about how much more flexible humans are than computer programs). Referring to these mental abstractions as "consciousness" or "qualia" is begging the question to a large extent.

The illusion is that these abstractions are evidence of something qualitatively more advanced than simple neurons interacting in a way that's beneficial for the organism. Which isn't unusual; we've seen the same kind of reaction when people proclaim that they couldn't have "come from monkeys," or with any other scientific discovery that knocks humans off the pedestal they've placed themselves upon.


Why is it like something to be these abstractions?


Anthropically - whatever it would be like to be one of these abstractions is what we would call conscious.

A society of only p-zombies would call p-zombieness consciousness and they would be no less correct than us.


I wouldn't say it's an illusion; I would call it a perception, which I know exists because I do perceive it. In the same way I perceive external shapes and sounds, I perceive thoughts about those perceptions and thoughts about my previous thoughts. That's not better than Descartes, but I think Descartes had this part essentially right :-)


There is an idea that consciousness may be a mere epiphenomenon; as an epiphenomenon, consciousness cannot effect any change or action. But we have the feeling that we are choosing our actions, so this must be an illusion, according to the idea.

There were some experiments in which subjects were asked to push a button whenever they chose. The results showed that the impulse to push the button arose before the subject was aware of having "decided" to push the button (I can't recall how they managed to detect and measure this).

At any rate, I think that's what the parent is referring to.


I think the feeling that we choose our actions is actually not so clear a feeling. If you pay attention to how you actually make decisions, even simple ones, it is difficult to see where a "conscious being" played any part. You don't think your thoughts before you think them. They just arise out of the darkness of our subconscious minds and pop into our consciousness. We didn't consciously make them, or control them -- they just showed up. It seems that whatever we call consciousness, it is more of a leech that takes credit for what the subconscious animal does. To me, at best, the conscious mind is a journaler of thought, and perhaps an offshoot of memory and our association engine.


>* You don't think your thoughts before you think them. They just arise out of the darkness of our subconscious minds and pop into our consciousness. We didn't consciously make them, or control them -- they just showed up. It seems that whatever we call consciousness, it is more of a leech that takes credit for what the subconscious animal does. *

Neuroscience actually confirms this :-) Brain scans show that conscious thoughts are reflections of subconscious processes doing all the work, AND that the rational mind is very good at creating post-hoc rationalizations explaining the "reasons" why you arrived at the decisions you made. Most of what we call "reasoning" is about creating narratives to save our self-esteem.


If consciousness is an epiphenomenon, what was it that led Newton to create calculus (for example)? It's not clear to me how such a thing could come about via instinct.


It could just be the right kind of inputs. And I don't mean just education and such, but literally every perception Newton ever had over the course of his life, plus any genetic factors.


Somewhat tangential, but the most absurd ad I've ever seen was when Dodge ran an ad during the Superbowl quoting from this speech.


Sorry for being even more tangential, but an Israeli friend joked about this that "You shouldn't mix your Rams with MLK." (i.e. Dodge Ram and Martin Luther King vs the kosher dietary restriction of not mixing meat with milk)


I heard a President’s day ad for a Jeep CHEROKEE that had a song with the US Presidents in order. I don’t know if they got to Jackson shudder


Whenever people make a comment about references like that, or with sports teams, it makes me think:

- If it was common to make references like that in Germany to Jewish institutions, groups, or tribes, ambiguously "honoring" them, would that be a bad thing? Would it be gloating over victimizing them, or commemorating their bravery?

- Given that (my impression is) they don't, what does the cultural difference really signify? Are Germans entitled to feel superior for it?

Like, I can imagine a world where there were sports teams called the "Maccabees" or the "Ghetto Fighters". I think that German cars do make reference to groups in ways that surely someone could find offensive, like the VW Touareg. Is this better/worse/as bad as using "Cherokee" as a name?


I don’t think you have the context. President Andrew Jackson violated a Supreme Court ruling to basically commit a genocide and expulsion of the Cherokee from land that was theirs by treaty.


Yes, the association of Jackson is obvious. Why do you think that genocide was not the context of my comment?


Well would you use a song about the “Chancellors of Germany” to advertise the Maccabees’ next game?


You probably would/could have ads for Mercedes or VW, and their history could lead to awkwardness, particularly if old photos of their cars were used.


I know. They have balls, if no sense.


That's their style. I'm surprised they don't have a Hellcat Journey "just because". It works fairly well for them. Jeep SUVs sell well. Chrysler and Dodge sedans, SUVs and van-like vehicles pretty much have half the market for people with bad credit (with Nissan having the other half).


The Journey Hellcat Edition (would that be the 'Don't stop believin'' option package?) was probably discussed, then deprecated only on account of not having a transverse automatic rated for arbitrarily silly N·m of torque.


Does anyone know how Project Euler was storing the passwords?


    Usernames cannot contain more than 32 characters 
    and they may only contain upper/lower case
    alphanumeric characters (A-Z, a-z, 0-9), dot (.), 
    hyphen (-), and underscore (_). 
    Passwords must contain between 8 and 32 characters.
My money is on "ineptly."


There's really not much rationale for capping passwords at anything beneath 256 characters.

256 characters makes for a fairly sizable passphrase and doesn't represent a substantial hit on storage space. In reality, even if each password were stored as encrypted binary/base64 in its own structured data file, 4096 bytes is pretty much the de facto floor for disk space occupied by non-zero-byte individual files on most modern file systems.

...variable data size being a concern in cases where the transformed value is encrypted rather than hashed.


> 256 characters makes for a fairly sizable passphrase, and doesn't represent a substantial hit on storage space.

They shouldn't be storing passwords at all so storage space should be a non-issue. My 20 meg password should hash down to the same small(er) value as your 15 character one.


On the other hand, you might not want to be hashing a 20 meg password. It's fast on my computer, but it's fair to limit it at something more reasonable.

    $ python -c 'print "8 bytes\n" * (20 * 1024 * 1024 / 8)' > 20meg.txt; time shasum -a 512 20meg.txt 
    59cb7f88ad8d6229e6d3a74ee422dff57e17f168c6e6fa44ef32c3f07a73a6e455d8b55c1265d5212b9ed5475b6d9364286645200dada59aa16905a9ce748561  20meg.txt

    real	0m0.289s
    user	0m0.284s
    sys	0m0.004s

    $ python -c 'print "8 bytes\n" * (16 / 8)' > 16byte.txt; time shasum -a 512 16byte.txt
    b6043d3a520424d5ec17dc0c23ba3b591d74517e2b9faa0df2d69d13c89a5f372d6dc35f95836687ee05be18433277e1c4b67393eb2771b475d655a832b16654  16byte.txt

    real	0m0.045s
    user	0m0.040s
    sys	0m0.004s


Fast password hashing is bad anyway. Slow schemes are better (assuming they come from a thoroughly tested library written by someone that actually knows what they're doing).
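To make the "slow schemes are better" point concrete, here's a minimal sketch using the standard library's scrypt (Python 3.6+). The cost parameters (n=2**14, r=8, p=1) are commonly cited for interactive logins but are illustrative assumptions, not a tuned recommendation.

```python
# Sketch: slow, salted password hashing with stdlib scrypt.
# Cost parameters below are illustrative, not a vetted recommendation.
import hashlib
import os

def hash_password(password: str, salt: bytes) -> bytes:
    # 128 * r * n bytes of memory (~16 MiB here) plus CPU cost makes
    # each guess expensive for an offline attacker, unlike a bare SHA-2.
    return hashlib.scrypt(password.encode(), salt=salt,
                          n=2**14, r=8, p=1, maxmem=64 * 1024 * 1024)

salt = os.urandom(16)                       # unique per user
digest = hash_password("correct horse battery staple", salt)
print(len(digest))                          # 64-byte derived key
```

Note the output is a fixed 64 bytes regardless of password length, which is the grandparent's point about storage being a non-issue.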


I'm admittedly not familiar with the details of the hashing process, but it could be done client-side, no? Then the compute power required falls on the user, PLUS the 20 megs never gets sent over the wire.


No. Actually we would not want the hashing technique to be exposed in the source code.


That turns out not to be the whole story.

Hiding the technique to compute the hash is relying on security by obscurity. Ideally you want a system that is secure even if potential attackers know what methods you're using.


If you do the hashing on the client side, then the hash itself effectively becomes the password.


That would assume your users have JS enabled. I've seen something like that done before, but always with a fallback in case user has JS disabled.


If you're not sending 20 megs of data, you're not getting 20 megs of security. So why allow it if it doesn't add anything?


It doesn't add to the security, but I might find it easier to remember 20 MB of redundant and meaningful stuff than to remember 384 bits of literally random stuff. The entropy might be the same, but my memory is not a computer. I can remember vast amounts of material that is meaningful and use it as a password. I can't remember 384 bits that have no meaning.

The benefit isn't in the entropy, it's in the abilities of your users to remember their passwords/passphrases in the first place.
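The memorability tradeoff above can be put in rough numbers. This back-of-the-envelope sketch uses the 7776-word Diceware list size as an assumption; it estimates how many common words it takes to match 384 random bits.

```python
# Rough comparison: 384 random bits vs. an equivalent-entropy passphrase.
# 7776 is the Diceware word-list size (an assumption for illustration).
import math

random_bits = 384
bits_per_word = math.log2(7776)              # ~12.9 bits per word
words_needed = math.ceil(random_bits / bits_per_word)

# ~30 ordinary words carry the same entropy as 384 random bits,
# and a 30-word story is far easier for a human to retain.
print(words_needed)
```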


Maybe 1 or 2 KB, max. 20 million characters is a ridiculous password to remember.


That's not the point - the difference between 2 KB and 20 MB is purely a detail. You said:

  > If you're not sending 20 megs of data,
  > you're not getting 20 megs of security.
  > So why allow it if it doesn't add anything?
You could just as equally say:

  > If you're not sending 2 KB of data,
  > you're not getting 2 KB of security.
  > So why allow it if it doesn't add anything?
Your point is the same, and it's still wrong. What you're getting is not the security - that's only half the story. My point is that it does add something; it's just that the something it adds isn't the entropy for the purpose of security.


When there will be multiple shorter passwords that hash to the same value, is there a point to a 20mb pass?


Yes. Ideally you want users to be able to remember their pass-phrases. To do so usually implies significant internal structure and/or correlations, so to get the necessary entropy they will be large. The fact that they hash to the same value as other things is effectively irrelevant.


I misspoke, I meant size.


Depends. Can you guess them?


If I'm an attacker who is running through hashes...yes. Faster than the 20mb one.


There is a slight exception to this. If they are using an older language with fixed-size buffers, like C, it might make sense for them to have a limit on the buffer ready to store your unhashed password. Yes, it seems crazy these days, but it might apply to some of the older systems which have password length limits.


BCrypt has a character limit of 72.


Hard limit of 72, beyond which many implementations will silently truncate, and reduced entropy from each character beyond 55 bytes.

Probably a good idea to pre-hash. Or use scrypt.
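A minimal sketch of the pre-hash idea mentioned above: hash the password to a fixed-length, base64-encoded digest first, then feed that to the length-limited password hash. The bcrypt step itself is only indicated in a comment (it's a third-party library); the SHA-256/base64 prehash is the part shown.

```python
# Pre-hash workaround for a 72-byte password-hash input limit.
import base64
import hashlib

def prehash(password: str) -> bytes:
    digest = hashlib.sha256(password.encode()).digest()
    # base64 avoids NUL bytes, which some bcrypt implementations treat
    # as a terminator; the result is 44 bytes, comfortably under 72.
    return base64.b64encode(digest)
    # ...then e.g. bcrypt.hashpw(prehash(pw), bcrypt.gensalt())

long_a = "x" * 72 + "tail-one"
long_b = "x" * 72 + "tail-two"

# A hash that truncates at 72 bytes would treat these as identical;
# after pre-hashing they remain distinct.
print(prehash(long_a) != prehash(long_b), len(prehash(long_a)))
```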


I can't find it now but I seem to remember this came up in response to another breach ~24 months ago. At that time they made an announcement to the effect that from then on you'd no longer be able to have your password sent to you if you forgot it, but that you would instead need to use an account recovery key.

I took that to mean that prior to being pwnd they had been storing passwords cleartext and would no longer be doing so.

Also, the wording about allowed special characters seems to be incorrect. I personally have a non ./-/_ special character in mine. Unless they are doing something terribad like silently discarding noncompliant parts of the password.

Re: password length - at least 32 characters is respectable. I believe last time I used outlook.com they had a max length of 12-16!


Oh and on the topic of silently discarding portions of passwords, another outlook.com password deficiency (circa 2011, doubt it still exists):

When setting the password, max length was only enforced by a text input with a max length attribute. You could happily type more characters and everything would work as expected....until you went to log in. The max length on the password field on the login form was greater so those characters that were silently dropped when setting the password suddenly weren't.


It used to be a salted MD5 hash, but it may have changed.


I would have assumed of course that the size limits were because the passwords were being stored in plaintext in fixed-length fields, but I guess they wanted to make sure they were 'complicated' enough? I guess salted md5 is literally better than nothing.

The character limits for usernames, though... smells like a SQL injection issue. Which is an obvious and completely naive thing to assert but they're using PHP so my immediate thought is that they're passing raw userdata into the database as strings.
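The standard defense against the injection scenario suspected above is parameter binding. Sketched here with Python's sqlite3 as a stand-in for PHP's PDO (the table and values are made up for illustration):

```python
# Parameterized queries: user data is bound as a value, never spliced
# into the SQL text, so quotes in it are inert. sqlite3 stands in for
# PDO here; the schema is illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

hostile = "alice' OR '1'='1"   # classic injection attempt

# With a ? placeholder the driver treats `hostile` purely as data.
rows = conn.execute("SELECT name FROM users WHERE name = ?",
                    (hostile,)).fetchall()
print(rows)   # no rows: the injection attempt matches nothing
```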


my immediate thought is that they're passing raw userdata into the database as strings

That was my first thought too. I'd guess that it's a vulnerability somewhere in the code for handling the forums.

I would be willing to bet that they could get rid of a lot of the attack surface just by using standard services for certain things.


Probably. If they're not using PDO then that needs to be their first priority, dead stop. After that, maybe looking at their captcha script, because those sometimes have issues if they're not well designed. I don't know where theirs comes from but it doesn't seem to use much obfuscation so it's probably old. After that, Twig.

Although judging by a screenshot of the recent hack[0] posted here[1] escaping (and XSS) may not be an issue.

[0]https://i.imgur.com/pl22srz.png

[1]https://news.ycombinator.com/item?id=9990221


Admin from PE here.

We've already been using PDO. As for overall privacy/security, please see https://projecteuler.net/privacy


And PHPass as well, fair enough.

Thank you for showing up and addressing my armchair criticisms. I appear to stand corrected.


I genuinely hope the security hole is findable/fixable. Thank you guys for continuing to run an awesome service, despite asshats repeatedly trying to abuse it.


Thanks for the response!


That's not what standards are for - standards define what students should be learning and at what age, and teachers are responsible for making lesson plans to implement those standards. Every teacher has his/her own teaching style, imposing one method of teaching the standards would just piss everyone off.

And yes, teachers do share lesson plans, both online and within their schools.


I'm learning Haskell right now and it's great. I don't know how widely used it is in production, but it's introducing me to tons of ideas that are applicable in lots of other languages (e.g., how to do functional programming effectively, how to effectively use a strong type system, how to separate out functions with side effects).


That's a great tradition! From an economics perspective, I would gain way more than $2 worth of utility from discovering that someone had randomly paid for my coffee, it would really make my morning.

I wonder if the benefit would decrease if the practice became more common though. Would I start expecting my coffee to be free, and become disappointed when I had to pay for it?


That assumes that it would have to happen more often than 50% of the time, which I think is impossible.


A monopoly should be defined by barriers to entry, not just market share. There's nothing preventing other large companies from entering the search space, and in fact many of them have (Yahoo, Microsoft, etc.).

There are true monopolistic firms like Comcast or Time Warner, where there are serious logistic/economic barriers preventing other firms from entering the market. This doesn't seem like one of them to me.


There are serious logistic/economic barriers to enter the search space. Building a competitive search engine is very expensive. Ask the Bing people.

Yahoo Search has actually been powered by Bing since 2009, according to Wikipedia. Their own search engine was not competitive enough. http://en.wikipedia.org/wiki/Yahoo!_Search


    > There are serious logistic/economic barriers to enter
    > the search space.
But there were before Google started as well. Google entered the search engine party after it seemed to be over. I remember reading about them first on slashdot and thinking - wow - someone still thinks there's room to break into this? Surely portals are the proving ground. (Who knows? Maybe they are.)

Something particularly interesting about Google is that they just took the industry head-on. I'd guess that there'd be areas where you could build a kind of search engine that was better than the market leader, and focus on carving out a niche. Yandex have done just this with the Russian market. (and there's an example of a commercially-viable post-google search engine business).

But Google just went after being the leading power.

History in general, but in our space in particular, is written by small, well-coordinated teams who can repeatedly execute. If you can get that team together, you can do almost anything.


Just because it is expensive and technically hard doesn't mean Google is preventing them in any way.

What would be worrisome is if they abuse their market position. Is there strong evidence they do this in search? It'd be more concerning if they did to promote their other products.


I am refuting the statement "A monopoly should be defined by barriers to entry, not just market share." made by sfrank2147, specifically interpreted as "barriers to entry for the search market", which are surprisingly high. By his own definition, Google is deep into monopoly territory.

What a monopoly really is and whether Google qualifies is a separate question, and I'm recusing myself from commenting on this point.


> Just because it is expensive and technically hard doesn't mean Google is preventing them in any way.

I agree. It means it's a natural monopoly. These should be regulated, because otherwise there is a tendency for them to abuse their power.


Duckduckgo may claim otherwise.


The "many" examples you gave is actually just one: Microsoft. Both Yahoo and DuckDuckGo searches are essentially based on Bing.

There is tremendous barrier to entry in the search space - otherwise Bing wouldn't still suck compared to Google. The point of competition is to get good enough to steal market share from the entrenched incumbents or gain new users somehow. If that doesn't happen, for whatever reason, then you don't have strong competition.


In the US, "natural" monopolies are allowed to exist, the best example being Microsoft with their Windows operating system. What isn't permissible is using that monopoly to advantage your other products, and, possibly more importantly, to disadvantage the products of your competitors.


Natural monopolies are things like infrastructure where the entry cost of providing a fully separate parallel network (of railway lines, phone lines, broadband etc) is prohibitively high or where provision of such parallel network is undesirable for other reasons (e.g. environment).

Could you explain how an operating system is a natural monopoly?

Development of a new OS isn't cheap, but isn't prohibitively expensive either. It also isn't undesirable AFAICT.


> Could you explain how an operating system is a natural monopoly?

The cost of providing an OS and an entire ecosystem of device drivers and an entire ecosystem of apps is very high.

But even if you manage to build that, the cost of training a significant percent of the population into using and developing for your platform is much higher. Getting a significant presence in the "brain space" of a population is very expensive. Having people switch to an alternative is even more expensive due to, among others, human network effects.


You're making the implicit assumption that a new OS must come with a new ecosystem built from scratch. There is a significant number of standards (e.g. POSIX, ELF, PDF) that can help you ensure different levels of interoperability with existing software and data. Implementing such standards lowers the cost of building an ecosystem significantly. Also, with the push to the cloud, fewer applications need to be provided natively.

Driving adoption is a matter for all products in the market, not specific to operating systems. You can make it easier with familiar UI, advertising and bundling your OS with hardware (although that may be anti-competitive practice it seems to be widely accepted where I live).


> Driving adoption is a matter for all products in the market, not specific to operating systems.

It's a matter of degree. To simplify, the costs of switching are linear in the complexity of using the product. I apologize that I don't know how to precisely model human network effects (you need to learn the product from someone) on top of it, but intuitively the societal costs are super-linear.

Computer systems are by far the most complex products humankind has ever produced. Given the simplistic model sketched above, the costs of switching in the computer industry are the highest humanity has ever seen, likely by orders of magnitude. Fun times.


Computer systems may be complex, but they're not the most difficult to use. I'd say a lathe for example is harder to use than a smartphone or a PC.

Generally, high complexity of the implementation does not necessarily translate into high complexity of the interface.


Logistic/economic and, most importantly, legal. ISPs have deals with local governments to have monopolies.


The one in 30 million statistic is crazy to me! That means that the surviving fish need to average 30 million children in their lifetime to sustain the current population - if they stay adults for 2 years, that's 41,000 children per day! Can someone shed light on how that's possible?

Note also that it's 1 in 30 million of the babies that actually hatch that survive, not just 1 in 30 million eggs.
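The parent's arithmetic checks out as a quick sanity test (using the parent's own assumption of 2 adult years):

```python
# Sanity check: if only 1 in 30 million offspring survives, a survivor
# must average ~30 million offspring over its reproductive life just
# to keep the population flat.
offspring_needed = 30_000_000
reproductive_days = 2 * 365          # parent's assumed 2 adult years

per_day = offspring_needed / reproductive_days
print(round(per_day))                # ~41,000 per day, as the parent says
```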


Wikipedia looks like it makes sense:

http://en.wikipedia.org/wiki/Pacific_bluefin_tuna#Life_histo...

They are reproductive for more than 2 years and lay millions of eggs in each breeding season.


As the paper mentions, it's also possible that people are lying to the IRS about the date of relatives' deaths. (I personally think that's the more likely interpretation, but I guess it depends on your priors).


I don't understand how a startup will help here. The problem is regulatory hurdles (good or bad): there are laws that prevent patients from buying medicine before it's been thoroughly vetted. Will this company lobby to change those laws? Or is this an Uber situation where the fact that it's a VC-funded startup means that regulations magically don't apply.

