Apple is working on iPhone features to help detect depression, cognitive decline (wsj.com)
113 points by walterbell on Sept 25, 2021 | 148 comments



Since most of the comments here seem to be on the hysterical side:

1. This is not something Apple is rolling out now with their devices and systems.

2. This is research being done by UCLA and BioNTech using Apple's various sensors.

3. There's nothing concerning here (yet). This is a common kind of research using novel sensors (either genuinely novel sensors, or familiar ones whose new pervasiveness is being taken advantage of). Webcams were used as part of studies at least 15 years ago (by classmates while I was in grad school).

Other sensors have been used for decades to monitor people in research studies to see if they could be used to detect conditions, either after they'd developed or earlier, while still developing. The only interesting thing here is that the researchers are using Apple's sensors, and that's not terribly interesting at all. Last year, Garmin and Oura partnered with the DOD and others to use their sensors to try to detect COVID infections before people developed the obvious symptoms. Was that terrifying to any of you?


The pattern is always the same. When an invasive technology or a precursor to an invasive technology is announced, someone on HN sagely points out that everyone should be reasonable and drop the conspiracy theories.

Five years later the tech is on everyone's phone.


also, even if "privacy" is assured, once the tech is normalized either the private version can be cracked, or a similar but non-private tech is released everywhere else (with hooks for advertising/data-mining/govt access)


Which is why all the afib data from watchOS got leaked to increase insurance costs, right? Right?

Sometimes the sky does not actually fall.


Less than 24 hours ago, the top story on HN was a tranche of iOS 0-days released by a whitehat who got frustrated by Apple's mismanagement of its bug bounty program. So it's open season on that afib data right now...


It does not follow that 0-days mean every piece of information in Apple's control will be stolen. Historically, Apple 0-days have been used against high-value targets, usually journalists, and are not appropriate (or usable) for exfiltrating all data from all iPhones.

Something Apple should fix, sure. But not something that I’m going to delete all my Apple Health data for.


> It does not follow that 0 days mean every piece of information in apple’s control will be stolen

You're correct in general, but not in this specific instance. If you followed said thread, you'd have encountered the disbelief that Apple stores health data unencrypted on the iPhone, despite FIPS certification for the Watch. Every 0-day, as it stands today, can result in access to health data until Apple adopts defense in depth.


How would you know if it was? It would more likely be sold by whoever collected it. The very existence of this pool of data is a risk; just because the sky didn't fall today, do you stop wearing a seatbelt?


>If the research finds that any of that data correlates with relevant mental-health conditions, the hope is to turn those signals into an app or feature that could warn people they might be at risk and prompt them to seek care

I am worried about the part of the article that mentions they want to automate the analysis of the data to some degree and move it out of a strictly clinical research setting.

I have also done some research in the field, and my final sentiment is that the data is too noisy to be ultimately useful and the problem is hard to define in a quantitative way. I predict having more data isn't going to show a clearer correlation between things.


These things don't do diagnostics. They just give the user a heads-up and suggest they see a specialist. People said the sky was falling when they did that for atrial fibrillation too, and there were lots of claims that it would submerge hospitals and doctors under a flood of false positives.

The sky did not fall, hospitals managed just fine (before COVID, that is), nobody's precious heart rate data was leaked, and a couple of lives were saved.

These things gather metrics and suggest seeing a doctor. They don't do anything like diagnostics, and they don't replace a doctor. They just nudge someone to see a specialist, who then diagnoses according to their experience and the patient's symptoms. Noise is irrelevant here.


Jumping from a very clear definition of AFib to very subjective diagnosis of mental health issues isn't something I'm comfortable with.

Noise is relevant because the diagnosis is subjective. We're not talking about timing between heartbeats; we're talking about a situation where even medical professionals can't do an invasive quantitative test to say how depressed someone is.

Using privacy invading sensors doesn't make the diagnosis less subjective.
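To make the false-positive worry concrete: screening a huge population for a rare condition floods you with false alarms even when the detector is very accurate. A minimal Bayes sketch in Swift, with made-up prevalence and accuracy numbers (none of these figures come from the article):

    import Foundation

    // Positive predictive value of a screening alert, via Bayes' rule.
    // All numbers are illustrative assumptions, not figures from the article.
    let prevalence = 0.005       // assume 0.5% of users actually have the condition
    let sensitivity = 0.98       // P(alert | condition)
    let falseAlarmRate = 0.02    // P(alert | no condition)

    let pAlert = sensitivity * prevalence + falseAlarmRate * (1 - prevalence)
    let ppv = sensitivity * prevalence / pAlert

    print(String(format: "P(condition | alert) = %.1f%%", ppv * 100))
    // ~19.8%: about 4 in 5 alerts are false positives, even with a
    // 98%-sensitive, 98%-specific detector, because the condition is rare.

The rarer and fuzzier the condition, the worse that ratio gets, which is the nub of the disagreement here.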


So, because you've done research, that's the end of it? No one else can possibly do better? Game over, guys. The guy on Hacker News did his best, so everyone else should just stop trying.

I for one, as a person that swings around with depression, would love something like this on my wrist.


I was simply sharing my personal opinion after having spent a few years doing grad level research in the field. My research wasn't funded by Apple money so I could be wrong about what the ultimate outcome will be.

The research connecting mental health to how someone uses their device is being encouraged by Biogen, which released the controversial Alzheimer's drug Aduhelm. The one whose approval made three FDA advisory committee members resign.

I don't like the jump Apple wants to make from signals that are meaningful in a clinical setting, when reviewed by a human to reach a diagnosis, to an automated system that makes a guess based on a black-box model and is deployed to the general public. Just like a doctor wouldn't encourage every single person to go out tomorrow, get a full-body MRI scan, and then follow it up with every blood test there is just to see if something is wrong.

I hope I am wrong and their research produces something meaningful. If it's actually robust with no false positives and opt-in, it sounds like a great idea. But that's not how reality works when you're testing for something as subjective as depression or autism. There's no clear relationship between depression and the kinds of signals they are using, unlike heart rate and AFib.

I would be the first one to join any sort of study that will use blood tests to find markers of depression, to quantify it. But the reality is that it's not currently quantifiable. Using proxies like screen time and how focused you are on the screen isn't good enough. And going as far as:

"data that may be used includes analysis of participants’ facial expressions, how they speak, the pace and frequency of their walks, sleep patterns, and heart and respiration rates. They may also measure the speed of their typing, frequency of their typos and content of what they type, among other data points"

it's literally throwing everything and the kitchen sink at the wall and seeing what sticks.

That's not science, that's p-hacking. With that many signals, some might look promising, but it's like a study where you test whether vitamin D affects the outcome of 40 lifetime illnesses and then report only the ones where it appeared to work.
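To see how easily that produces spurious hits, here's a tiny Swift simulation (a sketch under the assumption of pure noise: the treatment has no real effect on any outcome, so p-values are uniform on [0, 1)):

    // Test one "treatment" against 40 independent outcomes where it truly
    // does nothing. At alpha = 0.05, about 40 * 0.05 = 2 outcomes will still
    // look "significant" by chance alone.
    let outcomes = 40
    let alpha = 0.05

    let pValues = (0..<outcomes).map { _ in Double.random(in: 0..<1) }
    let falseHits = pValues.filter { $0 < alpha }.count

    print("\(falseHits) of \(outcomes) null outcomes look significant at p < \(alpha)")
    // Reporting only those lucky outcomes, and not the other ~38, is p-hacking.

Run it a few times and you'll almost always find at least one "discovery".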

For other people, getting an alert that their HR has spiked might help, but I don't see how the person wouldn't have noticed it themselves already.

I personally don't believe an automated phone/watch analysis is going to give me a better understanding of myself using some black box model. If I feel depressed, the watch isn't going to call a doctor for me, and even if it did, I don't know how it could give me motivation to physically seek help in that state.

I have a smartwatch that does 24/7 per-minute heart rate monitoring, and it's really, really interesting to see the data. I love using my Polar H10 HR chest strap when I work out because it gives a live EKG graph as I exercise.

I am all for using data to help inform, but this doesn't seem like a scientific way to go about it.

Especially if they've jumped to saying they want to detect autism, which by definition is broad and very specific to each individual, and depression and anxiety, which are insanely environment-specific.

I have wished many times that mental health issues could be quantified in some robust way, because that would make treatment easier and remove a lot of the stigma surrounding diagnosis and treatment. That's why I spent energy on research there myself. I don't see this move by Apple doing that. And I understand that's not Apple's goal, but anything less is going to have too many confounding variables, which will create a lot of noisy data. Apple wants to go for the more basic signals, but that's not something you use your phone for. Maybe it'll help people who don't have a strong social support system and live alone.

The other main issue is that they want to create a generalized model that doesn't need human analysis to make these predictions. Depression causes a cluster of symptoms, but not everyone has the same ones, and it doesn't hold people back in the same exact way. Anxiety, autism, and other mental health conditions are the same in that regard.

The research that might potentially produce some meaningful positive outcome is the testing for Alzheimer's via cognitive tests on devices, but the 2019 feasibility study's claim that "31 adults with cognitive impairment exhibited different behavior on their Apple devices than healthy older adults" leaves out the total cohort size and any details you could use to evaluate the validity of the research. They also mention that they're testing against traditional cognitive tests and brain scans that show plaque buildup. The plaque theory has been shown to have a lot less evidence supporting it than originally thought [0]. This research is in collaboration with Biogen, the company whose Alzheimer's drug was approved earlier this year. The drug the three FDA committee members quit over.

I have reservations about privacy and the invasive sensors this article seems to describe, but that doesn't change that the underlying approach of finding significant signals has a lot of issues.

I hope it's clearer now where I'm coming from. This is an issue I am invested in and am passionate about. I would be just as happy as you if the research is positive and they make something meaningful. I just wanted to point out the red flags I see.

0. https://www.nature.com/articles/d41586-018-05719-4


I can't edit anymore, but I thought I should mention some of the sensors and data we explored for various projects. I have experience trying to extract usable data from an EEG cap. For a different project I had to use neuron-firing data recorded during a surgery. I have used an accelerometer glove, an eye-tracking harness, pedometers, and a Myo band.

And my favorite is heart rate data. With an ANT+ sensor and a Polar H10 strap you get RR intervals and HRV testing if you want, and can even keep a live EKG graph going. As a side hackathon project, for MedHacks, I wanted to see if with enough data you could find a correlation between the tempo of music and your heart rate. The idea was that first the model would learn which songs paired with which heart rates. Then if your heart rate spiked, which I naively and intentionally assumed was related to anxiety, a song would be recommended to help lower your heart rate. It was just a fun idea for a weekend hackathon. I couldn't get it to work as well as I wanted it to.
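For anyone curious what the RR/HRV piece looks like in practice, here's a minimal Swift sketch of RMSSD, one standard time-domain HRV metric; the sample intervals below are made up:

    // RMSSD: root mean square of successive differences between RR intervals.
    func rmssd(_ rrIntervals: [Double]) -> Double? {
        guard rrIntervals.count >= 2 else { return nil }
        let diffs = zip(rrIntervals.dropFirst(), rrIntervals).map { $0 - $1 }
        let meanSquare = diffs.map { $0 * $0 }.reduce(0, +) / Double(diffs.count)
        return meanSquare.squareRoot()
    }

    let beats = [812.0, 845.0, 790.0, 860.0, 825.0] // RR intervals in ms (made up)
    if let hrv = rmssd(beats) {
        print("RMSSD: \(hrv) ms") // ~50.6 ms for this series
    }

This is the kind of number the strap-plus-app combos compute for you.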

Sensors and data analysis are the future of all medicine. But in order for the data analysis to be useful, the methodology has to be rigorous, with very little room for a subjective outcome.

If someone ends up taking that idea, or would like to talk more, I would love to hear about it; my email is knaik1994 at gmail dot com.


> UCLA and BioNTech

Biogen. BioNTech is still pretty focused on mRNA technology :-)


Thanks. I went to lunch, came back, and realized I'd mistyped it; I hadn't bothered to put in a correction post and was well past the edit period by the time I saw it.


For a minute, I got excited that someone was trying to use mRNA against the amyloid hypothesis!


Even if they do roll it out, this is totally fine as long as users are given a chance for informed consent and control over the data.

I see no particular issues with the afib detection in WatchOS from a privacy perspective. As long as they handle any new health features similarly, should be fine.


As a counterpoint to everything here, the Apple Watch just saved a friend's life by telling them they were in AFib.

Yes, it could be bad. Or it could be medically beneficial to people who are anxious and depressed and simply don’t have anyone to help them recognize it because they’re alone. Particularly during the pandemic, that would have been useful.


Great anecdote about your friend.

As long as no health data leaves my Apple Watch + iPhone, I am OK with local deep learning (or other models) running on my devices.

I do wish there were international standards and local laws supporting all data staying on users' devices unless explicitly shared with health care providers, and also agreed-upon standards for anonymizing explicitly shared data for the public good (research, AI training data, etc.).


The problem here is the tight integration by Apple: the app and the networking services are implemented by the same company.

I'd be OK with an app by a third party that does not automatically get network access on my phone, explicitly or implicitly, so I know my data doesn't leave my phone.


Has there been any indication that health data and the associated sensitive information are leaving the device without the user's consent? They would run afoul of quite a lot of laws and regulations in all western jurisdictions the moment they do that.

AFAIK, except when you enrol in a study, none of that depends on network access.


According to a security researcher[1], a 0-day in Analyticsd allowed any installed app to access the user's health data.

[1] https://habr.com/en/post/579714/

EDIT: Found the link to the previous HN discussion about this: https://news.ycombinator.com/item?id=28637276


The health and PII data making its way into log data may also give Apple legal cover to ingest the data "accidentally".

Recall that Google got to claim their Street View cars' wardriving was unintentional, and just paid $13M to make it go away, after using the data for about a decade [1].

https://www.cnn.com/2019/07/22/tech/google-street-view-priva...


When there is an indication, then it's already too late.

The trust you have in the system is not the trust I have.


Well yeah, but without observable facts, this is just wild guesses or paranoia. Considering the regular news about weaknesses in anything from Safari to iMessages, I have a really hard time taking these claims seriously. The storage and processing of health data is documented, and they do not leave the device or the Secure Enclave (unless you accept it, e.g. during a study).


> Well yeah, but without observable facts, this is just wild guesses or paranoia.

Not at all. What you are saying is that you don't want to lock the front door because nobody has illegally entered your house yet. In contrast, I want to lock my front door for obvious reasons. No paranoia. It's just a different mentality of dealing with security.


Your wish wouldn't work on iOS. Apple hasn't so far focused on adding an Internet-access permission. Every app on the device gets access to the network by default. There is no way to stop one app from accessing the Internet while allowing others.


Sorry to use the phrase, but this sucks. I feel like companies are too focused on pushing shiny stuff while forgetting about basic security features.


I remember reading about a guy who got FreeBSD's "pf" working on an iPad. macOS used to ship with it (still does?). They could do it, they just don't want to. They'd certainly never let you be the admin of it and block the gazillion telemetry requests they make back to themselves.


Frankly, it would probably be fine if it could be turned off and, if turned on, didn't report its findings back to Apple (or anyone but you, for that matter).


No Apple health features report anything back to Apple or anyone else. Why would you expect this to be different?


Oh, I don't know... the behaviour of the entire rest of the tech industry.


But this is not being done by the rest of the tech industry, it is being done by Apple, specifically, who have a record of behaviour with medical data that, as far as I am aware, is spotless.


Even if that's true -- and you've provided no proof -- the data would most likely get stored on iCloud, and just like with iCloud Photos, Apple will decide one day to just start trawling through it for fun and profit.

CSAM is coming back in other ways. Apple will try again.


Apple health data stored on iCloud is encrypted so that Apple can not access it.


As has been discussed often, the keys to decrypt iCloud data are also stored in iCloud, so it's trivial to decrypt the data.

https://arstechnica.com/tech-policy/2020/01/apple-reportedly...

https://www.theverge.com/2020/1/21/21075033/apple-icloud-end...


And why, exactly, did they try to make it impossible for themselves to do this, if that is something they actually want to do?


It's not impossible. They do this all the time for law enforcement. Did you even read the links?


I read the article, which said Apple wanted to make this impossible for themselves, but then police complained so they didn't.

If they wanted to do it in the first place, why did they go to the effort of trying to make it impossible and had to be stopped by police?


I'm assuming the FBI and various other law enforcement agencies would have kept pushing for access anyways, most likely by escalating to legal or legislative means ("let's ban encryption" type thing).

Despite what one may think of the police, they do have a legal basis for attempting to collect possible evidence in the natural course of their duties when equipped with an appropriate warrant. In such cases Apple might have been exposed to any number of different legal problems, including obstruction of justice accusations.

So it was probably just easier for Apple to comply. But who knows what the thinking was.


No, I am asking you why you keep claiming again and again that Apple wants this data. You have clear evidence that they tried to make it harder for themselves to get this data.

Why would they do that, if they want the data? Are they just complete morons?


We started with this statement though:

"Apple health data stored on iCloud is encrypted so that Apple can not access it."

Whatever anyone is saying, they CAN access it if they want, and do so for law enforcement as needed.

From what I understand, you are the one who seems to believe that Apple has tried to make it hard for themselves. I've been saying all along that it's trivial for them to unlock iCloud data. As for why I say that: the recent CSAM thing made me realize that I don't trust them. It's a personal thing.

Nor do I really care. I don't use Apple anymore, and I sleep better at night.


I wonder if non-Apple smartwatches have AFib detection as good as Apple's device.


I suffer from bi-polar disorder and it's been difficult to get a referral to a psychiatrist who can confirm the diagnosis. The depressive phases are easier to spot and I get medication for that when it's too bad.

The thing is, there are already countless mechanisms that can track the obvious symptoms of a manic phase, but none of them have an incentive to tell you that you're suddenly spending too much. Frivolous purchases have increased, monthly spend is up, you're staying up late and buying shit on Amazon at 2am, etc. You're at the pub more often, and drinking more. You're working late or staying involved with work, late.

The data is all there, so why can't it be used to say: maybe you're in a manic phase, you need to know this.

But more generally, if Apple invested in this but also kept the data open and portable (i.e., adhering to or contributing to FHIR and similar medical standards), I would switch back.

Basically, using the mass of data to understand your mental state and physiology, privately, instead of powering the ad industry.


While hours of sleep per night is a reasonable proxy, providers of sleep tracking aren't incentivized to keep you spending as much on Amazon as you can. A warning that says something like "hey, you've been getting ~2 hrs of sleep/night with no naps for the past couple days, you might want to consider doing X" would go a long way toward helping me manage my symptoms.


Oh, you can actually do that with Shortcuts and Apple Health, today!

Mine has a bug for some reason so I can’t make it, but basically:

1. Get details of a health sample; there should be one for time asleep.

2. Get the number and compare it to the total you want a warning at.

3. Display a custom notification if below the threshold.

4. Set this to run as an automation at like 9 am or whenever, daily, without asking.

My only point of doubt is whether you can get this week’s sleep as a health sample.

But you can definitely get last night’s sleep from the health app.
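If you'd rather do this in code than in Shortcuts, a rough HealthKit sketch of the same idea (untested, and it assumes the app has already been granted read access to sleep analysis):

    import HealthKit

    // Sketch: total up the last 24 hours of "asleep" samples and warn
    // below a threshold. Assumes HealthKit read authorization was granted.
    let store = HKHealthStore()
    let sleepType = HKObjectType.categoryType(forIdentifier: .sleepAnalysis)!

    let end = Date()
    let start = Calendar.current.date(byAdding: .hour, value: -24, to: end)!
    let predicate = HKQuery.predicateForSamples(withStart: start, end: end, options: [])

    let query = HKSampleQuery(sampleType: sleepType, predicate: predicate,
                              limit: HKObjectQueryNoLimit,
                              sortDescriptors: nil) { _, samples, _ in
        let asleepSeconds = (samples as? [HKCategorySample] ?? [])
            .filter { $0.value == HKCategoryValueSleepAnalysis.asleep.rawValue }
            .reduce(0.0) { $0 + $1.endDate.timeIntervalSince($1.startDate) }

        let thresholdHours = 4.0 // whatever level you want to be warned at
        if asleepSeconds / 3600 < thresholdHours {
            print("Under \(thresholdHours) hours of sleep; fire the notification")
        }
    }
    store.execute(query)

Same idea as the Shortcut, just without the UI.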

I also checked AutoSleep. They have a sleep bank function in Shortcuts. You can parse the dictionary output, extract sleep debt and hours slept, compare both to a set level, and alert if either is beyond the threshold.

AutoSleep is a great, inexpensive app. It works best with an Apple Watch but can function without one.

Shortcuts has gotten extremely powerful in the past few years. Anyone on HN with an iPhone should check it out. You can automate all kinds of stuff.


> buying shit on Amazon at 2am, etc. You're at the pub more often, and drinking more

Okay, some simple guidance, and you don't need anyone (or anything) but yourself to do this: don't stay up past midnight. If you do, the next day you must wear blue-light-blocking glasses for 4 hours and drink extremely strong chamomile tea. This will kill most mania/hypomania, which is often triggered by sleep variance.

Do not drink. Whatever benefits come from it are not worth the mood disruption (if you do indeed suffer from a mood disorder).


Your comment is basically "have you tried not doing that?"

You can't get out of a serotonin issue with chamomile tea and a blue light filter.

"Having trouble sleeping? Just go to sleep earlier!"

I can't even.


> You can't get out of a serotonin issue with chamomile tea and a blue light filter.

BTW, bipolar disorder involves dysregulation of dopamine; increased serotonin will trigger mania. If you knew anything about this topic you would know that. But I went against the pharma state and its indoctrinates, I must be punished lol.


That's the neat thing, the data won't help you get better care.


> That's the neat thing, the data won't help you get better care.

Are you speaking from experience? In mine, good doctors will absolutely leverage relevant self-captured sensor data to aid in diagnoses and provide better care.


The data could tell you something, though. It couldn't diagnose, or offer a solution.


It's kind of already there. I can look at time spent in certain apps - when I'm in a depressed mood, I find myself trapped in brainless swiping, instead of being motivated to do the things that bring me joy and productivity.


A little dismayed that the sci-fi "tech futurism" attitude of previous decades is now a beaten, lifeless corpse. Opt-in health features that give you more control over your health? World-ending! But 20 years ago, we were imagining a little wristband-type machine that would be able to non-invasively and routinely analyse blood samples, detecting cancer a decade early, increasing human lifespans, etc.

This pattern is all over the place now. I get that there are reasons for it, but let's take into account one thing: throughout history, the luddites/skeptics were rarely right in their apocalyptic predictions. People today may be deciding to drop all smartphones, avoid modern TVs, avoid modern computers and stick to a 2008 barebones ThinkPad, give up music streaming services and run their own email; i.e., refusing any of the time-efficiency gains technology gave us in the last decades and ensuring they'll spend much more time troubleshooting and maintaining tech, taking away from the precious, short time they have on this earth, all to what end?

That hypothetical person, who switches away from all corporate technology because of Apple's CSAM news, is now maintaining their own music library, turning music discovery from something that takes seconds, to taking minutes or hours. A pattern that'll repeat itself throughout their life if they're consistent.

But will Apple users be living in camps, in 2050, because the frog will finally be boiled by then? Or will none of these fears turn out to be real concerns, and CSAM-detection's (just using it as an example, since that seems like a catalyst for many here) errors will be extremely rare and rapidly corrected?

To me, the risk of dystopia is lowered not by those willing to burn it all down and give up modern society, it's lowered by the fact that modern society is too interconnected for these dystopian scenarios to unfold like a novel: laws will be passed, corporations will be pressured by social media campaigns, and the state of the world will remain "mostly acceptable" just like today. Why the endemic pessimism?


There's a kind of collective paranoid delusion here (I mean, on the Internet in general). When you start from the idea that they are all about to get you, it's easy to interpret anything as a confirmation of your suspicions. It is not easy to be rational with this sort of siege mentality, which is reinforced by the fact that non-technophiles do not seem to get it or take it seriously.

We've gone from a (unwarranted) radical techno-optimism to (just as unwarranted) general hostility. A consequence of that is the pervasiveness of this cynical, nihilistic mood, which quickly corrupts most discussions every time some subjects pop up. To some extent, these companies brought this onto themselves with some objectively despicable behaviours, but we are not doing ourselves any favours.


There are people out to get you, but they are mostly marketing departments.


I am that hypothetical person. Actually, CSAM kicked me in the bottom and removed "tech fetishism" completely from my perspective. Using my several NAS stations to find music takes seconds if I stream locally. If I want to listen on the go, I find immense pleasure in selecting which playlist I will play. Comparing this with the dark-pattern-filled UX of any streaming service is funny. I control my UX, not some millennial PM trying hard to impress upper management for a promotion.

I haven't watched TV in 15 years. I have RSS feeds from several sources to inform me; it's an amazingly fast process, and I have important shit to do in my day. I have a real library with real books, some of them very old (added value).

I produce technology, web technology. But I see a future where a lot of people will pay good money for a user-centered UX, not a politically or "insert common-good mantra" motivated experience.

There is no risk of dystopia; we are living in one. Palantir, NSO, and countless other "surveillance businesses" are fighting over people's data: the free petrol of the future overlords.

Why the endemic pessimism? There is no pessimism; there is real, factual information every day. If someone wants to connect the political situation, the pandemic situation, and the corporate movements, the data is obvious and accessible.

When It Comes to Data, Skepticism Matters (https://hbr.org/2014/10/when-it-comes-to-data-skepticism-mat...).


An insightful testimony. I'll be spending some time looking for books and talks about this, because it's very possible I'm misjudging this and having too much faith in the average person drawing the line if anything bad happens. Certainly I haven't thought deeply about where I would draw that line, and I'm no longer satisfied in my vague feeling that the line is far from having been crossed. I'll give it some thought. Thanks.


> That hypothetical person, who switches away from all corporate technology because of Apple's CSAM news, is now maintaining their own music library, turning music discovery from something that takes seconds, to taking minutes or hours. A pattern that'll repeat itself throughout their life if they're consistent.

That time expenditure will likely pay for itself. For a time, I was genuinely addicted to YouTube and its algorithmic feed, to the extent that I would regularly spend 2-4 hours a day watching nothing in particular, as long as it tickled my funny bone. Then I got rid of it for a while, and suddenly I have so much free time. I reinstalled the app a few days ago, and relapsed for a few days until I noticed it was happening again, and got rid of it again.

I've never even considered signing up for TikTok. If what I hear about it is true, I'd be hooked on it like on crack cocaine.

If the only way to function without having my brain hacked left and right is to run my own services and hoard my media on a NAS, then this is exactly what I do.


I too, am that person, real not hypothetical.

I use a dumb phone, ride a bicycle, and play music CDs. I refuse to use smart anything, even though my work goes into many of the smartphones and other devices.

>> laws will be passed, corporations will be pressured by social media campaigns, and the state of the world will remain "mostly acceptable" just like today. Why the endemic pessimism?

It is because of the pervasive lack of agency. Increasingly, I am getting tracked/registered/surveilled for everything. And if I don't comply, I am an outlier.

If you would compare "mostly acceptable" from a few decades ago to today I think there would be many differences.

Just because things are normalized doesn't mean they should be acceptable.

I have come to the realization that it is pointless to depend on companies bent on exploiting you, to build devices that you want. You have to do it yourself and we have that capability now.


> People today may be deciding to drop all smartphones, avoid modern TVs, avoid modern computers and stick to a 2008 barebones ThinkPad, give up music streaming services and run their own email

Far more people talk about such things than actually do it. It’s the technological equivalent of “I’m moving to Canada”.


I would argue that modern TVs and smartphones are some of the most common ways that people are wasting their precious time on earth these days, but do what makes you happy I guess.


Feeling sad, angry, depressed: these are essential parts of who we really are. We are not something that sits in containers waiting to be amused at every second.

Salut, brave new world.


Clinical depression is different from feeling sad. Being able to detect cognitive decline would probably be a great boon in elder care.


> Clinical depression is different from feeling sad.

I don't want to come off as a pedant on this, but the prevailing definitions of clinical depression show quite clearly that it's a difference of degree, and not of kind. Clinical depression can indeed be "feeling sad":

> Criterion A.1: Depressed mood most of the day, nearly every day, as indicated by either subjective report (e.g., feels sad, empty, hopeless) or observation made by others (e.g., appears tearful). (Note: In children and adolescents, can be irritable mood.)

> The essential feature of a major depressive episode is a period of at least 2 weeks during which there is either depressed mood or the loss of interest or pleasure in nearly all activities (Criterion A)

> The mood in a major depressive episode is often described by the person as depressed, sad, hopeless, discouraged, or "down in the dumps" (Criterion A1). In some cases, sadness may be denied at first but may subsequently be elicited by interview (e.g., by pointing out that the individual looks as if he or she is about to cry). In some individuals who complain of feeling "blah," having no feelings, or feeling anxious, the presence of a depressed mood can be inferred from the person's facial expression and demeanor.

Both of these are straight from the DSM-5 (pp. 160-163) on Major Depressive Disorder: this is what's used to diagnose you, and not any blood test or neurotransmitter assay. "Depressed mood" is not described in any further detail, by the way. (Which, to me anyway, was always surprising in itself. You'd think there'd be more precision.)

Again, I hope this is illuminating instead of pedantic. The notion that "clinical depression is not 'the blues'" is common enough these days, and it's well-intended, but people can get a somewhat mistaken impression from it. I think maybe it's safer to say that "clinical depression is indeed the blues, but also some other stuff on a list of 8, for an extended period of time".


This isn’t actually true. It’s right there in the sources you quoted.

> either depressed mood or the loss of interest or pleasure in nearly all activi­ties

Note the “or” rather than an “and”, meaning it can be the second half only. And also note that lack of interest/pleasure does not mean you’re sad, it just means you don’t enjoy activities. It’s actually quite different!

Additionally, the DSM paints depression as a constellation of symptoms - so even if you don’t have A.1 you could have a couple of the others and still be depressed.

> clinical depression is indeed the blues, but also some other stuff on a list of 8, for an extended period of time

The most precise statement would actually be “clinical depression can be the blues, but could also be a set of other symptoms.”


You seem to be trivializing this experience in a well-meaning fashion. An abrasion can be trivial or life-threatening as a matter of degree, right? But you wouldn't treat both kinds the same, or even really talk about them the same way. I'm not going to try to convince you further on this point, in part because it would involve getting into a discussion of mental health diagnosis and seriously, fuck that, but thank you for citing and quoting sources. I'm already familiar with this, but it makes it a lot easier to respond when someone is willing to be specific about what they think and why. If it is pedantic, then let there be more pedantry.

>not any blood test or neurotransmitter assay. "Depressed mood" is not described in any further detail, by the way. (Which, to me anyway, was always surprising in itself. You'd think there'd be more precision.)

This is actually really interesting! So, part of the reason that you don't get the precision you're used to is that the brain is wicked complicated, and the other part is that it is very difficult to observe. Individual neurons are difficult to isolate and measure (Hodgkin and Huxley won the 1963 Nobel Prize for probing the "squid giant axon"), and that's just for measuring "action potential", which is the gross exchange of ions across the neuron (typically what is meant when people say your brain runs on electricity). The actual signaling occurs by chemical transmission in yet smaller synaptic clefts! Those chemicals typically get reused or recycled by the brain's support system, so it would be difficult to see anything occur in the blood, and a lot of those chemicals also perform other tasks or reactions in the body. These chemicals are called neurotransmitters when we observe them performing this specific kind of signaling. Some neurotransmitters are associated with certain parts of the brain, or subsystems, or feelings, but for all we know we could be aliens figuring out a car by slicing it into thin pieces and making a big deal out of the different kinds of motor oil.

We've got a good idea of how different parts are connected, and some models for how some of the pieces might work that are almost certainly wrong (but instructively so), but we aren't really 'there' yet with respect to being able to understand mental health in the same way that we understand, say, diabetes.


> because it would involve getting into a discussion on mental health diagnosis and seriously fuck that

I mean, speak for yourself :) I love that sort of discussion. It's my job! Email me.

> You seem to be trivializing this experience in a well meaning fashion.

Thanks for your considered and interesting comment. Respectfully, whether something's being trivialized can be in the eye of the beholder as much as, uh, the eye that made the thing being beheld (lol). So let me say first that that was far from my intention, and I apologize if it read that way, and second, to clarify by quibbling slightly: in saying that in many cases clinical depression does actually mean "feeling sad" in the plain English sense of it, my claim wasn't that the grouping of experiences and mental states we call clinical depression is somehow less serious than it ought to be considered; it's actually that "feeling sad" is way too serious, and too complex, to be left to medicine and medicine-adjacent fields, as it has been for more or less a century. In my view, it's precisely the construct "clinical depression", and the industries that have sprung up to lay claim to it, that are doing the trivializing. The reduction of an extremely grave and complicated social, relational, political, economic phenomenon like "feeling sad" to talk of neurotransmitters and synaptic clefts is, if I'm giving the benefit of the doubt, a tragedy. (And if I'm not giving the benefit of the doubt, it's deceptive, and I wonder which forces it serves.) Depression is as much a political phenomenon as it is a biological one. But biology gets the NIH grants, so we're left with a zeitgeist featuring a lot of neuro-conversation and little else, which to me is a huge loss, a huge missed chance.

> So, part of the reason that you don't get the precision that you're used to is that the brain is wicked complicated and the other part is it is very difficult to observe. [...]

> We've got a good idea of how different parts are connected, and some models for how some of the pieces might work that are almost certainly wrong (but instructively so), [...]

This is interesting stuff to me, as someone curious about scientific endeavors far from my own, but there are at least two problems with it: the first is that biological psychiatry's now well-worn claim that phenomena like long-lasting low mood are brain diseases remains unproven, with no concrete and replicated backing. It's of course trivially true that mental states are brain phenomena; being depressed, whatever we agree that means, is as much "in the brain" as my typing out this comment is. The issue remains the disease part. I'm still looking at my watch.

Which leads me to the second problem: none of the brain talk, fascinating though it is, actually matters to the people who experience these things, or to those who treat them. I mean "matters" here as "makes a concrete difference in how we assess and deal with" the phenomena. I've been a clinician in mental health for a decade now, and there's nothing I can call on from biological or neuroscientific research to help me in assessing or helping with things like MDD. Zero. To the extent that I've ever helped anyone experiencing these forms of profound pain, understanding them as having anything to do synaptic clefts or serotonin or the brain has mattered not a whit.

This would be easier to deal with if the inductive project we're talking about were only a few years old; maybe I'd be able to tell myself that there's something on the horizon. But it's not a few years old -- it's 70 years and uncountable billions of dollars old. I'm just one person, of course, but I can read the tea leaves, and I can tell you that a lot of people in the profession are tiring of the endless promises from mental health research that the second coming is right around the corner. The promises of "genes for depression", or the promises of safe and effective pharmacological treatments, or a reliable and valid psychiatric nosology. The list goes on.

(Don't just take me on faith on this, either. It's on the cover of the NYT. Two decades ago, physicians were belittling depressive patients who asked if stopping their antidepressant was making their depression worse. It was extremely taboo to even suggest that psychiatric medications could induce debilitating (and, ironically, seriously psychiatric) withdrawal syndromes. You can read the admonitions in contemporary medical journals: psychiatrists urging each other not to say the w-word for fear of an association with addiction. Now, it's front-page news [1].)

So to hear that the reason we don't have any precision in measurement or intervention in mental health is because of complexity and difficult observation ... well, first, I just don't believe that anymore, and I've got good empirical ground for doing so.

And second, extend the scenario and imagine that we somehow did obtain the holy grail of precise and accurate external observation of mental phenomena like depression, and that "treatments" were developed at that level: for millions, all we'd be doing is making them feel better about living in miseries about which medicine and the bench sciences have nothing to say. It's ghastly to think of curing the depression of, say, an elderly patient who's lost their savings and, in parallel, has been estranged from their family for decades without somehow having a way to address those circumstances at the same time. I'll make another claim: addressing these factors cures the depression. The bottom line for me is that no matter how incredible the findings, neuroscientific research cannot by definition address these factors. And in the US, these other factors are not meaningfully addressed on a large scale.

> but we aren't really 'there' yet

If you're interested in a friendly bet, I'd wager good money that we never will be.

[1] https://www.nytimes.com/2018/04/07/health/antidepressants-wi...


UCLA has been working with them for quite a while on this:

https://newsroom.ucla.edu/releases/ucla-launches-major-menta...

It's creepy in a way, but monitoring activity and expressions could result in some valuable insights about warning signs, habits and such. My guess is Apple is just wondering if it can snip out something clinically viable from the part that relies on its hardware. Doesn't mean it's a great idea, but this is part of a larger science push.


Here's an idea: why not just remove Facebook and Instagram?


Because those are 2 of the 3 most popular apps and my retirement account depends on their continued success.


You could always start divesting.


Or, severely regulate what they can do with your information, and make them pay their users for any material the user contributes.

It could be as little as a few cents for a comment, but anything to stop the growth and the monopolies.

The catch applies only to companies that are valued over a billion dollars, excluding credit; a company couldn't borrow heavily in order to stay below the threshold. This would only apply to large monopolies.

The small companies would be free of any regulations. I mean any. Small companies could experiment, and maybe have a chance at making it.

I could just see Zucky sitting on a high chair in front of Congress crying, "But it's just not faaaair, and my wife doesn't like it either! You are penalizing Machiavellian success, governor -- oops -- I mean Senator!" The one Senator that kinda understands tech: "No Marky, we are just stopping cancer, and giving small companies a chance to get out of your shadow." Mark: "Do you have any tissue besides this government cotton stuff? I'm used to lanolin-apricot-infused tissues! And where's my professional nose wiper?"


That would depress my sister. She gets tons of joy from her facebook and instagram usage. And she has zero FOMO.


These are features I do not want and will not buy. This is one step too personal.


After a college shooting some time ago, Russia started (and after the recent one in Perm, of course added even more funds to) a program to identify and monitor potentially dangerous youth. The first line is automated monitoring of the Internet, mostly social media (I don't remember anything specific about mobile device tracking/listening, though I bet it is included, officially or not), and signals from the automated system are passed to human analysts, who may trigger actions like a police visit "to talk", etc.


If a human with a medical degree has a hard time diagnosing mental health issues after spending hours with a person, I don't see how this research will provide anything except noisy garbage data.


On the health app there's a new feature to share your biometric data with your doctor, which I can see improving their accuracy and diagnoses. I'm not sure where you got the idea that the data is garbage, but even I (untrained, not a doctor) can pick up patterns and make positive changes from the data collected.


My comment is focused on data related to mental health issues. A heart rate is a very specific number that is directly related to your heart function. Cognitive function, focus, and mental health issues can't be directly tied to things like screen-use time, sleep patterns, facial expressions, and the most nebulous metric of all, a self-reported mood score.

The point of this research is also to remove the human element from the analysis loop.


Apple is working on iPhone features that, when they leak, will hurt your ability to get a job, and health insurance.


The same could be said about electronic health record software generally.

Electronic healthcare data as a phenomenon isn't going anywhere; it will only get bigger, and individuals stand to benefit tremendously from this fact as it will make early diagnoses and getting the right care at the right time more likely.

Rather than panic when we find out that another company will be handling healthcare data, we need to set high standards for engineering, security, user consent, and privacy expectations for companies working in this area.


The comparison between software that lets doctors file data you give them and software that may do proactive automatic monitoring of users is absurd.


The specific concern raised in the parent comment is about healthcare data leaks.

In my view, every company handling healthcare data should be responsible for keeping that data safe and having proper consent, regardless of whether the software is paid for by doctors or by iPhone users.

I don't think this viewpoint is absurd.


>The specific concern raised in the parent comment is about healthcare data leaks.

Hacking electronic health record software requires hacking into data centres/cloud providers. Hacking this proposal merely requires hacking an iPhone.

Electronic health record software only stores data you wish to allow your health provider to discover. Apple and co. are (according to the article) thinking about automatic collection.

This proposal is dramatically more expansive than typical health records.


This is where consent comes into play. Today, a lot of people have already consensually downloaded their healthcare provider's patient portal app onto their mobile devices, so some subset of their doctor's EHR's data has already graced their mobile devices by their own consent, based on their own judgement of the risks involved. From the standpoint of electronic healthcare data merely existing on a device that could theoretically be hacked, having a consensual system like this on a mobile device would not break any new ground.

Edit: I also wouldn't assume that an iPhone is less secure than various computers and cloud servers that doctors use for their EHR software. This is where high engineering and security standards come into play.


Your previous deleted post compared hacking this to a constructive proof of P=NP. Are you kidding me?

>This is where consent comes into play.

Oh please. Having a buried setting isn't consent, since there are a million settings to disable. Besides, if and when these things are hacked, it's easy to turn on.

>Today, a lot of people have already consensually downloaded their healthcare provider's patient portal app

An automatic opt-out scanning system on a massive amount of phones, where all the data is in a standard place on a phone, isn't comparable to an unknown amount of people downloading unstructured data to their phones and keeping an unknown amount of it.


I can't keep responding to this thread. The following is what I said in the beginning of it.

> we need to set high standards for engineering, security, user consent, and privacy expectations for companies working in this area


I think people are aware of when they're Doomscrolling[0]. Do they need their devices to remind them of excessive screentime? That depends on what info they're consuming. I deliberately follow and subscribe to positive and uplifting content, and people. There are plenty of groups on Facebook, and lists on Twitter that have nothing but inspirational messages and content to make you Think Different™ about your life. I have two news apps on my device that I check daily just to keep tabs on current events. I deliberately train my feeds not to have news-type content because once I've read the news, I don't need to see it popup in my feeds again throughout the day.

[0] https://en.wikipedia.org/wiki/Doomscrolling


> I deliberately train my feeds not to have news-type content because once I've read the news, I don't need to see it popup in my feeds again throughout the day.

Sad that we have to train the feeds on our apps to be more positive. I'm happy that you found a way to beat the system, but I believe some people do have a negative relationship with their phones and the internet that feels "out of their control". I think Apple wants people to always have a positive relationship with their devices, and promoting healthier habits on them is a step toward keeping iOS users happy.


Considering the unspoken consequences that a depression diagnosis can carry, surely this is ripe for abuse.

Imagine being involuntary committed after a "wellness check" brought on by your "smart" phone.


Imagine being red flagged by your phone and having a swat team show up to confiscate your guns. Google and Apple absolutely know if you are a gun owner.


Just measure the amount of time spent on various social media apps...


If this had been released in, oh, 2019... it would have been very revealing during the worst of the pandemic.


The single most valuable datapoint for advertisers is depression - they figured out a long time ago that depressed people are more easily marketed to. You can be sure they've already perfected the state of the art as far as is economically practical. Either Apple knows this already, or... actually, I can't think of any plausible reason that they wouldn't already know this.


Not sure how true that is. I’m relatively depressed … I don’t spend much money as I can’t be bothered.


Yeah I'm just making it up, all those years managing analytics for tens of millions of highly engaged subscribers and dealing with scumbag databrokers pales in comparison to "advertising doesn't work on me".


If and when these become common, while there will be instances where such devices and detection are helpful and even life-saving, I wonder if for the larger population it will only lead to increased hypochondria. Only time will tell, I guess.


At least one study has discovered that for some hypochondriacs, seeing previously-hidden health data can increase health anxiety. I personally find it reassuring and helpful in improving behaviors.


So your next iPhone will auto-suggest getting rid of itself? One step closer to full sentience.


I can't see how this will be sold. Get the latest iPhone to see if you are depressed!

Depressing.


Backwards. It’ll be an OS update that tells you your happiness score is nil, but you can start tracking it with the new iPhone. Then you’ll get nudges to close that Mood Ring by watching Ted Lasso.


Will this detect depression caused by too much time on your iPhone? </snark>


The obvious use case is to use facial expression detection to track how much attention people are paying to advertisements.


Could we not waste time making inane comments like this, just for today? What you just said has nothing to do with this article, nor does it match anything Apple does, as they don't even serve advertisements in any form, and they offer lots of ways for users to cut down on advertising.


More spying from the spiPhone people. Nice!


I've lost all trust in Apple.


I am getting a Nokia 3310.


We'll tell you when you're happy, we'll tell you when you're sick, put your faith in us! We don't miss a trick.

We'll tell you when to sleep, where and what to eat, with neural networked data our judgment can't be beat.

It only costs a little, all your friends bought in, you don't want to be the odd one, do you? Clubhouse is The Thing!

We know your body, we know your mind, we hold your worldview safe. Hew only unto Apple lest the unclean seal your fate.


Something something Adam and Eve ate an apple and were banished from paradise.


Their hell was a version of paradise where they had to eat apples all day long, for the fear of missing out and for the sake of appearances.


Just waiting for the Taliban moderated Clubhouse channel to add “Be Good for Goodness’ Sake!”


This is creepy.


It was terribly dangerous to let your thoughts wander when you were in any public place or within range of a telescreen. The smallest thing could give you away. A nervous tic, an unconscious look of anxiety, a habit of muttering to yourself -- anything that carried with it the suggestion of abnormality, of having something to hide. In any case, to wear an improper expression on your face (to look incredulous when a victory was announced, for example) was itself a punishable offence.

  -- 1984


Glory to sentiment analysis. Glory. I am a proud machine learning engineer.


Can you elaborate? You just want to ignore any impact your work has, just because it's fun?


The parent's post is ironic.


/s


In the Soviet period, punitive psychiatry was so widespread that one fourth of the dissidents accused of political crimes were declared "mentally ill." Even more widespread was the practice of compulsory hospitalization without court order, done simply on the orders of the KGB. Each year the highest authorities in Moscow sent up to 1,000 people to psychiatric hospitals. These were people who had come to the capital to fight for justice denied in their hometowns. The same situation could be observed in the provinces. https://www.themoscowtimes.com/2013/10/13/soviet-psychiatry-...

Since 1921, the Soviet authorities gradually began to use psychiatry to combat dissidents and human rights defenders. They abused their powers, falsifying diagnoses of unwanted people and sending them to hospitals for an indefinite period. This practice became known as "punitive psychiatry," one of the most famous and brutal examples of repression in the USSR. Over the years, Joseph Brodsky and Yegor Letov became its victims, and the practice stopped only with the decline of the USSR, by which point up to two million people had passed through it. https://tjournal.ru/stories/153921-karatelnoe-lechenie-inako... (the above is just Google-translated)


Read 1984 so many times and it still gives me chills whenever I read it again.


If you, ahem, liked 1984, you'll want to read: https://elan.school/


I've read this to the end. This makes North Korea or the actual 1984 look like a failure.

Edit: oh, look. Downvoted without so much as a comment about the reason why. This seems to be happening here more and more frequently on this site. Luckily I can leave this site, unlike the unlucky Elan kids. I wanted to delete my account for a while, but whoever downvoted this just sealed my decision. Thank you for that. Peace out.


Whoever made this is incredibly talented. What a tragedy for anyone to have gone through this.


OK, you lost me 15 minutes, possibly more soon. What is this?


An art therapy project done by a man who as a teenager found himself in a "tough love" "school" (heavy quotes in both cases) as an alternative to juvie after he was caught with a month's worth of hashish.

Long story short, prison would have been more humane.

EDIT: Reading this made me physically ill. It's the embodiment of my worst fears.


This was terrifying and heartbreaking.


Obviously the problem is having screens detect a list of things every human around can easily detect too, not what happens after that, yep.

(I'm not saying there's no issue here, I'm saying this is a useless comparison.)

Edit: As far as I'm concerned, quoting that is about as useful as mentioning number of the beast in relation to bar codes or credit cards. It almost but doesn't actually fit the situation.


I am reminded of one of the Trump rallies the other year. There was a fellow in a plaid shirt who was ejected and detained by the Secret Service for not manifesting suitably ecstatic facial expressions.

https://time.com/5390792/plaid-shirt-guy-speaks-out-trump-ra...


I can imagine the advertising value of watching for depression and cognitive decline. In the latter case, you could probably sell a dozen extended warranties for the same item.

EZ prediction of the day.

Combine depression, cognitive decline, anger, etc. values against a database of gun owners (picked up through registration, social media posts, ATF background checks, etc.). Send police to house when some crossover point is hit. It's for the children.


We need a name for the equivalent of Godwin’s Law for the hypothetical situation which gets stretched then ends with ‘they want to take my guns’.


Absolutely agree. lostlogin's law?


Funny how that sounds more plausible than them selling mental healthcare services. Those are extremely hard to get. Have a doctor actually help you? Nah, police raid.

I'm speaking as a second class European with experience in Belgium, UK and Germany.


Same. Who needs an app to tell you that you're depressed? The memes write themselves. Imagine your government mandating stay-at-home orders, having lost most of your friends, craving YouTube content, ruminating about previous experiences, and your phone popping up a message saying "That's it. You're suicidal. Here are friends you can call." and the list shows up empty. Or there's your ex, the local pizza service, and your late parents.

When people are depressed, they tend to know it. (But the sibling comments say it may be good to point out a manic episode to a bipolar person.)


This is precisely why there isn't a central database of gun owners in the USA.

Some states have even banned the keeping of identifying data in purchase records at the individual stores, too.


Never mind all the states that allow private transfer without a FFL.


Or maybe detect people who might be unstable through paranoid ramblings about how Apple is out to get gun owners before they can "take a stand against the tyranny".

Probably less work and marginally more fruitful.


I'll improve that...

...detect people who might be unstable through paranoid ramblings about how Apple is out to _fill_in_the_blank_.

The spice must flow to keep those stock options in Cupertino rising.


Why do I get the impression this is an attempt at being mysterious because "can't openly state something that's wrongthink!"

When really no one would consider what you're vaguely implying "wrongthink" or inflammatory at all. They'd probably just consider it "normal wrong" and poorly supported by reality...


No mystery. You can simply make the claim that Apple would detect anything that's anti-Apple. Why not? It isn't like the company has anything approaching morality, aside from perhaps the occasional craziness from the worker drones there.


[flagged]


Please don't cross into internet psychiatric diagnosis - it never does any good and is basically a form of personal attack, even if you didn't intend it that way.

https://hn.algolia.com/?sort=byDate&type=comment&dateRange=a...

https://news.ycombinator.com/newsguidelines.html


This is an even flimsier excuse than CSAM for my phone to spy on me, isn't it.


This website is a disaster.


Depressed after spending so much on an iPhone? Here's your happy face emoji.


Used in conjunction with Apple Maps, they could start pushing people to take long drives off of short piers.


From what I understand, absurd overregulation prevents hundreds of diseases from being diagnosed because... reasons. They can't add various sensors, but they happily add useless and dangerous stuff like face recognition, even if it's harder to implement. You can't even get a health report based on your genetic profile anymore. Which I understand can be problematic in the US, but it's happening in Europe too.

It is curious that they chose depression and cognitive decline because, even if you diagnose them, depression is iffy to treat and I don't know of anything to prescribe for cognitive decline.


> From what I understand, absurd overregulation prevents hundreds of diseases from being diagnosed because... reasons. They can't add various sensors, but they happily add useless and dangerous stuff like face recognition, even if it's harder to implement. You can't even get a health report based on your genetic profile anymore. Which I understand can be problematic in the US, but it's happening in Europe too.

Opting into [a feature] should be as easy as opting out. I for one don't mind people making their data available for analysis, but a blanket default opt-in for everything is the very reason why "absurd overregulation" exists now.

> It is curious that they chose depression and cognitive decline because, even if you diagnose them, depression is iffy to treat and I don't know of anything to prescribe for cognitive decline.

Especially when depression and cognitive decline can be so subjective. For example, I think CSAM is quite objectively understood as detrimental for everyone but is it really in Apple's interest to specify a systemwide implementation for all of their product lines to combat this behavior?


Every day that passes, I feel better and better about having removed myself from the Apple ecosystem, as I felt they would become more invasive than Google and Facebook combined. Google has your search data, Facebook has what you share, Apple has your entire life in a box. And this is only the start. Minority Report seems like a very optimistic rendition of the future.

Where are my soma pills, Apple? What do you mean "the bottle is not included" in a purchase? It's a subscription service? For my health and well-being? OK.

And dear friends, if you are blind to where all these "advancements" and "progress" are going, nobody can help you.

I feel that conspiracy theorists will have a hard time ahead. They cannot be as creative and competitive as the marketing and PR cohorts of the richest companies of the world. :)



