


This is the case with literally all kinds of professional ethicists. Thanks to them, we do not routinely do scientific challenge trials and plenty of other valuable experiments, and in many cases are reduced to collecting significantly worse data and doing it very slowly and expensively.


Much better off when we were literally traumatizing babies to see how they would react to adverse stimuli.

* https://en.wikipedia.org/wiki/Little_Albert_experiment


And the Tuskegee Syphilis Study… The goal was to observe the effects of untreated syphilis even though a cure was available.

https://en.m.wikipedia.org/wiki/Tuskegee_Syphilis_Study


I would agree to that sentiment only if we applied the same ethical standards to parents and people in general, not just scientists.

Babies and children today are routinely subjected to avoidable adverse stimuli, such as circumcision and corporal punishment. That a typical parent can do this at will for no good reason, but a scientist wanting to do something similar to advance human knowledge must seek approval from an ethics board (which they would not get) is ridiculous.


A scientist can certainly circumcise and spank their own child. It is doing it to other people's kids that is a no-go. If they want to traumatize their own kid and write it up then they certainly can, but it will make for a terrible study. I'm not sure what your point is.


Ah yes, because there is no reasonable middle ground between forbidding experiments where a subject might get a mote in their eye, and literally sacrificing babies to Moloch. Please. You can make a better argument than that.


I was responding to 'scientific ethics enforcers are ruining science'. Sorry for the lack of nuance, but why should I make the effort when you didn't?


The irony with statements like these is that they are themselves ad-hoc ethical statements.

Whatever "reasonable middle ground" you find, you'd have to make logically sound ethical arguments. This is what these people do for a living, and they are very well trained in those matters.

As programmers and engineers we are also trained in logic and use it every day. But that typically pales in comparison to what philosophers do. They deal with much richer logical systems and can apply them precisely to statements and arguments.

When applied to ethics, they typically end up with conclusions far more radical than what we are dealing with here. In fact, the resulting initiatives are already softened compromises, a "reasonable middle ground", before they even clash with policy making.


Perhaps so, but I’ve seen from the inside what people looking for their next promotion try to build when the specter of the evil-mad-scientist label isn’t hanging over them (why is it always a panopticon or an easily abused tool?). It makes me grateful that there are people whose job is to think about frameworks for reasoning about what is and isn’t at least approximately ethically neutral.


I wonder: is it possible that medical research in (say) China might overtake the US (or even “the West” as a whole) at some point, due to less stringent ethical standards? Maybe not in the immediate future, but how about 25 years from now? Or even 50?


The ethical standards, interestingly enough, are actually relatively stringent, but rest more on self-control and a collective sense of what is acceptable.

Now the use of the results of that research...


Which ones?


I do think the push for ethics in AI is important. As much as these functions get sidelined during prioritization, as outlined in the parent comment, the intent of introducing that rigor is valid.


I agree AI ethics is important, but unfortunately, the field is dominated by frauds.

Do you remember the drama[1] with Google's AI ethics team? Here's a sample of the "research" they were producing: https://s10251.pcdn.co/pdf/2021-bender-parrots.pdf

[1] - https://en.wikipedia.org/wiki/Timnit_Gebru#Exit_from_Google


Why scare quotes on research? I feel like you should just say why you think it's not good.


Google has been having a hard time making an LLM product. It's sad to see them having this sort of "ordering the burning of Zheng He's ships" moment.


How is Timnit a fraud? Why are you scare quoting "research"? It's well written imo and has over 128 sources.

I'd think that in light of all the problems we've seen with chatGPT "hallucinating" answers it reinforces their concerns about "stochastic parrots" more than anything.

I mean, if you have specific grievances, please do share them as this is an important conversation, but as of now your comment amounts to mudslinging and no substance.


Medical research cleared this hurdle a long time ago. Table 1 is always demographics of your population sample, allowing outsiders to assess your generalizability by age, gender, race, and a host of other factors that are often context-dependent.


Thing is, what gets called "AI ethics" actually has approximately* nothing to do with AI.

It's all about whichever field the particular use-case belongs to, and would lose nothing by seeing the AI as a black box.

* IIRC there have been issues with models that find correlations being presented as finding causality. That is something that belongs to AI as a field; other people can't evaluate your stuff accurately if you've misinformed them about its capabilities.



