kgwgk on Oct 4, 2019 | on: A Gentle Introduction to Bayes’ Theorem for Machin...
You cannot use just any prior, let alone literally any regularizer, and claim it would work almost just as well. A unit-variance normal prior centered at 0 and one centered at 42 can give very different results.
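A minimal sketch of why (plain NumPy, with made-up numbers): a MAP estimate under a unit-variance normal prior is just least squares plus an L2 penalty pulling toward the prior mean, so shrinking toward 0 versus toward 42 gives very different answers when data are scarce.

    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical data: three noisy observations of a true mean of 5.
    data = rng.normal(loc=5.0, scale=1.0, size=3)

    def map_mean(data, prior_mean, prior_var=1.0, noise_var=1.0):
        # Closed-form MAP estimate of a Gaussian mean under a normal prior:
        # a precision-weighted average of the prior mean and the data.
        # Equivalent to least squares with an L2 penalty toward prior_mean.
        n = len(data)
        precision = 1.0 / prior_var + n / noise_var
        return (prior_mean / prior_var + data.sum() / noise_var) / precision

    print(map_mean(data, prior_mean=0.0))   # shrunk toward 0
    print(map_mean(data, prior_mean=42.0))  # shrunk toward 42: far apart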
mlevental on Oct 4, 2019
I said almost; that's code for "obviously I'm not talking about pathological regularizers."
kgwgk on Oct 4, 2019
Well, in that case minimizing the negative log-likelihood seems principled, but you could minimize literally any loss function and it would work almost just as well.
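The reductio can be made concrete (a hypothetical sketch, plain NumPy): even for estimating a single constant, different losses disagree. Squared error is minimized by the mean, absolute error by the median, and on skewed data those are nowhere near "almost the same."

    import numpy as np

    # Hypothetical skewed sample: a few small values plus one outlier.
    x = np.array([1.0, 1.2, 0.9, 1.1, 50.0])

    # argmin over c of sum((x - c)**2) is the sample mean.
    print(x.mean())      # 10.84, dragged far off by the outlier

    # argmin over c of sum(abs(x - c)) is the sample median.
    print(np.median(x))  # 1.1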
mlevental on Oct 4, 2019
Lol agreed!