In the context of machine learning, hyperparameter optimization or model
selection is the problem of choosing a set of hyperparameters for a
learning algorithm, usually with the goal of optimizing a measure of
the algorithm's performance on an independent data set.
Often cross-validation is used to estimate this generalization performance.
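As a concrete sketch of the idea (my own pure-Python illustration, not code from the original post): the hyperparameter k of a k-nearest-neighbors classifier can be selected by comparing cross-validation accuracy across candidate values.

```python
from collections import Counter

def knn_predict(train, query, k):
    """Classify `query` by majority vote among its k nearest training points.

    `train` is a list of ((x, y), label) pairs; distance is squared Euclidean.
    """
    neighbors = sorted(
        train,
        key=lambda p: (p[0][0] - query[0]) ** 2 + (p[0][1] - query[1]) ** 2,
    )
    votes = Counter(label for _, label in neighbors[:k])
    return votes.most_common(1)[0][0]

def cv_accuracy(data, k, folds=5):
    """Mean accuracy of k-NN over `folds` cross-validation splits."""
    correct = 0
    for i in range(folds):
        test = data[i::folds]  # every `folds`-th point is held out
        train = [p for j, p in enumerate(data) if j % folds != i]
        correct += sum(knn_predict(train, x, k) == y for x, y in test)
    return correct / len(data)

def select_k(data, candidates=(1, 3, 5, 7)):
    """Hyperparameter selection: pick the k with the best CV accuracy."""
    return max(candidates, key=lambda k: cv_accuracy(data, k))
```

In practice one would use scikit-learn (e.g. a grid search with cross-validation over `n_neighbors`) rather than hand-rolled code; the sketch above only makes the train/held-out split explicit.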
Hyperparameter optimization contrasts with actual learning problems, which
are also often cast as optimization problems, but which optimize a loss
function on the training set alone. In effect, learning algorithms learn
parameters that model/reconstruct their inputs well, while hyperparameter
tuning ensures that the model does not overfit its data.
e.g. regularization:
Regularization here refers to penalty terms such as those introduced by
AIC or BIC. Regularization prevents overfitting by constraining the
parameters so that the model cannot fit the training data too closely,
discarding some of the excessive fit to noise.
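A minimal numerical illustration of this shrinkage effect (my own example, not from the original): L2 (ridge) regularization adds a penalty λw² to the squared-error loss, pulling the fitted coefficient toward zero; the strength λ is itself a hyperparameter.

```python
def ridge_1d(xs, ys, lam):
    """Closed-form 1-D ridge regression through the origin.

    Minimizes sum((y - w*x)^2) + lam * w^2; setting the derivative in w
    to zero gives w = sum(x*y) / (sum(x^2) + lam).
    """
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.1, 5.9]  # roughly y = 2x plus noise

w_unreg = ridge_1d(xs, ys, lam=0.0)   # ordinary least squares
w_reg = ridge_1d(xs, ys, lam=10.0)    # penalized fit is shrunk toward 0
```

With λ = 0 the fit chases the data as closely as possible; increasing λ trades training-set fit for a smaller coefficient, which is exactly the overfitting-control role described above.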
2025 Machine Learning Basics: Wikipedia translation on hyperparameter selection, k-nearest neighbors, and a simple sklearn example
Source: https://51itzy.com/kjqy/48908.html