Linear regression is a common machine learning algorithm that fits a linear model to predict a continuous output variable. Gradient descent is an optimization algorithm used to minimize the model's loss function. Below is a Python implementation of linear regression trained with gradient descent:
```python
import numpy as np

class LinearRegression:
    def __init__(self, learning_rate=0.01, num_iterations=1000):
        self.learning_rate = learning_rate
        self.num_iterations = num_iterations
        self.weights = None
        self.bias = None

    def fit(self, X, y):
        num_samples, num_features = X.shape
        self.weights = np.zeros(num_features)
        self.bias = 0

        for _ in range(self.num_iterations):
            # Compute predictions
            y_pred = np.dot(X, self.weights) + self.bias
            # Compute gradients of the mean-squared-error loss
            dw = (1 / num_samples) * np.dot(X.T, (y_pred - y))
            db = (1 / num_samples) * np.sum(y_pred - y)
            # Update parameters
            self.weights -= self.learning_rate * dw
            self.bias -= self.learning_rate * db

    def predict(self, X):
        return np.dot(X, self.weights) + self.bias
```
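As a quick sanity check (this snippet is an addition, not from the original post), the same update rule can be written as a standalone function and compared against NumPy's closed-form least-squares solution; on well-scaled data the two should agree closely:

```python
import numpy as np

def gd_linreg(X, y, lr=0.1, iters=5000):
    # Same update rule as the class above, as a standalone sketch:
    # w -= lr * X.T @ (X @ w + b - y) / n, and similarly for b.
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(iters):
        err = X @ w + b - y
        w -= lr * X.T @ err / n
        b -= lr * err.mean()
    return w, b

# Toy data drawn from y = 2x + 1 (hypothetical example)
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])
w, b = gd_linreg(X, y)

# Closed-form least squares on the same data, for comparison
A = np.hstack([X, np.ones((4, 1))])
w_cf, b_cf = np.linalg.lstsq(A, y, rcond=None)[0]
```

With enough iterations and a suitable learning rate, both approaches recover weights close to 2 and a bias close to 1.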
Usage example:
```python
# Create a LinearRegression object
regressor = LinearRegression(learning_rate=0.01, num_iterations=1000)

# Prepare training data
X_train = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
y_train = np.array([6, 15, 24])

# Train the model
regressor.fit(X_train, y_train)

# Prepare test data
X_test = np.array([[2, 3, 4], [5, 6, 7]])

# Predict
y_pred = regressor.predict(X_test)
print(y_pred)
```
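One practical note: plain gradient descent is sensitive to feature scale, and with a fixed learning rate it can converge slowly, or even diverge, when feature magnitudes differ widely. A standardization step before calling `fit` (a common preprocessing sketch, not part of the original post; `mu` and `sigma` are hypothetical helper names) addresses this:

```python
import numpy as np

# Standardize each feature to zero mean and unit variance before training.
# Reuse the statistics computed on the training set when transforming
# test data, so both sets are mapped through the same transformation.
X_train = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=float)
X_test = np.array([[2, 3, 4], [5, 6, 7]], dtype=float)

mu = X_train.mean(axis=0)
sigma = X_train.std(axis=0)
X_train_scaled = (X_train - mu) / sigma
X_test_scaled = (X_test - mu) / sigma
```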

Source: https://51itzy.com/kjqy/146168.html