Introduction to Gradient Descent


Outline

  • What's Gradient
  • What does it mean
  • How to Search
  • AutoGrad

What's Gradient

  • Derivative: the abstract notion of a rate of change
  • Partial derivative: the rate of change along one specific axis
  • Gradient: a vector collecting all the partial derivatives

\[\nabla{f} = (\frac{\partial{f}}{\partial{x_1}};\frac{\partial{f}}{{\partial{x_2}}};\cdots;\frac{\partial{f}}{{\partial{x_n}}})\]
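As a concrete example (this function is an illustration, not from the original post), take \(f(x_1, x_2) = x_1^2 + x_2^2\). Its gradient is

\[\nabla{f} = \left(\frac{\partial{f}}{\partial{x_1}}, \frac{\partial{f}}{\partial{x_2}}\right) = (2x_1, 2x_2)\]

At the point \((1, 1)\) the gradient is \((2, 2)\): it points straight away from the minimum at the origin, and its length grows the farther the point is from that minimum.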

What does it mean?

  • The direction of each arrow shows the direction of the gradient (the direction of steepest increase)
  • The length of each arrow shows how fast the function value increases (see the sketch below)
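To visualize these arrows, the sketch below plots the gradient field of \(f(x, y) = x^2 + y^2\) (it assumes numpy and matplotlib are available; neither is used elsewhere in this post):

import numpy as np
import matplotlib.pyplot as plt

# Gradient field of f(x, y) = x**2 + y**2: each arrow points in the direction of
# steepest increase, and its length grows with distance from the minimum at the origin.
xs, ys = np.meshgrid(np.linspace(-2., 2., 11), np.linspace(-2., 2., 11))
dfdx, dfdy = 2 * xs, 2 * ys   # gradient components (2x, 2y)

plt.quiver(xs, ys, dfdx, dfdy)
plt.title("Gradient field of f(x, y) = x^2 + y^2")
plt.show()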

How to search

  • Search in the direction opposite to the gradient, i.e. the direction of steepest descent

For instance

\[\theta_{t+1}=\theta_t-\alpha_t\nabla{f(\theta_t)}\]
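A minimal sketch of this update rule on the one-dimensional function \(f(\theta) = \theta^2\) (the function, step size, and starting point are illustrative choices, not from the original post):

# Minimize f(theta) = theta**2 with plain gradient descent.
def grad_f(theta):
    return 2. * theta                        # f'(theta) = 2 * theta

theta = 5.                                   # initial guess
alpha = 0.1                                  # fixed learning rate alpha_t

for t in range(100):
    theta = theta - alpha * grad_f(theta)    # theta_{t+1} = theta_t - alpha_t * grad f(theta_t)

print(theta)                                 # approaches 0, the minimizer of f

Each step moves \(\theta\) a short distance along the negative gradient, so the iterates shrink toward the minimum at 0.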

AutoGrad

  • with tf.GradientTape() as tape:

Build computation graph

\(loss = f_\theta{(x)}\)

  • [w_grad] = tape.gradient(loss, [w])
import tensorflow as tf

w = tf.constant(1.)
x = tf.constant(2.)
y = x * w                       # computed outside any tape, so it is not recorded

with tf.GradientTape() as tape:
    tape.watch([w])             # constants are not tracked automatically and must be watched
    y2 = x * w                  # recorded on the tape

grad1 = tape.gradient(y, [w])   # y was built outside the tape, so no gradient is available
grad1
[None]
with tf.GradientTape() as tape:
    tape.watch([w])
    y2 = x * w

grad2 = tape.gradient(y2, [w])  # y2 was recorded on the tape, so the gradient is x = 2.0
grad2
[<tf.Tensor: shape=(), dtype=float32, numpy=2.0>]
try:
    grad2 = tape.gradient(y2, [w])  # second call on the same non-persistent tape
except Exception as e:
    print(e)
GradientTape.gradient can only be called once on non-persistent tapes.
  • Use a persistent tape so the gradient can be computed more than once
with tf.GradientTape(persistent=True) as tape:
    tape.watch([w])
    y2 = x * w

grad2 = tape.gradient(y2, [w])  # first call
grad2
[<tf.Tensor: shape=(), dtype=float32, numpy=2.0>]
grad2 = tape.gradient(y2, [w])  # a persistent tape allows repeated gradient calls
grad2
[<tf.Tensor: shape=(), dtype=float32, numpy=2.0>]

\(2^{nd}\)-order

  • y = xw + b
  • \(\frac{\partial{y}}{\partial{w}} = x\)
  • \(\frac{\partial^2{y}}{\partial{w^2}} = \frac{\partial{y'}}{\partial{w}} = \frac{\partial{x}}{\partial{w}} = \text{None}\) (x does not depend on w, so the second derivative is None)
b = tf.constant(0.)                        # b is not defined earlier in the post; any constant works here
with tf.GradientTape() as t1:
    t1.watch([w])
    with tf.GradientTape() as t2:
        t2.watch([w, b])
        y = x * w + b
    dy_dw, dy_db = t2.gradient(y, [w, b])
d2y_dw2 = t1.gradient(dy_dw, w)            # None: dy_dw equals x, which does not depend on w
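To tie the tape back to the update rule from the first half of the post, here is a minimal sketch of gradient descent driven by tf.GradientTape (the toy loss, learning rate, and variable name are illustrative assumptions, not part of the original post):

w_var = tf.Variable(5.)                    # trainable parameter; Variables are watched automatically
alpha = 0.1                                # learning rate

for step in range(100):
    with tf.GradientTape() as tape:
        loss = (w_var - 2.) ** 2           # toy loss with its minimum at w = 2
    [w_grad] = tape.gradient(loss, [w_var])
    w_var.assign_sub(alpha * w_grad)       # w <- w - alpha * grad

print(w_var.numpy())                       # close to 2.0

Because w_var is a tf.Variable, the tape records it without an explicit tape.watch call.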