Part 9 (10 points, coding task)
The purpose of this part is to guide you in learning to use `torch.autograd.grad`.
For each given input \left( t, x \right), we compute not only U \left( t, x \mid \mathbf{\theta} \right) (the output of the model) but also its 1st- and 2nd-order partial derivatives.
In PyTorch, this can be done with `torch.autograd.grad`.
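For concreteness, here is a minimal sketch of the idea. The tiny two-layer network below is a hypothetical stand-in for U \left( t, x \mid \mathbf{\theta} \right); the model architecture and input values are illustrative assumptions, not part of the assignment.

```python
import torch
from torch import autograd, nn

# Hypothetical tiny model standing in for U(t, x | theta): maps (t, x) -> U.
model = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))

t = torch.tensor(0.5, requires_grad=True)   # 0-dim input
x = torch.tensor(0.3, requires_grad=True)   # 0-dim input

U = model(torch.stack([t, x])).squeeze()    # 0-dim scalar output

# 1st-order partials; create_graph=True keeps the graph so we can
# differentiate again.
U_t = autograd.grad(U, t, create_graph=True)[0]
U_x = autograd.grad(U, x, create_graph=True)[0]

# A 2nd-order partial.
U_xx = autograd.grad(U_x, x, create_graph=True)[0]
print(U_t, U_x, U_xx)
```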
Part 9.1
Consider the following function
Do the following tasks at \left( p, q \right) = \left( 1, 2 \right):
- Define tensors `p` and `q` that have values 1.0 and 2.0 (float data type), respectively, an identical shape `()` (that is, 0-dim), and `requires_grad = True`.
- Compute tensor `f` according to the formula above.
- Compute \frac{\partial f \left( p, q \right)}{\partial p} by using `f_p = autograd.grad(f, p, create_graph=True)[0]`. Print `f_p`.
- Compute \frac{\partial f \left( p, q \right)}{\partial q} by using `f_q = autograd.grad(f, q, create_graph=True)[0]`. Print `f_q`.
- Compute \frac{\partial^2 f \left( p, q \right)}{\partial p^2} by using `f_pp = autograd.grad(f_p, p, create_graph=True)[0]`. Print `f_pp`.
- Compute \frac{\partial^2 f \left( p, q \right)}{\partial q^2} by using `f_qq = autograd.grad(f_q, q, create_graph=True)[0]`. Print `f_qq`.
- Compute \frac{\partial^2 f \left( p, q \right)}{\partial p \partial q} by using `f_pq = autograd.grad(f_p, q, create_graph=True)[0]`. Print `f_pq`.
- Compute \frac{\partial^3 f \left( p, q \right)}{\partial q^3} by using `f_qqq = autograd.grad(f_qq, q, create_graph=True)[0]`. Print `f_qqq`.
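The following sketch walks through all of these steps. Since it must use some concrete formula, it assumes a hypothetical placeholder f \left( p, q \right) = p^2 q + q^3 purely for illustration; substitute the assignment's actual formula.

```python
import torch
from torch import autograd

# 0-dim float tensors at (p, q) = (1, 2) with gradients enabled.
p = torch.tensor(1.0, requires_grad=True)
q = torch.tensor(2.0, requires_grad=True)

# Placeholder formula (an assumption for this sketch, not the assignment's).
f = p**2 * q + q**3

# 1st-order partials; create_graph=True keeps the graph for higher orders.
f_p = autograd.grad(f, p, create_graph=True)[0]
f_q = autograd.grad(f, q, create_graph=True)[0]
print(f_p)    # df/dp = 2*p*q       -> 4.0
print(f_q)    # df/dq = p**2 + 3*q**2 -> 13.0

# 2nd-order partials, obtained by differentiating the 1st-order results.
f_pp = autograd.grad(f_p, p, create_graph=True)[0]
f_qq = autograd.grad(f_q, q, create_graph=True)[0]
f_pq = autograd.grad(f_p, q, create_graph=True)[0]
print(f_pp)   # d2f/dp2   = 2*q -> 4.0
print(f_qq)   # d2f/dq2   = 6*q -> 12.0
print(f_pq)   # d2f/dpdq  = 2*p -> 2.0

# 3rd-order partial.
f_qqq = autograd.grad(f_qq, q, create_graph=True)[0]
print(f_qqq)  # d3f/dq3 = 6.0
```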
Part 9.2
Consider the function g \left( x \right) = x^2.
Let x be a vector with values 0, 0.1, \cdots, 0.9, 1.
Do the following tasks.
- Generate `x` as a 1-dim tensor and set `x.requires_grad = True`.
- Generate `g = x**2`. Thus, `g` has the same shape as `x`.
- Define `g_x` to be the element-wise 1st-order derivative of the function g with respect to x; thus, `g_x` has the same shape as `x`. Write code to compute `g_x`. Print `g_x` and `g_x.shape`. Hint: with `autograd.grad(f, x, create_graph=True)[0]`, tensor `x` can have any dimension, but tensor `f` must be 0-dim. In this problem, tensor `g` is not 0-dim, so you need to think about how to address this issue (one possible approach is sketched after this list).
- Define `g_xx` to be the element-wise 2nd-order derivative of the function g with respect to x; thus, `g_xx` has the same shape as `x`. Write code to compute `g_xx`. Print `g_xx` and `g_xx.shape`.
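A minimal sketch of one way to handle the non-scalar output: because each g[i] depends only on x[i], differentiating `g.sum()` with respect to `x` recovers the element-wise derivative. This is one possible approach, not necessarily the only intended one.

```python
import torch
from torch import autograd

# x = 0, 0.1, ..., 0.9, 1 as a 1-dim tensor of 11 points.
x = torch.linspace(0.0, 1.0, 11)
x.requires_grad = True

g = x**2                 # same shape as x

# g is not 0-dim, so reduce it to a scalar first. Each g[i] depends only
# on x[i], so d(g.sum())/dx[i] = dg[i]/dx[i]: summing recovers the
# element-wise derivative.
g_x = autograd.grad(g.sum(), x, create_graph=True)[0]
print(g_x, g_x.shape)    # 2*x, shape (11,)

# The same trick again gives the element-wise 2nd-order derivative.
g_xx = autograd.grad(g_x.sum(), x, create_graph=True)[0]
print(g_xx, g_xx.shape)  # all 2's, shape (11,)
```

Equivalently, one can pass `grad_outputs=torch.ones_like(g)` to `autograd.grad` instead of summing; for this diagonal (element-wise) dependence the two give the same result.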