# [SOLVED] Gradient descent extended function example

## Issue

Here is my gradient descent code:

```
import numpy

def gradient_descent(func, grad_func, w_init, n_epochs=100, lr=0.001, verbose=0):
    i = 0
    w = w_init
    while i < n_epochs:
        # step against the gradient
        delta_w = -lr * grad_func(w)
        w = w + delta_w

        if verbose > 0:
            print("f={}; w: {}".format(func(w), w))

        i += 1

    return w
```

The example that was explained to me uses a simple function, i.e.
`f(p,q) = p^2 + q^2`.
Its partial derivatives form the gradient vector `[2p, 2q]`, which is clear to me.
The code for that function and its gradient is:

```
def f(w):
    return numpy.sum(w * w)

def grad(w):
    return 2 * w
```
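If you are unsure whether a hand-written gradient matches the function, you can compare it against a finite-difference approximation. This is a self-contained sketch (it repeats the `f` and `grad` definitions above; `numerical_grad` is a helper introduced here, not part of the original code):

```python
import numpy

def f(w):
    return numpy.sum(w * w)

def grad(w):
    return 2 * w

def numerical_grad(func, w, eps=1e-6):
    # central-difference approximation of the gradient, one coordinate at a time
    g = numpy.zeros_like(w, dtype=float)
    for i in range(len(w)):
        step = numpy.zeros_like(w, dtype=float)
        step[i] = eps
        g[i] = (func(w + step) - func(w - step)) / (2 * eps)
    return g

w = numpy.array([3.0, -2.0])
print(grad(w))               # analytic gradient: [ 6. -4.]
print(numerical_grad(f, w))  # should agree to several decimal places
```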

So the calculation goes:

``````# starting point
w_init = numpy.array([10,10])

# learning rate
lr = 0.1

# apply gradient descent
w_opt = gradient_descent(f, grad, w_init, n_epochs=25, lr=lr, verbose=1)
``````
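For this quadratic the result is easy to verify by hand: since `grad(w) = 2*w`, each update multiplies `w` by `(1 - 2*lr) = 0.8`, so after 25 epochs each coordinate is `10 * 0.8**25 ≈ 0.038`, i.e. close to the minimum at `[0, 0]`. A quick closed-form check of that arithmetic (a sketch, not part of the original code):

```python
import numpy

# each epoch: w <- w - lr * 2 * w = (1 - 2 * lr) * w,
# so after n epochs: w = (1 - 2 * lr)**n * w_init
w_init = numpy.array([10.0, 10.0])
lr = 0.1
w_25 = (1 - 2 * lr) ** 25 * w_init
print(w_25)  # roughly [0.0378, 0.0378]
```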

My problem is computing this for another function, e.g. `f(p,q) = p^2 + 2q^3`. I know the partial derivatives, but how do I turn them into code that works with this `gradient_descent` function? In other words, how do I write the new `f` and `grad` functions and pass them to the main function?

## Solution

Probably you’re looking for something like this:

```
def f(w):
    p, q = w
    return p ** 2 + 2 * q ** 3

def grad(w):
    p, q = w
    return numpy.array([2 * p, 6 * q ** 2])
```

The key point is that `w` is the whole parameter vector `[p, q]`, so you unpack (or index) it rather than treating `w` as a scalar. Note also that `f(p,q) = p^2 + 2q^3` is unbounded below (let `q → -∞`), so gradient descent will keep decreasing `f` instead of converging to a global minimum.
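Putting it together, here is a minimal end-to-end run. This is a self-contained sketch (it repeats the definitions so it runs on its own, uses a `for` loop instead of the original `while`, and keeps a small learning rate because `q^3` grows quickly):

```python
import numpy

def gradient_descent(func, grad_func, w_init, n_epochs=100, lr=0.001, verbose=0):
    w = w_init
    for i in range(n_epochs):
        w = w - lr * grad_func(w)  # step against the gradient
        if verbose > 0:
            print("f={}; w: {}".format(func(w), w))
    return w

def f(w):
    p, q = w
    return p ** 2 + 2 * q ** 3

def grad(w):
    p, q = w
    return numpy.array([2 * p, 6 * q ** 2])

w_init = numpy.array([10.0, 10.0])
w_opt = gradient_descent(f, grad, w_init, n_epochs=25, lr=0.001)
print(w_opt)  # both coordinates have moved downhill from [10, 10]
```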