dl_with_numpy package

Submodules

dl_with_numpy.activation_functions module

Activation functions for a neural network.

class dl_with_numpy.activation_functions.SigmoidActivation(n)

Sigmoid activation layer of a neural network.

Create activation layer.

Parameters: n (integer) – Size of input and output data. This layer accepts inputs with dimension [batch_size, n] and produces an output of the same dimensions.
backward_pass()

Perform backward pass of autodiff algorithm on this layer.

Calculate the derivative of the loss with respect to this layer’s input from the derivative of the loss with respect to this layer’s output. Store the result.

Returns: Nothing
forward_pass()

Perform forward pass of autodiff algorithm on this layer.

Calculate the output of this layer from its input and store the result.

Returns: Nothing
static sigmoid(x)

Calculate the sigmoid function of the input.

Parameters: x – Input
Returns: sigmoid(x)
sigmoid_derivative(x)

Calculate the derivative of the sigmoid of the input.

Derivative is with respect to the input.

Parameters: x – Input
Returns: Derivative of sigmoid(x)
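
Both functions are short in NumPy. The sketch below is illustrative rather than the package’s exact implementation; the final chain-rule comment assumes the layer caches its input x and receives the gradient of the loss with respect to its output (dloss_dout), as described under backward_pass() above.

    import numpy as np

    def sigmoid(x):
        # sigmoid(x) = 1 / (1 + exp(-x))
        return 1.0 / (1.0 + np.exp(-x))

    def sigmoid_derivative(x):
        # d/dx sigmoid(x) = sigmoid(x) * (1 - sigmoid(x))
        s = sigmoid(x)
        return s * (1.0 - s)

    # Backward pass of the activation layer (illustrative):
    # dloss_din = dloss_dout * sigmoid_derivative(x), elementwise.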

dl_with_numpy.layer module

Abstract class for layers of a neural network.

class dl_with_numpy.layer.Layer(n_in, n_out)

Base class for a single layer of a neural network.

‘Layer’ is used here in a broader sense than usual and includes activation and loss layers.

Initialise attributes of base class.

Parameters:
  • n_in (integer) – Size of input to this layer. This layer accepts inputs with dimension [batch_size, n_in].
  • n_out (integer) – Size of output of this layer. This layer creates outputs with dimension [batch_size, n_out].
backward_pass()

Perform backward pass of autodiff algorithm on this layer.

Calculate the derivative of the loss with respect to this layer’s input from the derivative of the loss with respect to this layer’s output. Store the result.

Returns: Nothing
calc_param_grads()

Calculate the gradients of the parameters of this layer.

This is the gradient of the network’s loss with respect to this layer’s parameters, if there are any. The result is stored.

Returns: Nothing
forward_pass()

Perform forward pass of autodiff algorithm on this layer.

Calculate the output of this layer from its input and store the result.

Returns: Nothing
update_params(learning_rate)

Update this layer’s parameters, if there are any.

Parameters: learning_rate (float) – Learning rate
Returns: Nothing
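
Taken together, the interface above suggests a base class along the following lines. This is a sketch built only from the documented signatures; the cached attribute names (input, output, dloss_din, dloss_dout) are assumptions, not necessarily the names used in dl_with_numpy.

    class Layer:
        """Sketch of the documented layer interface (illustrative)."""

        def __init__(self, n_in, n_out):
            self.n_in = n_in        # inputs have shape [batch_size, n_in]
            self.n_out = n_out      # outputs have shape [batch_size, n_out]
            self.input = None       # cached during forward_pass (assumed name)
            self.output = None
            self.dloss_din = None   # cached during backward_pass (assumed name)
            self.dloss_dout = None

        def forward_pass(self):
            raise NotImplementedError

        def backward_pass(self):
            raise NotImplementedError

        def calc_param_grads(self):
            pass  # no-op for layers without parameters

        def update_params(self, learning_rate):
            pass  # no-op for layers without parameters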

dl_with_numpy.linear_layer module

Linear layer for a neural network.

class dl_with_numpy.linear_layer.LinearLayer(n_in, n_out, seed=0)

A linear layer for a neural network.

Initialise object.

Parameters:
  • n_in (integer) – Size of input to this layer. This layer accepts inputs with dimension [batch_size, n_in].
  • n_out (integer) – Size of output of this layer. This layer creates outputs with dimension [batch_size, n_out].
  • seed (integer) – Random seed for initialising the linear layer’s parameters.
backward_pass()

Perform backward pass of autodiff algorithm on this layer.

Calculate the derivative of the loss with respect to this layer’s input from the derivative of the loss with respect to this layer’s output. Store the result.

Returns: Nothing
calc_param_grads()

Calculate the gradients of the parameters of this layer.

This is the gradient of the network’s loss with respect to this layer’s parameters, if there are any. The result is stored.

Returns: Nothing
forward_pass()

Perform forward pass of autodiff algorithm on this layer.

Calculate the output of this layer from its input and store the result.

Returns: Nothing
update_params(learning_rate)

Update this layer’s parameters, if there are any.

Parameters: learning_rate (float) – Learning rate
Returns: Nothing
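
The documented behaviour corresponds to the standard linear-layer algebra. The sketch below assumes a weights-only layer (output = input @ weights, with no bias term, which is an assumption) and uses the same illustrative attribute names as the Layer sketch above.

    import numpy as np

    class LinearLayerSketch:
        """Illustrative linear layer; not dl_with_numpy's exact code."""

        def __init__(self, n_in, n_out, seed=0):
            rng = np.random.default_rng(seed)    # seeded parameter init
            self.weights = rng.standard_normal((n_in, n_out))
            self.input = None
            self.output = None
            self.dloss_din = None
            self.dloss_dout = None
            self.weight_grads = None

        def forward_pass(self):
            # [batch, n_in] @ [n_in, n_out] -> [batch, n_out]
            self.output = self.input @ self.weights

        def backward_pass(self):
            # dL/dinput = dL/doutput @ weights.T
            self.dloss_din = self.dloss_dout @ self.weights.T

        def calc_param_grads(self):
            # dL/dweights = input.T @ dL/doutput
            self.weight_grads = self.input.T @ self.dloss_dout

        def update_params(self, learning_rate):
            # plain gradient-descent step
            self.weights -= learning_rate * self.weight_grads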

dl_with_numpy.losses module

Loss layers for a neural network.

class dl_with_numpy.losses.MeanSquareLoss

Mean square loss layer for a neural network.

Create mean square loss layer.

backward_pass()

Perform backward pass of autodiff algorithm on this layer.

Calculate the derivative of the loss with respect to this layer’s input from the derivative of the loss with respect to this layer’s output. Store the result.

Returns: Nothing
forward_pass()

Perform forward pass of autodiff algorithm on this layer.

Calculate the output of this layer from its input and store the result.

Returns: Nothing
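
For a batch of predictions y_pred and targets y, mean square loss and its gradient take the following form. The exact normalisation (mean over all elements rather than over the batch) is an assumption; dl_with_numpy may differ by a constant factor.

    import numpy as np

    def mse_forward(y_pred, y):
        # L = mean((y_pred - y)^2) over all elements
        return np.mean((y_pred - y) ** 2)

    def mse_backward(y_pred, y):
        # dL/dy_pred = 2 * (y_pred - y) / N, with N the number of elements
        return 2.0 * (y_pred - y) / y_pred.size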

dl_with_numpy.network module

Neural network.

class dl_with_numpy.network.NeuralNetwork

A simple neural network.

The computation graph is recorded as a doubly-linked list, meaning that only simple network structures are permitted (i.e. no branching in the computation graph).

Initialise the neural network.

add_input_layer(n_in, n_out)

Add a linear input layer to the neural network.

This must be the first layer added to the neural network. If the network already has one or more layers, a ValueError is raised.

Parameters:
  • n_in (integer) – Size of inputs to this layer. Inputs are expected to have dimensions [batch_size, n_in].
  • n_out (integer) – Size of outputs of this layer. Outputs will have dimensions [batch_size, n_out].
Returns: Nothing

add_linear_layer(n_out)

Add a linear hidden layer to the end of the neural network.

The input dimension to this layer is automatically taken from the current last layer in the network.

Parameters: n_out (integer) – Size of outputs of this layer. Outputs will have dimensions [batch_size, n_out].
Returns: Nothing

add_mse_loss_layer()

Add a mean-square error loss layer to the end of the neural network.

Returns: Nothing
add_output_layer(n_out)

Add a linear output layer to the end of the neural network.

The input dimension to this layer is automatically taken from the current last layer in the network.

The only difference between an output layer and a hidden layer is that an output layer’s output can be accessed through NeuralNetwork’s output_layer attribute. Only one output layer is permitted.

Parameters: n_out (integer) – Size of outputs of this layer. Outputs will have dimensions [batch_size, n_out].
Returns: Nothing

add_sigmoid_activation()

Add a sigmoid activation layer to the end of the neural network.

Returns: Nothing
backwards_pass()

Perform the backwards pass of the neural network.

The gradients of the loss with respect to each layer’s input and output are stored in preparation for the calculation of the parameter gradients.

Returns: Nothing
calc_gradients()

Calculate the gradients of all the parameters in each layer.

The gradients are stored in preparation for updating the parameters.

Returns: Nothing
forward_pass(x)

Perform a forward pass through the neural network.

Each layer’s input and output values are stored in preparation for the backwards pass to calculate gradients.

Parameters: x (2d Numpy array) – Data input
Returns: Numpy array containing the output of the output layer.
training_step(x, y, learn_rate)

Perform a full training step using the back-propagation algorithm.

This method requires the neural network to have at least an output layer and a loss layer.

Parameters:
  • x (2d Numpy array) – Training data x values. Dimensions must be [batch_size, input_dim].
  • y (2d Numpy array) – Training data y values (i.e. labels). Dimensions must be [batch_size, 1].
  • learn_rate (float) – Learning rate for the training step.
Returns: Nothing

update_params(learn_rate)

Update the parameters of the neural network.

The update is in the direction of reducing the loss.

Parameters: learn_rate (float) – Learning rate
Returns: Nothing
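
Putting the pieces together, a training loop might look as follows. This sketch uses only the signatures documented above; the layer sizes, data, and hyperparameters are made up for illustration.

    import numpy as np

    from dl_with_numpy.network import NeuralNetwork

    net = NeuralNetwork()
    net.add_input_layer(n_in=3, n_out=4)   # first layer must be an input layer
    net.add_sigmoid_activation()
    net.add_output_layer(n_out=1)
    net.add_mse_loss_layer()               # loss layer required by training_step

    rng = np.random.default_rng(0)
    x = rng.random((8, 3))                 # [batch_size, input_dim]
    y = rng.random((8, 1))                 # [batch_size, 1]

    for _ in range(100):
        net.training_step(x, y, learn_rate=0.1)

    predictions = net.forward_pass(x)      # [batch_size, 1]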