Notebook 3 - missing Code Skeleton?

by Leonie Schafferhans -
Number of replies: 1

Hey,

it has already been mentioned, but there is still an issue with the nonexistent fully connected layer code skeleton. In the test section there are three occurrences of the following line (with slightly different arguments, of course):

self.layers[2] = self.FullyConnected(6, self.categories, 0.)

But there seems to be no class FullyConnectedLayer, and I'm also not able to reuse the fully connected layer block we implemented in the first notebook, because the parameters differ slightly. Apart from the one statement above, I have no further information on how to implement this missing class, so attempting it in any way is rather challenging with regard to data types, output size, etc. Could you please check this, or provide a skeleton for the class? It is needed to pass three tests.

Best, Leonie

In reply to Leonie Schafferhans

Re: Notebook 3 - missing Code Skeleton?

by George Ciubotariu -

Hello, sorry for the delay!

By the way, please write on the forum if anything seems wrong with the tests.

I believe you can implement the FC layer using the following skeleton:

class FullyConnectedLayer(BaseLayer):
    def __init__(self, input_size, output_size, learning_rate):
        """ A fully connected layer.
            param: input_size (int): dimension n of the input vector
            param: output_size (int): dimension m of the output vector
            param: learning_rate (float): the learning rate of this layer
        """
        # TODO: define the necessary class variables
        pass

    def forward(self, x):
        """ Compute the forward pass through the layer.
            param: x (np.ndarray): input with shape [b, n] where b is the batch size and n is the input size
            returns (np.ndarray): result of the forward pass, of shape [b, m] where b is the batch size and
                   m is the output size
        """
        # TODO: Implement forward pass of the fully connected layer
        # Hint: Think about what you need to store during the forward pass to be able to compute 
        # the gradients in the backward pass 
        pass
    
    def get_gradient_weights(self):
        """ 
        returns (np.ndarray): the gradient with respect to the weights and biases from the last call of backward(...)
        """
        # TODO: Implement 
        pass
    
    def backward(self, error):
        """ Update the weights of this layer and return the gradient with respect to the previous layer.
            param: error (np.ndarray): of shape [b, m] where b is the batch size and m is the output size
            returns (np.ndarray): the gradient w.r.t. the previous layer, of shape [b, n] where b is the 
                   batch size and n is the input size
        """
        # TODO: Implement backward pass of the fully connected layer
        # Hint: Be careful about the order of applying the update to the weights and the calculation of 
        # the error with respect to the previous layer.
        pass
    
    def initialize(self, weights_initializer, bias_initializer):
        """ Initializes the weights/bias of this layer with the given initializers.
            param: weights_initializer: object providing a method weights_initializer.initialize(weights_shape)
                   which will return initialized weights with the given shape
            param: bias_initializer: object providing a method bias_initializer.initialize(bias_shape) 
                   which will return an initialized bias with the given shape
        """
        # TODO: Implement
        pass
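To give an idea of how the pieces fit together, here is a minimal, self-contained sketch of such a layer. Note the assumptions: it does not inherit from BaseLayer (which is not shown in this thread), and it folds the bias into the weight matrix as an extra row, so the gradient returned by get_gradient_weights() covers weights and biases together, as the docstring above suggests. This is one possible convention, not necessarily the one the tests expect.

```python
import numpy as np

class FullyConnectedLayer:
    """Minimal sketch of a fully connected layer (BaseLayer omitted here)."""

    def __init__(self, input_size, output_size, learning_rate):
        self.input_size = input_size
        self.output_size = output_size
        self.learning_rate = learning_rate
        # Assumption: weights of shape [n+1, m]; the last row acts as the bias.
        self.weights = np.random.randn(input_size + 1, output_size) * 0.01

    def forward(self, x):
        b = x.shape[0]
        # Append a column of ones so the bias is folded into the weights;
        # store the augmented input for the backward pass.
        self.input = np.concatenate([x, np.ones((b, 1))], axis=1)  # [b, n+1]
        return self.input @ self.weights                           # [b, m]

    def backward(self, error):
        # Gradient w.r.t. weights and bias, kept for get_gradient_weights().
        self.gradient_weights = self.input.T @ error               # [n+1, m]
        # Compute the gradient w.r.t. the previous layer BEFORE updating
        # the weights (the hint in the skeleton above), dropping the bias column.
        grad_input = (error @ self.weights.T)[:, :-1]              # [b, n]
        self.weights -= self.learning_rate * self.gradient_weights
        return grad_input

    def get_gradient_weights(self):
        return self.gradient_weights
```

A quick shape check with the sizes from the test line quoted above (input size 6, a batch of 8, 4 output categories) confirms forward returns [8, 4] and backward returns [8, 6]. The initialize(...) method is left out here because its behavior depends on the initializer objects your notebook provides.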