The SINABS documentation makes it clear that, because the spiking neural network layers are stateful, their states need to be reset during training.
In the SINABS docs (https://sinabs.readthedocs.io/en/v1.2.9/tutorials/nmnist.html) this is done via the sinabs.reset_states() method.
In the sinabs-dynapcnn documentation (https://synsense.gitlab.io/sinabs-dynapcnn/getting_started/notebooks/nmnist_quick_start.html) there is no call to sinabs.reset_states(); instead, a manual detach() is run on the buffers of the stateful layers.
Are these two methods equivalent?
Hi, Hovren. These two methods are not equivalent: reset_states() sets the membrane potential v_mem back to 0, whereas detach() keeps the value and only cuts the tensor out of the autograd graph.
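A quick way to see the difference (a minimal sketch; the layer, input shape, and constants are arbitrary, and the printed values assume IAF's default spike threshold of 1.0):

```python
import torch
import sinabs.layers as sl

layer = sl.IAF()
# Drive the layer below threshold: 10 steps of 0.0625 accumulate to
# v_mem = 0.625 everywhere, so no spikes are emitted.
layer(torch.full((1, 10, 100), 0.0625))  # input shape: (batch, time, features)

layer.v_mem.detach_()        # detach: value unchanged, autograd graph cut
print(layer.v_mem.unique())  # tensor([0.6250])

layer.reset_states()         # reset: membrane potential back to zero
print(layer.v_mem.unique())  # tensor([0.])
```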
Hi @hovren,
There are two scenarios that are relevant during training:
1. Your gradients need to be reset for every batch after backpropagation. You will use the detach() method to accomplish this for any state buffers that are not taken care of by the optimiser's zero_grad() method.
2. Because some layers hold internal states, you can choose whether your initial state is always zero for each sample or whether you would like it to be a random initial condition. Often, the most sensible random initial condition is to retain the previous sample's final state as the new initial condition; under this condition, you will want to use detach(). If instead you want your initial condition to be a normally distributed state, or simply a zero state, then you should use the reset_states() method. A sketch of both options follows this list.
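To make the two scenarios concrete, here is a minimal training-loop sketch. The model, data, and loss are placeholders; it assumes sinabs.reset_states(model) behaves as in the tutorials linked above and that the stateful layers derive from sinabs.layers.StatefulLayer:

```python
import torch
import sinabs
import sinabs.layers as sl

def zero_initial_state(model: torch.nn.Module) -> None:
    # Zero initial condition: set every layer's v_mem back to 0.
    sinabs.reset_states(model)

def carry_over_state(model: torch.nn.Module) -> None:
    # "Previous sample" initial condition: keep the v_mem values, but
    # detach them so gradients cannot flow into the previous batch.
    for layer in model.modules():
        if isinstance(layer, sl.StatefulLayer):
            for buf in layer.buffers():
                buf.detach_()

model = torch.nn.Sequential(torch.nn.Linear(100, 10), sl.IAF())
optimizer = torch.optim.Adam(model.parameters())

# Dummy batches standing in for a real dataloader: (batch, time, features).
dataloader = [
    (torch.rand(8, 20, 100), torch.randint(0, 10, (8,))) for _ in range(3)
]

for data, target in dataloader:
    optimizer.zero_grad()
    zero_initial_state(model)  # or: carry_over_state(model)
    output = model(data)  # spike trains of shape (batch, time, classes)
    loss = torch.nn.functional.cross_entropy(output.sum(1), target)
    loss.backward()
    optimizer.step()
```

Either call goes at the top of the batch loop; the only difference is whether v_mem starts the batch at zero or at the value it held at the end of the previous batch.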
I hope this gives you a better understanding of the difference between these two methods.
Best regards
Sadique