This paper presents a new convolutional neural network-based time-series model. Typical convolutional neural network (CNN) architectures rely on max-pooling operators between layers, which reduces the temporal resolution at the top layers. Instead, this work considers a fully convolutional network (FCN) architecture that uses causal filtering operations and allows the rate of the output signal to match that of the input signal. It goes on to propose a refined version of the FCN, the undecimated fully convolutional neural network (UFCNN), which is motivated by the undecimated wavelet transform. Experimental results show that the undecimated version of the FCN is necessary for effective time-series modeling. The UFCNN has several advantages over time-series models such as the recurrent neural network (RNN) and long short-term memory (LSTM): it does not suffer from the vanishing or exploding gradient problems and is therefore easier to train, and its convolution operations can be implemented more efficiently than the recursion involved in RNN-based models. The performance of the model is evaluated on a synthetic target tracking task using only measurements generated from a state-space model, a probabilistic modeling task on polyphonic music sequences, and a high-frequency trading task using a time series of ask/bid quotes and their corresponding volumes. The experimental results on synthetic and real datasets verify the significant advantages of the UFCNN over the RNN and LSTM baselines.
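The two architectural ideas the abstract names can be illustrated in a few lines. Causal filtering means each output sample depends only on current and past inputs, which is achieved by left-padding the input so the output keeps the same length (rate) as the input; the undecimated (à trous) idea, borrowed from the undecimated wavelet transform, upsamples the filter with a dilation factor at deeper levels instead of downsampling the signal. The sketch below is illustrative only and is not the paper's implementation; the function name and its dilation parameter are assumptions for exposition.

```python
def causal_conv1d(x, kernel, dilation=1):
    """Causal 1-D convolution sketch (not the paper's code).

    Left-pads x with (k - 1) * dilation zeros, so y[t] depends only on
    x[t], x[t - dilation], ..., x[t - (k-1)*dilation], and len(y) == len(x).
    dilation > 1 mimics the undecimated/a-trous filter upsampling.
    """
    k = len(kernel)
    pad = (k - 1) * dilation
    padded = [0.0] * pad + list(x)
    # y[t] = sum_j kernel[j] * x[t - j*dilation]  (past/current samples only)
    return [sum(kernel[j] * padded[t + pad - j * dilation] for j in range(k))
            for t in range(len(x))]
```

With `dilation=1` this is an ordinary causal convolution; doubling the dilation at each layer grows the receptive field exponentially without any loss of output resolution, which is the property that distinguishes the UFCNN from a max-pooling CNN.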
Read the paper:
Time-series modeling with undecimated fully convolutional neural networks. Roni Mittelman.