Neural networks with Parallella

In Part 2 of his blog on “neural network design” for the Adapteva Parallella architecture, Nick Oppen looks at training, or “the hard bit” as he puts it. Across the two posts, Nick breaks down the process of getting the best performance out of a parallel architecture for processing neural networks, specifically feed-forward back-propagation networks. This, he says, is not rigorous academic work but an experiment, and he welcomes feedback. So if you find a better way, drop him a line. Read both parts of the blog at:
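For readers unfamiliar with the technique Nick is parallelizing, here is a minimal, single-threaded sketch of a feed-forward network trained with back-propagation. This is purely illustrative (a tiny XOR example in NumPy) and is not Nick's Parallella implementation; all names and parameters below are our own.

```python
import numpy as np

# Illustrative sketch only: a small feed-forward network trained by
# back-propagation on XOR. Not the Parallella implementation.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # XOR inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(size=(2, 4))  # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))  # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

mse0 = float(np.mean((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - y) ** 2))

lr = 1.0
for _ in range(5000):
    # Forward pass: propagate inputs through the layers.
    h = sigmoid(X @ W1 + b1)   # hidden activations
    out = sigmoid(h @ W2 + b2)  # network output

    # Backward pass: propagate the output error back toward the input.
    d_out = (out - y) * out * (1 - out)   # delta at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # delta at the hidden layer

    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

mse = float(np.mean((out - y) ** 2))
print(mse0, "->", mse)  # the training error should shrink substantially
```

The forward pass is embarrassingly parallel across neurons, while the backward pass needs the transposed weights to distribute each error back across the layer below; partitioning that communication across cores is exactly the problem the blog tackles.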

About Parallella

The goal of the Parallella project is to democratize access to parallel computing by providing an open and affordable hardware platform. Parallella boards are available for pre-order, with delivery now expected in November.
