A parallel Fortran framework for neural networks and deep learning

Type: Article

Publication Date: 2019-03-28

Citations: 37

DOI: https://doi.org/10.1145/3323057.3323059

Abstract

This paper describes neural-fortran, a parallel Fortran framework for neural networks and deep learning. It features a simple interface to construct feed-forward neural networks of arbitrary structure and size, several activation functions, and stochastic gradient descent as the default optimization algorithm. Neural-fortran also leverages the Fortran 2018 standard collective subroutines to achieve data-based parallelism on shared- or distributed-memory machines. First, I describe the implementation of neural networks with Fortran derived types, whole-array arithmetic, and collective sum and broadcast operations to achieve parallelism. Second, I demonstrate the use of neural-fortran in an example of recognizing hand-written digits from images. Finally, I evaluate the computational performance in both serial and parallel modes. Ease of use and computational performance are similar to an existing popular machine learning framework, making neural-fortran a viable candidate for further development and use in production.
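The data-based parallelism described above can be sketched with the Fortran 2018 collective subroutines the abstract names. The following is a minimal illustration of the pattern, not neural-fortran's actual API; the variable names and the placeholder gradient computation are assumptions for the sake of the example.

```fortran
! Sketch of data-parallel SGD with Fortran 2018 collectives.
! Each image (parallel process) trains on its own shard of data;
! co_sum reduces the gradients so all images apply the same update.
program data_parallel_sgd
  implicit none
  real :: weights(100), gradient(100)
  real :: learning_rate = 0.01

  ! Placeholder for the local gradient computed from this image's
  ! shard of the training data (backpropagation omitted).
  gradient = 0.0

  ! Sum the gradients across all images, then average.
  call co_sum(gradient)
  gradient = gradient / num_images()

  ! Every image applies the identical update, keeping weights in sync.
  weights = weights - learning_rate * gradient
end program data_parallel_sgd
```

A `co_broadcast(weights, source_image=1)` call at startup would similarly ensure all images begin from identical initial weights.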

Locations

  • ACM SIGPLAN Fortran Forum
  • arXiv (Cornell University)

Similar Works

  • Deep Learning and Machine Learning with GPGPU and CUDA: Unlocking the Power of Parallel Computing (2024) - Ming Li, Ziqian Bi, Tianyang Wang, Yizhu Wen, Qian Niu, Junyu Liu, Benji Peng, Sen Zhang, Xiaoyong Pan, Jiawei Xu
  • How to Train Your Neural Network: A Comparative Evaluation (2021) - Shu-Huai Lin, Daniel Nichols, Siddharth Singh, Abhinav Bhatelé
  • StreamBrain: An HPC Framework for Brain-like Neural Networks on CPUs, GPUs and FPGAs (2021) - Artur Podobas, Martin Svedin, Steven W. D. Chien, Ivy Bo Peng, Naresh Balaji Ravichandran, Pawel Herman, Anders Lansner, Stefano Markidis
  • A framework for parallel and distributed training of neural networks (2017) - Simone Scardapane, Paolo Di Lorenzo
  • cuDNN: Efficient Primitives for Deep Learning (2014) - Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan D. Cohen, John C. Tran, Bryan Catanzaro, Evan Shelhamer
  • Survey on Large Scale Neural Network Training (2022) - Julia Gusak, Daria Cherniuk, Alena Shilova, Alexander Katrutsa, Daniel Bershatsky, Xunyi Zhao, Lionel Eyraud‐Dubois, Oleg Shlyazhko, Denis Dimitrov, Ivan Oseledets
  • Harnessing Manycore Processors with Distributed Memory for Accelerated Training of Sparse and Recurrent Models (2023) - Jan Finkbeiner, Thomas Gmeinder, Mark Pupilli, Alexander Titterton, Emre Neftci
  • Harnessing Manycore Processors with Distributed Memory for Accelerated Training of Sparse and Recurrent Models (2024) - Jan Finkbeiner, Thomas Gmeinder, Mark Pupilli, Alexander Titterton, Emre Neftci
  • A Survey and Empirical Evaluation of Parallel Deep Learning Frameworks (2021) - Daniel Nichols, Siddharth Singh, Shu-Huai Lin, Abhinav Bhatelé
  • dMath: A Scalable Linear Algebra and Math Library for Heterogeneous GP-GPU Architectures (2016) - Steven Eliuk, Cameron Upright, Anthony Skjellum
  • Machine Learning in Python: Main Developments and Technology Trends in Data Science, Machine Learning, and Artificial Intelligence (2020) - Sebastian Raschka, Joshua Patterson, Corey Nolet
  • Embarrassingly Parallel Independent Training of Multi-Layer Perceptrons with Heterogeneous Architectures (2022) - Felipe Farias, Teresa B. Ludermir, Carmelo Jose Albanez Bastos-Filho
  • MCTensor: A High-Precision Deep Learning Library with Multi-Component Floating-Point (2022) - Changyuan Yu, Went Guo, Jianan Canal Li, Tiancheng Yuan, Christopher De
  • DLL: A Blazing Fast Deep Neural Network Library (2018) - Baptiste Wicht, Jean Hennebert, Andreas Fischer
  • Benchmarking network fabrics for data distributed training of deep neural networks (2020) - Siddharth Samsi, Andrew Prout, Michael Jones, Andrew C. Kirby, Bill Arcand, Bill Bergeron, David Bestor, Chansup Byun, Vijay Gadepally, Michael E. Houle
  • Comparative Study: Standalone IEEE 16-bit Floating-Point for Image Classification (2023) - Ju‐Young Yun, Byungkon Kang, François Rameau, Zhoulai Fu

Cited by (14)

  • RoseNNa: A performant, portable library for neural network inference with application to computational fluid dynamics (2023) - Ajay Bati, Spencer H. Bryngelson
  • A perspective on machine learning methods in turbulence modeling (2021) - Andrea Beck, Marius Kurz
  • Productive Performance Engineering for Weather and Climate Modeling with Python (2022) - Tal Ben‐Nun, Linus Groner, Florian Deconinck, Tobias Wicky, Eddie C. Davis, Johann Dahm, Oliver D. Elbert, Rhea C. George, Jeremy McGibbon, Lukas Trümper
  • Implementation and Evaluation of a Machine Learned Mesoscale Eddy Parameterization Into a Numerical Ocean Circulation Model (2023) - Cheng Zhang, Pavel Perezhogin, Cem Gültekin, Alistair Adcroft, Carlos Fernandez‐Granda, Laure Zanna
  • Machine-learning-based models in particle-in-cell codes for advanced physics extensions (2022) - Chiara Badiali, Pablo J. Bilbao, F. Cruz, L. O. Silva
  • A Fortran-Keras Deep Learning Bridge for Scientific Computing (2020) - Jordan Ott, Mike Pritchard, Natalie Best, Erik Linstead, Milan Curcic, Pierre Baldi
  • Bridging observations, theory and numerical simulation of the ocean using machine learning (2021) - Maike Sonnewald, Redouane Lguensat, Daniel C. Jones, Peter Dueben, Julien Brajard, V. Balaji
  • Deep reinforcement learning for computational fluid dynamics on HPC systems (2022) - Marius Kurz, Philipp Offenhäuser, Dominic Viola, Oleksandr Shcherbakov, Michael Resch, Andrea Beck
  • The State of Fortran (2022) - Laurence Kedward, Bálint Aradi, Ondřej Čertík, Milan Curcic, Sebastian Ehlert, Philipp Engel, Rohit Goswami, Michael Hirsch, Asdrubal Lozada-Blanco, Vincent Magnin
  • fv3gfs-wrapper: a Python wrapper of the FV3GFS atmospheric model (2021) - Jeremy McGibbon, Noah Brenowitz, Mark Cheeseman, Spencer K. Clark, Johann Dahm, Eddie Davis, Oliver D. Elbert, Rhea George, Lucas Harris, Brian Henn
  • ParaDRAM: A Cross-Language Toolbox for Parallel High-Performance Delayed-Rejection Adaptive Metropolis Markov Chain Monte Carlo Simulations (2020) - Amir Shahmoradi, Fatemeh Bagheri
  • Toward Modern Fortran Tooling and a Thriving Developer Community (2021) - Milan Curcic, Ondřej Čertík, Brad Richardson, Sebastian Ehlert, Laurence Kedward, Arjen Markus, Ivan Pribec, Jérémie Vandenplas