Activation functions for deep neural networks

Safaa Echamsi, Aziza El Bakali Kassimi, Essadik Belouafi, Abdelhakim Alali, Abdelaziz Bouroumi, Asmae Guennoun

Abstract


Deep Neural Networks are connectionist models composed of multiple layers of neuron-like computational units that attempt to simulate two fundamental aspects of human intelligence: i) learning from examples, and ii) generalizing the learned knowledge and skills to new, unseen examples. Designing such a model for a real-world application requires several steps, including the choice of an adequate architecture in terms of the number of layers, as well as the size, the type, and the activation function of each layer. This paper investigates the impact of activation functions on the overall performance of the network, an impact that remained underestimated until the early 2010s. The paper provides a brief historical review of recent literature, along with practical recommendations for selecting the most appropriate activation function for each situation. To illustrate the studied problem, we also present experimental results showing that a shallow neural network with appropriate activation functions can be more efficient than a deeper network using traditional but inappropriate functions.
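As a point of reference for the abstract's discussion, the sketch below gives standard textbook definitions of some widely used activation functions and of the sigmoid's derivative, whose vanishing for large inputs is the classical argument against saturating activations in deep networks. These are generic illustrative definitions, not the specific functions or experimental setup used in the paper.

```python
import math

# Illustrative definitions of common activation functions; standard formulas
# from the literature, not the paper's specific experimental choices.

def sigmoid(x):
    """Classic saturating activation: maps the reals to (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    """Rectified Linear Unit: non-saturating for positive inputs."""
    return max(0.0, x)

def leaky_relu(x, alpha=0.01):
    """ReLU variant with a small slope alpha on negative inputs."""
    return x if x >= 0 else alpha * x

def sigmoid_grad(x):
    """Derivative sigmoid(x) * (1 - sigmoid(x)); vanishes for large |x|,
    which is one reason deep sigmoid networks can be slow to train."""
    s = sigmoid(x)
    return s * (1.0 - s)
```

For example, `sigmoid_grad(10.0)` is already below 1e-4, while the gradient of `relu` stays at 1 for any positive input, which illustrates why the choice of activation function can matter more than raw depth.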


Published: 2025-10-10

How to Cite this Article:

Safaa Echamsi, Aziza El Bakali Kassimi, Essadik Belouafi, Abdelhakim Alali, Abdelaziz Bouroumi, Asmae Guennoun, Activation functions for deep neural networks, Commun. Math. Biol. Neurosci., 2025 (2025), Article ID 124

Copyright © 2025 Safaa Echamsi, Aziza El Bakali Kassimi, Essadik Belouafi, Abdelhakim Alali, Abdelaziz Bouroumi, Asmae Guennoun. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Commun. Math. Biol. Neurosci.

ISSN 2052-2541

Editorial Office: [email protected]

