Strategy for Enhancing GRU-RNN Performance through Parameter Optimization

Authors

  • Hermansah, Universitas Riau Kepulauan

DOI:

https://doi.org/10.24036/mjmf.v3i1.41

Keywords:

Forecasting, Gated Recurrent Unit (GRU), Recurrent Neural Network (RNN), Activation Functions, Stochastic Gradient Descent (SGD)

Abstract

This study examines the selection of optimal parameters for the Gated Recurrent Unit-Recurrent Neural Network (GRU-RNN) model when forecasting inflation in Indonesia. Accurate forecasting requires careful tuning of model parameters, especially for time-series data, which may be linear or non-linear. The study evaluates several parameters: the learning rate, the number of epochs, the optimization method (Stochastic Gradient Descent (SGD) or Adaptive Gradient (AdaGrad)), and the activation function (Logistic, Gompertz, or Tanh). The results show that the combination of the SGD optimization method, the Logistic activation function, a learning rate of 0.05, and 450 epochs performs best, minimizing error and achieving high prediction accuracy. Compared with other forecasting models, namely Exponential Smoothing (ETS), Autoregressive Integrated Moving Average (ARIMA), the Feedforward Neural Network (FFNN), and the Recurrent Neural Network (RNN), the GRU-RNN model is clearly superior. In addition, the Logistic activation function proves more effective at maintaining stability and prediction accuracy, whereas AdaGrad yields lower performance. These findings underscore the GRU-RNN model's ability to handle non-linear time-series data and offer guidance for developing more accurate and efficient forecasting models.
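
The sketch below illustrates the reported best configuration, for orientation only. It assumes a Keras/TensorFlow implementation, a univariate sliding-window setup, a layer width of 32, and a synthetic stand-in series; none of these are taken from the paper. Only the optimizer (SGD), the logistic (sigmoid) activation, the learning rate (0.05), and the epoch count (450) come from the abstract.

```python
import numpy as np
import tensorflow as tf

WINDOW = 12  # assumed look-back length (illustrative, not from the paper)

# Synthetic stand-in for the inflation series used in the study.
rng = np.random.default_rng(0)
series = np.sin(np.linspace(0.0, 20.0, 200)) + rng.normal(0.0, 0.1, 200)

# Sliding windows of shape (samples, timesteps, features) and
# one-step-ahead targets.
X = np.stack([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])
y = series[WINDOW:]
X = X[..., np.newaxis]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 1)),
    # Logistic (sigmoid) activation, reported as the most stable choice.
    tf.keras.layers.GRU(32, activation="sigmoid"),
    tf.keras.layers.Dense(1),
])

# Best-performing combination per the abstract: SGD with learning rate
# 0.05 for 450 epochs; mean squared error as the loss is an assumption.
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.05), loss="mse")
model.fit(X, y, epochs=450, verbose=0)
print("in-sample MSE:", model.evaluate(X, y, verbose=0))
```

Note that Keras ships "sigmoid" (logistic) and "tanh" activations out of the box; a Gompertz activation, g(x) = exp(-exp(-x)), would need to be supplied as a custom function.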

Published

2025-06-30

How to Cite

Hermansah. (2025). Strategy for Enhancing GRU-RNN Performance through Parameter Optimization. Mathematical Journal of Modelling and Forecasting, 3(1), 16–24. https://doi.org/10.24036/mjmf.v3i1.41

Issue

Vol. 3 No. 1 (2025)

Section

Articles