Value-at-Risk Analysis for Measuring Stochastic Volatility of Stock Returns: Using GARCH-Based Dynamic Conditional Correlation Model

To assess the time-varying dynamics in value-at-risk (VaR) estimation, this study employs an integrated dynamic conditional correlation (DCC) and generalized autoregressive conditional heteroscedasticity (GARCH) approach on daily stock returns of emerging markets. Daily log-returns of six leading indices, namely KSE100, KSE30, and KSE-ALL from the Pakistan Stock Exchange and SSE180, SSE50, and SSE-Composite from the Shanghai Stock Exchange, over the period 2009–2019 are used in DCC-GARCH modeling. The joint DCC parameter estimates for the stock indices show that, even in highly volatile stock markets, the bivariate time-varying DCC model outperforms traditional VaR models. The parametric results of the DCC-GARCH model thus indicate its effectiveness in dynamic stock markets. This study helps stockbrokers and investors understand the actual behavior of stocks in dynamic markets. The results also provide better insights into forecasting VaR while considering the combined correlation effect of all stocks.


Introduction
Predicting the risk and return associated with a specific index or portfolio in dynamic markets has become a major challenge for investors. Higher market risk stems from unpredictability, or volatility, in stock prices, and high volatility in stock returns is typically driven by unstable political and economic conditions in a country (Afzal et al., 2019). Increasing market risk has created new challenges in finding effective ways to predict risk under dynamic conditions. Value-at-risk (VaR) is the most widely used standardized tool for measuring market risk (Jorion, 1996).
In dynamic markets, stock returns are not identically distributed, nor can future trends be predicted from history alone. Traditional volatility models, such as the historical simulation, variance-covariance, and Monte-Carlo simulation methods (Marshall & Siegel, 1997), therefore cannot estimate market risk accurately: they either rest on restrictive assumptions or suffer from underestimation and overestimation (Sampid & Hasim, 2018). Few tools have been developed that predict volatility correctly without such biases. This study therefore seeks an efficient stochastic model that yields better estimates of risk and return in dynamic stock markets, thereby supporting investors' interest in wealth creation. It suggests that dynamic conditional models can capture the volatility of stock returns without restrictive assumptions or systematic under- or overestimation of market risk, thereby strengthening investors' confidence. Modeling the dynamic correlation structure also gives insight into volatility clustering and synchronization across markets in financial series. Under this structure, dynamic conditional correlation (DCC) combined with generalized autoregressive conditional heteroscedasticity (GARCH) is an efficient and practicable way to capture market volatility and compare VaR forecasts. In terms of model assumptions, this approach predicts VaR using a time-varying rather than a constant correlation. Here, an integrated DCC-GARCH(1,1) model is used for VaR estimation and conditional correlation estimation. The findings suggest that adopting this integrated DCC-GARCH tool gives better insight into risk estimation under dynamic conditions.
Furthermore, this study contributes to the literature by introducing an effective method of risk estimation under dynamic conditions. The remainder of the article is organized as follows: section "Previous Research on VaR and Measurement Tools" describes the theoretical background of relevant studies; section "Data and Empirical Results" details the materials and methods; section "Key Findings" presents the key findings; and the last section offers conclusions, implications, and future research directions.

Previous Research on VaR and Measurement Tools
VaR is popular and widely adopted as a market risk analysis tool that measures the volatility of stock indices. It summarizes the maximum possible loss a financial asset can incur over a given period at a given confidence level. VaR analysis was first presented in 1993 in the report "The Practice and Rules of Derived Products" published by the Group of Thirty. In 1994, J.P. Morgan applied the VaR model to measure market risk in stock exchanges. Today, VaR has been adopted by various financial institutions, including banks, fund managers, insurance companies, and stockbrokers. A wide range of studies on VaR address its significance in volatility measurement, particularly for stocks (Ackermann et al., 1999; Beder, 1995; Carhart, 1997; Favre & Galeano, 2002; Fung & Hsieh, 1997; Marshall & Siegel, 1997). The Monte-Carlo simulation and variance-covariance methods assume that returns are identically distributed and independent of each other. Similarly, historical simulation relies on historic data, believing that the same trend will repeat in the future and yield the desired outcome (Hull & White, 1998).
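For concreteness, the historical-simulation method mentioned above can be sketched in a few lines: VaR is simply the empirical lower quantile of past returns, reported as a loss. The sketch below uses synthetic returns, not real index data, and assumes numpy is available.

```python
import numpy as np

def historical_var(returns, alpha=0.05):
    """One-day VaR by historical simulation: the empirical alpha-quantile
    of past returns, reported as a positive loss figure."""
    return -np.quantile(returns, alpha)

# Toy example with synthetic daily log-returns (illustrative only).
rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, size=2528)  # roughly ten years of trading days
var_5 = historical_var(returns, alpha=0.05)  # 5% one-day VaR
```

This is exactly the method the DCC-GARCH approach is later compared against: it treats all 2,528 observations as equally informative and ignores time-varying volatility.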
Traditionally, historical simulations, Monte-Carlo simulations, and variance-covariance methods assume that stock returns are normally distributed, implying that fluctuations in asset prices show persistent volatility over time. In real-life dynamic conditions, however, volatility is not persistent (Bansal et al., 2014). Recent studies have described practical VaR measurement models that better address the nonpersistent nature of volatility. Addressing non-normality in time-series volatility, Engle (1982) introduced the ARCH model. Building on it, Bollerslev (1986) presented the GARCH model with parameters (p, q), which makes volatility measurement easier than under ARCH.
Similarly, Füss et al. (2007), in their study on volatility measurement, suggested that GARCH-based VaR models are superior to and outperform traditional VaR estimation methods. Working on fund risk, Zhou et al. (2010) considered multiple distributions when assessing the effectiveness of the GARCH model over traditional VaR models; their results show that the generalized error distribution outperforms the t-distribution and the normal distribution. Furthermore, to overcome the weaknesses of asymmetric and long-memory volatility effects, several extensions of the GARCH family have been introduced, such as Exponential GARCH (EGARCH), Glosten-Jagannathan-Runkle GARCH (GJR-GARCH), Fractionally Integrated GARCH (FIGARCH), Fractionally Integrated Asymmetric Power ARCH (FIAPARCH), and Hyperbolic GARCH (HYGARCH). Unfortunately, this GARCH family has limitations in measuring the time-varying effect of volatility under dynamic conditions (Francq & Zakoïan, 2010; Yang, 2011).
Researchers have found that under dynamic conditions the correlations among stock indices are asymmetric across market movements and the tails of the return distributions are fat (Ang & Chen, 2002; Boyer et al., 1997; Kolari et al., 2008; Longin, 2000; Longin & Solnik, 2001; Tastan, 2006). Forecast returns therefore tend to be under- or overestimated when GARCH family models are used alone. To estimate VaR appropriately with time-varying effects in dynamic conditions, one solution is to apply the DCC model for volatility measurement, proposed by Engle in 2002 and further modified in 2006. Studies on DCC show that the DCC-GARCH model yields more accurate conditional variances (Engle, 2002; Tse & Tsui, 2002). Modeling the DCC structure provides insight into both markets' synchronization and volatility clustering in financial series. By combining conditional correlation with time-varying effects, the DCC model gives a better estimate of the dynamic correlation structure, capturing volatilities and forecasting returns more efficiently than other models (Celik, 2012).

Data and Empirical Results
The data used in this study consist of three stock indices of the Pakistan Stock Exchange (PSX), that is, KSE100, KSE30, and KSE-ALL, and three of the Shanghai Stock Exchange (SSE), that is, SSE180, SSE50, and SSE-Composite. A total of 2,528 daily log-returns per index are used as sample data, because an equal number of observations for each asset class is necessary for DCC modeling of conditional correlation (Asai & McAleer, 2009). Data were gathered from the official websites of PSX and SSE for the period 2009 to 2019. Table 1 shows the summary statistics of each index, the time frame of the research, and the total number of observations used. The distribution of the data is slightly skewed to the left; the negative skewness of all indices shows that negative returns are more likely than positive returns. The kurtosis is less than 3 for PSX, implying a platykurtic distribution, but greater than 3 for SSE, implying a leptokurtic distribution. Skewness and kurtosis are essential for volatility analysis, as kurtosis measures the fatness of a distribution's tails. Risk-averse investors always prefer a low-kurtosis distribution, that is, returns close to the distribution mean (Shanmugam & Chattamvelli, 2016). With positive skewness, high kurtosis can be acceptable because excessive negative returns are avoided and more positive returns are possible in the future; with negative skewness, investors can face extreme negative returns amplified by high excess kurtosis (Jondeau & Rockinger, 2003).
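As a quick illustration of these summary diagnostics, the sketch below computes sample skewness and Pearson kurtosis for a synthetic fat-tailed return series (placeholder data standing in for the PSX/SSE samples), assuming numpy and scipy are available:

```python
import numpy as np
from scipy.stats import kurtosis, skew

# Placeholder series standing in for an index's daily log-returns; a
# Student-t draw gives the fat tails observed in the SSE indices.
rng = np.random.default_rng(1)
r = rng.standard_t(df=5, size=2528) * 0.01

s = skew(r)
k = kurtosis(r, fisher=False)  # Pearson kurtosis; a normal distribution gives 3

# Leptokurtic (k > 3) means fatter tails than the normal distribution,
# the situation reported for SSE in Table 1.
leptokurtic = k > 3
```

Under the same convention, a value of k below 3, as reported for the PSX indices, would indicate a platykurtic distribution.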
The procedural steps for DCC-GARCH(1,1) are as follows: 1. The daily log-returns of each series are calculated as

R_t = ln(S_t) − ln(S_{t−1}),  (1)

where S_1 represents the price of stock 1, S_N the price of the Nth stock, t the time period, and R_t the log-return of a specific stock at time t. Using these log-returns, Figure 1 shows evidence of volatility clustering in the three selected stock indices of PSX and of SSE. Conditional volatility can therefore be modeled in light of this clustering.
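Step 1 can be sketched as follows, using a toy price series in place of the PSX/SSE indices:

```python
import numpy as np

def log_returns(prices):
    """Daily log-returns R_t = ln(S_t) - ln(S_{t-1}) for a price series."""
    prices = np.asarray(prices, dtype=float)
    return np.diff(np.log(prices))

prices = [100.0, 101.0, 99.5, 102.0]  # illustrative closing prices
r = log_returns(prices)               # three returns from four prices
```

A series of N closing prices yields N − 1 log-returns, which is why the 2,528 observations per index refer to returns rather than raw prices.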
2. Financial data are often nonstationary or non-normal; therefore, the Shapiro test is applied to each data set separately to check normality. The Shapiro test p-value is below 5% for every selected index, indicating that the data are not normally distributed (Hanusz & Tarasińska, 2015). The data are then normalized to mean M = 0 and variance = 1. On the normalized data set, the Shapiro test shows p-values above 5%, indicating that the data are now normally distributed around the mean and ready for risk modeling. 3. Next, the degree of volatility clustering is checked. The Ljung-Box test (Q_k) applied to the squared log-returns of each index indicates the presence of volatility clustering in each asset class (Peña & Rodríguez, 2002). The Lagrange-multiplier test (Q_k^r) also confirms the presence of an ARCH effect in the log-return series of each asset class (see Table 2). The Ljung-Box test checks for autocorrelation in the data up to a number of lags m; the purpose is to test whether the autocorrelations γ_1, . . . , γ_m of z_t are all 0.
The Ljung-Box test can be stated as the hypotheses H_0: γ_1 = . . . = γ_m = 0 (no autocorrelation up to lag m) against H_1: γ_k ≠ 0 for some k ≤ m. The test statistic, shown in Equation 2, is

Q(m) = n(n + 2) Σ_{k=1}^{m} ρ̂_k² / (n − k),  (2)

where n is the total number of samples used in this study and ρ̂_k is the lag-k sample autocorrelation. H_0 is rejected when Q(m) exceeds χ²_{1−α,m}, the (1 − α)-quantile of the chi-square distribution with m degrees of freedom. After testing the null hypothesis H_0 (no ARCH effect present), H_0 is rejected because the p-value is less than 5%; thus, H_1 is accepted, and an ARCH effect is present in the series (Burns, 2005). 4. The next step is to check the correlation pattern of returns. Figure 2 shows the autocorrelation function (ACF) of actual returns of PSX and SSE, and Figure 3 shows the ACF of squared returns of PSX and SSE. If the actual returns are serially uncorrelated, about 5% of the lags in the ACF plot are expected to fall outside the limits marked by the red-dotted lines (Finlay et al., 2011). In Figure 2, the lags falling outside the limits form no pattern and suggest a random walk in all the selected stock indices of PSX and SSE, so the actual returns are serially uncorrelated. In Figure 3, however, the lags falling outside the limits (red-dotted lines) do form a pattern (Madsen, 2007): the initial lags are large and the ACF then decays, so the squared returns are not uncorrelated. Squared returns can only be uncorrelated if the actual returns are serially independent, which is not the case here; the actual returns are dependent despite being serially uncorrelated. For this reason, correlation modeling is done with the DCC-GARCH model of Engle (2002). 5. Before applying the DCC volatility matrix, the GARCH means and variances must first be calculated (Sampid & Hasim, 2018).
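Steps 3 and 4 can be illustrated with a minimal Ljung-Box implementation applied to a simulated series that, like the indices studied here, has serially uncorrelated returns but autocorrelated squared returns. The GARCH parameter values below are illustrative assumptions, not the paper's estimates; numpy and scipy are assumed.

```python
import numpy as np
from scipy.stats import chi2

def ljung_box(x, m=10):
    """Ljung-Box Q(m) = n(n+2) * sum_k rho_k^2 / (n-k) and its
    chi-square p-value, testing autocorrelation up to lag m."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    denom = np.sum(xc ** 2)
    q = n * (n + 2) * sum(
        (np.sum(xc[k:] * xc[:-k]) / denom) ** 2 / (n - k)
        for k in range(1, m + 1))
    return q, chi2.sf(q, df=m)

# Simulated GARCH(1,1)-style series: volatility clustering as in Figures 2-3.
rng = np.random.default_rng(2)
n = 2528
r = np.empty(n)
s2 = 2e-5                       # start at the unconditional variance
for t in range(n):
    r[t] = np.sqrt(s2) * rng.standard_normal()
    s2 = 1e-6 + 0.10 * r[t] ** 2 + 0.85 * s2

q_raw, p_raw = ljung_box(r, m=10)      # raw returns: little autocorrelation
q_sq, p_sq = ljung_box(r ** 2, m=10)   # squared returns: strong ARCH effect
```

A small p-value on the squared returns flags the ARCH effect; the raw returns alone would not reveal it, which is the contrast Figures 2 and 3 display.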
Bollerslev (1986) proposed the GARCH model, which allows the conditional variance to depend on its own previous lags. The GARCH(1,1) model has the following form:

ζ_{u,t} = χ_u + α_{u,t},  α_{u,t} = λ_{u,t} ϑ_{u,t},  (4)

λ²_{u,t} = γ + α α²_{u,t−1} + β λ²_{u,t−1},  (5)

where in Equation 4 ζ_{u,t} represents the log-returns of daily stock prices, χ_u the conditional log-return mean, α_{u,t} the mean residuals, ϑ_{u,t} white noise with variance 1 and mean 0, and λ_{u,t} the conditional volatility series; α, β, and γ in Equation 5 are the key parameters of the GARCH(1,1) estimation, T represents the available sample size, and N the number of stocks. Thus, the covariance matrix of the DCC model at time t is

H_t = G_t P_t G_t,  (6)

and the DCC conditional correlation matrix is then P_t. In Equation 7,

G_t = diag(λ_{1,t}, . . . , λ_{N,t}),  (7)

G_t represents the diagonal matrix of the N conditional volatilities of the stock returns, λ_{u,t} being its (u,u)th component. Based on these assumptions, the DCC model can be expressed as

ϕ_t = (1 − θ_1 − θ_2) P + θ_1 ϕ_{t−1} + θ_2 ψ_{t−1},  (8)

where in Equation 8 P is the unconditional correlation matrix of ϑ_t, and θ_1 and θ_2 are positive real numbers satisfying 0 ≤ θ_1 + θ_2 < 1. ψ_{t−1} stands for the correlation matrix of returns depending on {ϑ_{t−1}, . . . , ϑ_{t−n}} for some integer n (Equation 9), where n can be viewed as a smoothing parameter: the larger the value of n, the smoother the correlational effect (Sampid & Hasim, 2018). Engle (2002) proposed a modified version of the DCC model in which the correlation dynamics of Equation 8 are defined as

Q_t = (1 − Ω_1 − Ω_2) Q̄ + Ω_1 ϑ_{t−1} ϑ′_{t−1} + Ω_2 Q_{t−1},  P_t = diag(Q_t)^{−1/2} Q_t diag(Q_t)^{−1/2},  (10)

where in Equation 10 Ω_1 and Ω_2 are positive real numbers with 0 < Ω_1 + Ω_2 < 1 and Q_t is a positive-definite matrix.
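A minimal sketch of the Engle (2002) correlation recursion in Equation 10, applied to stand-in standardized residuals (the ϑ_t of Equation 4). The weights and the common-factor structure below are illustrative assumptions, not the paper's estimates.

```python
import numpy as np

def dcc_correlations(eps, omega1=0.05, omega2=0.90):
    """Engle (2002) DCC recursion on standardized residuals eps (T x N):
    Q_t = (1 - w1 - w2)*Qbar + w1*e_{t-1} e_{t-1}' + w2*Q_{t-1},
    P_t = diag(Q_t)^(-1/2) Q_t diag(Q_t)^(-1/2)."""
    T, N = eps.shape
    Qbar = np.corrcoef(eps, rowvar=False)   # unconditional correlation matrix
    Q = Qbar.copy()
    P = np.empty((T, N, N))
    for t in range(T):
        d = 1.0 / np.sqrt(np.diag(Q))
        P[t] = Q * np.outer(d, d)           # renormalize into a correlation matrix
        e = eps[t][:, None]
        Q = (1 - omega1 - omega2) * Qbar + omega1 * (e @ e.T) + omega2 * Q
    return P

# Stand-in standardized residuals with a common factor, so the three
# hypothetical indices are positively correlated like KSE100/KSE30/KSE-ALL.
rng = np.random.default_rng(4)
T, N = 500, 3
f = rng.standard_normal((T, 1))
eps = 0.6 * f + 0.8 * rng.standard_normal((T, N))
P = dcc_correlations(eps)  # one conditional correlation matrix per day
```

Because each Q_t is a positively weighted sum of positive semi-definite matrices, the renormalized P_t is always a valid correlation matrix with unit diagonal.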
To renormalize the correlation matrix at each time t − 1, Equation 10 uses the residual ϑ_{t−1} (for more information, see Tsay, 2018). Table 3 shows the forecasted VaR of all three indices of each selected stock exchange at different quantiles of the standardized residuals calculated using the DCC-GARCH model; each quantile gives the probability level of the volatility scores of the three indices. The joint VaR at α = 5% is 1.38229 for PSX and 0.87883 for SSE, representing a combined portfolio result.
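To see how a joint VaR figure follows from the conditional covariance H_t = G_t P_t G_t of Equation 6, here is a hedged sketch with hypothetical volatilities, correlations, and portfolio weights (not the estimates behind Table 3), assuming normal standardized residuals:

```python
import numpy as np
from scipy.stats import norm

def portfolio_var(H, w, alpha=0.05):
    """One-day parametric VaR at level alpha for portfolio weights w under
    covariance matrix H, assuming normally distributed residuals."""
    sigma_p = np.sqrt(w @ H @ w)       # portfolio conditional volatility
    return -norm.ppf(alpha) * sigma_p  # positive number = potential loss

# Hypothetical three-index conditional state (illustrative values only).
vols = np.array([0.012, 0.015, 0.010])   # conditional volatilities (G_t diagonal)
P = np.array([[1.0, 0.8, 0.7],
              [0.8, 1.0, 0.6],
              [0.7, 0.6, 1.0]])          # conditional correlation matrix (P_t)
H = np.diag(vols) @ P @ np.diag(vols)    # H_t = G_t P_t G_t
w = np.full(3, 1 / 3)                    # equally weighted portfolio
var_5 = portfolio_var(H, w, alpha=0.05)  # joint 5% one-day VaR
```

Because the joint VaR uses the full correlation matrix, it is below the sum of the three stand-alone VaRs whenever correlations are below 1, which is the diversification effect the combined-portfolio figures in Table 3 reflect.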

Key Findings
Because the study aims to measure future market risk using VaR, volatility forecasts were computed to obtain accurate measures of stock returns. The realized and forecasted correlation results of the DCC-GARCH model show the forecast returns of the combined stocks. Using DCC, a key finding is that the forecasts are closer to realized values than those of traditional models. Figure 4 shows the realized conditional correlation between the selected indices of PSX and SSE over the period 2009 to 2019, and Figure 5 shows the forecasted correlation of each stock index separately. In all, 20 lags from the estimated correlation matrix and 10 lags from the forecasted matrix were selected to better present the conditional correlation between the stock indices. Cor_30.all is the correlation between the KSE30 and KSE-ALL indices, cor_KSE100.all between KSE100 and KSE-ALL, and cor_100.30 between KSE100 and KSE30. Similarly, cor-180.50 is the correlation between the SSE180 and SSE50 indices, cor-180.comp between SSE180 and SSE-Composite, and cor-50.comp between SSE50 and SSE-Composite.
The DCC model of Engle (2002) better estimates the time-varying correlation between asset classes. The main finding of this article is that, even in highly volatile stock markets, the bivariate time-varying dynamic conditional correlation model outperforms traditional models. Figure 6 compares how volatility is captured across the sample indices and endorses this finding: it shows, for each selected stock index, the volatility captured by the simple GARCH model versus the DCC model. The volatility captured by the GARCH(1,1) method is underestimated, whereas the volatility captured through the DCC model is addressed more accurately. The GARCH family models alone cannot capture the volatility effectively; the DCC model addresses it much more effectively, as the parameters it estimates indicate its effectiveness in the selected stock markets. Table 4 presents the parameters estimated by the DCC model for all the selected indices of PSX and SSE, including the p-value of each estimate. The parameters estimated by DCC-GARCH(1,1) in Table 4 are highly significant. The parameter β1 is significantly positive, linking the risk measures to their conditional variance and showing substantial positive autocorrelation of returns for all indices. The persistence measure α1 + β1 should be less than 1, but here it is almost equal to 1 for all stock indices of both PSX and SSE; this means the estimated process is close to an integrated DCC-GARCH process and thus near-nonstationary. More weight is given to the joint dcc_α1 and dcc_β1 parameters, as the individual parameters α1 and β1 belong to the univariate GARCH models.
In Table 4, dcc_α1 + dcc_β1 is less than 1, which satisfies the stationarity condition of the DCC model and indicates that no volatility clustering behavior remains after modeling the selected stock indices of PSX and SSE. The findings help stockbrokers and investors understand the actual behavior of stocks in dynamic markets. The results also provide better insight into forecasting VaR while considering the combined correlation effect of all stocks, because all stocks are presumed to have serial correlation and time-varying effects. The model therefore gives reasonable estimates to investors in a highly volatile market who want to know what correlation and volatility dynamics are present in their potential portfolios so as to maximize returns.

Conclusion
The nature of the model is critical in addressing the volatility of any stock market portfolio, and investors are always seeking better forecasting methods to select potential portfolios for their investment. This study finds that, even in highly volatile stock markets, the bivariate time-varying DCC model provides better performance than traditional models. The joint dcc_α1 and dcc_β1 parameters are more significant than the individual parameters α1 and β1 of the univariate GARCH model, indicating that no volatility clustering behavior remains. The volatility captured by the GARCH(1,1) method is underestimated, whereas the volatility captured through the DCC model is addressed more accurately. The GARCH family models alone cannot capture the volatility effectively; the DCC model addresses it much more effectively, as its estimated parameters indicate its effectiveness in the selected stock markets.
Thus, this study contributes to the body of knowledge by introducing an efficient method of risk estimation in dynamic markets. The model offers a new way of considering risk beyond traditional methods, and it is suggested that only dynamic models be used for risk estimation in dynamic markets. Investors and financial experts can likewise increase their market confidence by adopting this DCC-GARCH model for market risk estimation under dynamic conditions. Future work could estimate conditional correlation using traditional GARCH family models together with dependence measurement by copulas in volatile stock markets, where the impact of news is powerful, and could check the compatibility of copulas with individual stock dynamics.

Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.