question |
answer |
(Data) A panel is called balanced if all micro-units (cross-sectional units) have measurements in all periods.
|
|
TRUE, a balanced panel has an observation for every micro-unit in every time period; if some units are missing in some periods, the panel is unbalanced.
|
|
(Components of the regression model) In the model: yi = β1 + β2xi + ei, the variable x can be called a dependent variable.
|
|
FALSE, in this model y is the dependent variable, while x is the independent (explanatory) variable.
|
|
|
(Components of the regression model) In the model: yi = β1 + β2xi + ei, β1 is the slope.
|
|
FALSE, β1 is the intercept in this model, while β2 is the slope.
|
|
|
(Assumptions of the regression model) Multicollinearity of explanatory variables is one of the assumptions underlying a multiple regression model.
|
|
FALSE, multicollinearity is not an assumption of the regression model; the assumption is that there is no exact (perfect) multicollinearity among the explanatory variables, and strong but imperfect correlation between them is a data problem, not an assumption.
|
|
|
(The Gauss-Markov theorem) The Gauss-Markov theorem states that the OLS estimator is best because, under specific assumptions, it is unbiased.
|
|
FALSE, "best" in the Gauss-Markov theorem means smallest variance among all linear unbiased estimators; the OLS estimator is not called best because it is unbiased.
|
|
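A compact restatement of the theorem, in the notation used here: under the standard assumptions, for any other linear unbiased estimator b* of β2, var(b*) ≥ var(b2). "Best" therefore means smallest variance within the class of linear unbiased estimators, which is why unbiasedness alone does not make OLS best.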
(Ordinary least squares) OLS estimates are selected in such a way that the sum of residuals is the smallest.
|
|
FALSE, OLS minimizes the sum of the squared residuals, not just the sum of the residuals.
|
|
|
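A one-step check of why the squares matter: OLS minimizes S(b1, b2) = Σ(yi − b1 − b2xi)², and the first-order condition ∂S/∂b1 = −2Σ(yi − b1 − b2xi) = 0 forces Σêi = 0 whenever the model contains an intercept, so the plain (unsquared) sum of residuals is exactly zero rather than merely small.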
(Coefficient of determination) If the model does not contain an intercept parameter, SST ≠ SSR + SSE.
|
|
TRUE, the decomposition SST = SSR + SSE relies on the residuals summing to zero, which is guaranteed only when the model includes an intercept; without an intercept the equality generally fails.
|
|
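A short derivation of the role of the intercept: writing yi − ȳ = (ŷi − ȳ) + êi gives SST = SSR + SSE + 2Σ(ŷi − ȳ)êi, and the cross term vanishes only because the normal equations imply Σêi = 0 and Σxiêi = 0; the first of these holds only when the model contains an intercept.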
(Statistical tests) The level of significance of a test is the probability of rejecting the null hypothesis when it is true.
|
|
TRUE, the significance level α is the probability of a Type I error, i.e. rejecting a true null hypothesis.
|
|
(t-tests) When testing the null hypothesis H0: βk = c against the alternative hypothesis H1: βk > c, you should reject the null hypothesis if the test statistic t ≤ t(1−α; N−K).
|
|
FALSE, you reject the null hypothesis if the test statistic t is greater than or equal to the critical value t(1−α; N−K) in this one-tailed test, not less than.
|
|
|
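An illustration with hypothetical numbers: reject H0: βk = c in favour of H1: βk > c when t = (bk − c)/se(bk) ≥ t(1−α; N−K). For instance, with α = 0.05, N = 50 and K = 3, the critical value is t(0.95; 47) ≈ 1.68, so a computed t of 2.1 leads to rejection while a t of −0.4 does not.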
(Prediction) For a simple regression model: the variance of the forecast error depends on the variation in the explanatory variable.
|
|
TRUE, the variance of the forecast error, var(f) = σ²[1 + 1/N + (x0 − x̄)²/Σ(xi − x̄)²], depends on the variation in the explanatory variable through the term Σ(xi − x̄)².
|
|
(F-tests) In general, an F-test statistic value depends on restricted estimation results only.
|
|
FALSE, The F-test statistic depends on both restricted and unrestricted models since it compares the two.
|
|
|
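For reference, the statistic makes the dependence on both fits explicit: F = [(SSE_R − SSE_U)/J] / [SSE_U/(N − K)], where SSE_R is the sum of squared residuals from the restricted model, SSE_U from the unrestricted model, J is the number of restrictions, and K is the number of parameters in the unrestricted model.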
(Restricted estimation) The restricted least squares estimator stays unbiased, even if the constraints that are imposed are false.
|
|
FALSE, If the imposed constraints are incorrect, the estimator will generally be biased because the true model is mis-specified.
|
|
|
(Nonlinear models) In the log-log model the slope is constant.
|
|
FALSE, In a log-log model, the elasticity (percentage change in y with respect to percentage change in x) is constant, but the slope itself is not constant.
|
|
|
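A one-line check: differentiating ln(y) = β1 + β2 ln(x) gives dy/dx = β2(y/x), which varies along the curve, while the elasticity (dy/dx)(x/y) = β2 is the constant.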
(The Jarque-Bera test) The Jarque-Bera test statistic depends on skewness and kurtosis of the data.
|
|
TRUE, the Jarque-Bera statistic is JB = (N/6)[S² + (K − 3)²/4], where S is the skewness and K the kurtosis of the data.
|
|
(Specification errors) The omitted-variable bias occurs if the omitted variable is correlated with the variables included in the model.
|
|
TRUE, omitting a relevant variable biases the OLS estimators of the remaining coefficients when the omitted variable is correlated with the included regressors.
|
|
(Collinearity) One of the consequences of strong linear dependencies between explanatory variables is that the standard errors are small.
|
|
FALSE, Strong multicollinearity actually leads to inflated (large) standard errors, making it harder to detect significant relationships.
|
|
|
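A worked equation for the case of two explanatory variables makes the effect visible: in yi = β1 + β2xi2 + β3xi3 + ei, var(b2) = σ² / [(1 − r23²) Σ(xi2 − x̄2)²], where r23 is the sample correlation between x2 and x3; as r23 approaches 1, the variance, and hence the standard error, of b2 grows without bound.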
(Heteroskedasticity) The Breusch-Pagan test uses a variance function including all explanatory variables from the model under investigation.
|
|
|
|
|
(Dummy variables) A slope-indicator variable allows for a change in the intercept.
|
|
FALSE, A slope-indicator variable allows for a change in the slope, not the intercept. It interacts with an explanatory variable to change the slope for different groups.
|
|
|
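A minimal example of the distinction, with di an indicator variable: in yi = β1 + β2xi + γ(di·xi) + ei the slope is β2 when di = 0 and β2 + γ when di = 1, while the intercept β1 is common to both groups; shifting the intercept instead requires an intercept indicator, as in yi = β1 + δdi + β2xi + ei.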
(Dummy variables) The value 0 for a dummy variable defines the reference group, or base group.
|
|
TRUE, observations for which the dummy equals 0 form the reference (base) group, against which the effect captured by the dummy is measured.
|
|
(Autocorrelation) One consequence of autocorrelated errors is that the least squares estimator is no longer best.
|
|
TRUE, with autocorrelated errors the least squares estimator remains unbiased but no longer has the smallest variance, so it is no longer best (BLUE).
|
|
(Types of data) Annual profit for each of 400 randomly chosen micro enterprises from Poland for the year 2022 is an example of cross-sectional data.
|
|
TRUE, the data are observations on many units (400 enterprises) at a single point in time (the year 2022), which makes them cross-sectional.
|
|
(Components of the regression model) Regressand can be otherwise referred to as an explanatory variable.
|
|
FALSE, Regressand refers to the dependent variable, not the explanatory variable
|
|
|
(Components of the regression model) In the model: yi = β1 + β2xi + ei, β1 and β2 are random variables.
|
|
FALSE, β1 and β2 are parameters, not random variables
|
|
|
(Assumptions of the regression model) Homoskedasticity of the error term is one of the assumptions underlying a multiple regression model.
|
|
TRUE, homoskedasticity, i.e. a constant variance of the error term, is one of the standard assumptions of the multiple regression model.
|
|
(The Gauss-Markov theorem) The Gauss-Markov theorem implies that the OLS estimator is better than any nonlinear unbiased estimator.
|
|
FALSE, The Gauss-Markov theorem only applies to linear unbiased estimators, and it does not state that OLS is better than any nonlinear estimator
|
|
|
(Ordinary least squares) Standard errors are square roots of estimated variances of the OLS estimators.
|
|
TRUE, the standard error of an OLS estimator is the square root of its estimated variance.
|
|
(Coefficient of determination) The value of R2 can decrease if we add an insignificant explanatory variable to the model.
|
|
FALSE, the value of R2 cannot decrease when an explanatory variable is added, even if it is insignificant.
|
|
|
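A minimal Python sketch of the point above, using simulated data (all names and numbers are illustrative):

import numpy as np

rng = np.random.default_rng(0)
n = 100
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)
z = rng.normal(size=n)                        # unrelated, insignificant regressor

def r_squared(X, y):
    # OLS via least squares, then R2 = 1 - SSE/SST
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ b
    sst = (y - y.mean()) @ (y - y.mean())
    return 1.0 - (e @ e) / sst

X_small = np.column_stack([np.ones(n), x])
X_big = np.column_stack([np.ones(n), x, z])
# The larger model can always reproduce the smaller fit by setting the extra
# coefficient to zero, so its R2 is at least as high:
print(r_squared(X_small, y) <= r_squared(X_big, y))   # True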
(Confidence intervals) For a given dataset and model, a 99% interval estimate of a parameter of the model is wider than a 95% interval.
|
|
TRUE, a 99% confidence level requires a larger critical value than a 95% level, so for the same data and model the 99% interval estimate is wider.
|
|
(t-tests) Using a t-test we can test whether all the variables in the multiple regression model are jointly insignificant.
|
|
FALSE, A t-test tests the significance of individual variables, while an F-test is used to test whether all variables in the model are jointly insignificant
|
|
|
(Prediction) For a simple regression model: the variance of the forecast error depends on the value of the explanatory variable used to compute the prediction.
|
|
TRUE, the forecast error variance contains the term (x0 − x̄)², so it depends on the value x0 of the explanatory variable used to compute the prediction.
|
|
(Testing) In an F-test a p-value of 0.02 leads to the rejection of the null hypothesis at the 5% significance level.
|
|
TRUE, the p-value 0.02 is smaller than the significance level 0.05, so the null hypothesis is rejected.
|
|
(Scaling the variables) In the simple regression model: if the scale of y and x is changed by the same factor then the estimated intercept will change.
|
|
TRUE, if y and x are both scaled by the same factor c, the slope estimate is unchanged but the estimated intercept is multiplied by c.
|
|
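A two-line derivation, assuming y and x are both multiplied by the same constant c: with y* = cy and x* = cx, b2* = cov(x*, y*)/var(x*) = c²cov(x, y)/(c²var(x)) = b2, while b1* = ȳ* − b2*x̄* = c(ȳ − b2x̄) = c·b1, so the slope estimate is unchanged and the estimated intercept is scaled by c.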
(Nonlinear models) In the model ln(yi) = β1 + β2ln(xi) + ei, the parameter β2 is the elasticity.
|
|
TRUE, in the log-log model β2 = dln(y)/dln(x) is the elasticity of y with respect to x, and it is constant.
|
|
(The Jarque-Bera test) The null hypothesis in the Jarque-Bera test concerns the normal distribution of the variable being tested.
|
|
TRUE, the null hypothesis of the Jarque-Bera test is that the tested variable is normally distributed (skewness 0 and kurtosis 3).
|
|
(Specification errors) Including some unnecessary regressors in the multiple regression model produces biased estimators of the coefficients of the regressors that belong in the equation.
|
|
FALSE, including unnecessary regressors increases the variances of the estimators but does not bias the estimators of the coefficients of the regressors that belong in the equation.
|
|
|
(Multicollinearity) It is not possible to estimate the model by least squares when there is exact multicollinearity.
|
|
TRUE, under exact multicollinearity the matrix X'X is singular, so the least squares estimates cannot be computed.
|
|
(Model selection) The AIC would choose, from models with the same sum of squared residuals, the model with the smallest number of parameters.
|
|
TRUE, the AIC penalizes models for the number of parameters, so when the sum of squared residuals is the same across models, only the penalty term differs and the model with the fewest parameters has the smallest AIC.
|
|
|
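A short check, using the common formulation AIC = ln(SSE/N) + 2K/N: if two models have the same SSE, their AIC values differ only through the penalty term 2K/N, so the model with the smaller number of parameters K has the smaller AIC and is the one selected.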
(Heteroskedasticity) Heteroskedasticity tests include: the Breusch-Pagan test and the Durbin-Watson test.
|
|
FALSE, The Durbin-Watson test is for autocorrelation, not heteroskedasticity. Breusch-Pagan is a test for heteroskedasticity
|
|
|
(Heteroskedasticity) One consequence of heteroskedasticity is that the usual standard errors are incorrect and should not be used.
|
|
TRUE, under heteroskedasticity the usual formula for the variances of the OLS estimators is wrong, so the usual standard errors are incorrect and robust standard errors should be used instead.
|
|
(Dummy variables) A dummy variable trap means that the model cannot be estimated using ordinary least squares because of an incorrect use of indicator variables.
|
|
TRUE, the dummy variable trap arises when a complete set of indicator variables is included together with an intercept, creating exact collinearity, so the model cannot be estimated by ordinary least squares.
|
|
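A minimal Python sketch of the trap (hypothetical data, purely illustrative): including an intercept together with a complete set of group indicators makes the columns of the design matrix exactly collinear, so X'X is singular and OLS has no unique solution.

import numpy as np

n = 6
d = np.array([0, 0, 0, 1, 1, 1])              # group indicator
X = np.column_stack([np.ones(n), d, 1 - d])   # intercept plus dummies for BOTH groups
# The columns satisfy d + (1 - d) = intercept column: exact collinearity
print(np.linalg.matrix_rank(X))               # 2, not 3
print(np.linalg.det(X.T @ X))                 # ~0: X'X is singular
# Dropping one dummy (the base group) removes the trap:
X_ok = np.column_stack([np.ones(n), d])
print(np.linalg.matrix_rank(X_ok))            # 2 = number of columns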