The empirical likelihood method (ELM) was introduced by A. B. Owen in the early 1990s for testing hypotheses. It is a nonparametric method that uses the data directly to conduct statistical tests and to construct confidence intervals and regions. Because of its distribution-free property and its generality, it has been studied extensively and applied widely across statistics. Many classical test statistics exist, such as the Cramér-von Mises (CM) statistic, the Anderson-Darling statistic, and the Watson statistic, to name a few; however, none is universally most powerful. This thesis is dedicated to extending the ELM to several hypothesis-testing problems. First, we focus on testing the fit of distributions. Based on the CM statistic, we propose a novel jackknife empirical likelihood (JEL) test via estimating equations for assessing goodness of fit. The proposed test allows one to add further relevant constraints so as to improve power, and the idea generalizes to other classical test statistics. Second, to test the distribution of the errors arising from a statistical model (e.g., a regression model), we introduce the JEL idea to the regression setting and construct confidence regions that enjoy the distribution-free limiting chi-square property. Third, an ELM based on weighted score equations is proposed for constructing confidence intervals for the coefficient in the simple bilinear model. The effectiveness of all proposed methods is demonstrated through extensive simulation studies.
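To make the jackknife-plus-empirical-likelihood mechanics concrete, the sketch below is a minimal illustration, not the thesis's exact procedure: it forms jackknife pseudo-values of the CM statistic and applies Owen's empirical likelihood for a mean to them, yielding a Wilks-type chi-square(1) calibration. The function names (cvm_stat, jackknife_pseudo_values, el_log_ratio) and the single-constraint setup are assumptions for illustration; the thesis's estimating equations and added power-improving constraints are not reproduced here.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm


def cvm_stat(x, cdf):
    """Cramer-von Mises statistic W_n^2 for H0: the data follow `cdf`."""
    n = len(x)
    u = np.sort(cdf(x))
    i = np.arange(1, n + 1)
    return np.sum((u - (2.0 * i - 1.0) / (2.0 * n)) ** 2) + 1.0 / (12.0 * n)


def jackknife_pseudo_values(x, stat):
    """Pseudo-values V_i = n*T_n - (n-1)*T_{n-1}^{(-i)} (Quenouille/Tukey)."""
    n = len(x)
    t_full = stat(x)
    return np.array([n * t_full - (n - 1) * stat(np.delete(x, i))
                     for i in range(n)])


def el_log_ratio(v, mu):
    """-2 log empirical likelihood ratio for E[V] = mu (Owen's EL for a mean)."""
    z = v - mu
    if z.min() >= 0.0 or z.max() <= 0.0:
        return np.inf  # mu outside the convex hull of the pseudo-values
    # The Lagrange multiplier solves sum_i z_i / (1 + lam * z_i) = 0, with
    # 1 + lam * z_i > 0 for all i, i.e. lam in (-1/max(z), -1/min(z)).
    eps = 1e-10
    lam = brentq(lambda l: np.sum(z / (1.0 + l * z)),
                 (-1.0 + eps) / z.max(), (-1.0 + eps) / z.min())
    return 2.0 * np.sum(np.log1p(lam * z))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=200)
    v = jackknife_pseudo_values(x, lambda s: cvm_stat(s, norm.cdf))
    # Wilks-type calibration: under H0, the JEL ratio evaluated at the null
    # mean of the CM statistic (1/6 for the limiting distribution) is
    # compared with a chi-square(1) critical value.
    print(el_log_ratio(v, 1.0 / 6.0))
```

In this skeleton the test rejects when the JEL ratio exceeds the chi-square(1) quantile; the approach described in the abstract augments such a single moment constraint with additional estimating equations, which is where the power gains are sought.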