Theses and Dissertations collection from the Faculty of Physical Sciences


Recent Submissions

Now showing 1 - 20 of 36
  • Item
    BOUNDARY VALUE METHODS FOR NUMERICAL SOLUTION OF FOURTH ORDER PROBLEMS IN ORDINARY DIFFERENTIAL EQUATIONS
    (UNIVERSITY OF ILORIN, 2018-02) SALAMI, Adesina Jimoh
    Many real-life situations can be modelled mathematically as Ordinary Differential Equations (ODEs) or Partial Differential Equations (PDEs). Problems arising from these are either of Initial Value Problem (IVP) or Boundary Value Problem (BVP) types in the case of ODEs, and Initial Boundary Value Problems (IBVPs) in the case of PDEs. As PDEs are more difficult to solve, techniques have been developed to transform them into ODEs of BVP form. For higher order differential equations, solutions are mainly obtained numerically by reducing them to equivalent first order systems. However, this is not considered efficient if a direct method can be found for their solution. The main focus of the present study is therefore the development, analysis and application of a class of Boundary Value Methods (BVMs) for the direct solution of fourth order ODEs of both Initial Value and Boundary Value types. A BVM is a Linear Multistep Method (LMM) coupled with boundary conditions. BVMs also have the advantage of being self-starting vis-a-vis many other numerical methods. Therefore, the objectives of this study were to: (i) develop polynomial-fitted BVMs for solving fourth order ODEs; (ii) develop trigonometrically-fitted block BVMs for solving oscillatory fourth order ODEs; (iii) analyse the basic properties of the methods developed, including zero-stability, consistency and convergence; (iv) implement the methods on specific fourth order ODEs; and (v) compare the performance of the proposed methods with those of existing ones. The problems considered in this work were the IVP: $y^{iv} = f(x, y, y', y'', y''')$, $a \le x \le b$, $y(a) = y_0$, $y'(a) = y_0'$, $y''(a) = y_0''$, $y'''(a) = y_0'''$, and the BVP: $y^{iv} = f(x, y, y', y'', y''')$, $a \le x \le b$, $y(a) = A_1$, $y'(a) = A_2$, $y''(b) = B_1$, $y'''(b) = B_2$. Trial solutions of the type $U(x) = \sum_{r=0}^{p+q-1} a_r x^r \simeq y(x)$ for the polynomial-fitted BVMs and $U(x) = \sum_{r=0}^{p+q-3} a_r x^r + a_{k+3}\cos\omega x + a_{k+4}\sin\omega x \simeq y(x)$ for the trigonometrically-fitted BVMs were used, where $a_r$ are uniquely determined coefficients, $\omega$ is the frequency, and p and q are the numbers of interpolation and collocation points, respectively. Interpolation and collocation were achieved through the sets of equations $u(x_n) = y_n,\ u(x_{n+1}) = y_{n+1},\ \ldots,\ u(x_{n+k}) = y_{n+k}$ and $u^{iv}(x_n) = f_n,\ u^{iv}(x_{n+1}) = f_{n+1},\ \ldots,\ u^{iv}(x_{n+k}) = f_{n+k}$. This system of equations was solved with Mathematica 8.0 for the coefficients $a_r$, $r = 0(1)(k+\dots)$. Substituting the resulting $a_r$'s into the trial solution yielded the desired BVMs. The findings of the study were the: derivation of BVMs with polynomial basis; derivation of BVMs with trigonometric basis; demonstration that the derived methods are consistent, zero-stable and convergent; proposal of BVMs that can handle both initial and boundary value problems; and confirmation that the methods compare favourably in terms of accuracy with existing methods. The study concluded that a class of BVMs with polynomial and trigonometric bases was developed and successfully implemented on stiff and non-stiff problems in fourth order ODEs. It is therefore recommended for application in determining the solution of real-life problems leading to IVPs and BVPs in fourth order ODEs.
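    To make the interpolation-collocation construction concrete, the sympy sketch below derives a continuous scheme for y'''' = f and reads off a discrete formula from it. The choice of four interpolation points and five collocation points, the grid and all symbol names are assumptions made purely for illustration and are not taken from the thesis.

# A minimal sketch of the interpolation-collocation derivation for y'''' = f(x, y, y', y'', y'''),
# assuming (for illustration only) p = 4 interpolation points and q = 5 collocation points.
import sympy as sp

h = sp.symbols('h', positive=True)
x = sp.symbols('x')
a = sp.symbols('a0:9')                    # coefficients a_0, ..., a_8 of the trial solution
y = sp.symbols('y_n0:4')                  # y_n, ..., y_{n+3} (interpolation data)
f = sp.symbols('f_n0:5')                  # f_n, ..., f_{n+4} (collocation data)

U = sum(a[r] * x**r for r in range(9))    # trial solution of degree p + q - 1 = 8
U4 = sp.diff(U, x, 4)

eqs  = [sp.Eq(U.subs(x, j * h), y[j]) for j in range(4)]    # interpolation at x_n, ..., x_{n+3}
eqs += [sp.Eq(U4.subs(x, j * h), f[j]) for j in range(5)]   # collocation at x_n, ..., x_{n+4}

sol = sp.solve(eqs, a)                    # the uniquely determined coefficients a_r
Usol = U.subs(sol)

# Evaluating the continuous scheme at the next grid point gives one discrete formula of the method
y_next = sp.simplify(Usol.subs(x, 4 * h))
print(sp.expand(y_next))

    Further members of a block method would follow in the same spirit by evaluating the continuous scheme and its derivatives at the remaining grid points.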
  • Item
    RUIN PROBABILITIES FOR SOME COLLECTIVE RISK RESERVE PROCESSES WITH STOCHASTIC INVESTMENT AND INFLATION
    (UNIVERSITY OF ILORIN, 2018-01) OSENI, BAMIDELE MUSTAPHA
    Basically, insurance firms collect premiums and pay claims. To avoid bankruptcy in a highly competitive market, insurance companies engage in several investment portfolios, which can be classified as investments with either fixed or stochastic returns but not both. Companies engaging in these investments need to monitor them and re-assess their strategy periodically in order not to go bankrupt. This study provided a model for the risk reserve process of an insurance firm involved in both investment portfolios at the same time, as against what is obtainable in several other works, and investigated the ruin model through computation of ruin probabilities. The specific objectives were to: (i) develop a collective risk model with stochastic investment and inflation; (ii) determine the ruin probabilities for the model; (iii) compare various cases of the model and investigate the effect of stochastic investment and inflation; and (iv) apply the model to real-life data. A collective risk model representing situations where ruin is caused by claims and/or investments was formulated. The model was converted into differential equations, which were resolved using the theory of confluent hypergeometric functions. The R statistical software was used with the "fAsian Options" package to generate results. These results were generated at varying values of capital, claim size, premium rate, claim arrival rate, investment, interest, drift and volatility. The following were the findings of the study: 1. A collective risk model with stochastic investment and inflation was developed and resolved into ruin probabilities. 2. The results show that ruin probabilities for either claims or investments decrease as capital increases, irrespective of the values of premium rate, average claim size, arrival rate of claims and returns from investments. 3. Using various cases, ruin probabilities decrease when average claim size, claim arrival rate, volatility and fraction of investments are increased. 4. Ruin probability attributable to claims was found to be constant when the claim arrival rate is set to zero while other parameters are kept constant. 5. Ruin probabilities for the model are independent of inflation and decrease as the fraction of investments in the risky asset decreases. The lowest value was obtained when there is no investment with stochastic returns, while the highest value was obtained when all investments are assumed to be with stochastic returns. 6. The results from real-life data were comparable to the findings from empirical data. It can be concluded from this study that investing in markets with stochastic returns is comparatively more risky than investing part or all of the resources in a fixed-return market. In particular, the lowest ruin probability from investment was obtained when all resources were placed in markets with fixed returns. It is therefore recommended to invest in markets with both fixed and stochastic returns. This study also showed that ruin probabilities were independent of inflation.
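    The thesis obtains its ruin probabilities analytically via confluent hypergeometric functions; the sketch below only illustrates the underlying reserve process by naive Monte Carlo (premiums, compound Poisson claims and a stochastic investment return), with every parameter value chosen arbitrarily for illustration rather than taken from the study.

# Naive Monte Carlo estimate of a finite-horizon ruin probability for a reserve process
# with premium income, compound Poisson claims and a stochastic (Brownian) investment return.
import numpy as np

rng = np.random.default_rng(1)

def ruin_probability(u=10.0, c=1.5, lam=1.0, mean_claim=1.0,
                     mu=0.05, sigma=0.2, T=50.0, dt=0.01, n_paths=2000):
    steps = int(T / dt)
    ruined = 0
    for _ in range(n_paths):
        R = u
        for _ in range(steps):
            claim = rng.exponential(mean_claim) if rng.random() < lam * dt else 0.0
            # premium income + stochastic return on the current reserve - claims
            R += c * dt + R * (mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()) - claim
            if R < 0:
                ruined += 1
                break
    return ruined / n_paths

print(ruin_probability())   # decreases as the initial capital u increases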
  • Item
    MIXED VARIABLE DISCRIMINANT ANALYSIS WITH ZERO CELL FREQUENCY
    (UNIVERSITY OF ILORIN, 2018-06) MBAEYI, GEORGE CHINANU
    When the available data for discriminant analysis are of the mixed variable type, the common procedure assigns codes to the possible states of the discrete variables and proceeds with the analysis as if all data are continuous. This may lead to loss of information. The Location Model (LM), proposed by Olkin and Tate and developed by Krzanowski, combines these two data types. The problems of the large number of parameters to be estimated when discrete variables are many, the inability to perform the analysis when one or more cells have zero frequency, and the limitation on the number of discrete variables that can be handled are some of the disadvantages of the LM. Therefore, the aim of this study is to propose a modification of the LM, called the Modified Location Model (MLM), particularly for when one or more cells of the resulting contingency table have zero frequency. The objectives were to: (i) propose a model, the MLM, for developing a discrimination procedure for mixed variables; (ii) derive an estimator of the variance-covariance matrix for the mixed variable case; (iii) derive an estimator of the vector of means and cell probabilities in the presence of empty cell(s); and (iv) compare the proposed procedure with two existing methods, namely, the LM and the Fisher Linear Discriminant Function (FLDF). The MLM procedure was obtained by first estimating the variance-covariance matrices of each cell of the two groups. When one or more cells have zero frequency, it uses the Independent Binary Model (IBM) to estimate cell probabilities, the vector of means and the variance-covariance matrix for the empty cell(s). Simulated data were analyzed for several combinations of numbers of discrete and continuous variables, including states within variables, that were cross-classified yielding some empty cells. Error rates, sensitivity and specificity measures were used as performance criteria. Three sets of real-life data were used to validate the results obtained from the simulation study. Findings from this study were that: i. an MLM procedure for mixed variables in discriminant analysis was obtained; ii. a procedure for estimating the variance-covariance matrix when there are empty cells was obtained; iii. the IBM, for estimating cell probabilities and the mean vector when there are empty cells, was obtained; iv. the proposed MLM gave higher classification accuracy than the LM over all cases considered; v. the MLM was better than both the FLDF procedure and the LM for small sample sizes in terms of classification accuracy; vi. the MLM and the FLDF procedure performed closely and better than the LM in terms of specificity and sensitivity; and vii. the MLM performed better than both the LM and the FLDF procedure when validated with real-life data. The study concluded that the proposed Modified Location Model was feasible, applicable and performed better than both the FLDF and LM procedures based on error rate, when the available data for discriminant analysis are of the mixed variable type. The MLM is useful when one or more unique response patterns of the discrete variables have empty/zero frequency. It is therefore recommended for discriminant analysis of mixed variables, especially when there are many discrete variables.
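    For orientation, the sketch below implements the classical location-model allocation rule (Krzanowski's formulation with a common covariance matrix), not the thesis's Modified Location Model; the cell means, covariance, cell probabilities and the observation are illustrative assumptions.

# Classical location-model allocation: within the cell m determined by the discrete variables,
# assign the continuous vector y to group 1 if the linear discriminant score exceeds log(p_2m / p_1m).
import numpy as np

def lm_allocate(y, mu1_m, mu2_m, Sigma, p1_m, p2_m):
    d = np.linalg.solve(Sigma, mu1_m - mu2_m)          # Sigma^{-1} (mu_1m - mu_2m)
    score = d @ (y - 0.5 * (mu1_m + mu2_m))
    return 1 if score >= np.log(p2_m / p1_m) else 2

# toy example for one cell of the contingency table
Sigma = np.array([[1.0, 0.3], [0.3, 1.0]])
mu1_m = np.array([1.0, 0.5])
mu2_m = np.array([-0.5, 0.0])
print(lm_allocate(np.array([0.8, 0.4]), mu1_m, mu2_m, Sigma, p1_m=0.4, p2_m=0.3))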
  • Item
    SOME MODIFIED FACTOR-TYPE ESTIMATORS FOR POPULATION MEAN IN SAMPLE SURVEY
    (UNIVERSITY OF ILORIN, 2018) AUDU, AHMED
    Factor-type estimators of population mean are applicable where the correlation between the study and auxiliary variables is positive or negative. Their efficiency depends on the optimum estimate of the value of a positive real number d, which is usually a function of the study and auxiliary variables. However, existing factor-type estimators are found not to be defined for some values of d, and some have lower precision for a positive correlation coefficient. This study aimed at modifying some conventional factor-type estimators under single-phase and two-phase sampling schemes by incorporating more information on auxiliary variables, such as the coefficients of skewness, kurtosis, variation and standard deviation, to produce more efficient ones. The specific objectives of this study were to: (i) obtain four modified factor-type estimators under single-phase and two-phase sampling; (ii) derive the biases and Mean Square Errors (MSEs) of these modified factor-type estimators; (iii) derive the conditions under which these modified estimators will be more efficient than the conventional ones; (iv) determine the relative efficiency of these modified estimators using both real-life and simulated data; and (v) determine the robustness of these modified factor-type estimators under super-population and non-response models. The four modified estimators under the single-phase and two-phase sampling schemes were constructed by incorporating additional information on the auxiliary variables, namely the coefficients of skewness, kurtosis, variation and standard deviation. The findings were: i. four modified factor-type estimators for the population mean were obtained; ii. the biases and MSEs of these modified factor-type estimators were theoretically and empirically derived; iii. the conditions under which these modified estimators are more efficient were derived; iv. using both real-life and simulated data, the modified factor-type estimators were found to have lower Mean Square Errors and the highest relative efficiency, hence to be more efficient than the conventional ones; and v. the four modified estimators demonstrated a high level of robustness under super-population and non-response models in comparison with the conventional ones. The study concluded that, under the derived conditions (i)-(iv) involving the correlation coefficients between x and y, x and z, and y and z, these modified factor-type estimators are better than the existing ones. Therefore, these modified estimators are recommended for use in estimating the finite population mean in sample surveys.
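    The modified estimators themselves are defined in the thesis; as a generic illustration of how auxiliary-variable estimators of the mean gain or lose efficiency with the sign of the correlation, the sketch below compares the sample mean with the classical ratio and product estimators by simulation. The population, sample sizes and the estimators shown are illustrative assumptions and are not the thesis's factor-type estimators.

# Empirical MSE of the sample mean, the ratio estimator ybar * Xbar / xbar and the
# product estimator ybar * xbar / Xbar, under positive correlation between y and x.
import numpy as np

rng = np.random.default_rng(0)
N, n, reps = 5000, 100, 2000
x_pop = rng.gamma(4.0, 2.0, N)
y_pop = 3.0 + 1.5 * x_pop + rng.normal(0.0, 2.0, N)     # positively correlated with x
Xbar, Ybar = x_pop.mean(), y_pop.mean()

est = {"mean": [], "ratio": [], "product": []}
for _ in range(reps):
    idx = rng.choice(N, n, replace=False)
    xbar, ybar = x_pop[idx].mean(), y_pop[idx].mean()
    est["mean"].append(ybar)
    est["ratio"].append(ybar * Xbar / xbar)
    est["product"].append(ybar * xbar / Xbar)

for name, values in est.items():
    mse = np.mean((np.array(values) - Ybar) ** 2)
    print(f"{name:8s} MSE = {mse:.4f}")    # the ratio estimator wins here because the correlation is positive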
  • Item
    STRUCTURAL EQUATION MODELING OF BILATERAL LATENT STRUCTURE OF SATISFACTION WITH LIFE AND QUALITY OF LIFE
    (UNIVERSITY OF ILORIN, 2018-05) ALADENIYI, OLABIMPE BODUNDE
    Structural Equation Modeling (SEM) is a method which uses various types of models to depict relationships among observed variables, with the same basic goal of providing a quantitative test of a theoretical model hypothesized by the researcher. Several instruments, including the World Health Organization WHOQoL-BREF for non-HIV patients, the WHOQoL-HIV (BREF) for HIV patients and the Satisfaction with Life Scale (SWLS), have been provided for assessing quality of life. These instruments may be non-country or region specific. The aim of the study, therefore, was to investigate the bilateral structure between the instruments as well as to reduce the number of variables in the instruments. The specific objectives were to: (i) assess the impact of the variables of the instruments on their various latent constructs through their bilateral structure; (ii) formulate the SEMs for non-HIV and HIV patients; (iii) compare Maximum Likelihood (ML) and Generalized Least Squares (GLS) estimates of the equations in (ii); (iv) regroup variables in the WHOQoL-HIV (BREF) and formulate their SEMs; and (v) obtain a reduced instrument from the combined WHO QoL instruments and assess its effectiveness. The two estimators used on the instruments for parameter estimation in the SEMs were ML and GLS. A binary logistic regression model was also used on the combined instruments. Life data were collected from over 300 patients in each of two randomly selected states out of the six states in South-Western Nigeria. The AMOS version 21 statistical software was used to analyze the data. The following were the major findings of the study: i. Most of the variables in the instruments had a significant impact on their latent constructs (domains). There were also variations in the path diagrams for the WHO QoL instruments. ii. The SEMs were formulated, and it was noted that for non-infected patients the psychology domain had the most substantial causal effect on quality of life, while the physical domain had the most substantial causal effect on the quality of life of HIV patients. iii. Using some goodness-of-fit criteria, the GLS estimation method was more appropriate than the ML method for analysing ordinal categorical data, as in this study. iv. Reclassification of the domains in the WHOQoL-HIV (BREF) showed that some variables, such as pain and discomfort and dependence on medication, were misgrouped. v. The study established that not all the models formulated using the WHOQoL instruments were admissible with the proposed regrouped variables. vi. The positive predictive value and the negative predictive value of the merged reduced instrument were as good as those of the standard WHO quality of life instruments. In conclusion, the instrument with the merged reduced variables can be used to obtain data for both groups of patients together as well as for each group separately. Moreover, the GLS estimation method should be used when analyzing data from these instruments. It is therefore recommended that the merged reduced instrument be used for both groups of patients and that GLS be used for analysis.
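    For readers less familiar with SEM notation, a model of the kind formulated here can be written in the standard LISREL-style form below (a generic statement of the measurement and structural equations, not the thesis's specific model); the parameters are estimated by minimising an ML or GLS discrepancy between the model-implied and sample covariance matrices.

% measurement models linking observed items to latent domains, and the structural model among the latent constructs
\mathbf{x} = \Lambda_x \boldsymbol{\xi} + \boldsymbol{\delta}, \qquad
\mathbf{y} = \Lambda_y \boldsymbol{\eta} + \boldsymbol{\epsilon}, \qquad
\boldsymbol{\eta} = B \boldsymbol{\eta} + \Gamma \boldsymbol{\xi} + \boldsymbol{\zeta}.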
  • Item
    MODIFICATION TO SELECTION PROCEDURES IN PROBABILITY PROPORTIONAL TO SIZE SAMPLING WITHOUT REPLACEMENT
    (UNIVERSITY OF ILORIN, 2017-06) DAWODU, OMOTOLA OMOTAYO
    Probability proportional to size sampling (or unequal probability sampling) is a probability sampling scheme in which every unit in the population has a different probability of being selected in the sample. It can be with or without replacement. Most sample selection procedures in probability proportional to size sampling without replacement (PPSSWOR) focus on the selection of samples of size two, which is the limitation addressed in this study. One existing procedure involves selecting the first unit with probability proportional to size without replacement and then selecting the second unit with a revised probability, where the size measures are the non-zero known probability values assigned to each unit of the population at any specific draw and N is the total number of units. Another procedure involves selecting the first unit with probability proportional to size without replacement and then selecting a random sample of size n-1 from the remaining N-1 units, where α and β are integer constants. This study aimed at modifying the sample selection procedures in PPSSWOR described above to produce more efficient ones that can be used for any sample size. The specific objectives of this study were to: (i) obtain two modified sample selection procedures (I and II) in PPSSWOR for any sample size; (ii) derive the probability of inclusion of a single unit π_i and the joint probability of inclusion of units π_ij in the sample for procedures I and II; and (iii) determine the most efficient sample selection procedure, with minimum variance, using both real-life and simulated data. Modified procedure I involves selecting the first unit with probability proportional to P_i/(2-P_i) without replacement and then selecting a random sample of size n-1 from the remaining N-1 units, while modified procedure II involves selecting the first unit with probability proportional to P_i^α/(2-P_i^β) without replacement and then selecting a random sample of size n-1 from the remaining N-1 units. The optimum values of α and β were determined through empirical studies. The findings were: (i) two modified sample selection procedures (I and II) in PPSSWOR were obtained, and both procedures can be used to select a sample of any size for specified values of the two constants; (ii) the probabilities of inclusion of a single unit, π_i, and the joint probabilities of inclusion of units, π_ij, in the sample in PPSSWOR were derived for these modified procedures; and (iii) using both real-life and simulated data, modified procedure II was found to have minimum variance and hence to be the most efficient sample selection procedure for any sample size. This study concluded that, for the empirically determined values of α and β, modified procedure II is better than the two existing procedures. It is therefore recommended for use in probability proportional to size sampling without replacement.
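    A minimal sketch of modified procedure I as described above: the first unit is drawn with probability proportional to P_i/(2 - P_i), then a simple random sample of n - 1 units is taken from the remaining N - 1 units. The size measures and sample size below are illustrative, and the first-order inclusion probabilities are estimated by simulation rather than from the thesis's formulas.

# Modified procedure I: first unit with probability proportional to P_i / (2 - P_i),
# then a simple random sample of n - 1 units from the remaining N - 1 units.
import numpy as np

rng = np.random.default_rng(7)

def draw_sample(P, n):
    N = len(P)
    q = P / (2.0 - P)
    first = rng.choice(N, p=q / q.sum())
    rest = rng.choice(np.delete(np.arange(N), first), size=n - 1, replace=False)
    return np.concatenate(([first], rest))

P = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.25])   # size measures, summing to 1
counts = np.zeros(len(P))
reps = 20000
for _ in range(reps):
    counts[draw_sample(P, n=3)] += 1
print(counts / reps)    # Monte Carlo estimate of the inclusion probabilities pi_i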
  • Item
    EFFECTS OF PERTURBATIONS IN THE CORIOLIS AND CENTRIFUGAL FORCES ON THE STABILITY OF GENERALIZED PHOTO-GRAVITATIONAL RESTRICTED THREE-BODY PROBLEM
    (UNIVERSITY OF ILORIN, 2017-02) JAIYEOLA, Sefinat Bola
    The study of the classical Restricted Three-Body Problem (RTBP) and its generalizations has been of major interest to researchers over the years. This is due to the rising need for accuracy in determining astrometric positions, which helps to reveal peculiarities of the components of motion and to draw conclusions on the stability of space vehicles to be launched. This has led to the necessity of considering all possible physical properties (oblateness/triaxiality, radiation pressure, Poynting-Robertson (PR) drag, perturbing forces, etc.) that affect the motion of particles in space. The effect of perturbations in the Coriolis and centrifugal forces on the stability of the generalized photo-gravitational RTBP has been a major focus of investigations. However, this effect under the influence of the PR-drag from both oblate bodies has received little or no attention. Therefore, the aim of this research work was to investigate how perturbations in the Coriolis and centrifugal forces affect the stability of the triangular libration points of the RTBP when the primaries are considered to be oblate and radiating, with PR-drag effects. The objectives of this study were to: determine the effect of PR-drag on the stability of the libration points of the generalized RTBP; investigate the effects of the perturbations in the Coriolis and centrifugal forces on the stability of the generalized RTBP in the linear sense; establish the periodic orbit (period of oscillation, orientation and semi-axes) of the proposed system; and verify the results obtained using astrophysical data for the Kruger 60 and RXJ0450, 1-5658 binary systems. The Hamiltonian and Lagrangian methods were employed to establish the relevant equations of motion, obtain the triangular libration points and investigate their stability using Murray's and Routh-Hurwitz's criteria, with the results verified for the two binary systems using MATLAB and Microsoft Excel. The findings from this study showed that the: • generalized system was unstable around the triangular libration points due to the presence of the PR-drag effect from both bodies; • presence of the stabilizing-factor parameter in the roots of the characteristic equation does not change the instability of the system around the libration points; • period for the growth of the particle oscillation is dependent on the PR-drag parameter only, in the linear sense; • orientation and lengths of the semi-axes are dependent on all the perturbing parameters; and • changes in the values of the perturbation parameters affect the values of the libration points and the roots of the characteristic equations computed for the two binary systems but do not satisfy the criteria for stability. The study concluded that the system remained unstable even with the significant influence of perturbations, due to the strong destabilizing effect of the PR-drag force. This work, as a generalization of the classical case and the work of others, is therefore recommended to serve as a reference for achieving more interesting and vital results in Space Dynamics and as an added value to designers of spacecraft and to aerospace agencies.
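    Stability in the linear sense is decided by the roots of the characteristic equation; the sketch below checks the Routh-Hurwitz conditions for an arbitrary characteristic polynomial by testing the leading principal minors of the Hurwitz matrix. It is a generic utility, and the example coefficients are chosen purely for illustration rather than computed from the binary systems studied.

# Routh-Hurwitz test: all roots of a_0 x^n + a_1 x^(n-1) + ... + a_n (with a_0 > 0) have
# negative real parts iff every leading principal minor of the Hurwitz matrix is positive.
import numpy as np

def is_hurwitz_stable(coeffs):
    a = list(coeffs)                     # a[0], a[1], ..., a[n]
    n = len(a) - 1
    H = np.zeros((n, n))
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            k = 2 * i - j
            if 0 <= k <= n:
                H[i - 1, j - 1] = a[k]
    minors = [np.linalg.det(H[:m, :m]) for m in range(1, n + 1)]
    return all(m > 0 for m in minors)

# illustrative quartic characteristic polynomials
print(is_hurwitz_stable([1, 10, 35, 50, 24]))   # roots -1, -2, -3, -4 -> True
print(is_hurwitz_stable([1, 1, 2, 3, 5]))       # unstable example -> False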
  • Item
    ALGEBRAIC AND COMBINATORIAL RESULTS OF ORDER-PRESERVING FULL CONTRACTION TRANSFORMATION SEMIGROUP
    (UNIVERSITY OF ILORIN, 2017-02) IBRAHIM-GARBA, Risqot
    Consider a finite set, the semigroup of full contraction transformations on it, and the subsemigroup of all order-preserving full contraction transformations, denoted OCTn. Several works have been done on the algebraic properties of such semigroups, and results obtained include the generating set, the structure of the starred Green's relations, and the local and global U-depth of singular self and strictly partial one-one mappings, but the combinatorial properties of OCTn have not been considered. Therefore, this study focuses on the combinatorial properties of OCTn using the classes of the starred Green's relations; other algebraic properties, such as the local U-depth and the status, which had not been investigated, were also examined. The structure of the Green's relations of OCTn was also examined, which extended some results in the literature. The aim of this study is to develop the algebraic and combinatorial properties of the order-preserving full contraction transformation semigroup, and the objectives are to: (i) determine the local and global U-depth of OCTn relative to its generating set; (ii) obtain the status of OCTn using the global U-depth; (iii) examine the number of classes of the starred Green's relations of height r; (iv) determine the total number of such classes in OCTn; (v) investigate the number of elements in each class; and (vi) characterize the Green's relations of OCTn. The following procedures were used to obtain the results of the study: the elements of the semigroup were arranged based on their height, and within each height by their image sets and their kernel sets; from the table obtained, triangular arrays and sequences were formed; the patterns of the arrangement were studied; and formulas were deduced in each case through combinatorial principles. The GAP software was used to confirm the total number of elements. Also, the minimum length of factorisation over the known generating set was obtained. The findings of the study were: • for each element, the local U-depth is equal to its defect, and an expression for the global U-depth was obtained; • the status of OCTn satisfies the property obtained via the global U-depth; • formulas were obtained for the numbers of the starred Green's classes of height r; • formulas were obtained for the total numbers of the starred Green's classes in OCTn; • formulas were obtained for the numbers of elements in the classes; and • the equivalence classes of the Green's relations were characterized based on their image sets and kernel classes. In conclusion, some algebraic and combinatorial results on the subsemigroup OCTn were obtained, with relevant examples. This research work and its findings are expected to be beneficial in areas such as computational theory, automata theory and formal languages, and can assist in sorting data and designing better networks.
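    The counting results summarised above can be checked computationally for small n. The brute-force sketch below enumerates the order-preserving full contraction maps of {1, ..., n} (assuming the usual definitions: α is order-preserving if x ≤ y implies α(x) ≤ α(y), and a full contraction if |α(x) - α(y)| ≤ |x - y| for all x, y) and tallies them by height, i.e. image size.

# Brute-force enumeration of OCT_n, the order-preserving full contraction maps on {1, ..., n},
# tallied by height |Im(alpha)|; a numerical check, not a derivation of the formulas.
from itertools import product
from collections import Counter

def octn(n):
    maps = []
    for alpha in product(range(1, n + 1), repeat=n):
        order_preserving = all(alpha[i] <= alpha[i + 1] for i in range(n - 1))
        contraction = all(abs(alpha[i] - alpha[j]) <= abs(i - j)
                          for i in range(n) for j in range(n))
        if order_preserving and contraction:
            maps.append(alpha)
    return maps

for n in range(2, 6):
    maps = octn(n)
    by_height = Counter(len(set(a)) for a in maps)
    print(n, len(maps), dict(sorted(by_height.items())))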
  • Item
    SOME RESULTS OF OPIAL-TYPE INTEGRAL INEQUALITIES
    (UNIVERSITY OF ILORIN, 2018-03) ANTHONIO, Yisa Oluwatoyin
    Inequalities form an essential branch of mathematics and are useful tools in many areas of analysis. Some inequalities, such as Wirtinger's, Holder's, Cauchy's, Minkowski's, Hardy's and Opial's inequalities, are currently much used by researchers, and these are of particular interest in this study. Opial's inequality and its generalisations have various applications in the theories of differential and difference equations. Opial's inequality has also been studied on time-scales (an arbitrary non-empty closed subset of the real numbers), a framework introduced by Hilger in order to unify discrete and continuous analysis. Little work has been done on the determination of the best possible constant for Opial-type inequalities on time-scales, for multi-functions and for higher order delta derivatives. This study aimed at generalising Opial-type inequalities with the best possible constant. The objectives of the study were to: (i) obtain some deductions from the Opial-type inequalities of Shum, Pachpatte, Olech, Calvert, Fabelurin, Oguntuase, Beesack, Yang, and Maroni; (ii) investigate the nature of the deduced Opial-type inequalities; (iii) obtain sharp bounds for the new Opial-type inequalities; and (iv) generalise Opial-type inequalities on time-scales. The methodology adopted was based on definite and indefinite integrals together with a modified Jensen inequality, which rests on the supporting-line property of convex functions: for a convex function and each point of its domain, there exists a line through that point lying on or below the function. The findings of the study were that: (i) deductions from the results of Shum (1974), Pachpatte (1986), Olech (1962), Calvert (1967), Fabelurin (2010), Oguntuase (2009), Beesack (1962), Yang (1983), and Maroni (1967) were generalised; (ii) the properties of Opial-type inequalities involving integrals of functions and their derivatives were verified; (iii) sharp bounds for the new Opial-type inequalities were obtained; and (iv) Opial-type inequalities were generalised on time-scales. The study concluded that the modified Jensen inequality was useful in refining and extending Opial-type inequalities in order to generalise some results on time-scales. It is recommended that the refined Jensen inequality be used in obtaining Opial-type inequalities.
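    For reference, the prototype result on which these generalisations build is Opial's classical inequality, stated here with its best possible constant (a standard statement included for context, not quoted from the thesis): if y is absolutely continuous on [0, h] with y(0) = y(h) = 0, then

\int_{0}^{h} \bigl| y(x)\, y'(x) \bigr| \, dx \;\le\; \frac{h}{4} \int_{0}^{h} \bigl| y'(x) \bigr|^{2} \, dx,

    and the constant h/4 cannot be replaced by any smaller number.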
  • Item
    MODELING DYNAMIC MICRO AND MACRO PANEL DATA WITH AUTOCORRELATED ERROR TERMS
    (UNIVERSITY OF ILORIN, 2018-05) UTHMAN, KAFAYAT TOLANI
    Estimation and inference in dynamic panel data models are limited by the presence of autocorrelation in the error terms. This causes the traditional panel estimators to be biased and inconsistent. This study was aimed at investigating the sensitivity of some dynamic panel data estimators in the presence of autocorrelation. The objectives of the study were to: (i) evaluate the performance of some dynamic panel data estimators in the presence of autocorrelation; (ii) evaluate the performance of the estimators with changes in sample sizes and time periods; (iii) propose an estimator by modifying the existing estimators; and (iv) compare the performance of the new estimator with some of the existing estimators at different sample sizes and levels of autocorrelation. A simulation study was carried out using a Monte Carlo experiment in the R statistical environment to generate data with a panel structure for sample sizes 10, 20, 50, 100 and 200, time periods 5, 10, 15 and 20, autoregressive coefficients 0.3, 0.5 and 0.7, and autocorrelation coefficients 0.2, 0.5 and 0.9. Five dynamic panel data estimators were compared: Ordinary Least Squares (OLS), Arellano-Bond Generalized Method of Moments one-step (ABGMM1), Blundell-Bond System Generalized Method of Moments one-step (SYS1), Anderson-Hsiao Instrumental Variable in difference form (AH(d)), and the proposed modified estimator (P-est.). Two robust estimators (M and MM) were also used. Real-life data from the Organization of Petroleum Exporting Countries were used to confirm the results from the simulation study. Absolute bias and root mean square error were used to evaluate the finite-sample properties of the estimators. The following were the major findings from the study: (i) the AH(d) and ABGMM1 estimators performed well in the presence of autocorrelation; (ii) the AH(d) estimator performed relatively well when the time period is small, while the ABGMM1 estimator outperformed all other estimators when the sample size (n) is large for all the time periods considered; (iii) all estimators (with the exception of the Blundell-Bond System GMM and OLS) generally performed better with both small and large time periods (T), with ABGMM1 showing the largest improvement as the sample size (n) and time periods (T) increase; (iv) the proposed modified estimator was obtained from the existing estimators; (v) the proposed modified estimator outperformed all other estimators in small and large sample sizes irrespective of the time period; and (vi) results obtained from the analysis of the real-life data validated the findings from the Monte Carlo studies. In conclusion, the proposed modified estimator is more appropriate for estimating the parameters of lagged dependent and exogenous variables when dealing with dynamic panel data models in the presence of autocorrelation. It is recommended that the proposed modified estimator be used when dealing with dynamic panel data models in the presence of autocorrelation.
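    A minimal sketch of the kind of data-generating process used in such a Monte Carlo experiment: a dynamic panel with individual effects and AR(1) errors, the setting in which pooled OLS becomes biased and inconsistent. The variable names, the single exogenous regressor and all parameter values are illustrative assumptions, not the thesis's exact design.

# Simulate y_it = rho * y_{i,t-1} + beta * x_it + mu_i + u_it with AR(1) errors
# u_it = phi * u_{i,t-1} + e_it, then fit a naive pooled OLS for comparison.
import numpy as np

rng = np.random.default_rng(42)

def simulate_panel(n=50, T=10, rho=0.5, beta=1.0, phi=0.5, burn=50):
    mu = rng.normal(0, 1, n)                      # individual effects
    y = np.zeros((n, T + burn))
    u = np.zeros((n, T + burn))
    x = rng.normal(0, 1, (n, T + burn))
    for t in range(1, T + burn):
        u[:, t] = phi * u[:, t - 1] + rng.normal(0, 1, n)
        y[:, t] = rho * y[:, t - 1] + beta * x[:, t] + mu + u[:, t]
    return y[:, burn:], x[:, burn:]               # discard the burn-in period

y, x = simulate_panel()

# Pooled OLS of y_it on y_{i,t-1} and x_it (no instruments): typically distorted estimates
Y = y[:, 1:].ravel()
X = np.column_stack([y[:, :-1].ravel(), x[:, 1:].ravel()])
print(np.linalg.lstsq(X, Y, rcond=None)[0])       # compare with the true values (0.5, 1.0)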
  • Item
    HYBRID PLAN DESIGNED AS POSSIBLE PENSION PLAN FOR NIGERIA
    (UNIVERSITY OF ILORIN, 2016-09) AKALONU, RAPHAEL ONYECHEFULE
    The history of pension schemes in Nigeria started with the 1951 Pensions Ordinance. In 1979, the then military government established a Defined Benefit scheme (the old scheme) for civil servants. The scheme failed to meet its objective due to maladministration and lack of funding, resulting in non-payment of benefits to workers on retirement. Consequently, the government established the Pension Reform Act 2004 (the current scheme), a Defined Contribution (DC) plan, aimed at remedying the shortcomings of the old one. Soon afterwards, complaints appeared in the media that the 15 percent contribution of the current plan was not enough to provide a meaningful benefit after 35 years of service. The main objective of this study was to design a Hybrid Plan as a possible pension plan for Nigeria. Specifically, the study focused on: (i) comparing the monetary benefits of the old and the current pension schemes; (ii) comparing the adequacy of the current scheme with those of eight other developed and developing countries' pension schemes selected from five continents; (iii) determining the effects of the pension risk factors (mortality and interest rate volatility); and (iv) designing three new pension plans. The population for the study was Nigerian employees grouped into seven categories by Nigeria's National Salaries, Incomes and Wages Commission (NSIWC) as at July 2010. Four groups were randomly selected for the study. Replacement ratio data for the eight countries whose pension plans were to be compared with that of Nigeria were obtained from 2012 publications of the International Monetary Fund and the Organization for Economic Cooperation and Development. The schedule for calculating pension benefits under the old scheme was obtained; such a schedule does not exist for the current scheme, so an actuarial method for estimating such benefits was used. The pension benefits of the two pension schemes were then compared. The replacement ratio was calculated for the current scheme to compare its adequacy with those of the eight countries whose pension replacement ratios had been obtained. Assumed simulated interest rates, salary increments and annuity values were used and combined with mortality functions. The results showed that: (i) the ratio of gratuity paid by the old scheme to that paid by the current scheme was 3.5 to 1, while the corresponding ratio of pension benefits was 2.3 to 1, implying that the old scheme paid at least twice as much as the current one; (ii) the eight other countries had adopted the World Bank's (1994) three-pillar pension models in their respective reforms, while Nigeria has the mandatory Defined Contribution plan for workers and no social security or voluntary plan for either the formal or the informal sector; (iii) an increase in the interest rate increased the amount available for the purchase of an annuity, while a decrease in the mortality rate improved life expectancy and hence the annuity rate, resulting in a decrease in the amount of pension receivable; and (iv) three pension plans, namely a Minimum Guaranteed Money Purchase Plan, a Cash Balance Plan and a Hybrid Plan, were designed for the formal sector, and a Mandatory Collective Personal Plan was also proposed for the informal sector. The study concluded that the Hybrid Plan had a higher replacement ratio, even at 20% volatility, than the others. It was therefore recommended that the Hybrid Plan, with its higher replacement ratio, be adopted by the Nigerian Government for its workers.
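    A back-of-the-envelope sketch of the replacement-ratio calculation for a defined-contribution plan, using the 15 per cent contribution rate and 35 years of service mentioned above; the interest rate, salary growth rate and annuity factor are illustrative assumptions and not values from the thesis.

# Replacement ratio of a defined-contribution plan: accumulate contributions with interest,
# convert the fund to an annual pension with an annuity factor, divide by the final salary.
def dc_replacement_ratio(contribution_rate=0.15, years=35, salary_growth=0.05,
                         interest=0.07, annuity_factor=12.0, starting_salary=1.0):
    fund = 0.0
    salary = starting_salary
    for _ in range(years):
        fund = fund * (1 + interest) + contribution_rate * salary
        salary *= 1 + salary_growth
    final_salary = salary / (1 + salary_growth)     # salary in the last year of service
    pension = fund / annuity_factor                 # annual pension purchased by the fund
    return pension / final_salary

print(round(dc_replacement_ratio(), 3))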
  • Item
    MODIFIED TECHNIQUES FOR MODELLING SURVIVAL DATA WITH COMPETING RISKS
    (UNIVERSITY OF ILORIN, 2017-07) DAUDA, KAZEEM ADESINA
    A major drawback of the traditional Cox Proportional Hazard (CPH) technique for modeling survival data is its inability to identify and classify subjects based on their inherent biological associations with the respective risk endpoints, as well as its inability to model possible interaction effects among the covariates. To this end, this work aims at developing some techniques for modeling survival time data, especially with competing risks, to address these challenges. The objectives of this study are to: (i) develop a survival tree-based technique via the Classification and Regression Tree (CART) approach using within-node homogeneity and between-node heterogeneity methods; (ii) extend the Multi-Layer Perceptron Artificial Neural Network (MLPANN) for modeling survival data with competing risks; (iii) extend the Multivariate Adaptive Regression Splines (MARS) method to model classical survival data using Cox-Snell residuals; and (iv) compare the efficiencies of the proposed methods with existing methods using simulated and real-life data sets. The survival trees were formed by calculating the reduction in impurity going from the parent node to the child nodes via an impurity function. Two impurity measures, deviance and Cox-Snell residuals, were used with both the sum-of-squares and absolute-value impurity functions in the within-node homogeneity approach. The Cox-Snell residual was employed as a common response in the MLPANN and MARS models. For the between-node heterogeneity trees, different inference procedures for testing the equality of two cumulative incidence functions were employed. The efficiency of the various methods was assessed using root mean square, mean absolute and relative square errors. Findings of the study were that: (i) within-node homogeneity and between-node heterogeneity survival trees with competing risks were developed to model the data and classify the subjects based on their associated risks; (ii) an MLPANN-based method for modeling survival data with competing risks was developed; (iii) a method for modeling classical survival data using MARS techniques was developed and was found capable of modeling the possible interactions among the covariates; (iv) the proposed survival tree model determined the cutpoint of the tree better than the existing methods; (v) the proposed MLPANN-based method was shown to be more efficient than the Cox PH model using three real-life data sets; and (vi) results obtained from simulated and real-life HIV data sets showed that the proposed survival MARS model was better than the existing methods considered. This study concluded that the proposed methods were suitable for identifying subjects with their associated risks as well as being able to model possible interaction effects of the covariates on survival time. They are therefore recommended whenever interest is focused on identifying subjects with their biological risks through prognostic variable biomarkers for diagnostic purposes.
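    The Cox-Snell residual used as the common response in the MARS and neural-network extensions is simply the estimated cumulative hazard evaluated at each subject's observed time. A minimal numpy sketch follows; the baseline hazard, covariates and coefficients are made up for illustration and are not taken from the study.

# Cox-Snell residuals r_i = H0(t_i) * exp(x_i' beta); under a correctly specified Cox model
# they behave like a (possibly censored) sample from the unit exponential distribution.
import numpy as np

def cox_snell_residuals(times, X, beta, baseline_cumhaz):
    # baseline_cumhaz: callable returning the estimated cumulative baseline hazard H0(t)
    return baseline_cumhaz(times) * np.exp(X @ beta)

# toy example with a Weibull-type baseline H0(t) = (t / 2.0) ** 1.5
times = np.array([1.2, 3.4, 0.7, 5.1])
X = np.array([[0.5, 1.0], [1.5, 0.0], [0.2, 0.3], [2.0, 1.0]])
beta = np.array([0.4, -0.6])
print(cox_snell_residuals(times, X, beta, lambda t: (t / 2.0) ** 1.5))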
  • Item
    DEVELOPMENT AND APPLICATION OF ZERO-TRUNCATED PROBABILITY MASS FUNCTION OF THE COM-BINOMIAL DISTRIBUTION
    (UNIVERSITY OF ILORIN, 2018-05) ADEROJU, SAMUEL ADEWALE
    The major problem in modeling count data is over-dispersion, which occurs as a result of a long tail, or of too many zeros, or none at all, compared with what would normally be expected under the Poisson and binomial distributions. Zero-truncated distributions arise in those situations where there are no zeros by the nature of the data (structurally). The existing zero-truncated distributions are not flexible enough to capture peculiar characteristics such as a long tail together with over-dispersion. The aim of this research was to develop and implement a more generalized zero-truncated distribution, arising from a mixture of distributions for non-zero count data, that includes most of the characteristics which existing ones lack. The specific objectives were to: (i) examine the issues of zero-truncated count data; (ii) investigate the effectiveness of selected binomial-based distributions for count data with various dispersions; (iii) develop the zero-truncated Com-binomial distribution; (iv) identify some properties of the newly developed distribution; and (v) demonstrate the use of, and assess the performance of, the distribution against some existing ones using real-life datasets. Following the mixture distribution methodology, the Zero-Truncated Com-Binomial (ZTCB) distribution was derived from the Conway-Maxwell-Poisson-type generalization of the binomial distribution. The first two moments were derived via the probability generating function. Maximum likelihood estimates of the parameters were obtained by direct maximization of the log-likelihood function using the "optim" routine in the R software. The findings of the study were: (i) the challenge in modeling count data with a long tail, and the problems arising from zeros (truncated or excess), were resolved; (ii) the Com-Binomial (CB) distribution outperformed other mixture distributions in handling either under-dispersion or over-dispersion; (iii) the probability mass function (pmf) of the ZTCB distribution was derived, and its first two moments were obtained via the probability generating function; (iv) for carefully chosen values of the parameter ν, the ZTCB distribution peaks around the mean with two tails, resembling the normal distribution, or peaks around 1 or n, resembling an exponential shape; it is the generalization of the Zero-Truncated Com-Poisson, Zero-Truncated Poisson and Zero-Truncated Binomial distributions; and (v) the ZTCB distribution is more flexible in handling all levels of dispersion than the Zero-Truncated Multiplicative Binomial distribution. The study concluded that the ZTCB distribution, which is characterized by two parameters, is more flexible than other distributions. It is recommended that, when modeling structurally zero-truncated data, the ZTCB distribution be used to obtain robust results. This study therefore provides a useful alternative to the existing distributions.
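    A sketch of a zero-truncated Conway-Maxwell (COM) binomial probability mass function and a direct maximum-likelihood fit with scipy's optimiser, in the spirit of the "optim" approach described above. The parameterisation shown, P(Y = y) proportional to C(n, y)^ν p^y (1 - p)^(n - y) with the mass at zero removed, is the standard COM-binomial form and may differ in detail from the thesis; the data and starting values are illustrative.

# Zero-truncated COM-binomial pmf and maximum-likelihood estimation of (p, nu).
import numpy as np
from scipy.optimize import minimize
from scipy.special import comb

def ztcb_pmf(y, n, p, nu):
    ys = np.arange(n + 1)
    w = comb(n, ys) ** nu * p ** ys * (1 - p) ** (n - ys)   # unnormalised COM-binomial weights
    pmf = w / w.sum()
    pmf_zt = pmf / (1.0 - pmf[0])                            # remove the mass at zero
    pmf_zt[0] = 0.0
    return pmf_zt[y]

def neg_loglik(theta, data, n):
    p, nu = theta
    if not (0 < p < 1):
        return np.inf
    return -np.sum(np.log(ztcb_pmf(data, n, p, nu)))

n = 10
data = np.array([1, 2, 2, 3, 1, 4, 2, 3, 5, 2, 1, 3])        # illustrative non-zero counts
fit = minimize(neg_loglik, x0=[0.3, 1.0], args=(data, n), method='Nelder-Mead')
print(fit.x)                                                  # estimates of (p, nu)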
  • Item
    DERIVATION OF A GENERALIZED DISTRIBUTION FOR MATERNAL MORTALITY RATIO
    (UNIVERSITY OF ILORIN, 2018-05) OMEKAM, IFEYINWA VIVIAN
    Maternal mortality is an outcome of adverse health and remains a major challenge in some societies. A study on distribution fitting for the global Maternal Mortality Ratio (MMR) is therefore needed. Analysis of data without prior knowledge of the distribution that describes the data may lead to misleading or irrelevant results. Distribution fitting to data provides the best fitting distribution for data analysis. Limitations in the characteristics of existing distributions motivate the generalization of distributions in order to improve goodness of fit and introduce more flexibility. The aim of this study was to derive a generalized distribution for MMR, while the specific objectives were to: (i) generate families of generalized distributions; (ii) illustrate the flexibility of generalized distributions; (iii) fit some existing distributions to MMR to determine the best fitting distribution; (iv) derive generalized distributions for the best fitted distribution; and (v) fit and assess the derived generalized distributions on MMR. Five existing parameter induction methods were applied to generate families of generalized distributions with an additional parameter. Permutations of these five parameter induction methods, namely Lehmann Alternative 1, Lehmann Alternative 2, the Marshall and Olkin method, the Power Transformation method and the α-Power Transformation method, taken two methods at a time, were applied sequentially to obtain generalized families with two additional shape parameters. The same methods were also applied twice. Exponentiated families of the Generalized Pareto Distribution (GPD) were generated, and the flexibility of the generalized families was illustrated by showing the effects of the introduced parameters on the shapes of the probability density and hazard functions. Histogram plots, Kolmogorov-Smirnov (K-S) distances and the Akaike Information Criterion (AIC) were employed in determining the best fitting distribution for MMR. Using the best fitted distribution as the base distribution, its generalized distributions were derived from the families generated and subsequently fitted to MMR, with AIC and K-S distances used for selection. Findings of the study were that: (i) seventeen distinct generalized families of distributions with two additional parameters were generated, the number being reduced by the commutativity and idempotency of some of the methods; five families with one additional parameter were also generated; (ii) the Lehmann type I GPD improved flexibility by introducing unimodal and bathtub shapes in both the probability density and hazard functions; (iii) the Frechet distribution provided the best fit for MMR amongst the plausible distributions studied; (iv) four generalized Frechet distributions with an additional shape parameter and two generalized Frechet distributions with two additional shape parameters were generated; and (v) the generalized Frechet distributions improved goodness of fit based on K-S distances, with the Lehmann type II generalized Frechet giving the best fit, while AIC selected the Marshall-Olkin generalized Frechet distribution as a good fitting but parsimonious distribution for MMR. In conclusion, families of generalized distributions introducing n parameters may be generated by sequentially applying permutations of s (s ≥ n) distinct parameter induction methods, taken n methods at a time. The Lehmann type I GPD introduced flexibility, thereby illustrating the flexibility of generalized distributions.
    The generalized Frechet distributions improved goodness of fit, but for parsimony the Marshall-Olkin generalized Frechet distribution is recommended as an alternative reference distribution to the Frechet distribution for modelling MMR.
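    For concreteness, three of the five parameter-induction methods named above have the usual textbook forms shown below, with F the baseline cdf and F-bar = 1 - F its survival function (standard statements; the thesis may use slightly different parameterisations). Applying two such maps in sequence yields a family with two additional shape parameters.

% Lehmann alternative 1, Lehmann alternative 2, and the Marshall-Olkin family
G_{1}(x) = \bigl[F(x)\bigr]^{\alpha}, \qquad
G_{2}(x) = 1 - \bigl[1 - F(x)\bigr]^{\alpha}, \qquad
\bar{G}_{3}(x) = \frac{\theta\,\bar{F}(x)}{1 - (1-\theta)\,\bar{F}(x)}, \qquad \alpha, \theta > 0.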
  • Item
    CONSTRUCTION OF OPTIMAL MINIMUM REPLICATE DESIGNS
    (UNIVERSITY OF ILORIN, 2018) AIYELABEGAN, Adijat Bukola
    A binary block design of size D(t, b, k, r) has t treatments, set out in b blocks, each block having k units, each treatment occurring once or not at all in each block, and r = bk/t replicates. The focus in every experimental situation is to search for a design that provides maximum information but utilizes minimum experimental material. A design which can guarantee high precision in estimating parameters, with small probabilities of committing Type I and Type II errors, is considered useful to the experimenter. Therefore, the aim of this research work was to construct a new class of optimal designs with minimum replication. The specific objectives were to: (i) develop an algorithm for optimal binary block designs; (ii) assess the binary designs constructed using a new optimality criterion, Minimum Variance, $MV = \sum_{i=1}^{t-1} (e_i - \bar{e})^2$, where $e_i$ is the i-th eigenvalue of the information matrix of D(t, b, k, r) and $\bar{e}$ is the mean of the eigenvalues; and (iii) classify the obtained designs as partially balanced incomplete block designs (PBIBD) with m associate classes, m = 1, 2, ..., the best designs being the cases with m = 1 or 2, namely PBIBD/1 and PBIBD/2, respectively. Classes of optimal designs were constructed using the developed algorithm for the parameter combinations 2 ≤ r ≤ t, 6 ≤ t ≤ 15, and 2 ≤ k ≤ t/2. The information matrix was obtained for each design D(t, b, k, r) and the one with minimum MV was selected. Designs with equal eigenvalues were classified as PBIBD/1, and those with only two distinct eigenvalues as PBIBD/2. The following were the major findings from this study: the algorithm developed was able to search for the optimal design amongst a class of designs of the same size D(t, b, k, r); the MV criterion used in design assessment outperformed the well-known A-, D- and E-optimality criteria and was found to be less cumbersome to compute; and a new class of optimal designs was constructed, with results obtained for the parameter combinations 2 ≤ r ≤ t, 6 ≤ t ≤ 15, and 2 ≤ k ≤ t/2. The study concluded that the algorithm for constructing optimal experiments with small numbers of replicates was effective. The study also confirmed that increasing the number of replications or the block sizes would lead to improved efficiency but higher cost for the experiment. Thus, the research has opened up efficient designs for pilot surveys and easier computation at lower cost. For an experimenter to obtain an optimal design under replication and unit constraints, the construction of designs and their complementary designs should be considered. The optimal minimum replicate design is recommended for constructing a better design for a particular anti-symmetric design with an improved frequency distribution of the concurrence matrix.
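    A small sketch of the MV criterion described above: build the information (C-) matrix of a binary block design from its incidence matrix, take its t - 1 nonzero eigenvalues and compute the spread of the eigenvalues about their mean. The example incidence matrix is an arbitrary illustration, not one of the designs constructed in the thesis.

# MV criterion for a binary block design D(t, b, k, r):
# C = diag(r) - N diag(1/k) N', MV = sum over the t-1 nonzero eigenvalues of (e_i - e_bar)^2.
import numpy as np

def mv_criterion(N_inc):
    r = N_inc.sum(axis=1)                      # replications of each treatment
    k = N_inc.sum(axis=0)                      # block sizes
    C = np.diag(r) - N_inc @ np.diag(1.0 / k) @ N_inc.T
    e = np.sort(np.linalg.eigvalsh(C))[1:]     # drop the single zero eigenvalue
    return np.sum((e - e.mean()) ** 2)

# incidence matrix of a design with t = 4 treatments in b = 6 blocks of size k = 2
N_inc = np.array([[1, 1, 1, 0, 0, 0],
                  [1, 0, 0, 1, 1, 0],
                  [0, 1, 0, 1, 0, 1],
                  [0, 0, 1, 0, 1, 1]])
print(mv_criterion(N_inc))   # equals 0 here: all nonzero eigenvalues coincide (a BIBD)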
  • Item
    MODELING OF A CURE RATE FOR AN INFECTIOUS DISEASE WITH CO-INFECTION
    (UNIVERSITY OF ILORIN, 2018-06) BALOGUN, OLUWAFEMI SAMSON
    Several investigations nowadays allow for the examination of a cured fraction in the analysis of the survival function of a disease. This research was motivated by the fact that infectious diseases with co-infections have become more prominent in recent times, yet not much research has been done on their epidemiological modeling. Consequently, a survival model that incorporates the cure rate into the analysis, called the cure rate model, was adopted. The aim of the study was to model an infectious disease with a co-infection so as to estimate the performance of its management. The specific objectives were to: (i) derive the appropriate probability density functions for the sole infection and the co-infection; (ii) determine the distribution that fits best and estimate the cure rate parameter for the two situations; and (iii) examine and determine some risk factors associated with the two situations. The existing model used in the literature has been the exponential distribution. This study extended it to include two forms of the Weibull distribution to estimate the cure rate, using the maximum likelihood estimation method. Goodness-of-fit measures were presented to screen the distributions for use. Simulated and real-life data were used in this study, with the R and STATA software used in the estimation procedure. The findings of the study were: (i) the two-parameter Weibull distribution was the best fit for TB and TB-HIV co-infected patients in this situation; (ii) the cure rate for TB was 26.3%, which was higher than that for the TB-HIV co-infection, which was 23.1% (0.0001); (iii) the non-parametric median survival time of TB patients was 51 months, while that of TB-HIV co-infected patients was 33 months; and (iv) there was no risk factor associated with TB-HIV co-infected patients, while age was a significant risk factor for TB patients among the suspected risk factors used. The study concluded that an appropriate parametric model is applicable and can be used to model an infectious disease with a co-infection. The cure rate model is useful when sufficient information is available to implement it. It is recommended that this work be used to estimate cure rates in hospital settings or prevalence in cross-sectional data, and that, since hazard increases with age in the real-life data used, early screening of people be highly encouraged. The study therefore provides information which serves as a warning signal to the entire population to intensify the fight against TB and TB-HIV co-infection.
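    A common way of writing a cure rate (mixture cure) model with a Weibull latency distribution, of the kind fitted here, is shown below, where π is the cured fraction, δ_i the event indicator and t_i the observed times; this is a standard formulation and may differ in detail from the parameterisation used in the thesis.

% population survival, population density and the censored-data likelihood for the mixture cure model
S_{\mathrm{pop}}(t) = \pi + (1-\pi)\,e^{-(t/\lambda)^{k}}, \qquad
f_{\mathrm{pop}}(t) = (1-\pi)\,\frac{k}{\lambda}\Bigl(\frac{t}{\lambda}\Bigr)^{k-1} e^{-(t/\lambda)^{k}}, \qquad
L(\pi,\lambda,k) = \prod_{i} \bigl[f_{\mathrm{pop}}(t_i)\bigr]^{\delta_i} \bigl[S_{\mathrm{pop}}(t_i)\bigr]^{1-\delta_i}.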
  • Item
    A ROBUST ESTIMATION PROCEDURE IN LINEAR REGRESSION IN THE PRESENCE OF OUTLIERS
    (2018-07) ALANAMU, TAOHEEDAT
    The best method that can be used in regression analysis under certain assumptions is the Ordinary Least Squares (OLS) method. Such assumptions include constant variance, normality and uncorrelated error terms, among others. Violation of these assumptions, such as through the presence of outliers, may have a large effect on the ordinary least squares estimates. The existing robust techniques, namely the Least Absolute (LA) estimator, the Huber Maximum (H-M) estimator, the Bi-Square Maximum (B-M) estimator, the M-Maximum (MM) estimator, the Least Median of Squares (LMS) estimator, the Least Trimmed Squares (LTS) estimator and the S estimator (S), cope with these unusual observations. However, they may be sensitive to the presence of outliers, especially when the outliers occur in both the X and Y directions. The aim of this study, therefore, was to obtain a procedure for estimating the parameters of the linear regression model that is robust in the presence of outliers. The objectives were to: (i) obtain the best method in linear regression analysis among the existing robust methods, using efficiency and breakdown point; (ii) propose an efficient procedure for the estimation of parameters in linear regression in the presence of outliers, with a high breakdown point; and (iii) validate and compare the proposed procedure with the existing ones. An estimation method for the classical linear regression model was proposed using regularization methods that allow one to handle a variety of inferential problems. Specifically, each outlying point in the data is accommodated by a case-specific parameter through a ridge regression approach, and penalized estimators were suggested for when the number of parameters in the model exceeds the number of observed data points. The proposed estimation method was compared with the existing methods using simulated data with varying proportions and magnitudes of outliers in all directions. The results from the simulation were validated using real-life datasets. Performance was assessed using breakdown point and efficiency. The findings of the study were that: (i) the strength and weakness of any robust estimator depend strongly on the percentage, magnitude and direction of the outliers; (ii) among the existing robust estimators, MM is the best in terms of unidirectional outliers (X or Y direction); (iii) the proposed method was found to be better at dealing with outliers in all directions (X or Y or both); and (iv) robust estimators cannot improve the adequacy of the linear regression model. The study concluded that the proposed method is a better procedure for the estimation of parameters in linear regression in the presence of outliers, due to its high efficiency and breakdown point of up to 50%. The proposed method is therefore preferred and recommended whenever there is a high proportion of outliers and the direction of the outliers is unknown.
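    A minimal sketch of the case-specific-parameter idea mentioned above: augment the linear model with one parameter per observation and shrink those parameters with a penalty. A plain L2 (ridge) penalty is shown for simplicity; with this penalty the regression coefficients coincide with ordinary least squares, and the value of the augmentation is that large fitted case effects flag candidate outliers. The thesis's penalised estimators are more elaborate, and the penalty value and data below are illustrative assumptions.

# Ridge-penalised case-specific parameters: minimise ||y - X b - g||^2 + lam * ||g||^2.
# With an L2 penalty, b equals the least-squares fit; the fitted g_i act as outlier diagnostics.
import numpy as np

def case_specific_fit(X, y, lam=10.0):
    n, p = X.shape
    Z = np.hstack([X, np.eye(n)])                  # augment with one column per observation
    penalty = np.diag(np.r_[np.zeros(p), np.full(n, lam)])
    theta = np.linalg.solve(Z.T @ Z + penalty, Z.T @ y)
    return theta[:p], theta[p:]                    # (regression coefficients, case effects)

rng = np.random.default_rng(3)
X = np.column_stack([np.ones(30), rng.normal(size=30)])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.5, size=30)
y[5] += 8.0                                        # plant one outlier in the y direction
beta, gamma = case_specific_fit(X, y)
print(beta)                                        # identical to the ordinary least-squares fit
print(np.argmax(np.abs(gamma)))                    # index of the flagged observation (5)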
  • Item
    UNSTEADY HEAT AND MASS TRANSFER IN CONVECTIVE ROTATORY RIVLIN-ERICKSEN FLOW PAST A POROUS VERTICAL PLATE
    (UNIVERSITY OF ILORIN, 2018-07) AGUNBIADE, Samson Ademola
    Modelling of non-Newtonian fluid flow gives rise to non-linear problems which involve much computation. Non-Newtonian flow of the Rivlin-Ericksen type past a porous vertical plate has been the subject of investigation by researchers due to its applications in the engineering, food, paper and petroleum industries. A vast number of these investigations ignored the rotatory and viscous dissipation effects of the fluid for simplification of the problems. However, fluid flow realistically includes rotation and viscous dissipation. Therefore, the objectives of the study were to: (i) determine the contribution of viscous dissipation to the Rivlin-Ericksen fluid; (ii) evaluate rotatory effects on convective Rivlin-Ericksen flow past a porous vertical plate; (iii) examine the effects of chemical reaction and thermal-diffusion on the concentration profiles; (iv) analyze the influence of thermal radiation and viscous dissipation on the fluid temperature; and (v) determine the impact of diffusion-thermo on the concentration, temperature and velocity profiles. The non-dimensional momentum, energy and species governing equations of the study involve the non-dimensional velocity components in the coordinate directions together with the temperature, concentration, time, rotation parameter, magnetic parameter, Grashof number for heat transfer, Grashof number for mass transfer, viscoelasticity parameter, heat absorption coefficient, radiation parameter, Prandtl number, Schmidt number, chemical reaction parameter, suction velocity parameter, oscillation parameter, scalar constant, Dufour number, Soret parameter and dissipation parameter. These equations were reduced to ordinary differential equations using a perturbation technique, and the coupled ordinary differential equations obtained, with a set of corresponding boundary conditions, were solved for velocity, temperature and concentration using the Adomian decomposition method. The effects of the various fluid parameters on velocity, temperature and concentration were presented in tabular and graphical forms. The findings of the study were that: (i) the resultant velocity profiles were enhanced with an increase in the value of the dissipation parameter; (ii) an increase in the rotation parameter sped up the resultant velocity; (iii) the concentration distribution decreased with a rise in chemical reaction and accelerated with an increase in thermal diffusion; (iv) the temperature profiles declined with an increase in the value of the radiation parameter, while they were enhanced with an increase in the value of the viscous dissipation parameter; and (v) the presence of the diffusion-thermo parameter enhanced the temperature distribution and reduced the velocity and concentration profiles. The study concluded that the effects of rotation and viscous dissipation have a considerable influence on the velocity, temperature and concentration of Rivlin-Ericksen flow. It is therefore recommended that the combined effects of rotation, viscous dissipation, radiation, thermal-diffusion and diffusion-thermo be included when modeling Rivlin-Ericksen fluid flow for practical purposes.
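    The Adomian decomposition method used to solve the coupled equations can be illustrated on a toy problem. The sympy sketch below applies it to y' = y^2 with y(0) = 1 (whose exact solution is 1/(1 - x)), generating the Adomian polynomials from their defining formula; the flow equations of the thesis are, of course, far more involved, and this example is only an illustration of the technique.

# Adomian decomposition for the toy problem y' = y^2, y(0) = 1, exact solution 1/(1 - x).
import sympy as sp

x = sp.symbols('x')
lam = sp.symbols('lambda')
N = lambda u: u**2                  # the nonlinear term in y' = N(y)

y = [sp.Integer(1)]                 # y_0 from the initial condition y(0) = 1
n_terms = 6
for n in range(n_terms - 1):
    # Adomian polynomial A_n = (1/n!) d^n/dlam^n N( sum_i y_i lam^i ) evaluated at lam = 0
    partial = sum(y[i] * lam**i for i in range(n + 1))
    A_n = sp.diff(N(partial), lam, n).subs(lam, 0) / sp.factorial(n)
    y.append(sp.integrate(A_n, (x, 0, x)))   # y_{n+1} = integral of A_n from 0 to x

print(sp.expand(sum(y)))            # 1 + x + x**2 + ..., the series of 1/(1 - x)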
  • Item
    Two-Level Time Series Modelling With Autocorrelated Error Terms
    (UNIVERSITY OF ILORIN, 2018-07) AZEEZ, OLASUNKANMI ISIAKA
    Multilevel models, viewed as linear mixed-effects models, have been adopted as a standard method of conducting analyses of repeated-measures data. The mixed model may still violate some of the assumptions, such as independence of the random errors and homogeneity of variance, thereby leading to model misspecification. However, the literature has not adequately addressed these challenges. To this end, this work was motivated by the need to examine the effects of the violation of these assumptions and their impact on both standard and correlated modelling techniques. The objectives of this study were to: (i) investigate the consequences of violating the non-autocorrelation assumption for the parameter estimates in two-level time series modelling techniques; (ii) examine the asymptotic behaviour of two-level time series modelling techniques when the assumption of non-autocorrelated error terms is not met; (iii) determine the robustness of the two-level correlated time series modelling technique to misspecification of the true dependency structure between observations; and (iv) validate the modelling techniques on suitable real-life data sets. Data were simulated by injecting different levels of autocorrelation at varying sample sizes. The data generated had first-order autoregressive errors with specified parameters. The maximum likelihood and Bayes estimation methods were used to estimate the parameters. The sensitivity of the model fit criteria and the robustness of the mixed-effect estimates of the two-level time series models to violations of the non-autocorrelation assumption were examined. The components of the covariance structure for the first-level model were misspecified to determine their effects on parameter estimation. The measures of performance used were mean square error, bias, and variance. A real-life data set was used to validate the modelling techniques. The findings of the study were that: i. violating the non-autocorrelated errors assumption affected the accuracy of the parameter estimates of both the standard and the correlated modelling techniques; the severity of this bias depends on the degree to which the errors are autocorrelated, but adding the autocorrelation parameter improved the fit to the data under the correlated model; ii. although REML exhibited less bias than MLE, the sensitivity of the sample sizes at both levels to a non-autocorrelation violation was not consistent; iii. misspecifying the within-subject covariance structure had a negative effect on the parameter estimates; and iv. with the observed variance-covariance matrices, there were low levels of inferential accuracy of the mixed-effect estimates, and increasing the sample sizes at both levels had no effect on the bias rate. The study concluded that violation of the model assumptions has significant effects on the accuracy of the model parameter estimates regardless of sample sizes, while a correlated model provided a better fit. The study recommended that autocorrelation be tested for and, if found, modelled appropriately. This study therefore provides additional knowledge concerning the performance of two-level time series modelling techniques when the non-autocorrelation assumption is violated.
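    A minimal sketch of the kind of simulation design described above: two-level (subjects by time) data with random intercepts and first-order autoregressive within-subject errors. The model form and parameter values are placeholders for illustration, not those of the study.

# Simulate two-level repeated-measures data y_ij = b0 + u_i + b1 * t_j + e_ij,
# where the within-subject errors follow an AR(1) process e_ij = phi * e_{i,j-1} + w_ij.
import numpy as np

rng = np.random.default_rng(11)

def simulate_two_level(n_subjects=50, n_times=20, b0=2.0, b1=0.3,
                       sd_u=1.0, phi=0.5, sd_w=1.0):
    t = np.arange(n_times)
    u = rng.normal(0, sd_u, n_subjects)                               # level-2 random intercepts
    e = np.zeros((n_subjects, n_times))
    e[:, 0] = rng.normal(0, sd_w / np.sqrt(1 - phi**2), n_subjects)   # stationary start
    for j in range(1, n_times):
        e[:, j] = phi * e[:, j - 1] + rng.normal(0, sd_w, n_subjects)
    y = b0 + u[:, None] + b1 * t[None, :] + e
    return y, t

y, t = simulate_two_level()
print(y.shape)   # (50, 20): fit with and without an AR(1) residual structure to compare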
  • Item
    Bayesian Inference from Discretely Observed Epidemic Model Using Multidimensional Diffusion Approximation Approach
    (UNIVERSITY OF ILORIN, 2018-08) Aliu, Abbas Hassan
    Most discretely observed epidemic data are subject to environmental influence. The dynamics of such data can be well described by discretised versions of Stochastic Differential Equations (SDEs). However, estimating the parameters of such a process proves to be challenging in practice. The underlying difficulty is the general intractability of the transition density, resulting in likelihood functions that are not in closed form. A direct implementation of the Bayesian method of estimation results in convergence problems. Indeed, the literature has not adequately addressed these challenges for multi-dimensional cases. The aim of the study was to find an estimation procedure which would not suffer from these challenges. The objectives of this study were to: (i) derive epidemic models for explaining the behaviour of SDE models; (ii) propose an improved diffusion bridge sampler capable of overcoming the convergence problems; (iii) propose a Bayesian data-augmentation method of parameter estimation which would not suffer from convergence problems; and (iv) compare the performance of the proposed methods with existing methods using simulated and real-life datasets. Exact likelihood functions of SDEs are very rare in practice. A numerical approximation approach through discretisation of the process using the Euler-Maruyama scheme was adopted. Data were simulated by introducing augmented values between every two consecutive time points to allow the Euler scheme to converge to the true continuous-time SDE. Then, Bayesian data-augmentation estimation was performed for such problems via Markov Chain Monte Carlo (MCMC) sampling. Posterior means (PM), posterior credible intervals (PCI), acceptance probabilities, trace plots, autocorrelation function (ACF) plots and density plots were used to assess the performance of the estimators under different numbers of intermediate sub-interval data points. The findings of the study were that: (i) the evolution of SDEs (the diffusion process) was found to be the most suitable trajectory mirroring the exact dynamic epidemic models under environmental influence; (ii) amongst the diffusion samplers considered, the proposed diffusion bridge sampler was found to be better as the number of augmented values tends to infinity; (iii) the proposed method of parameter estimation was found not to suffer from convergence problems; and (iv) results of the assessment on an epidemic outbreak in the simulation study and on two different real-life datasets revealed that the proposed method of parameter estimation was better than the existing methods. The study concluded that modelling from discretely observed Stochastic Differential Equations (SDEs) using a Bayesian data-augmentation approach provides an alternative method for obtaining parameter estimates of any natural dynamic phenomenon that experiences random variation. It is therefore recommended that epidemiologists be encouraged to apply diffusion approximation approaches whenever epidemic models are subject to environmental influence.
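    The Euler-Maruyama discretisation underlying the data-augmentation scheme can be sketched for the two-dimensional diffusion approximation of the SIR epidemic, with the drift and diffusion matrix as usually derived from the Markov jump process; the rates, population size and step size below are illustrative assumptions, not values from the thesis.

# Euler-Maruyama simulation of the SIR diffusion approximation X = (S, I):
# X_{t+dt} = X_t + drift(X_t) dt + sqrt(dt) * chol(cov(X_t)) * Z, Z standard normal.
import numpy as np

rng = np.random.default_rng(5)

def simulate_sir_sde(S0=990.0, I0=10.0, Npop=1000.0, beta=0.5, gamma=0.2,
                     T=60.0, dt=0.1):
    steps = int(T / dt)
    path = np.zeros((steps + 1, 2))
    path[0] = [S0, I0]
    for k in range(steps):
        S, I = path[k]
        inf_rate = beta * S * I / Npop                  # infection rate
        rem_rate = gamma * I                            # removal rate
        drift = np.array([-inf_rate, inf_rate - rem_rate])
        cov = np.array([[inf_rate, -inf_rate],
                        [-inf_rate, inf_rate + rem_rate]])
        noise = np.linalg.cholesky(cov + 1e-10 * np.eye(2)) @ rng.standard_normal(2)
        path[k + 1] = np.maximum(path[k] + drift * dt + np.sqrt(dt) * noise, 0.0)
    return path

path = simulate_sir_sde()
print(path[::100])   # (S, I) every 10 time units; in the data-augmentation scheme, extra points
                     # are introduced between observations to refine this Euler approximation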