Browsing by Author "Salihu, S.A."
Now showing 1 - 4 of 4
Item: HYBRID SFLA-TABU SEARCH ALGORITHM FOR OPTIMAL PROJECT SCHEDULING AND STAFFING
12th AICTTRA Conference Proceedings, Ile-Ife, 2019
Authors: Mojeed, H.A.; Jimoh, R.G.; Sadiku, P.O.; Salihu, S.A.
Planning a large-scale software project involves two objectives: optimally ordering a set of activities and allocating staff to those activities. Currently adopted methods have difficulty reaching good solutions when the two objectives are combined. This study proposes a hybrid SFLA-tabu search algorithm to solve the project scheduling and staffing problem with the two objectives combined. The hybrid algorithm retains the framework of the shuffled frog leaping algorithm (SFLA) but employs the neighbourhood structure of tabu search, and its avoidance of already explored areas of the solution space, to move towards optimal solutions within the local memetic evolution. The algorithm was applied to three randomly generated problem instances representing small, medium and large problems. Results showed that the proposed algorithm produced good solutions, with average fitness values of 0.44, 0.56 and 0.15 on the small, medium and large problems respectively. The hybrid algorithm outperformed the baseline algorithms on 100% of the problem instances, and the findings demonstrate the scalability of the proposed approach across software projects of various sizes.

Item: INVESTIGATING THE EFFECT OF DATA NORMALIZATION ON PREDICTIVE MODELS
Faculty of Communication and Information Sciences, 2017
Authors: Ajiboye, A.R.; Ajiboye, I.K.; Salihu, S.A.; Tomori, R.A.
Creating a predictive model with a supervised learning approach involves building a model of the target variable as a function of the explanatory variables. Before a model is created, the data must be put into a suitable format. Studies have shown that normalization of data is crucial to descriptive mining, as it improves the accuracy and efficiency of mining algorithms. In the case of prediction, however, models are not always created from normalized data. This paper presents experimental results on the effect of normalizing the input variables of models created for prediction. Experiments create predictive models from two data sets of equal size using neural network techniques; the trained network models, built with the same architecture and configuration, are then simulated on a set of untrained data. Evaluation and comparison of the models created from the two data formats reveal that the model created from normalized data is more accurate, with a consistent decrease in error of 0.003. That model also converges much earlier than the model created from data that did not undergo any form of normalization.

Item: Performance Evaluation of Manhattan and Euclidean Distance Measures for Clustering-Based Automatic Text Summarization
FUOYE Journal of Engineering and Technology, 2019
Authors: Salihu, S.A.; Onyekwere, I.P.; Mabayoje, M.A.; Mojeed, H.A.
In the past few years there has been an explosion in the amount of text data from a variety of sources. This volume of text is a valuable source of information and knowledge that must be effectively summarized to be useful. In this paper, automatic text summarization with k-means clustering is presented, employing two different distance measures (Euclidean and Manhattan). A dataset extracted from African prose was preprocessed with stopword removal and tokenization. The preprocessed documents were converted into vector representations using the tf-idf technique, and k-means clustering was applied with the Euclidean and Manhattan distance measures to generate summaries. Several distance measures for k-means have been used in previous work; however, there is a dearth of work evaluating their performance for text summarization. The experimental analysis was performed in the Waikato Environment for Knowledge Analysis (WEKA). Using compression ratio as the performance metric, the Euclidean variant produced an extractive summary comprising 72% of the sentences, drawn from three clusters, while the Manhattan variant produced an extractive summary comprising 94% of the document, all from a single cluster.

Item: USING T-WAY INTERACTION TECHNIQUES FOR THE REDUCTION IN THE NUMBER OF TEST CASES
Department of Computer Science, University of Ilorin, 2017
Authors: Ajiboye, A.R.; Mejabi, O.V.; Salihu, S.A.
A test case is a set of input data designed to discover a particular type of error or defect in a software system. To develop software that performs as expected, extensive testing should be carried out to ensure reliability. Ideally, software testers would test every possible permutation of the software, but in practice, owing to software complexity, exhaustive testing is usually not feasible. This paper presents the use of t-way interaction techniques to reduce the number of test cases in software testing. The software on which the approach is implemented has parameters with equal numbers of values, and their interaction is based on pairwise combination. The technique minimizes the number of test cases while still testing all pairs of variables. The results show a significant reduction in the number of test cases, from 8 to 6, a 25% reduction, so the overall time required to test the software is reduced. The final reduced test cases are also free of redundancy, and the technique shows a high degree of parameter interaction.
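The first item embeds tabu-search moves inside SFLA's local memetic evolution. A textbook tabu-search step, not the paper's hybrid algorithm, can be sketched as follows; the toy fitness function, neighbourhood, and tabu-list length are illustrative assumptions:

```python
from collections import deque

def tabu_step(current, fitness, neighbours, tabu):
    """One tabu-search move: take the best non-tabu neighbour and mark it tabu."""
    candidates = [n for n in neighbours(current) if n not in tabu]
    if not candidates:          # every neighbour is tabu: stay put this iteration
        return current
    best = max(candidates, key=fitness)
    tabu.append(best)           # a deque with maxlen acts as a fixed-length tabu list
    return best

# Toy search space: walk the integers towards an optimum at 10.
fitness = lambda x: -abs(x - 10)
neighbours = lambda x: [x - 1, x + 1]

tabu = deque(maxlen=5)
state = 0
for _ in range(10):
    state = tabu_step(state, fitness, neighbours, tabu)
```

The tabu list is what prevents the search from revisiting already explored areas of the solution space, which is the property the hybrid borrows.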
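The normalization study compares models trained on raw versus normalized inputs. A minimal sketch of min-max normalization, one common way to put input variables into a suitable format before training (the function name and sample values are illustrative, not from the paper):

```python
def min_max_normalize(values, lo=0.0, hi=1.0):
    """Rescale a list of numbers linearly into the range [lo, hi]."""
    vmin, vmax = min(values), max(values)
    span = vmax - vmin
    if span == 0:               # constant column: map everything to the lower bound
        return [lo for _ in values]
    return [lo + (v - vmin) * (hi - lo) / span for v in values]

raw = [120.0, 45.0, 300.0, 45.0]    # e.g. one explanatory variable's column
scaled = min_max_normalize(raw)      # all values now lie in [0, 1]
```

Bringing every input variable onto a comparable scale like this is what typically speeds up neural-network convergence, consistent with the earlier convergence the paper reports.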
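The summarization study swaps only the distance measure inside k-means. The two measures differ in how per-coordinate differences are aggregated, which is enough to change cluster assignments; a minimal sketch with illustrative tf-idf-like vectors (not the paper's data):

```python
import math

def euclidean(a, b):
    """Straight-line distance: square root of summed squared differences."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    """City-block distance: sum of absolute differences."""
    return sum(abs(x - y) for x, y in zip(a, b))

# Two sentence vectors over a three-term vocabulary (illustrative values).
v1 = [0.0, 0.5, 0.5]
v2 = [0.5, 0.5, 0.0]

d_euc = euclidean(v1, v2)   # sqrt(0.5) ~ 0.707
d_man = manhattan(v1, v2)   # 1.0
```

Manhattan distance is always at least as large as Euclidean distance for the same pair of vectors, so sentences can fall into different clusters under the two measures, which is one plausible source of the differing summary sizes reported above.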
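The t-way item rests on the observation that a suite much smaller than the exhaustive one can still cover every pair of parameter values. A minimal sketch of that pairwise-coverage check for three binary parameters; the suites shown are illustrative (for this tiny configuration four tests suffice, whereas the paper's own system yields its reported 8-to-6 reduction):

```python
from itertools import combinations, product

def covers_all_pairs(suite, n_params, values):
    """True if every value pair for every parameter pair occurs in some test."""
    for i, j in combinations(range(n_params), 2):
        needed = set(product(values, values))
        seen = {(test[i], test[j]) for test in suite}
        if not needed <= seen:
            return False
    return True

values = (0, 1)                                  # two values per parameter
exhaustive = list(product(values, repeat=3))     # all 8 permutations
pairwise = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]  # 4 tests, all pairs covered
```

Dropping any test from `pairwise` breaks coverage, so the reduced suite is redundancy-free in the same sense the paper describes.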