Browsing by Author "Bajeh, Amos Orenyi"
Now showing 1 - 15 of 15
Item: A 2-Dimensional Gabor-Filters for Face Recognition System: A Survey (Faculty of Computers and Applied Computer Science, "Tibiscus" University of Timişoara, Romania, 2017). Aro, Taye Oladele; Oluwade, Bamidele A.; Abikoye, Oluwakemi Christiana; Bajeh, Amos Orenyi.
An efficient recognition algorithm for the human face is known to depend on a good facial feature representation. A two-dimensional Gabor filter represents a group of wavelets that optimally capture frequency information and local orientation from a digital image. Gabor filters have been widely employed and are considered among the best-performing techniques for feature extraction in face recognition, owing to their invariance to local distortion caused by changes in expression, lighting and pose. This paper reviews 2-dimensional Gabor-based facial recognition techniques. The huge feature-dimensionality problem associated with Gabor features is stated, and several techniques to mitigate it are suggested.

Item: Email Data Security Using Cryptography (Faculty of Communication and Information Sciences, University of Ilorin, Ilorin, Nigeria, 2016). Bajeh, Amos Orenyi; Ayeni, A. O.; Balogun, Abdullateef Oluwagbemiga; Mabayoje, Modinat Abolore.
Email has become one of the prominent media of message transmission between web users. Thus, email security measures need to evolve over time in order to mitigate security threats, which are also evolving and becoming more sophisticated. This paper presents a study that proposed, implemented and evaluated the performance of an email security approach that uses the Rivest-Shamir-Adleman (RSA) algorithm as an additional layer of security. The approach involves an application, on a separate platform, that converts messages into cipher text. The cipher text is transmitted as the original message on the email platform.
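The RSA layer this item describes can be illustrated with a textbook sketch. The tiny primes and the message below are illustrative only and deliberately insecure; they are not the study's actual parameters.

```python
# Textbook RSA with deliberately tiny primes (insecure; illustration only).
p, q = 61, 53
n = p * q                  # public modulus
phi = (p - 1) * (q - 1)    # Euler's totient of n
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent: modular inverse of e mod phi

def encrypt(message: str) -> list[int]:
    """Encrypt each character separately: c = m^e mod n (leaks patterns; toy only)."""
    return [pow(ord(ch), e, n) for ch in message]

def decrypt(cipher: list[int]) -> str:
    """Recover each character: m = c^d mod n."""
    return "".join(chr(pow(c, d, n)) for c in cipher)

cipher = encrypt("meet at noon")
assert decrypt(cipher) == "meet at noon"
```

As in the paper's approach, only the cipher-text list would travel over email; the key material would be shared over a separate channel.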
Also, the cryptographic keys are transmitted between communicating users via other media, such as mobile phones. The proposed approach is evaluated by measuring the accuracy of the messages transmitted between users. It showed an average accuracy of 98% over all the scenarios examined.

Item: Email Data Security using Cryptography (International Journal of Information Processing and Communication (IJIPC), 2016). Bajeh, Amos Orenyi; Ayeni, A.O.; Balogun, A.O.; Mabayoje, M.A.
Email has become one of the prominent media of message transmission between web users. Thus, email security measures need to evolve over time in order to mitigate security threats, which are also evolving and becoming more sophisticated. This paper presents a study that proposed, implemented and evaluated the performance of an email security approach that uses the Rivest-Shamir-Adleman (RSA) algorithm as an additional layer of security. The approach involves an application, on a separate platform, that converts messages into cipher text. The cipher text is transmitted as the original message on the email platform. Also, the cryptographic keys are transmitted between communicating users via other media, such as mobile phones. The proposed approach is evaluated by measuring the accuracy of the messages transmitted between users. It showed an average accuracy of 98% over all the scenarios examined.

Item: Heterogeneous ensemble with combined dimensionality reduction for social spam detection (International Association of Online Engineering, 2021). Oladepo, Abdulfatai Ganiyu; Bajeh, Amos Orenyi; Balogun, Abdullateef Oluwagbemiga; Mojeed, Hammed Adeleye; Salman, Abdulsalam Abiodun; Bako, Abdullateef Iyanda.
Spamming is one of the challenging problems in social networks. It involves spreading malicious or scam content on a network, which often leads to a huge loss in the value of real-time social network services, compromises user and system reputation, and jeopardizes users' trust in the system.
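The Average of Probabilities (AOP) fusion on which this item's heterogeneous ensemble rests can be sketched in a few lines; the probability inputs below are hypothetical, not outputs of the study's NB, KNN, LR or RIPPER classifiers.

```python
# Average-of-probabilities (AOP) fusion: each base classifier emits a class
# probability for one message; the ensemble averages them and thresholds.
def aop_predict(probas: list[float], threshold: float = 0.5) -> str:
    """probas: P(spam) for one message, one value per base classifier."""
    avg = sum(probas) / len(probas)
    return "spam" if avg >= threshold else "ham"

# Three hypothetical base classifiers score the same message:
print(aop_predict([0.9, 0.7, 0.4]))  # average 0.667 -> "spam"
```

Averaging calibrated probabilities, rather than hard votes, lets a confident classifier outweigh two lukewarm ones.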
Existing methods of spam detection still suffer from misclassification caused by redundant and irrelevant features in the dataset, a result of high dimensionality. This study presents a novel framework based on a heterogeneous ensemble method and a hybrid dimensionality reduction technique for spam detection in micro-blogging social networks. A hybrid of Information Gain (IG) and Principal Component Analysis (PCA) was implemented for dimensionality reduction and the selection of important features, and a heterogeneous ensemble consisting of Naïve Bayes (NB), K-Nearest Neighbor (KNN), Logistic Regression (LR) and Repeated Incremental Pruning to Produce Error Reduction (RIPPER) classifiers, combined by the Average of Probabilities (AOP), was used for spam detection. To empirically investigate its performance, the proposed framework was applied to the MPI_SWS and SAC'13 Tip spam datasets, and the developed models were evaluated based on accuracy, precision, recall, F-measure, and area under the curve (AUC). From the experimental results, the proposed framework (Ensemble + IG + PCA) outperformed the other methods examined on the studied spam datasets. Specifically, the proposed framework had an average accuracy of 87.5%, an average precision of 0.877, an average recall of 0.845, an average F-measure of 0.872 and an average AUC of 0.943. The proposed framework also performed better than some existing approaches. Consequently, this study has shown that addressing high dimensionality in spam datasets, in this case with a hybrid of IG and PCA together with a heterogeneous ensemble method, can produce a more effective model for detecting spam content.

Item: Hybridization of El-Gamal and Blow-Fish Algorithm for Data Security (Faculty of Communication and Information Sciences, University of Ilorin, Ilorin, Nigeria, 2017). Bajeh, Amos Orenyi; Olatunde, Yusuf Olanrewaju; Akintola, Abimbola Ganiyat; Balogun, Abdullateef Oluwagbemiga; Wasiu, M. O.; Sulaiman, A. T.
The internet connects all parts of the world together and thus makes it very easy and fast to communicate from different locations. Reports by internet users of security issues, such as the hacking of e-mail accounts, SQL injection and unauthorized access to transaction details, make users migrate from one platform to another. The increasing ability of hackers to overcome existing security measures has raised the need for more and better security techniques. Therefore, this study demonstrates a hybridization of the Blowfish and El-Gamal algorithms to improve data security and the performance of the El-Gamal algorithm. In the hybrid system, Blowfish is used to encrypt a message containing private data using a secret key, after which the secret key is encrypted by the El-Gamal algorithm using mathematically related public and private keys. The cipher text produced contains the mixture of the private data and the secret key. A simulation program is developed in Java to compare the hybrid system with the Blowfish and El-Gamal algorithms. The results showed that the developed hybrid system is more secure and faster than the El-Gamal algorithm.

Item: Hybridization of El-Gamal and Blowfish Algorithm for Data Security (International Journal of Information Processing and Communication (IJIPC), 2017). Bajeh, Amos Orenyi; Olatunde, Y.O.; Akintola, A.G.; Balogun, A.O.; Wasiu, M.O.
The internet connects all parts of the world together and thus makes it very easy and fast to communicate from different locations. Reports by internet users of security issues, such as the hacking of e-mail accounts, SQL injection and unauthorized access to transactional details, make users migrate from one platform to another. The increasing ability of hackers to overcome existing security measures has raised the need for more and better security techniques.
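The hybrid scheme these two items describe, a symmetric cipher for the message plus an asymmetric cipher for the symmetric key, can be sketched as follows. A toy XOR stream stands in for Blowfish, and the textbook El-Gamal parameters are deliberately tiny and insecure, so nothing here reflects the study's Java implementation.

```python
import random

# Hybrid-encryption sketch: a toy XOR "cipher" (stand-in for Blowfish)
# encrypts the message; textbook El-Gamal encrypts the symmetric key.
P, G = 467, 2                      # small public prime and base (insecure; toy)
x = 127                            # receiver's private key
y = pow(G, x, P)                   # receiver's public key

def xor_cipher(data: bytes, key: int) -> bytes:
    """Toy symmetric cipher: XOR every byte with the low byte of the key."""
    return bytes(b ^ (key & 0xFF) for b in data)

def elgamal_encrypt(m: int) -> tuple[int, int]:
    k = random.randrange(2, P - 1)                  # fresh ephemeral randomness
    return pow(G, k, P), (m * pow(y, k, P)) % P

def elgamal_decrypt(c1: int, c2: int) -> int:
    return (c2 * pow(c1, P - 1 - x, P)) % P         # divide out the shared secret

secret_key = 200                                    # symmetric key, must be < P
ciphertext = xor_cipher(b"wire 500 to Bob", secret_key)
c1, c2 = elgamal_encrypt(secret_key)                # only the key travels under El-Gamal

recovered_key = elgamal_decrypt(c1, c2)
assert xor_cipher(ciphertext, recovered_key) == b"wire 500 to Bob"
```

The design point the papers exploit is that the slow asymmetric cipher touches only the short key, while the fast symmetric cipher handles the bulk data.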
Therefore, this study demonstrates a hybridization of the Blowfish and El-Gamal algorithms to improve data security and the performance of the El-Gamal algorithm. In the hybrid system, Blowfish is used to encrypt a message containing private data using a secret key, after which the secret key is encrypted by the El-Gamal algorithm using mathematically related public and private keys. The cipher text produced contains the mixture of the private data and the secret key. A simulation program is developed in Java to compare the hybrid system with the Blowfish and El-Gamal algorithms. The results show that the developed hybrid system is more secure and faster than the El-Gamal algorithm.

Item: Improved Performance of Intrusion Detection System using Feature Reduction and J48 Decision Tree Classification (Department of Computer Science, University of Ilorin, Ilorin, Nigeria, 2016). Abikoye, Oluwakemi Christiana; Balogun, A. Oluwagbemiga; Olarewaju, A. Kehinde; Bajeh, Amos Orenyi.
Given the importance of accuracy in the performance of an intrusion detection system, there is, beyond the algorithms used, an increasing need for work aimed at improved accuracy and reduced detection time. This paper investigates the use of a filtered dataset on the performance of the J48 Decision Tree classifier in classifying a connection as either normal or an attack. The reduced dataset is obtained by using the Gain Ratio attribute-evaluation technique (entropy) to perform feature selection (removal of redundant attributes) and feeding the filtered dataset into a J48 Decision Tree algorithm for classification. A 10-fold cross-validation technique was used to evaluate the performance of the J48 Decision Tree classifier on the KDD Cup 1999 dataset, simulated in the WEKA tool.
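The Gain Ratio feature-evaluation step this item relies on can be sketched for a single candidate attribute; the toy records below are made up, not KDD Cup 1999 connections.

```python
import math
from collections import Counter

# Gain-ratio sketch for one candidate attribute, as used to rank and
# drop redundant features before handing the data to J48.
def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(rows, attr, label):
    """rows: list of dicts; attr: feature column; label: class column."""
    labels = [r[label] for r in rows]
    base = entropy(labels)
    n = len(rows)
    info, split = 0.0, 0.0
    for v in set(r[attr] for r in rows):
        subset = [r[label] for r in rows if r[attr] == v]
        w = len(subset) / n
        info += w * entropy(subset)      # expected entropy after the split
        split -= w * math.log2(w)        # intrinsic value of the split
    return (base - info) / split if split else 0.0

rows = [
    {"proto": "tcp", "cls": "attack"}, {"proto": "tcp", "cls": "attack"},
    {"proto": "udp", "cls": "normal"}, {"proto": "udp", "cls": "normal"},
]
print(gain_ratio(rows, "proto", "cls"))  # perfect split -> 1.0
```

Dividing the information gain by the split information penalizes attributes with many values, which plain information gain would over-reward.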
The results showed that the J48 decision tree algorithm performed better, in terms of accuracy and false-positive rate, on the reduced dataset than on the full dataset (Probing: 97.8% full vs. 99.5% reduced; U2R: 75% full vs. 76.9% reduced; R2L: 98.0% full vs. 98.3% reduced).

Item: IMPROVED PERFORMANCE OF INTRUSION DETECTION SYSTEM USING FEATURE REDUCTION AND J48 DECISION TREE CLASSIFICATION ALGORITHM (Department of Computer Science, Faculty of Communication and Information Sciences, University of Ilorin, Ilorin, Nigeria, 2016). Abikoye, Oluwakemi Christianah; Balogun, Abdullateef Oluwagbemiga; Olarewaju, A. K.; Bajeh, Amos Orenyi.
Given the importance of accuracy in the performance of an intrusion detection system, there is, beyond the algorithms used, an increasing need for work aimed at improved accuracy and reduced detection time. This paper investigates the use of a filtered dataset on the performance of the J48 Decision Tree classifier in classifying a connection as either normal or an attack. The reduced dataset is obtained by using the Gain Ratio attribute-evaluation technique (entropy) to perform feature selection (removal of redundant attributes) and feeding the filtered dataset into a J48 Decision Tree algorithm for classification. A 10-fold cross-validation technique was used to evaluate the performance of the J48 Decision Tree classifier on the KDD Cup 1999 dataset, simulated in the WEKA tool.
The results showed that the J48 decision tree algorithm performed better, in terms of accuracy and false-positive rate, on the reduced dataset than on the full dataset (Probing: 97.8% full vs. 99.5% reduced; U2R: 75% full vs. 76.9% reduced; R2L: 98.0% full vs. 98.3% reduced).

Item: MEMETIC APPROACH FOR MULTI-OBJECTIVE OVERTIME PLANNING IN SOFTWARE ENGINEERING PROJECTS (School of Engineering, Taylor's University, Malaysia, 2019-12). Mojeed, Hameed Adeleye; Bajeh, Amos Orenyi; Balogun, Abdullateef Oluwagbemiga; Adeleke, Hammid.
Software projects often suffer from unplanned overtime due to the uncertainty and risk incurred by changing requirements and attempts to meet the time-to-market of the software product. This causes stress to developers and can result in poor quality. This paper presents a memetic algorithmic approach to solving the overtime-planning problem in software development projects. The problem is formulated as a three-objective optimization problem aimed at minimizing overtime hours, project makespan and cost. The formulation captures the dynamics of error generation and propagation due to overtime using simulation. A Multi-Objective Shuffled Frog-Leaping Algorithm (MOSFLA) specifically designed for overtime planning is applied to solve the formulated problem. Empirical evaluation experiments on six real-life software project datasets were carried out using three widely used multi-objective quality indicators. Results showed that MOSFLA significantly outperformed the existing traditional overtime-management strategies in software engineering projects on all quality indicators, with values of 0.0118, 0.3893 and 0.0102 for Contribution (IC), Hypervolume (IHV) and Generational Distance (IGD) respectively. The proposed approach also produced significantly better IHV and IGD results than the state-of-the-art approach (NSGA-IIV) in 100% of the project instances.
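The three-objective formulation in this item (minimizing overtime hours, makespan and cost) rests on Pareto dominance, which can be sketched as follows; the candidate plans below are hypothetical tuples, not solutions from the study.

```python
# Pareto-dominance sketch for three minimisation objectives:
# (overtime hours, makespan in days, cost). All values are made up.
def dominates(a, b):
    """a dominates b if a is no worse on every objective and better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(plans):
    """Keep only the plans that no other plan dominates."""
    return [p for p in plans if not any(dominates(q, p) for q in plans if q != p)]

plans = [(40, 120, 9000), (35, 130, 8800), (50, 125, 9500), (35, 120, 8800)]
print(pareto_front(plans))  # -> [(35, 120, 8800)]
```

Algorithms such as MOSFLA evolve a whole population toward this non-dominated front rather than toward a single best plan.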
However, the approach could only outperform NSGA-IIV in approximately 67% of project instances with respect to IC.

Item: Optimized Gabor Features For Facial Recognition System (Editura Universitatii din Pitesti, 2018). Aro, Taye Oladele; Abikoye, Oluwakemi Christiana; Bajeh, Amos Orenyi.
Feature extraction is a significant process in pattern recognition, computer vision and image processing. Among several feature extraction techniques, such as Fisher Linear Discriminant Analysis (FLDA), Principal Component Analysis (PCA), Elastic Bunch Graph Matching (EBGM) and Local Binary Patterns (LBP), Gabor filters possess the ability to obtain multi-orientation features from a facial image at several scales, with the derived information being local in nature. Their optimal functionality in facial recognition is linked to their biological importance (similarity to the receptive fields of simple cells in the primary visual cortex) and computational properties (optimality for calculating local spatial frequencies). Despite all these outstanding properties, the technique suffers from high feature dimensionality. This paper addresses the problem of high feature dimensionality by applying the Ant Colony Optimization meta-heuristic algorithm to select relevant and optimal features. Two face image databases, the Olivetti Research Laboratory (ORL) database and a Locally Acquired Face Image database (LAFI), are used to evaluate the performance of the proposed facial recognition model. The final experimental results showed improved performance.

Item: PERFORMANCE EVALUATION OF SELECT DATA MINING SOFTWARE TOOLS FOR DATA CLUSTERING (Federal University Wukari, Taraba State, Nigeria, 2018-09-10). Ameen, Ahmed Oloduowo; Bajeh, Amos Orenyi; Adesiji, Boluwatife Aderinsola; Balogun, Abdullateef Oluwagbemiga; Mabayoje, Modinat Abolore.
Data mining is used to discover knowledge from information systems. Clustering is one of the techniques used for data mining.
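The sum-of-squared-errors (SSE) measure used in this item to compare cluster quality across tools can be computed as in this sketch; the points and centroids are made up for illustration.

```python
# Sum-of-squared-errors (SSE) sketch: total squared distance of each point
# from its cluster centroid. Lower SSE means tighter clusters.
def sse(clusters):
    """clusters: list of (centroid, points) pairs, points as coordinate tuples."""
    total = 0.0
    for centroid, points in clusters:
        for p in points:
            total += sum((pi - ci) ** 2 for pi, ci in zip(p, centroid))
    return total

clusters = [
    ((1.0, 1.0), [(1.0, 1.0), (1.0, 2.0)]),   # squared distances: 0 + 1
    ((5.0, 5.0), [(4.0, 5.0)]),               # squared distance: 1
]
print(sse(clusters))  # -> 2.0
```

This is the quantity K-Means implementations minimise, which is why it serves as a tool-neutral quality yardstick in the comparison.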
Clustering can be defined as a technique of grouping unlabelled data objects such that objects within a cluster are similar to one another and dissimilar to objects in other clusters. Data mining tools are software used to efficiently analyse, summarize and extract useful information from different perspectives of the data. This paper presents a comparative analysis of four open-source data mining software tools (WEKA, KNIME, Tanagra and Orange) in the context of data clustering, specifically the K-Means and hierarchical clustering methods. The results of the performance analysis, based on execution time and cluster quality, showed that the WEKA tool outperforms the others, with the lowest SSE of 199.7308 and an average execution time of 1.535 seconds; KNIME has an SSE of 222.217 but an average execution time of 7.13 seconds; Tanagra has an SSE of 269.3902 and an average execution time of 2.01 seconds; and Orange has the poorest performance, with an SSE of 388.78.

Item: Software Defect Prediction Using Ensemble Learning: An ANP Based Evaluation Method (FUOYE Journal of Engineering and Technology, Faculty of Engineering, Federal University Oye-Ekiti, Ekiti State, Nigeria, 2018-09). Balogun, Abdullateef Oluwagbemiga; Bajeh, Amos Orenyi; Orie, Victor Agwu; Yusuf-Asaju, Wuraola Ayisat.
Software defect prediction (SDP) is the process of predicting defects in software modules; it identifies the modules that are defective and require extensive testing. Classification algorithms that help predict software defects play a major role in the software engineering process. Some studies have shown that the use of ensembles is often more accurate than using single classifiers. However, results vary across studies, which have posited that the efficiency of learning algorithms might differ under different performance measures. This is because most studies on SDP consider the accuracy of the model or classifier above other performance metrics.
This paper evaluated the performance of single classifiers (SMO, MLP, kNN and Decision Tree) and ensembles (Bagging, Boosting, Stacking and Voting) in SDP, considering the major performance metrics, using the Analytic Network Process (ANP) multi-criteria decision method. The experiment was based on 11 performance metrics over 11 software defect datasets. Boosted SMO, Voting and Stacking ensemble methods ranked highest, with priority levels of 0.0493, 0.0493 and 0.0445 respectively. Decision Tree ranked highest among the single classifiers with 0.0410. These results clearly show that ensemble methods can give better classification results in SDP, with the Boosting method giving the best result. In essence, before deciding which model or classifier is better for software defect prediction, all performance metrics should be considered.

Item: SOFTWARE DEFECT PREDICTION: ANALYSIS OF CLASS IMBALANCE AND PERFORMANCE STABILITY (School of Engineering, Taylor's University, 2019-12). Balogun, Abdullateef Oluwagbemiga; Basri, Shuib; Said, Jadid Abdulkadir; Adeyemo, Victor Ebenezer; Imam, Abdullahi Abubakar; Bajeh, Amos Orenyi.
The performance of prediction models in software defect prediction (SDP) depends on the quality of the datasets used to train them. Class imbalance is one of the data-quality problems that affect prediction models. It has drawn the attention of researchers, and many approaches have been developed to address it. This study presents an extensive empirical evaluation of the performance stability of prediction models in SDP. Ten software defect datasets from the NASA and PROMISE repositories, with varying imbalance ratio (IR) values, were used as the original datasets. New datasets were generated from the original datasets using undersampling (Random Under-Sampling: RUS) and oversampling (Synthetic Minority Oversampling Technique: SMOTE) methods with different IR values.
The sampling techniques incremented the minority class (SMOTE) or decremented the majority class (RUS) in equal proportion (100%) until each dataset was balanced. IR is the ratio of defective to non-defective instances in a dataset. Each newly generated dataset, with different IR values based on the different sampling techniques, was randomized before the prediction models were applied. Nine standard prediction models were used on the newly generated datasets. The performance of the prediction models was measured using the Area Under the Curve (AUC), and the Coefficient of Variation (CV) was used to determine performance stability. Firstly, the experimental results showed that class imbalance had a negative effect on the performance of prediction models, and that the oversampling method (SMOTE) enhanced their performance. Secondly, oversampling is a better way of balancing datasets than undersampling, as the latter performed poorly owing to the random deletion of useful instances from the datasets. Finally, among the prediction models used in this study, Logistic Regression (LR) (RUS: 30.05; SMOTE: 33.51), Naïve Bayes (NB) (RUS: 34.18; SMOTE: 33.05) and Random Forest (RF) (RUS: 29.24; SMOTE: 64.25), with their respective CV values, appeared to be the more stable prediction models, and they work well with imbalanced datasets.

Item: SOFTWARE DEFECT PREDICTION: EFFECT OF FEATURE SELECTION AND ENSEMBLE METHODS (Federal University Wukari, Taraba State, Nigeria, 2018-09-10). Mabayoje, Modinat Abolore; Balogun, Abdullateef Oluwagbemiga; Bajeh, Amos Orenyi; Musa, Badamasi Abubakar.
Software defect prediction is the process of locating defective modules in software. It facilitates testing efficiency and, consequently, software quality, and enables the timely identification of fault-prone modules. The use of single classifiers and ensembles for predicting defects in software has been met with inconsistent results.
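The random undersampling (RUS) step from the class-imbalance study above can be sketched as follows; the instances and labels below are made up, not NASA or PROMISE data.

```python
import random

# Random undersampling (RUS) sketch: discard majority-class instances at
# random until the class counts match. The cost, as the study notes, is
# that potentially useful majority instances are deleted.
def random_undersample(data, labels, seed=0):
    rng = random.Random(seed)                       # seeded for reproducibility
    minority = min(set(labels), key=labels.count)
    keep_min = [i for i, l in enumerate(labels) if l == minority]
    keep_maj = [i for i, l in enumerate(labels) if l != minority]
    keep = keep_min + rng.sample(keep_maj, len(keep_min))   # balance the classes
    return [data[i] for i in keep], [labels[i] for i in keep]

X = [[0.1], [0.2], [0.3], [0.4], [0.5], [0.6]]
y = ["clean", "clean", "clean", "clean", "defect", "defect"]
Xb, yb = random_undersample(X, y)
print(yb.count("clean"), yb.count("defect"))  # -> 2 2
```

SMOTE takes the opposite route, synthesising new minority instances instead of deleting majority ones, which is why the study finds it loses less information.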
Previous analyses suggest that ensembles are often more accurate, are less affected by noise in datasets, and achieve lower average error rates than any of their constituent classifiers. However, inconsistencies exist across these experiments, and the performance of learning algorithms may vary with different performance measures and under different circumstances. Therefore, more research is needed to evaluate the performance of ensemble algorithms in software defect prediction. Adding feature selection yields datasets with fewer features and improves classifier and ensemble performance on those datasets. The goal of this paper is to assess the efficiency of ensemble methods in software defect prediction using feature selection. This study compares the performance of four ensemble algorithms using 11 different performance metrics over 11 software defect datasets from the NASA MDP repository. The results indicate that feature selection and the use of ensemble methods can improve the classification results of software defect prediction. Bagged ensemble models have the best results; in addition, Voting and Stacking also performed better than the individual base classifiers. Among the single classifiers, SMO performs best, outperforming Decision Tree (J48), MLP and KNN both with and without feature selection. Thus, it can be concluded that feature selection can help improve the accuracy of both individual classifiers and ensemble methods by removing noisy and inconsistent features from the datasets.

Item: Solving the Next Release Problem using a Hybrid Metaheuristic (Computers and Applied Computer Science Faculty, "Tibiscus" University of Timişoara, Romania, 2016). Balogun, Abdullateef Oluwagbemiga; Mabayoje, Modinat Abolore; Makinwa, Sayo Michael; Bajeh, Amos Orenyi.
The Next Release Problem (NRP) is characterized by the need to determine the features that are to be included in a particular software system to make up the next release.
These features are to be selected such that users' demands and needs are satisfied as much as possible given limited resources, by ensuring that the available resources are used to develop the most important features first. This work applies a hybrid of Variable Neighbourhood Search (VNS) and Tabu Search (TS) to solve the bi-objective NRP, using a cost-value model for requirements. Experiments showed that the hybrid metaheuristic produces a Pareto-optimal set with a controllable, dynamic number of options, whose score and cost-value ranges can be controlled via parameters that can be modified without a significant effect on execution time.
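The cost-value trade-off behind the NRP can be illustrated with a simple greedy baseline. Note that this is not the VNS and Tabu Search hybrid the paper applies, and the feature names, values, costs and budget below are made up.

```python
# Cost-value NRP sketch: greedily take the features with the best
# value-to-cost ratio until the release budget runs out. A baseline
# heuristic only; metaheuristics explore far beyond this single solution.
def next_release(features, budget):
    """features: dict name -> (value, cost); returns chosen feature names."""
    ranked = sorted(features, key=lambda f: features[f][0] / features[f][1],
                    reverse=True)
    chosen, spent = [], 0
    for name in ranked:
        value, cost = features[name]
        if spent + cost <= budget:       # take the feature if it still fits
            chosen.append(name)
            spent += cost
    return chosen

features = {"login": (9, 3), "export": (4, 4), "search": (8, 2), "themes": (2, 5)}
# Ratios: search 4.0, login 3.0, export 1.0, themes 0.4
print(next_release(features, budget=6))  # -> ['search', 'login']
```

A bi-objective formulation like the paper's instead keeps every non-dominated value/cost combination, giving decision-makers a spread of candidate releases rather than one greedy pick.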