Browsing by Author "Balogun, Abdullateef Oluwagbemiga"

Now showing 1 - 20 of 25
  • Item
    Automation of Library (Codes) Development for Content Management System (CMS)
    (Foundation of Computer Science (FCS), NY, USA, 2012) Mustapha, Mutairu Omotosho; Balogun, Abdullateef Oluwagbemiga
    Previous researchers have found that most programmers and web developers have problems with the generation of libraries, the creation of databases, and the integration of the two. These problems also include rewriting libraries for different applications and maintaining already-written libraries whenever the database changes, or vice versa. Various kinds of open-source software are commonly used in the creation of web applications and content management systems; however, the severity of these problems varies with the programmer's skill level and the programming paradigm employed. This paper presents a Library (Codes) Generating Machine that addresses these problems by automatically generating libraries and creating databases based on the approach employed by the programmer. The system can be used in two ways: starting from the creation of the database and its tables (the backward approach, i.e., starting from database design) or from the creation of classes (the forward approach, i.e., starting from object identification).
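
The forward approach the abstract describes (start from a class, derive the database schema and access code) can be illustrated with a small sketch. This is a minimal illustration under assumed conventions: the `Article` class, the type-to-SQL mapping, and the SQLite-style SQL are hypothetical, not the paper's actual generator.

```python
# Map Python field types to SQL column types (illustrative assumption).
FIELD_SQL = {int: "INTEGER", str: "TEXT", float: "REAL"}

def generate_table_sql(cls):
    """Emit a CREATE TABLE statement from a class's annotated fields."""
    cols = ", ".join(f"{name} {FIELD_SQL[tp]}"
                     for name, tp in cls.__annotations__.items())
    return f"CREATE TABLE {cls.__name__.lower()} (id INTEGER PRIMARY KEY, {cols});"

def generate_insert_sql(cls):
    """Emit a parameterized INSERT statement (the generated 'library code')."""
    names = list(cls.__annotations__)
    return (f"INSERT INTO {cls.__name__.lower()} ({', '.join(names)}) "
            f"VALUES ({', '.join('?' for _ in names)});")

class Article:          # hypothetical domain class
    title: str
    word_count: int

print(generate_table_sql(Article))   # CREATE TABLE article (...)
print(generate_insert_sql(Article))  # INSERT INTO article (...) VALUES (?, ?);
```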
  • Item
    Comparative Analysis of Selected Heterogeneous Classifiers for Software Defects Prediction Using Filter-Based Feature Selection Methods
    (Faculty of Engineering, Federal University Oye-Ekiti, Ekiti, 2018-03-31) Akintola, Abimbola Ganiyat; Balogun, Abdullateef Oluwagbemiga; Lafenwa-Balogun, Fatimah; Mojeed, Hameed Adeleye
    Classification is a popular approach to predicting software defects; it involves categorizing modules, each represented by a set of metrics or code attributes, into fault-prone (FP) and non-fault-prone (NFP) by means of a classification model. Nevertheless, low-quality, unreliable, redundant, and noisy data negatively affect the process of discovering knowledge and useful patterns. Researchers therefore need to retrieve relevant data from huge records using feature selection methods. Feature selection is the process of identifying the most relevant attributes and removing the redundant and irrelevant ones. In this study, the researchers investigated the effect of filter feature selection on classification techniques in software defect prediction. Ten publicly available datasets from the NASA Metrics Data Program software repository were used. The most discriminatory attributes of each dataset were selected using Principal Component Analysis (PCA), CFS, and FilterSubsetEval. The datasets were then classified by classifiers carefully selected for heterogeneity: Naïve Bayes from the Bayesian category, KNN from the instance-based learners, J48 from the decision trees, and Multilayer Perceptron from the neural network classifiers. The experimental results revealed that applying feature selection to datasets before classification in software defect prediction improves performance and should be encouraged, with Multilayer Perceptron plus FilterSubsetEval achieving the best accuracy. It can be concluded that feature selection methods are capable of improving the performance of learning algorithms in software defect prediction.
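
As a rough illustration of the setup above, the sketch below reproduces one filter (PCA) and the four heterogeneous classifier families in scikit-learn rather than WEKA. The dataset and the component count are stand-ins; the paper used NASA MDP defect datasets and also evaluated CFS and FilterSubsetEval, which have no direct scikit-learn equivalents.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)   # stand-in for a defect dataset
classifiers = {
    "NaiveBayes": GaussianNB(),
    "KNN": KNeighborsClassifier(),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "MLP": MLPClassifier(max_iter=1000, random_state=0),
}
for name, clf in classifiers.items():
    # Filter step (PCA) runs before the classifier inside each CV fold.
    pipe = make_pipeline(StandardScaler(), PCA(n_components=10), clf)
    acc = cross_val_score(pipe, X, y, cv=10).mean()
    print(f"{name}: {acc:.3f}")
```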
  • Item
    Comparative Analysis of Selected Supervised Classification Algorithms.
    (Computer Chapter of the Institute of Electrical & Electronics Engineers (IEEE) Nigeria Section., 2015-10) Mabayoje, Modinat Abolore; Balogun, Abdullateef Oluwagbemiga; Salihu, Shakirat Aderonke; Oladipupo, Kehinde Razak
    Information is not packaged in a standard, easy-to-retrieve format; it is an underlying, often subtle and misleading concept buried in massive amounts of raw data. From the beginning of time it has been man's common goal to make his life easier, and the prevailing notion in society is that wealth brings comfort and luxury, so it is not surprising that so much work has been done on ways to sort large volumes of data. Over the years, various data mining techniques have been developed and used for this purpose. This paper considers classification, a supervised learning technique. The need for an efficient way to deal with voluminous data within a very short time frame has been one of the biggest challenges to the AI community. Hence, this paper presents a comparative analysis of three classification algorithms: Decision Tree (J48), Random Forest, and Naïve Bayes. A 10-fold cross-validation technique is used for the performance evaluation of the classifiers on the KDD'99, VOTE, and CREDIT datasets using the WEKA (Waikato Environment for Knowledge Analysis) tool. The experiment shows that the type of dataset determines which classifier is suitable.
  • Item
    Data Mining of Nigerians' Sentiments on the Administration of the Federal Government of Nigeria
    (Computers and Applied Computer Science Faculty in "Tibiscus" University of Timişoara, Romania., 2016) Amusa, Lateef; Yahya, Wahab; Balogun, Abdullateef Oluwagbemiga
    The opinions and sentiments expressed by citizens on the policies of their government are vital to the overall running of its affairs. This paper therefore explored data mining tools to evaluate people's sentiments (positive or negative) towards the administration of the Federal Government of Nigeria (FGN) under President Muhammadu Buhari (PMB). Data were collected from a popular social media network (Twitter) in the form of tweets by Nigerians about their perceptions of the current administration. The simple but powerful Naïve Bayes (NB) classifier was adopted to classify the tweets into positive and negative sentiments. For polarity, it was trained on the combination of Janyce Wiebe's and Bing Liu's subjectivity lexicons, which polarize the submitted words as negative or positive. Of the roughly 13,000 features (people's sentiments) considered, 4,770 remained after data cleaning. The results showed that the proportions of positive and negative sentiments in the data were 45.2% and 54.8% respectively. The data were randomly partitioned into 80:20 training and testing parts, and the NB classifier was learned on the training set while its goodness was assessed on the test set. The prediction accuracy, misclassification error rate, sensitivity, and specificity of the classifier were 78.3%, 21.7%, 82.5%, and 88.1% respectively. All analyses were carried out in the R statistical package (version 3.2.2).
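
A minimal stand-in for the pipeline above: short labeled texts, a bag-of-words representation, an 80:20 split, and a Naive Bayes classifier. The toy tweets and labels are invented for illustration; the study derived polarity from the Wiebe and Liu subjectivity lexicons and ran its analysis in R, not Python.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB

# Invented examples standing in for the collected tweets and lexicon labels.
tweets = ["great policies this year", "fuel scarcity is terrible",
          "the economy is improving", "worst administration ever",
          "security has improved", "prices keep rising badly"]
labels = ["pos", "neg", "pos", "neg", "pos", "neg"]

X = CountVectorizer().fit_transform(tweets)   # bag-of-words features
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2,
                                          random_state=1)  # 80:20 split
model = MultinomialNB().fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
```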
  • Item
    Design and Analysis of Network Models for QoS Routers on UDP and CBR
    (Georgian Technical University and St. Andrew the First Called Georgian University of The Patriarchy of Georgia, 2017) Ameen, Ahmed Oloduowo; Olatinwo, D.D.; Alamu, F.O; Olatinwo, S.O.; Balogun, Abdullateef Oluwagbemiga
    To address the issues of packet delay and unfairness among multimedia UDP flows, this paper presents the design and evaluation of network models to study different parameters for quality-of-service (QoS) provisioning in differentiated services (DiffServ) routers, using the user datagram protocol (UDP) as the network traffic agent and constant bit rate (CBR) as the traffic generator. Traffic marker algorithms define the treatment incoming traffic (packet streams) receives at the edge routers of a DiffServ domain. A network model was designed to implement the TSW2CM and TSW3CM marker algorithms, and the designed models were simulated, analysed, and evaluated on packet delay and fairness index. The evaluation results were analysed using a ranking-system approach to showcase the strengths and weaknesses of the TSW2CM and TSW3CM algorithms for multimedia UDP flows. Under this approach, the TSW3CM algorithm ranked first for packet delay with a value of 0.237704 while TSW2CM ranked second (0.431778), and TSW3CM also ranked first for fairness rate with a value of 0.3823960 while TSW2CM ranked second (0.2817353). The results indicate that both applications requiring low packet delay and applications requiring high fairness rates can be deployed over UDP using the TSW3CM algorithm.
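
Both markers in the abstract rest on a time-sliding-window (TSW) average rate estimator; the sketch below follows the estimator described in RFC 2859, with an illustrative window length and packet trace. The full two/three-colour marking logic that sits on top of the estimator is omitted.

```python
class TSWRateEstimator:
    """Running average rate over a sliding time window (after RFC 2859)."""

    def __init__(self, win_length, initial_rate):
        self.win_length = win_length   # averaging window (seconds)
        self.avg_rate = initial_rate   # bytes per second
        self.t_front = 0.0             # time of the last update

    def update(self, pkt_size, now):
        """Fold one arriving packet into the running average rate."""
        bytes_in_win = self.avg_rate * self.win_length
        new_bytes = bytes_in_win + pkt_size
        self.avg_rate = new_bytes / (now - self.t_front + self.win_length)
        self.t_front = now
        return self.avg_rate

est = TSWRateEstimator(win_length=1.0, initial_rate=1000.0)
for t in (0.1, 0.2, 0.3):              # 500-byte packets every 100 ms
    print(round(est.update(500, t), 1))
```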
  • Item
    Email Data Security Using Cryptography
    (Faculty of Communication and Information Sciences, University of Ilorin, Ilorin, Nigeria, 2016) Bajeh, Amos Orenyi; Ayeni, A.O.; Balogun, Abdullateef Oluwagbemiga; Mabayoje, Modinat Abolore
    Email has become one of the most prominent media for message transmission between web users. Thus, email security measures need to evolve over time to mitigate security threats, which are also evolving and becoming more sophisticated. This paper presents a study that proposed, implemented, and evaluated the performance of an email security approach that utilizes the Rivest-Shamir-Adleman (RSA) algorithm as an additional layer of security. The approach uses a separate application that converts messages into ciphertext; the ciphertext is then transmitted as the message on the email platform, while the cryptographic keys are exchanged between communicating users via another medium, such as a mobile phone. The proposed approach was evaluated by measuring the accuracy of the messages transmitted between users, and it showed an average accuracy of 98% over all the scenarios examined.
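
A toy RSA round trip illustrating the extra encryption layer the paper proposes. The primes are tiny demonstration values chosen for readability; a real deployment would use a vetted library and keys of at least 2048 bits, never per-character encryption.

```python
# Key generation with textbook-sized numbers (demonstration only).
p, q = 61, 53
n = p * q                      # modulus: 3233
phi = (p - 1) * (q - 1)        # Euler's totient: 3120
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent via modular inverse (Python 3.8+)

message = "HI"
cipher = [pow(ord(ch), e, n) for ch in message]     # sender encrypts per character
plain = "".join(chr(pow(c, d, n)) for c in cipher)  # receiver decrypts with d
print(cipher, "->", plain)                          # decrypts back to "HI"
```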
  • Item
    Enhanced Classification via Clustering Techniques using Decision Tree for Feature Selection
    (Foundation of Computer Science (FCS), NY, USA, 2015-09-01) Balogun, Abdullateef Oluwagbemiga; Mabayoje, Modinat Abolore; Salihu, Shakirat Aderonke; Salvation, Arinze
    Information overload has increased rapidly as a result of advances in storage capacity and data collection in recent years. The growth in the number of observations has partly overwhelmed analytical methods, but the increase in the number of variables associated with each observation has overwhelmed them far more. The number of variables measured on each observation is referred to as the dimension of the data, and a major problem with high-dimensional datasets is that only a few "important" measured variables exist for understanding the fundamental phenomena of interest. Hence, reducing the dimension of the original data prior to any modeling is of great necessity today. This paper presents a précis of K-Means, Expectation Maximization, and the J48 decision tree classifier, together with a framework for measuring the performance of base classifiers with and without feature reduction. A performance evaluation was carried out based on F-Measure, Precision, Recall, True Positive Rate, False Positive Rate, ROC Area, and time taken to build the model. The experiment revealed that, after performing classification via clustering, the reduced dataset yielded better results than the full dataset.
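
The "classification via clustering" evaluation can be approximated with a classes-to-clusters check: cluster the data, map each cluster to its majority class, and score the mapping, once on all features and once on a decision-tree-reduced subset. The dataset and the five-feature cutoff below are illustrative assumptions, not the paper's WEKA configuration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

def clusters_to_classes_accuracy(X, y, k=2):
    """Cluster, assign each cluster its majority class, and score."""
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    correct = 0
    for c in range(k):
        members = y[labels == c]
        if len(members):
            correct += np.bincount(members).max()  # majority class wins
    return correct / len(y)

X, y = load_breast_cancer(return_X_y=True)
tree = DecisionTreeClassifier(random_state=0).fit(X, y)
top = np.argsort(tree.feature_importances_)[-5:]   # tree-based feature reduction
print("full set :", clusters_to_classes_accuracy(X, y))
print("reduced  :", clusters_to_classes_accuracy(X[:, top], y))
```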
  • Item
    An Ensemble Approach Based on Decision Tree and Bayesian Network for Intrusion Detection
    (Computers and Applied Computer Science Faculty in "Tibiscus" University of Timişoara, Romania., 2017-06-01) Balogun, Abdullateef Oluwagbemiga; Balogun, Adedayo Miftaudeen; Sadiku, Peter Ogirima; Amusa, Lateef
    This paper presents an overview of intrusion detection and a hybrid classification algorithm based on an ensemble method (stacking) which uses a decision tree (J48) and a Bayesian network as base classifiers and the functional tree algorithm as the meta-learner. The dataset is passed through both the decision tree and the Bayesian network for classification. The meta-learner (functional tree classifier) then selects the output of the base classifier with the higher accuracy based on majority voting; the key idea is to always pick the value with the higher accuracy, since both base classifiers (decision tree and Bayesian network) classify all instances. A performance evaluation was performed using 10-fold cross-validation on the individual base classifiers and on the ensemble classifier (DT-BN) using the KDD Cup 1999 dataset in the WEKA tool. Experimental results show that the hybrid classifier (DT-BN) gives the best results in terms of accuracy and efficiency compared with the individual base classifiers. The decision tree scored 99.9974% for DoS, 100% for Normal, 98.8069% for Probing, 97.6021% for U2R, and 73.0769% for R2L; the Bayesian network scored 99.6410% for DoS, 100% for Normal, 97.1756% for Probing, 97.0693% for U2R, and 69.2308% for R2L; while the ensemble method scored 99.9977% for DoS, 100% for Normal, 98.8069% for Probing, 97.6909% for U2R, and 73.0769% for R2L.
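
A sketch of the stacking design in scikit-learn terms. GaussianNB stands in for the paper's Bayesian network and logistic regression for its functional-tree meta-learner, since neither has a direct scikit-learn equivalent; the dataset is likewise a stand-in for KDD Cup 1999.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
stack = StackingClassifier(
    estimators=[("dt", DecisionTreeClassifier(random_state=0)),
                ("nb", GaussianNB())],          # two heterogeneous base learners
    final_estimator=LogisticRegression(max_iter=1000),  # meta-learner stand-in
    cv=10,                                      # internal folds for meta-features
)
print("stacked accuracy:", cross_val_score(stack, X, y, cv=10).mean())
```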
  • Item
    A Framework for Coordinating Usability Engineering and Software Engineering Activities in the Development of Content Management Systems
    (AIMS Research Journal Publication Series The International Centre for Information Technology & Development (ICITD), USA., 2017-06) Balogun, Abdullateef Oluwagbemiga; Mabayoje, Modinat Abolore; Adeniyi, Ebenezer Olufemi; Salihu, Shakirat Aderonke
    Due to the expansion of the internet in recent years, web applications and their technologies have grown in popularity. One such technology, the Content Management System (CMS), has gained particular relevance because it facilitates the distribution of a wide variety of content. Designing and developing a content management system is a complex procedure because its requirements vary over time, which affects its architecture and design. Currently, the Usability Engineering (UE) and Software Engineering (SE) processes are practiced as if independent of each other. However, several dependencies and constraints exist between these two frameworks, which make coordination between the UE and SE teams crucial. Failure of coordination between the teams leads to CMSs that often lack necessary functionality and impede user performance. At the same time, the UE and SE processes cannot simply be integrated, because of differences in focus, techniques, and terminology. We therefore propose a development framework that incorporates SE and UE efforts to guide current CMS development. The framework characterizes the information exchange that must exist between the UE and SE teams during CMS development, which forms the basis of the coordinated development framework. The UE Scenario-Based Design (SBD) process provides the basis for identifying UE activities; similarly, the Requirements Generation Model (RGM) and Structured Analysis and Design are used to identify SE activities. We identify UE and SE activities that can influence each other, along with the high-level exchange of information that must exist among them, and we examine these interactions further to gain a more in-depth understanding of the precise exchange of information required. The identification of interacting activities forms the basis of a coordinated development framework that incorporates and synchronizes the UE and SE processes.
  • Item
    Gain Ratio and Decision Tree Classifier for Intrusion Detection
    (Foundation of Computer Science (FCS), NY, USA, 2015) Mabayoje, Modinat Abolore; Akintola, Abimbola Ganiyat; Balogun, Abdullateef Oluwagbemiga; Ayilara, Opeyemi
    Given the evident need for accuracy in the performance of intrusion detection systems, it is expedient that, in addition to the algorithms used, further work be carried out to improve accuracy and reduce the real time used in detection. This paper reviews how data mining relates to IDS, feature selection, and classification. It proposes an IDS architecture in which gain ratio is used for feature selection and a decision tree for classification on the NSL-KDD99 dataset, and it evaluates the performance of the decision tree on both the full and the reduced dataset.
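
Gain ratio, the filter used above, is information gain normalized by the attribute's own entropy (its "split information"), which corrects information gain's bias toward many-valued attributes. A from-scratch sketch follows; the tiny protocol/label arrays are invented for illustration.

```python
import numpy as np

def entropy(y):
    """Shannon entropy in bits of a label array."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def gain_ratio(feature, y):
    """Information gain of `feature` divided by its split information."""
    values, counts = np.unique(feature, return_counts=True)
    weights = counts / counts.sum()
    cond_entropy = sum(w * entropy(y[feature == v])
                       for v, w in zip(values, weights))
    info_gain = entropy(y) - cond_entropy
    split_info = entropy(feature)        # intrinsic value of the attribute
    return info_gain / split_info if split_info else 0.0

protocol = np.array(["tcp", "tcp", "udp", "udp", "icmp", "icmp"])
label    = np.array(["normal", "attack", "normal", "normal", "attack", "attack"])
print(round(gain_ratio(protocol, label), 3))
```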
  • Item
    Heterogeneous Ensemble Models For Generic Classification
    (Computers and Applied Computer Science Faculty in "Tibiscus" University of Timişoara, Romania., 2017-05-10) Balogun, Abdullateef Oluwagbemiga; Balogun, Adedayo Miftaudeen; Sadiku, Peter Ogirima; Adeyemo, Victor Elijah
    This paper presents the application of some data mining techniques in the fields of health care and computer network security. The selected classifiers were used individually and were also combined into ensembles using four different combination methods. Naïve Bayes, Radial Basis Function, and Ripper algorithms were selected, and the ensemble methods were majority voting, multi-scheme, stacking, and minimum probability. The KDD Cup'99 dataset was used as the benchmark for computer network security, while for health care the breast cancer and diabetes datasets from the WEKA repository were used. All experiments and simulations were carried out, analyzed, and evaluated using the WEKA tool. The multi-scheme ensemble method gave the best accuracy on the KDD dataset (99.81%) and the diabetes dataset (73.08%), but its 75.65% on the breast cancer dataset was the lowest of all. The Ripper algorithm gave the best accuracy among the base classifiers on the KDD dataset (99.76%) but was slightly behind on the breast cancer and diabetes datasets.
  • Item
    Heterogeneous Ensemble Methods Based On Filter Feature Selection
    (Research Nexus Africa’s Networks in Conjunction with The African Institute of Development Informatics & Policy (AIDIP) Ghana & The International Centre for Information Technology & Development (ICITD), USA, 2016) Ameen, Ahmed Oloduowo; Balogun, Abdullateef Oluwagbemiga; Usman, Ganiyat; Fashoto, Gbenga Stephen
    While certain computationally expensive novel methods can construct highly accurate predictive models from high-dimensional data, in many applications it is still of interest to reduce the dimension of the original data prior to any modeling. Hence, this research presents a précis of ensemble methods (stacking, voting, and multischeme) and of Multilayer Perceptron, K-Nearest Neighbour, and NBTree, with a framework for measuring the performance of base classifiers and ensemble methods with and without feature selection techniques (Principal Component Analysis, Information Gain attribute selection, and Gain Ratio attribute selection). The enhancement is based on performing feature selection on the dataset prior to classification, and the aim of this study is to evaluate the performance of the ensemble methods on the original and reduced datasets. A 10-fold cross-validation technique is used for the performance evaluation of the ensemble methods and base classifiers on the Root-to-Local (R2L) KDD Cup 1999 dataset and the UCI Vote dataset using the Waikato Environment for Knowledge Analysis (WEKA) tool. The experiment revealed that the reduced datasets yielded better results than the full datasets under the stacking, voting, and multischeme ensemble methods. On the R2L dataset, the multischeme ensemble method gave 98.76% accuracy with PCA feature selection versus 98.58% without; with gain ratio attribute selection it gave 98.93% versus 98.76% without, and with information gain attribute selection 98.85% versus 98.76% without. On the Vote dataset, the multischeme ensemble method again proved best, with 92.18% accuracy with PCA feature selection versus 89.88% without, 95.40% with information gain versus 93.10% without, and 95.40% with gain ratio versus 93.10% without. Arguably, it can be concluded that ensemble methods work well with feature selection.
  • Item
    Heterogeneous ensemble with combined dimensionality reduction for social spam detection
    (International Association of Online Engineering, 2021) Oladepo, Abdulfatai Ganiyu; Bajeh, Amos Orenyi; Balogun, Abdullateef Oluwagbemiga; Mojeed, Hammed Adeleye; Salman, Abdulsalam Abiodun; Bako, Abdullateef Iyanda
    Spamming is one of the challenging problems in social networks; it involves spreading malicious or scam content on a network, which often leads to a huge loss in the value of real-time social network services, compromises user and system reputations, and jeopardizes users' trust in the system. Existing spam detection methods still suffer from misclassification caused by redundant and irrelevant features in high-dimensional datasets. This study presents a novel framework based on a heterogeneous ensemble method and a hybrid dimensionality reduction technique for spam detection in micro-blogging social networks. A hybrid of Information Gain (IG) and Principal Component Analysis (PCA) was implemented for dimensionality reduction and the selection of important features, and a heterogeneous ensemble of Naïve Bayes (NB), K-Nearest Neighbor (KNN), Logistic Regression (LR), and Repeated Incremental Pruning to Produce Error Reduction (RIPPER) classifiers, combined by Average of Probabilities (AOP), was used for spam detection. To empirically investigate its performance, the proposed framework was applied to the MPI_SWS and SAC'13 Tip spam datasets, and the developed models were evaluated on accuracy, precision, recall, F-measure, and area under the curve (AUC). In the experimental results, the proposed framework (Ensemble + IG + PCA) outperformed the other experimented methods on the studied spam datasets: specifically, it had an average accuracy of 87.5%, an average precision of 0.877, an average recall of 0.845, an average F-measure of 0.872, and an average AUC of 0.943. The proposed framework also performed better than some existing approaches. Consequently, this study has shown that addressing high dimensionality in spam datasets, in this case with a hybrid of IG and PCA plus a heterogeneous ensemble method, can produce a more effective model for detecting spam content.
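
Two pieces of the framework translate directly to scikit-learn: an information-gain-style filter chained with PCA, and an "average of probabilities" combiner, which soft voting implements. The sketch below uses a stand-in dataset and omits RIPPER, which has no scikit-learn implementation; the k and component counts are illustrative.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.ensemble import VotingClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)   # stand-in for a spam dataset
ensemble = VotingClassifier(
    estimators=[("nb", GaussianNB()),
                ("knn", KNeighborsClassifier()),
                ("lr", LogisticRegression(max_iter=1000))],
    voting="soft",                           # averages predicted probabilities (AOP)
)
pipe = make_pipeline(StandardScaler(),
                     SelectKBest(mutual_info_classif, k=15),  # IG-style filter
                     PCA(n_components=10),                    # then PCA
                     ensemble)
print("AUC:", cross_val_score(pipe, X, y, cv=10, scoring="roc_auc").mean())
```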
  • Item
    A Hybrid Multi-Filter Wrapper Feature Selection Method for Software Defect Predictors
    (ExcelingTech Publisher, UK, 2019-04) Balogun, Abdullateef Oluwagbemiga; Basri, Shuib; Abdulkadir, Said Jadid; Sobri, Ahmad Hashim
    Software Defect Prediction (SDP) is an approach for identifying defect-prone software modules or components. It helps software engineers optimally allocate limited resources to defective modules or components in the testing and maintenance phases of the software development life cycle (SDLC). Nonetheless, the predictive performance of SDP models depends largely on the quality of the dataset used to train them, and the high dimensionality of software metric features has been identified as a data quality problem that negatively affects that performance. Feature Selection (FS) is a well-known method for addressing high dimensionality and can be divided into filter-based and wrapper-based methods. Filter-based FS has low computational cost, but the predictive performance of a classification algorithm on the filtered data cannot be guaranteed; conversely, wrapper-based FS yields good predictive performance but carries high computational cost and lacks generalizability. Therefore, this study proposes a hybrid multi-filter wrapper method for selecting relevant, non-redundant features in software defect prediction. The proposed hybrid feature selection is designed to take advantage of filter-filter and filter-wrapper relationships to produce optimal feature subsets, reduce the evaluation cycle, and subsequently improve the overall predictive performance of SDP models in terms of accuracy, precision, and recall.
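
One plausible reading of the hybrid multi-filter wrapper idea, sketched under assumptions: two cheap filters each rank the features, their top picks are pooled, and the wrapper search runs only over that reduced pool, which is what shortens the wrapper's evaluation cycle. Pool sizes, the classifier, and the dataset are illustrative choices, not the paper's design.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import (SelectKBest, SequentialFeatureSelector,
                                       f_classif, mutual_info_classif)
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)

# Stage 1: multi-filter. Pool the top-10 features from two filter methods.
anova = SelectKBest(f_classif, k=10).fit(X, y)
mi = SelectKBest(mutual_info_classif, k=10).fit(X, y)
pool = np.union1d(np.flatnonzero(anova.get_support()),
                  np.flatnonzero(mi.get_support()))

# Stage 2: wrapper. Forward selection runs only over the pooled features.
sfs = SequentialFeatureSelector(GaussianNB(), n_features_to_select=5, cv=5)
sfs.fit(X[:, pool], y)
print("selected feature indices:", pool[sfs.get_support()])
```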
  • Item
    Improved Performance of Intrusion Detection System Using Feature Reduction and J48 Decision Tree Classification Algorithm
    (Department of Computer Science, Faculty of Communication and Information Sciences, University of Ilorin, Ilorin, Nigeria, 2016) Abikoye, Oluwakemi Christianah; Balogun, Abdullateef Oluwagbemiga; Olarewaju, A. K.; Bajeh, Amos Orenyi
    Given the obvious importance of accuracy in the performance of intrusion detection systems, there is an increasing need, beyond the algorithms used, for further work aimed at improving accuracy and reducing the real time used in detection. This paper investigates the effect of a filtered dataset on the performance of the J48 decision tree classifier when classifying a connection as either normal or an attack. The reduced dataset is obtained by using the gain ratio attribute evaluation technique (entropy-based) to perform feature selection (removing redundant attributes); the filtered dataset is then fed into a J48 decision tree algorithm for classification. A 10-fold cross-validation technique was used for the performance evaluation of the J48 decision tree classifier on the KDD Cup 1999 dataset, simulated in the WEKA tool. The results showed that the J48 decision tree algorithm performed better in terms of accuracy and false positive rate on the reduced dataset than on the full dataset (Probing: 97.8% full vs. 99.5% reduced; U2R: 75% full vs. 76.9% reduced; R2L: 98.0% full vs. 98.3% reduced).
  • Item
    Influence of Feature Selection On Multi-Layer Perceptron Classifier for Intrusion Detection System
    (Research Nexus Africa’s Networks in Conjunction with The African Institute of Development Informatics & Policy (AIDIP) Ghana & The International Centre for Information Technology & Development (ICITD), USA., 2016-12-15) Mabayoje, Modinat Abolore; Balogun, Abdullateef Oluwagbemiga; Ameen, Ahmed Oloduowo; Adeyemo, Victor Elijah
    The most popular neural network, the multilayer perceptron, has gained ground for the purpose of detecting intrusions. Many researchers have used it judiciously, but problems of slow training time and data over-fitting remain. This paper reviews the various data mining techniques applied in the area of intrusion detection, the categories of attacks, and techniques for feature selection. It proposes an architecture in which information gain is used for feature selection and a multilayer perceptron (MLP) for classification on the KDD'99 dataset, and it evaluates the performance of the MLP classifier on both the full and the reduced dataset.
  • Item
    Memetic Approach for Multi-Objective Overtime Planning in Software Engineering Projects
    (School of Engineering, Taylor’s University, Malaysia., 2019-12) Mojeed, Hameed Adeleye; Bajeh, Amos Orenyi; Balogun, Abdullateef Oluwagbemiga; Adeleke, Hammid
    Software projects often suffer from unplanned overtime due to the uncertainty and risk incurred by changing requirements and attempts to meet the product's time-to-market. This causes stress to developers and can result in poor quality. This paper presents a memetic algorithmic approach to solving the overtime-planning problem in software development projects. The problem is formulated as a three-objective optimization problem aimed at minimizing overtime hours, project makespan, and cost, and the formulation captures the dynamics of error generation and propagation due to overtime using simulation. A Multi-Objective Shuffled Frog-Leaping Algorithm (MOSFLA), specifically designed for overtime planning, is applied to solve the formulated problem. Empirical evaluation experiments on six real-life software project datasets were carried out using three widely used multi-objective quality indicators. Results showed that MOSFLA significantly outperformed the existing traditional overtime management strategies in software engineering projects on all quality indicators, with values of 0.0118, 0.3893, and 0.0102 for Contribution (IC), Hypervolume (IHV), and Generational Distance (IGD) respectively. The proposed approach also produced significantly better IHV and IGD results than the state-of-the-art approach (NSGA-IIV) in 100% of the project instances; however, it outperformed NSGA-IIV in only approximately 67% of project instances with respect to IC.
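
Of the three quality indicators, generational distance is the easiest to state: the mean distance from each point of an obtained front to the nearest point of a reference front. A minimal sketch follows; the two small fronts are made-up numbers purely to exercise the function.

```python
import numpy as np

def generational_distance(front, reference):
    """Mean Euclidean distance from each point in `front` to `reference`."""
    dists = [np.min(np.linalg.norm(reference - point, axis=1))
             for point in front]
    return float(np.mean(dists))

# Invented 2-objective fronts (lower is better on both axes).
obtained = np.array([[0.2, 0.9], [0.5, 0.5], [0.9, 0.2]])
reference = np.array([[0.1, 1.0], [0.4, 0.4], [1.0, 0.1]])
print(round(generational_distance(obtained, reference), 4))
```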
  • Item
    Multiple Caesar Cipher Encryption Algorithm
    (Mathematical Association of Nigeria (MAN)., 2017-12) Balogun, Abdullateef Oluwagbemiga; Sadiku, Peter Ogirima; Mojeed, Hameed Adeleye; Raifu, Hameed Adetunji
    The Caesar cipher has always been a major reference point when cryptographic algorithms (also called ciphers) are discussed. This is probably due to its being an age-old cipher; it may also be owing to the belief that the Caesar cipher was the first cipher ever used. Its operation is based on a shift-by-3 rule, which makes breaking it obviously easy, since an exhaustive search of the other 25 keys can be performed conveniently. Ipso facto, an investigation into enhancing this easy-to-crack cipher is invariably necessary and ultimately important. This study is therefore concerned with developing a new, enhanced model of the Caesar cipher that offers better security through multiple encryption, whereby an already-encrypted message is encrypted one or more further times using the same or a different algorithm. The new model works by wrapping a plaintext message in three crypto-wrappers, with each encryption/decryption phase using a different shift key from the others. The model supports both uppercase and lowercase characters; however, it does not encrypt or decrypt numbers, special characters, or whitespace, and it handles only text files, not file types such as Word documents, binaries, or PDFs. Most importantly, the new enhanced model provides better message security by encrypting a plaintext message three times; in this way, brute forcing or an exhaustive key search will be difficult to perform, making cryptanalysis almost a mirage.
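
A minimal sketch of the triple-wrap scheme as the abstract describes it: three Caesar passes, each with its own shift, over letters only, with case preserved and all other characters passed through. The three shift keys are example values.

```python
def caesar(text, shift):
    """Shift letters by `shift` positions; leave everything else untouched."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)   # digits, spaces, punctuation pass through
    return "".join(out)

def triple_encrypt(text, keys=(3, 7, 11)):
    for k in keys:           # three crypto-wrappers, distinct shifts
        text = caesar(text, k)
    return text

def triple_decrypt(text, keys=(3, 7, 11)):
    for k in reversed(keys):
        text = caesar(text, -k)
    return text

c = triple_encrypt("Attack at Dawn!")
print(c, "->", triple_decrypt(c))
```

Note that fixed shifts compose (here 3 + 7 + 11 ≡ 21 mod 26), so the three wrappers reduce to a single net shift; the sketch therefore illustrates the wrapping mechanics rather than a hardness argument.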
  • Item
    The Organisational Factors of Software Process Improvement in Small Software Industry: Comparative Study
    (Springer, Cham, 2019-09) Basri, Shuib; Almomani, Malek Ahmad; Imam, Abubakar Abdullahi; Thangiah, Murugan; Gilal, Abdul Rehman; Balogun, Abdullateef Oluwagbemiga
    Small and Medium Enterprises (SMEs) contribute greatly to the international economy and are considered an important component of today's world business. Thus, in order to be more competitive, it is necessary for these companies to deliver high-quality products. Despite their importance, however, small software companies still face myriad challenges and barriers in producing high-quality products. The objective of this study is to identify the organizational factors that have a positive impact on enabling Software Process Improvement (SPI) efforts in the small software industry. A Systematic Literature Review (SLR) was conducted to achieve this objective. The findings provide a roadmap to guide future research on enabling SPI efforts in the small software development industry. We believe these findings will give interesting insights and encourage researchers to use the compromise technique to analyze future empirical studies based on a specific region, in order to validate the suitability of the identified factors in that country.
  • Item
    Parameter tuning in KNN for software defect prediction: an empirical analysis
    (Department of Computer Engineering, Universitas Diponegoro, Indonesia., 2019-10-31) Mabayoje, Modinat Abolore; Balogun, Abdullateef Oluwagbemiga; Jibril, Hajarah Afor; Atoyebi, Jelili Olaniyi; Mojeed, Hammed Adeleye; Adeyemo, Victor Elijah
    Software Defect Prediction (SDP) provides insights that can help software teams allocate their limited resources when developing software systems: it predicts the modules likely to be defective and helps avoid the pitfalls associated with such modules. However, these insights may be inaccurate and unreliable if the parameters of SDP models are not taken into consideration. In this study, the effect of parameter tuning on the k-nearest neighbor (k-NN) algorithm in SDP was investigated: more specifically, the impact of varying and selecting the optimal k value, the influence of distance weighting, and the impact of the distance function. An experiment was designed to investigate this over six software defect datasets. The experimental results revealed that k should be greater than 1 (the default), as the average RMSE of k-NN with k > 1 (0.2727) is lower than with k = 1 (0.3296). In addition, the predictive performance of k-NN with distance weighting improved by 8.82% and 1.7% based on AUC and accuracy respectively. In terms of the distance function, k-NN models based on the DILCA distance function performed better than those based on the Euclidean distance function (the default). Hence, we conclude that parameter tuning has a positive effect on the predictive performance of k-NN in SDP.
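
The three tuning axes the study varies (k, distance weighting, and the distance function) map naturally onto a grid search; a sketch follows. Minkowski p in {1, 2} stands in for the paper's DILCA-versus-Euclidean comparison, which scikit-learn does not provide, and the dataset is a stand-in for the six defect datasets.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
pipe = Pipeline([("scale", StandardScaler()), ("knn", KNeighborsClassifier())])
grid = GridSearchCV(pipe, {
    "knn__n_neighbors": [1, 3, 5, 7, 9],      # k = 1 is the default baseline
    "knn__weights": ["uniform", "distance"],  # distance weighting on/off
    "knn__p": [1, 2],                         # Manhattan vs. Euclidean distance
}, cv=10, scoring="accuracy")
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```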

University of Ilorin Library © 2024, All Rights Reserved
