Computer Science
Browsing Computer Science by Author "Adigun, M.O."
- An authentication framework for securing GUISET infrastructure (University of Zululand, 2011). Mhlongo, Noluthando Brilliant; Adigun, M.O.
  Grid computing has emerged as a substantial research area that has gained considerable attention from academia and business. It provides a large, flexible resource-sharing environment in which resources and services are spread over distributed administrative domains. Although many corporations benefit from grid computing technology, SMMEs still face difficulties in exploiting these capabilities because they lack an understanding of the benefits that e-commerce enabling technology can provide. GUISET was developed to enable SMMEs to integrate skills and resources so that they can share and collaborate among themselves and with independent business associates. Because GUISET is based on an open resource-sharing paradigm and grants unlimited access to end users, it faces a serious security challenge. The most widely deployed technology for securing these shared grid resources is the public key infrastructure (PKI), which proves the identities of grid users through certificates. The Grid Security Infrastructure (GSI) is one implementation of PKI and delivers the essential requirements of a security infrastructure, including delegation services, mutual authentication and single sign-on. However, GSI has been found to suffer from poor scalability because of its use of certificates. To address this challenge, an authentication infrastructure for GUISET was developed. The model distributes certification authorities that grant identities to users requiring access to GUISET resources. An implementation was carried out to evaluate the performance of the proposed model. The results showed that the newly developed authentication framework for securing GUISET resources provides better scalability than the previously developed grid security infrastructure.
- Change impact analysis model-based framework for service provisioning in a grid environment (2009). Ekabua, Obeten Obi; Adigun, M.O.
  Grid-based Utility Infrastructure for Small, Medium and Micro Enterprise (SMME) Enabling Technology (GUISET) is an architecture in which distributed applications must interact across platforms via the concept of services. The building blocks of a GUISET application are business process services, which are expected to undergo maintenance just like any other software components. The basic operation of software evolution is change, making change inevitable in software development. Throughout the entire lifecycle of a software system, from conception to retirement, things happen that require the system to be changed. Changes are required to fix faults and to improve or update products and services. Under these conditions, it is crucial to have full control over, and knowledge about, what changes mean to the system and its environment; otherwise, changes might lead to deterioration and a multitude of other problems. The effectiveness and efficiency with which a company can predict or control these changes could have a significant impact on its competitiveness. In a complex product, where the constituent parts and systems are closely dependent, changes to one item of a system are highly likely to result in a change to another item, which in turn can propagate further. The activity of assessing the implications of realizing a change is termed change impact analysis. One glaring issue is to anticipate changes and structure the service in such a way that changes are discovered early to avoid change propagation, because services are likely to be exposed to many unanticipated changes during their lifetime, and support for unanticipated changes remains an important research goal. Two issues are involved here: (1) the assessment of the consequences of altering (or not altering) the functionality of the service, and (2) the identification of the service dependencies that are affected by the change. Traditionally, research on change impact analysis has mainly focused on technical aspects such as traceability analysis and software code change propagation. Contributing to and extending existing knowledge, the research presented in this thesis is exploratory in nature, with an overarching emphasis on monitoring change propagation during service provisioning, given that software is now being consumed as services. The main goal is to advance the current state of change impact analysis by evolving a model-based framework for validating, analyzing and monitoring change propagation in a typical grid service provisioning environment (GUISET). The framework emanating from this endeavour consists of two associated formal models: a Change Impact Analysis Factor Adaptation mechanism and a Fault and Failure Assumption Model for service provisioning in the GUISET grid environment. As part of the empirical validation of the framework, the relationship between change and impact is represented graphically, indicating that as the number of changes increased, the impact also increased. This is because the number of changes was affected by the number of dependent services: if the dependencies were high, the number of changes would be high, and consequently the impact due to fault propagation would be high.
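The dependency-driven propagation described in this entry can be illustrated with a small sketch. The graph, the service names and the breadth-first traversal below are illustrative assumptions rather than the formal models developed in the thesis; they only show how a change to one service can be traced to its transitive dependents.

```python
from collections import deque

def impacted_services(dependents, changed):
    """Breadth-first propagation of a change through a service dependency graph.

    dependents maps each service to the services that depend on it, so a change
    to a service potentially propagates to every (transitive) dependent."""
    impacted, frontier = set(), deque(changed)
    while frontier:
        svc = frontier.popleft()
        for dep in dependents.get(svc, ()):
            if dep not in impacted:
                impacted.add(dep)
                frontier.append(dep)
    return impacted

# Toy dependency graph: billing and shipping both consume the catalogue service.
graph = {"catalogue": ["billing", "shipping"], "billing": ["reporting"]}
print(impacted_services(graph, ["catalogue"]))   # {'billing', 'shipping', 'reporting'}
```

In this toy example the impact set grows with the number of dependents, mirroring the relationship between changes and impact reported in the abstract.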
- Conceptual design of an ontological approach to personalising GUISET portal (University of Zululand, 2014). Gumbi, Nonhlanhla Melody; Adigun, M.O.; Jembere, E.
  Searching for and finding specific information in web-based information systems has become tedious and time-consuming because of information overload on the web. The current adaptation and personalisation techniques are gradually becoming inadequate as the information available on the web grows exponentially. Measures have been taken to improve the current solutions: the Generic Adaptation Framework (GAF) has been proposed as a standard for building personalised web-based systems, identifying the standard adaptation components and how the adaptation process should be carried out. However, the components of the GAF overlay model are only conceptualised to interact at the syntactic level, which limits interoperability between the components and, consequently, the quality of the personalisation results (recommendations). This work proposes semantic interaction of the GAF modelling components, which not only improves the interaction but also enhances the overall adaptation process. To enhance personalisation, this work introduced ontologies to model the knowledge about the components, and ontology mapping to support meaningful inter-component interaction. The proposed solution, Ontology-based GAF (O-GAF), was applied in a recommender system and compared with the classical GAF recommender system. The experiments carried out showed that O-GAF performs better than the classical GAF in terms of recommendation accuracy.
- Context-aware service discovery for dynamic grid environments (2010). Sibiya, Mhlupheki George; Xulu, S.S.; Adigun, M.O.
  The advent of the Service Oriented Computing (SOC) paradigm, web services and grid services can help address the hardware and software acquisition problems of resource-constrained enterprises such as SMMEs by providing computing resources on demand. In a Service Oriented Architecture (SOA), all participants, including resource-constrained mobile devices, are expected to participate not only as service consumers but also as service providers. However, service discovery in such environments is hindered by the fact that existing standards do not fully support dynamic service discovery at run time. There is therefore a need for a scalable and robust discovery mechanism that also minimizes false positives and false negatives in the discovered services; this remains an issue for the current widely adopted service discovery standards. Factors contributing negatively to service discovery range from the matchmaking approaches to the discovery mechanisms themselves. Many research efforts have attempted to address these issues, but they still leave a lot to be desired. In this research work we address these shortcomings by proposing an approach that utilizes both pure syntactic and semantic matchmaking techniques, realized through our proposed Context Aware Service Discovery Architecture (CASDA) framework. CASDA proposes a description of services that allows the annotation of both the dynamic and static contexts of a service. It also provides a framework that utilizes requester context, service context and semantics without trading off scalability. CASDA builds upon OWLS-MX's capability to utilize both syntactic and logic-based techniques for service matching, and it takes the requester and service context into account in fulfilling the needs of the requester during the discovery process. The results obtained showed an improvement in scalability and recall compared with OWLS-MX, at a slight cost in precision.
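As a rough illustration of combining syntactic and semantic (logic-based) matchmaking as this entry describes, the sketch below accepts a service if either an ontology subsumption test or a simple token-overlap similarity succeeds. The data structures, the `ontology_subsumes` callback and the threshold are illustrative assumptions; OWLS-MX and CASDA use considerably richer reasoning and similarity measures.

```python
def hybrid_match(request, service, ontology_subsumes, sim_threshold=0.6):
    """Hybrid matchmaking sketch: accept a service if its advertised concepts
    logically satisfy the request (semantic match) OR its textual description
    is sufficiently similar to the request (syntactic match)."""
    semantic = all(
        any(ontology_subsumes(adv, req) for adv in service["concepts"])
        for req in request["concepts"]
    )
    req_tokens = set(request["text"].lower().split())
    svc_tokens = set(service["text"].lower().split())
    syntactic = len(req_tokens & svc_tokens) / max(len(req_tokens | svc_tokens), 1)
    return semantic or syntactic >= sim_threshold

# Tiny hand-built subsumption relation: a Vehicle service satisfies a Car request.
subsumes = lambda a, b: (a, b) in {("Vehicle", "Car"), ("Car", "Car")}
req = {"concepts": ["Car"], "text": "rent a small car in Durban"}
svc = {"concepts": ["Vehicle"], "text": "vehicle rental service for Durban"}
print(hybrid_match(req, svc, subsumes))   # True via the semantic (subsumption) path
```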
- A Contracts-Based Model for managing web services evolution (University of Zululand, 2015). Chiponga, Kudzai; Adigun, M.O.; Tarwireyi, P.
  Service-based systems need to be designed so that they can accommodate the volatility of the environment in which they operate. Failure to evolve service-based systems will result in service providers losing their competitive edge, and failure to evolve them properly will have far-reaching negative impacts on all stakeholders, especially if disruptions are allowed to occur. However, evolving service-based systems in a non-disruptive manner is still a challenge for both the research community and industry. Over recent years, many organisations have adopted technology-based service solutions in what has become a technology-driven business environment, with web services at the forefront as a business enabler. This dissertation focuses on developing a contracts-based model for managing the evolution of web services in a manner that is consistent and transparent to business partners. The design science research methodology was used to architect a model that can alleviate the challenge of evolving service-oriented systems. It was found that technical web service contracts can be leveraged to manage and retain consumers while evolution is carried out. A contracts-based service proxy was developed as an instantiation of the model, and experiments demonstrated that this proxy maintained compatibility between evolving services and existing consumers. The model developed in this research presents a cost-effective solution to managing the evolution of web services while reusing the same computing resources and cutting down the development time needed to evolve them. Even though the proxy introduced processing overheads, the resultant loss in service throughput was negligible, especially considering the amount of time and effort taken to evolve services manually.
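A minimal sketch of the kind of contract check a compatibility-preserving proxy might perform, assuming contracts are reduced to operation names and message fields. The data shapes and the compatibility rule below are illustrative assumptions, not the contract model developed in the dissertation.

```python
def is_backward_compatible(consumer_contract, provider_contract):
    """A provider version stays compatible if it still exposes every operation
    the consumer relies on, with at least the message fields the consumer uses."""
    for op, required_fields in consumer_contract.items():
        provided = provider_contract.get(op)
        if provided is None:
            return False                      # operation removed: breaking change
        if not set(required_fields) <= set(provided):
            return False                      # field removed/renamed: breaking change
    return True

# Adding an operation or an optional field does not break existing consumers.
v1_consumer = {"getQuote": ["itemId", "quantity"]}
v2_provider = {"getQuote": ["itemId", "quantity", "currency"], "getStatus": ["orderId"]}
print(is_backward_compatible(v1_consumer, v2_provider))   # True
```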
- Coordination-theoretic framework for sharing e-business resources (2008). Aremu, Dayo Reuben; Adigun, M.O.
  Market-based models are frequently used for resource allocation on the computational grid. However, as the size of the grid grows, it becomes more difficult for customers to negotiate directly with all the grid resource providers. Middle agents are introduced to mediate between providers and customers so as to facilitate the resource allocation process. The most frequently deployed middle agents are matchmakers, brokers and market-makers. The matchmaking agent finds candidate providers who can satisfy the customer's requirements, after which the customer negotiates directly with the candidates; broker agents mediate the negotiation with the providers in real time; and the market-maker acquires resources and resource reservations in large quantities and resells them to customers. The purpose of this study was to establish a negotiation framework that enables optimal allocation of resources in a grid computing environment, such that clients have on-demand access to a pool of resources, collaboration is enhanced among resource providers, and cost saving and efficiency are ensured in resource allocation. The objectives were: first, to design an appropriate negotiation model that could be adopted to achieve optimal resource allocation; second, to determine an effective search strategy that could be employed to reach a Pareto-efficient negotiation solution; and third, to adopt a negotiation strategy or tactics that negotiators could use to arrive at optimal resource allocation. To achieve these goals and objectives, the following methodologies were used: (i) a critical survey of existing economic approaches and models for negotiating grid resources was conducted; (ii) the knowledge gained from the literature was used to construct a novel model, called the Co-operative Modeler, for mediating grid resource sharing negotiation. Mathematical notation was used, first, to construct a theoretical model for allocating resources to clients' tasks; second, to present a novel Combinatorial Multi-Objective Optimization model (CoMbO), by modeling the negotiation offers of agents as a multi-objective optimization problem; and third, to present Genetic and Bayesian Learning Algorithms for implementing the model; (iii) an implementation prototype of the Co-operative Modeler was developed by, first, implementing the Co-operative Modeler to mimic real-world negotiation situations and, second, using time-dependent negotiation tactics to evaluate the negotiation behaviour of the co-operative agents. The Co-operative Modeler has been shown to guarantee: (i) scalability in the number of users, i.e. multiple users can access a virtualized pool of resources in order to obtain the best possible overall response time by maximizing utilization of the computing resources; (ii) enhanced collaboration, that is, promoting collaboration so that grid resources can be shared and utilized collectively, efficiently and effectively to solve compute-intensive problems; (iii) improved business agility, that is, decreasing the time to process data and deliver results; and (iv) cost saving, i.e. leveraging and exploiting the unutilized or underutilized power of all computing resources within a grid environment.
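The abstract mentions time-dependent negotiation tactics. A widely used formulation of such tactics (in the style of Faratin et al.) makes an agent's offer a function of the elapsed negotiation time; the sketch below is a generic version of that formula, not the CoMbO model itself, and all parameter names are illustrative.

```python
def time_dependent_offer(t, t_max, p_min, p_max, beta=1.0, k=0.0, buyer=True):
    """Classic time-dependent concession tactic.

    beta > 1 concedes early (conceder), beta < 1 concedes late (boulware);
    the offer moves from the agent's best price towards its reservation price
    as the deadline t_max approaches."""
    alpha = k + (1.0 - k) * (t / t_max) ** (1.0 / beta)
    if buyer:                                  # buyer starts low and concedes upwards
        return p_min + alpha * (p_max - p_min)
    return p_max - alpha * (p_max - p_min)     # seller starts high and concedes down

# Example: a buyer's offers over a 10-step negotiation for a resource priced 10..50.
print([round(time_dependent_offer(t, 10, 10, 50, beta=2.0), 1) for t in range(11)])
```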
- Design of an IP address Auto-Configuration scheme for Wireless Multi-Hop Networks (2008). Mutanga, Murimo Bethel; Xulu, S.S.; Adigun, M.O.
  The importance of wireless ad-hoc networks (e.g. wireless mesh networks) for community and commercial connectivity cannot be overstated, given the benefits associated with such networks. An ad-hoc network must assemble itself from whatever devices happen to be nearby and adapt as devices move in and out of wireless range. High levels of self-organization minimize the need for manual configuration; in essence, self-organization provides out-of-the-box functionality such that very little technical expertise is required to set up a network. However, efficiently providing unique IP addresses in ad-hoc networks is still an open research question. The goal of this study of wireless multi-hop networks was to develop algorithms for IP address auto-configuration that, among other problems, achieve high levels of address uniqueness without compromising latency and communication overhead. To achieve this goal we proposed changes to the traditional Duplicate Address Detection (DAD) procedure, resulting in the Wise-DAD protocol. We introduced state information maintenance, in which state information is passively collected and synchronized; passively collecting state information reduced the number of DAD trials, thereby reducing latency and communication overhead. Simulations were done in NS-2 to test the performance of the proposed protocol, followed by a comparative analysis in which Wise-DAD was compared with the Strong-DAD protocol. Experiments on the effect of network size, node density and node arrival rate on communication overhead, address uniqueness and latency were conducted. Results from the simulation experiments show that Wise-DAD outperforms Strong-DAD in all three metrics used for performance evaluation. First, Wise-DAD showed better scalability, since it performed better than Strong-DAD when the network size was increased; communication overhead in Wise-DAD was generally low, latency was generally uniform, and the number of IP address duplicates recorded was reasonably low. Second, Wise-DAD was not affected by node arrival rate on any of the three metrics recorded, whereas the number of address duplicates in Strong-DAD decreased as the node arrival rate was increased. Interference significantly affected the communication overhead recorded in Strong-DAD, while Wise-DAD was not affected by interference. The number of address conflicts in both protocols showed an inverse relationship to interference, but the numbers were significantly different: Wise-DAD recorded far fewer address conflicts than Strong-DAD.
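The abstract describes reducing active DAD trials by passively collecting address-allocation state. The toy class below illustrates that general idea only; the class name, the passive table and the `active_probe` callback are invented for illustration and do not reproduce the actual Wise-DAD message exchanges.

```python
import random

class PassiveDadNode:
    """Toy sketch: test a candidate address against a passively maintained
    allocation table before falling back to an active DAD probe."""

    def __init__(self, address_pool_size=254):
        self.pool = list(range(1, address_pool_size + 1))
        self.known_allocations = set()   # filled by overhearing neighbours' traffic

    def observe(self, address):
        """Passively record an address seen in overheard or routed packets."""
        self.known_allocations.add(address)

    def pick_address(self, active_probe):
        """Return a locally unique address; active_probe(addr) -> bool (True means
        a conflict was reported) is only called when the passive table cannot
        already rule the candidate out."""
        while True:
            candidate = random.choice(self.pool)
            if candidate in self.known_allocations:
                continue                          # conflict avoided without any probe
            if not active_probe(candidate):       # no duplicate reply received
                self.known_allocations.add(candidate)
                return candidate

node = PassiveDadNode()
node.observe(42)
print(node.pick_address(active_probe=lambda addr: False))
```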
- Design of partnership-centred information repository (2007). Okharedia, Joseph Ayangbeso; Adigun, M.O.
  Organizations require the development of information repositories to improve their products and services. With the rising interest in the semantic web, research efforts have been geared towards integrating distributed information sources into an interoperable knowledge-base environment. The objective of this research was to design an information repository that facilitates the integration of operational data from various sources into a single, consistent knowledge base supporting analysis and decision making within and across different organizations. The research activities that resulted in the proposed information repository consisted of two major steps: (i) the design of an information repository aimed at promoting the usage of e-commerce over the Internet among Small and Medium Enterprises (SMEs) and larger organizations, and (ii) the implementation of a repository architecture to facilitate knowledge sharing in response to queries on distributed metadata, with easily interoperable sources merged into one semantic entity. The SMEs and the larger organizations are treated as partners in the design of this information repository. The focus on SMEs and larger organizations is in line with the South African Government's efforts to promote Black businesses, because the SME sector is considered one of the most viable sectors with economic growth potential. The functionality of the information repository is partitioned into a set of services which include: (i) registration of SMEs' profiles, (ii) service advertisement and (iii) service delivery. These services provide the mechanism for storing, retrieving and updating information. Richards Bay Minerals, an organization in the mining industry that has outsourced part of its core business to a junior partner, was used as a case study to illustrate the model. XML, the Extensible Markup Language, provided the descriptive language for the exchange of information between the different organizations via the web, and the Java programming language was adopted for the implementation. A performance evaluation of the information repository, based on a set of parameters such as usability, scalability, functionality and collaboration, was carried out with the aim of determining where improvements could be made in designing information repositories for the web environment.
- Design of the service-based architectural framework for the South African National Park System (2004). Khumalo, Themba Cyril; Adigun, M.O.
  Responding to the challenge of rendering competitive services to the customers of the South African National Parks, this work developed architectural mechanisms for providing services that take advantage of dynamic web protocol standards and frameworks. This was done in three steps: (1) an evaluation of the existing IT-level support for providing nature conservation information and marketing National Park services; (2) an investigation of mobile commerce services to create customer value that promotes customer loyalty based on enterprise values; and (3) the development of a distributed service-based architectural framework for the South African National Park system based on a service-oriented architectural model. The building blocks of the architecture (publishing, registration, personalization of services, etc.) serve as the basis for designing system services that disseminate nature conservation information and market the services of the national parks. A prototype of the information service system was used to prove the usability of the architectural framework developed. The architecture proposed in this work is a guide and should provide the basis for an ICT infrastructure that responds to the quest for modernising the national parks information system.
- A dynamic and adaptable system for service interaction in mobile grid (2007). Moradeyo, Otebolaku Abayomi; Adigun, M.O.
  Mobile and pervasive computing, with its distinctive feature of providing services on an anywhere, anytime basis, has been at the centre of major computing research in recent times. Device resource poverty and network instability have been the reasons behind the unsuccessful use of handheld technology for service request and delivery. The interaction of these mobile service components can, however, be adapted to improve the quality of service experienced by service consumers. Content, user interface and other adaptation mechanisms have been explored, but these have not provided the needed service quality, and one of the challenges of designing an adaptable system is making adaptation decisions. This dissertation therefore presents a dynamic and adaptable system for service interaction. A context-aware, utility-based adaptation model that uses a service reconfiguration pattern to effect adaptation based on context was developed. It was assumed that developers of mobile services design services with variants that can be selected at runtime to fit the prevailing context of the environment, with all variants differing in their required context utilities. The service variant selection decision is based on a heuristic algorithm developed for this purpose. A prototype of the model was built to validate the concept, and experiments were conducted to measure the interaction adaptation quality, the overall response time with and without adaptation, and the effect of service consumer preference for a given service variant on the adaptation process. Results from the experiments showed that although the adaptation process comes with additional overheads in terms of variation in response time, the adaptation of service interaction is beneficial. The overall response time initially increased as the number of service variants increased, owing to the overheads of the adaptation process; however, as the number of variants increased further, the response time fell sharply and then became steady, showing that adaptation can actually help reduce service response time. We also found that adaptation quality degraded with an increased number of service variants. The lesson learnt was that adaptation can help reduce overall response time and can improve the service quality perceived by service consumers.
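A minimal sketch of context-driven variant selection, assuming variants are described by simple resource demands and the context by matching budgets. The additive utility below is an illustrative stand-in for the heuristic algorithm and context utilities developed in the dissertation.

```python
def select_variant(variants, context):
    """Pick the service variant whose resource demands best fit the current
    context (simple additive utility: headroom left on each constrained resource).

    variants: {name: {"cpu": ..., "memory": ..., "bandwidth": ...}} demands
    context:  available budget for the same resources on the requesting device"""
    def utility(demand):
        if any(demand[r] > context[r] for r in demand):
            return float("-inf")              # variant cannot run in this context
        return sum(context[r] - demand[r] for r in demand)

    return max(variants, key=lambda name: utility(variants[name]))

variants = {
    "full": {"cpu": 80, "memory": 256, "bandwidth": 512},
    "lite": {"cpu": 20, "memory": 64,  "bandwidth": 128},
}
print(select_variant(variants, {"cpu": 50, "memory": 128, "bandwidth": 256}))  # 'lite'
```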
- Dynamic composition of portal interface for GUISET services (University of Zululand, 2014). Sihawu, Siyasanga Amanda; Adigun, M.O.; Xulu, S.S.
  The increased usage of technology requires user interfaces to be user-friendly and easy to understand, and in portals the user interface is one of the critical components. A portal is a medium that Small, Medium and Micro Enterprises (SMMEs) can use to advertise, grow the exposure of their business and interact with their customers and the community at large. This requires the portal interface to adapt to runtime changes because of the rapid update of services. This research work addressed the issue of analysing the user request in order to have a clear understanding of it, taking into consideration the user's history, preferences and profile. This was realised through the design of an architectural model, the Dynamic User Interface (DUI) model, which analyses the user request, aggregates different components and presents them to the user as a single unit. The DUI model was prototyped via a Grid-based Utility Infrastructure for SMME-enabling Technology (GUISET) user interface use case. Experiments were conducted to compare the DUI portal and Jetspeed-2 on usability. Usability was evaluated under four experimental variations: customization, adaptation after a query, user satisfaction with the response, and portal performance. For each experiment, data was generated and presented graphically for analysis. The results show scalability of the portal interface with regard to increasing numbers of users requesting the same services and increasing numbers of portlets, as well as user satisfaction with the on-the-fly adaptation of the interface and content after a query.
- Dynamic multi-target user interface design for grid based M-Services (2008). Ipadeola, Abayomi Olayeni; Xulu, S.S.; Adigun, M.O.
  Device heterogeneity and the diversity of user preferences are challenges which must be addressed in order to achieve effective information communication in a mobile computing environment. Many model-based approaches for the generation of user interfaces have been proposed, aimed at achieving effective information communication in such environments. Unfortunately, current model-based approaches do not support user participation in interface adaptation; interfaces are adapted for and on behalf of users, leaving users with inappropriate interfaces. There is therefore a need for an approach that allows direct user participation in interface adaptation and is responsive to changes in user needs and preferences. This work proposes a Polymorphic Logical Description (PLD) method for the automatic generation of multi-target user interfaces. The PLD method consists of user-preference-aware models created at design time. Interface artifacts are treated in the PLD as methods with polymorphic attributes, in a bid to address the diversity experienced in a mobile computing environment: an interface artifact is dynamically selected for inclusion on an interface based on the context information of the requesting user and the intrinsic characteristics of the device. The approach is enabled by a support toolkit named Custom-MADE (CoMADE) for automatic interface generation. The evaluation of CoMADE shows that it achieves a high degree of user participation during interface generation, flexibility, dynamism, ease of application extensibility, and user-centred interfaces.
- Dynamic service recovery in a grid environment (2008). Sibiya, Sihle Sicelo; Xulu, S.S.; Adigun, M.O.
  Grid computing is fast becoming a popular technology in both academic and business environments. The adoption of this technology in the business environment has been slow because of challenges such as low service availability, which stems from the dynamic nature of the grid environment and the complexity of services; these two factors make services fault-prone. There is therefore a need for an autonomic fault recovery mechanism that can effectively monitor, diagnose and recover a running service from failure. To address this challenge, a dynamic service recovery model has been proposed. The model uses a replication approach to improve service availability whenever service failure is envisaged. The performance of the replication approach depends on how well a reliability index can be used to dynamically select two services of high reliability to serve an incoming service request: the service with the higher reliability becomes the primary service while the other becomes an active replica. The model implements an autonomic computing MAPE loop to achieve runtime fault recovery. A simulation was carried out to evaluate the performance of the proposed model, and the model was compared with the existing active replication model. The results revealed that the newly proposed model exhibits superior performance characteristics, especially when there are services with both high and low reliability. It was also found that dynamic service recovery utilizes resources efficiently because no more than two services serve a request.
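The primary/replica selection described here boils down to ranking candidates by a reliability index and keeping the top two. The sketch below shows that step only, under the assumption that the reliability index is already available as a number per service; the MAPE monitoring and recovery loop is not modelled.

```python
def select_primary_and_replica(services):
    """Rank candidate services by a reliability index and return the two best:
    the most reliable serves the request, the runner-up runs as an active replica.

    services: {service_id: reliability_index in [0, 1]}"""
    if len(services) < 2:
        raise ValueError("replication needs at least two candidate services")
    ranked = sorted(services, key=services.get, reverse=True)
    return ranked[0], ranked[1]

print(select_primary_and_replica({"s1": 0.92, "s2": 0.97, "s3": 0.60}))  # ('s2', 's1')
```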
- The effect of topology control for wireless multi-hop networks (2007). Mudali, Pragasen; Adigun, M.O.
  Wireless multi-hop networks are not restricted to rural development efforts; they have found uses in the military, in industry, and in urban areas. The focus of this study is on stationary wireless multi-hop networks whose primary purpose is the provisioning of Internet access using low-cost, resource-constrained network nodes. Topology control algorithms have not yet catered for such low-cost, resource-constrained nodes, resulting in a need for algorithms that do cater for these types of wireless multi-hop network nodes. An algorithm entitled "Token-based Topology Control (TbTC)" was proposed. TbTC comprises three components, namely transmit power selection, network connectivity and next node selection. TbTC differs significantly in its treatment of the synchronisation required for a topology control algorithm to work effectively, by employing a token to control the execution of the algorithm. The use of the token also ensures that all network nodes eventually execute the topology control algorithm, through a process called neighbour control embedded within the next node selection component. TbTC was simulated using ns-2, and the performance of a 30-node network before and after the algorithm was applied was compared. Packet delivery ratio, delay, routing protocol overhead and power consumption were used as the simulation parameters. The neighbour control process was found to significantly reduce the number of hops taken by the token to visit each network node at least once, shortening the token traversal by 37.5%. Based on the simulation results, TbTC demonstrates the benefits that can accrue from the use of tokens in topology control, while also highlighting the drawbacks of creating uni-directional links in wireless multi-hop networks that utilise the IEEE 802.11 standard.
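Topology control by transmit power adjustment is commonly framed as choosing the lowest power that still preserves a target neighbourhood, and a token can serialise which node performs this step next. The sketch below illustrates that generic idea only; the function, its parameters and the neighbour-count callback are assumptions for illustration and are not the actual TbTC components.

```python
def adjust_transmit_power(power_levels, neighbours_at, k_target):
    """When a node holds the topology-control token it picks the lowest transmit
    power that still preserves at least k_target neighbours, then passes the token.

    power_levels: iterable of available power settings
    neighbours_at: function power -> number of reachable neighbours at that power"""
    levels = sorted(power_levels)
    for p in levels:
        if neighbours_at(p) >= k_target:
            return p
    return levels[-1]          # fall back to full power to keep connectivity

# Toy example: the neighbour count grows with transmit power.
reach = {1: 0, 5: 1, 10: 3, 20: 6}
print(adjust_transmit_power(reach.keys(), lambda p: reach[p], k_target=3))  # 10
```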
- Enterprise component architecture for mobile commerce services (2004). Kunene, Hlengiwe Pinky; Adigun, M.O.
  This research focuses on creating a component-based repository architecture for mobile commerce services, called e-TOCOR, with emphasis on component storage and retrieval. To realize this framework, three tasks were carried out: (i) a model for engineering component-based m-commerce services was defined using existing models; (ii) Universal Description, Discovery and Integration (UDDI) was used to model a component repository; and (iii) a mobile travel reservation application prototype was developed to demonstrate the proposed model. The results obtained were threefold: (i) by evaluating existing component-based architectures, the study showed that m-commerce services are not the same as e-commerce services, and Information Requirement Elicitation (IRE) was adopted as the mechanism for eliciting a request and as a service delivery protocol for end-user mobile commerce services; (ii) the prototype was developed to show how enterprise components can be delivered to mobile devices using the IRE protocol, and it was also shown that the way existing m-commerce services elicit requests is time-consuming, the quickest way being a text message; and (iii) the repository framework was created, emanating from the home-based reference architecture. In conclusion, the proposed repository could not be compared with existing repository architectures because it was not implemented; instead, UDDI was used.
- Evaluating the effect of quality of service mechanisms in power-constrained wireless mesh networks (University of Zululand, 2013). Oki, Olukayode Ayodele; Mudali, P.; Adigun, M.O.
  A Wireless Mesh Network (WMN) is a collection of wireless nodes which can dynamically communicate with one another in a multi-hop manner. This kind of network has received considerable attention as a means of connectivity for community and commercial entities, and its easy deployment and self-management characteristics make it a good choice for rural areas. However, in most developing countries electricity is scarce or unreliable in rural areas, and a candidate solution to the lack of electricity supply is the use of solar/battery-powered nodes. Significant effort has gone into optimizing Quality of Service (QoS) provisioning in WMNs, and many QoS mechanisms at different OSI layers have been proposed. It is, however, not clear how these QoS mechanisms affect the node lifetime and the energy cost per bit of battery-powered WMN nodes: protocols at different layers have varying effects on energy efficiency when subjected to various transmission power levels and payload sizes. The goal of this study was to evaluate how different existing QoS mechanisms affect the operational lifetime of battery-powered WMN nodes. This goal was achieved by evaluating how the connection-oriented (TCP) and connectionless (UDP) transport layer protocols, together with the reactive (AODV) and proactive (OLSR) routing protocols, influence the lifetime of battery-powered nodes under different transmission power levels and payload sizes. The evaluation was carried out using NS-2 simulation and a fourteen-node indoor testbed. The overall results of both the simulation and the testbed experiments show that, for the TCP-based scenarios, TCP with OLSR at maximum transmission power level and maximum payload size outperforms the others in terms of packet delivery ratio, average throughput and average energy cost per bit, while TCP with AODV at minimum transmission power level and maximum payload size outperforms the others in terms of node lifetime. For the UDP-based scenarios, UDP with AODV at maximum transmission power level and maximum payload size outperforms the others in terms of packet delivery ratio, average throughput and average energy cost per bit, while UDP with AODV at minimum transmission power level and maximum payload size outperforms the others in terms of node lifetime. The results of this study also reveal that simulation results only give a rough estimate of real-world network performance; hence, whenever feasible, validating simulation results on a testbed is highly recommended in order to gain a clearer understanding of protocol performance.
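Two of the metrics used in this evaluation, energy cost per bit and node lifetime, reduce to simple arithmetic. The helpers below show one common way to compute them from measured totals; the figures in the example are made up for illustration.

```python
def energy_cost_per_bit(joules_consumed, bytes_delivered):
    """Average energy cost per successfully delivered bit (J/bit)."""
    return joules_consumed / (bytes_delivered * 8)

def estimated_lifetime_hours(battery_wh, avg_draw_watts):
    """Crude node-lifetime estimate from battery capacity and average power draw."""
    return battery_wh / avg_draw_watts

# Example: 120 J spent delivering 5 MB, a node drawing 4 W from a 40 Wh battery.
print(round(energy_cost_per_bit(120, 5_000_000) * 1e6, 3), "uJ/bit")   # 3.0 uJ/bit
print(estimated_lifetime_hours(40, 4), "hours")                        # 10.0 hours
```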
- Evaluation of Mesh Points Placement Schemes for Rural Wireless Mesh Network (RWMN) Deployment (University of Zululand, 2014). Ndlela, Nokubonga Zamandlela Yvonne; Adigun, M.O.; Mudali, P.
  Internet connectivity in most rural African areas has been a major challenge because of the lack of reliable power, the scarcity of network expertise, the expensive installation of network equipment and the high cost of providing Internet connectivity. With the rapid development of wireless technologies, however, the Wireless Mesh Network (WMN) has emerged as a promising networking infrastructure for bridging the digital divide between town and countryside, owing to its low cost, easy deployment and high-speed Internet connectivity. Despite these features, WMN deployments still struggle to achieve optimum network performance, which is affected by a number of factors. One of these factors is choosing optimal positions for Mesh Point (MP) placement in a geographical environment. MP placement problems have been thoroughly investigated in the WMN field, and different research works propose placement schemes that can be used to solve the placement problem. Deploying a WMN requires taking into account the limitations and topology of the terrain; however, none of the existing MP placement schemes have been evaluated and compared using rural settlement patterns in order to judge their suitability for solving the placement problem in such environments. The purpose of this study was to compare the existing MP placement schemes and to recommend the best scheme(s) for rural WMN deployment. This is a twofold objective: the first part involves evaluating Mesh Access Point (MAP) placement schemes and the second involves evaluating Mesh Portal Point (MPP) placement schemes in rural settlement patterns. Four MAP placement schemes were evaluated, namely Hill Climbing, Virtual Force Based (VFPlace), Time-efficient Local Search and Random placement, and four MPP placement schemes were evaluated, namely Incremental Clustering, Multi-hop Traffic-flow Weight (MTW), Grid Based and Random placement. Simulation was done in NS-2, and four rural settlement patterns were considered: nucleated, dispersed, linear and isolated. The experimental evaluation revealed that Hill Climbing is best suited to solving the placement problem in nucleated and dispersed settlement patterns, VFPlace can be applied as a first choice in linear settlement patterns, and Time-efficient Local Search is better for deployment in isolated settlement patterns. As an improvement on existing work on MAP placement schemes, this study focuses not only on optimizing coverage and connectivity among the MAPs but also on other factors such as network throughput, packet delivery ratio and end-to-end delay. To complete the deployment problem, the recommended placement schemes were used as the basis for MPP placement. The study revealed that MTW is best suited to placement in linear and nucleated settlement patterns, Grid Based is best for dispersed settlement patterns, and Incremental Clustering is best for isolated settlement patterns. The performance of the MPP placement schemes was evaluated by measuring and comparing the packet delivery ratio, throughput and end-to-end delay in the network. Random selection of MPs led to poor network performance in all scenarios. Deploying a single MPP improves network performance only in small instances; as the number of MAPs increases, the MPP becomes a bottleneck, hence this study recommends deploying more than one MPP when there are many MAPs in the network. In all scenarios, a good placement scheme should be adaptive to the number of MAPs and MPPs.
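Hill Climbing, one of the MAP placement schemes compared here, can be sketched as a local search that keeps single-site swaps only when they improve coverage. The code below is a toy version under assumed inputs (candidate grid sites, uniformly scattered demand points, a fixed coverage radius); the schemes evaluated in the thesis optimise additional objectives such as connectivity, throughput and delay.

```python
import random

def hill_climb_placement(candidate_sites, demand_points, radius, n_aps, iters=500):
    """Greedy hill climbing: repeatedly swap one access-point site for another
    candidate and keep the move only if it covers more demand points."""
    def coverage(placement):
        return sum(
            any((px - x) ** 2 + (py - y) ** 2 <= radius ** 2 for x, y in placement)
            for px, py in demand_points
        )

    current = random.sample(candidate_sites, n_aps)
    for _ in range(iters):
        trial = current[:]
        trial[random.randrange(n_aps)] = random.choice(candidate_sites)
        if coverage(trial) > coverage(current):
            current = trial
    return current

# Toy scenario: a 100 m x 100 m area, grid candidate sites, 50 scattered homes.
sites = [(x, y) for x in range(0, 100, 10) for y in range(0, 100, 10)]
homes = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(50)]
print(hill_climb_placement(sites, homes, radius=25, n_aps=3))
```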
- Evaluation study of Leader Selection Algorithms in Wireless Mesh Networks (University of Zululand, 2015). Zulu, Nkosinathi Hendrick; Adigun, M.O.
  A Wireless Mesh Network (WMN) is a group of wireless devices that can dynamically communicate with one another in a multi-hop manner. WMNs are gaining attention and recognition as a scalable substitute for wired network infrastructure, and their rising popularity has necessitated the development of security mechanisms. The newly ratified IEEE 802.11s mesh networking standard specifies a security mechanism that builds upon the IEEE 802.11i security standard for wireless local area networks. The IEEE 802.11s security mechanism requires the existence of a single Mesh Key Distributor (MKD) which assists the authentication of new nodes joining the network. However, there is no mechanism for selecting a new MKD if the current MKD is unreachable or has failed, a scenario that can occur because of the dynamic nature of WMN backbone topologies, wireless link variability in deployed networks, and battery depletion in battery-powered WMNs. MKD selection in WMN deployments can be performed by adapting Leader Selection Algorithms (LSAs) from wireless sensor networks. The goal of this research was to evaluate existing leader selection algorithms in the context of selecting an MKD for WMNs. This goal was achieved by evaluating existing wireless ad-hoc network leader selection algorithms: energy-based and position-based LSAs were evaluated in the context of MKD selection and subjected to different leader selection rounds and network sizes. The evaluation shows that, among the energy-based LSAs, the heterogeneous-based LSAs (EECS and UDAC) outperform the homogeneous-based LSAs (LEACH and EECHA) in terms of communication overhead cost and energy consumption rate, whilst the homogeneous-based LSAs outperform the heterogeneous-based LSAs in terms of leader selection delay. The evaluation further revealed that, for the position-based LSAs, the event-based algorithms (EDC and EECED) outperform the distance-based algorithms (EDBCP and EDBC) in terms of communication overhead cost, leader selection delay and energy consumption rate.
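An energy-based leader (MKD) selection rule can be illustrated very simply: among reachable candidates, pick the node with the most residual energy. The function, data shape and tie-breaking below are illustrative assumptions, not the specific LEACH/EECS-style protocols evaluated in the thesis.

```python
def select_mkd(nodes, current_mkd=None):
    """Pick a new Mesh Key Distributor when the current one fails or is unreachable.

    nodes: {node_id: {"residual_energy": joules, "reachable": bool}}
    Energy-based rule: among reachable nodes (excluding the failed MKD), choose
    the one with the highest residual energy; ties break on node id."""
    candidates = {
        n: info for n, info in nodes.items()
        if info["reachable"] and n != current_mkd
    }
    if not candidates:
        raise RuntimeError("no reachable candidate for the MKD role")
    return max(sorted(candidates), key=lambda n: candidates[n]["residual_energy"])

nodes = {
    "A": {"residual_energy": 3200, "reachable": True},
    "B": {"residual_energy": 4100, "reachable": True},
    "C": {"residual_energy": 5000, "reachable": False},
}
print(select_mkd(nodes, current_mkd="A"))   # 'B'
```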
- Guaranteed real-time delivery of context-aware messages in publish/subscribe system (2007). Shabangu, Petrus Sipho; Adigun, M.O.
  The publish/subscribe communication paradigm is becoming popular because of its decoupling of participants and its filtering of the message stream during dissemination. In a publish/subscribe communication model, a subscriber is decoupled from the publisher in the sense that the publisher and the subscriber are physically separated from each other. This dissertation reports an ongoing attempt to guarantee real-time delivery of messages in publish/subscribe systems in a mobile environment where subscribers continuously change location. It focuses on ensuring that messages are delivered in time and space, that stale messages are not delivered, and that subscribers can select priorities based on their preferences. The research was conducted by first surveying the theory, which gave rise to the theoretical framework used for the literature review. Second, the research formulation activity consisted of model building, demonstration of the crafted model on a prototyped scenario in which the proposed message delivery architecture was exercised, and finally a simulation that tested the reliability of the message delivery model. The results obtained from this research showed that the resulting message delivery architecture ensures that messages are delivered to registered subscribers based on subscriber preferences.
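As a rough illustration of delivering only timely messages while honouring subscriber priorities, the toy broker below drops expired messages and dispatches pending ones highest-priority first. The class and method names, the TTL-based staleness check and the in-process callbacks are invented for illustration; the dissertation's architecture targets distributed, mobile subscribers.

```python
import heapq
import time

class DeadlineBroker:
    """Toy publish/subscribe sketch: messages carry an expiry time and a priority;
    stale messages are dropped and pending ones are delivered highest-priority first."""

    def __init__(self):
        self.queue = []          # (neg. priority, publish_time, topic, payload, expiry)
        self.subscribers = {}    # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, payload, priority=0, ttl=5.0):
        expiry = time.time() + ttl
        heapq.heappush(self.queue, (-priority, time.time(), topic, payload, expiry))

    def deliver_pending(self):
        while self.queue:
            _, _, topic, payload, expiry = heapq.heappop(self.queue)
            if time.time() > expiry:
                continue                       # stale message: never delivered
            for callback in self.subscribers.get(topic, []):
                callback(payload)

broker = DeadlineBroker()
broker.subscribe("traffic", lambda msg: print("received:", msg))
broker.publish("traffic", "congestion on N2", priority=2, ttl=10.0)
broker.deliver_pending()
```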