Computer Science
Browsing Computer Science by Title
Now showing 1 - 20 of 62
- Item: A blockchain-based firmware update architecture for Long-Range Wide Area Network (LoRaWAN) (University of Zululand, 2022) Mtetwa, Njabulo Sakhile
Network security is increasingly becoming a critical and continuous issue due to technological advancements. These advancements give rise to several security threats, especially when everything is connected to the Internet. Security in IoT still requires a great deal of research, and it is receiving considerable attention in both industry and academia. IoT devices are designed for special use cases, and most are resource-constrained and lack important security features. This lack of security features enables attackers to compromise IoT devices and retrieve sensitive information from them. One of the challenges in IoT is ensuring the security of firmware updates on devices that are active on the Internet. This is a challenge because it is difficult to incorporate traditional security techniques given the memory and processing limitations of constrained IoT devices. Thus, IoT devices remain vulnerable and open to security threats. Device manufacturers are required to release firmware updates in response to exposed vulnerabilities, to fix bugs, and to improve the functionality of their devices. However, delivering a new version of the firmware securely to affected devices remains a challenge, especially for constrained devices and networks. This study aims to develop an architecture that utilizes Blockchain and the InterPlanetary File System (IPFS) to secure firmware transmission over a low-data-rate and constrained Long-Range Wide Area Network (LoRaWAN). The proposed architecture focuses on resource-constrained devices and ensures confidentiality, integrity, and authentication through symmetric algorithms, while providing high availability and eliminating replay attacks.
To demonstrate the usability and applicability of the architecture, a proof of concept was developed and evaluated using low-powered devices and symmetric algorithms. The experimental results show that HMAC-SHA256, one of the symmetric algorithms utilized in the firmware update process, consumes less memory than the CMAC algorithm. When updating 5 kB of firmware, HMAC consumed 6.9 kB of RAM whereas CMAC consumed 7.3 kB. The memory consumption results (RAM and flash) imply that MAC algorithms are adequate for providing security on constrained, low-powered devices. This conclusion is premised on the fact that the memory required does not exceed the memory of the low-powered device, thus making the proposed architecture feasible for constrained and low-powered LoRaWAN devices.
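The MAC-based integrity and authentication check at the heart of such an update process can be illustrated with a minimal sketch; the key, the 5 kB stand-in image, and the function names here are hypothetical illustrations, not drawn from the thesis's implementation.

```python
import hashlib
import hmac

def mac_firmware(firmware: bytes, key: bytes) -> bytes:
    # Compute an HMAC-SHA256 tag over the firmware image with a pre-shared key.
    return hmac.new(key, firmware, hashlib.sha256).digest()

def verify_firmware(firmware: bytes, key: bytes, tag: bytes) -> bool:
    # compare_digest performs a constant-time comparison, resisting timing attacks.
    return hmac.compare_digest(mac_firmware(firmware, key), tag)

# Hypothetical example: a 5 kB all-zero image and a device-specific key.
key = b"device-specific-pre-shared-key"
image = bytes(5 * 1024)

tag = mac_firmware(image, key)
assert verify_firmware(image, key, tag)                 # untampered image accepted
assert not verify_firmware(image + b"\x01", key, tag)   # tampered image rejected
```

A symmetric construction like this needs only a hash function and a shared key, which is consistent with the low RAM footprint the abstract reports for constrained devices.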
- Item: An authentication framework for securing GUISET infrastructure (University of Zululand, 2011) Mhlongo, Noluthando Brilliant; Adigun, M.O.
Grid computing has emerged as a progressive and substantial area that has gained considerable attention from academic and business environments. It provides a large, flexible resource-sharing environment in which resources and services are spread across distributed administrative domains. Although many corporations benefit from grid computing technology, SMMEs still face difficulties in exploiting these capabilities because of their limited understanding of the benefits that e-commerce enabling technology can provide. GUISET was developed to enable the integration of skills and resources by SMMEs so that they can share and collaborate among themselves and with independent business associates. GUISET technology, being based on an open resource-sharing paradigm and providing unlimited access to end users, comes with a great security challenge. The widely deployed technology for securing these shared grid resources is the public key infrastructure (PKI), which proves the identities of grid users through the use of certificates. GSI is one implementation of PKI technology and delivers essential requirements of a security infrastructure, including delegation services, mutual authentication and single sign-on. However, GSI has been found to suffer from poor scalability because of its use of certificates. To address this challenge, an authentication infrastructure for GUISET was developed. The model employs distributed certification authorities that grant identification to users requiring access to GUISET resources. An implementation was carried out to evaluate the performance of the proposed model.
The results showed that the newly developed authentication framework for securing GUISET resources provides better scalability than the previously developed grid security infrastructure.
- Item: Belief change in probabilistic knowledge representations for open and dynamic computing environments (University of Zululand, 2020) Jembere, Edgar
Theory refinement in Probabilistic Knowledge Representations is the task of updating the graphical network structure in light of observations inconsistent with the current network structure. However, in the literature on Belief Change in Probabilistic Knowledge Representations, theory refinement is only thought of as a change in the model parameters when data consistent with the network structure is observed. Such Belief Change is not rich enough to capture the semantics of Belief Change in dynamic domains. In dynamic domains, the actual network structure at any given time is unknown and unobservable; only the data emitted from the domain is observable. Furthermore, the Belief Change model needs to cater for both changes necessitated by the correction of incorrect beliefs (Belief Revision) and changes necessitated by changes in the domain (Belief Update). This thesis hypothesised a Belief Change Meta-Model for Bayesian Network (BN) based Knowledge Representation in dynamic domains, and subsequently used the meta-model to define a Unified Belief Change Model for Bayesian Networks that caters for both Belief Revision and Belief Update of the Bayesian Network structure. The Belief Change Model was conceptualised by first modelling the evolving Bayesian Network structure as a dynamical system whose impetus for change is driven by the occurrence of events in the domain. The derived Unified Belief Change Model was formally validated by analogy, using the Qualitative Belief Change Model for dynamic environments and the theory of Partially Observable Markov Decision Processes (POMDP). It was also proven that the proposed Belief Change model meets the postulates for revision of p-functions.
Apart from arguing the efficacy of the proposed Unified Belief Change Model from a theoretical standpoint, this thesis also provides empirical evidence for the same. A Belief Change operator based on the proposed Belief Change Model, the Unified Belief Change Operator for Bayesian Networks (UBCOBaN), was developed. The operator was then used to illustrate how the model achieves Belief Change, using a synthetic example with one iteration of Belief Change. Furthermore, the operator was implemented in Java and used to evaluate the efficacy of the model in both propositional Bayesian Networks and Multi-Entity Bayesian Networks (MEBN). MEBN is the variant of First-Order Probabilistic Logic (FOPL) this research chose for evaluating the proposed model for Belief Change in First-Order Probabilistic Knowledge Representations. The benchmark propositional Bayesian Networks used in the study were the ASIA, ALARM, HAILFINDER, HEPAR II and ANDES Bayesian Networks. The benchmark relational datasets considered for MEBN were the CORA, WebKB, UW std and Financial std datasets. The results obtained showed that the proposed model adheres to the principle of minimal change (the principle of information economy) better than the classical Search-and-Score algorithm on all of the afore-mentioned propositional Bayesian Networks and all of the datasets considered for MEBN. The model was also found to be at least as agile as the classical Search-and-Score algorithm in instances where data inconsistent with the assumed network structure was observed. This was observed for all the benchmark propositional Bayesian Networks used in this study, and all the relational datasets considered for MEBN.
The results obtained for an investigation into whether Belief Update improves the rationality of the proposed Unified Belief Change Model on propositional Bayesian Networks showed that the Unified Belief Change Model with Belief Update outperforms the one without Belief Update. However, the performance difference was not statistically significant at the 95% confidence level.
- Item: Change impact analysis model-based framework for service provisioning in a grid environment (2009) Ekabua, Obeten Obi; Adigun, M.O.
Grid-based Utility Infrastructure for Small, Medium and Micro Enterprise (SMME) Enabling Technology (GUISET) is an architecture in which distributed applications must interact across platforms via the concept of services. The building blocks of a GUISET application are business process services, which are expected to undergo maintenance just like any other software components. The basic operation of software evolution is change, making change inevitable in software development. Throughout the entire lifecycle of a software system, from conception to retirement, things happen that require the system to be changed. Changes are required to fix faults and to improve or update products and services. Under these conditions, it is crucial to have full control over, and knowledge about, what changes mean to the system and its environment; otherwise, changes might lead to deterioration and a multitude of other problems. The effectiveness and efficiency with which a company can predict or control these changes can have a significant impact on its competitiveness. In a complex product, where the constituent parts and systems are closely dependent, changes to one item of a system are highly likely to result in a change to another item, which in turn can propagate further. The activity of assessing the implications of realizing a change is termed change impact analysis. One glaring issue is how to anticipate changes and structure a service in such a way that changes are discovered early, avoiding change propagation. Services are likely to be exposed to many unanticipated changes during their lifetime, and support for unanticipated changes remains an important open research goal.
Two issues are involved here: (1) the assessment of the consequences of altering (or not altering) the functionality of the service, and (2) the identification of the service dependencies that are affected by the change. Traditionally, research on change impact analysis has mainly focused on technical aspects such as traceability analysis and software code change propagation. Contributing to and extending existing knowledge, the research presented in this thesis is exploratory in nature, with an overarching emphasis on monitoring change propagation during service provisioning, on the understanding that software is now consumed as services. The main goal is to advance the current state of change impact analysis by evolving a model-based framework for validating, analyzing and monitoring change propagation in a typical grid service provisioning environment (GUISET). The resulting framework consists of two associated formal models: a Change Impact Analysis Factor Adaptation mechanism, and a Fault and Failure Assumption Model for service provisioning in the GUISET grid environment. As part of the empirical validation of the framework, we graphically represented the relationship between change and impact, showing that as the number of changes increased, the impact also increased. This is because the number of changes was driven by the number of dependent services: if the dependencies were high, the number of changes would be high, and consequently the impact due to fault propagation would be high.
- Item: Conceptual design of an ontological approach to personalising GUISET portal (University of Zululand, 2014) Gumbi, Nonhlanhla Melody; Adigun, M.O.; Jembere, E.
Searching for and finding specific information in web-based information systems has become tedious and time-consuming due to information overload on the web. The current adaptation and personalisation techniques are gradually becoming inadequate as the information available on the web grows exponentially. Measures have been taken towards improving the current solutions. The Generic Adaptation Framework (GAF) has been proposed as a standard for building personalised web-based systems; it identifies the standard adaptation components and how the adaptation process should be done. The components of the GAF overlay model are only conceptualised to interact at the syntax level, which limits interoperability between the components and, consequently, the quality of the personalisation results (recommendations). This work proposes semantic interaction between the GAF modelling components, which will not only improve the interaction but also enhance the overall adaptation process. To enhance personalisation, this work introduced ontologies to model the knowledge about the components, and ontology mapping to support meaningful inter-component interaction. The proposed solution, Ontology-based GAF (O-GAF), was applied in a recommender system and compared with a classical GAF recommender system. The experiments carried out showed that O-GAF performs better than the classical GAF in terms of recommendation accuracy.
- Item: Context-aware service discovery for dynamic grid environments (2010) Sibiya, Mhlupheki George; Xulu, S.S.; Adigun, M.O.
The advent of the Service Oriented Computing (SOC) paradigm, web services and grid services can help address the hardware and software acquisition problems of resource-constrained enterprises such as SMMEs by providing computing resources on demand. In a Service Oriented Architecture (SOA), all participants, including resource-constrained mobile devices, are expected to participate not only as service consumers but also as service providers. However, service discovery in such environments is hindered by the fact that existing standards do not fully support dynamic service discovery at run time. There is, therefore, a need for a scalable and robust discovery mechanism that also minimizes false positives and false negatives in the discovered services. This is still an issue for the current widely adopted service discovery standards. Factors contributing negatively to service discovery range from the matchmaking approaches to the discovery mechanisms. Many research efforts have attempted to address these issues in service discovery, but they still leave a lot to be desired. In this research work we address service discovery shortcomings by proposing an approach that utilizes both pure syntactic and semantic matchmaking techniques. This is realized through our proposed Context Aware Service Discovery Architecture (CASDA) framework. CASDA proposes a description of services that allows the annotation of both the dynamic and static contexts of a service. It also provides a framework that utilizes requester context, service context and semantics without trading off scalability. CASDA builds upon OWLS-MX's capability to utilize both syntactic and logic-based techniques for service matching.
It also takes into account the requester and service context in fulfilling the needs of the requester during the service discovery process. The results obtained showed an improvement in scalability and recall when compared with OWLS-MX. However, there is a slight cost in precision.
- Item: A Contracts-Based Model for managing web services evolution (University of Zululand, 2015) Chiponga, Kudzai; Adigun, M.O.; Tarwireyi, P.
Service-based systems need to be designed in such a way that they can accommodate the volatility of the environment in which they operate. Failure to evolve service-based systems will result in service providers losing their competitive edge. Additionally, failure to evolve these systems properly will have far-reaching negative impacts on all stakeholders, especially if disruptions are allowed to occur. However, evolving service-based systems in a non-disruptive manner is still a challenge to both the research community and industry. Over recent years, many organisations have been adopting technology-based service solutions in what has become a technology-driven business environment, with web services at the forefront as a business enabler. This dissertation focuses on developing a contracts-based model for managing the evolution of web services in a manner that is consistent and transparent to business partners. The design science research methodology was used to architect a model that can alleviate the challenge of evolving service-oriented systems. It was found that technical web service contracts can be leveraged to manage and maintain consumers while evolution is carried out. A contracts-based service proxy was developed as an instantiation of the model. Experiments demonstrated that this proxy maintained compatibility between evolving services and existing consumers. The model developed in this research presents a cost-effective solution to managing the evolution of web services, reusing the same computing resources and cutting down on the development time needed to evolve web services.
Even though the proxy introduced processing overheads, the resultant loss in service throughput was negligible, especially considering the amount of time and effort required to evolve services manually.
- Item: Coordination-theoretic framework for sharing e-business resources (2008) Aremu, Dayo Reuben; Adigun, M.O.
Market-based models are frequently used for resource allocation on the computational grid. However, as the size of the grid grows, it becomes more difficult for customers to negotiate directly with all the grid resource providers. Middle agents are introduced to mediate between providers and customers so as to facilitate the resource allocation process. The most frequently deployed middle agents are matchmakers, brokers and market-makers. The matchmaking agent finds candidate providers who can satisfy the requirements of the customer, after which the customer negotiates directly with the candidates. Broker agents mediate the negotiation with the providers in real time. The market-maker acquires resources and resource reservations in large quantities and resells them to customers. The purpose of this study was to establish a negotiation framework that enables optimal allocation of resources in a grid computing environment, such that clients are allowed on-demand access to a pool of resources, collaboration is enhanced among resource providers, and cost saving and efficiency are ensured in resource allocation. The objectives to realize this purpose were: first, to design an appropriate negotiation model that could be adopted to achieve optimal resource allocation; second, to determine an effective search strategy that could be employed to reach a Pareto-efficient negotiation solution; and third, to adopt a negotiation strategy or tactics that negotiators could use to arrive at optimal resource allocation.
In order to achieve the goals and objectives set for the study, the following methodologies were used: (i) a critical survey of existing economic approaches and models for negotiating grid resources was conducted; (ii) the knowledge gained from the literature surveyed was used to construct a novel model, called the Co-operative Modeler, for mediating grid resource-sharing negotiation. Mathematical notation was used: first, to construct a theoretical model for the allocation of resources to clients' tasks; second, to present a novel Combinatorial Multi-Objective Optimization model (CoMbO) by modeling the negotiation offers of agents as a multi-objective optimization problem; and third, to present Genetic and Bayesian Learning algorithms for implementing the model presented; (iii) an implementation prototype of the Co-operative Modeler was developed by, first, implementing the Co-operative Modeler to mimic a real-world negotiation situation and, second, using time-dependent negotiation tactics to evaluate the negotiation behaviour of the co-operative agents. The Co-operative Modeler has been shown to guarantee: (i) scalability in the number of users, i.e., multiple users can access a virtualized pool of resources in order to obtain the best possible overall response time by maximizing utilization of the computing resources; (ii) enhanced collaboration, i.e., promoting collaboration so that grid resources can be shared and utilized collectively, efficiently and effectively to solve compute-intensive problems; (iii) improved business agility, i.e., decreasing the time to process data and delivering quicker results; and (iv) cost saving, i.e., leveraging and exploiting the unutilized or underutilized power of all computing resources within a grid environment.
- Item: Database-as-a-service integration with ad hoc mobile cloud-powered GUISET (University of Zululand, 2017) Fakude, Siphelele C.; Mba, I.N.; Adigun, M.O.
The World Wide Web has advanced considerably in recent years. These advancements have also been extended to mobile environments, where mobile devices are now capable of most of the operations that were initially intended for traditional computers. Due to the resource poverty facing mobile devices, i.e., their limited storage capacity, processing power and battery power, these advanced capabilities come at a cost. Because of this challenge, computing paradigms such as Mobile Cloud Computing (MCC) have been introduced in an attempt to mitigate it. With MCC, most of the processing and storage is moved from the mobile device to the cloud server, thus conserving the device's resources. For this to be realisable, an Internet link between the mobile device and the cloud is required. However, due to the fluctuation of wireless links, always-on Internet connectivity between the cloud and mobile devices is not achievable. Internet disconnections result in cloud resources being unreachable by mobile users, which has a negative impact on most applications. In an attempt to mitigate this connectivity-outage issue between mobile devices and the cloud, this work proposes the use of a cache to maintain data availability to mobile clients who were initially using the DBaaS model to store their data. The proposed cache model is used only during Internet disconnections, and uses a prefetching mechanism to periodically retrieve its data from the central cloud. This model applies in Ad hoc Mobile Clouds (AMC), where a group of disconnected mobile devices collaborate to form a community cloud.
Furthermore, the integration of an AMC with the Grid-based Utility Infrastructure for SMME Enabling Technology (GUISET) is proposed to achieve the smooth running of offline clients. This work further develops a prototype of the model, and the evaluation results show that our model performs better in terms of maintaining data availability to occasionally disconnected clients.
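The cache-plus-prefetch idea described above can be sketched roughly as follows; the class names, the in-memory dictionary cache, and the fake cloud store are all illustrative assumptions rather than the work's actual prototype.

```python
class CloudUnreachable(Exception):
    """Raised when the Internet link to the central cloud is down."""

class CachedDbaasClient:
    """Serve reads from the cloud when connected, from a local cache otherwise."""

    def __init__(self, cloud):
        self.cloud = cloud
        self.cache = {}  # hypothetical in-memory cache of recently fetched records

    def prefetch(self, keys):
        # Periodically pull hot records from the central cloud into the cache.
        for key in keys:
            try:
                self.cache[key] = self.cloud.read(key)
            except CloudUnreachable:
                return  # link is down; keep whatever is already cached

    def read(self, key):
        try:
            value = self.cloud.read(key)
            self.cache[key] = value  # refresh the cache on every successful read
            return value
        except CloudUnreachable:
            # Fall back to the (possibly stale) cached copy during an outage.
            if key in self.cache:
                return self.cache[key]
            raise

class FakeCloud:
    """Minimal stand-in for the central DBaaS store, with a connectivity switch."""
    def __init__(self, data):
        self.data, self.online = data, True
    def read(self, key):
        if not self.online:
            raise CloudUnreachable(key)
        return self.data[key]

cloud = FakeCloud({"order:1": "pending"})
client = CachedDbaasClient(cloud)
client.prefetch(["order:1"])
cloud.online = False                        # simulate an Internet disconnection
assert client.read("order:1") == "pending"  # served from the cache while offline
```

The key design choice mirrored here is that the cache is consulted only when the cloud is unreachable, so connected clients always see the authoritative copy.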
- Item: Design of a communication infrastructure for GUISET services based on multiple enterprise service buses (University of Zululand, 2013) Shezi, Themba; Jembere, E.; Oladosu, J.
In recent years, Service Oriented Architecture (SOA) has become a paradigm for enabling more efficient and flexible business processes in a service-based economy. The significance of this paradigm has led many organizations to move their businesses online and make them available as services, so that they can be accessed ubiquitously by anyone connected to the network. The idea is to increase the level of resource sharing and collaboration among geographically dispersed individuals and organizations. One successful SOA implementation that has recently received a lot of attention is the Enterprise Service Bus (ESB). The ESB provides key infrastructure that supports the guaranteed event handling, durable messaging and data transformation capabilities needed by SOA environments. The success of the ESB has resulted in many ESB products being implemented and offered as both commercial and open-source integration solutions. However, these products take different approaches to achieving ESB capabilities. Selecting the most suitable ESB therefore becomes a challenging task, not only because there are many factors to consider in this selection, but also owing to the relationships between these factors and the requirements of a particular integration scenario. Many research efforts have attempted to assist in ESB selection, but they only consider the evaluation of ESB products against given integration requirements. These evaluations are only useful when there is an ESB product that best supports all the integration requirements of a given environment. This is hardly ever the case, because ESBs perform well in some capabilities and worse in others.
It is, therefore, believed that multiple ESBs can be integrated to get the best of individual ESBs and give better performance than a single ESB. Against this backdrop, this work considered GUISET integration requirements and investigated the validity of the above-mentioned belief by integrating multiple ESBs to work together as a federation. A federation of ESBs allows each ESB to be used for the capability it best supports. The key capabilities investigated in this study are Service Discovery and Composition. An investigation was carried out to find out which of the three ESBs considered (ServiceMix, Mule and JBoss) best supports each of the afore-mentioned capabilities. The results showed that ServiceMix has the best support for Service Composition, while JBoss has the best support for Service Discovery. These findings were then used for an empirical evaluation of the Directly Connected, Hub-Spoke and Brokered ESB federation patterns, with each ESB providing to the federation the capability it best supports. The Directly Connected ESB federation pattern outperformed the other patterns. We then compared the performance of the Directly Connected ESB federation and the ServiceMix ESB to determine whether an ESB federation has better performance than a single ESB. The results showed that the ESB federation has better performance in terms of response time and throughput than a single ESB.
- Item: Design of an IP address Auto-Configuration scheme for Wireless Multi-Hop Networks (2008) Mutanga, Murimo Bethel; Xulu, S.S.; Adigun, M.O.
The importance of wireless ad-hoc networks (e.g., wireless mesh networks) in community and commercial connectivity cannot be overstated in view of the benefits associated with such networks. An ad-hoc network must assemble itself from whatever devices happen to be nearby, and adapt as devices move in and out of wireless range. High levels of self-organization minimize the need for manual configuration; in essence, self-organization provides out-of-the-box functionality such that very little technical expertise is required to set up a network. However, efficiently providing unique IP addresses in ad-hoc networks is still an open research question. The goal of this study on wireless multi-hop networks was to develop algorithms for IP address auto-configuration. Among other problems, these algorithms should address the following: achieving high levels of address uniqueness without compromising on latency and communication overhead. To achieve the overall goal of this research, we proposed changes to the traditional Duplicate Address Detection (DAD) procedure, resulting in the Wise-DAD protocol. We introduced state information maintenance, with state information passively collected and synchronized. Passively collecting state information reduced the number of DAD trials, thereby reducing latency and communication overhead. Simulations were carried out in NS-2 to test the performance of the proposed protocol, followed by a comparative analysis in which Wise-DAD was compared with the Strong-DAD protocol. Experiments on the effect of network size, node density and node arrival rate on communication overhead, address uniqueness and latency were conducted. Results from the simulation experiments show that Wise-DAD outperforms Strong-DAD on all three metrics used for performance evaluation.
First, Wise-DAD showed better scalability, since it performed better than Strong-DAD as network size was increased. Communication overhead in Wise-DAD was generally low, whilst latency was generally uniform, and the number of IP address duplicates recorded was reasonably low. Second, Wise-DAD was not affected by node arrival rate on any of the three metrics recorded; on the other hand, the number of address duplicates in Strong-DAD decreased as the node arrival rate was increased. Interference significantly affected the communication overhead recorded in Strong-DAD, whereas Wise-DAD was not affected by interference. The number of address conflicts in both protocols showed an inverse relationship to interference. However, the number of conflicts for the two protocols was significantly different: Wise-DAD recorded far fewer address conflicts than Strong-DAD.
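The passive state-collection idea behind the approach described above can be sketched under the simplifying assumptions that addresses are small integers and that overheard traffic is fed in directly; all names here are illustrative and not taken from the Wise-DAD specification.

```python
import random

class MultiHopNode:
    """Sketch: a node that passively tracks addresses overheard on the channel."""

    def __init__(self, pool=range(1, 255)):
        self.pool = list(pool)
        self.seen = set()  # addresses learned from routine traffic, at no extra cost

    def observe(self, address):
        # Passive collection: record every source address overheard in transit.
        self.seen.add(address)

    def candidate(self):
        # Propose only addresses never overheard, so the active DAD probe rounds
        # that follow rarely collide and fewer trials are needed.
        free = [a for a in self.pool if a not in self.seen]
        return random.choice(free) if free else None

node = MultiHopNode()
for addr in (10, 11, 12):   # traffic overheard from three neighbours
    node.observe(addr)
choice = node.candidate()
assert choice not in {10, 11, 12}   # never proposes a known-occupied address
```

The point of the sketch is the trade the abstract describes: state learned passively narrows the candidate set before any probe is sent, which is where the reduction in DAD trials, latency and overhead comes from.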
- Item: Design of partnership-centred information repository (2007) Okharedia, Joseph Ayangbeso; Adigun, M.O.
Organizations require the development of information repositories to improve their products and services. With the rising interest in the semantic web, research efforts have been geared towards the integration of distributed information sources into an interoperable knowledge-base environment. The objective of this research was to design an information repository to facilitate the integration of operational data from various sources into a single, consistent knowledge base that supports analysis and decision making within and across different organizations. The research activities that resulted in the proposed information repository consisted of two major steps: (i) the design of an information repository aimed at promoting the usage of e-commerce through the Internet among Small and Medium Enterprises (SMEs) and larger organizations, and (ii) the implementation of a repository architecture to facilitate knowledge sharing in response to queries on distributed metadata, with easily interoperable sources merged into one semantic entity. The SMEs and the larger organizations are treated as partners in the design of this information repository. Focusing on SMEs and larger organizations is in line with the South African Government's efforts to promote Black businesses, as the SME sector is considered one of the most viable sectors with potential for economic growth. The functionality of the information repository is partitioned into a set of services which include: (i) registration of SMEs' profiles, (ii) service advertisement and (iii) service delivery. These services provide the mechanism for storing, retrieving and updating information. Richards Bay Minerals, an organization in the mining industry that has outsourced part of its core business to a junior partner, was used as a case study to illustrate the model.
XML, the Extensible Markup Language, provided the descriptive language for the exchange of information between the different organizations via the web, and the Java programming language was adopted for the implementation. A performance evaluation of the information repository, based on a set of parameters such as usability, scalability, functionality and collaboration, was carried out with the aim of determining areas where improvements could be made in providing a solution to the design of information repositories for the web environment.
- ItemDesign of the service-based architectural framework for the South African National Park System(2004) Khumalo, Themba Cyril; Adigun, M.O.Responding to the challenge of rendering competitive services to the customers of the South African National Parks led to the development of architectural mechanisms for service provision that take advantage of dynamic web protocol standards and frameworks. This was done in three steps: (1) evaluation of the existing IT-level support for providing nature-conservation information and marketing National Park services; (2) investigation of mobile-commerce services to create customer value that promotes customer loyalty based on enterprise values; and (3) development of a distributed service-based architectural framework for the South African National Park system based on a service-oriented architectural model. The building blocks of the architecture (publishing, registration, personalization of services, etc.) serve as the basis for designing system services that disseminate nature-conservation information and market the services of the national parks. A prototype of the information service system was used to demonstrate the usability of the architectural framework. The architecture proposed in this work is a guide and should provide the basis for an ICT infrastructure that responds to the quest for modernising the national parks' information system.
- ItemDevelopment of a cloud based privacy monitoring framework for the health sector(University of Zululand, 2014) Shabalala, M.V; Tarwireyi, P; Adigun, M.OCloud computing is growing in popularity due to its ability to offer dynamically scalable resources provisioned as services, regardless of user location or device. However, moving data to the cloud means that control of the data lies more in the hands of the cloud provider than of the data owner. This remains a great challenge that hinders cloud computing from achieving its full potential, because the storage and processing of private information is done on remote machines that are neither owned nor managed by the cloud consumers. This raises significant security and data-privacy concerns that impede the broader adoption of cloud computing and compromise its vision as a new IT procurement model. To address this challenge, a privacy monitoring framework for the cloud computing environment was developed in this work, following the design science methodology for information systems. The framework was evaluated using an experimental method, focusing mainly on metrics that assess the satisfaction of users' goals. The quantitative evaluation entailed usability tests and questionnaires, from which statistical data were gathered and analysed. The results reported in this study show that the developed privacy monitoring framework could help cloud customers monitor the privacy of their personally identifiable information in the cloud. The framework employs an informative event and access-log analyser, developed in this work, which enables customers to track and comprehend how their data is handled in the cloud.
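The event and access-log analyser described above can be illustrated with a short sketch that summarises who accessed a customer's data and how often. The JSON log format, field names and actor identifiers here are hypothetical; the abstract does not specify the analyser's actual log schema.

```python
import json

def summarise_access_log(log_lines):
    """Count (actor, action) pairs from structured cloud access-log
    entries, so a data owner can see how their data was handled."""
    summary = {}
    for line in log_lines:
        event = json.loads(line)
        key = (event["actor"], event["action"])
        summary[key] = summary.get(key, 0) + 1
    return summary

# Hypothetical log entries recording accesses to PII records
log = [
    '{"actor": "billing-service", "action": "read", "record": "pii/42"}',
    '{"actor": "billing-service", "action": "read", "record": "pii/43"}',
    '{"actor": "admin@provider", "action": "export", "record": "pii/42"}',
]
report = summarise_access_log(log)
# report maps ("billing-service", "read") -> 2, ("admin@provider", "export") -> 1
```

A real analyser would also check each access against the customer's stated privacy preferences and flag violations; this sketch shows only the aggregation step.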
- ItemThe development of a cloudlet business model based on stakeholder’s service guarantees(University of Zululand, 2020) Nxumalo, Mandisa NomfundoIn this study, a Service Level Agreement (SLA)-based Cloudlet business model was developed to facilitate the easy deployment of a Cloudlet instance by Small and Medium Enterprises (SMEs) while ensuring service guarantees at the edge. The model was developed because existing Cloudlet business models do not assure SMEs that deploying a Cloudlet can yield the desired profits. The model consists of three role-players or stakeholders, namely the Cloudlet consumer, the Cloudlet owner, and Internet Service Providers (ISPs) or Cloud Providers. The roles and interactions between the role-players are captured in an SLA, which facilitates the provisioning and management of Cloudlet resources to ensure service guarantees and avoid deployment failures. The management and provisioning of Cloudlet resources can result in high network throughput, improving the consumer's Quality of Experience (QoE). The SLA defines the service description, service objectives, cost and penalties, and is imposed on consumers through a disclaimer page. A feasibility study of the developed model, based on a coffee-shop scenario, was conducted using a Cost-Benefit Analysis (CBA) tool. The findings show that a coffee shop can gain about 164 cents per rand spent on Cloudlet operational costs over a period of 3 years, which can help SMEs gain financial stability and profit. To demonstrate the effectiveness of an SLA on the Cloudlet business model, a coffee-shop-based SLA simulation was conducted using the CloudSim Plus tool. The results indicated that the provisioning of efficient resources in the network had a direct impact on the success of the SLA, and the SLA's success in turn affected both service guarantees and the minimisation of operational costs. 
It is therefore concluded that the SLA-based Cloudlet business model proposed in this research ensures that stakeholders' service guarantees are delivered.
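The cost-benefit figure quoted above ("about 164 cents gained per rand spent") is a net-gain-per-rand ratio, which a CBA tool computes from projected cash flows. The sketch below shows the arithmetic with hypothetical 3-year cash flows; the actual figures from the study are not reproduced here.

```python
def net_gain_per_rand(benefits, costs, discount_rate=0.0):
    """Net benefit earned per rand of cost over the analysis period.

    benefits, costs: per-year cash flows in rand (hypothetical values);
    discount_rate optionally discounts later years to present value.
    """
    pv = lambda flows: sum(f / (1 + discount_rate) ** t
                           for t, f in enumerate(flows))
    total_benefit, total_cost = pv(benefits), pv(costs)
    return (total_benefit - total_cost) / total_cost

# Hypothetical 3-year cash flows for a coffee-shop Cloudlet deployment
ratio = net_gain_per_rand(benefits=[40_000, 55_000, 70_000],
                          costs=[30_000, 15_000, 15_000])
# ratio == 1.75, i.e. about 175 cents gained per rand of operational cost
```

A ratio greater than zero indicates the deployment more than recovers its operational costs over the period, which is the criterion the feasibility study relies on.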
- ItemA dynamic and adaptable system for service interaction in mobile grid(2007) Moradeyo, Otebolaku Abayomi; Adigun, M.O.Mobile and pervasive computing, with its distinctive promise of providing services anywhere and at any time, has been at the centre of major computing research in recent times. Device resource poverty and network instability have been the main reasons behind the unsuccessful use of handheld technology for service request and delivery. However, the interaction of mobile service components can be adapted to improve the quality of service experienced by service consumers. Content, user-interface and other adaptation mechanisms have been explored, but these have not provided the needed service qualities. One of the challenges of designing an adaptable system is making adaptation decisions. This dissertation therefore presents a dynamic and adaptable system for service interaction. A context-aware, utility-based adaptation model that uses a service reconfiguration pattern to effect adaptation based on context was developed. It was assumed that developers of mobile services design services with variants that can be selected at runtime to fit the prevailing context of the environment; the variants differ in their required context utilities. The selection of a service variant is based on a heuristic algorithm developed for this purpose. A prototype of the model was built to validate the concept. Experiments were then conducted to evaluate the proposed model, specifically to measure the interaction adaptation quality, the overall response time with and without adaptation, and the effect of a service consumer's preference for a given variant on the adaptation process. The results showed that although the adaptation process introduces additional overhead, seen as variation in response time, adapting service interaction is beneficial. 
The overall response time initially increased with the number of service variants, owing to the overhead of the adaptation process; as the number of variants grew further, however, the response time fell sharply and then became steady, showing that adaptation can help reduce service response time. Adaptation quality, on the other hand, degraded as the number of service variants increased. The lesson learnt was that adaptation can reduce overall response time and improve the service quality perceived by service consumers.
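Utility-based variant selection of the kind described above can be sketched as scoring each variant against the current context and picking the highest scorer. The scoring function, context dimensions and weights below are hypothetical stand-ins, not the dissertation's actual heuristic.

```python
def select_variant(variants, context, weights):
    """Pick the service variant whose required context utilities best
    match the current context (illustrative weighted-sum scoring)."""
    def utility(variant):
        # A variant scores higher on a dimension when the current context
        # meets or exceeds its requirement (capped at 1.0 per dimension).
        return sum(weights[dim] * min(context[dim] / req, 1.0)
                   for dim, req in variant["requires"].items())
    return max(variants, key=utility)

# Hypothetical variants of one service, differing in required utilities
variants = [
    {"name": "rich", "requires": {"bandwidth": 500, "battery": 60}},
    {"name": "lite", "requires": {"bandwidth": 50, "battery": 10}},
]
context = {"bandwidth": 120, "battery": 30}   # current environment
weights = {"bandwidth": 0.7, "battery": 0.3}  # consumer preferences
best = select_variant(variants, context, weights)
# With this context the "lite" variant wins: its requirements are fully met
```

The weights give a simple way to model the consumer-preference effect the experiments measured: shifting weight between dimensions changes which variant is chosen.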
- ItemDynamic composition of portal interface for GUISET services(University of Zululand, 2014) Sihawu, Siyasanga Amanda; Adigun, M.O.; Xulu, S.S.The increased use of technology requires user interfaces to be user-friendly and easy to understand. In portals, the user interface is one of the critical components. A portal is a medium that Small, Medium and Micro Enterprises (SMMEs) can use to advertise, grow the exposure of their business, and interact with their customers and the community at large. This requires the portal interface to adapt to runtime changes arising from the rapid updating of services. This research addressed the analysis of user requests in order to gain a clear understanding of each request, taking into consideration the user's history, preferences and profile. This was realised through the design of an architectural model, the Dynamic User Interface (DUI) model, which analyses the user request, aggregates different components, and presents them to the user as a single unit. The DUI model was prototyped via a Grid-based Utility Infrastructure for SMME-enabled Technology (GUISET) user-interface use case. Experiments were conducted to compare the DUI portal with Jetspeed-2 on usability. Usability was evaluated under four experimental variations: customization, adaptation after a query, user satisfaction with the response, and portal performance. For each experiment, data was generated and presented graphically for analysis. The results show that the portal interface scales with an increasing number of users requesting the same services and an increasing number of portlets, and that users were satisfied with the on-the-fly adaptation of the interface and content after a query.
- ItemDynamic multi-target user interface design for grid based M-Services(2008) Ipadeola, Abayomi Olayeni; Xulu, S.S.; Adigun, M.O.Device heterogeneity and the diversity of user preferences are challenges that must be addressed in order to achieve effective information communication in a mobile computing environment. Many model-based approaches for the generation of user interfaces have been proposed with this aim. Unfortunately, current model-based approaches do not support user participation in interface adaptation: interfaces are adapted for and on behalf of users, often leaving users with inappropriate interfaces. There is therefore a need for an approach that allows direct user participation in interface adaptation and that is responsive to changes in user needs and preferences. This work proposes a Polymorphic Logical Description (PLD) method for the automatic generation of multi-target user interfaces. The PLD method consists of user-preference-aware models created at design time. Interface artifacts are treated in the PLD as methods with polymorphic attributes, in a bid to address the diversity experienced in a mobile computing environment. An interface artifact is dynamically selected for inclusion on an interface based on the context information of the requesting user and the intrinsic characteristics of the device. The approach is enabled by a support toolkit, named Custom-MADE (CoMADE), for automatic interface generation. The evaluation of CoMADE shows that it achieves a high degree of user participation during interface generation, flexibility, dynamism, ease of application extensibility and user-centredness of the generated interfaces.
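The idea of selecting an interface artifact from design-time variants based on a device's intrinsic characteristics can be sketched as a filter-then-rank step. The widget names, device attributes and selection rule below are hypothetical illustrations, not the PLD method's actual models.

```python
def choose_artifact(artifact_variants, device):
    """Select an interface-artifact variant the target device can render:
    filter by device capabilities, then prefer the richest match."""
    candidates = [
        v for v in artifact_variants
        if v["min_width"] <= device["screen_width"]
        and v["input"] in device["inputs"]
    ]
    # Prefer the richest variant the device supports (largest min_width).
    return max(candidates, key=lambda v: v["min_width"]) if candidates else None

# Hypothetical polymorphic variants of a date-picker artifact
date_picker = [
    {"widget": "calendar_grid", "min_width": 320, "input": "touch"},
    {"widget": "spinner", "min_width": 128, "input": "keypad"},
]
phone = {"screen_width": 176, "inputs": {"keypad"}}
chosen = choose_artifact(date_picker, phone)
# The keypad phone with a 176-px screen gets the "spinner" variant
```

User participation, as described above, would enter this picture as an extra ranking criterion (user-stated preferences) alongside the device constraints.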
- ItemDynamic service recovery in a grid environment(2008) Sibiya, Sihle Sicelo; Xulu, S.S.; Adigun, M.O.Grid computing is fast becoming a popular technology in both academic and business environments. Its adoption in the business environment has been slow, however, owing to challenges such as low service availability. This challenge emanates from the dynamic nature of the Grid environment and the complexity of services, both of which make services fault-prone. There is therefore a need for an autonomic fault-recovery mechanism that can effectively monitor, diagnose and recover a running service from failure. To address this challenge, a dynamic service recovery model is proposed. The model uses replication to improve service availability whenever service failure is envisaged. The performance of the replication approach depends on how well a reliability index can be used to dynamically select two services of high reliability to serve an incoming request: the more reliable of the two becomes the primary service, while the other becomes an active replica. The model implements an autonomic computing MAPE loop to achieve runtime fault recovery. A simulation was carried out to evaluate the performance of the proposed model, which was also compared with an existing active-replication model. The results revealed that the proposed model exhibits superior performance, especially when services of both high and low reliability are present. It was also found that dynamic service recovery utilizes resources efficiently, since no more than two services serve a request.
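The primary/replica selection step described above, in which a reliability index picks the two most reliable services, can be sketched as a simple ranking. The service records and the [0, 1] reliability index below are hypothetical; the thesis's actual index computation is not reproduced here.

```python
def pick_primary_and_replica(services):
    """Choose the two most reliable services for an incoming request:
    the most reliable becomes the primary, the runner-up the active
    replica (reliability is a hypothetical index in [0, 1])."""
    ranked = sorted(services, key=lambda s: s["reliability"], reverse=True)
    primary, replica = ranked[0], ranked[1]
    return primary, replica

# Hypothetical candidate services with their current reliability indices
services = [
    {"name": "svcA", "reliability": 0.92},
    {"name": "svcB", "reliability": 0.75},
    {"name": "svcC", "reliability": 0.98},
]
primary, replica = pick_primary_and_replica(services)
# svcC serves the request as primary; svcA runs as the active replica
```

Capping the selection at exactly two services is what gives the model its efficient resource use: no more than two instances ever serve one request.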