Computer Science

Recent Submissions

Now showing 1 - 5 of 62
  • Item
    Improving the gateway placement algorithm in long range wide area network (LoRaWAN)
    (2022-12-02) Mnguni, Petros Smangaliso
    Internet of Things (IoT) is expected to grow exponentially such that the number of devices connected to the internet will be up to 125 billion by the year 2030. IoT end node devices rely on Gateways for data transmission to the internet and to ensure coverage for IoT devices, Gateways need to be optimally placed. However, physical infrastructure and topography as features of the target area are essential for IoT Gateways optimal placement. Recently, Wireless Mesh Networks (WMN) has gained an important role in current communication technologies. It has been used in several applications such as surveillance and rescue systems. Furthermore, the network congestion can be minimised and throughput can be improved by placing many Gateways in the network but on the other hand, deployment cost and interference will increase. Therefore, this work focuses on the Gateway placement algorithms on the newly developed wireless technology called Long Range Wide Area Networks (LoRaWAN) protocol and its performance. A review of existing Gateway Placement Algorithms has been conducted to bring together the state-of-the-art WMN, Mobile Ad hoc Networks (MANETs)-Satellite, Backbone Wireless Mesh Network (BWMN), Vehicular Ad hoc Network (VANET), 5G cellular network, and Low Power Wide Area Network (LPWAN).These Algorithms were studied in different networks to distinguish each of their strengths and weaknesses that require improvements. Literature provided insight into the performance of the existing Gateway placement algorithms in both short-range and long-range transmission. However, it is still not clear how the algorithms perform in a network that supports long-range transmission technologies such as LPWAN. 
    Arising from the foregoing is the need to evaluate the performance of short-range algorithms in an LPWAN environment and to improve them for a long-range technology such as LoRa, given that they showed the prospect of overcoming the drawbacks identified in the literature review. This study improved an existing Gateway placement algorithm by firstly evaluating existing algorithms that were implemented in a different environment, i.e., short-range transmission, and determining the strengths and drawbacks of those algorithms. Secondly, after identifying an algorithm with promising features that could be integrated into long-range transmission, it was improved for Gateway placement in LoRa technology. The algorithm had previously been implemented for a different purpose in a different network; however, because its capabilities in that environment could benefit the newly developed LoRa technology, the algorithm was improved and implemented in an LPWAN environment to improve Gateway placement. The simulation results showed that the improved algorithm outperformed the existing algorithms. One notable observation was that, with the improved algorithm, SF7 accommodated an average of 25% of the LoRa nodes created in both network scenarios, whereas the other algorithms could accommodate only 20% on average. Increasing the number of Gateways in the network can help reduce the energy consumption of LoRa nodes, although it can be expensive.
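The abstract does not name the specific placement algorithm, but the general idea of choosing Gateway sites to maximise node coverage can be sketched as a greedy maximum-coverage heuristic. Everything here (candidate sites, radius, coordinates) is hypothetical illustration, not the thesis's actual method.

```python
import math

def coverage(site, nodes, radius):
    """Return the set of node indices within `radius` of a candidate Gateway site."""
    return {i for i, (x, y) in enumerate(nodes)
            if math.hypot(x - site[0], y - site[1]) <= radius}

def greedy_gateway_placement(candidates, nodes, radius, num_gateways):
    """Pick Gateway sites one at a time, each time choosing the candidate
    that covers the most still-uncovered LoRa nodes."""
    uncovered = set(range(len(nodes)))
    chosen = []
    for _ in range(num_gateways):
        best = max(candidates,
                   key=lambda c: len(coverage(c, nodes, radius) & uncovered))
        chosen.append(best)
        uncovered -= coverage(best, nodes, radius)
    return chosen, uncovered

# Toy example: six node positions, three candidate sites, place two Gateways.
nodes = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (5, 5)]
candidates = [(0.5, 0.5), (10.5, 10), (5, 5)]
placed, left_uncovered = greedy_gateway_placement(candidates, nodes, 2.0, 2)
print(placed, left_uncovered)
```

Real placement must also account for terrain and infrastructure, as the abstract notes, but the greedy coverage trade-off (more Gateways, more coverage, more cost) is the core tension.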
  • Item
    A blockchain-based firmware update architecture for Long-Range Wide Area Network (LoRaWAN)
    (University of Zululand, 2022) Mtetwa, Njabulo Sakhile
    Network security is increasingly becoming a critical and continuous issue due to technological advancements. These advancements give rise to several security threats, especially when everything is connected to the Internet. Security in IoT still requires extensive research and is receiving much attention in both industry and academia. IoT devices are designed for special use cases, and most are constrained in resources and lack important security features. This lack of security features enables attackers to compromise IoT devices and retrieve sensitive information from them. One of the challenges in IoT is ensuring the security of firmware updates on devices active on the Internet. This is a challenge because it is difficult to incorporate traditional security techniques given the memory and processing limitations of constrained IoT devices. Thus, IoT devices remain vulnerable and open to security threats. Device manufacturers are required to release firmware updates in response to exposed vulnerabilities, to fix bugs and improve the functionality of the devices. However, delivering a new version of the firmware securely to affected devices remains a challenge, especially for constrained devices and networks. This study aims to develop an architecture that utilizes Blockchain and the InterPlanetary File System (IPFS) to secure firmware transmission over a low-data-rate and constrained Long-Range Wide Area Network (LoRaWAN). The proposed architecture focuses on resource-constrained devices and ensures confidentiality, integrity, and authentication through symmetric algorithms, while providing high availability and eliminating replay attacks. To demonstrate the usability and applicability of the architecture, a proof of concept was developed and evaluated using low-powered devices and symmetric algorithms.
    The experimental results show that HMAC-SHA256, one of the symmetric algorithms utilized in the firmware update process, consumes less memory than the CMAC algorithm. When updating a 5 kB firmware image, HMAC consumed 6.9 kB of RAM whereas CMAC consumed 7.3 kB. The memory consumption results (RAM and flash) imply that MAC algorithms are adequate for providing security on low-powered devices and are suitable for constrained low-powered devices. This conclusion is premised on the fact that the required memory does not exceed that of the low-powered device, thus making the proposed architecture feasible for constrained and low-powered LoRaWAN devices.
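The integrity-and-authentication step the abstract describes can be illustrated with a minimal HMAC-SHA256 sketch. The key name and firmware bytes below are placeholders; the thesis's actual key distribution (via Blockchain and IPFS) is not modelled here.

```python
import hmac
import hashlib

def sign_firmware(key: bytes, firmware: bytes) -> bytes:
    """Update server side: compute an HMAC-SHA256 tag over the firmware image."""
    return hmac.new(key, firmware, hashlib.sha256).digest()

def verify_firmware(key: bytes, firmware: bytes, tag: bytes) -> bool:
    """Device side: recompute the tag and compare in constant time,
    rejecting any tampered image."""
    expected = hmac.new(key, firmware, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

key = b"pre-shared-device-key"   # hypothetical symmetric key
image = b"\x00" * 1024           # stand-in for a firmware chunk
tag = sign_firmware(key, image)

print(verify_firmware(key, image, tag))         # prints True: untampered image
print(verify_firmware(key, image + b"x", tag))  # prints False: modified image
```

A MAC like this gives integrity and authentication cheaply, which is consistent with the abstract's finding that MAC algorithms fit within the RAM budget of constrained LoRaWAN devices.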
  • Item
    Evaluating Routing Protocol for Low power and Lossy Networks (RPL)-Based Load Balancing Routing Algorithms in Internet of Things (IoT) Networks
    (University of Zululand, 2022) Magubane, Zibuyisile
    The Internet of Things (IoT) is a network of different objects communicating different information in different scenarios. The networked objects are embedded with low-power devices that are responsible for collecting data in the physical environment and transmitting it from one point to another. Given the volume of data transmitted in an IoT network, routing is a mandatory factor in improving data transmission between low-power devices. The Internet Engineering Task Force designed the IPv6 Routing Protocol for Low-Power and Lossy Networks (RPL) to govern data transmission among low-power devices. However, RPL fails to transmit data effectively in large IoT networks because it does not balance load distribution during data transmission. Load balancing enables data to be distributed effectively among IoT devices until it reaches its destination. Several authors have proposed load-balancing routing algorithms for RPL, evaluated in different network scenarios. These RPL-based load-balancing routing algorithms offer only partial load balancing and were evaluated in non-standard network areas and network sizes. Thus, it is challenging to identify an effective RPL-based load-balancing routing algorithm. To find effective RPL-based load-balancing routing algorithms for IoT networks, we proposed three such algorithms, namely: Enhanced Context-aware and Load-balancing routing algorithm for RPL (ENCLRPL), Buffer occupancy Load balancing for RPL (BLRPL), and Enhanced ETXPC-RPL (EN-ETXPC-RPL). The design science research method (DSRM) was adopted to conduct this study. The algorithms were developed in the Contiki operating system and demonstrated in a simulation environment to establish their effectiveness in an IoT network. The performance of the RPL-based load-balancing routing algorithms was evaluated based on reliability and stability metrics across different network sizes.
    The results obtained indicate that BLRPL is the most effective routing algorithm for IoT networks, with a maximum packet delivery ratio of 96%, a network delay of 0.16 ms, and power consumption of 1.0 mW.
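The load-balancing idea behind an algorithm like BLRPL — steering traffic away from congested parents rather than relying on link quality alone — can be sketched as a composite parent-selection metric. The cost function, weight, and scaling below are purely illustrative assumptions, not the metric defined in the thesis.

```python
def parent_cost(etx: float, buffer_occupancy: float, alpha: float = 0.5) -> float:
    """Composite routing cost: ETX (link quality) blended with the candidate
    parent's queue occupancy (0.0 = empty, 1.0 = full). The occupancy term is
    scaled by 10 so a full buffer weighs like a very poor link."""
    return alpha * etx + (1 - alpha) * buffer_occupancy * 10

def select_parent(candidates):
    """Pick the candidate parent with the lowest composite cost."""
    return min(candidates, key=lambda c: parent_cost(c["etx"], c["buf"]))

candidates = [
    {"id": "A", "etx": 1.2, "buf": 0.9},  # excellent link, but congested queue
    {"id": "B", "etx": 1.8, "buf": 0.1},  # slightly worse link, nearly idle queue
]
print(select_parent(candidates)["id"])  # prints B: congestion outweighs link quality
```

Plain RPL with an ETX objective function would pick parent A here; folding buffer occupancy into the cost shifts traffic to B, which is the kind of behaviour that improves packet delivery ratio under load.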
  • Item
    The development of a cloudlet business model based on stakeholder’s service guarantees
    (University of Zululand, 2020) Nxumalo, Mandisa Nomfundo
    In this study, a Service Level Agreement (SLA)-based Cloudlet business model was developed to facilitate the easy deployment of a Cloudlet instance by Small and Medium Enterprises (SMEs) while ensuring service guarantees at the edge. The model was developed because of the lack of Cloudlet business models to assist SMEs and assure them that deploying a Cloudlet can yield the desired profits. The model consists of three role-players or stakeholders, namely the Cloudlet consumer, Cloudlet owners, and Internet Service Providers (ISPs) or Cloud Providers. The roles of and interactions between the role-players are captured in an SLA. The SLA facilitates the provisioning and management of Cloudlet resources to ensure service guarantees and avoid deployment failures. The management and provisioning of Cloudlet resources can result in high network throughput, improving consumer Quality of Experience (QoE). The SLA defines the service description, service objectives, costs, and penalties, and is imposed on consumers using a disclaimer page. A feasibility study of the developed model, based on a coffee shop scenario, was conducted using a Cost-Benefit Analysis (CBA) tool. The findings show that a coffee shop can gain about 164 cents per rand spent on Cloudlet operational costs over a period of 3 years, which can help SMEs gain financial stability and profit. Also, to demonstrate the effectiveness of an SLA on the Cloudlet business model, a coffee shop-based SLA simulation was conducted using the CloudSim Plus tool. The results indicated that provisioning efficient resources in the network had a direct impact on the success of the SLA, and the SLA's success in turn affected both service guarantees and operational cost minimisation. It is therefore concluded that the SLA-based Cloudlet business model proposed in this research ensures that stakeholders' service guarantees are delivered.
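The "gain per rand spent" figure comes from a cost-benefit analysis, whose mechanics can be sketched as discounted benefits and costs over the evaluation period. The cash flows and discount rate below are invented for illustration and are not the thesis's coffee-shop data.

```python
def benefit_cost(annual_benefits, annual_costs, discount_rate=0.08):
    """Discount each year's benefits and costs to present value; return
    (net gain per unit of cost, benefit-cost ratio)."""
    pv_benefits = sum(b / (1 + discount_rate) ** t
                      for t, b in enumerate(annual_benefits, start=1))
    pv_costs = sum(c / (1 + discount_rate) ** t
                   for t, c in enumerate(annual_costs, start=1))
    return (pv_benefits - pv_costs) / pv_costs, pv_benefits / pv_costs

# Hypothetical 3-year coffee-shop cash flows in rand (not the thesis figures).
net_per_rand, bcr = benefit_cost(annual_benefits=[40_000, 55_000, 60_000],
                                 annual_costs=[20_000, 18_000, 18_000])
print(f"net gain per rand spent: {net_per_rand:.2f}, benefit-cost ratio: {bcr:.2f}")
```

A positive net gain per rand (equivalently, a benefit-cost ratio above 1) is what justifies the deployment; the thesis reports roughly 1.64 rand gained per rand of operational cost over 3 years.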
  • Item
    A resource management framework for fog computing networks
    (University of Zululand, 2020) Mtshali, Mxolisi
    The evolution of the Internet of Things (IoT) has drastically changed how computing devices can be deployed by enabling them to be located anywhere on the cloud-to-things continuum, and it has emerged as a solution to many emerging-market challenges. These ubiquitous devices possess not only data-processing capabilities but also computing and storage capabilities. Based on these device capabilities, a new paradigm has evolved that decentralizes the centralized capabilities of the cloud and locates them at the edge of the network; this is popularly known as fog computing. The goal is to avoid deploying IoT services to the cloud core servers for processing and storage resources, and also to mitigate latency and deployment cost. Since the paradigm is relatively new, there are challenges that need to be tackled in order to have a reliable network deployment. The main challenge arises from the relocation of services from the resource-rich physical underlying infrastructure of the cloud core to the Fog layer, where resources and physical infrastructure are limited, because limited infrastructure offers limited capacity in terms of computing, networking, processing, and storage resources. This means that should a heavy application be executed on these limited Fog Node (FN) services, they can be over-consumed, which could lead to network breakdown or failure owing to device shutdown. As the resources are the most significant part of the network, they must be able to offer services on demand without being exploited. Therefore, in order to optimize the resource management operation efficiently, a method such as resource scheduling must be applied to fog deployments. To address the matter, factors such as scheduling algorithms and frameworks are considered.
    In the process of network design and deployment, critical questions arise, such as: How can a resource management framework be used to address the challenges in fog computing? Why do existing resource scheduling mechanisms not respond adequately to the resource management challenges of fog computing networks? Which resource-scheduling algorithms can be used to address specific resource management challenges, and what are their relevant achievements? Henceforth, this challenge will be referred to as the resource management problem. It is a multi-objective problem comprising competing objectives such as service delay, energy consumption, and network utilization, which affirms that no universal solution is available. This dissertation therefore proposes a resource management framework as a solution to the network-planning problem. The planning covers the optimal placement of tasks with respect to resource consumption optimization and the optimal offloading of tasks in the continuum. Addressing this multi-objective optimization problem involves two stages. First, four classical scheduling methods, namely first come first serve (FCFS), shortest job first (SJF), round robin (RR), and priority-based (PB), were evaluated. The optimal method progresses from the first stage to the second, where its overall performance is evaluated against the unsupervised machine-learning algorithm K-means. The simulation findings show that, as the number of sensor requests scales up, the service delay and energy consumption of the FCFS scheduling method increase linearly, while in the case of K-means the service delay increases exponentially.
    The FCFS method yields optimal results in terms of service delay and CPU execution time spent on the task, and also achieves the best trade-off between the competing objectives of service delay and energy consumption, whereas the K-means clustering method has optimal energy consumption with high service delay, resulting in the worst trade-off. The modelling covers most realistic fog computing applications, parameters, and constraints; therefore, it can readily be deployed in the fog computing landscape while extending the current cloud architecture.
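The service-delay metric used to compare the schedulers can be illustrated with a minimal FCFS sketch on a single fog node. The task list and time units are hypothetical; the thesis's simulation models multiple nodes and energy as well.

```python
def fcfs_schedule(tasks):
    """Run tasks in arrival order on one fog node. Each task is an
    (arrival_time, execution_time) pair; returns the per-task service
    delay (waiting plus execution) and the average delay."""
    clock, delays = 0.0, []
    for arrival, exec_time in sorted(tasks, key=lambda t: t[0]):
        clock = max(clock, arrival) + exec_time  # wait for the node, then execute
        delays.append(clock - arrival)           # delay as seen by this request
    return delays, sum(delays) / len(delays)

# Three sensor requests: (arrival time, execution time) in arbitrary time units.
tasks = [(0, 3), (1, 2), (2, 1)]
delays, avg_delay = fcfs_schedule(tasks)
print(delays, avg_delay)  # prints [3.0, 4.0, 4.0] and the average ~3.67
```

Because every request queues behind all earlier arrivals, FCFS delay grows roughly linearly with the request rate on a fixed node, matching the linear scaling the simulation findings report.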