ABSTRACT Network virtualization (NV) enables service providers (SPs) to efficiently segment physical network (PN) infrastructure into isolated virtual network requests (VNRs) belonging to diverse real-time applications such as video streaming, online gaming services, and 5G network slices. Each VNR consists of interdependent virtual machines (VMs) and virtual links (VLs). Although NV offers significant opportunities, such as improved resource utilization, privilege isolation, secure communication, and improved quality of service (QoS), it also presents complex research challenges, such as the efficient allocation of physical infrastructure resources to VNR components. This concern is generally characterized as the virtual network embedding (VNE) problem, which is known to be NP-hard. In VNE, increasing the acceptance ratio is a key objective for SPs, as it directly contributes to higher revenue and improved utilization of the underlying physical infrastructure. However, most existing works suffer from pitfalls such as (i) resource-centric constraints, (ii) limited scalability, (iii) a narrow set of topological features, and (iv) high computational complexity. To overcome these limitations, this work proposes an Efficient Resource Allocation through Algae Growth-Based Dynamic VNE strategy (ADViN), inspired by the behavior of algae populations. ADViN operates in two primary phases: VM embedding and VL embedding. The former employs a meta-heuristic policy inspired by the reproductive behavior of algae to explore the search space proficiently and balance the acceptance and revenue-to-cost ratios through fitness-guided population evolution. The latter performs bandwidth-aware path selection using a shortest-path method to guarantee feasible link embedding.
The simulations reveal that ADViN enhances the average acceptance ratio and long‐term revenue‐to‐cost efficiency by 43% and 54%, respectively, compared to baseline methods.
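The two-phase structure described in the abstract can be illustrated with a minimal sketch. This is not the paper's ADViN implementation: the algae-growth reproduction operators are replaced here by a generic fitness-guided mutate-and-select loop, and all data structures (CPU capacities, bandwidth maps, the `embed_vnr` interface) are hypothetical simplifications assumed for illustration.

```python
import random
from collections import deque

def bfs_shortest_path(links, bandwidth, src, dst, demand):
    """Bandwidth-aware shortest path: only traverse physical links
    whose residual bandwidth can carry the virtual link's demand."""
    parent = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in links.get(u, []):
            edge = frozenset((u, v))
            if v not in parent and bandwidth.get(edge, 0) >= demand:
                parent[v] = u
                q.append(v)
    return None  # no feasible path

def embed_vnr(pn_cpu, pn_links, pn_bw, vnr_vms, vnr_vls,
              pop_size=20, gens=30):
    """Phase 1: evolve a population of VM-to-node mappings by fitness.
    Phase 2 (inside fitness): check bandwidth-feasible shortest paths."""
    nodes = list(pn_cpu)

    def random_mapping():
        # one distinct physical node per VM (assumes enough nodes)
        return dict(zip(vnr_vms, random.sample(nodes, len(vnr_vms))))

    def fitness(mapping):
        # infeasible mappings (CPU or path) get -inf fitness
        for vm, node in mapping.items():
            if pn_cpu[node] < vnr_vms[vm]:
                return float("-inf")
        cost = 0
        for (a, b), bw in vnr_vls.items():
            path = bfs_shortest_path(pn_links, pn_bw,
                                     mapping[a], mapping[b], bw)
            if path is None:
                return float("-inf")
            cost += bw * (len(path) - 1)  # bandwidth used per hop
        revenue = sum(vnr_vms.values()) + sum(vnr_vls.values())
        return revenue / cost if cost else revenue  # revenue-to-cost proxy

    # fitness-guided evolution (stand-in for the algae-growth operators)
    population = [random_mapping() for _ in range(pop_size)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        children = []
        for parent_map in survivors:
            child = dict(parent_map)
            vm = random.choice(list(child))
            free = [n for n in nodes if n not in child.values()]
            if free:
                child[vm] = random.choice(free)  # mutate one placement
            children.append(child)
        population = survivors + children
    best = max(population, key=fitness)
    return best if fitness(best) > float("-inf") else None
```

A rejected VNR (return value `None`) lowers the acceptance ratio, which is why the fitness function penalizes infeasible mappings instead of silently repairing them.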
Published in: Concurrency and Computation: Practice and Experience
Volume 38, Issue 6
DOI: 10.1002/cpe.70646