Show simple item record

dc.contributor.author: Aslanpour, MS [en_US]
dc.contributor.author: Gill, SS [en_US]
dc.contributor.author: Toosi, AN [en_US]
dc.date.accessioned: 2020-08-27T08:41:32Z
dc.date.available: 2020-08-08 [en_US]
dc.date.issued: 2020-08-11 [en_US]
dc.identifier.issn: 2327-4662 [en_US]
dc.identifier.uri: https://qmro.qmul.ac.uk/xmlui/handle/123456789/66658
dc.description.abstract: Optimization is an inseparable part of Cloud computing, particularly with the emergence of the Fog and Edge paradigms. Not only do these emerging paradigms demand re-evaluating cloud-native optimizations and exploring Fog- and Edge-based solutions, but the objectives also require a significant shift from considering only latency to energy, security, reliability, and cost. Hence, optimization objectives have become diverse, and Internet of Things (IoT)-specific objectives must now come into play. This is critical, as an incorrect selection of metrics can mislead the developer about the real performance. For instance, a latency-aware auto-scaler must be evaluated through latency-related metrics such as response time or tail latency (see the brief sketch after this record); otherwise, the resource manager is not properly evaluated, even if it reduces cost. Given such challenges, researchers and developers struggle to explore and utilize the right metrics to evaluate the performance of optimization techniques such as task scheduling, resource provisioning, resource allocation, resource scheduling, and resource execution. This is challenging due to (1) the novel, multi-layered computing paradigms, e.g., Cloud, Fog, and Edge; (2) IoT applications with different requirements, e.g., latency or privacy; and (3) the lack of a benchmark and standard for evaluation metrics. In this paper, by exploring the literature, (1) we present a taxonomy of the various real-world metrics used to evaluate the performance of cloud, fog, and edge computing; (2) we survey the literature to identify common metrics and their applications; and (3) we outline open issues for future research. This comprehensive benchmark study can significantly assist developers and researchers in evaluating performance under realistic metrics and standards, ensuring that their objectives will be achieved in production environments. [en_US]
dc.format.extent: 100273 - 100273 [en_US]
dc.publisher: Elsevier [en_US]
dc.relation.ispartof: Internet of Things [en_US]
dc.rights: https://doi.org/10.1016/j.iot.2020.100273
dc.title: Performance Evaluation Metrics for Cloud, Fog and Edge Computing: A Review, Taxonomy, Benchmarks and Standards for Future Research [en_US]
dc.type: Article
dc.rights.holder: © 2020 Elsevier B.V.
dc.identifier.doi: 10.1016/j.iot.2020.100273 [en_US]
pubs.notes: Not known [en_US]
pubs.publication-status: Published [en_US]
dcterms.dateAccepted: 2020-08-08 [en_US]
rioxxterms.funder: Default funder [en_US]
rioxxterms.identifier.project: Default project [en_US]
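
Sketch: the abstract's point about latency-related metrics can be made concrete with a minimal example, assuming hypothetical response-time samples and a nearest-rank percentile; the function names and data below are illustrative and not taken from the paper.

# Minimal sketch: mean response time vs. p99 tail latency for a
# latency-aware auto-scaler. Sample values are assumed for illustration.

def mean_response_time(samples_ms):
    # Average response time in milliseconds.
    return sum(samples_ms) / len(samples_ms)

def tail_latency(samples_ms, percentile=99):
    # Nearest-rank percentile, e.g. the p99 tail latency.
    ordered = sorted(samples_ms)
    rank = max(1, round(percentile / 100 * len(ordered)))
    return ordered[rank - 1]

if __name__ == "__main__":
    # Hypothetical response times (ms) observed for an auto-scaled IoT service.
    samples = [12, 14, 15, 13, 16, 15, 14, 240, 13, 12]
    print("mean response time (ms):", mean_response_time(samples))  # 36.4
    print("p99 tail latency (ms):", tail_latency(samples))          # 240

Even though the mean looks acceptable here, the tail value exposes the outlier request, which is why cost-only or average-only metrics can misrepresent a latency-aware resource manager.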

