Free download of the English article "Resource management for Infrastructure as a Service (IaaS) in cloud computing: A survey" together with its Persian translation
Persian article title | مدیریت منابع به عنوان یک زیرساخت همانند یک سرویس در محاسبات ابری: یک نظرسنجی
English article title | Resource management for Infrastructure as a Service (IaaS) in cloud computing: A survey
Related fields | Computer engineering, information technology, cloud computing, Internet and wide area networks, information systems management
Format of the free files | The free English article and the Persian translation are provided as PDF files
Translation quality | The translation quality of this article is good
Notes | The translation of this article was prepared in summarized form
Publisher | Elsevier
Product code | f148
Free English article (PDF) | Free download of the English article
Free Persian translation (PDF) | Free download of the article translation
Translation in Word format | Purchase the article translation in Word format
Excerpt from the Persian translation of the article: 5.4 Resource adaptation
Excerpt from the English article:

5.4. Resource adaptation

The primary reason for adopting cloud computing from a user perspective is to move from a model of capital expenditure (CAPEX) to one of operational expenditure (OPEX). Instead of buying IT resources such as machines and storage devices and employing personnel to operate and maintain them, a company pays another company (the provider) only for the resources actually used (pay-as-you-go). An important aspect of this is that a company no longer needs to overprovision its IT resources. Today it is typical, when a company invests in its own resources, that the amount acquired corresponds to the maximum amount needed at peak times, with the result that much of this capacity is not needed at all during regular periods.

The key conceptual component of the framework discussed in Zhu and Agrawal (2010) is a dynamic resource adaptation algorithm based on control theory. A reinforcement-learning-guided control policy is applied to adjust the adaptive parameters so that application benefit is maximized within the time constraint at modest overhead. Such a control model can be trained quickly and accurately. Furthermore, a resource model is proposed that maps any given combination of values of the adaptive parameters to resource requirements, in order to guarantee that the resource cost stays under the budget. Duong et al. (2009) have proposed an extensible framework for dynamic resource provisioning and adaptation in IaaS clouds. The core of this framework is a set of resource adaptation algorithms that utilize workload and resource information to make informed provisioning decisions in light of dynamically changing users' demands.

Jung et al. (2010) present Mistral, a holistic optimization system that balances power consumption, application performance, and the transient power/performance costs of adaptation actions and decision making in a single unified framework. By doing so, it can dynamically choose from a variety of actions with differing effects in a multiple-application, dynamic-workload environment. Calyam et al. (2011) use OnTimeMeasure-enabled performance intelligence to compare utility-driven resource allocation schemes in virtual desktop clouds. The results from the Global Environment for Network Innovations (GENI) infrastructure experiments carried out by the authors demonstrated how performance intelligence enables the autonomic nature of FI (Future Internet) applications to mitigate the costly resource overprovisioning and user QoE (Quality of Experience) guesswork that are common in the current Internet. Senna et al. (2011) present an architecture for the management and adaptation of virtual networks on clouds. Their infrastructure allows the creation of virtual networks on demand, associated with the execution of workflows, isolating and protecting the user environment. The virtual networks used in workflow execution have their performance monitored by the manager, which acts preemptively if performance drops below the stated requirements.

Flexibility enables the adaptation of cloud solutions to all users, ensuring that they get exactly what they want and need. Thereby, cloud computing not only introduces a new way of performing computations over the Internet; observers have also noted that it holds the potential to solve a range of ICT (information and communications technology) problems identified within disparate areas such as education, healthcare, climate change, terrorism, and economics, as per Schubert (2010).
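To make the utilization-driven style of adaptation described above concrete, the sketch below shows a minimal proportional scaling loop that resizes a VM pool toward a target utilization. It is an illustrative simplification, not the reinforcement-learning-guided controller of Zhu and Agrawal (2010); the get_average_utilization and set_vm_count hooks, the target value, and the pool bounds are assumptions.

```python
# Minimal sketch of a utilization-driven adaptation loop (a simple proportional
# rule, not the RL-guided controller of Zhu and Agrawal, 2010).
# get_average_utilization() and set_vm_count() are hypothetical hooks to a
# monitoring system and a provisioning API.
import math
import time

TARGET_UTILIZATION = 0.60      # keep average VM utilization near this level
MIN_VMS, MAX_VMS = 1, 32       # budget constraint on the resource pool

def adaptation_loop(get_average_utilization, set_vm_count, current_vms, period_s=60):
    """Periodically resize the VM pool so observed utilization tracks the target."""
    while True:
        utilization = get_average_utilization()                  # observed load
        desired = math.ceil(current_vms * utilization / TARGET_UTILIZATION)
        current_vms = max(MIN_VMS, min(MAX_VMS, desired))        # stay within budget
        set_vm_count(current_vms)                                # actuate the decision
        time.sleep(period_s)                                     # control period
```

Such a rule reacts only to the latest measurement; the surveyed approaches layer prediction, learning, or explicit cost models on top of this basic feedback structure.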
Resource adaptation of the virtual hosts should dynamically scale to the updated demands (cloud computing) as well as colocate applications to save on energy consumption (green computing), as per Sclater (2011). Most importantly, resource transitions during workload surges should occur while minimizing the expected loss due to mismatches between the resource predictions and the actual workload demands.

A system that can automatically scale its share of infrastructure resources is designed in Charalambous (2010). The adaptation manager monitors and autonomically allocates resources to users in a dynamic way. However, this centralized approach cannot fit the future multi-provider cloud environment, since different providers may not want to be controlled by such a centralized manager. There have been great advances towards automatically managing collections of inter-related and context-dependent VMs (i.e., a service) in a holistic manner by using policies and rules. The degree of resource management, the binding to the underlying API, and the seamless coordination of resources spread across several clouds while maintaining the performance objectives are major concerns that deserve further study. Also, dynamically scaling LBS (location-based services) and its effects on whole-application scalability are reported in Vaquero et al. (2011) and in the Amazon Auto Scaling service.

The goal of the authors in Baldine et al. (2009) is to manage the network substrate as a first-class resource that can be co-scheduled and co-allocated along with compute and storage resources, to instantiate a complete built-to-order network slice hosting a guest application, service, network experiment, or software environment. The networked cloud hosting substrate can incorporate network resources from multiple transit providers and server hosting or other resources from multiple edge sites (a multi-domain substrate). Jung et al. (2008) propose a novel hybrid approach for enabling autonomic behavior that uses queuing-theoretic models along with optimization techniques to predict system behavior and automatically generate optimal system configurations. Marshall et al. (2010) have implemented a resource manager, built on the Nimbus toolkit, to dynamically and securely extend existing physical clusters into the cloud. The elastic site manager interfaces directly with local resource managers, such as Torque. Raghavan et al. (2009) present the design and implementation of distributed rate limiters, which work together to enforce a global rate limit across traffic aggregates at multiple sites, enabling the coordinated policing of a cloud-based service's network traffic. This abstraction not only enforces a global limit but also ensures that congestion-responsive transport-layer flows behave as if they traversed a single, shared limiter.

Table 10 summarizes some of the resource adaptation schemes. Table 11 lists the performance metrics of the resource adaptation schemes.
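The queuing-theoretic prediction step mentioned for Jung et al. (2008) can be illustrated with a generic M/M/c calculation (a textbook model, not the authors' actual formulation): given an arrival rate and a per-server service rate, choose the smallest server count whose predicted mean response time meets a target. The rates and target in the example are illustrative only.

```python
# Minimal M/M/c sizing sketch: smallest server count meeting a response-time target.
# Generic queueing formulas; the arrival/service rates below are illustrative only.
import math

def erlang_c(servers, offered_load):
    """Probability that an arriving request has to wait (Erlang C formula)."""
    a = offered_load
    summation = sum(a**k / math.factorial(k) for k in range(servers))
    top = (a**servers / math.factorial(servers)) * (servers / (servers - a))
    return top / (summation + top)

def mean_response_time(servers, arrival_rate, service_rate):
    """Predicted mean response time of an M/M/c queue."""
    a = arrival_rate / service_rate                  # offered load in Erlangs
    if servers <= a:
        return math.inf                              # unstable: queue grows without bound
    wait = erlang_c(servers, a) / (servers * service_rate - arrival_rate)
    return wait + 1.0 / service_rate                 # queueing delay + service time

def smallest_config(arrival_rate, service_rate, target_response, max_servers=64):
    """Pick the cheapest configuration whose predicted behavior meets the target."""
    for c in range(1, max_servers + 1):
        if mean_response_time(c, arrival_rate, service_rate) <= target_response:
            return c
    return None

# Example: 120 req/s, each server handles 20 req/s, target mean response 100 ms.
print(smallest_config(arrival_rate=120.0, service_rate=20.0, target_response=0.100))
```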
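In a similar illustrative spirit, the coordination behind distributed rate limiters can be sketched as periodically re-splitting one global limit across sites in proportion to their recently observed demand. The site names, rates, and interval are hypothetical, and the actual protocol of Raghavan et al. (2009), including its flow-level fairness guarantees, is considerably more involved.

```python
# Simplified sketch of one global rate limit split across sites in proportion
# to recent demand; site names and demand figures are illustrative.
GLOBAL_LIMIT_MBPS = 100.0

def reallocate(global_limit, recent_demand):
    """Give each site a share of the global limit proportional to its demand."""
    total = sum(recent_demand.values())
    if total == 0:
        equal = global_limit / len(recent_demand)     # no traffic: split evenly
        return {site: equal for site in recent_demand}
    return {site: global_limit * demand / total        # demand-proportional split
            for site, demand in recent_demand.items()}

# Each site then enforces its local share with an ordinary token bucket, and the
# controller re-runs reallocate() every estimation interval.
local_limits = reallocate(GLOBAL_LIMIT_MBPS,
                          {"us-east": 70.0, "eu-west": 25.0, "ap-south": 5.0})
print(local_limits)   # shares always sum to the 100 Mbps global limit
```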
5.4.1. Open challenges in resource adaptation

How is the demand for the cloud services provided by the vendor? Is it mostly constant or widely varying? What is the frequency of usage of cloud resources? Is it highly frequent? Very frequent usage in fact makes a cloud-based pay-as-you-go model less attractive economically. Do we need highly customized services/APIs (application programming interfaces) to be exposed by the vendor? Cloud vendors would not find it economically attractive to provide highly customized services, and hence the price for the enterprise (the cloud user) might also not be very attractive. Is the application mission critical? A mission-critical application would need very stringent SLAs, which cloud vendors may not yet be able to satisfy. An industry or application with highly stringent compliance requirements might still not find it suitable to consume key services from a vendor because of the inherent risks involved.

Can a problem occurrence in our slice environment that impacts our QoE be identified and notified to our application so that it can adapt and heal? Can problem information also be shared with us and our application service provider if our application cannot automatically heal itself? Can we monitor all the detailed active (e.g., ping, traceroute, iperf) and passive (e.g., tcpdump, NetFlow, router-interface statistics) measurements at the end-to-end hop, link, path, and slice levels across multiple federated ISP domains? Can we analyze all the measurements to provision adequate resources offline so as to deliver satisfactory user QoE, and to identify online (i.e., in real time) anomalous events impacting user QoE?
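As one hedged illustration of the last question, an online detector could flag a slice-level latency sample as anomalous when it deviates strongly from its recent running statistics; the measurement feed, window size, and threshold below are assumptions rather than part of the survey.

```python
# Minimal online anomaly flagging over a stream of latency measurements
# (e.g., periodic ping samples for one slice); window and threshold are assumed.
from collections import deque
import statistics

class LatencyAnomalyDetector:
    def __init__(self, window=60, z_threshold=3.0):
        self.samples = deque(maxlen=window)   # keep only recent measurements
        self.z_threshold = z_threshold        # deviations beyond this are anomalous

    def observe(self, latency_ms):
        """Return True if this sample looks anomalous relative to recent history."""
        anomalous = False
        if len(self.samples) >= 10:           # wait for enough history
            mean = statistics.fmean(self.samples)
            stdev = statistics.pstdev(self.samples) or 1e-9
            anomalous = abs(latency_ms - mean) / stdev > self.z_threshold
        self.samples.append(latency_ms)
        return anomalous

detector = LatencyAnomalyDetector()
for sample in [20, 22, 21, 19, 23, 20, 21, 22, 20, 21, 95]:   # 95 ms spike
    if detector.observe(sample):
        print("anomalous latency:", sample)
```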