Senior Doctoral Fellow, Jawaharlal Nehru University
The increasing demand for high computational power by modern service applications has led to the establishment of large-scale data centers that consume enormous amounts of electrical energy. These data centers have a profound environmental impact due to the large $CO_2$ emissions associated with their energy consumption. Green Computing is a recent development in computer science that attempts to lower the energy usage and carbon impact of computers and servers on distributed platforms such as clouds. Cloud computing is an emerging technology that can increase the utilization and efficiency of hardware equipment by virtualizing the underlying infrastructure. Managing this hardware autonomically, so that resources are provisioned and de-provisioned efficiently, remains a great challenge. In our research, we investigate algorithms that reduce the energy consumed by servers deployed in cloud infrastructure by improving resource utilization, thereby cutting both the metered electricity cost and the carbon footprint of the cloud data center.
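The link between resource utilization and energy cost can be made concrete with the linear server power model that is common in the energy-aware scheduling literature (the model and all wattage figures below are illustrative assumptions, not values from the abstract): a server draws a substantial idle power even at zero load, so packing work onto fewer, busier servers saves energy.

```python
# Sketch of the linear server power model often assumed in energy-aware
# cloud scheduling: P(u) = P_idle + (P_max - P_idle) * u, with CPU
# utilization u in [0, 1].  The wattage constants are illustrative.

P_IDLE = 100.0   # assumed idle power draw in watts
P_MAX = 250.0    # assumed peak power draw in watts

def server_power(utilization: float) -> float:
    """Estimated power draw (watts) of one active server at a given load."""
    if not 0.0 <= utilization <= 1.0:
        raise ValueError("utilization must be in [0, 1]")
    return P_IDLE + (P_MAX - P_IDLE) * utilization

def total_power(utilizations: list[float]) -> float:
    """Total power across active servers; unlisted servers are assumed off."""
    return sum(server_power(u) for u in utilizations)

# Consolidation example: three servers at 20% load each draw more power
# than one server at 60% load carrying the same total work, because the
# idle power is paid three times instead of once.
spread = total_power([0.2, 0.2, 0.2])   # ~390 W
packed = total_power([0.6])             # ~190 W
print(spread, packed)
```

The roughly 50% saving in this toy example comes entirely from avoiding duplicated idle power, which is why consolidation-based algorithms are a natural fit for the goal stated above.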
Abstract: Although cloud computing offers many advantages with regard to the adaptation of resources, we witness either strong resistance or very slow adoption of these new offerings. One reason for the resistance is that (i) many technologies, such as stream processing systems, still lack appropriate mechanisms for elasticity needed to fully harness the power of the cloud, and (ii) they do not provide mechanisms for the secure processing of privacy-sensitive data, such as the energy consumption data produced by smart plugs in the context of smart grids. In this white paper, we present our vision and approach for elastic and secure processing of streaming data. Our approach is based on StreamMine3G, an elastic event stream processing system, and Intel's SGX technology, which provides secure processing using enclaves. We highlight the key aspects of our approach and the research challenges of using Intel's SGX technology.
Pub.: 18 May '17, Pinned: 02 Jul '17
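The abstract does not show StreamMine3G's actual elasticity mechanism, but the core of elastic stream processing can be illustrated generically: a keyed operator is partitioned across instances by hashing, and scaling out changes the owning instance for some keys, whose state must then migrate. All names below are hypothetical and are not the StreamMine3G API.

```python
# Generic sketch of key-based operator repartitioning during scale-out in
# an elastic stream processing system.  Illustrative only; not taken from
# StreamMine3G.
from collections import defaultdict

def partition(keys, n_instances):
    """Assign each key to an operator instance by hashing."""
    placement = defaultdict(list)
    for k in keys:
        placement[hash(k) % n_instances].append(k)
    return placement

def keys_to_migrate(keys, old_n, new_n):
    """Keys whose owning instance changes when scaling old_n -> new_n."""
    return [k for k in keys if hash(k) % old_n != hash(k) % new_n]

# Integer ids stand in for, e.g., smart-plug identifiers.
keys = list(range(1000))
moved = keys_to_migrate(keys, old_n=4, new_n=5)
print(len(moved), "of", len(keys), "keys migrate")
```

Naive modulo hashing moves most keys on every resize, which is one reason real systems prefer schemes (such as consistent hashing) that bound migration; minimizing such state movement, and keeping the migrated state protected inside enclaves, is exactly the kind of challenge the abstract alludes to.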
Abstract: This paper considers a traditional problem of resource allocation: scheduling jobs on machines. One recent application is cloud computing, where jobs arrive online with capacity requirements and must be immediately scheduled on physical machines in data centers. It is often observed that the requested capacities are not fully utilized, offering an opportunity to employ an overcommitment policy, i.e., selling resources beyond capacity. Setting the right overcommitment level can induce a significant cost reduction for the cloud provider while incurring only a very low risk of violating capacity constraints. We introduce and study a model that quantifies the value of overcommitment by formulating the problem as bin packing with chance constraints. We then propose an alternative formulation that transforms each chance constraint into a submodular function. We show that our model captures the risk-pooling effect and can guide scheduling and overcommitment decisions. We also develop a family of online algorithms that are intuitive, easy to implement, and provide a constant-factor guarantee relative to the optimum. Finally, we calibrate our model using realistic workload data and test our approach in a practical setting. Our analysis and experiments illustrate the benefit of overcommitment in cloud services and suggest a cost reduction of 1.5% to 17%, depending on the provider's risk tolerance.
Pub.: 25 May '17, Pinned: 02 Jul '17
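A standard way to make a chance constraint tractable, and to see the risk-pooling effect the abstract mentions, is a normal approximation of the aggregate load (this is a hedged sketch under that assumption, not the paper's exact formulation): if jobs' actual usages are independent with known mean and variance, the constraint P(total usage > capacity) ≤ ε reduces to a deterministic test on the sum of means plus a safety margin scaled by the z-quantile.

```python
# Hedged sketch of an overcommitment admission check via a normal
# approximation (not the paper's exact model).  Each job's actual usage
# is modeled as independent with a given (mean, variance); the chance
# constraint P(total > capacity) <= epsilon becomes
#     sum(mu) + z_eps * sqrt(sum(var)) <= capacity.
import math
from statistics import NormalDist

def fits(jobs, new_job, capacity, epsilon=0.01):
    """jobs, new_job: (mean, variance) pairs of actual resource usage."""
    means = [m for m, _ in jobs] + [new_job[0]]
    variances = [v for _, v in jobs] + [new_job[1]]
    z = NormalDist().inv_cdf(1 - epsilon)
    return sum(means) + z * math.sqrt(sum(variances)) <= capacity

# Risk pooling: ten jobs that each *request* 12 units but actually use
# 8 on average (variance 4) fit on a 100-unit machine with probability
# at least 1 - epsilon, even though the nominal requests sum to 120.
jobs = [(8.0, 4.0)] * 9
print(fits(jobs, (8.0, 4.0), capacity=100.0))
```

Note how the safety margin grows with the square root of the number of jobs rather than linearly, so the per-job buffer shrinks as more jobs share a machine: that is the risk-pooling effect that makes overcommitment profitable at low risk.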
Abstract: The paper illustrates how we built a federated cloud computing platform dedicated to the Italian research community. Building a cloud platform is a daunting task that requires coordinating the deployment of many services that are interrelated and dependent on each other. Provisioning, servicing, and maintaining the platform must be automated. For our deployment, we chose a declarative modeling tool that allows describing the parts that compose the system and their supplier/consumer relations over specific interfaces. The tool arranges the steps to bring the deployment to convergence by transforming the state of the system until it reaches a configuration that satisfies all constraints. We chose a declarative service modeling approach for orchestrating both the deployment of the platform by administrators and the deployment of applications by users. The cloud platform has been designed so that it can be managed by this kind of automation, facilitating the deployment of federated regions by anyone wishing to join and contribute resources to the federation. Federated resources are integrated into a single cloud platform available to any user of the federation, and the federation can also seamlessly include public clouds. We describe the architectural choices, how we adapted the basic OpenStack facilities to the needs of a federation of multiple independent organizations, how we control resource allocation according to committed plans, and correspondingly how we handle accounting and billing of resource usage. Besides providing traditional IaaS services, the cloud supports self-service deployment of cloud applications. The cloud thus addresses the long tail of science, allowing researchers of any discipline, without expertise in system or cloud administration, to deploy applications readily available for their use.
Pub.: 16 Jun '17, Pinned: 02 Jul '17
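The declarative supplier/consumer model described above can be viewed as a dependency graph, and one valid deployment order is simply a topological sort of that graph (a minimal sketch; the service names are hypothetical and this is not the federation's actual tool, which additionally re-converges the system after state changes).

```python
# Illustrative sketch: a declarative model maps each service to the
# services it consumes, and a deployment order satisfying all
# supplier/consumer relations is a topological sort of that graph.
# Service names are hypothetical, not taken from the paper.
from graphlib import TopologicalSorter

model = {
    "identity": [],                          # supplies auth to everything
    "image-store": ["identity"],
    "compute": ["identity", "image-store"],
    "dashboard": ["identity", "compute"],
}

# TopologicalSorter takes a node -> predecessors mapping, so suppliers
# appear before their consumers in the resulting order.
order = list(TopologicalSorter(model).static_order())
print(order)
```

A real convergence engine generalizes this: rather than computing the order once, it repeatedly diffs declared state against observed state and applies whichever steps have become applicable, which is what lets the same model drive both initial deployment and later maintenance.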