

UNIT I

INTRODUCTION TO CLOUD COMPUTING

Unit Structure
1.0 Objective
1.1 Introduction
1.2 Cloud computing at a glance
    1.2.1 The vision of cloud computing
    1.2.2 Defining a cloud
    1.2.3 A closer look
    1.2.4 The cloud computing reference model
    1.2.5 Characteristics and benefits
    1.2.6 Challenges ahead
1.3 Historical developments
    1.3.1 Distributed systems
    1.3.2 Virtualization
    1.3.3 Service-oriented computing
    1.3.4 Utility-oriented computing
1.4 Building cloud computing environments
    1.4.1 Application development
    1.4.2 Infrastructure and system development
    1.4.3 Computing platforms and technologies
        1.4.3.1 Amazon Web Services (AWS)
        1.4.3.2 Google AppEngine
        1.4.3.3 Microsoft Azure
        1.4.3.4 Hadoop
        1.4.3.5 Force.com and Salesforce.com
        1.4.3.6 Manjrasoft Aneka
1.5 Summary
1.6 Unit End Questions
1.7 References for further reading

1.0 OBJECTIVE

This chapter will help you understand the following concepts:
• What is cloud computing?


• What are the characteristics and benefits of cloud computing?
• Its challenges.
• Historical development of the technologies that led to the growth of cloud computing.
• Types of cloud computing models.
• Different types of services in cloud computing.
• Application development, infrastructure and system development technologies for cloud computing.
• Overview of different cloud service providers.

1.1 INTRODUCTION

Historically, computing power was a scarce, costly resource. Today, with the emergence of cloud computing, it is plentiful and inexpensive, causing a profound paradigm shift: a transition from scarcity computing to abundance computing. This computing revolution accelerates the commoditization of products, services and business models, and disrupts the current information and communications technology (ICT) industry. It supplies services in the same way as water, electricity, gas, telephony and other utilities. Cloud computing offers on-demand computing, storage, software and other IT services with usage-based metered payment. Cloud computing helps re-invent and transform technological partnerships to improve marketing, simplify and increase security, and increase stakeholder interest and consumer experience while reducing costs. With cloud computing, you don't have to over-provision resources to manage potential peak levels of business operation; instead, you have only the resources you really require, and you can scale these resources to expand and shrink capacity instantly as business needs evolve. This chapter offers a brief summary of the trend of cloud computing by describing its vision, addressing its key features, and analyzing the technical advances that made it possible.
The chapter also introduces some key cloud computing technologies and some insights into cloud computing environments.

1.2 CLOUD COMPUTING AT A GLANCE

The notion of computing in the "cloud" goes back to the beginnings of utility computing, a term suggested publicly in 1961 by computer scientist John McCarthy:

"If computers of the kind I have advocated become the computers of the future, then computing may someday be organized as a public utility just as the telephone system is a public utility… The computer utility could become the basis of a new and important industry."

The chief scientist of the Advanced Research Projects Agency Network (ARPANET), Leonard Kleinrock, said in 1969:


"As of now, computer networks are still in their infancy, but as they grow up and become sophisticated, we will probably see the spread of 'computer utilities' which, like present electric and telephone utilities, will service individual homes and offices across the country."

This vision of the computing utility takes form with the cloud computing industry in the 21st century. The delivery of computing services is easily available on demand, just as other utility services such as water, electricity, telephone and gas are available in today's society. Likewise, users (consumers) only have to pay service providers when they access computing resources. Instead of maintaining their own computing systems or data centers, customers can lease access to applications and storage from cloud service providers. The advantage of using cloud computing services is that organizations can avoid the upfront cost and difficulty of running and managing their own IT infrastructure, and instead pay only for what they use. Cloud providers can benefit from large economies of scale by offering the same services to a wide variety of customers.

In this model, consumers can access services according to their requirements without knowing where those services are hosted. Because users access the infrastructure and applications as services from anywhere in the world, as if from a "cloud", this model is also called utility computing. Hence cloud computing can be defined as a new dynamic provisioning model of computing services that improves the use of physical resources and data centers; it uses virtualization and convergence to support multiple different systems operating on server platforms simultaneously. The output achieved with different placement schemes of virtual machines can differ a lot.
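The utility economics described above (leasing access instead of owning infrastructure) can be sketched with a toy calculation. All numbers below are hypothetical and only illustrate the trade-off; they are not real vendor prices.

```python
# Toy comparison of owning hardware versus utility-style pay-per-use leasing.

def ownership_cost(servers, price_per_server, monthly_upkeep, months):
    """Total cost of buying and running your own hardware."""
    return servers * price_per_server + monthly_upkeep * months

def utility_cost(hours_used, rate_per_hour):
    """Total cost when leasing capacity and paying only for usage."""
    return hours_used * rate_per_hour

# Hypothetical example: 4 servers at $2,000 each plus $300/month upkeep
# for a year, versus leasing 4 virtual servers for 200 hours/month at
# $0.10 per hour.
own = ownership_cost(servers=4, price_per_server=2000, monthly_upkeep=300, months=12)
lease = utility_cost(hours_used=4 * 200 * 12, rate_per_hour=0.10)
print(own)               # 11600
print(round(lease, 2))   # 960.0
```

The gap widens further once power, cooling and administration (mentioned in the case studies later in this chapter) are added to the ownership side.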
By observing the advancement of several technologies, we can track the evolution of cloud computing: virtualization and multi-core chips, especially in hardware; the Internet (Web services, service-oriented architectures, Web 2.0); distributed computing (clusters, grids); and autonomic computing (automation of the data center). Figure 1.1 shows how the convergence of these areas of technology evolved and led to the advent of cloud computing. Some of these technologies were considered speculation at an early stage of development; however, they later received considerable attention from academia and big business companies. A process of specification and standardization followed, which resulted in maturity and wide adoption. The rise of cloud computing is closely associated with the maturity of these technologies.


FIGURE 1.1 Convergence of various advances leading to the advent of cloud computing.

1.2.1 The vision of cloud computing:

Cloud computing provides hardware, runtime environments and resources to users on a pay-per-use basis. These items can be used for as long as the user needs them, with no upfront commitment required. The whole collection of computing devices is turned into a set of utilities that can be provisioned and composed together in hours rather than days, to deploy systems with no maintenance costs. The long-term vision of cloud computing is that IT services are traded as utilities on an open market, without technological and legal barriers.

We can hope that in the near future it will be possible to find, on a global digital market for cloud computing services, the solution that clearly satisfies the needs of our application. This market will make it possible to automate the process of discovery and integration with existing software systems. A digital cloud trading platform will also enable service providers to boost their revenue. A cloud service may even become a customer of a competitor's service in order to meet its own consumer commitments.

Company and personal data will be accessible in structured formats everywhere, which helps us to access and communicate easily on an even larger scale. Cloud computing's security and stability will continue to


improve, making it even safer with a wide variety of techniques. Eventually we will no longer consider the "cloud" itself to be the relevant technology, concentrating instead on the services and applications it allows. The combination of wearables and bring-your-own-device (BYOD) practices with cloud technology and the Internet of Things (IoT) will become a common part of personal and working life, to the point that cloud technology is taken for granted as an enabler.
Figure 1.2 Cloud computing vision. (Reference: "Mastering Cloud Computing: Foundations and Applications Programming" by Rajkumar Buyya)

1.2.2 Defining a cloud:

"Cloud computing" is a fairly recent motto in the IT industry, which came into being after many decades of innovation in virtualization, utility computing, distributed computing, networking and software services. A cloud establishes an IT environment designed to provide measured and scalable resources remotely. It has evolved as a modern model for information exchange and Internet services, providing more secure, flexible and scalable services for consumers. It is used as a service-oriented architecture that reduces the information overhead on end users. Figure 1.3 illustrates the variety of terms used in current cloud computing definitions.


FIGURE 1.3 Cloud computing technologies, concepts, and ideas. (Reference: "Mastering Cloud Computing: Foundations and Applications Programming" by Rajkumar Buyya)

The Internet plays a significant role in cloud computing as the transport medium for cloud services, which are delivered to and accessed by cloud consumers. According to the definition given by Armbrust:

Cloud computing refers to both the applications delivered as services over the Internet and the hardware and system software in the datacenters that provide those services.

This definition describes cloud computing as touching the entire stack, from the underlying hardware to high-level software services. It introduces the concept of everything as a service, called XaaS, where different parts of the system (IT infrastructure, development platforms, storage, databases and so on) can be delivered as services to cloud consumers, who pay only for the services they use. This new paradigm affects not only how software is developed, but also how users deploy applications and make them accessible, how IT infrastructure is designed, and how companies allocate the costs of their IT needs. The approach encompasses cloud computing from a global point of view: at one end, a single user can upload documents to the cloud; at the other, a company owner can deploy an entire infrastructure in the public cloud.

According to the definition proposed by the U.S. National Institute of Standards and Technology (NIST):

Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing


resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.

Another view of cloud computing is "utility computing", which focuses on delivering services under a pricing model known as the "pay-per-use" strategy. Cloud computing makes resources available online: you can use storage, lease virtual hardware, or use resources for application development, and users pay according to their usage, with no or minimal upfront cost. All of these operations can be performed, and the bill paid, by simply entering credit card details and accessing the services through a web browser. George Reese has defined three criteria for deciding whether a particular service is a cloud service:

• The service is accessible via a web browser (nonproprietary) or a web services API.
• Zero capital expenditure is necessary to get started.
• You pay only for what you use as you use it.

Many cloud service providers offer some services to users free of charge, but enterprise-class services are provided under specific pricing schemes: users subscribe with the service provider, and a service level agreement (SLA) is defined that sets out the quality parameters between the cloud service provider and the user; the provider must then deliver the services according to that SLA.

Rajkumar Buyya defined cloud computing based on the nature of utility computing:

A cloud is a type of parallel and distributed system consisting of a collection of interconnected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources based on service-level agreements established through negotiation between the service provider and consumers.

1.2.3 A closer look:

Cloud computing is useful in governments, enterprises,
public and private institutions and research organizations, where it makes computing services systems more effective and demand-driven. There are a number of specific examples demonstrating emerging applications of cloud computing in both established companies and startups. These cases are intended to illustrate the value proposition of viable cloud computing solutions and the benefits businesses have gained from these services.


New York Times:

One of the most widely known examples of cloud computing commitment comes from the New York Times. The New York Times had collected a large number of high-resolution scanned images of historical newspapers, ranging from 1851 to 1922, and wanted to process this set of images into separate articles in PDF format. Using 100 EC2 instances, they completed the processing within 24 hours at a total cost of $890 ($240 for EC2 computation time, $650 for S3 data transfer and storage, covering 4.0 TB of source images and 1.5 TB of output PDFs). Derek Gottfrid pointed out: "Actually, it worked so well that we ran it twice, because after the completion we found an error in the PDF."

The New York Times was able to utilize 100 servers for 24 hours at the low standard cost of ten cents an hour per server. If the New York Times had bought even a single server for this task, the likely expense would have exceeded the $890 for the hardware alone, and they would also have needed to consider the cost of administration, power and cooling. Likewise, the processing would have taken more than three months with one server. Even if the New York Times had bought four servers, as Derek Gottfrid had considered, it would still have taken almost a month of computation time. The quick turnaround time (fast enough to run the job twice) and vastly lower cost strongly illustrate the superior value of cloud services.

Washington Post:

In a related but more recent event, the Washington Post was able to transform 17,481 pages of scanned document images into a searchable database in just a day using Amazon EC2. On March 19th at 10 am, Hillary Clinton's official White House schedule from 1993-2001 was released to the public as a large array of scanned images (in PDF format, but non-searchable).
Washington Post programmer Peter Harkins utilized 200 Amazon EC2 instances to run OCR (Optical Character Recognition) on the scanned files and create searchable text: "I used 1,407 hours of virtual machine time with a total cost of $144.62. We find it a positive proof of concept."

DISA:

Federal Computer Week reported that the Defense Information Systems Agency (DISA) compared the cost of using Amazon EC2 versus internally maintained servers: "In a recent test, the Defense Information Systems Agency compared the cost of developing a simple application called the Tech Early Bird on $30,000 worth of in-house servers and software with the cost of developing the same application using the Amazon Elastic Compute Cloud from Amazon Web Services. Amazon charged 10 cents an hour for the service, and


DISA paid a total of $5 to develop software that matched the overall performance of the in-house application."

SmugMug:

SmugMug, an image sharing and hosting website like Flickr, stores a substantial portion of its photo data in Amazon's S3 cloud storage service. In 2006, they saved "$500,000 in planned disk drive expenditures in 2006 and cut its disk storage array costs in half" through the use of Amazon S3. According to the CEO of SmugMug, they could "easily save more than $1 million" in the following year through the use of S3. The CEO noted that their growth rate at the time of the article required about $80,000 worth of new hardware, and that the recurring costs increase even more considerably after adding "power, cooling, the data center space, along with the manpower needed to manage them." By contrast, Amazon S3 costs around $23,000 per month for equivalent storage, all-inclusive (power, maintenance, cooling, etc. are figured into the cost of the storage).

Eli Lilly:

Eli Lilly, one of the largest pharmaceutical companies, uses Amazon's storage and compute clouds to supply on-demand high-performance computing for research purposes. John Foley highlights that "it used to take Eli Lilly seven and a half weeks to deploy a server internally", whereas Amazon can provision a virtual server in 3 minutes. Furthermore, "a 64-node Linux cluster can be online in 5 minutes (compared with 90 days internally)." Amazon's cloud services not only deliver on-demand scaling and usage-based billing, they also enable Eli Lilly to respond with considerably increased agility, eliminating time-consuming equipment deployment and acquisition processes.

Best Buy's Giftag:

Best Buy's Giftag is an online wish-list service hosted by Google's App Engine. In a video interview, the developers explained that they had begun to build the platform with a different technology and moved to Google App Engine for its superior speed of development and scaling advantages.
As one developer eloquently put it, "a lot of the work that none of us even needs to do is [already] completed for us." The developers also praised App Engine's design for allowing effortless scaling; App Engine-based web apps inherit Google's best-in-class technologies and expertise in running large-scale websites. At the end of the day, App Engine lets developers focus on building the site's distinctive features: "Not being worried with the operational aspects of an application always frees you to create excellent code or evaluate your code better."

TC3:

TC3 (Total Claims Capture & Control) is a healthcare services


company providing claims management solutions. TC3 now makes use of Amazon's cloud services to allow on-demand scaling of resources and lower infrastructure costs. TC3's CTO notes: "We are making use of Amazon S3, EC2, and SQS to permit our claim processing capacity to grow and shrink as required to satisfy our service level agreements (SLAs). There are times we require massive quantities of computing resources that far exceed our machine capacities. When these conditions occurred in the past, our natural response was to call our hardware vendor for a quote. Now, by using AWS products, we can dramatically reduce our processing time from weeks or months down to days or hours and pay much less than purchasing, housing and maintaining the servers ourselves." Another notable aspect of TC3's activities is that, because they provide US health-related services, they are obligated to comply with HIPAA (the Health Insurance Portability and Accountability Act). Regulatory compliance is one of the main obstacles facing corporate adoption of cloud infrastructure, so the fact that TC3 is able to comply with HIPAA on Amazon's platform is significant.

How is all of this computing made possible? In the same way as other utilities: IT services such as computing power, storage and runtime environments for application development are delivered on demand on a pay-as-you-go basis. Cloud computing not only provides an opportunity for easy, on-demand access to IT services, it also introduces a new way of thinking about IT services and resources, namely as utilities. Figure 1.4 provides a bird's-eye view of cloud computing.
FIGURE 1.4 A bird's-eye view of cloud computing. (Reference: "Mastering Cloud Computing: Foundations and Applications Programming" by Rajkumar Buyya)


There are three deployment models for accessing the services of a cloud computing environment: public, private and hybrid clouds (see Figure 1.5). The public cloud is one of the most common deployment models, in which computing services are offered by third-party vendors and consumers are able to access and purchase resources from the public cloud via the public Internet. These services can be free or on-demand, meaning that consumers pay per use for their CPU cycles, storage or bandwidth. Public clouds save companies from the expensive procurement, management and on-site maintenance of hardware and application infrastructure; all management and maintenance of the system is the responsibility of the cloud service provider. Public clouds can also be deployed faster than on-site infrastructure, on a platform that is almost infinitely scalable. Although security issues have been raised about public cloud implementations, when implemented correctly the public cloud can be as secure as the most efficiently operated private cloud deployment.

A private cloud is a cloud service used by essentially one organization. In using a private cloud, the advantages of cloud computing are experienced without sharing resources with other organizations. A private cloud can reside within an organization, or be operated remotely by a third party and accessed via the Internet (but, unlike a public cloud, it is not shared with others). Private cloud combines several of the advantages of cloud computing, including elasticity, scalability and easy service delivery, with on-site control, security and resource customization. Many companies select private cloud over public cloud (cloud computing services delivered over multi-customer infrastructure) because private cloud is an easier way, or the only way, to satisfy their regulatory compliance requirements.
Others prefer private cloud because their workloads deal with confidential documents, intellectual property, personally identifiable information (PII), medical records, financial data and other sensitive data.

A hybrid cloud is an infrastructure that contains links between a cloud managed by the user (typically referred to as a "private cloud") and a third-party cloud (typically referred to as a "public cloud"). While the private and public parts of the hybrid cloud are linked, they remain distinct. This allows a hybrid cloud to offer the advantages of several deployment models simultaneously. Hybrid clouds vary greatly in sophistication; some, for example, only connect the on-site infrastructure to a public cloud, leaving the operations and application teams responsible for all the complexity inherent in the two different infrastructures.
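A common use of the hybrid model just described is "cloud bursting": sensitive or steady workloads stay on the private cloud, and overflow spills over to the public cloud. The sketch below is a simplified illustration of that placement decision; the capacity figure and workload fields are hypothetical, not drawn from any real platform.

```python
PRIVATE_CAPACITY = 10  # illustrative number of private-cloud slots

def place_workload(workload: dict, private_load: int) -> str:
    """Decide which part of a hybrid cloud a workload should run on."""
    if workload.get("sensitive"):           # PII, medical or financial data
        return "private"                    # must stay on-premises
    if private_load < PRIVATE_CAPACITY:     # room left on the private side
        return "private"
    return "public"                         # burst to the public cloud

print(place_workload({"name": "payroll", "sensitive": True}, private_load=10))  # private
print(place_workload({"name": "batch-render"}, private_load=10))                # public
```

The design point is that the routing policy, not the application, encodes the compliance and capacity rules, which mirrors how hybrid deployments keep regulated data on the private side while still exploiting public-cloud elasticity.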


FIGURE 1.5 Major deployment models for cloud computing. (Reference: "Mastering Cloud Computing: Foundations and Applications Programming" by Rajkumar Buyya)

1.2.4 The cloud computing reference model:

The cloud reference model is a model that characterizes and standardizes the functions of a cloud computing environment; it is a basic benchmark for cloud computing development. The growing popularity of cloud computing has multiplied the definitions of different cloud computing architectures. The cloud market has a wide range of vendors with multiple, differing offer definitions, which makes evaluating their services very hard. With such complexity in its implementation, the way the cloud functions and interacts with other technology can be confusing. A standard cloud reference model for architects, software engineers, security experts and businesses is required to realize the potential of cloud computing; the Cloud Reference Model organizes this landscape. Figure 1.6 displays various cloud providers and their innovations across the cloud service models available on the market.


Figure 1.6 The Cloud Computing Reference Model. (Reference: "Mastering Cloud Computing: Foundations and Applications Programming" by Rajkumar Buyya)

Cloud computing is an all-encompassing term for all resources that are hosted on the Internet. These services are classified under three main categories: Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). These categories are mutually related, as outlined in Figure 1.6, which gives an organic view of cloud computing. The model structures the broad variety of cloud computing services in a layered view, from the base to the top of the computing stack.

At the foundation of the stack, Infrastructure as a Service (IaaS) is the most common cloud computing service model, offering the basic infrastructure of virtual servers, networks, operating systems and storage drives. It provides the flexibility, reliability and scalability many companies seek from the cloud, and eliminates the need for hardware in the office. This makes it a perfect way to support business growth for SMEs looking for a cost-effective approach to IT. IaaS is a completely outsourced pay-for-use service that can be run on public, private or hybrid infrastructure.

The next step in the stack is Platform as a Service (PaaS) solutions. Cloud providers deploy the software and infrastructure framework, while companies develop and run their own applications. Web applications can be created easily and quickly via PaaS, with the flexibility and robustness of the service to support them. PaaS solutions are scalable and suitable when multiple developers work on a single project.
It is also useful when using an established data source (such as a CRM tool).

At the top of the stack is Software as a Service (SaaS). This cloud computing solution involves delivering Internet-based software to companies, which pay via a subscription or a pay-per-use model. It is an important tool for CRM and for applications that require a great deal of web or mobile access, such as mobile sales management software. SaaS is managed from a centralized location, so companies need not


worry about maintaining it themselves, and it is ideal for short-term projects. The big difference between PaaS and IaaS is how much control users get. Essentially, PaaS lets the supplier manage almost everything, while IaaS calls for more customer management. In general, a company that already has a software package or application with a specific purpose should choose to install and run it in the cloud on IaaS rather than PaaS.

1.2.5 Characteristics and benefits:

As cloud computing services mature both commercially and technologically, it will become easier for companies to maximize their potential benefits. However, it is equally important to know what cloud computing is and what it does.
FIGURE 1.7 Features of Cloud Computing

The characteristics of cloud computing are as follows:

1. Resource Pooling:
The cloud provider uses a multi-tenant model to deliver computing resources to various customers. Physical and virtual resources are dynamically assigned and reassigned according to customer demand. In general, the customer has no control over or knowledge of the exact location of the provided resources, but may be able to specify location at a higher level of abstraction.

2. On-Demand Self-Service:
This is one of the most useful advantages of cloud computing, as the user can track server uptime, capability and network storage on an ongoing basis. The user can also provision and monitor computing functionalities with this feature.

3. Easy Maintenance:
The servers are easily maintained and the downtime is small, and


there is no downtime except in some cases. Cloud computing is updated regularly, and each update enhances it further. The updates are more system-friendly and arrive with bugs patched faster than in older systems.

4. Large Network Access:
The user may use any device with an Internet connection to access cloud data or upload data to the cloud from anywhere. These capabilities are available over the network and accessed through the Internet.

5. Availability:
The capabilities of the cloud can be adjusted and extended according to usage. This allows the consumer to buy additional cloud storage for a very small price, if necessary.

6. Automatic System:
Cloud computing automatically analyzes the data required and supports a metering capability at some level of service. Usage can be tracked, managed and reported, providing accountability to both the host and the customer.

7. Economical:
It is a one-off investment, since the company (host) buys the storage once and can make it available to many companies, saving them from monthly or annual costs. Only the amount spent on basic maintenance and some small additional costs remain.

8. Security:
Cloud security is one of cloud computing's best features. It keeps snapshots of the stored data, so that even if one of the servers is damaged the data is not lost. The information is stored on storage devices that no unauthorized person can hack or use. The storage service is fast and reliable.

9. Pay as you go:
In cloud computing, users only have to pay for the service or the space they use. There are no hidden or additional charges. The service is economical, and some space is often allocated free of charge.

10. Measured Service:
Cloud computing resources used by the company are monitored and recorded.
This usage is analyzed through charge-per-use capabilities, meaning that resource use can be measured and reported by the service provider, for example per virtual server instance running in the cloud. You then pay according to your actual consumption.
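The "Measured Service" and "Pay as you go" characteristics above can be sketched as a toy metering calculation: the provider records usage per resource and bills only what was consumed. Resource names and unit rates below are illustrative, not any real provider's price list.

```python
# Illustrative unit prices for metered resources (hypothetical values).
RATES = {
    "cpu_hours": 0.10,        # $ per CPU-hour
    "storage_gb_month": 0.02, # $ per GB-month of storage
    "bandwidth_gb": 0.05,     # $ per GB transferred
}

def monthly_bill(usage: dict) -> float:
    """Sum metered usage times unit rate; unused services cost nothing."""
    return sum(RATES[resource] * quantity for resource, quantity in usage.items())

# A customer who used 500 CPU-hours and 100 GB-months of storage, and no
# bandwidth, is billed only for those two items.
bill = monthly_bill({"cpu_hours": 500, "storage_gb_month": 100})
print(round(bill, 2))  # 52.0
```

This is the mechanism behind the "no hidden charges" point: the bill is a pure function of metered usage, so it can be reported transparently to both host and customer.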


1.2.6 Challenges ahead:

Everything has advantages and challenges. We have seen many features of the cloud; now it is time to identify the challenges of cloud computing, along with tips and techniques for handling them. Let us therefore start to explore the risks and challenges of cloud computing. Nearly all companies use cloud computing because they need to store the data they generate, and they generate and store tremendous amounts of it. Thus, they face many security issues. Companies need to establish processes to streamline and optimize cloud computing management.

This is a list of the main cloud computing threats and challenges:
1. Security & Privacy
2. Interoperability & Portability
3. Reliability and Flexibility
4. Cost
5. Downtime
6. Lack of resources
7. Dealing with Multi-Cloud Environments
8. Cloud Migration
9. Vendor Lock-In
10. Privacy and Legal issues

1. Security and Privacy of the Cloud:
Data stored in the cloud must be secure and confidential, and clients depend heavily on the cloud provider for this. In other words, the cloud provider must take the security measures necessary to secure customer data. Security is also the customer's responsibility: they must choose strong passwords, avoid sharing passwords with others, and update passwords regularly. If data sits outside the firewall, certain problems may occur which the cloud provider must address. Hacking and malware are also among the biggest problems, because they can affect many customers at once; they can lead to data loss, disrupt the encrypted file system and cause several other issues.

2. Interoperability and Portability:
The customer must be provided with migration services into and out of the cloud. No lock-in period should be imposed, as it can hamper customers.
The cloud should be capable of supplying facilities comparable to on-premises systems, and remote access should not be an obstacle: the customer should be able to access the cloud from anywhere.

3. Reliability and Flexibility:
Reliability and flexibility are indeed a difficult task for cloud customers: providers must eliminate leakage of the data entrusted to the cloud


and provide trustworthiness to customers. To overcome this challenge, third-party services should be monitored, and the performance, robustness and dependability of supplier companies supervised.

4. Cost:
Cloud computing is affordable, but tailoring the cloud to customer demand can sometimes be expensive. Altering the cloud as demand changes can sometimes cost more, which can hinder small businesses. Furthermore, transferring data from the cloud back to on-premises systems is sometimes costly.

5. Downtime:
Downtime is the most commonly cited cloud computing challenge, as no cloud provider guarantees a platform free from downtime. The Internet connection also plays an important role: a company with an untrustworthy Internet connection will face downtime.

6. Lack of resources:
The cloud industry also faces a lack of resources and expertise, which many businesses hope to overcome by hiring new, more experienced employees. These employees will not only help solve the challenges of the business but will also train existing employees, to the benefit of the company. Currently, many IT employees are working to enhance their cloud computing skills, and the shortage of qualified staff is a difficulty for chief executives. It is claimed that employees with exposure to the latest innovations and associated technologies will become more valuable to businesses.

7. Dealing with Multi-Cloud Environments:
Today, hardly any business operates on a single cloud. According to the RightScale report, almost 84 percent of enterprises follow a multi-cloud strategy, and 58 percent have hybrid cloud strategies that mix public and private clouds. In addition, organizations use five different public and private clouds on average.
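One common way to cope with multi-cloud environments (and to soften the vendor lock-in discussed below) is to hide provider-specific APIs behind a thin, provider-neutral interface so that applications code against the abstraction rather than one vendor's SDK. The sketch below is illustrative only: the interface, the stand-in adapter and all names are hypothetical, not any vendor's real API.

```python
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Provider-neutral interface the application codes against."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    """Stand-in adapter; real adapters would wrap each vendor's SDK."""
    def __init__(self):
        self._blobs = {}
    def put(self, key, data):
        self._blobs[key] = data
    def get(self, key):
        return self._blobs[key]

def archive(store: ObjectStore, key: str, text: str) -> str:
    # Application logic depends only on ObjectStore, so moving between
    # clouds means swapping the adapter, not rewriting the application.
    store.put(key, text.encode())
    return store.get(key).decode()

print(archive(InMemoryStore(), "claims/2019.txt", "processed"))  # processed
```

The trade-off is that such an abstraction restricts the application to the lowest common denominator of the providers' features, which is one reason lock-in remains hard to eliminate entirely.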
Figure 1.8: RightScale 2019 report findings

IT infrastructure teams have greater difficulty making long-term predictions about the future of cloud computing technology. Professionals have suggested top strategies to address this problem, such as rethinking processes, training personnel, choosing tools, actively managing vendor relations, and conducting research.

8. Cloud Migration:
While it is very simple to release a new app in the cloud, transferring an existing app to a cloud computing environment is harder. According to one report, 62% said their cloud migration projects were harder than they expected. In addition, 64% of migration projects took longer than expected and 55% exceeded their budgets. In particular, organizations that migrated their applications to the cloud reported migration downtime (37%), data synchronization issues before cutover (40%), trouble getting migration tooling to work well (40%), slow migration of data (44%), security configuration issues (40%), and time-consuming troubleshooting (47%). To solve these problems, close to 42% of IT experts said they wanted budget increases, around 45% wanted an in-house professional, 50% wanted a longer project schedule, and 56% wanted more pre-migration testing.

9. Vendor lock-in:
Vendor lock-in in cloud computing means clients become reliant (i.e., locked in) on a single cloud provider's implementation and cannot switch to another vendor without significant costs, regulatory restrictions or technological incompatibilities. The lock-in situation can be seen in applications built for specific cloud platforms, such as Amazon EC2 or Microsoft Azure, which are not easily transferred to any other cloud platform, leaving users vulnerable to changes made by their providers. In fact, the issue of lock-in arises when, for example,
a company decides to change cloud providers (or perhaps to integrate services from different providers) but cannot move applications or data across different cloud services, because the semantics of the providers' resources and services do not correspond. This heterogeneity of cloud semantics and APIs creates technological incompatibility, which in turn challenges interoperability and portability. It makes interoperating, cooperating, porting, handling and maintaining data and services very complicated and difficult. For these reasons, from the company's point of view it is important to maintain the flexibility to change providers according to business needs, or even to keep in-house certain components that are less critical to safety, because of these risks. The issue of vendor lock-in prevents interoperability and portability between cloud providers; resolving it is the way for cloud providers and clients to become more competitive.

10. Privacy and Legal issues:
Apparently, the main problem regarding cloud privacy and data security is the 'data breach'. A data breach can be generically defined as the loss of electronically stored personal information. A breach could lead to a multitude of losses both for the provider and for the customer: identity theft, debit/credit card fraud for the customer, loss of credibility, future prosecutions and so on. In the event of a data breach, American law requires notification of the affected persons. Nearly every state in the USA now requires data breaches to be reported to the affected persons. Problems arise when data are subject to several jurisdictions whose data privacy laws differ. For example, the Data Privacy Directive of the European Union explicitly states that data can only leave the EU if it goes to a country with an 'adequate level of protection'. This rule, while simple to implement, limits the movement of data and thus decreases data capacity.
The EU's regulations can be enforced.

1.3 HISTORICAL DEVELOPMENTS

Cloud computing is not a brand-new, state-of-the-art technology. It developed through various phases, including grid computing, utility computing, application service provision and software as a service. But the overall concept of delivering computing resources via a global network began in the 1960s. By 2020, it was projected that the cloud computing market would exceed 241 billion dollars. The history of cloud computing tells how we got there and where it all started. That history is not very old: the first business and consumer cloud computing websites were launched in 1999 (Salesforce.com and Google). Cloud computing is directly connected to the development of the Internet and of corporate technology, since cloud computing is the answer to the problem of how the Internet can improve corporate technology. Business technology has a rich and interesting background, almost as long as businesses themselves, but the development that has influenced cloud computing most directly begins with the emergence of computers as suppliers of real business solutions.

History of Cloud Computing:
Cloud computing is one of today's most significant breakthrough technologies. Below is a brief cloud computing history.
Figure 1.9: History of Cloud Computing [*Gartner, **Constellation Research]

EARLY 1960S:
Computer scientist John McCarthy proposed the time-sharing concept, which allows multiple users in an organization to use an expensive mainframe at the same time. This idea is described as a major contribution to the development of the Internet and a precursor of cloud computing.

IN 1969:
J.C.R. Licklider, responsible for the creation of the Advanced Research Projects Agency Network (ARPANET), proposed the idea of an "Intergalactic Computer Network" or "Galactic Network" (a computer networking concept similar to today's Internet). His vision was to connect everyone around the world and provide access to programs and data from anywhere.

IN 1970:
Use of tools such as VMware for virtualization. More than one operating system could be run simultaneously in separate environments; it became possible to operate a completely different computer (a virtual machine) inside a different operating system.

IN 1997:
Prof. Ramnath Chellappa in Dallas in 1997 gave what seems to be the first known definition of "cloud computing": "a paradigm in which computing boundaries are defined solely on economic rather than technical limits alone."
IN 1999:
Salesforce.com was launched in 1999 as the pioneer of delivering client applications through a simple website. The services firm showed that applications could be provided via the Internet by both specialist and mainstream software companies.

IN 2003:
The first public release of Xen, a software system that enables multiple virtual guest operating systems to run simultaneously on a single machine; such a system is also known as a Virtual Machine Monitor (VMM) or hypervisor.

IN 2006:
The Amazon cloud service was launched in 2006. First, its Elastic Compute Cloud (EC2) allowed people to run their own cloud applications and access computers. Simple Storage Service (S3) was then released. This introduced the pay-as-you-go model, which has become the standard practice for both users and the industry as a whole.

IN 2013:
The worldwide market for public cloud services reached £78 billion, up 18.5% from 2012, with IaaS one of the fastest-growing services on the market.

IN 2014:
Global business spending for cloud-related technology and services was estimated at £103.8 billion in 2014, up 20% from 2013 (Constellation Research).

The figure below gives an overview of the evolution of the distributed computing technologies behind cloud computing. In tracking the historic developments, we briefly review five key technologies that have played a significant role in cloud computing: distributed systems, virtualization, Web 2.0, service orientation and utility computing.
Figure 1.10: The evolution of distributed computing technologies, 1950s-2010s. (Reference: "Mastering Cloud Computing: Foundations and Applications Programming" by Rajkumar Buyya)
Distributed computing is a computing concept that, most of the time, refers to multiple computer systems working on a single problem. In distributed computing, a single problem is broken down into many parts, and a different computer solves each part. The computers are interconnected and can communicate with each other to resolve the problem. If done properly, the computers function as a single entity.

The ultimate goal of distributed computing is to improve overall performance through cost-effective, transparent and secure connections between users and IT resources. It also ensures fault tolerance and provides access to resources in the event of the failure of one component.

There really is nothing special about distributing resources in a computer network. This began with the use of mainframe terminals, then moved to minicomputers, and is now possible in personal computers and multi-tier client-server architectures.

A distributed computing architecture consists of a number of very lightweight client machines installed alongside one or more dedicated servers for computation management. Client agents normally recognize when a machine is idle, so that the management server is notified that the machine is not in use and is available. The agent then asks for an application package. When this application package is delivered from the management server to the client, the client runs the application software whenever it has free CPU cycles and returns the results to the management server. When the user returns, the management server reclaims the resources that were used to perform tasks in the user's absence.

Distributed systems show heterogeneity, openness, scalability, transparency, concurrency, continuous availability and independent failures.
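The decomposition idea just described (split a single problem into parts, let each node compute its part, then combine the partial results) can be sketched in a few lines. This is only a minimal illustration: threads in one process stand in for the networked computers, and the names `partial_sum` and `distributed_sum` are illustrative, not part of any real framework.

```python
# Sketch of task decomposition in distributed computing: one problem
# (summing a large range) is broken into parts, each part is handled by
# a separate "node" (simulated here with threads in one process; a real
# system would use machines on a network), and the partial results are
# combined into a single answer.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    """Work done by one node: sum its assigned slice of the problem."""
    start, stop = chunk
    return sum(range(start, stop))

def distributed_sum(n, nodes=4):
    """Split range(0, n) into `nodes` chunks and combine the results."""
    step = n // nodes
    chunks = [(i * step, (i + 1) * step if i < nodes - 1 else n)
              for i in range(nodes)]
    with ThreadPoolExecutor(max_workers=nodes) as pool:
        return sum(pool.map(partial_sum, chunks))

print(distributed_sum(1_000_000))  # same answer as sum(range(1_000_000))
```

If done properly, the caller cannot tell whether one machine or many produced the answer, which is exactly the "single entity" property described above.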
These characteristics describe clouds to some extent, especially with regard to scalability, concurrency and continuous availability.

Three major milestones have contributed to cloud computing: mainframes, cluster computing and grid computing.

Mainframes:
A mainframe is a powerful computer which often serves as the main data repository for an organization's IT infrastructure. It is connected to users via less powerful devices such as workstations or terminals. Centralizing data into a single mainframe repository makes it easier to manage and update the data and to protect its integrity. Mainframes are generally used for large-scale processes which require greater availability and safety than smaller machines. Mainframe computers, or mainframes, are primarily machines used by large organizations for essential purposes: bulk data processing such as census, industry and consumer statistics, enterprise resource planning and transaction processing. During the late 1950s, mainframes had only a basic interactive interface, using punched cards, paper tape or magnetic tape for data transfer and programs. They worked in batch mode to support back-office functions, like payroll and customer billing, mainly based on repetitive tape and merging operations followed by line printing to continuous pre-printed stationery. Digital user interfaces were introduced almost solely to execute applications (e.g. airline booking) rather than to build software. Typewriter and Teletype machines were network operators' standard control consoles into the early '70s, although they were largely replaced by keyboards.
Figure 1.11: Mainframes

Cluster computing:
The computer clustering approach typically connects a number of computing nodes (personal computers used as servers) via a fast local area network (LAN). The activity of the computing nodes is coordinated by "clustering middleware," a software layer that sits atop the nodes and lets users treat the cluster as a whole, via a single system image concept. A cluster is a type of parallel or distributed computer system consisting of a collection of interconnected independent computers that work together as a single, highly integrated computing resource, combining software and networking with independent computers into one system. Clusters are usually used to provide greater computational power than a single computer can deliver, for high availability, greater reliability or high-performance computing. In comparison with other technologies, the cluster technique is economical with respect to power and processing speed, since it uses off-the-shelf hardware and software components, whereas mainframe computers use custom-built proprietary hardware and software components. Multiple computers in a cluster work together to deliver unified and faster processing. A cluster can be upgraded to a higher specification or extended by adding additional nodes, as opposed to a mainframe computer. Redundant machines which continuously take over processing minimize single-component failures; for mainframe systems, this kind of redundancy is absent.

PVM and MPI are the two methods most widely used for cluster communication.

PVM stands for Parallel Virtual Machine. PVM was developed at the Oak Ridge National Laboratory around 1989. It is installed directly on each node and provides a set of libraries that turn the nodes into a "parallel virtual machine." It offers a runtime environment for resource control, task management, error reporting and message passing. User programs in C, C++ or Fortran may use PVM.

MPI stands for Message Passing Interface. It was created in the 1990s and replaced PVM. Its design draws on various commercially available systems of the time. It is typically implemented using TCP/IP and socket connections. It is currently the most widely used communication system, enabling parallel programming in C, Fortran, Python, etc.
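The send/receive model behind PVM and MPI can be illustrated with a small standard-library sketch. This is not MPI itself: threads and queues stand in for cluster nodes and the interconnect, and a real program would use MPI_Send/MPI_Recv (for example through a binding such as mpi4py) over the network.

```python
# Sketch of the message-passing model used by PVM/MPI: cooperating
# workers exchange data only through explicit send/receive operations.
# Threads and queues stand in here for cluster nodes and the network.
import threading
import queue

def worker(rank, inbox, outbox):
    """One 'node': receive a task message, compute, send the result back."""
    task = inbox.get()                  # blocking receive (like MPI_Recv)
    result = sum(x * x for x in task)   # local computation on this node
    outbox.put((rank, result))          # send to the master (like MPI_Send)

def master(data, nodes=2):
    """Master node: scatter chunks to workers, then gather partial results."""
    outbox = queue.Queue()
    chunk = len(data) // nodes
    threads = []
    for rank in range(nodes):
        inbox = queue.Queue()
        start = rank * chunk
        end = len(data) if rank == nodes - 1 else start + chunk
        inbox.put(data[start:end])      # send the task message
        t = threading.Thread(target=worker, args=(rank, inbox, outbox))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()
    return sum(outbox.get()[1] for _ in range(nodes))

print(master(list(range(10))))  # sum of squares 0..9 = 285
```

The scatter/gather pattern shown here is the same one MPI programs express with collective operations such as scatter and reduce.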
Figure 1.12: Cluster computing

Grid computing:
Grid computing is a processor architecture that combines computing resources from different domains to achieve a main objective. In grid computing, the computers on the network can work together on a task, thereby acting as a supercomputer. In general, a grid works on several tasks within a network, but it can also work on specific applications. It is intended to solve problems that are too large for a supercomputer while retaining the ability to handle many smaller problems. Computing grids deliver a multi-user network that meets discontinuous information processing requirements.

A grid is composed of computer clusters running an operating system, Linux or free software, using parallel nodes. A cluster can vary in size from a small workgroup to multiple networks. The technology is applied through several computing resources in a wide variety of applications, such as mathematical, research or educational tasks. It is often used in structural analysis as well as in web services such as ATM banking, back-office infrastructure, and scientific or marketing research. Grid computing consists of applications used to solve computational problems in a parallel networking environment. It connects each PC and combines the information into one computational application.

Grids draw on a range of resources based on different software and hardware structures, computer languages and frameworks, connected through a network or through open standards with clear guidelines, to achieve common goals and objectives.

Generally, grid operations are divided into two categories:

Data Grid: a system that handles large distributed data sets used for data management and controlled sharing among users. It creates virtual environments that support dispersed and organized research. An example of a data grid is the Southern California Earthquake Center, which uses a middleware framework to build a digital library, a distributed file system and a continuous archive.

CPU Scavenging Grids: a cycle-scavenging system that moves projects from one PC to another as needed. A familiar CPU scavenging grid is the Search for Extraterrestrial Intelligence (SETI) computation, which includes more than 3 million computers. The detection of radio signals in the Search for Extra-Terrestrial Intelligence is one of radio astronomy's most exciting applications. A radio astronomy dish was used by the first SETI team in the late 1950s.
A few years later, the privately funded SETI Institute was established to perform more searches with several American radio telescopes. Today, in cooperation with the radio astronomy engineers and researchers of various observatories and universities, the SETI Institute again builds its own facilities with private funds. SETI's vast need for computing capacity led to a unique grid computing concept which has since been extended to many applications. SETI@home is a scientific experiment that uses Internet-connected computers to download and analyze radio telescope data for the SETI program. A free software program harnesses the power of millions of computers, using idle computer capacity to run in the background. Over 5.2 million participants have contributed more than two million years of combined processing time.
Grid computing is being used for biology, medicine, Earth sciences, physics, astronomy, chemistry and mathematics. The Berkeley Open Infrastructure for Network Computing (BOINC) is free, open-source software for volunteer computing and desktop grid computing. Using the BOINC platform, users can divide jobs between several grid computing projects and decide what percentage of CPU time to give each one.
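A BOINC-style scavenging client can be caricatured as a simple loop: when the machine is idle, fetch a work unit, analyze it, and report the result. Every name below (machine_is_idle, fetch_work_unit, analyze) is a hypothetical stand-in; a real volunteer-computing client downloads work units from a project server over the network and throttles itself by CPU load.

```python
# Simplified sketch of a BOINC-style CPU-scavenging client.
WORK_UNITS = [list(range(5)), list(range(5, 10))]  # stand-in for server data

def machine_is_idle():
    """Stub: a real client would check CPU load and user activity."""
    return True

def fetch_work_unit():
    """Stub: a real client would download a unit over the network."""
    return WORK_UNITS.pop(0) if WORK_UNITS else None

def analyze(unit):
    """Stand-in for the scientific computation (e.g. signal analysis)."""
    return sum(unit)

def scavenging_loop():
    """Run donated work whenever the machine is idle; collect results."""
    results = []
    while True:
        if not machine_is_idle():
            continue            # wait for spare CPU cycles
        unit = fetch_work_unit()
        if unit is None:
            break               # no more work available
        results.append(analyze(unit))
    return results

print(scavenging_loop())  # → [10, 35]
```

The essential point is that the grid never interferes with the owner's use of the machine; work proceeds only on otherwise wasted cycles.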
Figure 1.13: Grid computing environment

1.3.2 Virtualization:
Virtualization is a process that makes the use of physical computer hardware more effective and forms the basis of cloud computing. Virtualization uses software to create an abstraction layer over computer hardware, enabling multiple virtual computers, usually referred to as virtual machines (VMs), to share the hardware elements of a single computer: processors, memory, storage and more. Each VM runs its own OS and acts like an autonomous computer, even though it runs on only a portion of the underlying computer hardware.

Virtualization therefore facilitates a much more effective use of physical computer hardware, allowing a larger return on an organization's hardware investment.

Virtualization is today a common practice in enterprise IT architecture. It is also the technology that drives the economics of cloud computing. Virtualization allows cloud providers to serve consumers using their own physical computing hardware, and allows cloud users to purchase only the computing resources they need, when they need them, and to scale them cost-effectively as their workloads grow.
Virtualization involves creating a virtual version of something, including virtual computer hardware, virtual storage devices and virtual computer networks.

Software called a hypervisor is used for hardware virtualization. With the help of a hypervisor, software is decoupled from the server hardware. The role of the hypervisor is to control the physical hardware that is shared between the guests and the host. Hardware virtualization is done using a Virtual Machine Monitor (VMM) to abstract the physical hardware. There are several processor extensions which help speed up virtualization activities and increase hypervisor performance. When this virtualization is done for a server platform, it is called server virtualization.

The hypervisor creates an abstraction layer between the software and the hardware in use. Once a hypervisor is installed, virtual representations, such as virtual processors, take the place of the physical ones, and we no longer use the physical processors directly. There are several popular hypervisors, including VMware vSphere (based on ESXi) and Hyper-V.
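The resource partitioning a hypervisor performs can be modeled in a toy way: the host's processors and memory are carved into isolated slices, one per VM. This is a conceptual sketch only; real hypervisors such as ESXi or Hyper-V schedule and isolate hardware directly rather than doing bookkeeping in Python.

```python
# Toy model of hypervisor resource partitioning: the physical host's
# CPUs and memory are divided among virtual machines, each of which
# sees only its own share of the underlying hardware.
class Hypervisor:
    def __init__(self, cpus, memory_gb):
        self.free_cpus = cpus
        self.free_memory_gb = memory_gb
        self.vms = {}

    def create_vm(self, name, cpus, memory_gb):
        """Allocate a slice of the host's resources to a new VM."""
        if cpus > self.free_cpus or memory_gb > self.free_memory_gb:
            raise RuntimeError("insufficient physical resources")
        self.free_cpus -= cpus
        self.free_memory_gb -= memory_gb
        self.vms[name] = {"cpus": cpus, "memory_gb": memory_gb}
        return self.vms[name]

host = Hypervisor(cpus=16, memory_gb=64)
host.create_vm("web", cpus=4, memory_gb=16)
host.create_vm("db", cpus=8, memory_gb=32)
print(host.free_cpus, host.free_memory_gb)  # → 4 16
```

Each "VM" here sees only the resources it was granted, which mirrors how a guest OS perceives only its allocated portion of the physical machine.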
Figure 1.14: Hardware virtualization

Virtual machine instances are typically represented by one or more files, which can easily be moved between physical machines. They are also autonomous, since they have no dependencies for their use other than the virtual machine manager.

A process virtual machine, sometimes known as an application virtual machine, runs inside a host OS as an ordinary application and supports a single process. It is created when the process starts and destroyed when it ends. Its aim is to provide a platform-independent programming environment which abstracts away the details of the underlying hardware or operating system and allows a program to run in the same way on any platform. For example, the Wine software on Linux helps you run Windows applications.
A process VM provides a high-level abstraction, that of a high-level programming language (compared with the low-level ISA abstraction of a system VM). Process VMs are implemented by means of an interpreter; just-in-time compilation achieves performance comparable to compiled programming languages.

This form of VM became popular with the Java programming language, which is implemented with the Java Virtual Machine. The .NET Framework, which runs on a VM called the Common Language Runtime, is another example.
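CPython itself is a convenient example of a process VM: source code is compiled to platform-independent bytecode, which the interpreter then executes the same way on any host OS. The standard dis module makes that bytecode visible:

```python
# CPython as a process VM: a function is compiled to bytecode that the
# interpreter executes identically on Windows, Linux, or macOS.
import dis

def add(a, b):
    return a + b

# Show the bytecode the CPython VM will run for add(); the exact opcode
# names vary by Python version, but the instructions are the same on
# every operating system.
dis.dis(add)
print(add(2, 3))  # the VM interprets the bytecode to produce 5
```

The JVM and the Common Language Runtime follow the same pattern, with just-in-time compilation layered on top for speed.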
Figure 1.15: Process virtual machine design (Reference: "Mastering Cloud Computing: Foundations and Applications Programming" by Rajkumar Buyya)

Web 2.0:
Web 2.0 refers to "websites which emphasize user-generated content, user-friendliness, participatory culture, and interoperability for end users"; in other words, participative and social websites. Web 2.0 is a concept that first came into common usage about 20 years ago, in 1999. It was first coined by Darcy DiNucci and later popularized by Tim O'Reilly and Dale Dougherty at a conference held in 2004. It should be remembered that Web 2.0 deals only with website design and use, without placing technical requirements on designers.

Web 2.0 is the term used for a range of websites and applications that allow anyone to create and share information or material online. A key feature of the technology is that it lets people create, share and communicate. Web 2.0 differs from other kinds of sites because it does not require Web design or publishing skills to participate, making it easy for people to create, publish and communicate their work to the world. The design makes sharing knowledge simple and popular, whether with a small community or a much wider audience. A university can use these tools to communicate with students, staff and the wider university community. It can also be a good way for students and colleagues to communicate and interact.

Web 2.0 represents the evolution of the World Wide Web: web applications that enable interactive data sharing, user-centered design and worldwide collaboration. Web 2.0 is a collective term for Web-based technologies that include blogging and wikis, online networking platforms, podcasting, social networks, social bookmarking websites and Really Simple Syndication (RSS) feeds. The main idea behind Web 2.0 is to enhance the connectivity of Web applications and enable users to access the Web easily and efficiently. Cloud computing services are essentially Web applications that provide computing services on demand over the Internet. As a consequence, cloud computing uses a Web 2.0 methodology; cloud computing is considered to provide key infrastructure for Web 2.0, and it both facilitates and is improved by the Web 2.0 framework.

Beneath Web 2.0 is a set of web technologies that have recently appeared or moved to a new stage of maturity: RIAs (Rich Internet Applications), among them the Web's most prominent quasi-standard technology AJAX (Asynchronous JavaScript and XML), along with technologies like RSS (Really Simple Syndication), widgets (plug-in modular components) and Web services (e.g. SOAP, REST).
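As a small illustration of one of these technologies, the sketch below parses an RSS 2.0 feed with the standard library. The feed XML is an inline stand-in; a real Web 2.0 client would fetch it over HTTP from a site's feed URL.

```python
# Sketch of consuming an RSS feed, one of the syndication technologies
# that underpin Web 2.0 content sharing.
import xml.etree.ElementTree as ET

FEED = """<rss version="2.0"><channel>
  <title>Example Blog</title>
  <item><title>First post</title><link>http://example.com/1</link></item>
  <item><title>Second post</title><link>http://example.com/2</link></item>
</channel></rss>"""

def item_titles(feed_xml):
    """Return the title of every item in an RSS 2.0 feed."""
    root = ET.fromstring(feed_xml)
    return [item.findtext("title") for item in root.iter("item")]

print(item_titles(FEED))  # → ['First post', 'Second post']
```

Because RSS is plain XML over HTTP, any site's content can be aggregated by any client, which is exactly the interoperability Web 2.0 emphasizes.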
Figure 1.16: Components of the social web (Web 2.0)

1.3.3 Service-oriented computing:
Service-oriented computing (SOC) is the computing paradigm that uses services as the fundamental component for creating applications and solutions. Services are self-describing, platform-agnostic components that enable the easy and cost-effective composition of distributed applications. Services perform functions ranging from simple requests to complex business processes. Services permit organizations, using common XML languages and protocols, to expose their core capabilities programmatically over the Internet or an intranet and to invoke them via an open-standard, self-describing interface.

Because services provide uniform and ubiquitous distribution of information for a wide variety of computing devices (e.g. handheld computers, PDAs, cell phones or appliances) as well as software platforms (e.g. UNIX and Windows), they are the next major step in distributed computing technology. Services are offered by service providers: organizations that implement the service and supply its service descriptions and related technical and business support.

Since different services can be made available by different companies, Internet communications provide a networking platform for intra- and cross-company application integration and collaboration. Service clients can be other companies' or clients' applications, whether external applications, processes or clients/users.

Consequently, to satisfy these requirements, services should be:

•Technology neutral: they must be invocable through standardized lowest-common-denominator technologies that are available in almost all IT environments. This implies that the invocation mechanisms (protocols, descriptions and discovery mechanisms) should comply with widely accepted standards.

•Loosely coupled: neither the client nor the service side requires knowledge of the internal structures or conventions (context) of the other side.

•Location transparent: services should have their definitions and location information saved in a repository such as UDDI and accessible to a range of clients, which can locate and invoke services regardless of their location.

Web-service interactions take place using the Simple Object Access Protocol (SOAP), which carries XML data, together with the Web Service Description Language (WSDL) as the common (XML) standard for describing services. WSDL is used for publishing web services, in terms of port types (the abstract description of operations and message exchanges) and port bindings and addresses (the concrete specification of which packaging and transport protocols, for instance SOAP, are used to link two conversational end-points).
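A SOAP interaction of the kind described above is, at bottom, an XML envelope exchanged between endpoints. The sketch below builds and parses such an envelope with the standard library; the GetPrice operation and its namespace are hypothetical, and a real service would also publish a WSDL description and use HTTP as the transport.

```python
# Minimal sketch of SOAP messaging: a web-service call is an XML
# envelope carried over a transport such as HTTP. The service
# namespace and the GetPrice operation below are made up.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
SVC_NS = "http://example.com/stock"          # hypothetical service namespace

def build_request(symbol):
    """Client side: build a SOAP envelope invoking GetPrice."""
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{SVC_NS}}}GetPrice")
    ET.SubElement(op, f"{{{SVC_NS}}}Symbol").text = symbol
    return ET.tostring(env, encoding="unicode")

def parse_request(xml_text):
    """Server side: extract the requested symbol from the envelope."""
    root = ET.fromstring(xml_text)
    return root.find(f".//{{{SVC_NS}}}Symbol").text

request = build_request("ACME")
print(parse_request(request))  # → ACME
```

Because both sides agree only on the XML contract, the client and server can be written in different languages on different platforms, which is the technology neutrality listed above.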
The UDDI standard is a directory service that contains publications of services and enables clients to find candidate services and learn about them.

The software-as-a-service concept advocated by service-oriented computing (SOC) was pioneered by the Application Service Provider (ASP) software model. An Application Service Provider (ASP) is an entity which deploys, hosts and manages access to a packaged application on behalf of third parties, providing clients with software-based services and solutions from a central data center over a wide-area network. Applications are delivered over networks on a subscription or rental basis. In essence, ASPs provided businesses with a way to outsource some or all parts of their IT needs.

The ASP retains responsibility for managing the application in its own infrastructure, using the Internet as the connection between each customer and the centrally hosted core software application. What this means for an organization is that the ASP maintains and guarantees that the program and data are accessible whenever needed, including the related infrastructure and the customer's data.

While the ASP model first introduced the software-as-a-service concept, it could not provide fully customizable applications, owing to numerous inherent constraints such as its inability to build highly interactive applications. The result was monolithic architectures and highly fragile, customer-specific integrations based on tight-coupling principles.

Today we are in the middle of another significant development: the evolution of software as a service toward asynchronous, loosely coupled interactions based on XML standards, with the intention of making it easier for applications to access and interact with each other over the Internet. The SOC model extends the software-as-a-service idea to the provision of complicated business processes and transactions as services, and allows applications to be composed on the fly and services to be reused everywhere and by everybody. Many ASPs are pursuing digital infrastructures and business models similar to those of cloud service providers, given the relative advantages of Internet technology.

Web services have both functional and non-functional attributes. The non-functional attributes are known as quality of service (QoS). According to the ISO 8402 definition of quality, QoS is defined as a set of non-functional attributes of the entities on the path from a web service repository to its consumers that bear on the web service's ability to satisfy stated or implied needs in an end-to-end fashion.
Examples of QoS features include performance, reliability, security, accessibility, usability, discoverability, adaptability, and composability. A QoS requirement between clients and providers is established through an SLA that identifies the minimum (or acceptable range of) values for the QoS attributes that must be met when the service is invoked.

What is Service-Oriented Architecture?
Service-Oriented Architecture, or SOA, is best understood as an architecture organized around services. Services are discrete software components implemented using well-defined interface standards. After a service is created and validated, it is published to a directory or registry so that other developers can access it. The registry also provides a repository holding information about the published service, for example how to invoke the interface, what levels of service are required, who maintains authority over it, and so on.
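The publish/discover cycle and the SLA check described above can be sketched in a few lines of Python. This is an illustrative toy only, not a real UDDI or WSDL API; all names (`Registry`, `ServiceRecord`, the QoS/SLA keys) are invented for the example.

```python
# Toy service registry sketch: UDDI-style publication and discovery, with
# each service's QoS attributes checked against a consumer's SLA minimums.
from dataclasses import dataclass, field

@dataclass
class ServiceRecord:
    name: str
    endpoint: str
    qos: dict = field(default_factory=dict)  # e.g. {"availability": 99.9, "response_ms": 80}

class Registry:
    def __init__(self):
        self._services = []

    def publish(self, record):
        """Publish a created and validated service so others can discover it."""
        self._services.append(record)

    def find(self, name, sla):
        """Return candidate services whose QoS satisfies the SLA."""
        return [s for s in self._services
                if s.name == name and self._meets(s.qos, sla)]

    @staticmethod
    def _meets(qos, sla):
        # availability must be at least the SLA floor; latency at most the ceiling
        return (qos.get("availability", 0) >= sla.get("min_availability", 0)
                and qos.get("response_ms", float("inf")) <= sla.get("max_response_ms", float("inf")))

registry = Registry()
registry.publish(ServiceRecord("quote", "http://a.example/ws", {"availability": 99.9, "response_ms": 80}))
registry.publish(ServiceRecord("quote", "http://b.example/ws", {"availability": 95.0, "response_ms": 300}))

matches = registry.find("quote", {"min_availability": 99.0, "max_response_ms": 100})
print([s.endpoint for s in matches])  # → ['http://a.example/ws']
```

Only the first publication satisfies both SLA bounds, so discovery returns a single candidate endpoint.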


Figure 1.17 Service-oriented Architecture

SOA benefits:
SOA services enable business agility. By composing existing services, developers can create applications quickly. The services are distinct entities and can be invoked at run time without knowledge of the underlying platform or programming language. The services follow a set of standards – Web Services Description Language (WSDL), Representational State Transfer (REST), or the Simple Object Access Protocol (SOAP) – which facilitates their integration with both existing and new applications. Safety is provided through quality of service (QoS); elements of QoS include authentication and authorization, reliable and consistent messaging, permission policies, and so on. The service components have no interdependence on one another.

SOA and cloud computing challenges:
•The dependence of both technologies on the network is one of the major challenges.
•Dependence on the cloud provider, contracts, and service-level agreements are challenges specific to cloud computing.
•One of the challenges for SOA today is handling requests to improve or change the services provided by SOA service providers.

Does cloud computing compete with SOA?
Some see cloud computing as a descendant of SOA. That would not be completely untrue, since the principles of service orientation apply to both cloud computing and SOA. The following illustration shows how cloud computing services overlap with SOA:

Cloud Computing:
•Software as a Service (SaaS)
•Utility Computing
•Terabytes on Demand
•Data Distributed in a Cloud
•Platform as a Service
•Standards Evolving for Different Layers of the Stack

Overlap:
•Application Layer Components/Services
•Network Dependence
•Cloud/IP Wide Area Network (WAN)-supported Service Invocations
•Leveraging Distributed Software Assets
•Producer/Consumer Model

SOA via Web Services:
•System-of-Systems Integration Focus
•Driving Consistency of Integration
•Enterprise Application Integration (EAI)
•Reasonably Mature Implementing Standards (REST, SOAP, WSDL, UDDI, etc.)

It is very important to realize that while cloud computing overlaps with SOA, the two focus on different implementation projects. SOA implementations are primarily used to exchange information between systems and networks of systems. Cloud computing, on the other hand, aims to leverage the network across the whole range of IT functions.

SOA does not compete with cloud computing; in fact, the two are complementary activities. Providers need a very good service-oriented architecture to be able to deliver cloud services effectively. SOA and cloud computing share many common features, yet they are not the same and can coexist. SOA appears to have matured in its requirements for the delivery of digital services. Cloud computing and its services are newer, as are the many vendors of public, community, hybrid, and private clouds, and their offerings are still growing.

1.3.5 Utility-oriented computing:
The term utility computing refers to services and business models in which a service provider supplies computing resources to its customers and charges them for consumption. Computing power, storage, or applications are examples of such IT services. In this scenario, the customer may even be an individual division of a company, with the company's own data center acting as the service provider. The term utility draws on the utility services offered by a utilities provider, such as electricity, telephone, water, and gas. As with electricity or telephone service, in utility computing the computing power a consumer receives over a shared computer network is metered and paid for on the basis of use.
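The metered, pay-per-use idea described above can be sketched with a small billing model. The resource names and per-unit rates here are purely illustrative, chosen for the example, not taken from any real provider's price list.

```python
# Minimal pay-per-use metering sketch: usage is measured per consumer and
# billed like electricity or telephone service, not paid for up front.
from collections import defaultdict

RATES = {"cpu_hours": 0.05, "gb_stored": 0.02}   # hypothetical price per unit

class Meter:
    def __init__(self):
        # consumer -> resource -> total amount consumed
        self.usage = defaultdict(lambda: defaultdict(float))

    def record(self, consumer, resource, amount):
        """Meter one unit of consumption, as a utility provider would."""
        self.usage[consumer][resource] += amount

    def bill(self, consumer):
        """Charge only for what was actually consumed."""
        return round(sum(RATES[r] * qty for r, qty in self.usage[consumer].items()), 2)

m = Meter()
m.record("dept-a", "cpu_hours", 120)   # 120 CPU-hours this month
m.record("dept-a", "gb_stored", 50)    # 50 GB of storage
print(m.bill("dept-a"))  # → 7.0  (120*0.05 + 50*0.02)
```

Note how the internal division "dept-a" is billed only for its measured consumption, which is exactly the cost-allocation scenario the section describes.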


Utility computing relies heavily on virtualization, so that the total volume of web storage and computing capacity available to customers can be much greater than that of a single computer. Several networked back-end servers are often used to make this kind of web service possible. Dedicated web servers can be used in explicitly built and leased cluster configurations for end users. Distributed computing is the approach used to spread a single 'calculation' across multiple web servers.

Figure 1.18 Cloud Computing Technology – Utility Computing

Properties of utility computing:
Even though definitions of utility computing vary, they usually contain the following five characteristics.

Scalability:
Utility computing must ensure that adequate IT resources are available under all situations. Increased demand for a service must not degrade its quality (e.g., response time).

Price of demand:
Until now, companies had to purchase their own computing power, such as hardware and software, and pay for this IT infrastructure up front, irrespective of its future use. With utility computing, payment is linked to actual use. For instance, a technology provider may base the leasing rate for its servers on how many CPUs the client has enabled. If the computing capacity actually used by individual departments can be measured within a company, IT costs can be attributed directly to each individual unit at internal cost rates. Other forms of usage-based cost allocation are also possible.

Standardized utility computing services:
The utility computing service provider offers its customers a catalog of standardized services. These may carry different service-level agreements (agreements on the quality and price of the IT services). The customer has no influence on the underlying infrastructure, such as the server platform.

Utility computing and virtualization:
Virtualization technologies can be used to share web and other resources in a common pool of machines. This divides the network into logical resources rather than the available physical resources. An application is assigned no predetermined server or storage; instead, a free server or free pool memory is allocated at run time.

Automation:
Repetitive management activities, such as setting up new servers or installing updates, can be automated. In addition, the allocation of resources to services and the management of IT services can be optimized, taking into account service-level agreements and the operating costs of the IT resources.

Advantages of utility computing:
Utility computing lowers IT costs while increasing the flexibility with which existing resources are used. Expenses become transparent and can be allocated directly to the different departments of an organization. Fewer people are required for operational activities in the IT departments. Companies become more flexible because their IT resources adapt to fluctuating demand more quickly and easily.
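The scalability and automation properties just described can be sketched as a simple threshold rule that adds or removes servers as demand fluctuates. The utilization thresholds and the load trace are illustrative values invented for the example, not figures from any real autoscaler.

```python
# Sketch of automated scaling: provision another server when average
# utilization is high, release one when it is low, never going below a floor.
def scale(servers, load_per_server, high=0.8, low=0.3, min_servers=1):
    """Return the new server count for the observed average utilization."""
    if load_per_server > high:                      # demand rose: add capacity
        return servers + 1
    if load_per_server < low and servers > min_servers:
        return servers - 1                          # demand fell: release capacity
    return servers

n = 2
for utilization in [0.9, 0.85, 0.5, 0.2, 0.1]:      # a fluctuating load trace
    n = scale(n, utilization)
print(n)  # → 2 (capacity grew to 4 under load, then shrank back)
```

The point of the sketch is the automation property: no administrator intervenes, yet capacity follows demand in both directions.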
All in all, the entire IT landscape becomes simpler to manage, because applications no longer each require their own dedicated IT infrastructure.

1.4 BUILDING CLOUD COMPUTING ENVIRONMENTS
Application development in a cloud computing environment takes place on platforms and frameworks that provide different types of services, from the bare-metal infrastructure to customized applications serving specific purposes.

1.4.1 Application development:
Cloud computing provides a powerful computing model that allows users to consume applications on demand. One of the classes of applications that benefits most from this feature is that of Web applications. Their performance is mostly influenced by workloads that vary with user demand, and the broad range of applications built on various cloud services can generate such variable workloads. Several factors have facilitated the rapid diffusion of Web 2.0. First, Web 2.0 builds on a variety of technological developments and advances that allow users to easily create rich and complex applications, including enterprise applications, by leveraging the Internet as the main utility and user-interaction platform. Such applications are characterized by significant complexity in the processes triggered by user interactions and by the interaction among the multiple tiers behind the Web front end. These are the applications most sensitive to improper sizing of infrastructure and service deployment, and to workload variability.

Resource-intensive applications represent another class of applications that can benefit greatly from cloud computing. These applications may be compute-intensive or data-intensive. In both cases, significant resources are required to complete execution in a reasonable time. It should be noted that such huge quantities of resources are not needed constantly or for a long time. Scientific applications, for example, may require huge computational capacity to conduct large-scale experiments once in a while, so it does not make sense to purchase dedicated infrastructure to support them. In this case, cloud computing is the solution. Resource-intensive applications are usually not interactive and are mostly characterized by batch processing.

1.4.2 Infrastructure and system development:
Distributed computing, virtualization, service orientation, and Web 2.0 are the key technologies for providing cloud services from anywhere in the world. Developing applications and systems that leverage the cloud requires knowledge of all these technologies. Distributed computing is a foundational model for cloud computing, because cloud systems are distributed systems.
Aside from administrative tasks, mostly connected to the accessibility of resources in the cloud, the extreme dynamism of cloud systems, in which new nodes and services are provisioned on demand, constitutes a major challenge for engineers and developers. This feature is somewhat unique to cloud-based solutions and is most often handled at the middleware level of the computing system. Infrastructure-as-a-Service solutions provide the capability of adding and removing resources, but it is left to those who deploy systems on top of this infrastructure to make use of these opportunities with wisdom and effectiveness. Platform-as-a-Service solutions embed in their core offering algorithms and rules that control the provisioning and lease of resources; these can either be completely transparent to developers or under their control. Another aspect of interest is the integration of cloud resources with existing system deployments.

Web 2.0 technologies constitute the interface through which cloud computing services are delivered, managed, and provisioned. In addition to the interaction with rich Web browser interfaces, Web services have become, from a conceptual standpoint, the main access point to cloud computing systems. Service orientation is therefore the underlying paradigm that defines the architecture of a cloud computing system.

Virtualization is another element that plays a fundamental role in cloud computing. This technology is a core feature of the infrastructure used by cloud providers. As already mentioned, the virtualization concept is more than 40 years old, but cloud computing introduces new challenges, particularly in the management of virtual environments, whether they are abstractions of virtual hardware or runtime environments. Developers of cloud applications need to be aware of the limitations of the chosen virtualization technology and its effects on the volatility of some components of their systems.

All these factors influence the way we program applications and systems based on cloud computing technologies. Cloud computing essentially provides mechanisms to address surges in demand by replicating the required components of computing systems under stress (i.e., heavily loaded). Dynamism, scale, and volatility are the key elements that should guide the design of such systems.

1.4.3 Computing platforms and technologies:
Development of a cloud computing application happens by leveraging platforms and frameworks that provide different types of services, from the bare-metal infrastructure to personalized applications serving specific purposes.

1.4.3.1 Amazon web services (AWS):
Amazon Web Services (AWS) is a cloud computing platform offering functionality such as database storage, content delivery, and secure IT infrastructure for companies, among other services. It is known for its on-demand services, namely Elastic Compute Cloud (EC2) and Simple Storage Service (S3). Amazon EC2 and Amazon S3 are essential tools to understand if you want to make the most of the AWS cloud. Amazon EC2, short for Elastic Compute Cloud, is a service for running cloud servers.
Amazon launched EC2 in 2006; it allowed companies to spin servers up in the cloud quickly and easily, instead of having to buy, set up, and manage their own servers on premises. While bare-metal EC2 instances are also available, most Amazon EC2 server instances are virtual machines hosted on Amazon's infrastructure. (The servers are operated by the cloud provider, and you do not need to set up or maintain the hardware. Bare-metal cloud instances permit you to host a workload on a physical computer rather than on a virtual machine.) A vast number of EC2 instance types are available at different prices; generally speaking, the more computing capacity you need, the larger the EC2 instance you require. Certain Amazon EC2 instance types are optimized for particular kinds of applications, for example GPU instances for the parallel processing of big data workloads.

EC2 does not just make deploying a server simpler and quicker; it also offers functionality such as auto-scaling, which automates the process of increasing or decreasing the compute resources available to a given workload. Auto-scaling thus helps to optimize costs and performance, especially for workloads with significant variations in volume.

Amazon S3 is a storage service operating in the AWS cloud (as its full name, Simple Storage Service, suggests). It enables users to store virtually any form of data in the cloud and to access that storage through a web interface, the AWS Command Line Interface, or the AWS API. To use S3, you create what Amazon calls a 'bucket', a container that you use to store and retrieve data. You can set up as many buckets as you like. Amazon S3 is an object storage system that works especially well for massive, unstructured, or highly dynamic data.

1.4.3.2 Google AppEngine:
Google AppEngine (GAE) is a cloud computing service (belonging to the platform-as-a-service (PaaS) category) for building and hosting web applications in Google's data centers. GAE web applications are sandboxed and run across multiple redundant servers, so that resources can be scaled up to match current traffic requirements; App Engine assigns additional resources to servers to handle increased load.

Google App Engine is Google's platform for developers and businesses to build and run applications on Google's advanced infrastructure. Applications must be written in one of the few supported languages, namely Java, Python, PHP, or Go. The platform also requires the use of Google's query language, and Google BigTable is the database used.
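Returning to Amazon S3: its bucket/key model can be illustrated with an in-memory sketch. This mimics the concept only; real S3 access goes through the AWS API, CLI, or an SDK, and the class and bucket names below are invented for the example.

```python
# Conceptual sketch of S3's bucket/key object model as an in-memory store:
# named buckets hold arbitrary byte objects addressed by a key.
class ObjectStore:
    def __init__(self):
        self._buckets = {}

    def create_bucket(self, name):
        # a bucket is the container used to store and retrieve data
        self._buckets.setdefault(name, {})

    def put_object(self, bucket, key, data: bytes):
        self._buckets[bucket][key] = data   # any form of data, stored under a key

    def get_object(self, bucket, key) -> bytes:
        return self._buckets[bucket][key]

store = ObjectStore()
store.create_bucket("reports")
store.put_object("reports", "2024/q1.csv", b"revenue,42\n")
print(store.get_object("reports", "2024/q1.csv"))  # → b'revenue,42\n'
```

The slash in the key is just part of the name: object stores are flat, and any folder-like structure exists only in the key strings.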
Applications must comply with these standards, so they must either be developed with GAE in mind or modified to comply.

GAE is a platform for running and hosting web apps, both on mobile devices and on the Web. Without this all-in-one capability, developers would be responsible for building their own servers, database software, and APIs, and for making them all work together correctly. GAE takes this burden off developers so that they can concentrate on the app's front end and on features that enhance the user experience.

1.4.3.3 Microsoft Azure:
Microsoft Azure is a platform as a service (PaaS) for developing and managing applications using Microsoft's products and data centers. It is a complete suite of cloud products that allows users to build business-class applications without developing their own infrastructure.

Three cloud-centric products are available on the Azure cloud platform: Windows Azure, SQL Azure, and the Azure AppFabric controller. These provide the infrastructure for hosting applications.

In Azure, a cloud service role is a set of managed, load-balanced, platform-as-a-service virtual machines that work together to accomplish tasks. Cloud service roles are managed by the Azure fabric controller and provide a combination of scalability, control, and customization.

A Web Role is an Azure cloud service role that is configured and adapted to run web applications developed with programming languages and technologies supported by Internet Information Services (IIS), such as ASP.NET, PHP, Windows Communication Foundation, and FastCGI.

A Worker Role is any Azure role that runs applications and services that do not generally require IIS. In Worker Roles, IIS is not enabled by default.
Worker Roles are mainly used to support background processes for web applications and to perform tasks such as automatically compressing uploaded images, running scripts when something changes in the database, getting new messages off a queue and processing them, and more.

VM Role: The VM Role is a type of Azure platform role that supports the automated management of already-installed service packages, fixes, updates, and applications for Windows Azure.

The principal difference is that:
A Web Role automatically deploys and hosts the application via IIS.
A Worker Role does not use IIS and runs the application standalone.

If they are deployed and delivered through the Azure Service Platform, the two can be managed in the same way and can run on the same Azure instances. In some cases, Web Role and Worker Role instances work together and are used concurrently by an application. For example, a Web Role instance might accept requests from users and then pass them to a Worker Role instance for processing against a database.
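The Web Role / Worker Role split described above can be sketched with two cooperating functions and a queue. This is a conceptual sketch only: `queue.Queue` stands in for an Azure storage queue, and the role functions and job names are invented for the example.

```python
# Sketch of the Web Role / Worker Role pattern: the web role accepts
# requests and enqueues them; the worker role processes them in the background.
import queue
import threading

tasks = queue.Queue()
results = []

def worker_role():
    # background process: pull messages off the queue and handle them
    while True:
        job = tasks.get()
        if job is None:          # sentinel: shut down
            break
        results.append(f"processed:{job}")

def web_role(request):
    # front end: accept the user request and hand it off for background work
    tasks.put(request)

t = threading.Thread(target=worker_role)
t.start()
for req in ["resize-img-1", "resize-img-2"]:
    web_role(req)
tasks.put(None)                  # stop the worker
t.join()
print(results)  # → ['processed:resize-img-1', 'processed:resize-img-2']
```

Because the front end only enqueues work, it stays responsive while the slow processing (image compression, database updates, and so on) happens in the worker.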


1.4.3.4 Hadoop:
Apache Hadoop is an open-source software framework for the storage and large-scale processing of data sets on clusters of commodity hardware. Hadoop is a top-level Apache project built and maintained by a global community of contributors and users. It is released under the Apache License 2.0.

MapReduce operates in two phases, Map and Reduce. Map tasks deal with splitting and mapping the data, while Reduce tasks shuffle and reduce the data. Hadoop can run MapReduce programs written in a variety of languages, such as Java, Ruby, Python, and C++. MapReduce programs are parallel in nature and thus very useful for large-scale data analysis across multiple machines in a cluster. The input to each phase is a set of key-value pairs. In addition, every programmer needs to specify two functions: a map function and a reduce function.

1.4.3.5 Force.com and Salesforce.com:
The fundamental concepts of cloud computing must be understood in order to appreciate the difference between salesforce.com and force.com. Salesforce is a company, and salesforce.com is a customer relationship management (CRM) application built on the software-as-a-service (SaaS) model. The force.com platform helps developers and business users create successful business applications.

Salesforce is a SaaS product with out-of-the-box (OOB) features built into a CRM system for sales automation, marketing, service automation, and so on. Other SaaS examples are Dropbox, Google Apps, and GoToMeeting, all of which move software from your computer into the cloud. Force.com is a PaaS (Platform-as-a-Service) product; it provides a framework that allows you to build applications, and it includes a development environment.
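The two MapReduce phases described under Hadoop above can be sketched with the classic word count. This is plain Python with no Hadoop involved: the map function emits (word, 1) pairs, a dictionary simulates the shuffle that groups pairs by key, and the reduce function aggregates each group.

```python
# Word-count sketch of MapReduce: map emits key-value pairs, the shuffle
# groups them by key, and reduce aggregates each group into a final pair.
from collections import defaultdict

def map_phase(line):
    # split the input and map each word to the intermediate pair (word, 1)
    return [(word, 1) for word in line.split()]

def reduce_phase(word, counts):
    # aggregate all values that were grouped under the same key
    return (word, sum(counts))

lines = ["the quick fox", "the lazy dog", "the fox"]

groups = defaultdict(list)        # shuffle: group intermediate pairs by key
for line in lines:
    for word, one in map_phase(line):
        groups[word].append(one)

result = dict(reduce_phase(w, c) for w, c in groups.items())
print(result["the"], result["fox"])  # → 3 2
```

In real Hadoop, the map calls run in parallel on different cluster nodes and the framework performs the shuffle over the network; the program structure, however, is exactly these two user-supplied functions.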
Force.com lets you customize the user interface, functionality, and business logic. Simply put, salesforce.com provides ready functionality, such as saving contacts, text messages, and calls, the way standard applications do on an iPhone, whereas on force.com applications are built and run. Salesforce.com runs on force.com, just as the iPhone dialer runs on the iPhone OS.

1.4.3.6 Manjrasoft Aneka:
MANJRASOFT Pvt. Ltd. is an organization that works on cloud computing technology by developing software compatible with distributed networks across multiple servers. Its aims are to:
•Create the scalable, customizable building blocks essential to cloud computing platforms.
•Build software that accelerates applications designed for networked multi-core computers.
•Provide quality of service (QoS) and service-level agreement (SLA)-based solutions that allow the scheduling, dispatching, pricing, and accounting of application services in business and/or public computing network environments.
•Enable the rapid development of legacy and new applications using innovative parallel and distributed programming models.
•Give organizations the ability to use computing resources to speed up the execution of compute- or data-intensive applications.

SUMMARY
In this chapter, we explored the goals, advantages, and challenges associated with cloud computing, which emerged as a consequence of the development and convergence of many of its supporting models and technologies, particularly distributed computing, Web 2.0, virtualization, service orientation, and utility computing. We examined various definitions, meanings, and implementations of the concept. The component shared by all the different views of cloud computing is the dynamic provisioning of IT services (whether virtual infrastructure, runtime environments, or application services) and the adoption of a utility-based cost model to price these services. This approach is applied across the entire computing stack and enables the dynamic provisioning of IT resources and runtime environments in the form of cloud-hosted platforms for building scalable applications and their services. This concept is expressed in the cloud computing reference model, which defines the three major segments of the cloud computing industry and the services they offer: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS).
These components map directly to the broad categories of the various types of cloud computing services.

UNIT END QUESTIONS
1. What is cloud computing's innovative characteristic?
2. Which technologies support cloud computing?
3. Provide a brief characterization of a distributed system.
4. Define cloud computing and identify its main features.
5. What are the most important distributed technologies that have contributed to cloud computing?
6. What is virtualization?
7. Explain the major revolution introduced by Web 2.0.
8. Give examples of Web 2.0 applications.
9. Describe the main features of service orientation.
10. Briefly summarize the cloud computing reference model.
11. What is the major advantage of cloud computing?
12. Explain the different types of models in cloud computing.
13. Explain the three cloud services in cloud computing.
14. What are web services? Explain the different types of web services.

REFERENCE FOR FURTHER READING
•Mastering Cloud Computing: Foundations and Applications Programming, Rajkumar Buyya, Christian Vecchiola, S. Thamarai Selvi, MK Publications, ISBN: 978-0-12-411454-8
•Cloud Computing: Concepts, Technology & Architecture, Thomas Erl, Zaigham Mahmood, and Ricardo Puttini, The Prentice Hall Service Technology Series, ISBN-13: 978-0133387520
•Distributed and Cloud Computing: From Parallel Processing to the Internet of Things, 1st Edition, Kai Hwang, Jack Dongarra, Geoffrey Fox, ISBN-13: 978-9381269237
•https://www.geeksforgeeks.org/cloud-computing/
•https://en.wikipedia.org/wiki/Cloud_computing
•https://aws.amazon.com/what-is-cloud-computing/
•http://www.manjrasoft.com/aneka_architecture.html
•https://en.wikipedia.org/wiki/Microsoft_Azure
•https://en.wikipedia.org/wiki/Apache_Hadoop

*****


2
PRINCIPLES OF PARALLEL AND DISTRIBUTED COMPUTING

Unit Structure
2.0 Objective
2.1 Eras of computing
2.2 Parallel vs. distributed computing
2.3 Elements of parallel computing
2.3.1 What is parallel processing?
2.3.2 Hardware architectures for parallel processing
2.3.2.1 Single-instruction, single-data (SISD) systems
2.3.2.2 Single-instruction, multiple-data (SIMD) systems
2.3.2.3 Multiple-instruction, single-data (MISD) systems
2.3.2.4 Multiple-instruction, multiple-data (MIMD) systems
2.3.3 Approaches to parallel programming
2.3.4 Levels of parallelism
2.3.5 Laws of caution
2.4 Elements of distributed computing
2.4.1 General concepts and definitions
2.4.2 Components of a distributed system
2.4.3 Architectural styles for distributed computing
2.4.3.1 Component and connectors
2.4.3.2 Software architectural styles
2.4.3.3 System architectural styles
2.4.4 Models for interprocess communication
2.4.4.1 Message-based communication
2.4.4.2 Models for message-based communication
2.5 Technologies for distributed computing
2.5.1 Remote procedure call
2.5.2 Distributed object frameworks
2.5.2.1 Examples of distributed object frameworks
2.5.3 Service-oriented computing
2.5.3.1 What is a service?
2.5.3.2 Service-oriented architecture (SOA)
2.5.3.3 Web services
2.5.3.4 Service orientation and cloud computing
2.6 Summary


2.7 Review questions
2.8 Reference for further reading

2.0 OBJECTIVE
A cloud system, or cloud computing technology, refers to the computing components (hardware, software, and infrastructure) that enable the delivery of cloud computing services. Through the public cloud, consumers can acquire new capabilities without investing in new hardware or software; instead, they pay their cloud provider a subscription fee or pay only for the resources they use. These IT assets are owned and maintained by the service providers and accessed over the Internet. This chapter presents the basic principles and models of parallel and distributed computing, which provide the foundation for building cloud computing systems and frameworks.

2.1 ERAS OF COMPUTING
The two most prominent computing eras are the sequential era and the parallel era. In the past decade, in the quest for high-performance computing, parallel machines have become important competitors of vector machines. Figure 2.1 provides an overview of the development of the computing eras over the last hundred years. During these eras, four key computing elements developed: architectures, compilers, applications, and problem-solving environments. A computing era begins with the development of hardware, is followed by software systems (particularly in the area of compilers and operating systems) and applications, and reaches its saturation level with a growing problem-solving environment. Each element of computing goes through three phases: R&D, commercialization, and commodity.
Figure 2.1 Eras of computing


2.2 PARALLEL VS. DISTRIBUTED COMPUTING
The terms parallel computing and distributed computing are often used interchangeably, even though they mean slightly different things. The term parallel implies a tightly coupled system, whereas distributed refers to a wider class of systems, including those that are tightly coupled.

Parallel computing is the concurrent use of several computing resources to solve a computational problem:
•A problem is divided into discrete pieces that can be solved simultaneously.
•Each piece is broken down further into a series of instructions.
•The instructions from each piece execute simultaneously on different processors.
•An overall control/coordination mechanism is employed.
For example:
Figure 2.2 Sequential and Parallel Processing
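The decomposition shown in Figure 2.2 can be sketched in code: the problem is split into discrete pieces, each piece is handled by its own worker, and a coordination step combines the partial results. The problem (summing 1..100) and the chunk count are illustrative choices.

```python
# Parallel decomposition sketch: divide the data, process the pieces
# concurrently, then combine the partial results under overall coordination.
from concurrent.futures import ThreadPoolExecutor

def split(data, parts):
    """Divide the problem into roughly equal discrete pieces."""
    size = (len(data) + parts - 1) // parts
    return [data[i:i + size] for i in range(0, len(data), size)]

data = list(range(1, 101))          # the overall problem: sum 1..100
chunks = split(data, 4)             # discrete pieces that can run simultaneously

with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(sum, chunks))   # each worker solves one piece

print(sum(partials))  # → 5050 (the coordination step combines partial sums)
```

For a CPU-bound task in CPython, a `ProcessPoolExecutor` would be used instead of threads; the structure of the decomposition stays the same.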


The computational problem should be able to:
•Be divided into discrete pieces of work that can be solved at the same time;
•Execute multiple program instructions at any moment in time;
•Be solved in less time with many compute resources than with a single compute resource.

The compute resources are typically:
•A single computer with multiple processors/cores
•An arbitrary number of such computers connected through a network

Initially, only certain architectures counted as parallel systems: those featuring a single computer with multiple processors sharing the same physical memory. Over time, these restrictions have been relaxed, and parallel systems now include all architectures based on the concept of shared memory, whether this is physically present or created with the support of libraries, specific hardware, and a highly efficient networking infrastructure. For example, a cluster of nodes connected through InfiniBand and configured with a distributed shared memory system can be considered a parallel system.

Distributed computing is computing performed by distributed independent computers that communicate only over a network (Figure 2.3). Distributed computing systems are usually treated differently from parallel computing systems or shared-memory systems, in which multiple computers share a common pool of memory that the processors use to communicate with one another. Distributed-memory systems use multiple computers to solve a common problem, with the computation distributed among the connected computers (nodes), and the nodes communicating by message passing.
Figure 2.3 A distributed computing system.
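The message-passing idea can be sketched in Python. In this illustrative example, `socket.socketpair()` stands in for a network connection between two machines, and the `serve`/`send_request` names are hypothetical; the two endpoints share no memory and cooperate only by exchanging messages.

```python
import socket
import threading

def serve(sock):
    # A stand-in for a remote node: answer one request, then exit.
    # (For tiny messages a single recv suffices; real code would loop.)
    request = sock.recv(1024).decode()
    sock.sendall(("echo:" + request).encode())
    sock.close()

def send_request(message):
    # socketpair() emulates a network link between two nodes.
    client, server = socket.socketpair()
    node_b = threading.Thread(target=serve, args=(server,))
    node_b.start()
    client.sendall(message.encode())   # coordinate only by passing messages
    reply = client.recv(1024).decode()
    node_b.join()
    client.close()
    return reply

print(send_request("task-42"))  # echo:task-42
```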

In a narrow sense, distributed computing is limited to programs whose components are shared among computers within a limited geographic area; broader definitions include both shared tasks and program components. In the broadest sense, distributed computing means that something is shared among many systems, which may also be located in different places.

Examples of distributed systems / applications of distributed computing:
•Intranets, the Internet, the WWW, email
•Telecommunication networks: telephone networks and cellular networks
•Networks of branch-office computers: information systems that handle automatic order processing
•Real-time process control: aircraft control systems
•Electronic banking
•Airline reservation systems
•Sensor networks
•Mobile and pervasive computing systems

2.3 ELEMENTS OF PARALLEL COMPUTING

Computing power has grown exponentially for decades. In 1965, Intel co-founder Gordon Moore observed that the number of transistors on a chip doubled roughly every year while the cost fell by about half; the doubling period has since stretched to about 18 months and continues to lengthen. Meanwhile, silicon is reaching its performance limits just as an increasing number of applications require greater speed and lower latency. To address this constraint, a feasible solution is to connect several processors that coordinate with one another to solve "grand challenge" problems. The initial steps in this direction led to the growth of parallel computing, which encompasses techniques, architectures, and systems for performing multiple activities in parallel. This section gives a proper characterization of the field, which includes the parallel operation of multiple processors coordinating within a single computer.

2.3.1 What is parallel processing?

Parallel processing is a way of handling different parts of an overall task on two or more processors (CPUs) at the same time. Breaking up the various parts of a task among several processors can reduce the time a program needs to run.
Parallel processing can be performed either by a machine with more than one CPU or by the multi-core processors commonly found in computers today. A key concept in parallel computing is divide and conquer, an elegant way of solving a problem: the problem is split into smaller problems of the same type that can be solved individually, and the partial outcomes are combined into a total
48solution. The approach is used to break the problem into smaller andsmaller problems, until any problem is solved easily. Parallelprogramming is called multiprocessor system programming using thedivide and conquer techniqueIntensive computational problems and applications requireadditional processing power than it has ever been. Although theprocessor’s speed is increasing, traditional sequential computers do notdeliver the power to solve these problems. In parallel computers, an areain which many processors simultaneously take on problems, many of theseproblems are potentially addressed.Several factors influencethe development of parallel processing. Thefollowing are prominent among them are:1.In many fields of science and engineering parallel computing wasconsidered the "high end computing" to model problems that weredifficult to solve: In the fields like•Atmosphere, Earth, Environment•Physics-applied, nuclear, particle, condensed matter, highpressure, fusion, photonics•Bioscience, Biotechnology, Genetics•Chemistry, Molecular Sciences•Geology, Seismology•Mechanical Engineering-from prosthetics to spacecraft•Electrical Engineering, Circuit Design, Microelectronics•Computer Science, Mathematics•Defense, Weapons2.Sequential architectures are physically constrained by the speed oflight and the laws of thermodynamics. The saturation point (no verticalgrowth)is reached by a speed at which sequential CPUs can operate.Therefore, an alternate way to achieve high computational speed is toconnect several CPUs (the possibility for horizontal growth).3.Pipeline hardware, superscale etc. improvements are not scalable andrequire sophisticated compiler technology. The task is difficult todevelop this compiler technology4.Another attempt to improve performance was vector processing bydoing more than one task at a time. Capability to add (or subtract ormultiply, or otherwise manipulate) two numerical arrays to devices hasbeen introduced in this case. 
This was useful in certain engineering applications where data naturally appeared in the form of vectors or matrices; vector processing was far less valuable in applications with less well-formed data.
5. Extensive R&D work has gone into development tools and environments, and parallel processing technology is now mature and commercially exploitable.
6. Significant advances in networking technology are paving the way for heterogeneous computing.

2.3.2 Hardware architectures for parallel processing:

Parallel computers emphasize the parallel processing of operations in one way or another. All the basic concepts of parallel processing and computing were defined in the previous unit. Parallel computers can be classified by the data and instruction streams of their computer organization. They can also be classified by computer structure, for example multiple processors each having a separate memory versus one global shared memory. Levels of parallelism can also be defined on the basis of the size of the instructions in a program, called grain size. Parallel computers can thus be classified according to various criteria. The following classifications of parallel computers have been identified:
1) Classification based on the instruction and data streams
2) Classification based on the structure of computers
3) Classification based on how the memory is accessed
4) Classification based on grain size

Flynn's Classical Taxonomy:
•Parallel computers can be classified in different ways.
•Flynn's taxonomy, in use since 1966, is one of the most widely adopted classifications.
•Flynn's taxonomy classifies multiprocessor computer architectures according to the two independent dimensions of instruction stream and data stream. Each of these dimensions can be in only one of two states: Single or Multiple.
•The matrix below defines the four possible Flynn classifications:
Figure 2.4 Flynn's Taxonomy

2.3.2.1 Single-instruction, single-data (SISD) systems:

An SISD computing system is a uniprocessor machine that executes a single instruction operating on a single data stream. The SISD model processes machine instructions sequentially, and computers adopting this model are commonly referred to as sequential computers. Most conventional computers use the SISD architecture. All the instructions and data to be processed have to be stored in primary memory. The speed of the processing element in the SISD model is limited by the rate at which the computer can transfer information internally. The IBM PC and traditional workstations are prevalent representative SISD systems.
Figure 2.5 Single-instruction, single-data (SISD) architecture

2.3.2.2 Single-instruction, multiple-data (SIMD) systems:

An SIMD system is a multiprocessor machine capable of executing the same instruction on all its CPUs while operating on different data streams. Machines based on the SIMD model are well suited to scientific computing, since it involves many vector and matrix operations. The data can be divided into multiple sets (N sets for a system with N PEs) so that the information can be passed to all the processing elements (PEs), and each PE processes its own data set under the same instruction. This is ideally suited to problems with a high degree of regularity, such as graphics/image processing. Most modern computers, particularly those with graphics processing units (GPUs), employ SIMD instructions and execution units. A dominant representative SIMD system is Cray's vector processing machine.
Examples:
Processor arrays: Thinking Machines CM-2, MasPar MP-1 & MP-2, ILLIAC IV
Vector pipelines: IBM 9000, Cray X-MP, Y-MP & C90, Fujitsu VP, NEC SX-2, Hitachi S820, ETA10
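The SIMD idea can be roughly sketched in Python. This is only an analogy: real SIMD happens inside hardware execution units, whereas here threads stand in for processing elements, and the `simd_apply` helper and the doubling "instruction" are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

def simd_apply(instruction, data, n_pes=4):
    # Divide the data into N sets for the N processing elements (PEs).
    size = max(1, len(data) // n_pes)
    data_sets = [data[i:i + size] for i in range(0, len(data), size)]
    # Every PE executes the *same* instruction, each on its own data set.
    with ThreadPoolExecutor(max_workers=n_pes) as pes:
        processed = list(pes.map(lambda s: [instruction(x) for x in s],
                                 data_sets))
    # Gather the per-PE results back into one stream.
    return [y for part in processed for y in part]

# Single instruction (double the value), multiple data streams:
print(simd_apply(lambda x: 2 * x, [1, 2, 3, 4, 5, 6, 7, 8]))
# [2, 4, 6, 8, 10, 12, 14, 16]
```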
Figure 2.6 Single-instruction, multiple-data (SIMD) architecture

2.3.2.3 Multiple-instruction, single-data (MISD) systems:

An MISD system is a multiprocessor machine that executes different instructions on different PEs, all operating on the same data set.
Multiple instructions: every processing unit operates on the data independently, via its own separate stream of instructions.
Single data: a single stream of data is fed into the multiple processing units.
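This idea can be sketched in Python using the Z = sin(x) + cos(x) + tan(x) example, with threads standing in for the processing units; the `misd` helper is an illustrative name, not a real API.

```python
import math
from concurrent.futures import ThreadPoolExecutor

def misd(x, instructions):
    # A single data stream (x) is fed to every processing unit; each unit
    # independently executes a *different* instruction on the same value.
    with ThreadPoolExecutor(max_workers=len(instructions)) as units:
        partials = list(units.map(lambda f: f(x), instructions))
    return sum(partials)

# Z = sin(x) + cos(x) + tan(x), each term computed by a different unit.
z = misd(0.5, [math.sin, math.cos, math.tan])
print(z)
```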
Figure 2.7 Multiple-instruction, single-data (MISD) architecture

Example: Z = sin(x) + cos(x) + tan(x)
The system performs different operations on the same data set. Machines built using the MISD model are not useful for most applications; a few have been designed, but none of them are commercially available.

2.3.2.4 Multiple-instruction, multiple-data (MIMD) systems:

An MIMD system is a multiprocessor machine capable of executing multiple instructions on multiple data sets. Each PE in the MIMD model has its own separate instruction stream and data stream, so machines built using this model can run any kind of application. In comparison to SIMD and
MISD machines, the PEs in MIMD computers can operate synchronously or asynchronously, deterministically or non-deterministically. MIMD is currently the most common type of parallel computer; most modern supercomputers fall into this category.
Examples: most current supercomputers, networked parallel computer clusters and "grids", multi-processor SMP computers, multi-core PCs.
Figure 2.8 Multiple-instruction, multiple-data (MIMD) architecture

MIMD machines are broadly divided into shared-memory MIMD and distributed-memory MIMD, according to the way the PEs are connected to the main memory.

Shared-memory MIMD machines:
In the shared-memory MIMD model (tightly coupled multiprocessor systems), all PEs are connected to a single global memory and all have access to it. Communication between PEs in this model takes place through the shared memory: a modification of the data stored in the global memory by one PE is visible to all the other PEs. Dominant representatives of shared-memory MIMD systems are the Silicon Graphics machines and the Sun/IBM SMP (Symmetric Multi-Processing) systems.

Distributed-memory MIMD machines:
In distributed-memory MIMD machines (loosely coupled multiprocessor systems), every PE has a local memory. Communication between PEs in this model takes place through an interconnection network (the inter-process communication channel, or IPC). The network connecting the PEs can be configured as a tree, a mesh, or in accordance with the requirement.

The shared-memory MIMD architecture is easier to design but less tolerant of failures and harder to extend than the distributed-memory MIMD model. Failures in a shared-memory MIMD system affect the entire system, whereas this is not the case in the distributed model, in which each PE can easily be isolated. Moreover, shared-memory MIMD architectures are less likely to scale, because adding more PEs leads to memory contention; this does not happen with distributed memory, where each PE has its own memory. Because of such practical effects and user requirements, the distributed-memory MIMD architecture outperforms the other models.
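The two models can be contrasted in a small Python sketch, with threads standing in for PEs. In the shared-memory half, the PEs update one global structure; in the distributed-memory half, each PE keeps a private total and communicates only by sending a message (a queue stands in for the interconnection network). All names here are illustrative.

```python
import threading
import queue

# Shared-memory MIMD: PEs communicate through one global memory;
# an update by one PE is visible to all the others.
global_memory = {"total": 0}
lock = threading.Lock()

def shared_pe(values):
    for v in values:
        with lock:                    # serialize conflicting updates
            global_memory["total"] += v

# Distributed-memory MIMD: each PE has local memory and talks to the
# others only via messages over an interconnection network (a queue here).
network = queue.Queue()

def distributed_pe(values):
    local_total = sum(values)         # local memory, private to this PE
    network.put(local_total)          # communicate by message passing

pes = [threading.Thread(target=shared_pe, args=([1, 2, 3],)),
       threading.Thread(target=shared_pe, args=([4, 5, 6],)),
       threading.Thread(target=distributed_pe, args=([1, 2, 3],)),
       threading.Thread(target=distributed_pe, args=([4, 5, 6],))]
for pe in pes:
    pe.start()
for pe in pes:
    pe.join()

distributed_total = network.get() + network.get()
print(global_memory["total"], distributed_total)  # 21 21
```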
Figure 2.9 Shared (left) and distributed (right) memory MIMD architecture

2.3.3 Approaches to parallel programming:

In general, a sequential program always runs the same sequence of instructions on the same input data and always produces the same results. A parallel program, by contrast, is obtained by splitting the work into several parts that run on different processors.

Various methods are available for parallel programming. The most significant of these are:
•Data parallelism
•Process parallelism
•Farmer-and-worker model
All three of these models can be used to achieve task-level parallelism.

In the case of data parallelism, the divide-and-conquer technique is used: a divide-and-conquer algorithm works by repeatedly breaking the data into two or more similar or related sets, and the same instructions are used to process each data set on different PEs. This is a very useful approach on machines based on the SIMD model. With process parallelism, a single activity comprises many distinct operations that can be performed on several processors. In the farmer-and-worker model, a main (master) computation spawns many subproblems that are fired off to slaves for execution; the only communication between the master and the slave computations consists of the master starting the slaves and the slaves returning their results to the master.

2.3.4 Levels of parallelism:

Bit-level parallelism: this form of parallelism is based on increasing the processor word size. Greater bit-level parallelism means that arithmetic operations on large numbers execute more quickly. For example, an 8-bit processor takes two cycles to perform a 16-bit addition, whereas a 16-bit processor needs a single cycle. With the advent of 64-bit processors, this level of parallelism appears to have run its course.

Instruction-level parallelism (ILP): this form of parallelism aims to
leverage the possible overlap between instructions in a computer program. Most types of ILP are implemented and applied in the processor hardware:

Instruction pipelining: different stages of several independent instructions are executed in the same cycle, so that all idle resources are put to use.

Task parallelism: task parallelism involves breaking a task down into subtasks and then allocating each subtask for execution; the processors carry out the subtasks concurrently.

Out-of-order execution: an instruction can be executed, without violating data dependencies, whenever an execution unit is available, even if earlier instructions are still executing.

2.3.5 Laws of caution:

Now that we have introduced some of the basic elements of parallel computing, in terms of both architectures and models, we can consider some of the lessons learned from the design and implementation of such systems. Certain principles can help us understand how much parallelism will actually help an application or a software system. Parallelism is used to perform many activities together so that a system can maximize its performance or speed. It should be kept in mind, however, that the relationships governing this growth are not linear. For instance, for a given number of processors n, the user might expect the speed to increase by up to n times. This is the ideal case, but it seldom happens, because of communication overhead.

Here are two important guidelines to take into account:
•The speed of computation is proportional to the square root of system cost; it never increases linearly. The faster a system becomes, the more expensive each further gain in speed will be (Figure 2.10).
•The speed of a parallel computer increases with the logarithm of the number of processors (i.e., y = k * log(N)). Figure 2.11 illustrates this concept.
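As an illustrative sketch of task parallelism, and of the farmer-and-worker model described in Section 2.3.3, a master can fire off subtasks to a crew of workers and collect their partial results. The squaring subtask and the queue-based wiring below are hypothetical choices made for the example.

```python
import threading
import queue

def worker(tasks, results):
    # A worker (slave) loop: take a subtask, compute, send back the result.
    while True:
        subtask = tasks.get()
        if subtask is None:             # sentinel: no more work
            return
        results.put(subtask * subtask)  # the subtask here: square a number

def farmer(numbers, n_workers=3):
    tasks, results = queue.Queue(), queue.Queue()
    crew = [threading.Thread(target=worker, args=(tasks, results))
            for _ in range(n_workers)]
    for w in crew:
        w.start()
    for n in numbers:                   # the master fires off the subtasks...
        tasks.put(n)
    for _ in crew:
        tasks.put(None)                 # ...then signals each worker to stop
    for w in crew:
        w.join()
    # ...and finally collects the partial results.
    return sorted(results.get() for _ in numbers)

print(farmer([1, 2, 3, 4]))  # [1, 4, 9, 16]
```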
Figure 2.10 Cost versus speed
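The logarithmic guideline can be checked numerically. The constant k below is hypothetical, but the diminishing returns show regardless of its value: every doubling of the processor count adds only a constant increment of speed.

```python
import math

def predicted_speed(n_processors, k=1.0):
    # y = k * log(N): k is a hypothetical machine-dependent constant.
    return k * math.log(n_processors)

# Each doubling of N adds the same constant amount of speed:
for n in (2, 4, 8, 16, 1024):
    print(n, round(predicted_speed(n), 2))
```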
Figure 2.11 Number of processors versus speed

2.4 ELEMENTS OF DISTRIBUTED COMPUTING

In this section we extend these concepts and explore how different activities can be performed by leveraging systems composed of many heterogeneous computers. We discuss what is generally referred to as distributed computing and, more precisely, present the most relevant guidelines and principles, from the software designer's point of view, for implementing distributed computing systems.

2.4.1 General concepts and definitions:

Distributed computing studies the models, architectures, and algorithms used in the design and management of distributed systems. As a general definition of the term distributed system, we use the one proposed by Tanenbaum:

A distributed system is a collection of independent computers that appears to its users as a single coherent system.

This definition certainly captures the ideal form of a distributed system: one that completely hides from its users the "implementation details" of building a powerful system out of many more basic ones. In this section we focus on the architectural models that turn a collection of independent computers into a coherent system for its users. The fundamental element of every distributed computing architecture is the concept of inter-computer communication: a distributed system is an application that executes a collection of protocols to coordinate the actions of multiple processes over a communication network, such that all the components cooperate to perform a single task or a set of related tasks. Over the communication network, the cooperating computers of the distributed system can access both remote and local resources. The existence of multiple individual computers in the distributed network is transparent to users, who do not know that the work is being performed on different machines in remote locations.

Coulouris's definition of a distributed system:
A distributed system is one in which components located at networked computers communicate and coordinate their actions only by passing messages.

As this definition states, the components of a distributed system communicate through some form of message passing, a term that covers several models of communication.

2.4.2 Components of a distributed system:

Nearly all large computing systems are now distributed. As noted above, a distributed system is "a collection of independent computers that appears to its users as a single coherent system": information processing is distributed over several machines rather than confined to a single one. Figure 2.12 provides an overview of the different layers involved in providing the services of a distributed system.
Figure 2.12 A layered view of a distributed system (reference: "Mastering Cloud Computing: Foundations and Applications Programming" by Rajkumar Buyya)

At the lowest level, the physical infrastructure consists of computer and network hardware, directly managed by the operating system, which provides the essential inter-process communication (IPC) services, process scheduling and management, and the management of resources in terms of the file system and local devices. Taken together, these two layers become the platform on top of which specialized software is deployed to turn a collection of networked computers into a distributed system.

The adoption of well-known operating system principles, and of standards at the hardware and network levels, allows heterogeneous components and their organization to be integrated easily into a consistent, unified framework. For instance, network connectivity among
various devices is governed by standards, which allow them to interact seamlessly. At the operating-system level, IPC services are implemented on top of standardized communication protocols such as TCP/IP and the User Datagram Protocol (UDP).

The middleware layer uses these services to build a uniform environment for developing and deploying distributed applications. On top of the services offered by the operating system, the middleware develops its own protocols, data formats, and programming languages or frameworks for building distributed applications. This layer provides support for the programming paradigms of distributed systems. Altogether, these elements constitute a uniform interface for distributed-application developers that is completely independent of the underlying operating system and hides all the heterogeneities of the lower layers.

The top of the distributed-system stack is represented by the applications and services designed and developed to use the middleware. These can serve many purposes, and they often expose their features through graphical user interfaces (GUIs) accessible via a web browser. For example, in the context of a cloud computing system, web technologies are strongly preferred not only as interfaces between distributed applications and users but also as platform services for building distributed systems. An excellent example is an IaaS provider such as Amazon Web Services (AWS), which provides facilities for creating virtual machines, organizing them into a cluster, and deploying applications and systems on top of it. Figure 2.13 shows how the general reference architecture of a distributed system is contextualized in a cloud computing system.
Figure 2.13 A cloud computing distributed system (Reference from "Mastering Cloud Computing: Foundations and Applications Programming" by Rajkumar Buyya)

2.4.2 Architectural styles for distributed computing:

Distributed systems are complex pieces of software whose components are, by definition, spread across many devices. It is important that these systems are organized appropriately so that their complexity can be mastered. There are various ways to look at how a distributed system is organized, but an immediate distinction can be made between its logical organization and the actual physical deployment of its software components.

Distributed systems are structured mainly in terms of the software components that constitute the system. These software architectures tell us how the different components of the program are organized and how they interact. In this section we first concentrate on some common approaches to the organization of computer systems.

To build an actual distributed system, the software components have to be instantiated and placed on real machines, and there are a number of choices to be made here. The final instantiation of a software architecture is sometimes called the system architecture. We examine traditional centralized architectures, in which a single server implements most of the software components (and therefore functionality) while remote clients access that server using simple communication means. In addition, we consider decentralized architectures, in which the machines play more or less equal roles, as well as hybrid organizations.

Architectural Styles:

We begin with the logical organization of distributed systems into software components, also referred to as the software architecture. Research on software architecture has matured considerably, and it is now commonly accepted that designing or adopting an architecture is crucial to the successful development of large systems.

For our discussion, the notion of an architectural style is important. Such a style is formulated in terms of components, the way components are connected to each other, the data exchanged between components, and the way these elements are jointly configured into a system. A component is a modular unit with well-defined interfaces that can be replaced within its environment.
As discussed below, the key point about a distributed system component is that the component can be replaced, provided its interfaces are respected. A somewhat more difficult concept is that of a connector, which is generally described as a mechanism that mediates communication, coordination, or cooperation among components. For example, a connector can be formed by the facilities for (remote) procedure calls, message passing, or streaming data.

The architectural styles are organized into two main classes:
•Software architectural styles
•System architectural styles

The first class relates to the logical organization of the software; the second class includes all those styles that describe the physical organization of software systems in terms of their major components.
2.4.3.1 Components and connectors:

Component-and-connector views describe models consisting of elements that have some presence at runtime, such as processes, objects, clients, servers, and data stores. In addition, component-and-connector models include, as elements, the pathways of interaction, such as communication links and protocols, information flows, and access to shared storage. Such interactions are often carried out through complex infrastructure, such as middleware systems, communication channels, and process schedulers.

A component is a unit of behavior: its description defines what the component can do and what it needs in order to do it. A connector is an indication that components are linked by relationships such as data flow or control flow; a connector is a mechanism rather than a behavioral unit.

2.4.3.2 Software architectural styles:

Styles and patterns in software architecture define how to organize the components of a system so as to build a complete system that satisfies the customer's requirements. A number of software architectural styles and patterns are available in the software industry, so it is necessary to understand which design suits a particular project.

These models form the basis on which distributed systems are logically organized, and they are discussed in the following sections.

Data-centered architectures:

At the center of this architecture is a data store that is accessed by the other components, which update, add, delete, or otherwise modify the data present in the store. Figure 2.14 shows a typical data-centric style, in which a central repository is accessed by the client software. A variation of this approach turns the repository into a blackboard that notifies clients whenever data of interest to them changes. This data-centered architecture promotes integrability: existing components can be changed, and new client components can be added to the architecture, without the permission or concern of the other clients. Clients can also use the blackboard mechanism to exchange data.
Figure 2.14 Typical Data-Centric Style
A repository architecture consists of a central data structure (often a database) and an independent collection of components that operate on the central data structure.

Repository architectures include, for example, blackboard architectures, in which a blackboard serves as the communication center and repository for multiple knowledge sources and applications. Repositories are important in data integration and are implemented in a variety of applications, including software development environments and CAD systems.

The principal components of the blackboard style are shown in Figure 2.15. A problem is tackled by several knowledge sources. Each knowledge source works on the problem and writes its solution, partial solution, or suggestion on the blackboard. At the same time, any other knowledge source either modifies or extends the solution given by a previous knowledge source, or writes its own attempt at solving the problem. A control shell is used to organize and monitor the activities of the knowledge sources so that they do not create a mess that diverges from the current course of the project: the control shell manages, monitors, and controls all the activities carried out during the problem-solving session.

Scalability is one of the benefits of this architectural design: knowledge sources can easily be added to or removed from the program as needed. Knowledge sources are independent and can therefore work concurrently, subject only to the constraints imposed by the control component. A drawback of this architecture is that it is not known in advance when to stop the solution-finding process, because further refinement is always feasible. It is also difficult to synchronize multiple knowledge sources.
Figure 2.15 Blackboard architecture
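As a concrete illustration of the blackboard style, the sketch below models a blackboard, two knowledge sources, and a control shell in Python. All names (Blackboard, Splitter, and so on) and the toy problem are invented purely for illustration, not taken from any particular framework.

```python
# Minimal blackboard sketch: independent knowledge sources read a shared
# blackboard and contribute partial solutions under a simple control shell.

class Blackboard:
    def __init__(self, problem):
        self.data = {"problem": problem, "partial": [], "solved": False}

class Splitter:
    """Knowledge source: breaks the problem string into words."""
    def can_contribute(self, bb):
        return not bb.data["partial"]
    def contribute(self, bb):
        bb.data["partial"] = bb.data["problem"].split()

class Upper:
    """Knowledge source: normalizes the partial solution and finishes."""
    def can_contribute(self, bb):
        return bool(bb.data["partial"]) and not bb.data["solved"]
    def contribute(self, bb):
        bb.data["partial"] = [w.upper() for w in bb.data["partial"]]
        bb.data["solved"] = True

def control_shell(bb, sources):
    # The control shell repeatedly selects a source that can contribute,
    # stopping once the blackboard is marked solved.
    while not bb.data["solved"]:
        for ks in sources:
            if ks.can_contribute(bb):
                ks.contribute(bb)
                break

bb = Blackboard("hello blackboard style")
control_shell(bb, [Splitter(), Upper()])
```

Note how the knowledge sources never reference each other: they coordinate exclusively through the shared blackboard, so a new source can be added without changing the existing ones.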
Data-flow architectures:

A data-flow architecture transforms input data into output data through a collection of computational or manipulative components. In this style there is no central program counter, so the order of execution is driven by the availability of data rather than by an explicit control flow. This contrasts with the von Neumann model of computation, which consists of a single program counter, sequential execution, and a control flow that defines the fetch, execute, and commit order.

This architecture has been applied successfully for the following reasons:
•The data-flow architecture reduces development time and makes it easy to move from design to implementation.
•It primarily aims at achieving reuse and modifiability.
•With a data-flow architecture, the data can flow through a cycle-free graph topology or through a linear structure.

The modules are implemented in two different styles:
1. Batch Sequential
2. Pipe and Filter

Batch Sequential:
•Batch sequential processing, common in the 1970s, treats computation as a sequence of separate program runs.
•In the batch sequential style, separate programs are executed one after another, and the data is transferred from one program to the next as an aggregate.
•This is a typical paradigm for data processing.

Figure 2.16 Batch Sequential

•The diagram above shows the flow of the batch sequential architecture. It offers simple subsystem divisions, and each subsystem can be an independent program that works on input data and produces output data.
•The biggest downside of batch sequential architectures is the lack of concurrency and of an interactive interface; they provide high latency and low throughput.

Pipe and Filter:
•A pipe is a connector that transfers data from one filter to another.
•A pipe is a directional data stream implemented with a data buffer that stores the data until the next filter has time to process it.
•It moves the data from a data source to a data sink.
•Pipes are stateless data streams.
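The pipe-and-filter flow of data can be sketched with Python generators, where each generator plays the role of a filter and the chained iteration acts as the pipes. This is a minimal single-process sketch; real pipe-filter systems would typically run the filters concurrently.

```python
# Pipe-and-filter sketch: each filter reads items from its input pipe
# (an iterator), transforms them, and yields them to the next filter.

def source(lines):
    # Data source: feeds raw items into the first pipe.
    for line in lines:
        yield line

def strip_filter(pipe):
    # Filter: removes surrounding whitespace from each item.
    for item in pipe:
        yield item.strip()

def upper_filter(pipe):
    # Filter: converts each item to upper case.
    for item in pipe:
        yield item.upper()

def sink(pipe):
    # Data sink: collects the fully processed items.
    return list(pipe)

raw = ["  alpha \n", " beta\n"]
result = sink(upper_filter(strip_filter(source(raw))))
```

Because each filter only sees an iterator, it shares no state with the others and does not know the identity of its upstream or downstream neighbors, which matches the properties of the style described below.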
Figure 2.17 Pipe and Filter

•The figure above shows the sequence of the pipe-and-filter style. All filters are processes that run concurrently: they can run as separate threads or coroutines, or be located on different machines altogether.
•Every pipe is connected to a filter and has its own role in the filter's operation. The design is robust, as pipes can be added and removed at runtime.
•A filter reads the data from its input pipes, performs its function on the data, and places the result on all its output pipes. If there is not enough data on the input pipes, the filter simply waits.

Filter:
•A filter is a component.
•Its interfaces allow a number of inputs to flow in and a number of outputs to flow out.
•It processes and refines the input data.
•Filters are independent entities.
•There are two ways to create a filter:
1. Active filter
2. Passive filter
•An active filter drives the data flow on the pipes.
•A passive filter is driven by the data flow on the pipes.
•A filter does not share state with other filters.
•A filter does not know the identity of its upstream and downstream filters.
•Filters run in separate threads, which may be hardware or software threads or coroutines.

Advantages of Pipes and Filters:
•Pipe-filter provides high throughput and efficient processing of large volumes of data.
•It makes maintenance of the system simpler and provides reusability.
•It has low coupling and offers flexibility through sequential and parallel execution.

Disadvantages of Pipe and Filter:
•Dynamic interactions cannot be accomplished with pipe and filter.
•Data transmission requires a lowest common denominator, such as ASCII format.
•A pipe-filter architecture can be difficult to configure dynamically.

Virtual machine architectures:

A virtual machine architecture is the specification of an abstract system interface, including the logical behavior of the resources managed through that interface; an implementation is the actual realization of such an architecture. The levels of abstraction correspond to design layers, whether in hardware or software, each associated with its own interface or architecture. In systems using this style, the general interaction is as follows: the program (or application) defines its operations and state in an abstract format, which is interpreted by the virtual machine engine.

The interpretation of the program constitutes its execution, and the engine maintains an internal representation of the program's state. Rule-based systems, interpreters, and command-language processors are very common examples of this category. Rule-based systems (also known as production systems or expert systems) are the simplest form of artificial intelligence. A rule-based program represents knowledge as rules coded into the system. The concepts behind rule-based systems depend almost entirely on expert systems, which mimic the reasoning of a human expert in solving a knowledge-intensive problem. Instead of representing knowledge in a static, declarative way as a set of true facts, a rule-based system represents knowledge as a set of rules that say what to do or what to conclude in different situations. The networking domain provides another interesting application of rule-based systems: network intrusion detection systems (NIDS) are also based on a set of rules that identify suspicious behaviors connected to possible intrusions into computer systems.

Interpreter Style:

The interpreter is an architectural style suited to applications for which the most appropriate language or machine for executing the solution is not directly available. The style comprises a few parts: the program we are trying to run, the interpreter that interprets it, and the memory area that holds the program, the program's current state, and the interpreter's current state. Procedure calls between the elements, together with direct memory access, form the connectors of the interpreter architectural style.
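A minimal sketch of the interpreter style in Python: a tiny engine executes a program expressed as abstract instructions against an explicit program state (here, a stack). The instruction set is invented purely for illustration.

```python
# Interpreter-style sketch: the engine walks over an abstract program and
# updates an explicit representation of the program state (a stack).

def interpret(program):
    stack = []  # internal representation of the program's current state
    for op, *args in program:
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            # The engine can refuse instructions it does not recognize,
            # just as a sandboxed interpreter refuses unsafe code.
            raise ValueError(f"unknown instruction: {op}")
    return stack.pop()

# Evaluates (2 + 3) * 4 without ever compiling to native machine code.
result = interpret([("PUSH", 2), ("PUSH", 3), ("ADD",),
                    ("PUSH", 4), ("MUL",)])
```

Because the engine mediates every instruction, it can enforce security constraints or support dynamic change without recompilation, which is exactly the property exploited by virtual machines and sandboxed interpreters.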
The interpreter has four parts:
•Interpreter engine: carries out the work of the interpreter
•Data store area: contains the pseudo-code to be interpreted
•Data store area: records the current state of the interpreter engine
•External data structure: tracks the progress of the source code being interpreted

Input: the input to the interpreted program is forwarded to the program state, from which the interpreter reads it.
Output: the output of the program is placed in the program state, from which it is read by the part of the system that interfaces with the interpreted data.

This model is quite useful in designing virtual machines for high-level programming languages (Java, C#) and scripting languages (Awk, Perl, and so on). Its advantages include:
•Portability and flexibility of applications across different platforms.
•Virtualization: machine code for one hardware architecture can be executed on another via the virtual machine.
•System behavior defined by a custom language or data structure, which facilitates the development and comprehension of software.
•Support for dynamic change (efficiency): usually the interpreter only has to translate the code to an intermediate representation (or not translate it at all), so testing a change takes considerably less time.

An interpreter or virtual machine does not have to execute all the instructions of the source code it processes. In particular, it can refuse to execute code that violates any security constraints under which it operates. For example, JS-Interpreter is a JavaScript interpreter, itself written in JavaScript, that runs sandboxed arbitrary JavaScript code line by line. Its execution is completely isolated from the main JavaScript environment, and multiple JS-Interpreter instances make it possible to run multi-threaded, concurrent JavaScript without using web workers.

Call-and-return architectures:

The call-and-return style has been the most frequently used pattern in computer systems. The mechanism of the call, or function call, includes main programs and subroutines, remote procedure calls, object-oriented systems, and layered systems; they all come under the call-and-return architectural style.

Top-Down Style:

The top-down approach is essentially the breakdown of a system in order to gain insight into its compositional sub-structures in a reverse-engineering manner (it is also known as stepwise design and stepwise refinement, and is in some cases used as a synonym for decomposition). In a top-down approach, an overview of the system is formulated first and all first-level subsystems are identified, but without detail. Each subsystem is then refined further, often through several additional subsystem levels, until the full specification has been reduced to small elements. The "black boxes" defined along the way make a top-down design easier to manipulate; however, black boxes may fail to clarify elementary mechanisms or may not be detailed enough to validate the model effectively. The top-down approach starts with the big picture and breaks it down into smaller pieces.

A top-down approach thus involves dividing the problem into tasks and breaking the tasks down into smaller subtasks. In this approach, we first develop the main module and then the modules of the next level. This process is followed until all modules have been developed.

Object-Oriented Style:

Object-oriented programming is a programming paradigm organized around objects and data rather than actions and logic. A traditional procedural program is organized to take input data, process it, and produce a result; such a program is centered on the logic rather than on the data. Object-oriented programming instead focuses on the objects and their manipulation, rather than on the logic that manipulates them.

The first phase of OOP is data modeling, which includes identifying all the objects involved, how they are manipulated, and the relationships among them. Data modeling is a planning phase that requires considerable attention. Once every object involved in the program has been identified, a mechanism is needed to produce those objects: the class. A class includes data or properties and the logical sequence of methods that manipulate the data. Each method should be separate, and logic that has already been implemented in one method should not be repeated in others.

Architectural styles based on independent components:

This class of architectures consists of a number of independent processes or objects that communicate through messages.
The messages can be transmitted through publish/subscribe paradigms to named or unnamed participants.

The components typically do not control each other; they only exchange data. Because the components are isolated, they can be changed independently.

Examples: event systems and communicating processes are subcategories of this style.

Event systems:

This paradigm separates the implementation of a component from the knowledge of the names and locations of other components. It follows the publisher/subscriber pattern, where:

Publisher(s): advertise the data they want to share with others.
Subscriber(s): register their interest in receiving the published data.

A message manager is used for communication between components: publishers send messages to the manager, which redistributes them to the subscribers.

Communication process:

The communicating-processes architectural style is also known as the client-server architecture.
Client: initiates a call to the server, requesting some service.
Server: provides the requested data to the client. When the server works synchronously, it returns control, together with the data, once the request has been served.

2.4.3.3 System architectural styles:

Client-server and peer-to-peer (P2P) are the two key system-level architectures in use today. We rely on these two types of services in our everyday lives, but the difference between them is often misunderstood.

Client-Server Architecture:

The client-server architecture has two major components: the server and the client. The server is where all the processing, storage, and transmission of data happens, while the client accesses the services and resources of the remote server. The server accepts requests from clients and replies to them. In general, the remote side is managed by a single machine, but to be on the safe side, load-balancing techniques are used across several servers.
Figure 2.18 Client/server architectural styles
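A minimal client/server exchange in this style can be sketched with Python's standard socket library. Here the "service" simply upper-cases the request, and the server runs in a background thread only to keep the example self-contained in one process; in a real deployment, client and server would run on different machines.

```python
# Client/server sketch: the server listens on a socket, accepts a client
# request, performs its service, and sends back a reply.

import socket
import threading

def serve_once(server_sock):
    conn, _addr = server_sock.accept()
    with conn:
        request = conn.recv(1024)
        conn.sendall(request.upper())  # the "service" provided to the client

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

# The client initiates the call, sends a request, and waits for the reply.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"hello server")
reply = client.recv(1024)
client.close()
server.close()
```

Note the asymmetry that defines the style: the server passively waits for requests, while the client actively initiates the interaction.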
The client-server architecture is a standard design with a centralized security database. This database stores security information, such as credentials and access details; without the right security keys, users cannot log in to the server. This architecture is therefore somewhat more stable and secure than peer-to-peer. The stability comes from the fact that the centralized security database allows more controlled use of resources. On the other hand, the system can become a bottleneck, because a server can only do a limited amount of work at any given time.

Advantages:
•Easier to build and maintain
•Better security
•Stable

Disadvantages:
•Single point of failure
•Less scalable

Peer to Peer (P2P):

In a peer-to-peer distributed system there is no central control. The fundamental idea is that, at any given time, each node can act as a client or as a server. If a node requests something, it behaves as a client, and if a node serves something, it behaves as a server. In general, each node is referred to as a peer.
Figure 2.19 Peer to Peer (P2P)

Any new node must first join the network. Upon joining, it may either request a service or provide a service. The initiation phase of a node (joining the network) can vary depending on the implementation of the network. There are two ways a new node can learn what the other nodes provide.

Centralized Lookup Server:
The new node registers with the centralized lookup server and lists the services it offers on the network. Whenever a node needs a service, it simply contacts the centralized lookup server, which directs it to the appropriate service provider.

Decentralized System:
A node seeking a particular service broadcasts its request to every other node in the network, so that whichever node provides the service can respond.

A Comparison between Client-Server and Peer-to-Peer Architectures:

BASIS FOR COMPARISON | CLIENT-SERVER | PEER-TO-PEER
Basic | There is a specific server and specific clients connected to the server. | Clients and servers are not distinguished; each node acts as both client and server.
Service | The client requests a service and the server responds with the service. | Each node can request services and can also provide services.
Focus | Sharing the information. | Connectivity.
Data | The data is stored on a centralized server. | Each peer has its own data.
Server | When several clients request services simultaneously, the server can become a bottleneck. | As the services are provided by several servers distributed across the peer-to-peer system, no single server becomes a bottleneck.
Expense | Client-server systems are expensive to implement. | Peer-to-peer systems are less expensive to implement.
Stability | Client-server is more stable and scalable. | Peer-to-peer suffers as the number of peers in the system increases.

2.4.4 Models for interprocess communication:

A distributed system is a set of computers that behaves as a single coherent system to its users. An important consequence is that the differences between the individual computers, and the way they communicate, are largely hidden from users, so the system presents a single image to the user. The OS hides all the communication details among processes from the user, who is not aware that multiple systems are involved. Inter-process communication, called IPC, is realized through various mechanisms in distributed systems, and these mechanisms may vary from system to system. Another significant aspect is that users and applications can interact with a distributed system in a consistent and uniform way.

Communication between processes is the essence of every distributed system, and it is important to understand how processes on different machines can exchange information. Inter-process communication, or IPC as its name implies, is what allows data to be exchanged between two applications or processes, which may reside on the same machine or in different locations. Communication in distributed systems usually builds on the low-level message passing offered by the underlying network. Communication through message passing is harder than communication based on the shared-memory primitives available on non-distributed platforms.

IPC is a mechanism for the communication and synchronization of processes, and the communication between processes can be regarded as a method of cooperation among them. Three methods allow processes to communicate with each other: shared memory, remote procedure call (RPC), and message passing.

In distributed systems, IPC over sockets is very popular. In short, a socket is identified by a pair: an IP address and a port number. For two processes to communicate, each one requires a socket.

If a server daemon runs on a host, it listens on its port and handles all the requests that clients send to that port (the server socket). To send a message, a client has to know the IP address and port of the server (the server socket).
The client's port is assigned by the OS kernel when the client starts communicating with the server, and it is freed once the communication is over. Although communication through sockets is popular and efficient, it is considered low-level, because sockets only allow an unstructured stream of bytes to be exchanged between processes; it is up to the client and server applications to impose structure on the data transmitted as a byte stream.

2.4.4.1 Message-based communication:

The abstraction of a message is fundamental in the design of models and technologies that enable distributed computing. A distributed system is a system in which components located on networked computers communicate and coordinate their actions only by passing messages. The term "message" identifies any discrete amount of information that is passed from one entity to another. It includes any form of data representation, limited in size and time, whether it is the invocation of a remote procedure, the serialization of an object instance, or a generic message. Therefore, the term "message-based communication model" can be used to refer to models of inter-process communication that do not rely on the abstraction of data streaming.
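Before surveying the programming models that build on this abstraction, a minimal message-passing exchange can be sketched in Python: two workers coordinate only by exchanging messages over queues, never by sharing state directly. The queues stand in for the network channel of a real distributed system.

```python
# Message-passing sketch: a worker receives messages from an inbox queue
# and sends replies to an outbox queue; no state is shared with the sender.

import queue
import threading

def doubler(inbox, outbox):
    # Receives numbers as messages and replies with their doubles;
    # a None message asks the worker to stop.
    while True:
        msg = inbox.get()
        if msg is None:
            break
        outbox.put(msg * 2)

inbox, outbox = queue.Queue(), queue.Queue()
worker = threading.Thread(target=doubler, args=(inbox, outbox))
worker.start()

for n in (1, 2, 3):
    inbox.put(n)                               # send a message
replies = [outbox.get() for _ in range(3)]     # receive the replies
inbox.put(None)                                # ask the worker to stop
worker.join()
```

The sender and receiver interact only through `put` and `get` on the queues, which is precisely the discipline that message-based models enforce across machine boundaries.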
Despite the abstraction that is presented to developers for programming the coordination of components, several distributed programming models use this form of communication. Below are some of the major distributed programming models that employ message-based communication.

Message Passing:
This model makes the concept of a message the main abstraction: entities exchange explicitly encoded data and information in the form of messages. The structure and content of the messages vary according to the model. The Message Passing Interface (MPI) and OpenMP are significant examples of this type of model.

Remote Procedure Call:
This model extends the well-known procedure-call semantics beyond the boundaries of a single process, enabling execution in remote processes. It implies a client-server model: a remote process hosts a server component, which allows client processes to invoke its procedures and returns the results of their execution. The messages created by a Remote Procedure Call (RPC) implementation carry the information about the procedure to execute, together with the arguments required for it, as well as the return values. The packing of the arguments and return values into messages is referred to as marshalling.

Distributed Objects:
This is the object-oriented counterpart of the Remote Procedure Call (RPC) model, in which remote method invocation is applied to objects. Each process exposes a set of remotely accessible interfaces, and a client process can request and invoke the methods available through these interfaces. The underlying runtime infrastructure transforms the local method invocation into a remote request call and collects the execution results.
The interaction between the caller and the remote process takes place through messages. While this model is by design stateless, distributed object models introduce the complexity of managing the state and lifetime of objects. Common Object Request Broker Architecture (CORBA), Component Object Model (COM, DCOM, and COM+), Java Remote Method Invocation (RMI), and .NET Remoting are some of the most important examples of distributed object infrastructures.

Active objects:
Programming models based on active objects imply, by definition, the presence of instances that have their own activity, whether they are objects or agents. This means that the objects have a dedicated control thread that allows them to express their behavior. Such models often use messages to trigger the execution of operations, and each message is associated with more complex semantics.

Web Services:
Web service technology offers an implementation of the RPC concept over HTTP, allowing components built with different technologies to interact. A web service is exposed as a remote object hosted on a web server, and method invocations are transformed into HTTP requests packaged according to a specific protocol. It should be noted that the concept of the message is a fundamental abstraction of inter-process communication and is used either implicitly or explicitly.

2.4.4.2 Models for message-based communication:

Point-to-point message model:
Point-to-point (PTP) software or applications are designed around the idea of message queues, senders, and receivers. Each message is sent to a specific queue, and the receiving clients extract messages from the queue(s) established to hold their messages. The queues retain all the messages sent to them until the messages are consumed or until they expire.

Publish-and-subscribe message model:
Publish-subscribe is a messaging service that describes a particular form of communication between components or software modules; the name is chosen to represent the most important features of this communication model. In straightforward interactions, software modules communicate directly with each other using mechanisms and media that are known to all parties.
As communication needs become more complex or demanding, other communication systems have developed; publish-subscribe is one such system.

The core ideas of publish-subscribe:
•Software components do not necessarily know with whom they interact.
•Data producers publish data to the system as a whole.
•Data consumers subscribe to the system and receive data from it as a whole.
•Information is named so that software modules can specify which data they are interested in. This label is often called the topic.
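The core ideas above can be sketched as a minimal in-process broker in Python. This is an illustrative sketch only (the class and method names are our own; real brokers such as JMS or MQTT implementations are networked and far more elaborate):

```python
# Minimal topic-based publish-subscribe broker (illustrative sketch).
from collections import defaultdict

class Broker:
    def __init__(self):
        # topic -> list of subscriber callbacks
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        """Register a consumer callback for a named topic."""
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        """Deliver a message to every subscriber of the topic.
        The publisher does not know who (if anyone) receives it."""
        for callback in self._subscribers[topic]:
            callback(message)

broker = Broker()
received = []
broker.subscribe("weather", received.append)  # consumer registers interest
broker.publish("weather", "sunny")            # producer publishes to the topic
broker.publish("sports", "ignored")           # no subscriber: silently dropped
print(received)  # ['sunny']
```

Note how the publisher and the subscriber never reference each other directly; they are coupled only through the topic name held by the broker.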
Figure 2.20 Publish-and-subscribe message model

A central software module ensures that all publishing and subscribing is administered and that publications are matched to subscriptions. It is commonly referred to as the "broker". Often the broker is a network of cooperating software modules, and the software modules that use the broker's services are called clients.

Clients that publish and subscribe "register" with the broker, which manages communication paths, authenticates clients, and performs other housekeeping activities.

Message delivery to subscribers can be filtered by content rather than topic; this can be used instead of, or together with, the topic. Only a few publish-subscribe systems have implemented this.

Data can be "persistent", so that subscribers who register with the system after the data was last published still receive the last published data on the specific topic.

Request-reply message model:
The request-reply messaging model is different from traditional pub/sub or PTP models, which publish a message to a topic or queue and enable clients to receive the message without providing a reply.

Request-reply messages may be used when a client sends a request message to a remote application for information or for
a processing action to be carried out. When the receiving application gets the request message, it obtains the required information or carries out the requested action. The information is then placed in a reply message, or a confirmation of completion of the task is sent, in response to the request.

2.5 TECHNOLOGIES FOR DISTRIBUTED COMPUTING

In this section, we present technologies that provide practical implementations of the interaction models discussed above, which rely mainly on message-based communication. These include remote procedure call (RPC), distributed object frameworks, and service-oriented computing.

2.5.1 Remote procedure call:
Remote Procedure Call (RPC) is a protocol that a program can use to request a service from a program on another computer in the network without needing to understand the network's details. RPC is used to call procedures on remote systems as if they were local. A procedure call is sometimes also called a function call or a subroutine call.

RPC uses the client-server model: the requesting program is the client, and the service provider is the server. Like a regular or local procedure call, an RPC is a synchronous operation that suspends the requesting program until the results of the remote procedure are returned. Nevertheless, multiple RPCs can be performed concurrently by using lightweight processes or threads that share the same address space.

RPC software uses an interface definition language (IDL), the specification language used to describe the application programming interface (API) of a software component.
In that case, the IDL provides a bridge between the two ends of the connection, which may run different programming languages and operating systems (OSes).

RPC message procedure:
When program statements that use the RPC framework are compiled into an executable program, the compiled code includes a stub representing the remote procedure code. When the program runs and the call is issued, the stub receives the request and forwards it to a client runtime program on the local computer. When the client stub is first invoked, it contacts a name server to determine where the server is located.

The client runtime program knows how to address the remote computer and server application, and it sends the message across the network that requests the remote procedure. The server likewise has a runtime
program and a stub that interface with the remote procedure itself. Response-request protocols return results in the same way.

When making a Remote Procedure Call:
Figure 2.21 Remote Procedure Call

1. The calling environment is suspended, the procedure parameters are transferred across the network, and the procedure is executed in the remote environment.
2. When the procedure completes and produces results, the results are returned to the calling environment, which resumes execution as if returning from a regular procedure call.

Note: RPC is particularly suitable for client-server interaction (e.g., query-response) in which the flow of control alternates between the caller and the callee. The client and server do not both execute simultaneously in this model; instead, control jumps back and forth from the caller to the callee.
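The suspend-execute-resume cycle described above can be demonstrated with Python's standard xmlrpc modules, which implement RPC over HTTP. Python is our own choice of language for illustration; the chapter does not prescribe one:

```python
# Minimal RPC demonstration using Python's standard library (xmlrpc).
# The server exposes a procedure; the client calls it as if it were local.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def add(a, b):
    """The remote procedure, executed in the server's environment."""
    return a + b

# Bind to an ephemeral port (0) so the example does not clash with other services.
server = SimpleXMLRPCServer(("localhost", 0), logRequests=False)
server.register_function(add, "add")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client-side proxy marshals the call into an HTTP request, blocks
# until the reply arrives, then returns just like a local procedure call.
client = ServerProxy(f"http://localhost:{port}")
result = client.add(2, 3)
print(result)  # 5
server.shutdown()
```

The `ServerProxy` object plays the role of the client stub: the caller is suspended for the duration of `client.add(2, 3)`, exactly as the note above describes.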
Working of RPC:
Figure 2.22 Working of Remote Procedure Call

An RPC involves the following steps:
1. The client invokes a client stub procedure, passing parameters in the usual way. The client stub resides in the client's own address space.
2. The client stub marshals (packs) the parameters into a message. Marshalling involves converting the parameter representation to a standard format and copying each parameter into the message.
3. The client stub passes the message to the transport layer, which sends it to the remote server.
4. On the server, the transport layer passes the message to a server stub, which demarshals (unpacks) the parameters and calls the desired server routine using a regular procedure call.
5. When the server procedure finishes, it returns to the server stub (for example, through a normal procedure-call return), which packs the results into a message. The server stub then hands the message to the transport layer.
6. The transport layer sends the result message back to the client transport layer, which returns the message to the client stub.
7. The client stub demarshals the return parameters and execution returns to the caller.

2.5.2 Distributed object frameworks:
The most common way to develop distributed systems is the client-server model, and a distributed object framework is an extension of it. It is a framework in which distributed applications are built using object-oriented programming. In distributed computing, distributed objects are objects that reside in different address spaces, in different processes on the same computer, or even on multiple network-connected computers, yet cooperate by sharing data and invoking methods. This also involves location transparency, whereby remote objects appear the same as local objects. The main method of distributed communication for objects is remote method invocation, usually via message-passing: a message is sent to an object in a remote machine or process to perform some task, and the results are returned to the calling object.

The remote procedure call (RPC) method applies the common programming abstraction of the procedure call to distributed environments, allowing a calling process to invoke a procedure on a remote node as if it were local.

Remote method invocation (RMI) resembles RPC but applies to distributed objects. It has additional advantages in terms of using object-oriented programming concepts in distributed systems: it extends the concept of the object reference to the distributed environment and enables the use of object references as parameters in remote invocations.

Remote procedure call (RPC): the client calls procedures in a different server program.
Remote method invocation (RMI): an object can invoke the methods of an object in a different process.
Event notification: objects receive notification of events in other objects for which they have registered.

2.5.2.1 Examples of distributed object frameworks:
Distributed programming environment (DPE): software can be developed and managed by programmers distributed around the world, practically supporting distributed object computing over distributed systems such as the Internet. Research in this area aims to develop a programming environment that effectively supports distributed programming. A system that uses distributed objects provides a flexible and scalable distributed and parallel programming environment. Many distributed object computing systems, such as CORBA, DCOM, and Java, are supported.
Common object request broker architecture (CORBA):
Common Object Request Broker Architecture (CORBA) is the best-known middleware, supported by a consortium of more than 800 companies. With the exception of Microsoft, which has its own Distributed Component Object Model (DCOM) object broker, this consortium includes the majority of computing companies. CORBA's object bus defines the interoperability of object components; the object bus is the Object Request Broker (ORB). CORBA is designed to let object components discover and interoperate with one another over the object bus, and it supports transparent object references between distributed objects through object interfaces.

CORBA is essentially a design specification for an Object Request Broker (ORB), which provides the mechanism that allows distributed objects, whether local or on remote devices, written in different languages or located at various points in a network, to communicate with each other.

The CORBA Interface Definition Language (IDL) enables language-independent and location-independent development of the interfaces of distributed objects. Application components can communicate with each other via CORBA regardless of where they are or who designed them; CORBA ensures location transparency when executing these requests.

CORBA is usually described as a "software bus" because objects are located and accessed via a software communication interface. The following illustration identifies the main components of a CORBA implementation.
Figure 2.23 Common object request broker architecture (CORBA)
A well-defined object-oriented interface ensures data transmission from client to server. The Object Request Broker (ORB) determines the target object's location, sends the request to it, and returns any response to the caller. With this object-oriented technology, developers can use features such as inheritance, encapsulation, polymorphism, and dynamic binding at runtime. These features allow applications to be changed, modified, and reused with minimal changes to the parent interface. The following illustration shows how a client transmits a request through the ORB to a server:
Figure 2.24 Working of Common object request broker architecture (CORBA)

Interface Definition Language (IDL):
The Interface Definition Language is a key pillar of the CORBA standards. IDL is the OMG's language for defining language-neutral APIs and provides a platform-independent description of the interfaces of distributed objects. Standardizing the data and operations of the client/server interface provides a consistent approach between CORBA environments and clients in heterogeneous environments. This mechanism is the IDL, and CORBA uses it to describe object interfaces.

IDL defines interfaces for applications in terms of modules, interfaces, and operations, without assuming a particular programming language. The various programming languages, such as Ada, C++, C#, and Java, provide standardized IDL mappings for implementing the interfaces.

The IDL compiler creates stub and skeleton code for marshalling
and unmarshalling the parameters between the network stream and in-memory instances of the implementation language. The stub is a client-side proxy for an object reference to a servant, and the skeleton is the servant's server-side counterpart. Language-specific IDL stubs can communicate with a skeleton implemented in another language. The stub code is linked with the client code, and the skeleton code is linked with the object implementation; they communicate with the ORB runtime system to carry out remote operations.

IIOP (Internet Inter-ORB Protocol) is a protocol that allows distributed programs written in various programming languages to communicate over the Internet. It is a key element of the Common Object Request Broker Architecture (CORBA), a strategic industry standard. Using CORBA IIOP and related protocols, a company can develop programs that communicate with existing or future programs of its own or another company, wherever they are, without having to understand anything about the program other than its service or its name.

Distributed component object model (DCOM/COM):
The Distributed Component Object Model (DCOM) is a proprietary Microsoft technology for communication between software components spread across networked computers. DCOM is an extension of the Component Object Model (COM) that enables network-wide interprocess communication. By managing low-level network protocol details, DCOM supports communication among objects within the network. This enables multiple processes to work together to achieve a single task using distributed programs.

Java remote method invocation (RMI):
RMI stands for Remote Method Invocation. It is a mechanism that permits an object in one program (JVM) to access or invoke an object running on another JVM.
RMI enables remote communication between Java programs and is used to create distributed applications.

In an RMI application we write two programs: the server program (residing on the server) and the client program (residing on the client).

The server program creates a remote object and makes a reference to that object available to the client (using the registry).

The client program requests remote objects and tries to invoke their methods on the server.
The following diagram shows the architecture of an RMI application.
Figure 2.25 Java remote method invocation (RMI)

Transport Layer: this layer connects the client and the server. It maintains existing connections and also creates new ones.
Stub: the stub is the client-side proxy of the remote object. It resides in the client system and serves as the client's gateway.
Skeleton: the server-side object. The stub communicates with the skeleton to pass requests on to the remote object.
RRL (Remote Reference Layer): the layer that manages the client's references to remote objects.

The following points summarize how an RMI program works:
Whenever the client makes a request to the remote object, the stub passes the request to the RRL.
When the client-side RRL receives the request, it invokes a method called invoke() on the remoteRef object. The request is then passed to the RRL on the server side.
The server-side RRL passes the request to the server skeleton, which finally calls the object on the server.
The results are passed back to the client.

When a client invokes a method of a remote object that takes parameters, the parameters are bundled into a message before being transmitted over the network. The parameters may be of primitive types or objects. Primitive-type parameters are put together and a header is attached; if a parameter is an object, it is serialized. This process is referred to as marshalling.
The packed parameters are unbundled on the server side and the appropriate method is invoked. This process is referred to as unmarshalling.

The RMI registry is a namespace in which all server objects are placed. Each time the server creates an object, it registers the object with the RMI registry (using the bind() or rebind() methods). Objects are registered under a unique name known as the bind name.

To invoke a remote object, the client needs a reference to it. The client retrieves the object from the registry by its bind name (using the lookup() method).
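The bind/lookup pattern described above can be sketched in Python as an in-process analogy of the RMI registry. This is a hedged sketch: `bind`, `rebind`, and `lookup` mirror the Java method names, but there is no networking or serialization here, only the naming idea:

```python
# In-process analogy of an RMI-style registry: server objects are bound
# under a name, and clients obtain references by looking the name up.
class Registry:
    def __init__(self):
        self._objects = {}

    def bind(self, name, obj):
        """Register obj under name; fail if the name is already taken."""
        if name in self._objects:
            raise KeyError(f"{name} already bound")
        self._objects[name] = obj

    def rebind(self, name, obj):
        """Register obj under name, replacing any existing binding."""
        self._objects[name] = obj

    def lookup(self, name):
        """Return the object bound under name."""
        return self._objects[name]

class HelloService:
    def say_hello(self):
        return "Hello from the server"

registry = Registry()
registry.bind("HelloService", HelloService())  # server side: publish the object
stub = registry.lookup("HelloService")         # client side: obtain a reference
print(stub.say_hello())  # Hello from the server
```

In real RMI the object returned by `lookup` would be a stub that forwards calls over the network; here the lookup simply returns the object itself.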
Figure 2.26 How an RMI program works

.NET remoting:
The .NET remoting system provides interprocess communication between application domains through the Remoting Framework. The programs may be installed on the same computer or on different computers in the same network. .NET Remoting supports distributed object communication over TCP and HTTP channels, using binary or SOAP formatters for the data stream.

The three main components of the Remoting Framework are:
1. Remote object
2. Remote listener application (listens for remote object requests)
3. Remote client application (makes remote object requests)
Figure 2.27 .NET remoting Framework

The remote object is implemented in a class that derives from MarshalByRefObject.

The figure above shows the basic workflow of .NET Remoting. When a client calls a remote method, it does not call the method directly: it receives a proxy for the remote object and uses the proxy to call the remote object's method. When the proxy receives the method call, it encodes the message with the appropriate formatter (binary formatter or SOAP formatter) according to the configuration file, and the call is sent to the server over the selected channel (TcpChannel or HttpChannel). The server-side channel accepts the request from the proxy and forwards it to the remoting system on the server, where the remote object's methods are located and invoked on the remote object. Once the remote procedure has executed, any results of the call are returned to the client in the same way. Before an object instance of a remotable type can be accessed, it must be created and initialized in a process known as activation. Activation is classified into two types: client-activated objects and server-activated objects.

2.5.3.2 Service-oriented architecture (SOA):
Service-Oriented Architecture (SOA) is a software style in which application components provide services to other components via a communication protocol over a network. Its principles are independent of vendors and other technologies. In a service-oriented architecture, services communicate with one another either by passing data or by two or more services coordinating an activity. "Service-oriented architecture" is simply a term for this style of service architecture.

Service-oriented architecture characteristics:
Business value
Strategic goals
Intrinsic interoperability
Shared services
Flexibility
Evolutionary refinement

Each of these core principles can be traced on a continuum, from the older
distributed application paradigm, to service-oriented architecture, to cloud-related architecture (which is also considered an offshoot of service-oriented architecture).

Service-Oriented Architecture Patterns:
Figure 2.28 Service-Oriented Architecture

Each building block of a service-oriented architecture plays one of three roles: the service provider; the service broker (service registry or service repository); and the service requester/consumer.

The service provider, in cooperation with the service registry, is responsible for deciding whether and how services are offered, covering aspects such as security, availability, cost, and more. This role also determines the category of the service and any trading agreements required.

The service broker makes information about the service available to the requester. The scope of the broker is determined by whoever implements it.

The service requester locates entries in the broker registry and then binds to the service provider. It may or may not be able to access multiple services; this depends on the capability of the service requester.

Implementing Service-Oriented Architecture:
When it comes to implementing service-oriented architecture (SOA), a wide variety of technologies can be used, depending on the ultimate objective and what you are trying to achieve.

Service-oriented architecture is typically implemented with web services, which make "functional building blocks accessible via standard Internet protocols".

SOAP, which stands for Simple Object Access Protocol, is an example of a web service standard. Briefly speaking, SOAP is a messaging protocol specification for the standardized exchange of structured information in the implementation of web services in computer networks. Although SOAP was
initially not well received, it has grown in popularity since 2003 and is now more widely used and accepted. Jini, CORBA, and REST are other options for implementing service-oriented architecture.

It is important to remember that the architecture can be implemented in different ways, "regardless of the particular technologies", including messaging systems such as ActiveMQ, Apache Thrift, and SORCER.

Why Service-Oriented Architecture Is Important:
Figure 2.29 Before and After Service-Oriented Architecture

Service-oriented architecture has many benefits, particularly in a web-based business. Here, we briefly discuss some of those advantages:

Using service-oriented architecture to create reusable code: reinventing your coding wheel whenever a new service or process is needed is not only time-consuming but unnecessary. SOA also allows multiple coding languages to be used, since everything runs via a central interface.

Using service-oriented architecture to promote interaction: service-oriented architecture establishes a common mode of communication that enables different systems and platforms to operate independently of each other. Through this connection, a service-oriented architecture can also work around firewalls, enabling "companies to share services that are vital to operations".

Using service-oriented architecture for scalability: it is vital to be able to scale a business to meet customers' requirements, but certain dependencies can get in the way. Using Service-Oriented
Architecture reduces the coupling between clients and services, which makes it easier to scale.

Using service-oriented architecture to reduce costs: with a service-oriented architecture it is possible to reduce costs while still "maintaining a desired level of output". Businesses can limit the amount of analysis required to develop custom solutions by using service-oriented architecture.

2.5.3.3 Web services:
A web service is a standardized method for propagating communication between client and server applications on the World Wide Web. A web service is a software module designed to perform a certain set of tasks.

Web services can be searched for across the network and invoked accordingly. When invoked, the web service provides the requested functionality to the client that invoked it.
Figure 2.30 Web Service Architecture Diagram

The above diagram gives a very clear view of the internal working of a web service. The client makes a series of web service calls, via requests, to the server that hosts the web service. These requests are made through what are known as remote procedure calls. Remote procedure calls (RPCs) are calls made to the procedures hosted by the web service.

Amazon, for example, provides a web service for products sold online through amazon.com. The front end or presentation layer may be in .NET or Java, but either programming language can interact with the web service.

The data transmitted between the client and the server is the key component of a web service, and that data is XML. XML is a counterpart to HTML, an intermediate language that many programming languages find easy
to understand. Applications talk to each other in XML, which provides a common platform for applications written in different programming languages to interact with one another. Web services use SOAP (Simple Object Access Protocol) to transfer XML data between applications, and the data is transmitted over standard HTTP. The data sent from the web service to the application is called a SOAP message, and a SOAP message is simply an XML document. Because the document is written in XML, the client application calling the web service can be written in any programming language.

Why do you need a Web Service?:
Everyday software systems use a wide range of web-based programming platforms. Applications may be built in Java, others in .NET, others in AngularJS, Node.js, and so on. Most often, these heterogeneous applications require some kind of communication between them. Since they are built in different programming languages, ensuring effective communication between them is very difficult.

This is where web services come in. Web services provide a common platform that enables multiple applications based on various programming languages to communicate with each other.

Types of web service:
There are mainly two kinds of web services:
1. SOAP web services.
2. RESTful web services.

Certain components must be in place for a web service to be fully functional, regardless of the programming language used to implement it. Let us take a closer look at these components.

SOAP is regarded as a transport-independent messaging protocol. SOAP is based on transferring XML data as SOAP messages. Each message carries an XML document. Only the structure of the XML document follows a specific pattern; the contents do not. The best part of web services and SOAP is that everything is delivered via HTTP, the standard web protocol.

Here is what a SOAP message consists of:
A root element called the <Envelope> element is needed in every SOAP document.
The first element of an XML document is the root element. The envelope is in turn divided into two parts: the first is the header, and the second is the body.
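The envelope/header/body structure just described can be built with Python's standard xml.etree module. This is a sketch only: the namespace URI follows the SOAP 1.1 convention, and the payload element (`GetPrice`) is a made-up placeholder, not part of any real service:

```python
# Building a minimal SOAP 1.1 envelope: root <Envelope> with Header and Body.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
ET.register_namespace("soap", SOAP_NS)

envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")           # mandatory root element
header = ET.SubElement(envelope, f"{{{SOAP_NS}}}Header")  # routing data goes here
body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")      # the actual message

# Placeholder payload (hypothetical operation name, for illustration only).
request = ET.SubElement(body, "GetPrice")
request.text = "book-42"

xml_text = ET.tostring(envelope, encoding="unicode")
print(xml_text)
```

Serializing the tree yields the familiar `<soap:Envelope>` document with its `<soap:Header>` and `<soap:Body>` children.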
The header contains the routing data, i.e., the information specifying to which client the XML document should be sent.
The actual message is contained in the body.

A simple example of communication via SOAP is given in the diagram below.
Figure 2.31

WSDL (Web services description language):
A web service cannot be used if it cannot be found. The client invoking the web service needs to know where the web service is located.

Second, the client application needs to know what the web service does in order to invoke the correct web service. This is achieved using WSDL, the Web Services Description Language. The WSDL file is another XML file that essentially tells the client application what the web service does. By using the WSDL document, the client application understands where the web service is located and how to use it.

Web Service Example:
An example of a WSDL file is given below.
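The original listing of the WSDL file did not survive reproduction here. The fragment below is our own illustrative reconstruction, written only to be consistent with the elements discussed next (the TutorialRequest/TutorialResponse messages, the Tutorial operation, and an HTTP SOAP binding); names such as `TutorialPortType` and the example namespace are assumptions, not the book's original listing:

```xml
<definitions name="TutorialService"
             targetNamespace="http://example.com/tutorial"
             xmlns="http://schemas.xmlsoap.org/wsdl/"
             xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
             xmlns:tns="http://example.com/tutorial"
             xmlns:xsd="http://www.w3.org/2001/XMLSchema">

  <message name="TutorialRequest">
    <part name="TutorialID" type="xsd:string"/>
  </message>
  <message name="TutorialResponse">
    <part name="TutorialName" type="xsd:string"/>
  </message>

  <portType name="TutorialPortType">
    <operation name="Tutorial">
      <input message="tns:TutorialRequest"/>
      <output message="tns:TutorialResponse"/>
    </operation>
  </portType>

  <binding name="TutorialBinding" type="tns:TutorialPortType">
    <soap:binding style="rpc"
                  transport="http://schemas.xmlsoap.org/soap/http"/>
    <operation name="Tutorial">
      <input>
        <soap:body use="encoded" namespace="http://example.com/tutorial"
                   encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
      </input>
      <output>
        <soap:body use="encoded" namespace="http://example.com/tutorial"
                   encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
      </output>
    </operation>
  </binding>
</definitions>
```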
The main aspects of the above WSDL declaration are the following:

<message>: the message parameter in the WSDL definition is used to define the different data elements for each operation performed by the web service. In this example there are two messages that can be exchanged between the web service and the client application: one is the "TutorialRequest" and the other is the "TutorialResponse". The TutorialRequest contains an element called "TutorialID" of type string. Similarly, the TutorialResponse contains an element called "TutorialName", also of type string.

<portType>: this actually defines the operation offered by the web service, which in our case is known as Tutorial. This operation takes two messages: one is the input message and the other is the output message.

<binding>: this element contains the protocol used. In our case we define it to use HTTP (http://schemas.xmlsoap.org/soap/http). Additional details for the body of the operation are also specified, such as the namespace and the encoding style of the message.
Universal Description, Discovery, and Integration (UDDI):
UDDI is the standard by which the web services offered by a particular provider are described, published, and discovered. It provides a specification for hosting information about web services.

In the previous topic, we discussed WSDL and how it provides information about what the web service actually does. But how can a client application locate a WSDL file in order to learn about the various operations offered by a web service? UDDI provides the answer: it specifies a registry that can host WSDL files. The client application thus has full access to the UDDI, a database that contains all the WSDL files.

Just as a telephone directory lists a person's name, address, and telephone number, the UDDI registry holds the relevant information about a web service, so that a client or developer knows where to find it.

We now also understand why web services came about in the first place: to provide a platform that lets different applications talk to each other. But let us discuss some other advantages of web services.

Exposing business functionality on the network: a web service is a unit of managed code that provides some kind of functionality to client applications or end users. This functionality can be invoked over the HTTP protocol, which means it can also be invoked over the Internet. Since all applications are now on the Internet, this makes web services even more useful: a web service can be available anywhere on the web and can provide the required functionality.

Interoperability between applications: web services allow different applications to talk to each other and to share data and services among themselves. All kinds of applications can talk to each other. Instead of writing specific code that only particular applications can understand, you can now write generic code that all applications understand.

A standardized protocol that everybody understands: web services use a standardized industry protocol for communication, which everybody understands.
All four layers (Service Transport, XML Messaging, Service Description, and Service Discovery) use well-defined protocols in the web services protocol stack.

Reduction in the cost of communication: web services use SOAP over the HTTP protocol, so they can be implemented using the existing low-cost Internet.
2.5.3.4 Service orientation and cloud computing:
Service orientation is an architectural approach that uses automated software resources to implement business processes. Such business services consist of a collection of loosely coupled components, designed to minimize dependencies, that support a well-defined business function. Building business service systems in a modular way leads to more flexible and efficient IT systems.

Systems designed around service orientation allow businesses to leverage existing resources and easily manage the inevitable changes that a dynamic company experiences. There are also circumstances in which a combination of several services is needed, which means these combined workloads will operate with lower latency than with loosely coupled parts.

Hybrid cloud environments have become important because organizations constantly reinvent themselves in order to respond to change and remain competitive. IT must be at the forefront of a business strategy based on innovation and transformation. Organizations understand that it is difficult to find one best IT computing approach for all kinds of workloads; thereby, a hybrid cloud system is the most realistic solution.

A high degree of flexibility and modularity is needed to make a cloud infrastructure work in the real world. A cloud must be designed to support a range of workloads and business services, and one must be able to tell when a service should be scaled up and when it can be scaled down.

Specifically, this service-based architectural design approach supports the key cloud characteristics of elasticity, self-service, standards-based interfaces, and pay-as-you-go flexibility. Combining a service-oriented approach with cloud services enables businesses to decrease costs and improve business flexibility.
Scalability and elasticity services for public and private cloud systems are interchangeable and loosely coupled.

SUMMARY

In this chapter we introduced parallel and distributed computing as a framework on which cloud computing can be properly described. Parallel and distributed computing emerged as a way to solve large problems, first by using several processing elements and then by using multiple networked computer nodes.

UNIT END QUESTIONS

1. Differentiate between parallel and distributed computing.
2. What is an SIMD architecture?
3. Explain the major categories of parallel computing systems.
4. Explain the different levels of parallelism that can be obtained in a computing system.
5. What is a distributed system? What are the components that characterize it?
6. What is an architectural style and how does it handle a distributed system?
7. List the most important software architectural styles.
8. What are the fundamental system architectural styles?
9. Describe the most important model for message-based communication.
10. Discuss RPC and how it enables interprocess communication.
11. What is CORBA?
12. What is service-oriented computing?
13. What is market-oriented cloud computing?

REFERENCE FOR FURTHER READING

•Mastering Cloud Computing: Foundations and Applications Programming, Rajkumar Buyya, Christian Vecchiola, S. Thamarai Selvi, MK Publications, ISBN: 978-0-12-411454-8
•Cloud Computing: Concepts, Technology & Architecture, Thomas Erl, Zaigham Mahmood, and Ricardo Puttini, The Prentice Hall Service Technology Series, ISBN-10: 9780133387520, ISBN-13: 978-0133387520
•Distributed and Cloud Computing: From Parallel Processing to the Internet of Things, 1st Edition, Kai Hwang, Jack Dongarra, Geoffrey Fox, ISBN-10: 9789381269237, ISBN-13: 978-9381269237

*****
3
VIRTUALIZATION

Unit Structure
3.0 Objective
3.1 Introduction
3.2 Major Components of Virtualization Environment
3.2.1 Characteristics of Virtualization
3.3 Taxonomy of virtualization techniques
3.3.1 Execution virtualization
3.3.2 Machine reference model
3.3.2.1 Instruction Set Architecture (ISA)
3.3.2.2 Application Binary Interface
3.4 Security Rings and Privileged Mode
3.4.1 Ring 0 (most privileged) and 3 (least privileged)
3.4.2 Rings 1 and 2
3.5 Hardware-level virtualization
3.6 Hypervisors
3.6.1 Type 1 Hypervisor
3.6.2 Type 2 Hypervisor
3.6.3 Choosing the right hypervisor
3.6.7 Hypervisor Reference Model
3.7 Hardware virtualization techniques
3.7.1 Advantages of Hardware-Assisted Virtualization
3.8 Full virtualization
3.9 Paravirtualization
3.10 Programming language-level virtualization
3.10.1 Application-level virtualization
3.11 Other types of virtualization
3.11.1 Storage virtualization
3.11.2 Network Virtualization
3.11.3 Desktop virtualization
3.11.4 Application server virtualization
3.12 Virtualization and cloud computing
3.12.1 Pros and cons of virtualization
3.12.1.1 Advantages of virtualization
3.12.1.2 Disadvantages of virtualization
3.13 Technology examples
3.13.1 Xen: paravirtualization
3.13.2 VMware: full virtualization
3.13.3 Full Virtualization and Binary Translation
3.13.4 Virtualization solutions
3.13.5 End-user (desktop) virtualization
3.13.6 Server virtualization
3.14 Microsoft Hyper-V
3.14.1 Architecture
3.15 Summary
3.16 Unit End Questions
3.17 Reference for further reading

3.0 OBJECTIVE

Virtualization abstracts hardware so that multiple workloads can share a common set of resources. A variety of workloads can be co-located on shared virtualized hardware while maintaining complete isolation, migrating freely across infrastructures and scaling as required.

Businesses gain considerable savings and efficiency through virtualization, as it results in improved server usage and consolidation, dynamic assignment and management of resources, isolation of workloads, security and automation. Virtualization enables self-service, on-demand provisioning and software-defined orchestration of resources, available on-site or off-site at any place in a hybrid cloud, according to specific business needs.

3.1 INTRODUCTION

Cloud virtualization turns server operating systems and storage devices into a virtual platform. It enables several users to share a single physical resource instance or application by providing each of them with a virtual machine. Cloud virtualization also improves on traditional computing in portability, scalability, economics and efficiency.

Virtualization is quickly becoming a key technique in cloud computing. One of its key features is that it allows multiple customers and companies to share the same applications. A virtualized environment can also host cloud-based services and applications, in either a public or a private setting. Through virtualization the customer can maximize resources and reduce the number of physical systems needed.

Recently, due to the confluence of several phenomena, interest in virtualization technology has grown:
Increased performance and computing capacity:

A single corporate data center is, in most instances, unable to compete in terms of security, performance, speed and cost-effectiveness with the network of data centers provided by a service provider. Since the majority of services are available on demand, users can obtain large amounts of computing resources in a short period of time, with tremendous ease and flexibility and without any costly investment. In turn, cloud services offer the ability to free up memory and computing power on individual computers through remote hosting of platforms, software and databases. The obvious result is a significant performance improvement.

Underutilized hardware and software resources:

Underutilization of hardware and software is caused by increased computing capacity combined with limited or infrequent resource usage. Computer systems have become so powerful today that in many instances an application or system uses only a fraction of their capacity. Furthermore, within a company's IT infrastructure, numerous computer systems are only partly utilized even though they could provide services 24/7/365 without interruption. For instance, desktop PCs used by administrative personnel, mainly for office-automation tasks, are busy only during working hours. The efficiency of the IT infrastructure can be enhanced by using these resources for other purposes. Providing such a service transparently requires a completely separate environment, which can be achieved via virtualization.

Lack of space:

Data centers are continuously expanding with the need for extra infrastructure, be it storage or computing power.
Organizations like Google and Microsoft expand their infrastructure by constructing data centers as big as football fields, containing thousands of nodes. While this is feasible for the big IT players, other companies are often unable to build an additional data center to accommodate extra resource capacity. Together with this situation, underutilized hardware resources led to the diffusion of server consolidation, a technique for which virtualization is fundamental.

Greening initiatives:

Virtualization is a core technology for deploying cloud-based infrastructure, running multiple operating system images simultaneously on a single physical server. As a consolidation enabler, server virtualization reduces the overall number of physical servers, with inherent green benefits.

From the perspective of resource efficiency, fewer machines are required, which proactively reduces the space needed in a data center and the
eventual footprint of e-waste. From an energy-efficiency point of view, a data center with fewer physical machines consumes less electricity. Cooling is a major requirement in data centers and contributes to high power consumption. Through free-cooling methods, such as the use of air and water instead of air conditioning and refrigeration, data centers can reduce their cooling costs. Data center managers can also save on electricity costs with solar panels, temperature controls and wind energy.

Rise of administrative costs:

Power consumption and cooling costs are increasing, as are IT equipment costs. In addition, increased demand for extra capacity, which translates into more servers in a data center, significantly increases administrative costs. Computers, especially servers, do not work entirely on their own but require the care and attention of a system administrator. Hardware monitoring, replacement of faulty equipment, server installation and updates, monitoring of server resources, and backups are all common system administration tasks. These operations are time consuming, and the more servers there are to handle, the higher the administrative costs. Virtualization can reduce the number of servers required for a given workload and thereby reduce administrative staff costs.

3.2 MAJOR COMPONENTS OF A VIRTUALIZATION ENVIRONMENT

Virtualization is the way to create a "virtual version" of a physical machine. Virtualization is achieved using a virtual machine monitor, which enables several virtual machines to operate on a single physical device. Virtual machines can easily be moved from one hardware platform to another without any observable changes. Virtualization is widely used in cloud computing: it allows multiple operating systems and applications to run on the same hardware, each isolated from the others.
Figure 3.1: Reference Model of Virtualization. (Reference from "Mastering Cloud Computing: Foundations and Applications Programming" by Rajkumar Buyya)
In a virtualized environment there are three main components:

1. Guest:
The guest denotes the system component that interacts with the virtualization layer rather than with the host machine. A guest is usually represented by one or more virtual disks and a VM definition file. Virtual machines are centrally operated by a host application that sees and manages each virtual machine as a separate application.

2. Host:
The host is the original environment in which the guest is managed. Each guest uses the common resources that the host grants to it. The operating system works as a host and manages the physical resources and device support.

3. Virtualization Layer:
The virtualization layer recreates the environment, either the same or a different one, in which the guest operates. It is an extra layer of abstraction between the hardware and the compute, network and storage services running on it. Without it, a machine usually runs a single operating system, which is very inflexible compared with virtualization.

3.2.1 Characteristics of Virtualization

1. Increased Security:
The ability to control the execution of a guest program in a fully transparent manner opens new possibilities for providing a safe, controlled execution environment. All operations of a guest program are generally performed against the virtual machine, which translates and applies them to the host. A virtual machine manager can govern and filter the activity of guest programs so as to prevent harmful operations from being carried out. Resources exposed by the host can then be hidden from, or simply protected against, the guest.

Example 1: Untrusted code can be evaluated in a sandboxed environment such as Cuckoo Sandbox.
The term sandbox denotes an isolated execution environment in which instructions may be filtered and blocked before being translated and executed in the actual execution environment.

Example 2: The expression sandboxed applied to the Java Virtual Machine (JVM) denotes a particular JVM configuration in which instructions regarded as potentially harmful can be blocked through a security policy.

2. Managed Execution:
The most important features of managed execution are sharing, aggregation, emulation and isolation.
Figure 3.2: Functions enabled by managed execution. (Reference from "Mastering Cloud Computing: Foundations and Applications Programming" by Rajkumar Buyya)

1. Sharing:
Virtualization makes it possible to create separate computing environments within the same host. This sharing function reduces the number of active servers and limits energy consumption.

2. Aggregation:
Not only can a physical resource be shared among several guests, but virtualization also enables aggregation: a group of separate hosts can be tied together and represented as a single virtual host. This functionality is implemented with cluster management software, which harnesses the physical resources of a homogeneous group of machines and represents them as a single resource.

3. Emulation:
Guest programs are executed within an environment that is controlled by the virtualization layer, which is essentially a program. An environment completely different from the host can thereby be emulated, so that guest programs requiring features not present in the physical host can be run.

4. Isolation:
Virtualization provides guests, whether operating systems, applications or other entities, with a completely separate environment in which they are executed. The guest program performs its activity through an abstraction layer that provides access to the underlying resources. The virtual machine can filter the guest's activity and prevent harmful operations against the host.
In addition to these features, performance tuning is another important capability enabled by virtualization. It has become a reality owing to considerable advances in the hardware and software supporting virtualization. By finely tuning the properties of the resources exposed in the virtual environment, a guest's performance becomes easier to control. This provides an effective means of implementing a quality-of-service (QoS) infrastructure.

3. Portability:
The concept of portability applies in different ways depending on the specific type of virtualization. In the case of hardware virtualization, the guest is packaged into a virtual image that, in most cases, can be safely moved and executed on different virtual machines. In the case of programming-level virtualization, as implemented by the JVM or the .NET runtime, the binary code of the application components (jars or assemblies) can run on the corresponding virtual machine without recompilation.

3.3 TAXONOMY OF VIRTUALIZATION TECHNIQUES

Virtualization covers a wide range of emulation techniques applied to different areas of computing. A classification of these techniques helps us to understand and apply them.

The first classification discriminates by the service or entity being emulated. Virtualization is mainly used to emulate execution environments, storage and networks. Of these categories, execution virtualization is the oldest, most popular and most developed, and therefore deserves further investigation and classification.
Within execution virtualization we can divide the techniques into two main categories by examining the type of host they require:

Process-level techniques are implemented on top of an existing operating system, which has full control of the hardware.

System-level techniques are implemented directly on hardware and do not require, or require only limited support from, an existing operating system.

Within these two categories we can list various techniques that offer the guest a different type of virtual computing environment: bare hardware, operating system resources, low-level programming language, and application libraries.
FIGURE 3.3: A taxonomy of virtualization techniques. (Reference from "Mastering Cloud Computing: Foundations and Applications Programming" by Rajkumar Buyya)

3.3.1 Execution virtualization:

Execution virtualization includes all the techniques that emulate an execution environment separate from the one hosting the virtualization layer. All these techniques concentrate on supporting the execution of programs, whether it be an operating system, a binary program compiled against an abstract machine model, or an application. Thus the operating system, an application, and libraries statically or dynamically linked to the application image run on top of the hardware.

3.3.2 Machine reference model:

Virtualizing an execution environment at different levels of the computing stack requires a reference model that defines the interfaces between the levels of abstraction, where each level hides the details of its implementation. This suggests that virtualization techniques can replace one of the layers and intercept the calls directed to it. For this reason, a clear separation between layers simplifies their implementation: only the interfaces need to be emulated, with requests routed to the underlying layer.

At the bottom layer, the model of the hardware is expressed in terms of an architecture, the Instruction Set Architecture (ISA).
FIGURE 3.4: A machine reference model. (Reference from "Mastering Cloud Computing: Foundations and Applications Programming" by Rajkumar Buyya)

3.3.2.1 Instruction Set Architecture (ISA):

The instruction set, known as the ISA, is the part of the computer architecture related to programming; it is essentially the machine's language. The instruction set provides the processor with the instructions that tell it what to do. It comprises addressing modes, instructions, native data types, registers, the memory architecture, interrupt and exception handling, and external I/O.

An example is the x86 instruction set, common in computers today. Different processors can use almost the same instruction set while still having quite different internal designs: the Intel Pentium and AMD Athlon processors both implement nearly the same x86 instruction set. An instruction set can be built into the processor's hardware or emulated in software by an interpreter; hardware execution is more efficient and faster than the emulated version.

3.3.2.2 Application Binary Interface:

ABI stands for Application Binary Interface. An ABI defines, at the binary-code level, how functions are invoked, how parameters are passed between caller and callee, how return values are given back to callers, how libraries are deployed, and how programs are loaded into memory. The linker thus implements an ABI: an ABI is the set of rules by which independently built pieces of code work together. An ABI also governs the co-existence of processes on the same system. For example, on a UNIX system an ABI may specify how signals are handled, how a process invokes system calls, what endianness is used and how stacks grow. An ABI is a set of rules implemented for a particular architecture by the operating system.
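Because caller and callee agree on the ABI's calling convention, code built by completely different toolchains can interoperate. A small illustration, a sketch assuming a Unix-like system with a C standard library available, uses Python's ctypes to call a C function through the platform ABI:

```python
import ctypes
import ctypes.util

# Load the C standard library through the platform's ABI.
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Declare the C signature of abs(): int abs(int).
# ctypes marshals the Python int into the register/stack slots
# that the ABI's calling convention dictates.
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

print(libc.abs(-42))  # the callee returns its result per the ABI
```

The Python interpreter and libc were compiled separately, possibly by different compilers, yet the call works because both sides follow the same ABI.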
The troika of kernel, toolchain and architecture defines an ABI, and everybody must agree on it. An architecture generally defines a preferred or standardized ABI, and operating systems abide by that standard more or less closely. Such information is usually documented in the architecture's reference manual, for instance for x86-64.

3.4 SECURITY RINGS AND PRIVILEGED MODE

The CPU operates mainly at two levels of privilege:

User Mode: In this mode memory access is restricted to a certain extent, and access to peripherals is denied.

Kernel Mode: In this mode the CPU has instructions for managing and accessing memory, as well as instructions for accessing peripherals such as disks and network cards. The CPU switches automatically from one running program to another. This layered approach simplifies the implementation of multitasking and the coexistence of multiple execution environments.

A first distinction can be made between privileged and non-privileged instructions. Non-privileged instructions can be used without interfering with other tasks because they do not access shared resources; examples include all fixed-point and floating-point arithmetic instructions. Privileged instructions are those executed under specific restrictions and commonly used for sensitive operations (those that expose behavior-sensitive state or alter control-sensitive state).

The OS manages the resources of a computer, such as CPU processing time and memory access. Computers often run several software processes simultaneously, requiring different levels of access to resources and hardware. Processes execute in layered "rings", with different rights of access to resources in each ring. The central ring has the highest privileges, and access is reduced in each subsequent layer.
A common implementation of protection rings on the x86 processor (a common CPU type) has four rings, numbered 0 to 3.

The layered model has two main advantages. First, it protects against system crashes. Errors in the higher rings (those with less access) can usually be recovered from: because Ring 0 has direct access to memory and the CPU, a crashing process in an outer ring can be restarted without data loss or a CPU fault. Second, it provides enhanced security. To execute instructions that require greater access to resources, a process must request permission from the operating system, which can then decide whether or not to grant the request. This selection process helps to
prevent unwanted or malicious behavior in the system.
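The ring model can be illustrated with a small simulation. This is an illustrative sketch, not a real CPU mechanism: the operation names and their ring assignments below are invented for the example.

```python
# Illustrative sketch of ring-based access control. The operations and
# ring assignments are invented for this example, not a real CPU spec.
RING_REQUIRED = {
    "access_memory": 0,   # ring 0: direct CPU/memory access
    "talk_to_device": 1,  # ring 1: hardware interaction
    "file_io": 2,         # ring 2: input/output to storage
    "run_app_code": 3,    # ring 3: ordinary user code
}

def execute(operation, current_ring):
    """Allow the operation only if the caller's ring is privileged enough.

    Lower ring numbers are more privileged, so a caller may perform any
    operation whose required ring number is >= its own ring number.
    """
    required = RING_REQUIRED[operation]
    if current_ring <= required:
        return f"{operation}: permitted"
    # A less privileged ring cannot execute the instruction directly;
    # it must trap and request the service from a more privileged ring.
    return f"{operation}: trap -> request service from ring {required}"

print(execute("run_app_code", 3))   # user code in ring 3 runs directly
print(execute("access_memory", 3))  # user code must trap to ring 0
print(execute("access_memory", 0))  # kernel code is permitted
```

The trap branch models the selection process described above: the request crosses into a more privileged ring only if the operating system chooses to service it.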
FIGURE 3.5: Security rings and privilege modes. (Reference from "Mastering Cloud Computing: Foundations and Applications Programming" by Rajkumar Buyya)

3.4.1 Ring 0 (most privileged) and 3 (least privileged):

Ring 0 is accessible to the kernel, the core component of most operating systems, which can access everything; code running here is said to operate in kernel mode. Processes running in kernel mode can significantly impact the whole system: if anything fails here, a system shutdown will probably occur. This ring has direct access to the CPU and to system memory, so any instructions requiring either are executed here.

Ring 3, the least privileged ring, is available to user processes running in user mode. This is where most applications on a computer operate. This ring has no direct access to the CPU or memory and must therefore pass any instructions needing them to ring 0.

3.4.2 Rings 1 and 2:

Rings 1 and 2 have special privileges that ring 3 (user mode) lacks. Ring 1 is used to interact with and control the hardware connected to the computer; playing a song through speakers or headphones, or displaying video on a monitor, are examples of its use. Ring 2 is used for instructions that interact with system storage, loading or saving files. These permissions are referred to as input and output because data is moved into or out of working memory (RAM). For example, loading a document from storage falls under ring 2, while viewing and editing it, at the application layer, falls under ring 3.

In a hypervisor environment, guest operating system code is expected to run in user mode to prevent it from directly accessing the OS state. Since non-privileged instructions execute directly, it is no longer possible to completely isolate the guest OS. The distinction between
user and supervisor mode enables us to understand the hypervisor's role. Conceptually, the hypervisor runs above the supervisor, hence the prefix hyper-. In reality, hypervisors run in supervisor mode, and the division between privileged and non-privileged instructions poses challenges in the design of virtual machine managers. All sensitive instructions are expected to execute in privileged mode, so that invoking them outside supervisor mode causes a trap; without this assumption, the CPU state of guest operating systems cannot be fully emulated and managed. Unfortunately, this was not the case for the original x86 ISA, which allows 17 sensitive instructions to be called in user mode. This prevents multiple operating systems managed by a single hypervisor from being fully isolated. More recent implementations of the ISA (Intel VT, AMD Pacifica) have resolved this issue by redesigning these instructions as privileged ones.

3.5 HARDWARE-LEVEL VIRTUALIZATION

Hardware-level virtualization is a virtualization technique that provides an abstract execution environment, in terms of computer hardware, in which a guest operating system can run. In this model the guest is represented by the operating system and the host by the physical computer hardware; the virtual machine emulates that hardware, and the virtual machine manager is the hypervisor (see Figure 3.6). The hypervisor is usually a software component, sometimes assisted by hardware, that enables the abstraction of the underlying physical hardware. Hardware-level virtualization is also called system virtualization, since it exposes an ISA to virtual machines, representing a system's hardware interface. This distinguishes it from process virtual machines, which expose an ABI to individual processes.
FIGURE 3.6: A hardware virtualization reference model. (Reference from "Mastering Cloud Computing: Foundations and Applications Programming" by Rajkumar Buyya)
3.6 HYPERVISORS

A hypervisor is the key piece of software that enables virtualization. It abstracts the guest machines, and the operating systems they run, from the actual hardware. Hypervisors create a virtualized layer of CPU/processor, RAM and other physical resources, separating the virtual devices you create from the underlying hardware.

The machine on which we install the hypervisor is called the host machine, in contrast with the virtual guest machines running on top of it. Hypervisors emulate the resources available to guest machines. Regardless of which operating system a guest boots, it believes that real physical hardware is available: from the VM's viewpoint, there is no difference between the physical and the virtual environment. Guest machines do not know that the hypervisor created them or that they are sharing the available computing power. VMs run simultaneously on the hardware that powers them and are therefore fully dependent on its stable operation.

There are two types of hypervisor:
• Type 1 Hypervisor (also called bare metal or native)
• Type 2 Hypervisor (also known as hosted hypervisor)

3.6.1 Type 1 Hypervisor:

A bare-metal (type 1) hypervisor is a software layer installed directly on a physical server and its underlying hardware. Examples of type 1 hypervisors include VMware ESXi, Citrix XenServer and Microsoft Hyper-V.

The name bare metal reflects the fact that there is no intermediate software or host operating system. Because a type 1 hypervisor does not run inside Windows or any other operating system, it provides excellent performance and stability.

Type 1 hypervisors are themselves a very basic OS on which virtual machines run. The hypervisor's physical machine is used for server virtualization only; it cannot be used for anything else. Type 1 hypervisors are mostly found in enterprise environments.
FIGURE 3.7 Type 1 Hypervisor
3.6.2 Type 2 Hypervisor:

This type of hypervisor runs within a physical host operating system. Examples of type 2 hypervisors include VMware Player and Parallels Desktop.

This is why type 2 hypervisors are called hosted hypervisors. In contrast to type 1 hypervisors, which run directly on the hardware, hosted hypervisors have one underlying layer of software. The stack consists of:
• A physical machine.
• An operating system installed on the hardware (Windows, Linux, macOS).
• The type 2 hypervisor software installed within that operating system.
• The running instances of virtual guest machines.
FIGURE 3.8: Type 2 Hypervisor

3.6.3 Choosing the right hypervisor:

Type 1 hypervisors offer much better performance than type 2 hypervisors. Because there is no middle layer, they are the logical choice for mission-critical applications and workloads. That is not to say that hosted hypervisors have no place: they are much easier to set up, so they are a good bet if, say, you need to stand up a test environment quickly. One of the best ways to determine which hypervisor meets your needs is to compare their performance metrics. The following factors must be examined before selecting the appropriate hypervisor: CPU overhead, maximum host and
guest memory, and support for virtual processors.

1. Understand your needs:
Consider what the data center (and your job) means for the company and its applications. In addition to the requirements of your company, you (and your IT staff) have your own requirements:
a. Flexibility
b. Scalability
c. Usability
d. Availability
e. Reliability
f. Efficiency
g. Reliable support

2. The cost of a hypervisor:
For many buyers, the most difficult part of choosing a hypervisor is striking the right balance between cost and functionality. While some entry-level solutions are free or practically free, prices at the opposite end of the market can be staggering. Licensing frameworks also vary, so it is important to know exactly what you get for your money.

3. Virtual machine performance:
Virtual systems should meet or exceed the performance of their physical counterparts, at least in relation to the applications within each server. Everything beyond that benchmark is profit.

4. Ecosystem:
It is tempting to ignore the role that a hypervisor's ecosystem, that is, the availability of documentation, support, training, development and consultancy services, plays in determining whether a solution is cost-effective.

5. Test for yourself:
You can gain basic experience from your existing desktop or laptop. To build a good virtual learning and testing environment, you can run either VMware vSphere or Microsoft Hyper-V inside VMware Workstation or VMware Fusion.
3.6.7 Hypervisor Reference Model:
FIGURE 3.9: A hypervisor reference architecture. (Reference from "Mastering Cloud Computing: Foundations and Applications Programming" by Rajkumar Buyya)

Three main modules coordinate to emulate the underlying hardware:
1. Dispatcher
2. Allocator
3. Interpreter

Dispatcher:
The dispatcher constitutes the entry point of the monitor, rerouting the instructions issued by the virtual machine instance to one of the other two modules.

Allocator:
The allocator is responsible for deciding which system resources are to be provided to the virtual machine instance. Whenever a virtual machine tries to execute an instruction that would change the machine resources associated with it, the dispatcher invokes the allocator.
Interpreter:
The interpreter module consists of interpreter routines, which are executed whenever the virtual machine executes a privileged instruction.

The Popek and Goldberg requirements are a set of conditions sufficient for a computer architecture to support system virtualization efficiently. They were introduced by Gerald J. Popek and Robert P. Goldberg in their 1974 article "Formal Requirements for Virtualizable Third Generation Architectures". Although simplifying assumptions are made, the requirements remain a useful way to determine whether a computer architecture supports efficient virtualization, and they provide guidelines for the design of virtualizable computer architectures.

System virtual machines can virtualize a full set of hardware resources, such as processors, memory, storage resources and peripheral devices. A virtual machine monitor (VMM), also known as a hypervisor, is the software component that provides the abstraction of a virtual machine. In analyzing the environment created by a VMM, three properties are of interest:

Equivalence / Fidelity:
A program running under the VMM should behave essentially the same as when running directly on an equivalent machine.

Resource control / Safety:
The VMM must be in complete control of the virtualized resources.

Efficiency / Performance:
A statistically dominant fraction of machine instructions must be executed without VMM intervention.

In Popek and Goldberg's terminology, software must exhibit the equivalence and resource-control properties to qualify as a VMM; a VMM that additionally satisfies the efficiency property is known as an efficient VMM.

Popek and Goldberg describe the characteristics that the instruction set architecture (ISA) of the physical machine must have in order to run VMMs possessing the above properties. Their model comprises a processor operating in either system or user mode, with access to linear, uniformly addressable memory.
A subset of the instruction set is assumed to be available only in system mode, and memory is addressed relative to a relocation register. Interrupts and I/O are not modeled.

To derive their virtualization theorems, which give sufficient (but not necessary) conditions for virtualization, Popek and Goldberg classify the ISA instructions into three groups:
Privileged instructions:
Those that trap if the processor is in user mode and do not trap if it is in system (supervisor) mode.

Control-sensitive instructions:
Those that attempt to change the configuration of system resources.

Behavior-sensitive instructions:
Those whose results depend on the configuration of resources (the content of the relocation register or the processor mode).

The main result of Popek and Goldberg's analysis can then be stated as follows.

Theorem 1. For any conventional third-generation computer, an effective VMM may be constructed if the set of sensitive instructions for that computer is a subset of the set of privileged instructions.

Intuitively, the theorem states that to build a VMM it is sufficient that all instructions that could affect the correct functioning of the VMM (sensitive instructions) always trap and pass control to the VMM. This guarantees the resource-control property. Non-privileged instructions must instead be executed natively (i.e., efficiently), which also preserves the equivalence property.

This theorem also provides a simple technique for implementing a VMM, more recently known as classical virtualization or trap-and-emulate virtualization: all the VMM has to do is trap and emulate each sensitive instruction.

A related problem is deriving sufficient conditions for recursive virtualization, that is, the conditions under which a VMM can be built that can run a copy of itself. Popek and Goldberg present the following (sufficient) conditions.

Theorem 2. A conventional third-generation computer is recursively virtualizable if:
• it is virtualizable, and
• a VMM without any timing dependencies can be constructed for it.

Some architectures, such as the non-hardware-assisted x86, do not meet these conditions, so they cannot be virtualized in the classic way.
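Theorem 1 reduces to a simple set check: an architecture is classically virtualizable if every sensitive instruction is also privileged, and thus traps in user mode so the dispatcher can route it to the VMM. The sketch below encodes this test with invented toy instruction sets; the x86-like entry reflects the fact, noted above, that x86 has sensitive instructions (POPF is one of the 17) that execute silently in user mode instead of trapping.

```python
def classically_virtualizable(privileged, sensitive):
    """Popek-Goldberg Theorem 1: trap-and-emulate works if every
    sensitive instruction is also a privileged (trapping) one."""
    return set(sensitive) <= set(privileged)

# Toy instruction sets, illustrative only, not complete ISAs.
clean_isa = {
    "privileged": {"load_cr", "store_cr", "io_in", "io_out"},
    "sensitive":  {"load_cr", "store_cr"},
}
x86_like = {
    "privileged": {"lgdt", "lidt"},
    # popf silently alters flags in user mode instead of trapping:
    "sensitive":  {"lgdt", "lidt", "popf"},
}

print(classically_virtualizable(**clean_isa))  # True
print(classically_virtualizable(**x86_like))   # False: popf never traps
```

An instruction in the sensitive-but-not-privileged difference is exactly what breaks trap-and-emulate: the VMM never gets control when the guest executes it.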
However, such architectures (in the x86 case, at the CPU/MMU level) can still be fully virtualized using techniques such as binary translation, which replaces the sensitive instructions that do not generate traps — sometimes referred to as critical instructions. This extra processing reduces the theoretical efficiency of the VMM; on the other hand, hardware traps are themselves costly. A well-tuned binary translation system, which traps only where necessary, can achieve efficiency comparable to that of first-generation x86 hardware assist.
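The idea of binary translation can be sketched as follows. This is a simplified illustration, not a real translator: the basic block is represented as a list of text mnemonics, and the only "critical" instruction handled is x86 POPF (which silently ignores its privileged effects in user mode instead of trapping); the `vmm_emulate_popf` target is an invented name.

```python
# Sketch of binary translation: sensitive instructions that would not trap
# are rewritten into explicit calls into the VMM; translated blocks are
# cached so each block is translated only once.

translation_cache = {}

def translate_block(block):
    """Rewrite a basic block, replacing critical instructions."""
    key = tuple(block)
    if key in translation_cache:          # hit: reuse the earlier translation
        return translation_cache[key]
    translated = []
    for instr in block:
        if instr.startswith("popf"):      # POPF is sensitive but does not trap
            translated.append("call vmm_emulate_popf")
        else:
            translated.append(instr)      # innocuous: copied unchanged
    translation_cache[key] = translated
    return translated

block = ["mov eax, ebx", "popf", "add eax, 1"]
print(translate_block(block))
# A second call is served from the cache without re-translation.
print(translate_block(block) is translation_cache[tuple(block)])
```

The cache is what makes the approach competitive: the translation cost is paid once per block, while subsequent executions run the rewritten code directly.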
Theorem 3. A hybrid VMM may be constructed for any third-generation machine in which the set of user-sensitive instructions is a subset of the set of privileged instructions.

In a hybrid VMM, more instructions are interpreted rather than being executed directly.

3.7 HARDWARE VIRTUALIZATION TECHNIQUES

Hardware-assisted virtualization dates back to the IBM System/370, which hosted the first virtual machine operating system (VM/370, in 1972). Virtualization was then largely forgotten in the late 1970s, but the spread of x86 servers rekindled interest in it, driven by the need for server consolidation: virtualization allowed a single server to replace multiple underutilized dedicated servers.

The x86 architecture, however, did not meet Popek and Goldberg's criteria for "classical virtualization." To compensate for these limitations, virtualization of the x86 architecture was carried out by two methods: full virtualization or paravirtualization. Both create the illusion of physical hardware in order to make the operating system independent of the hardware, at the price of some performance and complexity.

Intel and AMD later introduced new virtualization technologies, a number of new instructions and — most importantly — a new privilege level. The hypervisor can now run at "Ring -1," so that the guest operating system can operate at Ring 0.

Hardware virtualization leverages the virtualization features built into the latest generations of Intel and AMD CPUs. These technologies, called Intel VT and AMD-V respectively, provide the enhancements needed to run unmodified virtual machines without the overhead of full CPU virtualization by emulation.
These new processors include an additional privilege mode, below Ring 0, in which the hypervisor can operate, essentially leaving Ring 0 free for unmodified guest operating systems. With hardware-assisted virtualization, the VMM can efficiently virtualize the entire x86 instruction set using the classical trap-and-emulate model, handling the sensitive instructions in hardware rather than in software. With hypervisors that support this technology, guest OSes access the CPU at Ring 0, the same way they would when operating on a physical host. This makes it possible to virtualize guest OSes without any changes.
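On Linux, the presence of these CPU extensions is commonly checked through the feature flags in /proc/cpuinfo, where Intel VT-x appears as the flag `vmx` and AMD-V as `svm`. A small parser is sketched below; the sample string is fabricated for illustration, since the file's exact contents vary by machine.

```python
# Detect hardware virtualization support from CPU feature flags.
# On Linux, /proc/cpuinfo lists "vmx" for Intel VT-x and "svm" for AMD-V.

def has_hw_virt(cpuinfo_text: str) -> bool:
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags or "svm" in flags:
                return True
    return False

# Fabricated sample; a real file would be much longer.
sample = "processor : 0\nflags : fpu vme de pse tsc msr pae vmx sse2\n"
print(has_hw_virt(sample))   # True

# On a real Linux machine one would read the file itself:
# with open("/proc/cpuinfo") as f:
#     print(has_hw_virt(f.read()))
```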
FIGURE 3.10 New level of privilege in x86 architecture

3.7.1 Advantages of Hardware-Assisted Virtualization:
Hardware-assisted virtualization changes how the operating system accesses the hardware. Operating systems on x86 expect direct access to system resources. With software virtualization, the VMM emulates the necessary hardware for the operating system; with hardware-assisted virtualization, the operating system gets direct access to resources without emulation or modification, and this improves overall performance.

This implies that OS kernels need not be tweaked and can run as is (unlike in paravirtualization). At the same time, the hypervisor does not have to perform the inefficient binary translation of the sensitive instructions. Thus, hardware-assisted virtualization not only satisfies the Popek and Goldberg criteria (for full virtualization) but also improves efficiency, because the sensitive instructions are now trapped and emulated directly in the hardware.

3.8 FULL VIRTUALIZATION

Full virtualization is a technique used to implement a virtual machine environment that simulates the underlying hardware completely. Any software that can
run on the physical hardware can be run in a VM in this type of environment, and any OS supported by the underlying hardware can be run in each VM. Users can simultaneously run several different guest OSes. In full virtualization, the VM simulates enough hardware for an unmodified guest OS to run in isolation. This is particularly helpful in a number of situations; for example, in OS development, experimental new code can run in a separate VM alongside older versions. The hypervisor delivers to every VM all the services of the physical system, including a virtual BIOS, virtual devices and virtualized memory management. The guest OS is completely decoupled from the underlying hardware by the virtualization layer.

Full virtualization is achieved through a combination of binary translation and direct execution. With full-virtualization hypervisors, the physical CPU executes nonsensitive instructions at native speed; OS instructions are translated and cached for future use, and user-level instructions run unchanged at native speed. Full virtualization offers the best isolation and security for VMs and makes migration and portability easier, as the same guest OS instance can run on virtualized or native hardware. The concept of full virtualization is shown in Figure 3.11.
FIGURE 3.11 Full Virtualization

3.9 PARAVIRTUALIZATION

Paravirtualization is another approach to server virtualization in which, rather than imitating a complete hardware environment, a thin layer ensures that all guest systems share the system resources and cooperate smoothly. "Para" is an English affix of Greek origin meaning "beside," "with," or "alongside."
FIGURE 3.12 Paravirtualization

Under paravirtualization, the guest operating system kernel is altered to run on the hypervisor. This typically involves replacing privileged operations that would run in Ring 0 of the CPU with calls to the hypervisor (called hypercalls). The hypervisor in turn performs the task on behalf of the guest kernel and provides hypercall interfaces for other critical kernel operations such as memory management, interrupt handling and timekeeping.

Paravirtualization addresses many of the problems of virtualization by letting the guest operating systems work with the underlying hardware more directly, improving the communication between the guest OS and the hypervisor. Because it involves OS modifications, paravirtualization is also sometimes called OS-assisted virtualization.

Paravirtualization, in which the guest OS "knows" that it is virtualized, differs from full virtualization, in which the unmodified OS does not know that it is virtualized and sensitive OS calls are trapped by binary translation.
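The hypercall mechanism can be pictured with a small toy model. This is only a sketch: the class names, the "update_page_table" operation, and the interface below are invented for illustration and do not reflect the ABI of Xen or any real hypervisor.

```python
# Toy sketch of paravirtualization: the guest kernel is modified so that
# privileged operations become explicit hypercalls into the hypervisor,
# which performs the work on the guest's behalf.

class Hypervisor:
    def __init__(self):
        self.guest_page_tables = {}

    def hypercall(self, guest_id, op, *args):
        # The hypervisor validates and performs the privileged operation.
        if op == "update_page_table":
            virt, phys = args
            self.guest_page_tables.setdefault(guest_id, {})[virt] = phys
            return "ok"
        raise ValueError(f"unknown hypercall: {op}")

class ParavirtGuestKernel:
    """A guest kernel whose privileged code paths call the hypervisor."""
    def __init__(self, guest_id, hypervisor):
        self.guest_id = guest_id
        self.hv = hypervisor

    def map_page(self, virt, phys):
        # Instead of writing the page tables directly (a Ring 0 operation),
        # the modified kernel issues a hypercall.
        return self.hv.hypercall(self.guest_id, "update_page_table", virt, phys)

hv = Hypervisor()
guest = ParavirtGuestKernel("dom1", hv)
print(guest.map_page(0x1000, 0x8000))        # ok
print(hv.guest_page_tables["dom1"][0x1000])  # 32768
```

The point of the model is the division of labor: the guest kernel knows it is virtualized and asks, while the hypervisor alone touches the privileged state.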
FIGURE 3.13 Hypercalls to the hypervisor in paravirtualization

Paravirtualization Advantages:
This approach has two advantages.

First, the guest kernel's ability to communicate with the hypervisor directly leads to higher performance. Recall that full virtualization inserts a complete hardware-emulation layer between the guest OS and the physical hardware. The thin software layer of paravirtualization instead acts more like an air traffic controller: it gives one guest OS access to the physical hardware resources while stopping all the other guest OSes from accessing the same resources at the same time. The value proposition of paravirtualization is lower virtualization overhead, but the performance advantage over full virtualization depends on the workload; in general, the method is much more efficient than conventional hardware-emulation virtualization.

The second advantage of paravirtualization over full virtualization is that it does not confine you to the device drivers included in the virtualization software. Instead, it uses the device drivers of one of the guest operating systems, known as the privileged guest. Without getting too deep into the architecture here, suffice it to say that this is an advantage because it gives organizations the opportunity to benefit from all the hardware capabilities of the server, instead of being limited to the hardware for which drivers are available in the virtualization software.
Paravirtualization Limitations:
In paravirtualization, the guest operating systems must be altered to interact with the paravirtualization interfaces. This usually limits support to open-source operating systems such as Linux, which can be freely modified, and to proprietary operating systems whose owners agree to provide code for a specific hypervisor. Since paravirtualization cannot support unmodified OSes (e.g., the Windows family), its compatibility and portability are poor.

Paravirtualization can also introduce major support and maintenance issues in production, because deep amendments to the OS kernel are needed.

3.9.1.2 Partial virtualization:
In computer science, partial virtualization is a virtualization technique used to implement a virtual machine environment: one providing a "partial simulation of the underlying hardware." Most, though not all, of the hardware features are simulated, resulting in virtual machines that can run some or all software without modification. In general, this means that entire operating systems cannot run in the virtual machine, but many applications can. This is what distinguishes it from full virtualization.

The key to partial virtualization is "address space virtualization," in which each virtual machine is given a distinct address space. This capability requires address-relocation hardware and has appeared in several practical examples of partial virtualization.

Partial virtualization was a major historical milestone on the path to full virtualization. It was used in the first-generation time-sharing system CTSS and in the experimental IBM M44/44X paging system. The term could also describe any operating system that provides separate address spaces for individual users or processes, including many that would not qualify as virtual machine systems today.
The experience gained with partial virtualization, and its limitations, led to the first full virtualization systems.

Partial virtualization is much easier to implement than full virtualization. It has often provided useful, robust virtual machines capable of supporting important applications. Its drawback appears in situations where backward compatibility or portability is needed (in contrast with full virtualization): when certain hardware features are not simulated, any software using those features fails. In addition, it can be difficult to predict precisely which features a particular application will use.

Partial virtualization has proven extremely successful for sharing computer resources among multiple users.
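The "address space virtualization" at the heart of partial virtualization can be pictured with the relocation-register scheme used by early systems: each VM's logical addresses are translated by adding a base and checking a bound. The model below is deliberately simplified (fixed base/limit pairs, invented values).

```python
# Simplified model of address-space virtualization: each virtual machine
# gets its own address space via a (base, limit) relocation register pair.

class AddressSpace:
    def __init__(self, base, limit):
        self.base = base      # start of this VM's region in physical memory
        self.limit = limit    # size of the region

    def translate(self, logical):
        if not 0 <= logical < self.limit:
            raise MemoryError("address outside this VM's space")
        return self.base + logical

vm1 = AddressSpace(base=0x0000, limit=0x4000)
vm2 = AddressSpace(base=0x4000, limit=0x4000)

# The same logical address maps to disjoint physical locations per VM.
print(hex(vm1.translate(0x100)))   # 0x100
print(hex(vm2.translate(0x100)))   # 0x4100
```

Because the two machines can never generate each other's physical addresses, each user or process behaves as if it had a machine of its own — the essence of the CTSS and M44/44X approach.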
3.9.2 Operating system-level virtualization:
Operating system-level virtualization (OS virtualization) is a server virtualization technology in which the operating system is altered so that different users can run different applications simultaneously on a single computer. Although they operate on the same computer, the virtual operating systems run independently and do not interfere with each other. The standard OS is altered and adapted to operate independent systems. The virtual system executes the user's commands, so various applications can run on the machine simultaneously, and each user request is processed individually. An advantage of operating system-level virtualization is that applications remain available, with minimal impact, even during system upgrades and security patches: vital applications can be moved onto other virtual operating systems so that performance can continue.

This kind of server virtualization uses a technique in which the kernel of an operating system allows several isolated user-space instances. The instances run on top of the host operating system and include a set of libraries with which applications interact, giving them the illusion of running on a machine dedicated to their use. The instances are known as containers, virtual private servers or virtual environments.
FIGURE 3.14 Operating system-level virtualization

OS-level virtualization is achieved by having the host system run a single OS kernel and letting that kernel control the guest operating-system functionality. In this shared-kernel virtualization, each virtual guest system has a root file system of its own.
In this form of hosted virtualization, the hypervisor (the container layer) has very limited functionality and depends on the host OS for CPU scheduling and memory management. This method, which uses OS-level virtualization, does not even involve a real hypervisor: a component of the operating system performs all of the hypervisor's tasks.
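The shared-kernel design can be sketched with a toy model: one kernel object serves several containers, each with its own root filesystem and its own process list. This is a conceptual illustration only (the class names and paths are invented); on real Linux systems the isolation is provided by kernel namespaces and cgroups, e.g. via the `unshare` command.

```python
# Toy model of OS-level virtualization: one shared kernel, several isolated
# containers, each with its own root filesystem and process list.

class Container:
    def __init__(self, name, rootfs, kernel):
        self.name = name
        self.rootfs = rootfs      # each guest has its own root file system
        self.kernel = kernel      # ...but all guests share one kernel
        self.processes = []

    def run(self, program):
        # A process is visible only inside its own container.
        self.processes.append(program)

class Kernel:
    """The single host kernel shared by every container."""
    def __init__(self):
        self.containers = {}

    def create_container(self, name, rootfs):
        c = Container(name, rootfs, kernel=self)
        self.containers[name] = c
        return c

host = Kernel()
web = host.create_container("web", rootfs="/srv/containers/web")
db = host.create_container("db", rootfs="/srv/containers/db")
web.run("nginx")
db.run("postgres")

print(web.kernel is db.kernel)     # True: one kernel serves both guests
print(web.processes, db.processes)
```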
FIGURE 3.15 Operating system-level virtualization 2

This means that OS virtualization relies on creating, on a single physical server, isolated containers or partitions — each one a guest application environment using an OS instance that operates independently of the other partitions. This technique installs the virtualization software layer on the operating system; every guest runs on this layer using the same operating system as the host, but each guest has its own resources and runs in complete isolation from the other guests.

It is arguable that this is not virtualization in the strict sense; rather, it is a technique that only enables machine consolidation.

3.10 PROGRAMMING LANGUAGE-LEVEL VIRTUALIZATION

3.10.1 Application-level virtualization:
Application virtualization is a mechanism that tricks a standard application into believing that it interacts directly with the facilities of an operating system when, in reality, it does not.
This requires a virtualization layer inserted between the application and the OS. This layer must run the application's operations virtually without affecting the underlying OS. The virtualization layer substitutes part of the runtime environment normally provided by the OS, transparently diverting file and registry-log changes into a single executable file.

By diverting the application's state into one file rather than many files scattered around the OS, the application runs easily on another device, and applications that were previously incompatible can now run side by side.

Desktop virtualization is often used in conjunction with application virtualization — the separation of the physical desktop environment and its associated application software from the end-user system that accesses it.

Benefits of Application Virtualization:
•Enables legacy applications to run (e.g., applications tied to OS platforms such as Windows 7 or XP, whose development has ended).
•Allows cross-platform operation (e.g., running iOS, Android, macOS and Chrome OS applications).
•Prevents conflicts with other virtualized applications.
•Allows users to run multiple instances of an application — unless they are virtualized, many applications detect other instances and refuse to run new ones.

Limitations of Application Virtualization:
•It is difficult to virtualize all computer programs.
Applications that require a system driver (a form of OS integration) and 16-bit applications that must run in a shared memory space are some examples.
•Anti-virus software and programs that need deep OS integration, such as WindowBlinds or StyleXP, are difficult to virtualize.
•Application virtualization is prone to serious software-licensing pitfalls, particularly because both the application-virtualization software and the virtualized applications must be licensed correctly.
•While application virtualization can fix file-and-registry compatibility issues between old applications and newer operating systems, applications that do not manage the heap correctly will still fail on Windows Vista, because memory is allocated in the same way no matter how virtualized the application is. Therefore, specific application compatibility fixes (shims) may be required even when the program is virtualized.

3.11 OTHER TYPES OF VIRTUALIZATION

Many other forms of virtualization provide an abstract environment to interact with. They include storage, networking and interaction between client and server.
3.11.1 Storage virtualization:
Storage is another component of virtualized computing. Storage virtualization refers to the abstraction of physical storage. While RAID provides this functionality at a basic level, the term storage virtualization typically includes additional concepts such as data migration and caching. Storage virtualization is difficult to describe precisely, because it can encompass a variety of different functionalities. It is typically provided as a function of:
•Hosts, with special device drivers
•Array controllers
•Network switches
•Stand-alone network appliances

Each vendor takes a different approach in this respect. The primary way of classifying storage virtualization is as in-band or out-of-band. In-band (often called symmetric) virtualization sits between the host and the storage device and permits caching. Out-of-band (often called asymmetric) virtualization uses host-based drivers that first look up the metadata (indicating where the file resides) and then let the host access the file directly from its storage location. This method does not permit caching at the virtualization level.

General benefits of storage virtualization include:
•Migration: With most technologies, data can quickly be moved between storage locations without interrupting live access to the virtual partition.
•Utilization: As with server virtualization, the usage of storage devices can be balanced to address over- and under-utilization.
•Management: Many hosts can use storage that is centrally managed through a single physical device.

Some of the disadvantages include:
•Lack of Standards and Interoperability: Storage virtualization is a term, not a standard.
As a result, vendors' products do not interoperate easily.
•Metadata: Because there is a mapping between logical and physical locations, the storage metadata and its management are essential to a reliable, functioning system.
•Backout: Backing a virtualized infrastructure out of the network — mapping from virtual back to physical locations — is often far from trivial.

3.11.2 Network Virtualization:
In the field of computing, network virtualization integrates
hardware, software and network infrastructure with network functionality into a single virtual administrative entity. Network virtualization involves platform virtualization, often coupled with resource virtualization. It is classified as either external — combining many networks into a virtual unit — or internal, which gives software containers network-like functionality on a single system.

Under the internal reading of the phrase, desktop and server virtualization provide networking connectivity between host and guest and among multiple guests; on the server side, virtual switches are recognized as part of the virtualization stack. Nonetheless, the external concept of network virtualization is probably the most used version. Virtual private networks (VPNs) have been standard components in the networking toolbox for years, with most enterprises supporting them. Another widely used instance of network virtualization is the virtual LAN (VLAN). With network developments such as 10 Gigabit Ethernet, networks no longer need to be organized solely along geographical lines.

General benefits of network virtualization include:
Customization of Access: Administrators can easily customize access and network options, including bandwidth throttling and quality of service.
Consolidation: Physical networks can be merged into one virtual network, simplifying overall management.

Like server virtualization, network virtualization adds complexity, introduces performance overhead and requires administrators to have a greater degree of skill.

3.11.3 Desktop virtualization:
Desktop virtualization is a software paradigm that borrows the conventional thin-client model but is designed to give administrators and end users the best of both worlds: hosting and central management of virtual machines in the data center while giving end users a full PC desktop experience.

Hosted application virtualization is similar to hosted desktop virtualization, which extends
the user experience to the whole desktop. Microsoft's Terminal Services, Citrix's XenDesktop and VMware's VDI are all commercial products.

Advantages of desktop virtualization cover the majority of those of application virtualization, as well as:
•High Availability: Downtime can be reduced with network replication and fault-tolerant configurations.
•Extended Refresh Cycles: Larger-capacity servers and lighter demands on client PCs can extend the clients' lifespan.
•Multiple Desktops: Users can access multiple desktops, suited to different tasks, from the same client PC.

The drawbacks of desktop virtualization are similar to those of server virtualization. An additional drawback is that users must be connected to the network to access their virtual desktops. This is problematic for offline work and also increases network demand in the workplace.

3.11.4 Application server virtualization:
Application server virtualization abstracts a collection of application servers that provide the same services as a single virtual application server, by means of load-balancing techniques and a high-availability architecture for the hosted services. This is a different kind of virtualization: its aim is to make service delivery more resilient rather than to emulate another environment.

3.12 VIRTUALIZATION AND CLOUD COMPUTING

Cloud computing, which is flexible and scalable and often reduces the cost and complexity of running applications, is today one of the most talked-about and exciting technologies. Virtualization is the primary enabling technology of cloud computing, and a core component of it. Based on virtualization, cloud workloads can be quickly deployed and scaled through the rapid provisioning of virtual and physical machines. Clouds are seen as pools of virtualized resources that are easily used and accessible. There are three cloud service models: Software as a Service, Platform as a Service and Infrastructure as a Service. In Software as a Service, consumers pay for using applications rather than owning them. Platform as a Service gives users a platform on which to create and configure their software. Infrastructure as a Service is a self-managed model for controlling and monitoring remote infrastructure; the cloud provider administers the networking, storage and computing services.
Cloud computing basically offers access to the resources required to carry out various activities as user demand grows. The idea behind cloud computing is to let businesses increase the performance, resource utilization and flexibility of their computing hardware. Virtualization is the most relevant technology here: it plays a key role in cloud computing because it allows the degree of customization, protection, isolation and manageability necessary for on-demand service delivery.

The consolidation method outlined here assigns VMs to physical servers and uses parameters such as the minimum (min) amount of resource required by each VM and the maximum (max) amount of resource permissible for each VM application to decide each VM's resource allotment. These virtualization parameters are important levers for guaranteeing an intelligent distribution of resources among applications (especially when a diverse range of VM applications is available in various
preferences and resources). The consolidation method aims to place combinations of application VMs on each physical server, taking into account the resource allocation for each VM based on the various priorities and affinities of the resources. This granularity in the distribution of resources has a beneficial effect on the efficiency of the consolidated program. This process is often called server consolidation, while the transfer of virtual machine instances is called virtual machine migration.
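The consolidation step described above — placing VMs onto physical servers using each VM's minimum required (min) and maximum permissible (max) resource amounts — can be sketched as a first-fit-decreasing placement. This is a simplified illustration only; real placement algorithms also weigh priorities and affinities, and the VM names and capacities below are invented.

```python
# Simplified server-consolidation sketch: place VMs on physical servers
# using each VM's minimum required resource ("min"); "max" caps what the
# VM may later grow to. Real schedulers also consider priority/affinity.

def consolidate(vms, server_capacity):
    """First-fit-decreasing placement by each VM's min requirement."""
    servers = []  # list of {"free": remaining capacity, "vms": [...]}
    for name, need_min, allow_max in sorted(vms, key=lambda v: -v[1]):
        for s in servers:
            if s["free"] >= need_min:          # fits on an existing server
                s["free"] -= need_min
                s["vms"].append(name)
                break
        else:                                  # open a new physical server
            servers.append({"free": server_capacity - need_min, "vms": [name]})
    return servers

# (name, min required, max permissible) — invented example workload
vms = [("vm-a", 4, 8), ("vm-b", 2, 4), ("vm-c", 3, 6), ("vm-d", 1, 2)]
placement = consolidate(vms, server_capacity=8)
print(len(placement))                          # 2: two servers suffice
print([s["vms"] for s in placement])
```

Packing by min guarantees every VM its baseline; a real system would then let VMs grow toward their max on servers that still have headroom.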
FIGURE 3.16 Virtual machine migration

Live migration moves a running VM from one physical server to another without interrupting the virtual machine's availability to its users. The aim of a VM live migration is to allow maintenance or upgrades of a host without any downtime for the VM during the migration. When the end user does not notice any downtime during the migration cycle, it is often known as seamless live migration.

3.12.1 Pros and cons of virtualization:
With the rise of virtualization, most companies are keen to move their solutions onto virtual machines. Nonetheless, it is important to consider the advantages and disadvantages of virtualization before making any changes. Physical and virtual systems each have benefits and drawbacks, and each has its time and place. Virtual technology has obvious advantages, but there are a few drawbacks. To help decide how it best matches a company's requirements, the pros and cons of virtualization are summarized below.

3.12.1.1 Advantages of virtualization:
Scalability:
A virtual machine is as scalable as any other solution. One of the key benefits of virtualization is that several systems can be consolidated. It offers a flexibility that is not possible with a physical, bare-metal system, and this flexibility has a direct impact on how quickly and efficiently companies can grow. Virtualization allows
data migration, upgrades, and instant performance improvements in new VMs in a short time.

Consolidation of Servers:
Virtual machines can replace physical machines at a ratio of almost 10:1. This eliminates the need for many physical computers while keeping systems running efficiently and to specification. Such consolidation minimizes costs and the physical space required for computer systems.

Improved System Reliability:
Another reason for virtualization is its ability to help avoid system failures — most commonly crashes caused by memory corruption from system drivers and the like. VM systems expose a DMA architecture that improves I/O isolation, offering improved security and reliability.

Virtual Workstations:
Virtualization provides the flexibility to run multiple systems on a single computer and to operate systems remotely. VMs also reduce the hardware and desktop footprint.

3.12.1.2 Disadvantages of virtualization:
Programs that Require Physical Hardware:
Virtualization does not work well for applications that require physical hardware — for example, software using a dongle or other attached hardware. Since the program needs a physical device, virtualization would cause more headaches than staying on a physical system.

Performance Quality Can Decrease:
If you run an application that is heavy on RAM or CPU use, virtualization can cause a performance penalty. A VM operates in layers on its host system, so any operation demanding extreme performance will see reduced performance. The flip side of virtualization is that many applications run on a few physical servers, making it difficult to dedicate a single host to one application or server.

Testing is Critical:
The goal of IT is to achieve your company's objectives, so you should not run your business on an untested software platform. This is particularly true for virtualization, which cannot simply be switched off and on again if it misbehaves.
Virtualizing a system that already works smoothly can still lead to errors and a potential waste of time and expense. Always test before switching to VMs.
Unexpected Expenses:
Initially, it might seem that virtualization saves you money. But it is a process that needs to be planned and done correctly the first time; to give it the necessary attention to time and detail, you may spend more than initially planned. Before taking the plunge, review the tools and management systems you may need to help you transition to virtual machines.

Data Can Be at Risk:
When operating virtual instances on shared hardware resources, your data is hosted on third-party infrastructure. This can expose the data to threats or unauthorized access, which is a problem if your service provider's security solution does not protect your virtual instances and data. This is especially true for storage virtualization.

Quick Scalability is a Challenge:
Scaling in a virtualized environment takes time, so it cannot always be achieved on short notice. With a physical setup, new equipment can be added and scaled fairly directly, even though it entails some initial problems. In a virtualized environment, ensuring all the necessary software, protection, adequate storage and resource availability can be a tedious task, and it takes longer than you might plan because a third-party vendor is involved. Further management problems, and the additional costs of increased usage, also arise.

Unintended Server Sprawl:
Unintentional server sprawl is a big problem for administrators and users alike. Installing a physical server takes time and resources, while a virtual server can be built in a matter of minutes; users therefore build new servers every time rather than reusing the same virtual server, as it allows them to start again.
The server administrator who once had five or six servers now has twenty virtual servers to handle. This can make smooth operations very difficult, and the forced shutdown of some servers can also result in data loss.

3.13 TECHNOLOGY EXAMPLES

3.13.1 Xen: paravirtualization:
Xen is an open-source hypervisor based on paravirtualization, of which it is the most popular application. Xen has since been extended with hardware-assisted virtualization so that it is also compatible with full virtualization. Paravirtualization allows high efficiency in the guest operating system. This is achieved by eliminating the performance loss incurred when executing instructions requiring
considerable supervision, by modifying the portions of the guest operating system concerned with the execution of those instructions. Xen especially supports x86, the most common architecture on commodity and server machines.
FIGURE 3.17 Xen architecture and guest OS management. (Reference from "Mastering Cloud Computing: Foundations and Applications Programming" by Rajkumar Buyya)

The figure above shows the Xen architecture and its mapping onto the classic x86 privilege model. A Xen-based system is managed by the Xen hypervisor, which runs at the highest privilege level and controls the guest operating systems' access to the underlying hardware. Guest operating systems run within domains, which represent virtual machine instances.

A specific control software, with privileged access to the host and the ability to manage all the other guest operating systems, runs in a special domain called Domain 0. This is the only domain loaded when the virtual machine manager boots; it hosts an HTTP server that serves requests to create, configure and terminate virtual machines. This component constitutes the first version of a shared virtual machine manager (VMM) for an IaaS (Infrastructure-as-a-Service) solution — a program essential to cloud computing.

Many x86 implementations support four different security levels, called rings: Ring 0, Ring 1, Ring 2 and Ring 3. Ring 0 is the most privileged level and Ring 3 the least privileged. Almost every OS, except OS/2, uses only two levels: Ring 0 for kernel code and Ring 3 for user programs and non-privileged OS code. This gives Xen the opportunity to implement paravirtualization, with the hypervisor in Ring 0 and the guest OS kernel in Ring 1. It also makes it possible to keep the Application Binary Interface (ABI) unchanged and thus, from an application's point of view, to switch to Xen-virtualized systems transparently.
The structure of the x86 instruction set allows code executed in Ring 3 to jump into Ring 0 (kernel mode). Such an operation is performed at the hardware level and, in a virtualized environment where the guest OS runs in Ring 1, it can generate a TRAP or fail silently, thus preventing the normal operation of the guest OS.

In practice, this condition occurs for a subset of the system calls. The operating system therefore needs to be modified, and the sensitive system calls need to be reimplemented with hypercalls. Hypercalls are special calls exposed by the virtual machine (VM) interface of Xen; using them, the Xen hypervisor is able to catch the execution of all sensitive instructions, manage them, and return control to the guest OS with the help of a supplied handler.

Paravirtualization requires changes to the codebase of the operating system, so not every operating system can be hosted as a guest in a Xen-based environment. This condition holds where hardware-assisted virtualization is unavailable, which forces the hypervisor to run in Ring 0 and the modified guest OS in Ring 1. Xen therefore exhibits some limitations with respect to legacy hardware and legacy operating systems: such systems cannot be modified to run in Ring 1, because their codebase is not accessible, and at the same time the underlying hardware provides no support for running them safely in a ring other than Ring 0. Open source operating systems such as Linux can easily be modified, since their code is publicly available and Xen provides full support for their virtualization, whereas components of the Windows family are generally not supported by Xen unless hardware-assisted virtualization is available.
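The trap-versus-hypercall distinction just described can be made concrete with a toy model. Everything here is invented for illustration — the instruction names, class names, and return strings are not Xen's actual interface; real Xen exposes hypercalls through a dedicated interface, and the guest kernel is modified at the source level to use them:

```python
# Toy model of privilege rings and paravirtualized hypercalls.
# All names (instructions, classes) are illustrative assumptions.

SENSITIVE = {"write_cr3", "hlt", "out"}  # instructions that need Ring 0

class Hypervisor:
    """Runs in Ring 0; the only component allowed to touch hardware."""
    def __init__(self):
        self.log = []

    def hypercall(self, name, guest):
        self.log.append((guest, name))
        return f"hypervisor executed {name} for {guest}"

class ParavirtGuest:
    """Guest kernel modified to run in Ring 1 and issue hypercalls."""
    def __init__(self, name, hv):
        self.name, self.hv = name, hv

    def execute(self, instr):
        if instr in SENSITIVE:
            # Instead of trapping or failing silently, the modified
            # guest explicitly asks the hypervisor to do the work.
            return self.hv.hypercall(instr, self.name)
        return f"{self.name} executed {instr} directly"

hv = Hypervisor()
guest = ParavirtGuest("domU1", hv)
print(guest.execute("add"))        # non-sensitive: runs natively
print(guest.execute("write_cr3")) # sensitive: becomes a hypercall
```

The key design point mirrors the text: non-sensitive instructions never involve the hypervisor, which is where paravirtualization gets its efficiency.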
With the introduction of new releases of these operating systems, and with new x86 hardware supporting virtualization, the issue has been solved.

3.13.2 VMware: full virtualization:
In full virtualization, the underlying hardware is replicated and made available to the guest operating system, which is unaware of the abstraction and requires no modification. VMware's technology is based on full virtualization. VMware implements full virtualization either in the desktop environment, by means of a Type II hypervisor, or in the server environment, by means of a Type I hypervisor. In both cases, full virtualization is made possible by the direct execution of non-sensitive instructions and binary translation of sensitive instructions or hardware traps, which allows an architecture such as x86 to be virtualized.

3.13.3 Full Virtualization and Binary Translation:
VMware is widely used because it makes the essentially non-virtualizable x86 architecture virtualizable and runs unmodified guest operating systems on the

top of its hypervisors. Full virtualization is possible with hardware-assisted virtualization, where the hardware itself supports the hypervisor. Previously, however, x86 guest operating systems could be run unmodified in a virtualized environment only by means of dynamic binary translation.

The reason is that the x86 architecture does not satisfy the first theorem of virtualization, since the set of sensitive instructions is not a subset of the privileged instructions. Because of this particularity, some instructions that should be executed in Ring 0 — as is usual in a virtualized environment where the guest OS runs in Ring 1 — cannot be handled in the normal way. A trap is generated, and the way it is managed differentiates the solutions in which virtualization is implemented for x86. In the case of dynamic binary translation, the trap triggers the translation of the offending instructions into an equivalent sequence of instructions that achieves the same goal without generating exceptions. Moreover, to improve performance, the translated instructions are cached, so the translation is no longer necessary for further occurrences of the same instructions. The figure below illustrates this.

The main advantage of this approach is that guests can run unmodified in a virtualized environment, which is a crucial requirement for operating systems whose source code is not available. Binary translation gives full virtualization its portability. Although translating instructions at runtime introduces an additional overhead that is not present in other approaches such as paravirtualization or hardware-assisted virtualization, binary translation is applied only to a subset of the instruction set, while the other instructions are executed directly on the underlying hardware. This somewhat reduces the performance impact of binary translation.
FIGURE 3.18 A full virtualization reference model.
(Reference from "Mastering Cloud Computing Foundations and Applications Programming" by Rajkumar Buyya)
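The translate-once-then-cache behaviour described above can be sketched in a few lines. This is purely illustrative — the "instructions" and rewrite rules below are invented, not a real x86 translator:

```python
# Toy dynamic binary translator with a translation cache.
# Instruction names and rewrite rules are invustrative assumptions:
# each sensitive instruction maps to a safe replacement sequence.

SENSITIVE = {"popf": ["emulate_flags"], "cli": ["mask_virtual_irq"]}

class BinaryTranslator:
    def __init__(self):
        self.cache = {}        # translated sequences, keyed by instruction
        self.translations = 0  # how many times we actually translated

    def translate(self, instr):
        if instr not in SENSITIVE:
            return [instr]             # safe: execute directly
        if instr not in self.cache:    # translate once, then reuse
            self.translations += 1
            self.cache[instr] = SENSITIVE[instr]
        return self.cache[instr]

bt = BinaryTranslator()
program = ["mov", "popf", "add", "popf", "cli"]
executed = [out for instr in program for out in bt.translate(instr)]
print(executed)         # sensitive instrs replaced by safe sequences
print(bt.translations)  # 2: 'popf' translated once despite two uses
```

Note how the second `popf` hits the cache, which is exactly the optimization the text credits with reducing the runtime overhead of binary translation.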
Advantages of Binary Translation:
•This method of virtualization provides the best isolation and security for virtual machines.
•Many guest operating systems can run concurrently on the same hardware in complete isolation.
•It is the only approach that virtualizes sensitive and privileged instructions without hardware support or operating system assistance.
Disadvantages of Binary Translation:
•It is time consuming at run time.
•It incurs a large performance overhead.
•A code cache stores the most frequently used translated instructions to improve performance, but this increases memory usage and hardware cost.
•On the x86 architecture, the performance of full virtualization is typically 80-95% of that of the host machine.

3.13.4 Virtualization solutions:
VMware is a pioneer in virtualization and cloud infrastructure solutions that enable its more than 350,000 enterprise customers to succeed in the cloud era. VMware abstracts the complexity of the entire data center and, with software-defined data center solutions, enables customers to adopt hybrid cloud computing and the mobile workspace.

3.13.5 End-user (desktop) virtualization:
VMware desktop and application virtualization technologies give IT a streamlined way to deliver, secure, and manage Windows and Linux desktops and applications on premises or in the cloud, reducing costs and ensuring that end users can work anywhere, at any time. VMware Workstation allows users to run different operating systems concurrently on a single Windows or Linux PC. Users can create real Linux and Windows VMs, as well as other desktop, server, and tablet environments, complete with configurable virtual networking and network-condition simulation, for use in code development, solution architecture, application testing, product demonstrations, and more. VMware Fusion gives Mac users the ability to run Windows on a Mac, alongside hundreds of other operating systems, side by side without rebooting.
Fusion is simple enough for home users and powerful enough for IT professionals, developers, and businesses. Besides setting up an isolated computing environment, both products enable a guest operating system to exploit the resources of the host machine (USB devices, folder sharing, and integration with the host operating system's

graphical user interface (GUI)). The figure gives an overview of the architecture of the two products.
FIGURE 3.19 VMware Workstation architecture.
(Reference from "Mastering Cloud Computing Foundations and Applications Programming" by Rajkumar Buyya)

The virtualization environment is created by an application installed in the host operating system, which provides the guest operating systems with full virtualization of the underlying hardware. This is achieved by installing in the host operating system a special driver that provides two main services:
•It deploys a virtual machine manager that can run in privileged mode.
•It provides hooks for the VMware application to process specific I/O requests, eventually forwarding those requests to the host operating system via system calls.
This architecture, also called Hosted Virtual Machine Architecture, can both isolate virtual machine instances within the memory space of a single application and provide decent performance, since the intervention of the VMware application is required only for instructions, such as device I/O, that need binary translation. The CPU and the MMU are managed by the virtual machine manager, which alternates their operational control with the host OS. Virtual machine images are stored in a directory of the host file system, and both VMware Workstation and VMware Fusion allow users to create new images, run them, take snapshots, and undo operations by rolling back to a previous state of the virtual machine.

VMware Player, VMware ACE, and VMware ThinApp are additional products related to the virtualization of end-user computing environments. VMware Player is a reduced version of VMware Workstation that allows the creation and emulation of virtual machines in a Windows or Linux operating environment. VMware ACE is similar to VMware Workstation and is used to create policy-wrapped virtual machines for provisioning the secure deployment of client virtual environments on end-user computers. VMware ThinApp is a solution for application virtualization. It provides an isolated environment for applications in order to avoid conflicts due to versioning and incompatible applications. It detects the changes made to the operating environment by the installation of a specific application and stores them in a package that can be run with VMware ThinApp along with the application binary.

3.13.6 Server virtualization:
GSX Server is a virtualized server system for Windows and Linux developed and distributed by VMware, a subsidiary of EMC Corporation. The product supports remote management, provisioning, and application standardization. The figure shows the architecture of the VMware GSX Server.
FIGURE 3.20 VMware GSX Server architecture.
(Reference from "Mastering Cloud Computing Foundations and Applications Programming" by Rajkumar Buyya)

VMware GSX Server turns computers into a pool of virtual machines. Operating systems and applications are isolated in multiple virtual machines that reside on a single piece of hardware. VMware GSX Server offers broad support for the hardware of the host, from which device support is inherited. The reliable architecture and integration capabilities of the product make Windows and Linux host environments easy to use and manage. A host program for VMware GSX Server lets you deploy, monitor, and manage applications and multiple servers in virtual machines operating remotely.

The architecture is designed mainly for web server virtualization. A daemon process called serverd controls and manages the VMware applications; these applications are then connected to the virtual machine instances through the VMware driver installed in the host operating system. Virtual machine instances are managed by the VMM, as described earlier. User requests for virtual machine management and provisioning are routed from the web server through serverd and then to the VMM.
The hypervisor-based approach is exemplified by VMware ESX Server and its evolved edition, VMware ESXi Server. Both can be installed on bare-metal servers and provide services for virtual machine management. The two products offer the same services but differ in their internal architecture, most notably in the organization of the hypervisor kernel. VMware ESX embeds a modified version of the Linux operating system, which provides access to the hypervisor through a service console. VMware ESXi implements an extremely thin OS layer and replaces the service console with interfaces and agents for remote management, thereby drastically reducing the code size and memory footprint of the hypervisor. The figure shows the architecture of VMware ESXi.
FIGURE 3.21 VMware ESXi server architecture.
(Reference from "Mastering Cloud Computing Foundations and Applications Programming" by Rajkumar Buyya)

3.14 MICROSOFT HYPER-V

Hyper-V is Microsoft's hypervisor, virtualization software that can create and host enterprise virtual machines on x86-64 systems such as desktops and servers. A hypervisor is, in essence, software that permits several virtual servers (guest machines) to run on one physical (host) server. A Hyper-V server can be configured to expose individual virtual machines to one or more networks. Hyper-V was first released with Windows Server 2008.

3.14.1 Architecture:
Hyper-V implements the isolation of virtual machines in terms of partitions. A partition is a logical unit of isolation, supported by the hypervisor, in which each guest operating system executes. A hypervisor instance must have at least one parent partition running a supported version of Windows Server (2008 and later). The virtualization software runs in the parent partition and has direct access to the hardware devices. The parent partition creates child partitions, which host the guest operating systems; it does so using the hypercall API, the application programming interface exposed by Hyper-V.
A child partition has neither direct access to the physical processor nor responsibility for its real interrupts. Instead, it has a virtual view of the processor and runs in a guest virtual address space which, depending on the configuration of the hypervisor, may not necessarily be the entire virtual address space. Depending on the VM configuration, Hyper-V may expose only a subset of the processors to each partition. The hypervisor handles the interrupts to the processor and redirects them to the respective partition using a logical Synthetic Interrupt Controller (SynIC).

Child partitions have a virtual view of the resources of the virtual machines rather than direct access to the hardware resources. Each request to a virtual device is forwarded through the VMBus to the devices in the parent partition, which handle the requests; the response is also returned through the VMBus. The VMBus is a logical channel that enables communication between partitions. If the devices in the parent partition are themselves virtual devices, the request is routed further until it reaches a parent partition where the physical devices are accessed. Parent partitions run a Virtualization Service Provider (VSP), which connects to the VMBus and handles device access requests from the child partitions. The virtual devices of a child partition internally run a Virtualization Service Client (VSC), which redirects requests to the VSPs in the parent partition via the VMBus. This whole process is transparent to the guest OS.
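The VSC-to-VMBus-to-VSP request path described above can be sketched as a toy message-passing model. The class and method names below echo the text but are invented for illustration; they are not Hyper-V's actual interfaces:

```python
# Toy model of Hyper-V's partition I/O path:
# child partition (VSC) -> VMBus channel -> parent partition (VSP) -> device.

class VMBus:
    """Logical channel between a child partition and the parent partition."""
    def __init__(self, vsp):
        self.vsp = vsp

    def send(self, request):
        # The response travels back over the same channel.
        return self.vsp.handle(request)

class ParentPartition:
    """Runs the Virtualization Service Provider (VSP) with device access."""
    def handle(self, request):
        return f"device completed: {request}"

class ChildPartition:
    """Runs the Virtualization Service Client (VSC); no hardware access."""
    def __init__(self, bus):
        self.bus = bus

    def read_disk(self, block):
        # The guest OS sees an ordinary disk read; forwarding is transparent.
        return self.bus.send(f"read block {block}")

parent = ParentPartition()
child = ChildPartition(VMBus(parent))
print(child.read_disk(42))
```

The point of the structure is that the child never touches the device object directly: every request is funneled through the bus, which is exactly the isolation property the partition design is meant to enforce.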
FIGURE 3.22 Microsoft Hyper-V architecture.
(Reference from "Mastering Cloud Computing Foundations and Applications Programming" by Rajkumar Buyya)

SUMMARY

Virtualization is an essential enabler for a range of technologies and concepts. The common root of all forms of virtualization is the
ability to expose, by means of some kind of emulation or abstraction layer, a given runtime environment, whether it is a piece of software, a storage facility, a network connection, or a remote desktop. All these concepts play a fundamental role in building cloud services and infrastructure, in which hardware, IT infrastructure, applications, and services are delivered on demand through the Internet or, more generally, via a network connection.

UNIT END QUESTIONS
1. Define virtualization. What are the advantages of virtualization?
2. What are the characteristics of virtualized environments?
3. Describe the classification or taxonomy of virtualization at different levels.
4. Discuss the machine reference model of execution virtualization.
5. What are the techniques of hardware virtualization?
6. List and discuss various virtualization types.
7. What are the benefits of virtualization in the context of cloud computing?
8. What are the disadvantages of virtualization?
9. What is Xen? Discuss its elements for virtualization.
10. Discuss the reference model of full virtualization.
11. Discuss the reference model of paravirtualization.
12. Discuss the architecture of Hyper-V and its use in cloud computing.

REFERENCE FOR FURTHER READING
•Mastering Cloud Computing: Foundations and Applications Programming, Rajkumar Buyya, Christian Vecchiola, S. Thamarai Selvi, MK Publications, ISBN: 978-0-12-411454-8
•Cloud Computing: Concepts, Technology & Architecture, Thomas Erl, Zaigham Mahmood, and Ricardo Puttini, The Prentice Hall Service Technology Series, ISBN-13: 978-0133387520
•Distributed and Cloud Computing: From Parallel Processing to the Internet of Things, 1st Edition, Kai Hwang, Jack Dongarra, Geoffrey Fox, ISBN-13: 978-9381269237

*****

UNIT II

4
CLOUD COMPUTING ARCHITECTURE

Unit Structure
4.0 Objective
4.1 Introduction to Cloud Computing Architecture
4.1.1 Architecture
4.2 Fundamental concepts and models
4.3 Roles and Boundaries
4.3.1 Cloud Provider
4.3.2 Cloud Consumer
4.3.3 Cloud Service Owner
4.3.4 Cloud Resource Administrator
4.4 Boundaries
4.5 Cloud Characteristics
4.5.1 On-Demand Usage
4.5.2 Ubiquitous Access
4.5.3 Multitenancy
4.5.4 Elasticity
4.5.5 Measured Usage
4.5.6 Resiliency
4.6 Cloud Delivery models
4.6.1 Infrastructure-as-a-Service (IaaS)
4.6.2 Platform-as-a-Service (PaaS)
4.6.3 Software-as-a-Service (SaaS)
4.7 Cloud Deployment models
4.7.1 Public Clouds
4.7.2 Community Clouds
4.7.3 Private Clouds
4.7.4 Hybrid Clouds
4.8 Economics of the cloud
4.9 Open challenges
4.10 Unit End Questions
4.11 References

4.0 OBJECTIVE

After going through this chapter, you will be able to understand:
•Cloud Computing Architecture
•Introduction

•Fundamental concepts and models
•Roles and boundaries
•Cloud Characteristics
•Cloud Delivery models
•Cloud Deployment models
•Economics of the cloud
•Open challenges

4.1 INTRODUCTION TO CLOUD COMPUTING ARCHITECTURE

Cloud computing supports any IT service that can be consumed as a utility and delivered through a network, most likely the Internet. Such a characterization includes quite different aspects: infrastructure, development platforms, applications, and services.

4.1.1 Architecture:
It is possible to organize all the concrete realizations of cloud computing into a layered view covering the entire stack (see Figure 4.1), from hardware appliances to software systems. Cloud resources are harnessed to supply the "computing horsepower" required for providing services. Often, this layer is implemented using a datacenter in which hundreds or thousands of nodes are stacked together. Cloud infrastructure can be heterogeneous in nature, because a variety of resources, such as clusters and even networked PCs, can be used to build it. Moreover, database systems and other storage services can also be part of the infrastructure.
Figure 4.1 The cloud computing architecture.
(Reference: Cloud Computing (Concepts, Technology & Architecture) by Thomas Erl, Zaigham Mahmood, and Ricardo Puttini)
The physical infrastructure is managed by the core middleware, whose objectives are to provide a suitable runtime environment for applications and to make the best use of the resources. At the bottom of the stack, virtualization technologies are used to guarantee runtime environment customization, application isolation, sandboxing, and quality of service. Hardware virtualization is most commonly used at this level. Hypervisors manage the pool of resources and expose the distributed infrastructure as a collection of virtual machines. With virtual machine technology it is possible to finely partition hardware resources such as CPU and memory and to virtualize specific devices, thus meeting the requirements of users and applications. This solution is generally paired with storage and network virtualization strategies, which allow the infrastructure to be completely virtualized and controlled. Depending on the specific service offered to end users, other virtualization techniques can be used; for example, programming-level virtualization helps create a portable runtime environment in which applications can be run and controlled. This scenario generally implies that applications hosted in the cloud are developed with a specific technology or programming language, such as Java, .NET, or Python. In this case the user does not have to build the system from bare metal. Infrastructure management is the key function of core middleware, which supports capabilities such as negotiation of the quality of service, admission control, execution management and monitoring, accounting, and billing.

4.2 FUNDAMENTAL CONCEPTS AND MODELS

The combination of cloud hosting platforms and resources is generally classified as an Infrastructure-as-a-Service (IaaS) solution.
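Returning to the partitioning role the hypervisor plays in the core middleware described above, the idea can be illustrated with a toy allocator. All names here are invented, and real hypervisors schedule resources far more dynamically; this only shows the basic admission-control arithmetic of carving a fixed pool of CPU and memory into virtual machines:

```python
# Toy resource pool: a hypervisor-like allocator carving CPU and memory
# into virtual machines. Illustrative sketch, not a real scheduler.

class ResourcePool:
    def __init__(self, cpus, memory_gb):
        self.free_cpus, self.free_mem = cpus, memory_gb
        self.vms = {}

    def allocate(self, name, cpus, memory_gb):
        # Admission control: refuse requests beyond remaining capacity.
        if cpus > self.free_cpus or memory_gb > self.free_mem:
            return False
        self.free_cpus -= cpus
        self.free_mem -= memory_gb
        self.vms[name] = (cpus, memory_gb)
        return True

pool = ResourcePool(cpus=8, memory_gb=32)
print(pool.allocate("vm1", 4, 16))  # True
print(pool.allocate("vm2", 4, 16))  # True
print(pool.allocate("vm3", 1, 1))   # False: pool exhausted
```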
We can organize the different examples of IaaS into two categories: some of them provide both the management layer and the physical infrastructure; others provide only the management layer (IaaS (M)). In this second case, the management layer is often integrated with other IaaS solutions that provide physical infrastructure, and it adds value to them.

IaaS solutions are suitable for designing the system infrastructure but provide limited services for building applications. Such services are provided by cloud programming environments and tools, which form a new layer for offering users a development platform for applications. The range of tools includes Web-based interfaces, command-line tools, and frameworks for concurrent and distributed programming. In this scenario, users develop their applications specifically for the cloud by using the API exposed at the user-level middleware. For this reason, this approach is also known as Platform-as-a-Service (PaaS), because the service offered to the user is a development platform rather than an infrastructure. PaaS

solutions in most cases also include the infrastructure, which is bundled as part of the service provided to users. In the case of Pure PaaS, only the user-level middleware is offered, and it has to be complemented with a virtual or physical infrastructure.

The top layer of the reference model depicted in Figure 4.1 covers services delivered at the application level. These are commonly referred to as Software-as-a-Service (SaaS). In most cases these are Web-based applications that rely on the cloud to provide service to end users. The horsepower of the cloud provided by IaaS and PaaS solutions allows independent software vendors to deliver their application services over the Internet. Other applications belonging to this layer are those that strongly leverage the Internet for their core functionality and rely on the cloud to sustain a larger number of users; this is the case of gaming portals and, in general, social networking websites.

Table 4.1 summarizes the characteristics of the three main categories used to classify cloud computing solutions. In the following section we briefly discuss these characteristics, along with some references to practical implementations.

Table 4.1 Cloud computing services classification
Category | Characteristics | Product Type | Vendors and Products
SaaS | Consumers are provided with applications that are accessible anytime and from anywhere | Web applications and services (Web 2.0) | SalesForce.com (CRM), Clarizen.com (project management), Google Apps
PaaS | Consumers are provided with a platform for developing applications hosted in the cloud | Programming APIs and frameworks; deployment systems | Google AppEngine, Microsoft Azure, Manjrasoft Aneka, DataSynapse
IaaS/HaaS | Consumers are provided with virtualized hardware and storage on top of which they can build their infrastructure | Virtual machine management infrastructure, storage management, network management | Amazon EC2 and S3, GoGrid, Nirvanix
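The taxonomy in Table 4.1 can be encoded as a small lookup structure. The offering names come from the table; the data model and helper function are invented for this example:

```python
# Illustrative encoding of Table 4.1's service taxonomy.
from dataclasses import dataclass

@dataclass(frozen=True)
class Offering:
    name: str
    category: str  # "SaaS", "PaaS", or "IaaS"

CATALOG = [
    Offering("SalesForce.com", "SaaS"),
    Offering("Google Apps", "SaaS"),
    Offering("Google AppEngine", "PaaS"),
    Offering("Microsoft Azure", "PaaS"),
    Offering("Amazon EC2", "IaaS"),
    Offering("GoGrid", "IaaS"),
]

def by_category(category):
    """Return the names of all offerings in a given service category."""
    return [o.name for o in CATALOG if o.category == category]

print(by_category("PaaS"))  # ['Google AppEngine', 'Microsoft Azure']
```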

4.3 ROLES AND BOUNDARIES

Organizations and humans can assume different types of predefined roles depending on how they relate to, and interact with, a cloud and its hosted IT resources. Each of the upcoming roles participates in, and carries out responsibilities in relation to, cloud-based activity. The following sections describe these roles and identify their main interactions.

4.3.1 Cloud Provider:
The organization that provides cloud-based IT resources is the cloud provider. When assuming the role of cloud provider, an organization is responsible for making cloud services available to cloud consumers, as per agreed-upon SLA guarantees. The cloud provider is further tasked with any required management and administrative duties to ensure the ongoing operation of the overall cloud infrastructure. Cloud providers normally own the IT resources that are made available for lease by cloud consumers; however, some cloud providers also "resell" IT resources leased from other cloud providers.

4.3.2 Cloud Consumer:
A cloud consumer is an organization (or a human) that has a formal contract or arrangement with a cloud provider to use IT resources made available by the cloud provider. Specifically, the cloud consumer uses a cloud service consumer to access a cloud service (Figure).
Figure 4.2 A cloud consumer (Organization A) interacts with a cloud service from a cloud provider (that owns Cloud A). Within Organization A, the cloud service consumer is being used to access the cloud service.
(Reference: Cloud Computing (Concepts, Technology & Architecture) by Thomas Erl, Zaigham Mahmood, and Ricardo Puttini)

4.3.3 Cloud Service Owner:
The person or organization that legally owns a cloud service is called a cloud service owner. The cloud service owner can be the cloud consumer, or the cloud provider that owns the cloud within which the cloud service resides. For example, either the cloud consumer of Cloud X or the cloud provider
of Cloud X could own Cloud Service A (Figures 4.2 and 4.3).
Figure 4.2 A cloud consumer can be a cloud service owner when it deploys its own service in a cloud.
(Reference: Cloud Computing (Concepts, Technology & Architecture) by Thomas Erl, Zaigham Mahmood, and Ricardo Puttini)
Figure 4.3 A cloud provider becomes a cloud service owner if it deploys its own cloud service, typically for other cloud consumers to use.
(Reference: Cloud Computing (Concepts, Technology & Architecture) by Thomas Erl, Zaigham Mahmood, and Ricardo Puttini)

4.3.4 Cloud Resource Administrator:
A cloud resource administrator is a person or organization responsible for administering a cloud-based IT resource (including cloud services). The cloud resource administrator can be (or belong to) the cloud consumer or cloud provider of the cloud within which the cloud service resides. Alternatively, it can be (or belong to) a third-party organization contracted to administer the cloud-based IT resource. For example, a cloud service owner can contract a cloud resource administrator to administer a cloud service (Figures 4.4 and 4.5).
Figure 4.4 A cloud resource administrator can be with a cloud consumer organization and administer remotely accessible IT resources that belong to the cloud consumer.
(Reference: Cloud Computing (Concepts, Technology & Architecture) by Thomas Erl, Zaigham Mahmood, and Ricardo Puttini)
Figure 4.5 A cloud resource administrator can be with a cloud provider organization, for which it can administer the cloud provider's internally and externally available IT resources.
(Reference: Cloud Computing (Concepts, Technology & Architecture) by Thomas Erl, Zaigham Mahmood, and Ricardo Puttini)
The reason a cloud resource administrator is not referred to as a "cloud service administrator" is that this role may be responsible for administering cloud-based IT resources that do not exist as cloud services. For example, if the cloud resource administrator belongs to (or is contracted by) the cloud provider, IT resources not made remotely accessible may be administered by this role (and these types of IT resources are not classified as cloud services).

4.4 BOUNDARIES

4.4.1 Organizational Boundary:
An organizational boundary represents the physical perimeter that surrounds a
set of IT resources that are owned and governed by an organization. The organizational boundary does not represent the boundary of an actual organization, only an organizational set of IT assets and IT resources. Clouds, too, have an organizational boundary (Figure 4.8).
Fig 4.8 Organizational boundaries of a cloud consumer (left), and a cloud provider (right), represented by a broken line notation.
(Reference: Cloud Computing (Concepts, Technology & Architecture) by Thomas Erl, Zaigham Mahmood, and Ricardo Puttini)

4.4.2 Trust Boundary:
When an organization assumes the role of cloud consumer to access cloud-based IT resources, it needs to extend its trust beyond the physical boundary of the organization to include parts of the cloud environment. A trust boundary is a logical perimeter that typically spans beyond physical boundaries to represent the extent to which IT resources are trusted (Figure 4.9). When analyzing cloud environments, the trust boundary is most frequently associated with the trust issued by the organization acting as the cloud consumer.
Fig 4.9 An extended trust boundary encompasses the organizational boundaries of the cloud provider and the cloud consumer.
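The distinction between the two boundary types can be sketched as simple sets. The resource names below are invented examples; the point is only that the trust boundary is a superset of the consumer's organizational boundary:

```python
# Toy model: organizational boundaries vs. an extended trust boundary.

consumer_boundary = {"on_premise_db", "crm_app"}   # owned by the consumer
provider_boundary = {"cloud_storage", "cloud_vm"}  # owned by the provider

# When the consumer uses cloud services, its trust boundary extends
# beyond its organizational boundary to cover the leased IT resources.
trust_boundary = consumer_boundary | provider_boundary

print("cloud_vm" in consumer_boundary)  # False: not organizationally owned
print("cloud_vm" in trust_boundary)     # True: trusted although not owned
```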
(Reference: Cloud Computing (Concepts, Technology & Architecture) by Thomas Erl, Zaigham Mahmood, and Ricardo Puttini)

4.5 CLOUD CHARACTERISTICS

An IT environment requires a specific set of characteristics to enable the remote provisioning of scalable and measured IT resources in an effective manner. These characteristics need to exist to a meaningful extent for the IT environment to be considered an effective cloud.

The following six specific characteristics are common to the majority of cloud environments:
• on-demand usage
• ubiquitous access
• multitenancy (and resource pooling)
• elasticity
• measured usage
• resiliency

Cloud providers and cloud consumers can assess these characteristics individually and collectively to measure the value offering of a given cloud platform. Although cloud-based services and IT resources will inherit and exhibit individual characteristics to varying extents, usually the greater the degree to which they are supported and utilized, the greater the resulting value proposition.

4.5.1 On-Demand Usage:
A cloud consumer can unilaterally access cloud-based IT resources, giving the cloud consumer the freedom to self-provision these IT resources. Once configured, usage of the self-provisioned IT resources can be automated, requiring no further human involvement by the cloud consumer or cloud provider. This results in an on-demand usage environment. Also known as "on-demand self-service usage," this characteristic enables the service-based and usage-driven features found in mainstream clouds.

4.5.2 Ubiquitous Access:
Ubiquitous access represents the ability for a cloud service to be widely accessible. Establishing ubiquitous access for a cloud service can require support for a range of devices, transport protocols, interfaces, and security technologies. Enabling this level of access generally requires that the cloud service architecture be tailored to the particular needs of different cloud service consumers.
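The automated, usage-driven consumption described in the on-demand usage characteristic can be sketched as a toy usage meter: the consumer self-provisions resources, the platform records what was used and for how long, and billing follows from the records alone. The instance types and hourly rates below are invented for illustration:

```python
# Toy metered-usage billing: charge only for resources actually used
# and only for the time they were provisioned. Rates are illustrative.

RATE_PER_HOUR = {"small_vm": 0.05, "large_vm": 0.20}  # invented prices

class Meter:
    def __init__(self):
        self.records = []  # (instance_type, hours)

    def record(self, instance_type, hours):
        self.records.append((instance_type, hours))

    def bill(self):
        return round(sum(RATE_PER_HOUR[t] * h for t, h in self.records), 2)

m = Meter()
m.record("small_vm", 10)  # 10 hours of a small VM
m.record("large_vm", 2)   # 2 hours of a large VM
print(m.bill())           # 0.05*10 + 0.20*2 = 0.9
```

This also previews the measured usage characteristic discussed below: what is measured is exactly what can be charged for.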

4.5.3 Multitenancy (and Resource Pooling):
The characteristic of a software program that enables an instance of the program to serve multiple consumers (tenants), each of whom is isolated from the others, is referred to as multitenancy. A cloud provider pools its IT resources to serve multiple cloud service consumers by using multitenancy models that frequently rely on the use of virtualization technologies. Through the use of multitenancy technology, IT resources can be dynamically assigned and reassigned according to cloud service consumer demands.

4.5.4 Elasticity:
Elasticity is the automated ability of a cloud to transparently scale IT resources, as required in response to runtime conditions or as predetermined by the cloud consumer or cloud provider. Elasticity is often considered a core justification for the adoption of cloud computing, primarily because it is closely associated with the Reduced Investments and Proportional Costs benefit. Cloud providers with vast IT resources can offer the greatest range of elasticity.

4.5.5 Measured Usage:
The measured usage characteristic represents the ability of a cloud platform to keep track of the usage of its IT resources, primarily by cloud consumers. Based on what is measured, the cloud provider can charge a cloud consumer only for the IT resources actually used and/or for the timeframe during which access to the IT resources was granted. In this context, measured usage is closely related to the on-demand characteristic.

4.5.6 Resiliency:
Resilient computing is a form of failover that distributes redundant implementations of IT resources across physical locations. IT resources can be pre-configured so that if one becomes deficient, processing is automatically handed over to another redundant implementation.
Within cloud computing, the characteristic of resiliency can refer to redundant IT resources within the same cloud (but in different physical locations) or across multiple clouds. Cloud consumers can increase both the reliability and availability of their applications by leveraging the resiliency of cloud-based IT resources.

4.6 CLOUD DELIVERY MODELS

A cloud delivery model represents a specific, pre-packaged combination of IT resources offered by a cloud provider. Three common cloud delivery models have become widely established and

formalized:
• Infrastructure-as-a-Service (IaaS)
• Platform-as-a-Service (PaaS)
• Software-as-a-Service (SaaS)

4.6.1 Infrastructure-as-a-Service (IaaS):
The IaaS delivery model represents a self-contained IT environment comprised of infrastructure-centric IT resources that can be accessed and managed via cloud service-based interfaces and tools. This environment can include hardware, network, connectivity, operating systems, and other "raw" IT resources. In contrast to traditional hosting or outsourcing environments, with IaaS, IT resources are typically virtualized and packaged into bundles that simplify up-front runtime scaling and customization of the infrastructure.

The general purpose of an IaaS environment is to provide cloud consumers with a high level of control and responsibility over its configuration and utilization. The IT resources provided by IaaS are generally not pre-configured, placing the administrative responsibility directly upon the cloud consumer. This model is therefore used by cloud consumers that require a high level of control over the cloud-based environment they intend to create. Sometimes cloud providers will contract IaaS offerings from other cloud providers in order to scale their own cloud environments. The types and brands of the IT resources provided by IaaS products offered by different cloud providers can vary. IT resources available through IaaS environments are generally offered as freshly initialized virtual instances. A central and primary IT resource within a typical IaaS environment is the virtual server. Virtual servers are leased by specifying server hardware requirements, such as processor capacity, memory, and local storage space, as shown in Figure
Fig 4.10 A cloud consumer is using a virtual server within an IaaS environment. Cloud consumers are provided with a range of contractual guarantees by the cloud provider, relating to characteristics such as capacity, performance, and availability.
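The leasing step in the figure, stating processor, memory, and storage requirements and receiving a virtual server in return, can be sketched as follows. The function name, parameters, and returned fields are purely hypothetical; real IaaS APIs and vendor SDKs use their own names and shapes:

```python
import itertools

# Hypothetical in-memory IaaS provisioner: the consumer specifies
# hardware requirements and the provider returns a freshly initialized
# virtual server record that the consumer can then manage.
_ids = itertools.count(1)

def provision_virtual_server(vcpus, memory_gb, storage_gb):
    if vcpus < 1 or memory_gb < 1 or storage_gb < 1:
        raise ValueError("invalid hardware requirements")
    return {
        "id": f"vs-{next(_ids)}",
        "vcpus": vcpus,
        "memory_gb": memory_gb,
        "storage_gb": storage_gb,
        "state": "running",  # delivered as a newly initialized instance
    }

server = provision_virtual_server(vcpus=2, memory_gb=8, storage_gb=100)
print(server["id"], server["state"])
```

Note how the consumer expresses only hardware requirements; everything beneath the virtual server (physical host, hypervisor, network) remains the provider's responsibility.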
(Reference: Cloud Computing (Concepts, Technology & Architecture) by Thomas Erl, Zaigham Mahmood, and Ricardo Puttini)

4.6.2 Platform-as-a-Service (PaaS):
The PaaS delivery model represents a pre-defined "ready-to-use" environment typically comprised of already deployed and configured IT resources. Specifically, PaaS relies on the usage of a ready-made environment that establishes a set of pre-packaged products and tools used to support the entire delivery lifecycle of custom applications.

Common reasons a cloud consumer would use and invest in a PaaS environment include:
• The cloud consumer wants to extend on-premise environments into the cloud for scalability and economic purposes.
• The cloud consumer uses the ready-made environment to entirely substitute an on-premise environment.
• The cloud consumer wants to become a cloud provider and deploys its own cloud services to be made available to other external cloud consumers.

By working within a ready-made platform, the cloud consumer is spared the administrative burden of setting up and maintaining the bare infrastructure IT resources provided via the IaaS model. Conversely, the cloud consumer is granted a lower level of control over the underlying IT resources that host and provision the platform (Figure 4.11).
Fig 4.11 A cloud consumer is accessing a ready-made PaaS environment. The question mark indicates that the cloud consumer is intentionally shielded from the implementation details of the platform.
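In a PaaS arrangement the consumer supplies only application code and the platform supplies the runtime. As a minimal illustration, the WSGI application below is the kind of self-contained unit a Python-based ready-made environment could host; the server, scaling, and configuration around it would be the platform's job, and nothing here is specific to any particular PaaS product:

```python
# Minimal WSGI application: a typical unit of deployment for a
# Python-based platform. The hosting server is provided by the
# platform, not by this code.
def application(environ, start_response):
    body = b"Hello from a platform-hosted app\n"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
```

For local experimentation the same callable can be served with the standard library's `wsgiref.simple_server`; on a platform, deployment replaces that step entirely.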
(Reference: Cloud Computing (Concepts, Technology & Architecture) by Thomas Erl, Zaigham Mahmood, and Ricardo Puttini)

PaaS products are available with different development stacks. For example, Google App Engine offers a Java and Python-based environment.

4.6.3 Software-as-a-Service (SaaS):
A software program positioned as a shared cloud service and made available as a "product" or generic utility represents the typical profile of a SaaS offering. The SaaS delivery model is typically used to make a reusable cloud service widely available (often commercially) to a range of cloud consumers. An entire marketplace exists around SaaS products that can be leased and used for different purposes and via different terms
(Figure 4.12).

Figure 4.12 The cloud service consumer is given access to the cloud service contract, but not to any underlying IT resources or implementation details.
(Reference: Cloud Computing (Concepts, Technology & Architecture) by Thomas Erl, Zaigham Mahmood, and Ricardo Puttini)

A cloud consumer is generally granted very limited administrative control over a SaaS implementation. It is most often provisioned by the cloud provider, but it can be formally owned by whichever entity assumes the cloud service owner role. For example, an organization acting as a cloud consumer while using and working with a PaaS environment can build a cloud service that it decides to deploy in that same environment as a SaaS offering. The same organization then effectively assumes the cloud provider role, because the SaaS-based cloud service is made available to other organizations that act as cloud consumers when using that cloud service.
4.7 CLOUD DEPLOYMENT MODELS

A cloud deployment model represents a specific type of cloud environment, primarily distinguished by ownership, size, and access. There are four common cloud deployment models:
• Public cloud
• Community cloud
• Private cloud
• Hybrid cloud

The following sections describe each.

4.7.1 Public Clouds:
A public cloud is a publicly accessible cloud environment owned by a third-party cloud provider. The IT resources on public clouds are usually provisioned via the previously described cloud delivery models and are generally offered to cloud consumers at a cost or are commercialized via other avenues. The cloud provider is responsible for the creation and on-going maintenance of the public cloud and its IT resources. Many of the scenarios and architectures explored in upcoming chapters involve public clouds and the relationship between the providers and consumers of IT resources via public clouds. Figure 4.13 shows a partial view of the public cloud landscape, highlighting some of the primary vendors in the marketplace.
Figure 4.13 Organizations act as cloud consumers when accessing cloud services and IT resources made available by different cloud providers.
(Reference: Cloud Computing (Concepts, Technology & Architecture) by Thomas Erl, Zaigham Mahmood, and Ricardo Puttini)

4.7.2 Community Clouds:
A community cloud is similar to a public cloud except that its
access is limited to a specific community of cloud consumers. The community cloud may be jointly owned by the community members or by a third-party cloud provider that provisions a community cloud with limited access. The member cloud consumers of the community typically share the responsibility for defining and evolving the community cloud (Figure 4.14).
Fig 4.14
(Reference: Cloud Computing (Concepts, Technology & Architecture) by Thomas Erl, Zaigham Mahmood, and Ricardo Puttini)

4.7.3 Private Clouds:
A private cloud is owned by a single organization. Private clouds enable an organization to use cloud computing technology as a means of centralizing access to IT resources by different parts, locations, or departments of the organization. When a private cloud exists as a controlled environment, the issues described in the Risks and Challenges section of Chapter 3 do not tend to apply.

The use of a private cloud can change how organizational and trust boundaries are defined and applied. The actual administration of a private cloud environment may be carried out by internal or outsourced staff.

With a private cloud, the same organization is technically both the cloud consumer and cloud provider (Figure 4.15). In order to differentiate these roles:
• A separate organizational department typically assumes the responsibility for provisioning the cloud (and therefore assumes the cloud provider role)
• Departments requiring access to the private cloud assume the cloud consumer role.
Figure 4.15 A cloud service consumer within the organization's on-premise environment accesses a cloud service hosted on the same organization's private cloud via a virtual private network.
(Reference: Cloud Computing (Concepts, Technology & Architecture) by Thomas Erl, Zaigham Mahmood, and Ricardo Puttini)

It is important to use the terms "on-premise" and "cloud-based" correctly within the context of a private cloud. Even though the private cloud may physically reside on the organization's premises, the IT resources it hosts are still considered "cloud-based" as long as they are made remotely accessible to cloud consumers. IT resources hosted outside of the private cloud by the departments acting as cloud consumers are therefore considered "on-premise" in relation to the private cloud-based IT resources.

4.7.4 Hybrid Clouds:
A hybrid cloud is a cloud environment comprised of two or more different cloud deployment models. For example, a cloud consumer may choose to deploy cloud services processing sensitive data to a private cloud and other, less sensitive cloud services to a public cloud. The result of this combination is a hybrid deployment model (Figure 4.16).
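The hybrid placement decision just described, sensitive services on the private cloud and less sensitive ones on the public cloud, amounts to a simple routing rule. The sensitivity labels below are illustrative assumptions, not part of any standard:

```python
# Hypothetical placement rule for a hybrid deployment: services that
# handle sensitive data stay on the private cloud; everything else may
# be deployed to the public cloud.
SENSITIVE_LABELS = {"confidential", "regulated"}

def choose_deployment(service_name, data_label):
    target = "private" if data_label in SENSITIVE_LABELS else "public"
    return (service_name, target)

print(choose_deployment("payroll", "confidential"))  # ('payroll', 'private')
print(choose_deployment("marketing-site", "open"))   # ('marketing-site', 'public')
```

Real hybrid architectures add many more criteria (latency, cost, jurisdiction), but the structure of the decision is the same.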
Figure 4.16 An organization employing a hybrid cloud architecture that uses both a private and public cloud.
(Reference: Cloud Computing (Concepts, Technology & Architecture) by Thomas Erl, Zaigham Mahmood, and Ricardo Puttini)

Hybrid deployment architectures can be complex and challenging to create and maintain due to the potential disparity in cloud environments and the fact that management responsibilities are typically split between the private cloud provider organization and the public cloud provider.

4.8 ECONOMICS OF THE CLOUD

The main drivers of cloud computing are economy of scale and simplicity of software delivery and its operation. In fact, the most significant advantage of this phenomenon is financial: the pay-as-you-go model offered by cloud providers. In particular, cloud computing allows:
• Reducing the capital costs associated with the IT infrastructure
• Eliminating the depreciation or lifetime costs associated with IT capital assets
• Replacing software licensing with subscriptions
• Cutting the maintenance and administrative costs of IT resources

A capital cost is the cost occurred in purchasing an asset that is useful in the production of goods or the rendering of
services. Capital costs are one-time costs that are typically paid up front and that contribute over time to generating profit. The IT infrastructure and the software are capital assets because enterprises require them to conduct their business. At present, it does not matter whether the core business of an enterprise is related to IT, because any business will have an IT department that is used to automate many of the activities performed within the enterprise: payroll, customer relationship management, enterprise resource planning, tracking and inventory of products, and others. Hence, IT resources constitute a capital cost for any kind of enterprise. It is good practice to try to keep capital costs low, because they introduce expenses that generate profit only over time; more than that, since they are tied to material things, they are subject to depreciation over the long run, which in the end diminishes the profit of the enterprise because such costs are directly subtracted from the enterprise revenues. In the case of IT capital costs, the depreciation costs are represented by the loss of value of the hardware over time and the aging of software products that need to be replaced because new features are required.

Before cloud computing diffused within the enterprise, the budget spent on IT infrastructure and software constituted a significant expense for medium-sized and large enterprises. Many enterprises own a small or medium-sized datacenter that introduces several operational costs in terms of maintenance, electricity, and cooling. Additional operational costs are incurred in maintaining an IT department and an IT support center. Moreover, other costs are triggered by the acquisition of potentially expensive software.
With cloud computing, these costs are significantly reduced or simply disappear in proportion to its adoption. One of the advantages introduced by the cloud computing model is that it shifts the capital costs previously allocated to the purchase of hardware and software into operational costs generated by renting the infrastructure and paying subscriptions for the use of software. These costs can be better controlled according to the business needs and prosperity of the enterprise. Cloud computing also introduces reductions in administrative and maintenance costs. The amount of cost savings that cloud computing can introduce within an enterprise is related to the specific scenario in which cloud services are used and the way they contribute to generating profit for the enterprise. In the case of a small startup, it is possible to completely leverage the cloud for several aspects, such as:
• IT infrastructure
• Software development
• CRM and ERP

In this case, it is possible to completely eliminate capital costs because there are no initial IT assets.

Things are completely different in the case of enterprises that already own a considerable amount of IT assets. In this case, cloud computing, especially IaaS-based solutions, can help manage unplanned capital costs that are generated by the requirements of the enterprise in the short term. By leveraging cloud computing, these costs become operational costs that last only as long as there is a need for them. For example, IT infrastructure leasing helps more efficiently manage peak loads without incurring capital expenses: as soon as the increased load no longer justifies the use of additional resources, these can be released and the costs associated with them disappear. This is the most commonly adopted model of cloud computing, because many enterprises already have IT facilities. An alternative option is to make a gradual transition toward cloud-based solutions while the capital IT assets get depreciated and need to be replaced. Between these two cases, there is a wide variety of scenarios in which cloud computing may be of help in generating profits for enterprises.
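The peak-load argument can be made concrete with a toy comparison. All figures below are hypothetical, chosen only to show the capital-versus-operational trade-off, not drawn from any real price list:

```python
# Toy CapEx vs. OpEx comparison for absorbing a short peak load.
# Assumptions: owning one extra server costs $3,000 up front; renting
# an equivalent instance costs $0.50/hour; the peak lasts 200 hours
# per year.
CAPEX_PER_SERVER = 3000.0
RENT_PER_HOUR = 0.50
PEAK_HOURS_PER_YEAR = 200

def yearly_rent_cost(hours):
    return RENT_PER_HOUR * hours

def cheaper_option(years):
    """Compare cumulative rental cost against the one-time purchase."""
    rent_total = yearly_rent_cost(PEAK_HOURS_PER_YEAR) * years
    return "rent" if rent_total < CAPEX_PER_SERVER else "buy"

print(yearly_rent_cost(PEAK_HOURS_PER_YEAR))  # 100.0 per year
print(cheaper_option(years=5))                # rent
```

Under these assumed numbers, renting for the peak stays cheaper for decades; the break-even point shifts as utilization grows, which is exactly the trade-off the text describes.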
Regarding the pricing models introduced by cloud computing, we can distinguish three different strategies that are adopted by the providers.

SUMMARY

In this chapter we studied the concept of cloud computing architecture, its fundamental concepts and models, the roles and boundaries involved, the characteristics of a cloud, the types of cloud delivery models, the cloud deployment models, the economics of the cloud, and the open challenges of cloud computing.

REVIEW QUESTIONS

1) Explain the cloud computing architecture.
2) What are the fundamental concepts of cloud computing?
3) Write a short note on models in cloud computing.
4) What are the roles and boundaries of cloud computing?
5) What are the characteristics of cloud computing?
6) What is meant by cloud deployment models?

REFERENCES

Cloud Computing (Concepts, Technology & Architecture) by Thomas Erl, Zaigham Mahmood, and Ricardo Puttini

*****

5
FUNDAMENTAL CLOUD SECURITY AND INDUSTRIAL PLATFORMS AND NEW DEVELOPMENTS

Unit Structure
5.1 Fundamental Cloud Security
5.1.1 Confidentiality
5.1.2 Integrity
5.1.3 Authenticity
5.1.4 Availability
5.1.5 Threat
5.1.6 Vulnerability
5.1.7 Risk
5.1.8 Security Controls
5.1.9 Security Mechanisms
5.1.10 Security Policies
5.2 Basics
5.2.1 Threat Agents
5.2.2 Anonymous Attacker
5.2.3 Malicious Service Agent
5.2.4 Trusted Attacker
5.2.5 Malicious Insider
5.3 Threat agents
5.3.1 Traffic Eavesdropping
5.3.2 Malicious Intermediary
5.3.3 Denial of Service
5.3.4 Insufficient Authorization
5.3.5 Virtualization Attack
5.3.6 Overlapping Trust Boundaries
5.3.7 Risk Management
5.4 Cloud security threats
5.5 Additional considerations
5.5.1 Proposition of AWS
5.5.2 Understating Amazon Web Services
5.5.3 Component and Web Services of AWS
5.5.4 Elastic Cloud Compute

5.6 Industrial Platforms and New Developments
5.7 Amazon Web Services
5.7.1 More on MS Cloud
5.7.2 Azure Virtual Machines
5.7.3 Element of Microsoft Azure
5.7.4 Access Control of MS Cloud
5.8 Google App Engine
5.9 Microsoft Azure
Summary
5.10 Unit End Questions
5.11 References

5.1 FUNDAMENTAL CLOUD SECURITY

Information security is a complex group of procedures, technologies, regulations, and practices that collectively protect the integrity of and access to computer systems and data. IT security measures aim to defend against threats and interference that arise from both malicious intent and unintentional user error.

5.1.1 Confidentiality:
Confidentiality is the characteristic of something being made accessible only to authorized parties (Figure 5.1). Within cloud environments, confidentiality primarily pertains to restricting access to data in transit and storage.
Figure 5.1: The message issued by the cloud consumer to the cloud service is considered confidential only if it is not accessed or read by an unauthorized party.
(Reference: Cloud Computing (Concepts, Technology & Architecture) by Thomas Erl, Zaigham Mahmood, and Ricardo Puttini)

5.1.2 Integrity:
Integrity is the characteristic of not having been altered by an unauthorized party (Figure 5.2). An important issue that concerns
data integrity in the cloud is whether a cloud consumer can be guaranteed that the data it transmits to a cloud service matches the data received by that cloud service. Integrity can extend to how data is stored, processed, and retrieved by cloud services and cloud-based IT resources.
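One common way for a consumer and a service to check that transmitted data arrived unaltered is to compare cryptographic digests of the data. A minimal sketch using only the standard library (the message content is illustrative):

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 digest the consumer computes before sending the data."""
    return hashlib.sha256(data).hexdigest()

sent = b"cloud consumer message"
sent_digest = digest(sent)

# The cloud service recomputes the digest over what it actually
# received; any mismatch means the data lost integrity in transit.
received_ok = b"cloud consumer message"
received_bad = b"cloud consumer messagE"
print(digest(received_ok) == sent_digest)   # True
print(digest(received_bad) == sent_digest)  # False
```

A plain digest detects accidental or unauthorized modification only if the digest itself is conveyed over a trusted channel; protecting it against a deliberate attacker requires a keyed mechanism, discussed under authenticity.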
Figure 5.2: The message issued by the cloud consumer to the cloud service is considered to have integrity if it has not been altered.
(Reference: Cloud Computing (Concepts, Technology & Architecture) by Thomas Erl, Zaigham Mahmood, and Ricardo Puttini)

5.1.3 Authenticity:
Authenticity is the characteristic of something having been provided by an authorized source. This concept encompasses non-repudiation, which is the inability of a party to deny or challenge the authentication of an interaction. Authentication in non-repudiable interactions provides proof that these interactions are uniquely linked to an authorized source. For example, a user may be unable to access a non-repudiable file after its receipt without also generating a record of this access.

5.1.4 Availability:
Availability is the characteristic of being accessible and usable during a specified time period. In typical cloud environments, the availability of cloud services can be a responsibility that is shared by the cloud provider and the cloud carrier. The availability of a cloud-based solution that extends to cloud service consumers is additionally shared by the cloud consumer.

5.1.5 Threat:
A threat is a potential security violation that can challenge defenses in an attempt to breach privacy and/or cause harm. Both manually and automatically instigated threats are designed to exploit known weaknesses, also referred to as vulnerabilities. A threat that is carried out results in an attack.
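The authenticity property discussed above, proof that a message came from an authorized source, is commonly implemented with a keyed message authentication code. The sketch below uses Python's standard-library `hmac`; note that a shared-key MAC proves origin only to the key holders, and unlike a digital signature it does not by itself provide non-repudiation toward third parties. The key and message are illustrative:

```python
import hmac
import hashlib

SHARED_KEY = b"key-known-only-to-authorized-parties"  # illustrative

def sign(message: bytes) -> str:
    """Tag a message with a key only authorized sources hold."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()

def is_authentic(message: bytes, tag: str) -> bool:
    # Constant-time comparison of the received tag with a fresh one.
    return hmac.compare_digest(sign(message), tag)

msg = b"provision 2 virtual servers"
tag = sign(msg)
print(is_authentic(msg, tag))                     # True
print(is_authentic(b"provision 9 servers", tag))  # False
```

A message with a valid tag is accepted as coming from an authorized source; any other origin (or any alteration) invalidates the tag.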
5.1.6 Vulnerability:
A vulnerability is a weakness that can be exploited either because it is protected by insufficient security controls, or because existing security controls are overcome by an attack. IT resource vulnerabilities can have a range of causes, including configuration deficiencies, security policy weaknesses, user errors, hardware or firmware flaws, software bugs, and poor security architecture.

5.1.7 Risk:
Risk is the possibility of loss or harm arising from performing an activity. Risk is typically measured according to its threat level and the number of possible or known vulnerabilities. Two metrics that can be used to determine risk for an IT resource are:
• The probability of a threat occurring to exploit vulnerabilities in the IT resource
• The expectation of loss upon the IT resource being compromised

Details about risk management are covered later in this chapter.

5.1.8 Security Controls:
Security controls are countermeasures used to prevent or respond to security threats and to reduce or avoid risk. Details on how to use security countermeasures are typically outlined in the security policy, which contains a set of rules and practices specifying how to implement a system, service, or security plan for maximum protection of sensitive and critical IT resources.

5.1.9 Security Mechanisms:
Countermeasures are typically described in terms of security mechanisms, which are components comprising a defensive framework that protects IT resources, information, and services.

5.1.10 Security Policies:
A security policy establishes a set of security rules and regulations. Often, security policies will further define how these rules and regulations are implemented and enforced.
For instance, the positioning and usage of security controls and mechanisms can be determined by security policies.

5.2.1 Threat Agents:
A threat agent is an entity that poses a threat because it is capable of carrying out an attack. Cloud security threats can originate either internally or externally, from humans or software programs. Corresponding threat agents are described in the upcoming sections. Figure 5.3 illustrates the role a threat agent assumes in relation to vulnerabilities, threats, and risks, and the safeguards established by security policies and security mechanisms.
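The two risk metrics introduced earlier, the probability of a threat and the expected loss on compromise, combine naturally into a simple expected-loss score for ranking IT resources. The weighting and the asset figures below are illustrative conventions, not a standard formula:

```python
# Toy risk score: expected loss = probability of compromise x loss if
# compromised. Useful for ranking IT resources by attention needed.
def risk_score(threat_probability, expected_loss):
    if not 0.0 <= threat_probability <= 1.0:
        raise ValueError("probability must be in [0, 1]")
    return threat_probability * expected_loss

assets = {
    "public web server": risk_score(0.30, 20_000),   # 6000.0
    "internal database": risk_score(0.05, 500_000),  # 25000.0
}
# Despite its lower threat probability, the database carries the
# higher expected loss and therefore the higher risk.
print(max(assets, key=assets.get))  # internal database
```

The sketch shows why both metrics matter: ranking by probability alone would misorder the two assets.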

Figure 5.3: How security policies and security mechanisms are used to counter threats, vulnerabilities, and risks caused by threat agents.
(Reference: Cloud Computing (Concepts, Technology & Architecture) by Thomas Erl, Zaigham Mahmood, and Ricardo Puttini)

5.2.2 Anonymous Attacker:
An anonymous attacker is a non-trusted cloud service consumer without permissions in the cloud. It typically exists as an external software program that launches network-level attacks through public networks. When anonymous attackers have limited information on security policies and defenses, it can inhibit their ability to formulate effective attacks. Therefore, anonymous attackers often resort to committing acts like bypassing user accounts or stealing user credentials, while using methods that either ensure anonymity or require substantial resources for prosecution.
Figure 5.4 The notation used for an anonymous attacker.
(Reference: Cloud Computing (Concepts, Technology & Architecture) by Thomas Erl, Zaigham Mahmood, and Ricardo Puttini)
5.2.3 Malicious Service Agent:
A malicious service agent is able to intercept and forward the network traffic that flows within a cloud (Figure 5.5). It typically exists as a service agent, or a program pretending to be a service agent, with compromised or malicious logic. It may also exist as an external program capable of remotely intercepting and potentially corrupting message contents.
Figure 5.5 The notation used for a malicious service agent.
(Reference: Cloud Computing (Concepts, Technology & Architecture) by Thomas Erl, Zaigham Mahmood, and Ricardo Puttini)

5.2.4 Trusted Attacker:
A trusted attacker shares IT resources in the same cloud environment as the cloud consumer and attempts to exploit legitimate credentials to target cloud providers and the cloud tenants with whom they share IT resources (Figure 5.6). Unlike anonymous attackers (which are non-trusted), trusted attackers usually launch their attacks from within a cloud's trust boundaries by abusing legitimate credentials or via the appropriation of sensitive and confidential information. Trusted attackers (also known as malicious tenants) can use cloud-based IT resources for a wide range of exploitations, including the hacking of weak authentication processes, the breaking of encryption, the spamming of email accounts, or launching common attacks such as denial of service campaigns.
Figure 5.6 The notation that is used for a trusted attacker.
(Reference: Cloud Computing (Concepts, Technology & Architecture) by Thomas Erl, Zaigham Mahmood, and Ricardo Puttini)
5.2.5 Malicious Insider:
Malicious insiders are human threat agents acting on behalf of, or in relation to, the cloud provider. They are typically current or former employees, or third parties with access to the cloud provider's premises. This type of threat agent carries tremendous damage potential, as the malicious insider may have administrative privileges for accessing cloud consumer IT resources.
Figure 5.7 The notation used for an attack originating from a workstation. The human symbol is optional.
(Reference: Cloud Computing (Concepts, Technology & Architecture) by Thomas Erl, Zaigham Mahmood, and Ricardo Puttini)

5.3 CLOUD SECURITY THREATS

This section introduces several common threats and vulnerabilities in cloud-based environments and describes the roles of the aforementioned threat agents.

5.3.1 Traffic Eavesdropping:
Traffic eavesdropping occurs when data being transferred to or within a cloud (usually from the cloud consumer to the cloud provider) is passively intercepted by a malicious service agent for illegitimate information gathering purposes (Figure 5.8). The aim of this attack is to directly compromise the confidentiality of the data and, possibly, the confidentiality of the relationship between the cloud consumer and cloud provider. Because of the passive nature of the attack, it can more easily go undetected for extended periods of time.
Figure 5.8 An externally positioned malicious service agent carries out a traffic eavesdropping attack by intercepting a message sent by the cloud service consumer to the cloud service. The service agent makes an unauthorized copy of the message before it is sent along its original path to the cloud service.
(Reference: Cloud Computing (Concepts, Technology & Architecture) by Thomas Erl, Zaigham Mahmood, and Ricardo Puttini)

5.3.2 Malicious Intermediary:
The malicious intermediary threat arises when messages are intercepted and altered by a malicious service agent, thereby potentially compromising the message's confidentiality and/or integrity. It may also insert harmful data into the message before forwarding it to its destination. Figure 5.9 illustrates a common example of the malicious intermediary attack.
Figure 5.9 The malicious service agent intercepts and modifies a message sent by a cloud service consumer to a cloud service (not shown) being hosted on a virtual server. Because harmful data is packaged into the message, the virtual server is compromised.
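A standard countermeasure against the intermediary in the figure is for the consumer to attach an authentication tag to each message, so the receiving cloud service can detect any modification in transit. A standard-library sketch (the shared key and message contents are illustrative assumptions):

```python
import hmac
import hashlib

KEY = b"consumer-service-shared-secret"  # illustrative

def protect(message: bytes):
    """Consumer attaches a MAC so in-transit tampering is detectable."""
    return message, hmac.new(KEY, message, hashlib.sha256).digest()

def accept(message: bytes, tag: bytes) -> bool:
    """Cloud service rejects any message whose tag no longer matches."""
    expected = hmac.new(KEY, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

msg, tag = protect(b"deploy build 42")
tampered = b"deploy build 43"  # altered by the malicious intermediary
print(accept(msg, tag))        # True
print(accept(tampered, tag))   # False: modification detected
```

This protects integrity but not confidentiality; an intermediary can still read the message, so in practice the tag is combined with transport-level encryption.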
(Reference: Cloud Computing (Concepts, Technology & Architecture) by Thomas Erl, Zaigham Mahmood, and Ricardo Puttini)

5.3.3 Denial of Service:
The objective of the denial of service (DoS) attack is to overload IT resources to the point where they cannot function properly. This form of attack is commonly launched in one of the following ways:
• The workload on cloud services is artificially increased with imitation messages or repeated communication requests.
• The network is overloaded with traffic to reduce its responsiveness and cripple its performance.
• Multiple cloud service requests are sent, each of which is designed to consume excessive memory and processing resources.

Successful DoS attacks produce server degradation and/or failure, as illustrated in Figure 5.10.
Figure 5.10 Cloud service consumer A sends multiple messages to a cloud service (not shown) hosted on virtual server A. This overloads the capacity of the underlying physical server, which causes outages with virtual servers A and B. As a result, legitimate cloud service consumers, such as cloud service consumer B, become unable to communicate with any cloud services hosted on virtual servers A and B.
(Reference: Cloud Computing (Concepts, Technology & Architecture) by Thomas Erl, Zaigham Mahmood, and Ricardo Puttini)
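A common partial mitigation for the flooding pattern in the figure is per-consumer request throttling, so one consumer's message flood cannot exhaust the capacity that other consumers depend on. A minimal fixed-window sketch, with an arbitrary example limit:

```python
from collections import defaultdict

# Fixed-window rate limiter: each consumer may send at most
# MAX_REQUESTS requests per window; excess requests are rejected
# instead of consuming server capacity.
MAX_REQUESTS = 5

class RateLimiter:
    def __init__(self):
        self.counts = defaultdict(int)

    def allow(self, consumer_id: str) -> bool:
        self.counts[consumer_id] += 1
        return self.counts[consumer_id] <= MAX_REQUESTS

    def reset_window(self):
        """Called when the time window rolls over."""
        self.counts.clear()

limiter = RateLimiter()
results = [limiter.allow("consumer-A") for _ in range(7)]
print(results)                      # first 5 allowed, last 2 rejected
print(limiter.allow("consumer-B"))  # True: other consumers unaffected
```

Real defenses operate at multiple layers (network filtering, autoscaling, upstream scrubbing), but the per-tenant isolation idea is the same.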
5.3.4 Insufficient Authorization:
The insufficient authorization attack occurs when access is granted to an attacker erroneously or too broadly, resulting in the attacker gaining access to IT resources that are normally protected. This is often a result of the attacker obtaining direct access to IT resources that were implemented under the assumption that they would only be accessed by trusted consumer programs (Figure 5.11).
Figure5.11Cloud service consumer A gains access to a database that wasimplemented under the assumption that it would only be accessed through aweb service with a published service contract (as per cloud service consumerB)(Reference :Cloud Computing(Concepts, Technology &Architecture) by Thomas Erl,Zaigham Mahmood, and RicardoPuttini)A variety of this assault, known as feeble confirmation, can resultwhen frail passwords or shared records are utilized to secure IT assets.Inside cloud situations, these kinds of assaults can prompt noteworthyeffects relying upon the scope of IT assets and the scope of accessto thoseIT assets the assailant gains.5.3.5Virtualization Attack :Virtualization furnishes different cloud shoppers with access to ITassets that share basic equipment however are coherently disconnectedfrom one another. Since cloud suppliers awardcloud buyers regulatory
munotes.in

Page 164

164access to virtualized IT assets, (for example, virtual servers), there is aninalienable hazard that cloud purchasers could manhandle this entrance toassault the basic physical IT assets. A virtualization assault misusesvulnerabilities in the virtualization stage to risk its secrecy, honesty, aswell as accessibility. This danger is outlined in Figure4.12, where abelieved aggressor effectively gets to a virtual server to bargain its hiddenphysical server. With open mists, wherea solitary physical IT asset mightbe giving virtualized IT assets to various cloud buyers, such an assault canhave noteworthy repercussions.
Figure 5.12 – An authorized cloud service consumer carries out a virtualization attack by abusing its administrative access to a virtual server to exploit the underlying hardware.
(Reference: Cloud Computing (Concepts, Technology & Architecture) by Thomas Erl, Zaigham Mahmood, and Ricardo Puttini)

5.3.6 Overlapping Trust Boundaries:
If physical IT resources within a cloud are shared by different cloud service consumers, these cloud service consumers have overlapping trust boundaries. Malicious cloud service consumers can target shared IT resources with the intention of compromising cloud consumers or other IT resources that share the same trust boundary. The consequence is that some or all of the other cloud service consumers could be impacted by the attack, and/or the attacker could use virtual IT resources against others that happen to also share the same trust boundary.
Figure 5.13 illustrates an example in which two cloud service consumers share virtual servers hosted by the same physical server and, resultantly, their respective trust boundaries overlap.
(Reference: Cloud Computing (Concepts, Technology & Architecture) by Thomas Erl, Zaigham Mahmood, and Ricardo Puttini)

5.3.7 Risk Management:
When assessing the potential impacts and challenges pertaining to cloud adoption, cloud consumers are encouraged to perform a formal risk assessment as part of a risk management strategy. A systematically executed process used to enhance strategic and tactical security, risk management is comprised of a set of coordinated activities for overseeing and controlling risks. The main activities are generally defined as risk assessment, risk treatment, and risk control (Figure 5.14).
Figure 5.14 – The ongoing risk management process, which can be initiated from any of the three stages.
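The risk assessment stage named above is often reduced to ranking identified risks by probability and impact. A minimal sketch, assuming a simple exposure score of probability times impact; the scoring scales and the example figures are invented for illustration, not taken from the source:

```python
def rank_risks(risks):
    """Rank risks by exposure score = probability x impact, highest first.

    `risks` maps a risk name to (probability in 0..1, impact on a 1..5 scale).
    """
    scored = {name: p * i for name, (p, i) in risks.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical figures for three of the threats discussed in this chapter.
risks = {
    "denial of service": (0.30, 4),
    "weak authentication": (0.50, 3),
    "virtualization attack": (0.05, 5),
}
```

The resulting ordering is what feeds the risk treatment stage, where the highest-exposure risks are treated first.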
(Reference: Cloud Computing (Concepts, Technology & Architecture) by Thomas Erl, Zaigham Mahmood, and Ricardo Puttini)

•Risk Assessment: In the risk assessment stage, the cloud environment is analyzed to identify potential vulnerabilities and shortcomings that threats can exploit, and the identified risks are ranked according to their probability of occurrence and degree of impact.
•Risk Treatment: Mitigation policies and plans are designed during the risk treatment stage with the intention of successfully treating the risks that were discovered during risk assessment. Some risks can be eliminated, others can be mitigated, while others can be dealt with through outsourcing or even incorporated into the insurance and/or operating loss budgets. The cloud provider itself may agree to assume responsibility as part of its contractual obligations.
•Risk Control: The risk control stage is related to risk monitoring, a three-step process that is comprised of surveying related events, reviewing these events to determine the effectiveness of previous assessments and treatments, and identifying any policy adjustment needs. Depending on the nature of the monitoring required, this stage may be carried out or shared by the cloud provider.

5.4 INDUSTRIAL PLATFORMS AND NEW DEVELOPMENTS

Development of a cloud computing application takes place by using platforms and frameworks that provide different kinds of services, from the bare-metal infrastructure to customizable applications serving specific purposes.

5.5 AMAZON WEB SERVICES (AWS)

One of the most admired and highest-traffic sites is Amazon.com, which offers a tremendous selection of products using an infrastructure-based web service. The company began this effort in 2006 by making its web-service platform available to developers on a usage-based model. It provides one of the best examples of web services achieved through service-oriented architecture.

Amazon Web Services is a subsidiary of Amazon.com. Amazon has made it possible to create private virtual servers that can run worldwide through hardware virtualization on the Xen hypervisor. These servers can be provisioned with different kinds of application software that a client may anticipate, along with a range of support services that make cloud computing applications possible as well as reliable enough to withstand heavy computation.
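Provisioning such a virtual server is normally done through an AWS SDK rather than by hand. The sketch below builds the parameters for EC2's RunInstances operation as plain data; the AMI id is a placeholder (not a real image), and the actual boto3 call is shown only as a comment because it requires AWS credentials and an account:

```python
def build_run_instances_params(ami_id, instance_type, count=1):
    """Build the keyword arguments for EC2's RunInstances call."""
    return {
        "ImageId": ami_id,          # which Amazon Machine Image to boot
        "InstanceType": instance_type,
        "MinCount": count,          # launch exactly `count` instances
        "MaxCount": count,
    }

params = build_run_instances_params("ami-00000000", "t2.micro")
# With credentials configured, the actual call would be roughly:
#   import boto3
#   boto3.client("ec2").run_instances(**params)
```

Separating the parameter construction from the call keeps the provisioning logic testable without touching a live account.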


Figure 5.15
(Reference: Cloud Computing (Concepts, Technology & Architecture) by Thomas Erl, Zaigham Mahmood, and Ricardo Puttini)

Built on SOA standards and the SOAP, REST and HTTP transfer protocols, Amazon Web Services runs on open-source and commercial operating systems, browser-based software and application servers. AWS offers various suites of cloud computing technology that make up an on-demand computational platform. These services are operated from twelve different geographical regions, and among them the best known are Amazon's Elastic Compute Cloud (EC2) and Amazon's Simple Storage Service (S3).

5.5.1 Offerings of AWS:
AWS has a tremendous offering. Clients pay only for what they use, which can save a great deal of money. AWS has in excess of seventy services covering storage, compute, database, networking, application services, mobile, management, developer tools and IoT.

5.5.2 Understanding Amazon Web Services:
Amazon.com is the world's biggest online retailer. Before Amazon.com, the world's biggest retailer was Wal-Mart. According to the annual report for the year 2009, Amazon's net sales were $24.51 billion. Running a business of this size required Amazon to build an enormous network of IT systems to support it. AWS essentially turns Amazon.com's infrastructure into a truly profitable business in its own right, bringing in a huge amount of revenue.

AWS has enormous power and influence in cloud technology, providing the biggest Infrastructure as a Service (IaaS) marketplace.

5.5.3 Components and Web Services of AWS:
Amazon's web services have the following components:
•Amazon Elastic Compute Cloud (EC2; http://aws.amazon.com/ec2/) is the core offering of AWS, which facilitates the management and use of virtual private servers that can run on Windows and Linux-based platforms over the Xen hypervisor. Various tools are used to support Amazon's web services. These are:
•Amazon Simple Queue Service is a message queue and transaction system for distributed Internet-based applications.
•Amazon Simple Notification Service is used to publish messages from an application.
•Amazon CloudWatch is used for monitoring the EC2 cloud, which it supports by giving a console or command-line view of the resources in use.
•Elastic Load Balancing is used to detect whether an instance is failing and to check whether the traffic is healthy or not.
•Amazon's Simple Storage Service (S3) is an online storage and backup system, which has a high-speed data transfer option called AWS Import/Export.
Additional web-services mechanisms are:
•Amazon's Elastic Block Store
•Amazon's SimpleDB
•Amazon's Relational Database Service
•Amazon CloudFront
A large number of services and utilities also support Amazon partners, i.e., the AWS infrastructure itself. These are:
•Alexa Web Information Service
•Amazon Associates Web Services (A2S)
•Amazon DevPay
•Elastic MapReduce
•Amazon's Mechanical Turk
•AWS Multi-Factor Authentication
•Amazon's Flexible Payments Service (FPS)
•Amazon's Fulfillment Web Service (FWS)
•Amazon Virtual Private Cloud

5.5.4 Elastic Compute Cloud:
EC2 is a virtual server platform allowing users to create and run virtual machines on Amazon's server farm. Amazon Machine Images (AMIs) are used by EC2 to instantiate and run server instances with operating systems such as Linux (Red Hat), Windows, and so on, on various servers. As the name suggests, we can add or remove capacity elastically as and when required, and replicate and load-balance servers. We can also locate our servers in different zones throughout the globe to provide fault tolerance.

The term 'elastic' describes the ability to resize your capacity quickly as needed. Implementing a service may require the following components:
•Application server (having a large RAM allocation)
•A load balancer
•Database server
•Firewall and network switches
•Additional rack capacity

5.6 GOOGLE APP ENGINE

Google App Engine permits us to run (host) our own Web applications on Google's infrastructure. However, in no way, shape or form is this a "rent a piece of a server" hosting service. With App Engine, your application is not hosted on a single server. There are no servers to maintain: you simply upload your application, and it is ready to serve your users. Just as serving a Google search request may involve dozens, or even hundreds, of Google servers, all completely hidden and finished in a fraction of a second, Google App Engine applications run the same way, on the same infrastructure. This is the novel part of Google's approach. True, you give up some control to Google, but you are rewarded by being totally freed from the infrastructure, capacity management, and load balancing tasks that enterprises normally need to manage, regardless of whether they are self-hosting or hosting on someone else's PaaS or IaaS.

You can choose to share your application with the world, or limit access to members of your organization. Google App Engine supports applications written in several programming languages:

With App Engine's Java runtime environment, you can build your application using standard Java technologies, including the JVM, Java servlets, and the Java programming language, or any other language using a JVM-based interpreter or compiler, such as JavaScript or Ruby.
App Engine likewise includes a dedicated Python runtime environment, which incorporates a fast Python interpreter and the Python standard library. The Java and Python runtime environments are built to ensure that your application runs quickly, securely, and without interference from other applications on the system.

As with most cloud-hosting services, with App Engine you pay only for what you use. Google charges no set-up costs and no recurring fees. Like Amazon's AWS, resources such as storage and bandwidth are measured by the gigabyte.

App Engine costs nothing to begin with. All applications can use around 500 MB of storage and sufficient CPU and bandwidth to support an efficient application serving around 5 million page views a month, completely free. When you enable billing for your application, your free limits are raised, and you pay only for the resources you use above the free levels.

Application developers have access to persistent storage technologies such as the Google File System (GFS) and Bigtable, a distributed storage system for unstructured data. The Java version supports asynchronous nonblocking queries using the Twig Object Datastore interface. This offers an alternative to using threads for parallel data processing.

"With Google App Engine, developers can write Web applications based on the same building blocks that Google uses," Kevin Gibbs, Google's technical lead for the project, wrote in The Official Google Blog. Twig is an object persistence interface built on Google App Engine's low-level datastore which overcomes many of JDO-GAE's limitations, including full support for inheritance, polymorphism, and generic types.
You can easily configure, change or extend Twig's behavior by implementing your own strategies or overriding extension points in pure Java code. App Engine bundles those building blocks and provides access to scalable infrastructure that, Google hopes, will make it easier for developers to scale their applications automatically as they grow.

Google App Engine has appeared at a time when an increasing number of tech companies are moving their operations to the cloud; it places Google squarely in competition with Amazon's Elastic Compute Cloud (EC2) and Simple Storage Service (S3) offerings.

Google says its vision with Google App Engine is to offer developers a more holistic, end-to-end solution for building and scaling applications on the web. Its servers are designed to balance the load of traffic to developers' applications, scaling to meet the demand of a surge in traffic. App Engine also includes APIs for user authentication, letting developers offer sign-on for services, and for email, to manage communications.

InternetNews.com reported that, through its initial preview, Google's App Engine would be available free to the first 10,000 developers who sign up, with plans to expand that number in the future. During that period, users would be limited to 500 MB of storage, 10 GB of daily bandwidth and 5 million daily site hits, the company said. Developers would be able to register up to three applications.
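The pay-only-above-the-free-tier billing described above can be sketched as follows. The quota figures come from the text (500 MB of storage, 10 GB of daily bandwidth taken over a 30-day month, 5 million monthly page views); the per-unit prices are invented placeholders, not Google's actual rates:

```python
# Free monthly quotas per metered resource (figures from the text).
FREE_QUOTA = {"storage_mb": 500, "bandwidth_gb": 10 * 30, "pageviews": 5_000_000}

# Hypothetical per-unit prices -- NOT Google's actual rates.
PRICE = {"storage_mb": 0.0002, "bandwidth_gb": 0.12, "pageviews": 0.0000002}

def monthly_bill(usage):
    """Charge only for usage above the free quota, per metered resource."""
    return sum(
        max(0, usage.get(k, 0) - FREE_QUOTA[k]) * PRICE[k]
        for k in FREE_QUOTA
    )
```

An application that stays inside every quota is billed nothing; only the overage on each resource is priced.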


5.7 MICROSOFT AZURE

Microsoft Azure provides a wide assortment of services which cloud clients can use without purchasing their own hardware. It enables rapid development of solutions and provides the resources for that environment. Without the need to worry about assembling physical infrastructure, Azure's compute, network, storage and application services allow clients to concentrate on building great solutions.

Azure Services:
Azure includes various services in its cloud technology. These are:
1. Compute Services: This holds MS Azure services such as Azure VM, Azure Website, Mobile services and so forth.
2. Data Services: It includes MS Azure Storage, Azure SQL Database, and so on.
3. Application Services: It includes those services that help clients build and operate applications, such as Azure Active Directory, Service Bus for connecting distributed systems, big data processing and so forth.
4. Network Services: It includes Azure's Virtual Network, Content Delivery Network and Traffic Manager.

There are other services such as:
•BizTalk
•Big Compute
•Identity
•Messaging
•Media
•CDN etc.

5.7.1 More on the MS Cloud:
The starting point for Microsoft's cloud technology efforts may be found at Microsoft.com/cloud. It has a huge range of cloud technology products and some of the industry's leading Web applications. Microsoft Messenger became the market leader after America Online Instant Messenger (AIM). Increasingly, with the rise of the e-office and marketing fields, Microsoft sees its role as providing the best Web experience for different kinds of devices, such as desktops, laptops, tablets and so on.

5.7.2 Azure Virtual Machines:
Virtual machines are one of the central features of the IaaS capability of MS Azure, along with virtual networks. Azure VMs support the deployment of Windows Server (or Linux) virtual machines in MS Azure's datacenters, where you have complete control over the virtual machine's configuration. An Azure VM has three possible states:
•Running
•Stopped
•Stopped (Deallocated)

The VM gets the Stopped (Deallocated) state by default when it is stopped in the Azure Management Portal. If we want to keep it stopped but still provisioned, we need to use the PowerShell cmdlet with the following command:

> Stop-AzureVM -Name "az-essential" -ServiceName "az-essential" -StayProvisioned

5.7.3 Elements of Microsoft Azure:
There are 6 main elements that form Windows Azure. These are:
•Compute
•Storage
•Application
•Fabric
•VM (Virtual Machines)
•Config (Configuration)
Figure 5.16 – Elements of Microsoft Azure
(Reference: Cloud Computing (Concepts, Technology & Architecture) by Thomas Erl, Zaigham Mahmood, and Ricardo Puttini)

5.7.4 Access Control in the MS Cloud:
Access Control allows an application to trust the identity of another application, and this technique can work together with identity providers such as ADFS to create distributed systems based on SOA. The steps involved in Access Control are:
•The client/user sends a request for authentication to AC (Access Control).
•Access Control creates a token based on stored rules for the server application.
•The token is signed and returned to the client application.
•The client presents the obtained token to the service application.
•Finally, the signature is verified and the token is used to decide whether access to the cloud application is allowed or not.

SUMMARY

In this chapter we studied the fundamental cloud security concepts such as confidentiality, integrity, authenticity and availability; basic concepts about threat agents, including the anonymous attacker, malicious service agent and trusted attacker; the main cloud security threats; industrial platforms and new developments; and a detailed study of Amazon Web Services (AWS), its components and web services, Google App Engine and Microsoft Azure.

REVIEW QUESTIONS
1) Explain the cloud computing architecture.
2) What are the fundamental concepts of cloud computing?
3) Write a short note on models in cloud computing.
4) What are the roles and boundaries of cloud computing?
5) What are the characteristics of cloud computing?
6) What is meant by cloud deployment models?

REFERENCE
•Cloud Computing (Concepts, Technology & Architecture) by Thomas Erl, Zaigham Mahmood, and Ricardo Puttini

*****


UNIT III

6
SPECIALIZED CLOUD MECHANISMS

Unit Structure
6.1 Objectives
6.2 Introduction
6.3 Automated Scaling Listener
6.3.1 Case of DTGOV
6.3.2 A. Scaling-Down
6.3.3 B. Scaling-Up
6.4 Load Balancer
6.4.1 How does load balancing work?
6.5 SLA Monitor
6.6 Pay-per-use Monitor
6.7 Audit Monitor
6.8 Failover System
6.8.1 Failover systems come in two basic configurations
6.8.2 A. Active-Active
6.8.3 B. Active-Passive
6.9 Hypervisor
6.9.1 Hypervisors are divided into two types
6.9.2 A. Type one is the bare-metal hypervisor
6.9.3 B. Type two is a hosted hypervisor that runs as a software layer
6.10 Resource Cluster
6.10.1 Common resource cluster types
6.10.2 A. Server Cluster
6.10.3 B. Database Cluster
6.10.4 C. Large Dataset Cluster
6.10.5 There are two basic types of resource clusters
6.10.6 A. Load Balanced Cluster
6.10.7 B. High-Availability (HA) Cluster
6.11 Multidevice Broker
6.12 State Management Database
6.13 Unit End Questions
6.14 References


6.1 OBJECTIVE

To study the specialized cloud mechanisms that can be combined to offer distinct and custom architectures. One such mechanism is a service agent that monitors and tracks communications between cloud service consumers and cloud services for dynamic scaling purposes, so that the cloud consumer can choose to adjust its current IT resource allocation.

6.2 INTRODUCTION

A typical cloud technology architecture contains numerous moving parts to address distinct usage requirements of IT resources and solutions. Each mechanism covered in this chapter fulfills a specific runtime function in support of one or more cloud characteristics.

6.3 AUTOMATED SCALING LISTENER

The automated scaling listener mechanism is a service agent that monitors and tracks communications between cloud service consumers and cloud services for dynamic scaling purposes. Automated scaling listeners are deployed within the cloud, typically near the firewall, from where they automatically track workload status information.

Workloads can be determined by the volume of cloud consumer-generated requests or via back-end processing demands triggered by certain types of requests. For example, a small amount of incoming data can result in a large amount of processing.

Automated scaling listeners can provide different types of responses to workload fluctuation conditions, such as:
•Automatically scaling IT resources out or in based on parameters previously defined by the cloud consumer (commonly referred to as auto-scaling).
•Automatic notification of the cloud consumer when workloads exceed current thresholds or fall below allocated resources, so that the cloud consumer can choose to adjust its current IT resource allocation (auto-notification).

Different cloud provider vendors have different names for service agents that act as automated scaling listeners.
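The auto-scaling and auto-notification behavior described above can be sketched as a simple listener that scales out until a consumer-defined instance limit is reached and then rejects further work, mirroring the scenario in the next figure. The class and method names are invented for illustration:

```python
class AutomatedScalingListener:
    """Toy model: scale instances out per request, up to a configured limit."""

    def __init__(self, max_instances):
        self.max_instances = max_instances  # limit defined by the cloud consumer
        self.instances = 0

    def on_request(self):
        if self.instances < self.max_instances:
            self.instances += 1   # scale out: start a redundant service instance
            return "accepted"
        return "rejected"         # limit reached: notify the cloud consumer

    def on_idle(self):
        if self.instances > 0:
            self.instances -= 1   # scale in when the workload drops
```

With a limit of three, a fourth concurrent request is rejected until an instance is released or the consumer raises the limit.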


Fig 6.1 – Three cloud service consumers attempt to access one cloud service simultaneously (1). The automated scaling listener scales out and initiates the creation of three redundant instances of the service (2). A fourth cloud service consumer attempts to use the cloud service (3). Programmed to allow up to only three instances of the cloud service, the automated scaling listener rejects the fourth attempt and notifies the cloud consumer that the requested workload limit has been exceeded (4). The cloud consumer's cloud resource administrator accesses the remote administration environment to adjust the provisioning setup and increase the redundant instance limit (5).

6.3.1 Case of DTGOV:
The virtualization platform is configured to automatically scale a virtual server at runtime, as follows:
A. Scaling-Down: The virtual server continues residing on the same physical host server while being scaled down to a lower performance configuration.
B. Scaling-Up: The virtual server's capacity is doubled on its original physical host server. The VIM may also live migrate the virtual server to another physical server if the original host server is overcommitted. Migration is automatically performed at runtime and does not require the virtual server to shut down.

1. A cloud consumer creates and starts a virtual server with 8 virtual processor cores and 16 GB of virtual RAM (1).
2. The VIM creates the virtual server at the cloud service consumer's request, and the corresponding virtual machine is allocated to Physical Server 1 to join 3 other active virtual machines (2).
3. Cloud consumer demand causes the virtual server usage to increase by over 80% of the CPU capacity for 60 consecutive seconds (3).
4. The automated scaling listener running at the hypervisor detects the need to scale up and commands the VIM accordingly (4).
Fig 6.2

In Fig 6.3, the VIM determines that scaling up the virtual server on Physical Server 1 is not possible and proceeds to live migrate it to Physical Server 2 (5).
5. The virtual server's CPU/RAM usage remains below 15% capacity for 60 consecutive seconds (6).
6. The automated scaling listener detects the need to scale down and commands the VIM (7), which scales down the virtual server (8) while it remains active on Physical Server 2.

Fig 6.3
6.4 LOAD BALANCER

Cloud load balancing is the process of distributing workloads and computing resources within a cloud environment. It helps organizations and enterprises manage workload demands by allocating resources among multiple systems or servers. Cloud load balancing also involves hosting the distribution of workload traffic that resides over the internet.

A high performance level can be achieved using this load-balancing technique at a lower cost than traditional on-premises load-balancing technology. In addition to workload and traffic distribution, cloud load balancing can also provide health checks for cloud applications. The load balancer mechanism is a runtime agent with logic fundamentally based on this premise.

Load balancers can perform a range of specialized runtime workload distribution functions that include:
•Asymmetric Distribution: larger workloads are issued to IT resources with higher processing capacities.
•Workload Prioritization: workloads are scheduled, queued, discarded, and distributed according to their priority levels.
•Content-Aware Distribution: requests are distributed to different IT resources as dictated by the request content.

The common objectives of using load balancers are:
•To maintain system firmness.
•To improve system performance.
•To protect against system failures.

6.4.1 How does load balancing work?:
Here, load refers not only to the website traffic but also includes the CPU load, network load and memory capacity of each server. A load balancing technique makes sure that each system in the network has the same amount of work at any instant of time. This means none of them is excessively overloaded or under-utilized. [1]
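The distribution functions above can be illustrated with two toy scheduling policies, one symmetric (round robin) and one asymmetric (route to the least-loaded server); the function names are invented for illustration:

```python
import itertools

def round_robin(servers):
    """Cycle requests evenly across servers (symmetric distribution)."""
    return itertools.cycle(servers)

def least_loaded(load_by_server):
    """Pick the server with the smallest current load (asymmetric distribution)."""
    return min(load_by_server, key=load_by_server.get)
```

Round robin assumes roughly equal server capacities; the least-loaded policy is the simplest form of sending larger shares of work to resources with spare processing capacity.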


Fig 6.4

The load balancer mechanism can exist as a:
•Multi-layer network switch
•Dedicated hardware appliance
•Dedicated software-based system
•Service agent

6.5 SLA MONITOR

The SLA management system mechanism represents a range of commercially available cloud management products that provide features pertaining to the administration, collection, storage, reporting, and runtime notification of SLA data.

An SLA management system deployment will generally include a repository used to store and retrieve collected SLA data based on pre-defined metrics and reporting parameters. It will further rely on one or more SLA monitor mechanisms to collect the SLA data that can then be made available in near-real time to usage and administration portals to provide ongoing feedback regarding active cloud services (Figure 6.5). The metrics monitored for individual cloud services are aligned with the SLA guarantees in corresponding cloud provisioning contracts.
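An SLA monitor's core job, measuring a quality-of-service metric such as availability against the guarantee in the provisioning contract, can be sketched as a toy model over polling results; the function names and the 99.9% default guarantee are illustrative assumptions:

```python
def availability(poll_results):
    """Fraction of polls in which the cloud service responded."""
    return sum(poll_results) / len(poll_results)

def check_sla(poll_results, guaranteed=0.999):
    """Compare measured availability against the SLA guarantee.

    Returns (met_guarantee, measured_availability).
    """
    measured = availability(poll_results)
    return measured >= guaranteed, measured
```

A real SLA monitor would collect such samples continuously and store them in the repository for reporting, as described above.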
Fig 6.5

Figure 6.5 – A cloud service consumer interacts with a cloud service (1). An SLA monitor intercepts the exchanged messages, evaluates the interaction, and collects relevant runtime data in relation to quality-of-service guarantees defined in the cloud service's SLA (2A). The data collected is stored in a repository (2B) that is part of the SLA management system (3). Queries can be issued and reports can be generated for an external cloud resource administrator via a usage and administration portal (4) or for an internal cloud resource administrator via the SLA management system's native user-interface (5).

6.6 PAY-PER-USE MONITOR

The pay-per-use monitor mechanism measures cloud-based IT resource usage in accordance with predefined pricing parameters and generates usage logs for fee calculations and billing purposes.

Some typical monitoring variables are:
•request/response message quantity
•transmitted data volume
•bandwidth consumption

The data collected by the pay-per-use monitor is processed by a billing management system that calculates the payment fees.
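The "start"/"stop" timestamp logging that the pay-per-use monitor performs leads directly to a usage-period fee calculation; the event format and hourly pricing below are invented for illustration:

```python
def usage_seconds(events):
    """Sum usage periods from alternating ('start', t) / ('stop', t) events."""
    total, started_at = 0, None
    for kind, t in events:
        if kind == "start":
            started_at = t
        elif kind == "stop" and started_at is not None:
            total += t - started_at   # one completed usage period
            started_at = None
    return total

def fee(events, price_per_hour):
    """Billing step: convert the metered usage period into a payment fee."""
    return usage_seconds(events) / 3600 * price_per_hour
```

This mirrors the split of responsibilities described above: the monitor records the timestamps, and the billing management system turns them into fees.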
Figure 6.6 shows a pay-per-use monitor implemented as a resource agent used to determine the usage period of a virtual server.

Fig 6.6

Figure 6.6.1 – A cloud consumer requests the creation of a new instance of a cloud service (1). The IT resource is instantiated and the pay-per-use monitor mechanism receives a "start" event notification from the resource software (2). The pay-per-use monitor stores the value timestamp in the log database (3). The cloud consumer later requests that the cloud service instance be stopped (4). The pay-per-use monitor receives a "stop" event notification from the resource software (5) and stores the value timestamp in the log database (6).

Figure 6.7 illustrates the pay-per-use monitor designed as a monitoring agent that transparently intercepts and analyzes runtime communication with a cloud service.
Figure 6.7 – A cloud service consumer sends a request message to the cloud service (1). The pay-per-use monitor intercepts the message (2), forwards it to the cloud service (3a), and stores the usage information in accordance with its monitoring metrics (3b). The cloud service forwards the response messages back to the cloud service consumer to provide the requested service (4).

6.7 AUDIT MONITOR

The audit monitor mechanism is used to collect audit tracking data for networks and IT resources in support of, or dictated by, regulatory and contractual obligations. The figure depicts an audit monitor implemented as a monitoring agent that intercepts "login" requests and stores the requestor's security credentials, as well as both failed and successful login attempts, in a log database for future audit reporting purposes.

Fig 6.8
A cloud service consumer requests access to a cloud service by sending a login request message with security credentials (1). The audit monitor intercepts the message (2) and forwards it to the authentication service (3). The authentication service processes the security credentials. A response message is generated for the cloud service consumer, in addition to the results from the login attempt (4). The audit monitor intercepts the response message and stores the entire collected login event details in the log database, as per the organization's audit policy requirements (5). Access has been granted, and a response is sent back to the cloud service consumer (6).

6.8 FAILOVER SYSTEM

The failover system mechanism is used to increase the reliability and availability of IT resources by using established clustering technology to provide redundant implementations. A failover system is configured to automatically switch over to a redundant or standby IT resource instance whenever the currently active IT resource becomes unavailable.

Failover systems are commonly used for mission-critical programs or for reusable services that can introduce a single point of failure for multiple applications. A failover system can span more than one geographical region so that each location hosts one or more redundant implementations of the same IT resource.

This mechanism may rely on the resource replication mechanism to supply the redundant IT resource instances, which are actively monitored for the detection of errors and unavailability conditions.

6.8.1 Failover systems come in two basic configurations:
A. Active-Active: In an active-active configuration, redundant implementations of the IT resource actively serve the workload synchronously (Figure 6.8.1). Load balancing among active instances is required. When a failure is detected, the failed instance is removed from the load balancing scheduler (Figure 6.8.2). Whichever IT resource remains operational when a failure is detected takes over the processing (Figure 6.8.3).
Fig6.9The failover system monitors the operational status of Cloud Service A.
Figure 6.10: When a failure is detected in one Cloud Service A implementation, the failover system commands the load balancer to switch over the workload to the redundant Cloud Service A implementation.

Figure 6.11: The failed Cloud Service A implementation is recovered or replicated into an operational cloud service. The failover system now commands the load balancer to distribute the workload again.

B. Active-Passive: In an active-passive configuration, a standby or inactive implementation is activated to take over the processing from the IT resource that becomes unavailable, and the corresponding workload is redirected to the instance taking over the operation (Figures 6.12 to 6.14).
Some failover systems are designed to redirect workloads to active IT resources, relying on specialized load balancers that detect failure conditions and exclude failed IT resource instances from the workload distribution. This type of failover system is suitable for IT resources that do not require execution state management and provide stateless processing capabilities. In technology architectures that are typically based on clustering and virtualization technologies, the redundant or standby IT resource implementations are also required to share their state and execution context. A complex task that was executed on a failed IT resource can remain operational in one of its redundant implementations.
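A minimal active-passive failover sketch in Python; the instance names and the simple health flag are illustrative assumptions, standing in for real health-check probes:

```python
class FailoverSystem:
    """Active-passive failover sketch: the failover system monitors the
    active instance and activates the standby when a failure is detected."""

    def __init__(self, active, standby):
        self.active = active
        self.standby = standby

    def route(self, request):
        if not self.active["healthy"]:
            # failure detected: the standby takes over as the active instance
            self.active, self.standby = self.standby, self.active
        return f"{self.active['name']} handled {request}"

primary = {"name": "cloud-service-a1", "healthy": True}
backup = {"name": "cloud-service-a2", "healthy": True}
system = FailoverSystem(primary, backup)

print(system.route("req-1"))      # cloud-service-a1 handled req-1
primary["healthy"] = False        # simulate a failure of the active instance
print(system.route("req-2"))      # cloud-service-a2 handled req-2
```

In a real deployment the health flag would be replaced by the active monitoring described above, and the redirection would be performed by the load balancer rather than in application code.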
Figure 6.12: The failover system monitors the operational status of Cloud Service A. The Cloud Service A implementation acting as the active instance is receiving cloud service consumer requests.

Figure 6.13: The Cloud Service A implementation acting as the active instance encounters a failure that is detected by the failover system, which subsequently activates the inactive Cloud Service A implementation and redirects the workload toward it. The newly invoked Cloud Service A implementation now assumes the role of active instance.
Figure 6.14: The failed Cloud Service A implementation is recovered or replicated into an operational cloud service, and is now positioned as the standby instance, while the previously invoked Cloud Service A continues to serve as the active instance.

6.9 HYPERVISOR

A hypervisor is a hardware virtualization technique that allows multiple guest operating systems (OS) to run on a single host system at the same time. The guest OS shares the hardware of the host computer, such that each OS appears to have its own processor, memory, and other hardware resources.

A hypervisor is also known as a virtual machine manager (VMM). The hypervisor isolates the operating systems from the primary host machine. The job of a hypervisor is to cater to the needs of a guest operating system and to manage it efficiently. Each virtual machine is independent and does not interfere with the others, although they run on the same host machine. [2]
Fig 6.15
6.9.1 Hypervisors Are Divided Into Two Types:

A. Type 1, the bare-metal hypervisor, is deployed directly on the host's system hardware, without any underlying operating system or software. Some examples of type 1 hypervisors are Microsoft Hyper-V, VMware ESXi, and Citrix XenServer.

Fig 6.16

B. Type 2, the hosted hypervisor, runs as a software layer within a physical operating system. The hypervisor runs as a separate second layer over the hardware, while the guest operating system runs as a third layer. Hosted hypervisors include Parallels Desktop and VMware Player.

Fig 6.17
6.10 RESOURCE CLUSTER

Cloud-based IT resources that are geographically diverse can be logically combined into groups to improve their allocation and use. The resource cluster mechanism is used to group multiple IT resource instances so that they can be operated as a single IT resource. This increases the combined computing capacity, load balancing, and availability of the clustered IT resources.

Resource cluster architectures rely on high-speed dedicated network connections, or cluster nodes, between IT resource instances to communicate about workload distribution, task scheduling, data sharing, and system synchronization. A cluster management platform that runs as distributed middleware in all of the cluster nodes is usually responsible for these activities. This platform implements a coordination function that allows distributed IT resources to appear as one IT resource, and also executes IT resources inside the cluster.

6.10.1 Common resource cluster types include:

A. Server Cluster: Physical or virtual servers are clustered to increase performance and availability. Hypervisors running on different physical servers can be configured to share virtual server execution state (such as memory pages and processor register state) in order to establish clustered virtual servers. In such configurations, which usually require physical servers to have access to shared storage, virtual servers are able to live-migrate from one physical server to another. In this process, the virtualization platform suspends the execution of a given virtual server at one physical server and resumes it on another physical server. The process is transparent to the virtual server operating system and can be used to increase scalability by live-migrating a virtual server that is running at an overloaded physical server to another physical server that has suitable capacity.

B. Database Cluster: Designed to improve data availability, this high-availability resource cluster has a synchronization feature that maintains the consistency of data being stored at different storage devices used in the cluster. The redundant capacity is usually based on an active-active or active-passive failover system committed to maintaining the synchronization conditions.

C. Large Dataset Cluster: Data partitioning and distribution is implemented so that the target datasets can be efficiently partitioned without compromising data integrity or computing accuracy. Each cluster node processes workloads without communicating with other nodes as much as in other cluster types.

Many resource clusters require cluster nodes to have almost identical computing capacity and characteristics in order to simplify the design of and maintain consistency within the resource cluster architecture. The cluster nodes in high-availability cluster architectures
need to access and share common storage IT resources. This can require two layers of communication between the nodes: one for accessing the storage device and another to execute IT resource orchestration (Figure 6.18). Some resource clusters are designed with more loosely coupled IT resources that only require the network layer (Figure 6.19).

Figure 6.18: Load balancing and resource replication are implemented through a cluster-enabled hypervisor. A dedicated storage area network is used to connect the clustered storage and the clustered servers, which are able to share common cloud storage devices. This simplifies the storage replication process, which is independently carried out at the storage cluster.

Figure 6.19
A loosely coupled server cluster that incorporates a load balancer. There is no shared storage. Resource replication is used to replicate cloud storage devices through the network by the cluster software.

6.10.2 There are two basic types of resource clusters:

A. Load Balanced Cluster: This resource cluster specializes in distributing workloads among cluster nodes to increase IT resource capacity while preserving the centralization of IT resource management. It usually implements a load balancer mechanism that is either embedded within the cluster management platform or set up as a separate IT resource.

B. High-Availability (HA) Cluster: A high-availability cluster maintains system availability in the event of multiple node failures, and has redundant implementations of most or all of the clustered IT resources. It implements a failover system mechanism that monitors failure conditions and automatically redirects the workload away from any failed nodes.

6.11 MULTI-DEVICE BROKER

An individual cloud service may need to be accessed by different types of cloud service consumers, some of which may be incompatible with the cloud service's published service contract. Disparate cloud service consumers may be differentiated by their hosting hardware devices and/or may have different types of communication requirements.

To overcome incompatibilities between a cloud service and a disparate cloud service consumer, mapping logic needs to be created to transform (or convert) information that is exchanged at runtime.

The multi-device broker mechanism is used to facilitate runtime data transformation so as to make a cloud service accessible by a wider range of cloud service consumer programs and devices (Figure 6.20).

Multi-device brokers commonly exist as or incorporate gateway components, such as:

•XML Gateway: transmits and validates XML data
•Cloud Storage Gateway: transforms cloud storage protocols and encodes storage devices to facilitate data transfer and storage
•Mobile Device Gateway: transforms the communication protocols used by mobile devices

The levels at which transformation logic can be created include:

•transport protocols
•messaging protocols
•storage device protocols
•data schemas/data models
For example, a multi-device broker may contain mapping logic that converts both transport and messaging protocols for a cloud service consumer accessing a cloud service with a mobile device.
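As a sketch of such mapping logic at the data-schema level, the following Python fragment converts one cloud service response into the message format each consumer type expects, using only standard-library JSON and XML tools. The consumer type names and payload are illustrative:

```python
import json
import xml.etree.ElementTree as ET

def broker_transform(payload: dict, consumer_type: str) -> str:
    """Toy multi-device broker: the same cloud service response is
    converted at runtime into the format a given consumer expects."""
    if consumer_type == "mobile":   # mobile device gateway: compact JSON
        return json.dumps(payload, separators=(",", ":"))
    if consumer_type == "xml":      # XML gateway: XML document
        root = ET.Element("response")
        for key, value in payload.items():
            ET.SubElement(root, key).text = str(value)
        return ET.tostring(root, encoding="unicode")
    raise ValueError(f"no mapping logic for consumer type: {consumer_type}")

response = {"status": "ok", "balance": 42}
print(broker_transform(response, "mobile"))  # {"status":"ok","balance":42}
print(broker_transform(response, "xml"))
# <response><status>ok</status><balance>42</balance></response>
```

A production broker would additionally translate transport and messaging protocols, not just the data schema, as the list above indicates.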
Figure 6.20: A multi-device broker contains the mapping logic necessary to transform data exchanges between a cloud service and different types of cloud service consumer devices.

6.12 STATE MANAGEMENT DATABASE

A state management database is a storage device that is used to temporarily persist state data for software programs. As an alternative to caching state data in memory, software programs can offload state data to the database in order to reduce the amount of runtime memory they consume (Figures 6.21 and 6.22). By doing so, the software programs and the surrounding infrastructure become more scalable. State management databases are commonly used by cloud services, especially those involved in long-running runtime activities.
Figure 6.21: During the lifespan of a cloud service instance, it may be required to remain stateful and keep state data cached in memory, even when idle.
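The deferral of state data to a repository can be sketched with an in-memory SQLite database standing in for the state management database; the instance identifier and state contents are illustrative:

```python
import json
import sqlite3

class StateRepository:
    """Sketch of a state management database: a cloud service defers its
    cached state to the repository while idle, then restores it later."""

    def __init__(self):
        self.db = sqlite3.connect(":memory:")
        self.db.execute(
            "CREATE TABLE state (instance_id TEXT PRIMARY KEY, data TEXT)")

    def defer(self, instance_id, state):
        # offload in-memory state so the service can become (partially) stateless
        self.db.execute("INSERT OR REPLACE INTO state VALUES (?, ?)",
                        (instance_id, json.dumps(state)))

    def restore(self, instance_id):
        row = self.db.execute("SELECT data FROM state WHERE instance_id = ?",
                              (instance_id,)).fetchone()
        return json.loads(row[0]) if row else None

repo = StateRepository()
service_state = {"session": "abc123", "step": 4}
repo.defer("service-instance-1", service_state)   # service goes idle
service_state = None                              # run-time memory is freed
print(repo.restore("service-instance-1"))         # {'session': 'abc123', 'step': 4}
```

The trade-off is latency: restoring from the repository is slower than reading cached memory, which is why this pattern suits long-running activities with idle periods.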
Figure 6.22: By deferring state data to a state repository, the cloud service is able to transition to a stateless condition (or a partially stateless condition), thereby temporarily freeing system resources.

SUMMARY

Cloud mechanisms give providers the ability to deliver high availability, resiliency, and fault tolerance to their users. The auto-scaling mechanism provides dynamic scaling of cloud services. Many other cloud features are used to achieve 24*7 service availability, security, and data backup. The hypervisor, or virtualization technology, is applied to share the capabilities of physical computers by dividing their resources among operating systems, and it also reduces the capex and opex of IT operations and infrastructure. The load balancing and failover cluster mechanisms provide continuous availability of services and applications whenever the organization needs them. Since the cloud is a pay-per-use technology, the cloud provider and the customer sign an SLA between them.

UNIT END QUESTIONS

1. Explain Automated Scaling Listener.
2. Discuss the case study on DTGOV. Explain in detail.
3. What is load balancing? How does load balancing work?
4. Explain SLA monitor in detail.
5. Explain Pay-per-use monitor in detail.
6. Explain Audit monitor in detail.
7. What is Failover System? Explain its types in detail.
8. What is Hypervisor? Explain its types in detail.
9. What is resource cluster?
10. Explain common resource cluster types and the two basic resource cluster types.
11. Explain in detail Multi-device broker.
12. Explain in detail State management database.

REFERENCES

•https://www.znetlive.com/blog/what-is-load-balancing-in-cloud-computing-and-its-advantages/
•https://www.cloudoye.com/kb/general/what-is-hypervisor-in-cloud-computing-and-its-types
•https://patterns.arcitura.com/cloud-computing-patterns/mechanisms/state_management_database

*****
7
CLOUD MANAGEMENT MECHANISMS AND CLOUD SECURITY MECHANISMS

Unit Structure
7.1 Objective
7.2 Introduction
7.3 Remote Administration System
7.4 Resource Management System
7.5 SLA Management System
7.6 Billing Management System
7.7 Encryption
7.7.1 Symmetric Encryption
7.7.2 Asymmetric Encryption
7.8 Hashing
7.9 Digital Signature
7.10 Public Key Infrastructure (PKI)
7.11 Identity and Access Management (IAM)
7.12 Single Sign-On (SSO)
7.13 Cloud-Based Security Groups
7.14 Hardened Virtual Server Images
7.15 Unit End Questions
7.17 References

7.1 OBJECTIVE

To understand the security issues and to identify the appropriate security techniques that are being used in the current world of cloud computing; to identify the security challenges that are expected in the future of cloud computing; and to suggest some countermeasures for the future challenges to be faced in cloud computing.

7.2 INTRODUCTION

Cloud management mechanisms are measures taken to ensure that security mechanisms of cloud solutions are in place to deal with security attacks and threats. Cloud management mechanisms can help facilitate the control, management, and evolution of the cloud technology and IT resources that form part of cloud platforms and solutions. As cloud-based IT resources must be configured, set up,
maintained, and monitored, there are systems and mechanisms that should be in place to manage these tasks.

These management mechanisms are discussed below. They typically provide integrated APIs and can be offered as individual products or custom applications, or combined into various product suites or multi-function applications.

7.3 REMOTE ADMINISTRATION SYSTEM

The remote administration system mechanism provides tools and user-interfaces for external cloud resource administrators to configure and administer cloud-based IT resources.

A remote administration system can establish a portal for access to administration and management features of various underlying systems, including the resource management, SLA management, and billing management systems (Figure 7.1).
Fig 7.1

Figure 7.1: The remote administration system abstracts underlying management systems to expose and centralize administration controls to external cloud resource administrators. The system provides a customizable user console, while programmatically interfacing with underlying management systems via their APIs.

The tools and APIs provided by a remote administration system are generally used by the cloud provider to develop and customize online portals that provide cloud consumers with a variety of administrative controls.

The following are the two primary types of portals that are created with the remote administration system:

•Usage and Administration Portal: A general-purpose portal that centralizes management controls to different cloud-based IT resources and can further provide IT resource usage reports.
•Self-Service Portal: This is essentially a shopping portal that allows cloud consumers to search an up-to-date list of cloud services and IT resources that are available from a cloud provider (usually for lease). The cloud consumer submits its chosen items to the cloud provider for provisioning.

Figure 7.2 illustrates a scenario involving a remote administration system and both usage and administration and self-service portals.
Fig 7.2

Figure 7.2: A cloud resource administrator uses the usage and administration portal to configure an already leased virtual server (not shown) to prepare it for hosting (1). The cloud resource administrator then uses the self-service portal to select and request the provisioning of a new cloud service (2). The cloud resource administrator then accesses the usage and administration portal again to configure the newly provisioned cloud service that is hosted on the virtual server (3). Throughout these steps, the remote administration system interacts with the necessary management systems to perform the requested actions (4).

Depending on:

•the type of cloud product or cloud delivery model the cloud consumer is leasing or using from the cloud provider,
•the level of access control granted by the cloud provider to the cloud consumer, and
•which underlying management systems the remote administration system interfaces with,

tasks that can commonly be performed by cloud consumers via a remote administration console include:

•configuring and setting up cloud services
•provisioning and releasing IT resources for on-demand cloud services
•monitoring cloud service status, usage, and performance
•monitoring QoS and SLA fulfillment
•managing leasing costs and usage fees
•managing user accounts, security credentials, authorization, and access control
•tracking internal and external access to leased services
•planning and assessing IT resource provisioning
•capacity planning

While the user-interface provided by the remote administration system will tend to be proprietary to the cloud provider, there is a preference among cloud consumers to work with remote administration systems that offer standardized APIs. This allows a cloud consumer to invest in the creation of its own front-end with the foreknowledge that it can reuse this console if it decides to move to another cloud provider that supports the same standardized API. Additionally, the cloud consumer would be able to further leverage standardized APIs if it is interested in leasing and centrally administering IT resources from multiple cloud providers and/or IT resources residing in cloud and on-premise environments.
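The reuse benefit of standardized APIs can be sketched as follows. The uniform `ProviderAdminAPI` interface here is hypothetical (real providers publish their own proprietary APIs); the sketch only illustrates how one custom portal can drive multiple providers through a shared interface:

```python
class ProviderAdminAPI:
    """Illustrative standardized remote administration API,
    implemented identically by each (hypothetical) provider."""

    def __init__(self, name):
        self.name = name
        self.services = []

    def provision(self, service):   # action behind a self-service portal
        self.services.append(service)
        return f"{service} provisioned on {self.name}"

    def usage_report(self):         # action behind a usage and administration portal
        return {self.name: list(self.services)}

class CustomPortal:
    """Cloud consumer front-end that reuses one console across any
    provider (cloud-based or on-premise) publishing the same API."""

    def __init__(self, providers):
        self.providers = providers

    def provision_everywhere(self, service):
        return [p.provision(service) for p in self.providers]

portal = CustomPortal([ProviderAdminAPI("cloud-x"), ProviderAdminAPI("on-premise")])
print(portal.provision_everywhere("virtual-server"))
# ['virtual-server provisioned on cloud-x', 'virtual-server provisioned on on-premise']
```

Moving to another provider that supports the same API would require no change to the portal code, which is precisely the portability argument made above.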
Fig 7.3

Figure 7.3: Standardized APIs published by remote administration systems from different clouds enable a cloud consumer to develop a custom portal that centralizes a single IT resource management portal for both cloud-based and on-premise IT resources.

7.4 RESOURCE MANAGEMENT SYSTEM

The resource management system mechanism helps coordinate IT resources in response to management actions performed by both cloud consumers and cloud providers (Figure 7.4). Core to this system is the virtual infrastructure manager (VIM), which coordinates the server hardware
so that virtual server instances can be created from the most expedient underlying physical server. VIM is a commercial product that can be used to manage a range of virtual IT resources across multiple physical servers. For example, VIM can create and manage multiple instances of a hypervisor across different physical servers or allocate a virtual server on one physical server to another (or to a resource pool).

Tasks that are typically automated and implemented through the resource management system include:

•managing virtual IT resource templates that are used to create pre-built instances, such as virtual server images
•allocating and releasing virtual IT resources into the available physical infrastructure in response to the starting, pausing, resuming, and termination of virtual IT resource instances
•coordinating IT resources in relation to the involvement of other mechanisms, such as resource replication, load balancer, and failover system
•enforcing usage and security policies throughout the lifecycle of cloud service instances
•monitoring operational conditions of IT resources

Resource management system functions can be accessed by cloud resource administrators employed by the cloud provider or cloud consumer. Those working on behalf of a cloud provider will often be able to directly access the resource management system's native console.

Resource management systems typically expose APIs that allow cloud providers to build remote administration system portals that can be customized to selectively offer resource management controls to external cloud resource administrators acting on behalf of cloud consumer organizations via usage and administration portals.
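The VIM's placement of virtual server instances on the most expedient physical server can be sketched as a simple capacity-based allocation. The host names, capacity units, and placement policy are illustrative assumptions, not how any commercial VIM actually decides:

```python
class VirtualInfrastructureManager:
    """Toy VIM placement sketch: a new virtual server instance is created
    on the physical server with the most free capacity."""

    def __init__(self, hosts):
        # hosts: {host_name: free_capacity_units}
        self.hosts = dict(hosts)
        self.placements = {}

    def create_virtual_server(self, vm_name, required_units):
        candidates = {h: c for h, c in self.hosts.items() if c >= required_units}
        if not candidates:
            raise RuntimeError("no physical server has sufficient capacity")
        host = max(candidates, key=candidates.get)  # most free capacity wins
        self.hosts[host] -= required_units
        self.placements[vm_name] = host
        return host

vim = VirtualInfrastructureManager({"physical-1": 16, "physical-2": 32})
print(vim.create_virtual_server("vm-a", 8))   # physical-2
print(vim.create_virtual_server("vm-b", 20))  # physical-2 (24 units still free)
print(vim.create_virtual_server("vm-c", 8))   # physical-1
```

Real VIMs weigh many more factors (CPU, memory, affinity rules, live-migration cost), but the principle of selecting the most expedient underlying host is the same.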
Fig7.4
Figure 7.4: The cloud consumer's cloud resource administrator accesses a usage and administration portal externally to administer a leased IT resource (1). The cloud provider's cloud resource administrator uses the native user-interface provided by the VIM to perform internal resource management tasks (2).

7.5 SLA MANAGEMENT SYSTEM

The SLA monitor mechanism is used to specifically observe the runtime performance of cloud services to ensure that they are fulfilling the contractual QoS requirements published in SLAs (Figure 7.5). The data collected by the SLA monitor is processed by an SLA management system to be aggregated into SLA reporting metrics. These systems can proactively repair or failover cloud services when exception conditions occur, such as when the SLA monitor reports a cloud service as "down."
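The aggregation of SLA monitor polling results into a reporting metric can be sketched as follows; the log entries are illustrative data:

```python
def sla_availability(poll_results):
    """Sketch of SLA metric aggregation: given the per-cycle results
    recorded by the SLA monitor ("up" or "down"), compute the
    availability percentage the SLA management system would report."""
    up_cycles = sum(1 for result in poll_results if result == "up")
    return 100.0 * up_cycles / len(poll_results)

# Log-database entries gathered over ten polling cycles (illustrative data)
log = ["up"] * 8 + ["down"] * 2
print(f"availability: {sla_availability(log):.1f}%")  # availability: 80.0%
```

An SLA management system would compare such a figure against the contractual QoS threshold (e.g., 99.9%) and trigger repair or failover when it falls short.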
Fig 7.5

Figure 7.5: The SLA monitor polls the cloud service by sending polling request messages (MREQ1 to MREQN). The monitor receives polling response messages (MREP1 to MREPN) that report that the service was "up" at each polling cycle (1a). The SLA monitor stores the "up" time, the time period of all polling cycles 1 to N, in the log database (1b). The SLA monitor polls the cloud service by sending polling request messages (MREQN+1 to MREQN+M). Polling response messages are not received (2a). The response messages continue to time out, so the SLA monitor stores the "down" time, the time period of all polling cycles N+1 to N+M, in the log database (2b). The SLA monitor sends a polling request message (MREQN+M+1) and receives the polling response message (MREPN+M+1) (3a). The SLA monitor stores the "up" time in the log database (3b).

7.6 BILLING MANAGEMENT SYSTEM

The billing management system mechanism is dedicated to the
collection and processing of usage data as it pertains to cloud provider accounting and cloud consumer billing. Specifically, the billing management system relies on pay-per-use monitors to gather runtime usage data that is stored in a repository that the system components then draw from for billing reporting and invoicing purposes (Figure 7.6).

The billing management system allows for the definition of different pricing policies, as well as custom pricing models on a per-cloud consumer and/or per-IT resource basis. Pricing models can vary from the traditional pay-per-use models to flat-rate or pay-per-allocation models, or combinations thereof.

Billing arrangements can be based on pre-usage and post-usage payments. The latter type can include pre-defined limits or can be set up (with the mutual agreement of the cloud consumer) to allow for unlimited usage (and, consequently, no limit on subsequent billing). When limits are established, they are usually in the form of usage quotas. When quotas are exceeded, the billing management system can block further usage requests by cloud consumers.
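The quota-enforcing pay-per-use flow can be sketched as follows; the rate and quota values are illustrative:

```python
class BillingManagementSystem:
    """Toy billing sketch: usage data gathered by a pay-per-use monitor is
    accumulated in a repository; requests beyond the quota are blocked."""

    def __init__(self, rate_cents_per_unit, usage_quota):
        self.rate = rate_cents_per_unit
        self.quota = usage_quota
        self.usage = 0  # stands in for the billing repository

    def record_usage(self, units):
        if self.usage + units > self.quota:
            return False  # quota exceeded: further usage requests are blocked
        self.usage += units
        return True

    def invoice(self):
        # consolidated pay-per-use fee, in cents
        return self.usage * self.rate

billing = BillingManagementSystem(rate_cents_per_unit=5, usage_quota=100)
billing.record_usage(60)
billing.record_usage(30)
print(billing.record_usage(20))  # False: 110 units would exceed the quota
print(billing.invoice())         # 450 (90 units at 5 cents each)
```

Integer cents are used deliberately: monetary amounts should never be accumulated in binary floating point.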
Fig 7.6

Figure 7.6: A cloud service consumer exchanges messages with a cloud service (1). A pay-per-use monitor keeps track of the usage and collects data relevant to billing (2A), which is forwarded to a repository that is part of the billing management system (2B). The system periodically calculates the consolidated cloud service usage fees and generates an invoice for the cloud consumer (3). The invoice may be provided to the cloud consumer through the usage and administration portal (4).
7.7 ENCRYPTION

•Data, by default, is coded in a readable format known as plaintext. When transmitted over a network, plaintext is vulnerable to unauthorized and potentially malicious access.
•The encryption mechanism is a digital coding system dedicated to preserving the confidentiality and integrity of data. It is used for encoding plaintext data into a protected and unreadable format.
•Encryption technology commonly relies on a standardized algorithm called a cipher to transform original plaintext data into encrypted data, referred to as ciphertext.
•When encryption is applied to plaintext data, the data is paired with a string of characters called an encryption key, a secret message that is established by and shared among authorized parties. The encryption key is used to decrypt the ciphertext back into its original plaintext format.
•Data encryption in the cloud is the process of transforming or encoding data before it is moved to cloud storage. Typically, cloud service providers offer encryption services ranging from an encrypted connection to limited encryption of sensitive data, and provide encryption keys to decrypt the data as needed.
•Encryption services like these prevent unauthorized free access to your system or file data without the decryption key, making encryption an effective data security method. Keeping information secure in the cloud should be your top priority. Just taking a few preventative measures around data encryption can tighten security for your most sensitive information. Follow these encryption tips to lock down your information in the cloud.
Fig 7.7

Figure 7.7: A malicious service agent is unable to retrieve data from an encrypted message. The retrieval attempt may furthermore be revealed to the cloud service consumer.

There are two common forms of encryption, known as symmetric encryption and asymmetric encryption.
7.7.1 Symmetric Encryption:

•Symmetric encryption uses the same key for both encryption and decryption, both of which are performed by authorized parties that use the one shared key. Also known as secret key cryptography, messages that are encrypted with a specific key can be decrypted only by that same key. Note that symmetric encryption does not have the characteristic of non-repudiation.

7.7.2 Asymmetric Encryption:

•Asymmetric encryption relies on the use of two different keys, namely a private key and a public key. With asymmetric encryption (which is also referred to as public key cryptography), the private key is known only to its owner while the public key is commonly available. A document that was encrypted with a private key can only be correctly decrypted with the corresponding public key.
•Conversely, a document that was encrypted with a public key can be decrypted only using its private key counterpart. Asymmetric encryption is almost always computationally slower than symmetric encryption. Private key encryption therefore offers integrity protection in addition to authenticity and non-repudiation.
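The defining property of symmetric encryption, that one shared key both encrypts and decrypts, can be illustrated with a toy repeating-key XOR cipher. This is NOT secure; real systems use vetted ciphers such as AES through an established library:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher (repeating-key XOR). Applying it twice with
    the same key returns the original data, which is the essence of
    symmetric (secret key) cryptography. Purely illustrative."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

shared_key = b"secret-key"                      # known only to the two parties
plaintext = b"usage record: 42 units"
ciphertext = xor_cipher(plaintext, shared_key)  # unreadable without the key
recovered = xor_cipher(ciphertext, shared_key)  # same key reverses the operation

print(ciphertext != plaintext)  # True
print(recovered == plaintext)   # True
```

Asymmetric encryption differs precisely here: encryption and decryption use two mathematically related but distinct keys, so the decrypting party never needs to possess the encrypting key.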
Fig 7.8

Figure 7.8: The encryption mechanism is added to the communication channel between outside users and Innovartus' User Registration Portal. This safeguards message confidentiality via the use of HTTPS.

7.8 HASHING

•The hashing mechanism is used when a one-way, non-reversible form of data protection is required. Once hashing has been
applied to a message, it is locked and no key is provided for the message to be unlocked.
•A common application of this mechanism is the storage of passwords. Hashing technology can be used to derive a hashing code or message digest from a message, which is often of a fixed length and smaller than the original message.
•The message sender can then utilize the hashing mechanism to attach the message digest to the message. The recipient applies the same hash function to the message to verify that the produced message digest is identical to the one that accompanied the message.
•Any alteration to the original data results in an entirely different message digest and clearly indicates that tampering has occurred.
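The digest-comparison check described above can be sketched in a few lines with Python's standard hashlib module (the message text is illustrative):

```python
import hashlib

def digest(message: bytes) -> str:
    """Derive a fixed-length message digest from a message of any size."""
    return hashlib.sha256(message).hexdigest()

message = b"transfer 100 credits to account 42"
sent_digest = digest(message)              # sender attaches this digest

# Recipient recomputes the digest and compares it to the attached one.
assert digest(message) == sent_digest      # message is intact

# Any alteration yields an entirely different digest -> tampering detected.
tampered = b"transfer 900 credits to account 66"
assert digest(tampered) != sent_digest
```

Note that hashing is one-way: no key exists to turn a digest back into the original message, which is what makes it suitable for password storage (systems store digests, never the passwords themselves).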
Figure 7.9 A hashing function is applied to protect the integrity of a message that is intercepted and altered by a malicious service agent, before it is forwarded. The firewall can be configured to determine that the message has been altered, thereby enabling it to reject the message before it can proceed to the cloud service.
Figure 7.10 A hashing procedure is invoked when the PaaS environment is accessed (1). The applications that were ported to this environment are checked (2) and their message digests are calculated (3). The message digests are stored in a secure on-premise database (4), and a notification is issued if any of their values are not identical to the ones in storage.

7.9 DIGITAL SIGNATURE
•The digital signature mechanism is a means of providing data authenticity and integrity through authentication and non-repudiation.
•A message is assigned a digital signature prior to transmission, which is then rendered invalid if the message experiences any subsequent, unauthorized modifications.
•A digital signature provides evidence that the message received is the same as the one created by its rightful sender.
•Both hashing and asymmetric encryption are involved in the creation of a digital signature, which essentially exists as a message digest that was encrypted by a private key and appended to the original message. The recipient verifies the signature validity and uses the corresponding public key to decrypt the digital signature, which produces the message digest.
Figure 7.11 Cloud Service Consumer B sends a message that was digitally signed but was altered by trusted attacker Cloud Service Consumer A. Virtual Server B is configured to verify digital signatures before processing incoming messages even if they are within its trust boundary. The message is revealed as illegitimate due to its invalid digital signature, and is therefore rejected by Virtual Server B.
Figure 7.12 Whenever a cloud consumer performs a management action that is related to IT resources provisioned by DTGOV, the cloud service consumer program must include a digital signature in the message request to prove the legitimacy of its user.
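The sign-then-verify flow (hash the message, encrypt the digest with the private key, decrypt it with the public key) can be sketched with textbook-sized RSA numbers. The key pair below is the classic toy example (p=61, q=53) and is far too small for any real use:

```python
import hashlib

# Toy RSA key pair (p=61, q=53): for illustration only.
n, e, d = 3233, 17, 2753   # public modulus, public exponent, private exponent

def sign(message):
    """Hash the message, then encrypt the digest with the PRIVATE key."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message, signature):
    """Decrypt the signature with the PUBLIC key; compare to a fresh digest."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

msg = b"provision one virtual server"
sig = sign(msg)
assert verify(msg, sig)               # authentic and unmodified

# Any unauthorized modification of the signature renders it invalid.
assert not verify(msg, (sig + 1) % n)
```

Because only the private-key holder can produce a signature that the public key validates, the recipient gains authenticity and non-repudiation in addition to the integrity check provided by the digest.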
7.10 PUBLIC KEY INFRASTRUCTURE (PKI)
•A common approach for managing the issuance of asymmetric keys is based on the public key infrastructure (PKI) mechanism, which exists as a system of protocols, data formats, rules, and practices that enable large-scale systems to securely use public key cryptography.
•This system is used to associate public keys with their corresponding key owners (known as public key identification) while enabling the verification of key validity.
•PKIs rely on the use of digital certificates, which are digitally signed data structures that bind public keys to certificate owner identities, as well as to related information, such as validity periods. Digital certificates are usually digitally signed by a third-party certificate authority (CA), as illustrated in Figure 7.13.
Figure 7.13 The common steps involved during the generation of certificates by a certificate authority.
•Larger organizations, such as Microsoft, can act as their own CA and issue certificates to their clients and the public, since even individual users can generate certificates as long as they have the appropriate software tools.
•The PKI is a dependable method for implementing asymmetric encryption, managing cloud consumer and cloud provider identity information, and helping to defend against the malicious intermediary and insufficient authorization threats.
•The PKI mechanism is primarily used to counter the insufficient authorization threat.
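The binding a digital certificate expresses — a public key tied to an owner identity and validity period, vouched for by the CA's signature over the whole structure — can be sketched as follows. The field names, toy CA signing key, and subject values are illustrative assumptions, not a real certificate format such as X.509:

```python
import hashlib, json

# Toy CA signing key (RSA with p=61, q=53) -- illustration only.
N, E, D = 3233, 17, 2753

certificate = {
    "subject": "cloud-consumer-a.example",
    "public_key": "subject-public-key-placeholder",
    "valid_from": "2024-01-01",
    "valid_to": "2025-01-01",
    "issuer": "Example Root CA",
}

# The CA signs a digest of the certificate's contents.
payload = json.dumps(certificate, sort_keys=True).encode()
digest = int.from_bytes(hashlib.sha256(payload).digest(), "big") % N
ca_signature = pow(digest, D, N)

# Anyone holding the CA's public key can verify the binding.
def is_valid(cert, signature):
    data = json.dumps(cert, sort_keys=True).encode()
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % N
    return pow(signature, E, N) == h

assert is_valid(certificate, ca_signature)
```

The essential point is that trust in many subject keys is reduced to trust in one CA key: a consumer need only hold the CA's public key to check any certificate the CA has issued.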
Figure 7.14 An external cloud resource administrator uses a digital certificate to access the Web-based management environment. DTGOV's digital certificate is used in the HTTPS connection and then signed by a trusted CA.

7.11 IDENTITY AND ACCESS MANAGEMENT (IAM)
The identity and access management (IAM) mechanism encompasses the components and policies necessary to control and track user identities and access privileges for IT resources, environments, and systems. Specifically, IAM mechanisms exist as systems comprised of four main components:
1. Authentication: Username and password combinations remain the most common forms of user authentication credentials managed by the IAM system, which can also support digital signatures, digital certificates, biometric hardware (fingerprint readers), specialized software (such as voice analysis programs), and locking user accounts to registered IP or MAC addresses.
2. Authorization: The authorization component defines the correct granularity for access controls and oversees the relationships between identities, access control rights, and IT resource availability.
3. User Management: Related to the administrative capabilities of the system, the user management program is responsible for creating new user identities and access groups, resetting passwords, defining password policies, and managing privileges.
4. Credential Management: The credential management system establishes identities and access control rules for defined user accounts, which mitigates the threat of insufficient authorization. The IAM
mechanism is primarily used to counter the insufficient authorization, denial of service, and overlapping trust boundaries threats.

7.12 SINGLE SIGN-ON (SSO)
•Propagating the authentication and authorization information for a cloud service consumer across multiple cloud services can be a challenge, especially if numerous cloud services or cloud-based IT resources need to be invoked as part of the same overall runtime activity.
•The single sign-on (SSO) mechanism enables one cloud service consumer to be authenticated by a security broker, which establishes a security context that is persisted while the cloud service consumer accesses other cloud services or cloud-based IT resources.
•Otherwise, the cloud service consumer would need to re-authenticate itself with every subsequent request. The SSO mechanism essentially enables mutually independent cloud services and IT resources to generate and circulate runtime authentication and authorization credentials.
Figure 7.15 A cloud service consumer provides the security broker with login credentials (1). The security broker responds with an authentication token (message with small lock symbol) upon successful authentication, which contains cloud service consumer identity information (2) that is used to automatically authenticate the cloud service consumer across Cloud Services A, B, and C (3).
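The broker flow in Figure 7.15 — authenticate once, then present the broker's token to each service — can be sketched with an HMAC-protected token. The broker secret, credential store, and service names below are illustrative assumptions:

```python
import hmac, hashlib, json

BROKER_SECRET = b"shared-with-broker-only"   # illustrative broker secret
USERS = {"consumer-a": "pa55word"}           # illustrative credential store

def broker_login(user, password):
    """Steps 1-2: authenticate once; issue a signed authentication token."""
    if USERS.get(user) != password:
        return None
    payload = json.dumps({"user": user}).encode().hex()
    mac = hmac.new(BROKER_SECRET, bytes.fromhex(payload), hashlib.sha256)
    return payload + "." + mac.hexdigest()

def service_accepts(token):
    """Step 3: each cloud service validates the broker's token, so the
    consumer never re-authenticates with its own credentials."""
    payload, _, mac = token.partition(".")
    expected = hmac.new(BROKER_SECRET, bytes.fromhex(payload), hashlib.sha256)
    return hmac.compare_digest(mac, expected.hexdigest())

token = broker_login("consumer-a", "pa55word")
assert token is not None
# The same token authenticates the consumer across services A, B, and C.
assert all(service_accepts(token) for service in ("A", "B", "C"))
assert broker_login("consumer-a", "wrong") is None
```

The security context persists for as long as the token is honored, which is exactly the re-authentication burden that SSO removes.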
Figure 7.16 The credentials received by the security broker are propagated to ready-made environments across two different clouds. The security broker is responsible for selecting the appropriate security procedure with which to contact each cloud.

7.13 CLOUD-BASED SECURITY GROUPS
•Cloud resource segmentation is a process by which separate physical and virtual IT environments are created for different users and groups. For example, an organization's WAN can be partitioned according to individual network security requirements.
•One network can be established with a resilient firewall for external Internet access, while a second is deployed without a firewall because its users are internal and unable to access the Internet. Resource segmentation is used to enable virtualization by allocating a variety of physical IT resources to virtual machines.
•The cloud-based resource segmentation process creates cloud-based security group mechanisms that are determined through security policies. Networks are segmented into logical cloud-based security groups that form logical network perimeters. Multiple virtual servers running on the same physical server can become members of different logical cloud-based security groups (Figure 7.17).
•Virtual servers can further be separated into public-private groups, development-production groups, or any other designation configured by the cloud resource administrator.
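A cloud-based security group is essentially a named set of traffic rules attached to virtual servers. The minimal sketch below models that idea; the rule fields, group names, and server names are illustrative, loosely patterned on the inbound rules of IaaS security groups:

```python
import ipaddress

# Each security group is a set of inbound rules; a virtual server inherits
# the rules of the group it belongs to.
SECURITY_GROUPS = {
    "group-a": [{"protocol": "tcp", "port": 443, "source": "0.0.0.0/0"}],
    "group-b": [{"protocol": "tcp", "port": 22,  "source": "10.0.0.0/8"}],
}
SERVER_MEMBERSHIP = {"vm-a": "group-a", "vm-d": "group-a", "vm-b": "group-b"}

def matches(ip, cidr):
    """True if the source address falls inside the rule's CIDR range."""
    return ipaddress.ip_address(ip) in ipaddress.ip_network(cidr)

def is_allowed(server, protocol, port, source):
    """Permit traffic only if some rule in the server's group matches."""
    rules = SECURITY_GROUPS[SERVER_MEMBERSHIP[server]]
    return any(r["protocol"] == protocol and r["port"] == port and
               matches(source, r["source"]) for r in rules)

assert is_allowed("vm-a", "tcp", 443, "203.0.113.9")     # public HTTPS
assert not is_allowed("vm-a", "tcp", 22, "203.0.113.9")  # SSH not in group-a
assert is_allowed("vm-b", "tcp", 22, "10.1.2.3")         # internal SSH only
assert not is_allowed("vm-b", "tcp", 22, "203.0.113.9")
```

Because the groups form separate logical perimeters, compromising the credentials behind one group exposes only that group's servers, mirroring the containment shown in Figure 7.17.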
Figure 7.17 Cloud-Based Security Group A encompasses Virtual Servers A and D and is assigned to Cloud Consumer A. Cloud-Based Security Group B is comprised of Virtual Servers B, C, and E and is assigned to Cloud Consumer B. If Cloud Service Consumer A's credentials are compromised, the attacker would only be able to access and damage the virtual servers in Cloud-Based Security Group A, thereby protecting Virtual Servers B, C, and E.
Figure 7.18 When an external cloud resource administrator accesses the Web portal to allocate a virtual server, the requested security credentials are assessed and mapped to an internal security policy that assigns a corresponding cloud-based security group to the new virtual server.

7.14 HARDENED VIRTUAL SERVER IMAGES
•As previously discussed, a virtual server is created from a template configuration called a virtual server image (or virtual machine image). Hardening is the process of stripping unnecessary software from a system to limit potential vulnerabilities that can be exploited by attackers.
•Removing redundant programs, closing unnecessary server ports, and disabling unused services, internal root accounts, and guest access are all examples of hardening.
•A hardened virtual server image is a template for virtual service instance creation that has been subjected to a hardening process (Figure 7.19).
•This generally results in a virtual server template that is significantly more secure than the original standard image.
Figure 7.19 A cloud provider applies its security policies to harden its standard virtual server images. The hardened image template is saved in the VM images repository as part of a resource management system.
•Hardened virtual server images help counter the denial of service, insufficient authorization, and overlapping trust boundaries threats.
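Applied to an image template, hardening is a policy-driven subtraction: remove whatever the workload does not need. The sketch below models one hardening pass; the image description, service names, and policy fields are illustrative assumptions, not a real image format:

```python
# Illustrative hardening pass: strip services, ports, and accounts that a
# security policy does not explicitly allow.
standard_image = {
    "services": {"sshd", "httpd", "telnetd", "ftpd"},
    "open_ports": {22, 80, 23, 21},
    "accounts": {"admin", "guest", "root"},
}

policy = {
    "allowed_services": {"sshd", "httpd"},
    "allowed_ports": {22, 80},
    "allowed_accounts": {"admin"},
}

def harden(image, policy):
    """Return a hardened copy of the image template (set intersection
    keeps only what the policy explicitly allows)."""
    return {
        "services": image["services"] & policy["allowed_services"],
        "open_ports": image["open_ports"] & policy["allowed_ports"],
        "accounts": image["accounts"] & policy["allowed_accounts"],
    }

hardened_image = harden(standard_image, policy)
assert hardened_image["services"] == {"sshd", "httpd"}
assert 23 not in hardened_image["open_ports"]     # telnet port closed
assert "guest" not in hardened_image["accounts"]  # guest access disabled
```

Every instance provisioned from the hardened template then starts from this reduced attack surface, rather than relying on per-server cleanup after deployment.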
Figure 7.20 The cloud resource administrator chooses the hardened virtual server image option for the virtual servers provisioned for Cloud-Based Security Group B.

SUMMARY
Cloud technology is growing very fast and has become very popular, so everyone is concerned about the security of their data over the Internet. Cloud security therefore uses many network security mechanisms to make the cloud more secure and robust. Management systems such as remote administration, billing management and resource management make it easier to manage a cloud that would otherwise be difficult to administer. Hashing and public key infrastructure provide strong security enhancements to cloud services. Since cloud services are available as private, public and hybrid deployments, security is very important for maintaining the privacy of both the user and the service provider.

UNIT END QUESTIONS
1. Explain the remote administration system. What are the two portals created by the remote administration system?
2. What are the tasks that can commonly be performed by cloud consumers via a remote administration console? Explain.
3. What is resource management? What are the tasks that are typically automated and implemented through the resource management system?
4. Explain the SLA management system in detail.
5. Explain the billing management system in detail.
6. What is encryption? Explain its types.
211Figure7.20The cloud resource administrator chooses the hardened virtualserver image option for the virtual servers provisioned for Cloud-BasedSecurity Group B.SUMMARYNow as we all know cloud technology growing very fast andbecome so popular today, so in thiscase everyone is very curious abouttheir security and data over the internet. So, cloud securityuses so manymechanism of network security to make cloud more secure androbustness.Various method available to easiest method like remotemanagement, billing management &resource management etc.available to manage your cloud which is more difficult to manage inrecent times. Hashing and Public key infrastructure provide the strongsecurity enhancement tothe cloud services. As we know cloud servicesavailable as private, public and hybrid, so in thiscase security is veryimportant to maintain the privacy of the user and service provider.UNIT END QUESTION1.Explain Remote administration system. What are the two portalscreate by remote administration system?2.What are the tasks that can commonly be performed by cloudconsumers via a remote administration console? Explain.3.What is Resource management? What are the tasks that are typicallyautomated and implemented through the resource managementsystem?4.Explain SLA management system in detail.5.Explain Billing management system in detail.6.What is an encryption? Explain its types.
211Figure7.20The cloud resource administrator chooses the hardened virtualserver image option for the virtual servers provisioned for Cloud-BasedSecurity Group B.SUMMARYNow as we all know cloud technology growing very fast andbecome so popular today, so in thiscase everyone is very curious abouttheir security and data over the internet. So, cloud securityuses so manymechanism of network security to make cloud more secure androbustness.Various method available to easiest method like remotemanagement, billing management &resource management etc.available to manage your cloud which is more difficult to manage inrecent times. Hashing and Public key infrastructure provide the strongsecurity enhancement tothe cloud services. As we know cloud servicesavailable as private, public and hybrid, so in thiscase security is veryimportant to maintain the privacy of the user and service provider.UNIT END QUESTION1.Explain Remote administration system. What are the two portalscreate by remote administration system?2.What are the tasks that can commonly be performed by cloudconsumers via a remote administration console? Explain.3.What is Resource management? What are the tasks that are typicallyautomated and implemented through the resource managementsystem?4.Explain SLA management system in detail.5.Explain Billing management system in detail.6.What is an encryption? Explain its types.

7. What is hashing? Explain with the help of a diagram.
8. Explain digital signatures in detail.
9. What is PKI (Public Key Infrastructure)? Explain in detail.
10. What is Identity and Access Management (IAM)? Explain its components.
11. Explain Single Sign-On (SSO) in detail.
12. Explain Cloud-Based Security Groups in detail.
13. How are virtual server images hardened? Explain.

REFERENCES

• https://patterns.arcitura.com/cloud-computing-patterns/mechanisms/remote_administration_system
• Cloud Computing: Concepts, Technology & Architecture, Thomas Erl, Zaigham Mahmood, and Ricardo Puttini, Prentice Hall, 2013.


UNIT IV

8
FUNDAMENTAL CLOUD ARCHITECTURES

Unit Structure
8.0 Objectives
8.1 Introduction
8.2 Workload Distribution Architecture
8.3 Resource Pooling Architecture
8.4 Dynamic Scalability Architecture
8.5 Elastic Resource Capacity Architecture
8.6 Service Load Balancing Architecture
8.7 Cloud Bursting Architecture
8.8 Elastic Disk Provisioning Architecture
8.9 Redundant Storage Architecture
8.10 Summary
8.11 Questions
8.12 References

8.0 OBJECTIVE

To learn how to use cloud services.

8.1 INTRODUCTION

This chapter introduces and describes several of the more common foundational cloud architectural models, each explaining a common usage and characteristic of modern-day cloud-based environments. The chapter also explores the involvement and importance of different combinations of cloud computing mechanisms in relation to these architectures.

8.2 WORKLOAD DISTRIBUTION ARCHITECTURE

• Resources on the cloud can be horizontally scaled through the addition of identical resources and a load balancer that is capable of providing runtime distribution of workload among those resources.
• This distribution architecture has a dual advantage:
i. Reduces overutilization of resources.


ii. Reduces underutilization of resources.
• Workload distribution is carried out in support of distributed virtual servers, storage devices and services.
Figure: Workload Distribution Architecture

• The load balancing system produces specialized variations that incorporate aspects of load balancing, such as:
i. Load Balanced Service Instances Architecture
ii. Load Balanced Virtual Server Instances Architecture
iii. Load Balanced Virtual Switches Architecture
• In addition to the above-mentioned mechanisms, the following mechanisms can also be part of this cloud architecture:
i. Audit Monitor: Resources that process the data can determine whether monitoring is necessary to fulfill legal and regulatory requirements.
ii. Cloud Usage Monitor: Various monitors can be involved to carry out runtime workload tracking and data processing.
iii. Hypervisor: Workloads between hypervisors and the virtual servers that they host may require distribution.
iv. Logical Network Perimeter: The logical network perimeter isolates cloud consumer network boundaries in relation to how and where workloads are distributed.
v. Resource Cluster: Clustered IT resources in active/active mode are commonly used to support workload balancing between different cluster nodes.
vi. Resource Replication: This mechanism can generate new instances of virtualized IT resources in response to runtime workload distribution demands.
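The runtime workload distribution performed by the load balancer can be illustrated with a small round-robin dispatcher. This is a sketch only; the virtual server names and request counts are hypothetical and not taken from the text.

```python
from itertools import cycle

class LoadBalancer:
    """Round-robin distribution of incoming workload across identical resources."""

    def __init__(self, resources):
        self.resources = resources
        self._next = cycle(resources)            # rotate through the pool
        self.assigned = {r: 0 for r in resources}

    def dispatch(self, request_id):
        """Send one request to the next resource in rotation."""
        resource = next(self._next)
        self.assigned[resource] += 1
        return resource

# Three identical virtual servers behind the balancer (hypothetical names).
balancer = LoadBalancer(["vs-1", "vs-2", "vs-3"])
for request_id in range(9):
    balancer.dispatch(request_id)

# Workload is spread evenly: no server is over- or under-utilized.
print(balancer.assigned)   # {'vs-1': 3, 'vs-2': 3, 'vs-3': 3}
```

Round robin is only one possible strategy; weighted and least-connection variants follow the same dispatch pattern while choosing the target differently.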

8.3 RESOURCE POOLING ARCHITECTURE

• This architecture is based on the use of one or more pools of resources, in which identical, synchronized resources are grouped and maintained by a system. Examples of resource pools:

1) Physical server pools: are groups of networked physical servers that have operating systems and other necessary programs and/or applications installed and are ready for immediate use.
2) Virtual server pools: are groups of networked virtual servers that have operating systems and other necessary programs and/or applications installed and are ready for immediate use. They are usually configured using one of several available templates chosen by the cloud consumer during provisioning.
• For example, a cloud consumer can set up a pool of mid-tier Windows servers with 4 GB of RAM or a pool of low-tier Ubuntu servers with 2 GB of RAM.

3) Storage pools, or cloud storage device pools: are groups of file-based or block-based storage structures that contain empty and/or filled cloud storage devices.
4) Network pools (or interconnect pools): are groups of different preconfigured network connectivity devices.
• For example, a pool of virtual firewall devices or physical network switches can be created for redundant connectivity, load balancing, etc.
5) CPU pools: are groups of processing units ready to be allocated to virtual servers, and are typically broken down into individual processing cores.
• Pools of physical RAM can be used in newly provisioned physical servers or to vertically scale physical servers.
• Dedicated pools can be created for each type of resource, and individual pools can be grouped into a larger pool, in which case each individual pool becomes a sub-pool.
• Resource pools can become highly complex, with multiple pools created for specific cloud consumers or applications.
• A hierarchical structure can be established to form parent, sibling, and nested pools in order to facilitate the organization of diverse resource pooling requirements, as shown in the figure below.
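The parent, sibling, and nested pool hierarchy described above can be modeled as pools that aggregate the capacity of their sub-pools. This is an illustrative sketch; the pool names and unit counts are hypothetical.

```python
class ResourcePool:
    """A pool of identical resources that may contain nested sub-pools."""

    def __init__(self, name, units=0):
        self.name = name
        self.units = units        # resources held directly by this pool
        self.sub_pools = []

    def add_sub_pool(self, pool):
        self.sub_pools.append(pool)
        return pool

    def total_units(self):
        """Capacity of this pool plus everything nested beneath it."""
        return self.units + sum(p.total_units() for p in self.sub_pools)

# A parent pool with two sibling sub-pools, one of which nests a further pool.
parent = ResourcePool("pool-A")
servers = parent.add_sub_pool(ResourcePool("virtual-servers", units=8))
storage = parent.add_sub_pool(ResourcePool("storage-devices", units=4))
storage.add_sub_pool(ResourcePool("block-storage", units=2))

print(parent.total_units())   # 14
```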

Figure: Different Pool Architectures

The dynamic scalability architecture is an architectural model based on a system of predefined scaling conditions that trigger the dynamic allocation of IT resources from resource pools. Dynamic allocation enables variable utilization as dictated by usage demand fluctuations, since unnecessary IT resources are efficiently reclaimed without requiring manual interaction.

The automated scaling listener is configured with workload thresholds that dictate when new IT resources need to be added to the workload processing. This mechanism can be provided with logic that determines how many additional IT resources can be dynamically provided, based on the terms of a given cloud consumer's provisioning contract.

8.4 DYNAMIC SCALABILITY ARCHITECTURE

• This architecture is a model based on a system of predefined resource pool scaling conditions that trigger the dynamic allocation of cloud resources from pools.
• Dynamic allocation enables variable utilization as defined by usage demand fluctuations, resulting in effective resource utilization; unnecessary resources are efficiently reclaimed without requiring manual interaction.
• There are three types of dynamic scaling:

1. Dynamic Horizontal Scaling: in this type the resource instances are scaled out and in to handle dynamic workloads during execution. The automated scaling listener monitors requests and signals resource replication to initiate resource duplication, as per the requirements and permissions set by the administrator.

2. Dynamic Vertical Scaling: in this type the resource instances are scaled up and down when there is a need to adjust the processing capacity of a single resource. For example, a virtual server that is being overloaded can have its memory dynamically increased, or it may have a processing core added.
3. Dynamic Relocation: in this type the resource is relocated to a host with more capacity. For example, a file may need to be moved from a tape-based SAN storage device with 4 GB per second I/O capacity to another disk-based SAN storage device with 8 GB per second I/O capacity.

The dynamic scalability architecture can be applied to a range of IT resources, including virtual servers and cloud storage devices. Along with the core automated scaling listener and resource replication mechanisms, the following mechanisms can also be used in this form of cloud architecture:
• Cloud Usage Monitor: Specialized cloud usage monitors can track runtime usage in response to the dynamic fluctuations caused by this architecture.

• Hypervisor: The hypervisor is invoked by a dynamic scalability system to create or remove virtual server instances, or to be scaled itself.
• Pay-Per-Use Monitor: The pay-per-use monitor is engaged to collect usage cost information in response to the scaling of IT resources.

8.5 ELASTIC RESOURCE CAPACITY ARCHITECTURE

• This architecture is related to the dynamic provisioning of virtual servers, using a system that allocates and reclaims processors and memory in immediate response to the fluctuating processing requirements of hosted cloud resources.
• Resource pools are used by scaling technology that interacts with the hypervisor and/or VIM to retrieve and return CPU and RAM resources at runtime.
• The runtime processing of the virtual server is monitored so that additional processing power can be leveraged from the resource pool via dynamic allocation, before capacity thresholds are met.
• The virtual server and its hosted applications and resources are vertically scaled in response.
• This type of cloud architecture can be designed so that the intelligent automation engine script sends its scaling request via the VIM instead of to the hypervisor directly.
• Virtual servers that participate in elastic resource allocation systems may require rebooting in order for the dynamic resource allocation to take effect.

8.6 SERVICE LOAD BALANCING ARCHITECTURE

• This architecture is a specialized variant of the workload distribution architecture that is geared specifically for scaling cloud service implementations.
• Redundant deployments of cloud services are created, with a load balancing system added to dynamically distribute workloads.

• The duplicate cloud service implementations are organized into a resource pool, while the load balancer is positioned as either an external or built-in component to allow the host servers to balance the workloads themselves.
• Depending on the anticipated workload and processing capacity of host server environments, multiple instances of each cloud service implementation can be generated as part of a resource pool that responds to fluctuating request volumes more efficiently.
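The balancing step among duplicate service implementations can be sketched as choosing the least-loaded instance from the resource pool. The instance names and load figures below are hypothetical.

```python
def pick_implementation(load_by_instance):
    """Choose the duplicate cloud service implementation carrying
    the lightest current load."""
    return min(load_by_instance, key=load_by_instance.get)

# Duplicate implementations of one cloud service, organized into a
# resource pool (instance names and load figures are hypothetical).
loads = {"svc-a": 12, "svc-b": 3, "svc-c": 7}

target = pick_implementation(loads)
loads[target] += 1     # the chosen instance absorbs the new request
print(target)          # svc-b
```

Repeating this choice for every incoming request keeps the workload spread across the redundant deployments rather than concentrated on one host.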
8.7 CLOUD BURSTING ARCHITECTURE

• This architecture establishes a form of dynamic scaling that scales, or "bursts out", on-premise IT resources into a cloud whenever predefined capacity thresholds have been reached.
• The corresponding cloud-based resources are redundantly pre-deployed but remain inactive until cloud bursting occurs. After they are no longer required, the resources are released and the architecture "bursts in" back to the on-premise environment.
• Cloud bursting is a flexible scaling architecture that provides cloud consumers with the option of using cloud-based IT resources only to meet higher usage demands.
• The foundation of this architectural model is based on the automated scaling listener and resource replication mechanisms.
• The automated scaling listener determines when to redirect requests to cloud-based resources, and resource replication is used to maintain synchronicity between on-premise and cloud-based IT resources in relation to state information.
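The burst-out/burst-in decision made by the automated scaling listener can be sketched as a simple threshold check. The capacity figures used here are hypothetical.

```python
def route_request(on_premise_load, on_premise_capacity):
    """Automated scaling listener sketch: requests stay on-premise until
    the predefined capacity threshold is reached, then 'burst out' to
    the pre-deployed (but normally inactive) cloud-based resources."""
    if on_premise_load < on_premise_capacity:
        return "on-premise"     # normal operation, or 'burst in' again
    return "cloud"              # threshold reached: burst out

assert route_request(40, 100) == "on-premise"   # comfortably within capacity
assert route_request(100, 100) == "cloud"       # threshold hit: burst out
assert route_request(20, 100) == "on-premise"   # demand fell: burst back in
```

A real listener would also trigger resource replication so that the cloud-side copies stay synchronized with on-premise state before any request is redirected.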

8.8 ELASTIC DISK PROVISIONING ARCHITECTURE

• Cloud consumers are commonly charged for cloud-based storage space based on fixed-disk storage allocation, meaning the charges are predetermined by disk capacity and not aligned with actual data storage consumption.
• For example, a cloud provisions a virtual server with the Windows Server 2019 OS and three 800 GB hard drives. The cloud consumer is billed for using 2400 GB of storage space after installing the operating system, even though the operating system only requires 25 GB of storage space.
• The elastic disk provisioning architecture establishes a dynamic storage provisioning system that ensures that the cloud consumer is granularly billed for the exact amount of storage that it actually uses. This system uses thin-provisioning technology for the dynamic allocation of storage space, and is further supported by runtime usage monitoring to collect accurate usage data for billing purposes.
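The billing difference between fixed-disk allocation and thin provisioning can be illustrated with the example above, where 2400 GB are allocated but only 25 GB are used. The per-GB rate is hypothetical, and amounts are kept in integer cents for clarity.

```python
def fixed_allocation_bill_cents(allocated_gb, rate_cents_per_gb):
    """Charges predetermined by disk capacity, regardless of actual use."""
    return allocated_gb * rate_cents_per_gb

def thin_provisioned_bill_cents(used_gb, rate_cents_per_gb):
    """Granular charges for the storage actually consumed, as reported
    by the runtime usage monitoring that supports billing."""
    return used_gb * rate_cents_per_gb

RATE = 5  # hypothetical rate: 5 cents per GB per billing period

# Three 800 GB drives allocated, but only 25 GB actually used.
print(fixed_allocation_bill_cents(3 * 800, RATE))   # 12000 cents
print(thin_provisioned_bill_cents(25, RATE))        # 125 cents
```

The gap between the two figures is exactly the over-provisioned capacity that elastic disk provisioning removes from the consumer's bill.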

• Thin-provisioning software is installed on virtual servers that process dynamic storage allocation via the hypervisor, while the pay-per-use monitor tracks and reports granular billing-related disk usage data.

8.9 REDUNDANT STORAGE ARCHITECTURE

• Cloud storage devices are occasionally subject to failure and disruptions that are caused by network connectivity issues, controller or general hardware failure, or security breaches.
• A compromised cloud storage device's reliability can have a ripple effect, causing failures across all of the services, applications, and infrastructure components in the cloud that rely on its availability.
• The redundant storage architecture introduces a secondary duplicate cloud storage device as part of a failover system that synchronizes its data with the data in the primary cloud storage device.
• A storage service gateway diverts cloud consumer requests to the secondary device whenever the primary device fails.
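The failover behaviour of the storage service gateway can be sketched as follows. Device names and contents are hypothetical, and a real system would replicate continuously rather than copy the data once.

```python
class StorageDevice:
    """A cloud storage device that may become unavailable."""

    def __init__(self, data):
        self.data = dict(data)
        self.available = True

class StorageServiceGateway:
    """Diverts consumer requests to the secondary device when the
    primary device fails."""

    def __init__(self, primary, secondary):
        self.primary = primary
        self.secondary = secondary

    def read(self, key):
        device = self.primary if self.primary.available else self.secondary
        return device.data.get(key)

# The replication system keeps primary and secondary synchronized.
primary = StorageDevice({"report.txt": "v7"})
secondary = StorageDevice(primary.data)        # synchronized duplicate
gateway = StorageServiceGateway(primary, secondary)

assert gateway.read("report.txt") == "v7"      # served by the primary
primary.available = False                      # hardware failure / disruption
assert gateway.read("report.txt") == "v7"      # transparently failed over
```

The cloud consumer never addresses a device directly; only the gateway's routing decision changes when the primary fails.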

• This cloud architecture primarily relies on a storage replication system that keeps the primary cloud storage device synchronized with its duplicate secondary cloud storage devices.
• Cloud providers may locate secondary cloud storage devices in a different geographical region than the primary cloud storage device, usually for economic reasons.
• The location of the secondary cloud storage devices can dictate the protocol and method used for synchronization, since some replication transport protocols have distance restrictions.

SUMMARY

In this chapter we learned some common foundational cloud architectural models, each explaining a common usage and characteristic of modern-day cloud-based environments. The chapter also explored the involvement and importance of different combinations of cloud computing mechanisms in relation to these architectures.

QUESTIONS

1. Write a short note on the workload distribution architecture.
2. Which architecture is based on the use of one or more resources from a pool of resources, in which identical synchronized resources are grouped and maintained by a system? Explain.
3. Explain the cloud architecture model that is based on a system of predefined resource pool scaling conditions that trigger the dynamic allocation of cloud resources from pools.
4. Explain, using a suitable diagram, the Elastic Resource Capacity Architecture.

5. What is the Service Load Balancing Architecture? Explain.
6. Write a note on the Cloud Bursting Architecture.
7. Explain, using a suitable diagram, the Elastic Disk Provisioning Architecture.
8. Which cloud storage architecture is suitable if devices are occasionally subject to failure and disruptions caused by network connectivity issues, controller or general hardware failure, or security breaches? Explain using a suitable diagram.
9. Explain, using a suitable diagram, the Redundant Storage Architecture.
10. List any five commonly used cloud architecture models.

REFERENCES

• Foundations of Modern Networking: SDN, NFV, QoE, IoT, and Cloud, William Stallings, Addison-Wesley Professional, October 2015.
• SDN and NFV Simplified: A Visual Guide to Understanding Software Defined Networks and Network Function Virtualization, Jim Doherty, Pearson Education.
• Network Functions Virtualization (NFV) with a Touch of SDN, Rajendra Chayapathi, Syed Farrukh Hassan, Addison-Wesley.
• CCIE and CCDE Evolving Technologies Study Guide, Brad Edgeworth, Jason Gooley, Ramiro Garza Rios, Pearson Education, Inc., 2019.


9
ADVANCED CLOUD ARCHITECTURES
Unit Structure
9.0 Objective
9.1 Hypervisor Clustering Architecture
9.2 Load Balanced Virtual Server Instances Architecture
9.3 Non-Disruptive Service Relocation Architecture
9.4 Zero Downtime Architecture
9.5 Cloud Balancing Architecture
9.6 Resource Reservation Architecture
9.7 Dynamic Failure Detection and Recovery Architecture
9.8 Bare-Metal Provisioning Architecture
9.9 Rapid Provisioning Architecture
9.10 Storage Workload Management Architecture
9.11 Summary
9.12 Questions
9.13 References
9.0 OBJECTIVE
•To learn how to use advanced cloud services.
•This chapter introduces distinct and sophisticated cloud technology architectures, several of which can be built upon the more foundational environments established by the architectural models discussed in the previous chapter.
9.1 HYPERVISOR CLUSTERING ARCHITECTURE
•Hypervisors are responsible for creating and hosting multiple virtual servers.
•Because of this dependency, any failure condition that affects a hypervisor can have a cascading effect on its virtual servers.
•The hypervisor clustering architecture establishes a high-availability cluster of hypervisors across multiple physical servers.
•If a given hypervisor or its underlying physical server becomes unavailable, the hosted virtual servers can be moved to another physical server or hypervisor to maintain runtime operations.


•The hypervisor cluster is controlled via a central VIM, which sends regular heartbeat messages to the hypervisors to confirm that they are up and running.
•Unacknowledged heartbeat messages cause the VIM to initiate the live VM migration program, in order to dynamically move the affected virtual servers to a new host.
9.2 LOAD BALANCED VIRTUAL SERVER INSTANCES ARCHITECTURE
•Keeping cross-server workloads evenly balanced between physical servers whose operation and management are isolated can be one of the most challenging parts of operating a cloud.
•A physical server can easily end up hosting more virtual servers or receiving larger workloads than its neighboring physical servers.
•Both physical server over-utilization and under-utilization can increase dramatically over time, leading to ongoing performance challenges (for over-utilized servers) and constant waste (for the lost processing potential of under-utilized servers).
•The load balanced virtual server instances architecture establishes a capacity watchdog system that dynamically calculates virtual server instances and associated workloads, before distributing the processing across available physical server hosts.
•The capacity watchdog system is comprised of a capacity watchdog cloud usage monitor, the live VM migration program, and a capacity planner.
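The heartbeat-driven failover of the hypervisor clustering architecture (Section 9.1) can be sketched as follows. This is a minimal illustration, not a vendor API: the `VIM` class, the heartbeat threshold, and the target-selection rule are all assumptions made for the example.

```python
# Illustrative sketch of a VIM heartbeat check for a hypervisor cluster.
# All names and thresholds are hypothetical; real VIM products expose
# different interfaces.

class VIM:
    def __init__(self, cluster):
        # cluster: dict mapping hypervisor name -> list of hosted VM names
        self.cluster = cluster
        self.missed = {h: 0 for h in cluster}

    def record_heartbeat(self, hypervisor, acknowledged):
        """Count consecutive unacknowledged heartbeat messages."""
        self.missed[hypervisor] = 0 if acknowledged else self.missed[hypervisor] + 1

    def failover(self, threshold=3):
        """Live-migrate VMs off any hypervisor that missed too many heartbeats."""
        migrations = []
        for failed in [h for h, n in self.missed.items() if n >= threshold]:
            # Pick the surviving hypervisor with the fewest VMs as the target.
            survivors = [h for h in self.cluster
                         if h != failed and self.missed[h] < threshold]
            if not survivors:
                continue  # nowhere to migrate; runtime operations cannot continue
            target = min(survivors, key=lambda h: len(self.cluster[h]))
            for vm in self.cluster[failed]:
                migrations.append((vm, failed, target))
                self.cluster[target].append(vm)
            self.cluster[failed] = []
        return migrations
```

In practice the "missed heartbeat" count would be fed by a monitoring loop, and the migration itself would be carried out by the live VM migration program rather than a list append.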

•The capacity watchdog monitor tracks physical and virtual server usage and reports any significant fluctuations to the capacity planner, which is responsible for dynamically calculating physical server computing capacities against virtual server capacity requirements.
•If the capacity planner decides to move a virtual server to another host to distribute the workload, the live VM migration program is signaled to move the virtual server.
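The capacity planner's decision step can be sketched as below. The utilization thresholds, data layout, and "smallest VM first" heuristic are assumptions for illustration; a real capacity planner would use richer metrics and signal the live VM migration program instead of mutating a dict.

```python
# Sketch of the capacity watchdog's planning step: compare each host's load
# against its capacity and propose live migrations away from over-utilized
# hosts. Thresholds and field names are illustrative assumptions.

def plan_migrations(hosts, high=0.80):
    """hosts: {host: {"capacity": units, "vms": {vm: load_units}}}.
    Returns a list of (vm, source, target) migration proposals."""
    def utilization(h):
        return sum(hosts[h]["vms"].values()) / hosts[h]["capacity"]

    proposals = []
    over = sorted((h for h in hosts if utilization(h) > high),
                  key=utilization, reverse=True)
    for src in over:
        # Move the smallest VMs first until the host drops below the high mark.
        for vm, load in sorted(hosts[src]["vms"].items(), key=lambda kv: kv[1]):
            if utilization(src) <= high:
                break
            target = min((h for h in hosts if h != src), key=utilization)
            if utilization(target) + load / hosts[target]["capacity"] > high:
                continue  # no headroom on the target; leave the VM in place
            hosts[target]["vms"][vm] = hosts[src]["vms"].pop(vm)
            proposals.append((vm, src, target))
    return proposals
```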
9.3 NON-DISRUPTIVE SERVICE RELOCATION ARCHITECTURE
•A cloud service can become unavailable for a number of reasons, such as:
1) Runtime usage demands that exceed its processing capacity
2) A maintenance update that mandates a temporary outage
3) Permanent migration to a new physical server host
•Cloud service consumer requests are usually rejected if a cloud service becomes unavailable, which can potentially result in exception conditions.
•The non-disruptive service relocation architecture establishes a system by which a predefined event triggers the duplication or migration of a cloud service implementation at runtime, thereby avoiding any disruption.

•Instead of scaling cloud services in or out with redundant implementations, cloud service activity can be temporarily diverted to another hosting environment at runtime by adding a duplicate implementation onto a new host.
•Similarly, cloud service consumer requests can be temporarily redirected to a duplicate implementation when the original implementation needs to undergo a maintenance outage.
•The relocation of the cloud service implementation and any cloud service activity can also be permanent, to accommodate cloud service migrations to new physical server hosts.
•A key aspect of the underlying architecture is that the new cloud service implementation is guaranteed to be successfully receiving and responding to cloud service consumer requests before the original cloud service implementation is deactivated or removed.
•A common approach is for live VM migration to move the entire virtual server instance that is hosting the cloud service.
•The automated scaling listener and/or load balancer mechanisms can be used to trigger a temporary redirection of cloud service consumer requests, in response to scaling and workload distribution requirements. Either mechanism can contact the VIM to initiate the live VM migration process.
Figure: Before Failure    Figure: After Failure
•Virtual server migration can occur in one of the following two ways, depending on the location of the virtual server's disks and configuration:
•A copy of the virtual server disks is created on the destination host, if the virtual server disks are stored on a local storage device or non-shared remote storage devices attached to the source host. After the copy has been created, both virtual server instances are synchronized and the virtual server files are removed from the origin host.
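The two migration paths above (copy-then-synchronize for local or non-shared disks versus ownership transfer for shared storage, described in the bullet that follows) can be sketched as a simple branch. The function and field names are illustrative assumptions, not part of any real migration tool.

```python
# Sketch of the two live-migration paths: copy-then-sync when the disks are
# local/non-shared on the source host, ownership transfer when the disks sit
# on storage shared by both hosts. All identifiers are illustrative.

def migrate_virtual_server(vm, source, destination):
    """vm: {"name": str, "disks_on_shared_storage": bool}.
    Returns the ordered steps the migration would take."""
    if vm["disks_on_shared_storage"]:
        # Shared remote storage: no disk copy needed; just hand over ownership.
        return [
            f"transfer ownership of {vm['name']} from {source} to {destination}",
            f"synchronize {vm['name']} state on {destination}",
        ]
    # Local / non-shared storage: copy, synchronize, then clean up the origin.
    return [
        f"copy {vm['name']} disks from {source} to {destination}",
        f"synchronize both {vm['name']} instances",
        f"remove {vm['name']} files from {source}",
    ]
```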

•Copying the virtual server disks is unnecessary if the virtual server's files are stored on a remote storage device that is shared between the origin and destination hosts. Ownership of the virtual server is simply transferred from the origin to the destination physical server host, and the virtual server's state is automatically synchronized.
9.4 ZERO DOWNTIME ARCHITECTURE
•A physical server naturally acts as a single point of failure for the virtual servers it hosts. As a result, when the physical server fails or is compromised, the availability of any (or all) hosted virtual servers can be affected. This makes the issuance of zero downtime guarantees by a cloud provider to cloud consumers challenging.
•The zero downtime architecture establishes a sophisticated failover system that allows virtual servers to be dynamically moved to different physical server hosts, in the event that their original physical server host fails.
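The failover step of the zero downtime architecture can be sketched as follows. It assumes, as an illustration, that every virtual server's files are reachable from all hosts (e.g. on shared storage), so a failed host's VMs can simply be redistributed; the data structures and round-robin rule are hypothetical.

```python
# Sketch of a zero-downtime failover step: the VMs of a failed host are
# spread round-robin over the surviving hosts. Assumes all VM files are on
# storage every host can reach; data structures are illustrative.

def fail_over(placements, failed_host):
    """placements: {host: [vm, ...]}. Returns new placements with the
    failed host's VMs moved onto the surviving hosts."""
    survivors = [h for h in placements if h != failed_host]
    if not survivors:
        raise RuntimeError("no surviving hosts: zero-downtime guarantee broken")
    new = {h: list(vms) for h, vms in placements.items() if h != failed_host}
    for i, vm in enumerate(placements[failed_host]):
        new[survivors[i % len(survivors)]].append(vm)
    return new
```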
9.5 CLOUD BALANCING ARCHITECTURE
This architecture establishes a specialized architectural model in which cloud resources can be load-balanced across multiple clouds. The cross-cloud balancing of cloud service consumer requests can help:
1) improve the performance and scalability of resources
2) increase the availability and reliability of resources
3) improve load-balancing and resource optimization
Its functionality is primarily based on the combination of the automated scaling listener and failover system mechanisms. Many more components and mechanisms can be part of a complete cloud balancing architecture. As a starting point, the two mechanisms are utilized as follows:
•The automated scaling listener redirects cloud service consumer requests to one of several redundant IT resource implementations, based on current scaling and performance requirements.

•The failover system ensures that redundant IT resources are capable of cross-cloud failover in the event of a failure within an IT resource or its underlying hosting environment. IT resource failures are announced so that the automated scaling listener can avoid inadvertently routing cloud service consumer requests to unavailable or unstable IT resources.
For a cloud balancing architecture to function effectively, the automated scaling listener needs to be aware of all redundant IT resource implementations within the scope of the cloud balanced architecture. Also, if the manual synchronization of cross-cloud IT resource implementations is not possible, the resource replication mechanism may need to be incorporated to automate the synchronization.
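The interplay of the two mechanisms can be sketched as follows: the listener routes each request to the least-loaded redundant implementation, and the failover system's failure announcements take unhealthy clouds out of rotation. Class and method names are assumptions made for the example.

```python
# Sketch of cross-cloud balancing: an automated scaling listener routes
# requests to the least-loaded healthy cloud, while failure announcements
# from the failover system remove clouds from rotation. Names are illustrative.

class CloudBalancer:
    def __init__(self, implementations):
        # implementations: {cloud name: current request count}
        self.load = dict(implementations)
        self.failed = set()

    def announce_failure(self, cloud):
        """Failover system announces a failed IT resource implementation."""
        self.failed.add(cloud)

    def announce_recovery(self, cloud):
        self.failed.discard(cloud)

    def route(self):
        """Send the next request to the healthy cloud with the lowest load."""
        healthy = [c for c in self.load if c not in self.failed]
        if not healthy:
            raise RuntimeError("no healthy cloud available")
        target = min(healthy, key=lambda c: self.load[c])
        self.load[target] += 1
        return target
```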
9.6 RESOURCE RESERVATION ARCHITECTURE
The resource reservation architecture establishes a system whereby one of the following is set aside exclusively for a given cloud consumer:
•a single resource
•part of a resource
•multiple resources
The creation of a resource reservation system can require involving

the resource management system mechanism, which is used to define the usage thresholds for individual resources and resource pools. Reservations lock the amount of resources that each pool needs to keep, with the balance of the pool's resources still available for sharing and borrowing. The remote administration system mechanism is also used to enable front-end customization, so that cloud consumers have administration controls for the management of their reserved resource allocations.
The types of mechanisms that are commonly reserved within this architecture are cloud storage devices and virtual servers. Other mechanisms that may be part of the architecture can include:
•Audit Monitor
•Cloud Usage Monitor
•Hypervisor
•Logical Network Perimeter
•Resource Replication
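The locking behavior of reservations can be sketched with a small pool model: reserved capacity is held exclusively for its consumer, while only the unreserved balance may be shared or borrowed. The class, unit counts, and method names are illustrative assumptions.

```python
# Sketch of a resource pool with per-consumer reservations: reserved units
# are locked for their consumer, and shared/borrowed use may only draw on
# the unreserved balance. Units and names are illustrative.

class ResourcePool:
    def __init__(self, capacity):
        self.capacity = capacity
        self.reserved = {}   # consumer -> units locked for that consumer
        self.borrowed = 0    # unreserved units currently lent out

    def _free(self):
        return self.capacity - sum(self.reserved.values()) - self.borrowed

    def reserve(self, consumer, units):
        """Set units aside exclusively for one cloud consumer."""
        if units > self._free():
            raise ValueError("not enough unreserved capacity")
        self.reserved[consumer] = self.reserved.get(consumer, 0) + units

    def borrow(self, units):
        """Shared use may never eat into reserved capacity."""
        if units > self._free():
            raise ValueError("request would eat into reserved capacity")
        self.borrowed += units
```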

9.7 DYNAMIC FAILURE DETECTION AND RECOVERY ARCHITECTURE
This architecture establishes a resilient watchdog system to monitor and respond to a wide range of pre-defined failure scenarios. This system notifies and escalates the failure conditions that it cannot automatically resolve itself. It relies on a specialized cloud usage monitor called the intelligent watchdog monitor to actively track resources and take pre-defined actions in response to predefined events.
The resilient watchdog system performs the following five core functions:
1) Watching
2) Deciding upon an event
3) Acting upon an event
4) Reporting
5) Escalating
Sequential recovery policies can be defined for each resource to determine the steps that the intelligent watchdog monitor needs to take when a failure condition occurs. For example, a recovery policy can state that one recovery attempt needs to be automatically carried out before issuing a notification.
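A sequential recovery policy of this kind can be sketched as below: the watchdog attempts the configured number of automatic recoveries, reports the outcome, and escalates only what it could not resolve itself. The policy shape and action strings are assumptions for illustration.

```python
# Sketch of the intelligent watchdog monitor applying a sequential recovery
# policy: attempt automatic recovery up to a configured limit, then report,
# then escalate unresolved failures. Policy contents are illustrative.

def handle_failure(resource, policy, attempt_recovery):
    """policy: {"max_attempts": n}. attempt_recovery: callable returning
    True on success. Returns the ordered actions the watchdog took."""
    actions = [f"watching {resource}", f"failure event on {resource}"]
    for attempt in range(1, policy["max_attempts"] + 1):
        actions.append(f"recovery attempt {attempt}")
        if attempt_recovery(resource):
            actions.append("report: recovered")
            return actions
    actions.append("report: recovery failed")
    actions.append("escalate: notify cloud resource administrator")
    return actions
```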

9.8 BARE-METAL PROVISIONING ARCHITECTURE
This architecture establishes a system that leverages the remote management support built into modern physical servers, together with specialized service agents, which are used to discover and effectively provision entire operating systems remotely.
The remote management software that is integrated with the server's ROM becomes available upon server start-up. A Web-based or proprietary user interface, like the portal provided by the remote administration system, is usually used to connect to the physical server's native remote management interface. IP addresses in IaaS platforms can be forwarded directly to cloud consumers so that they can perform bare-metal operating system installations independently.
The bare-metal provisioning system provides an auto-deployment feature that allows cloud consumers to connect to the deployment software and provision more than one server or operating system at the same time. The central deployment system connects to the servers via their management interfaces, and uses the same protocol to upload and operate as an agent in the physical server's RAM. The bare-metal server then becomes a raw client with a management agent installed, and the deployment software uploads the required setup files to deploy the operating system.
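The auto-deployment flow just described can be sketched as a batch over management interfaces: connect, load the agent into RAM, then push the OS setup files. Every identifier below (addresses, image name, step wording) is a hypothetical illustration of the sequence, not a real deployment tool.

```python
# Sketch of the bare-metal auto-deployment flow: for each physical server,
# connect to its remote management interface, load a management agent into
# RAM, and push the OS setup files. All identifiers are illustrative.

def deploy_bare_metal(servers, os_image):
    """servers: list of management-interface addresses. One call provisions
    the whole batch, mirroring the 'more than one server at the same time'
    capability described above."""
    log = []
    for mgmt_ip in servers:
        log.append(f"connect to management interface {mgmt_ip}")
        log.append(f"upload management agent into RAM at {mgmt_ip}")
        log.append(f"push setup files and install {os_image} at {mgmt_ip}")
    return log

steps = deploy_bare_metal(["10.0.0.11", "10.0.0.12"], "linux-base")
```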
9.9 RAPID PROVISIONING ARCHITECTURE
The rapid provisioning architecture establishes a system that automates the provisioning of a wide range of resources, either individually or as a collective. The underlying technology architecture for rapid resource provisioning can be sophisticated and complex, and relies on a system comprised of an automated provisioning program, a rapid provisioning engine, and scripts and templates for on-demand provisioning.
2339.8 BARE-METAL PROVISIONING ARCHITECTUREThisarchitectureestablishes a system that utilizes this feature withspecialized service agents, which are used to discover and effectivelyprovision entire operating systems remotely.The remote management software that is integrated with theserver’s ROM becomes available upon server start-up. A Web-based orproprietary userinterface, like the portal provided by the remoteadministration system, is usually used to connect to the physical server’snative remote management interface. IP addresses in IaaS platforms canbe forwarded directly to cloud consumers so that they can perform bare-metal operating system installations independently.The bare-metal provisioning system provides an auto-deploymentfeature that allows cloud consumers to connect to the deployment softwareand provision more than one server or operating system at the same time.The central deployment system connects to the servers via theirmanagement interfaces, and uses the same protocol to upload and operateas an agent in the physical server’s RAM. The bare-metal server thenbecomes a raw client with a management agent installed, and thedeployment software uploads the required setup files to deploy theoperating system.
9.9 RAPIDPROVISIONING ARCHITECTURETherapid provisioning architectureestablishes a system thatautomates the provisioning of a wide range of resources, eitherindividually or as a collective. The underlying technology architecture forrapid resource provisioningcan be sophisticated and complex, and relieson a system comprised of an automated provisioning program, rapidprovisioning engine, and scripts and templates for on-demandprovisioning.
2339.8 BARE-METAL PROVISIONING ARCHITECTUREThisarchitectureestablishes a system that utilizes this feature withspecialized service agents, which are used to discover and effectivelyprovision entire operating systems remotely.The remote management software that is integrated with theserver’s ROM becomes available upon server start-up. A Web-based orproprietary userinterface, like the portal provided by the remoteadministration system, is usually used to connect to the physical server’snative remote management interface. IP addresses in IaaS platforms canbe forwarded directly to cloud consumers so that they can perform bare-metal operating system installations independently.The bare-metal provisioning system provides an auto-deploymentfeature that allows cloud consumers to connect to the deployment softwareand provision more than one server or operating system at the same time.The central deployment system connects to the servers via theirmanagement interfaces, and uses the same protocol to upload and operateas an agent in the physical server’s RAM. The bare-metal server thenbecomes a raw client with a management agent installed, and thedeployment software uploads the required setup files to deploy theoperating system.
9.9 RAPIDPROVISIONING ARCHITECTURETherapid provisioning architectureestablishes a system thatautomates the provisioning of a wide range of resources, eitherindividually or as a collective. The underlying technology architecture forrapid resource provisioningcan be sophisticated and complex, and relieson a system comprised of an automated provisioning program, rapidprovisioning engine, and scripts and templates for on-demandprovisioning.

(1) A cloud resource administrator requests a new cloud service through the self-service portal.
(2) The self-service portal passes the request to the automated service provisioning program installed on the virtual server.
(3) The automated service provisioning program passes the necessary tasks to be performed to the rapid provisioning engine.
(4) The rapid provisioning engine announces when the new cloud service is ready.
(5) The automated service provisioning program finalizes and publishes the cloud service on the usage and administration portal for cloud consumer access.

The following step-by-step description outlines the inner workings of a rapid provisioning engine:
1. A cloud consumer requests a new server through the self-service portal.
2. The sequence manager forwards the request to the deployment engine for the preparation of an operating system.
3. The deployment engine uses the virtual server templates for provisioning if the request is for a virtual server. Otherwise, the deployment engine sends the request to provision a physical server.
4. The pre-defined image for the requested type of operating system is used for the provisioning of the operating system, if available. Otherwise, the regular deployment process is executed to install the operating system.
5. The deployment engine informs the sequence manager when the operating system is ready.

6. The sequence manager updates and sends the logs to the sequence logger for storage.
7. The sequence manager requests that the deployment engine apply the operating system baseline to the provisioned operating system.
8. The deployment engine applies the requested operating system baseline.
9. The deployment engine informs the sequence manager that the operating system baseline has been applied.
10. The sequence manager updates and sends the logs of completed steps to the sequence logger for storage.
11. The sequence manager requests that the deployment engine install the applications.
12. The deployment engine deploys the applications on the provisioned server.
13. The deployment engine informs the sequence manager that the applications have been installed.
14. The sequence manager updates and sends the logs of completed steps to the sequence logger for storage.
15. The sequence manager requests that the deployment engine apply the application's configuration baseline.
16. The deployment engine applies the configuration baseline.
17. The deployment engine informs the sequence manager that the configuration baseline has been applied.
18. The sequence manager updates and sends the logs of completed steps to the sequence logger for storage.

9.10 STORAGE WORKLOAD MANAGEMENT ARCHITECTURE

This architecture enables LUNs to be evenly distributed across available cloud storage devices, while a storage capacity system is established to ensure that runtime workloads are evenly distributed across the LUNs.
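The workload-equalization idea behind this architecture can be sketched as a simple placement-and-migration policy. The sketch below is illustrative only: the StorageDevice class, its abstract workload units, and the rebalancing rule are assumptions made for this example, not part of any real storage management product.

```python
# Illustrative sketch of LUN workload equalization across grouped
# cloud storage devices. A real storage management system would
# measure I/O load through its own monitoring interfaces.

class StorageDevice:
    def __init__(self, name):
        self.name = name
        self.luns = {}          # lun_id -> workload units (hypothetical metric)

    @property
    def workload(self):
        return sum(self.luns.values())

def place_lun(devices, lun_id, workload):
    """Assign a new LUN to the least-loaded device in the group."""
    target = min(devices, key=lambda d: d.workload)
    target.luns[lun_id] = workload
    return target

def rebalance(devices):
    """Migrate LUNs from the busiest device to the least busy one,
    as long as moving a LUN still narrows the gap between them."""
    while True:
        busiest = max(devices, key=lambda d: d.workload)
        idlest = min(devices, key=lambda d: d.workload)
        gap = busiest.workload - idlest.workload
        # Only a LUN smaller than half the gap reduces the imbalance.
        candidates = [(w, lid) for lid, w in busiest.luns.items() if 2 * w < gap]
        if not candidates:
            return
        w, lid = min(candidates)
        del busiest.luns[lid]
        idlest.luns[lid] = w
```

Here place_lun models the storage capacity system assigning new LUNs to the least-loaded device, while rebalance models the automated scaling listener triggering LUN migration whenever a move would reduce the load gap between the busiest and least busy grouped devices.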


Combining cloud storage devices into a group allows LUN data to be distributed between available storage hosts equally. A storage management system is configured and an automated scaling listener is positioned to monitor and equalize runtime workloads among the grouped cloud storage devices.

9.11 GLOSSARY

• Intelligent Automation Engine: The intelligent automation engine automates administration tasks by executing scripts that contain workflow logic.
• LUN: A logical unit number (LUN) is a logical drive that represents a partition of a physical drive.
• Storage Service Gateway: The storage service gateway is a component that acts as the external interface to cloud storage services, and is capable of automatically redirecting cloud consumer requests whenever the location of the requested data has changed.
• Storage Replication: Storage replication is a variation of the resource replication mechanisms used to synchronously or asynchronously replicate data from a primary storage device to a secondary storage device. It can be used to replicate partial and entire LUNs.
• Heartbeats: Heartbeats are system-level messages exchanged between hypervisors, hypervisors and virtual servers, and hypervisors and VIMs.
• Live VM Migration: Live VM migration is a system that is capable of relocating virtual servers or virtual server instances at runtime.

• LUN Migration: LUN migration is a specialized storage program that is used to move LUNs from one storage device to another without interruption, while remaining transparent to cloud consumers.

SUMMARY

This chapter introduced distinct and sophisticated cloud technology architectures, several of which can be built upon the more foundational environments established by the architectural models.

QUESTIONS

1. Write a short note on the Hypervisor Clustering Architecture.
2. Explain the Load Balanced Virtual Server Instances Architecture.
3. Write a note on the Non-Disruptive Service Relocation cloud architecture model.
4. Explain, using a suitable diagram, the Zero Downtime Architecture.
5. What is the Cloud Balancing Architecture? Explain.
6. Write a note on the Resource Reservation Architecture.
7. Explain, using a suitable diagram, the Dynamic Failure Detection and Recovery Architecture.
8. Write a note, using a suitable diagram, on the Rapid Provisioning Architecture.
9. Explain, using a suitable diagram, the Bare-Metal Provisioning Architecture.
10. What is the Storage Workload Management Architecture?

REFERENCES

• Foundations of Modern Networking: SDN, NFV, QoE, IoT, and Cloud, William Stallings, Addison-Wesley Professional, October 2015.
• SDN and NFV Simplified: A Visual Guide to Understanding Software Defined Networks and Network Function Virtualization, Jim Doherty, Pearson Education.
• Network Functions Virtualization (NFV) with a Touch of SDN, Rajendra Chayapathi, Syed Farrukh Hassan, Addison-Wesley.
• CCIE and CCDE Evolving Technologies Study Guide, Brad Edgeworth, Jason Gooley, Ramiro Garza Rios, Pearson Education, Inc., 2019.

*****


UNIT V

10
CLOUD DELIVERY MODEL CONSIDERATION

Unit Structure
10.0 Objectives
10.1 Introduction
10.2 Cloud Delivery Models: The Cloud Provider Perspective
    10.2.1 Building IaaS Environments
    10.2.2 Equipping PaaS Environments
    10.2.3 Optimizing SaaS Environments
10.3 Cloud Delivery Models: The Cloud Consumer Perspective
    10.3.1 Working with IaaS Environments
    10.3.2 Working with PaaS Environments
    10.3.3 Working with SaaS Services
10.4 Unit End Exercise

10.0 OBJECTIVES

• Describe cloud delivery models for PaaS
• Describe cloud delivery models for SaaS
• Describe different ways in which cloud delivery models are administered and utilized by cloud consumers
• Working with IaaS Environments
• Working with PaaS Environments
• Working with SaaS Environments

10.1 INTRODUCTION

A cloud delivery model represents a specific combination of IT resources offered by a cloud provider. This terminology is typically associated with cloud computing and is frequently used to describe a type of remote environment and the level of control it offers.

10.2 CLOUD DELIVERY MODELS: THE CLOUD PROVIDER PERSPECTIVE

This section explores the architecture and administration of IaaS,


PaaS, and SaaS cloud delivery models from the point of view of the cloud provider (Figure 10.1). The integration and management of these cloud-based environments as part of greater environments, and how they can relate to different technologies and cloud mechanism combinations, are examined.
Figure 10.1

10.2.1 Building IaaS Environments:

The virtual server and cloud storage device mechanisms represent the two most fundamental IT resources that are delivered as part of a standard rapid provisioning architecture within IaaS environments. They are offered in various standardized configurations that are defined by the following properties:
• Operating System
• Primary Memory Capacity
• Processing Capacity
• Virtualized Storage Capacity

Memory and virtualized storage capacity is usually allocated in increments of 1 GB to simplify the provisioning of underlying physical IT resources. When limiting cloud consumer access to virtualized environments, IaaS offerings are preemptively assembled by cloud providers via virtual server images that capture the pre-defined configurations. Some cloud providers may offer cloud consumers direct administrative access to physical IT resources, in which case the bare-metal provisioning architecture may come into play.

Snapshots can be taken of a virtual server to record its current state, memory, and configuration of a virtualized IaaS environment for backup

and replication purposes, in support of horizontal and vertical scaling requirements. For example, a virtual server can use its snapshot to become reinitialized in another hosting environment after its capacity has been increased to allow for vertical scaling. The snapshot can alternatively be used to duplicate a virtual server. The management of custom virtual server images is a vital feature that is provided via the remote administration system mechanism. Most cloud providers also support importing and exporting options for custom-built virtual server images in both proprietary and standard formats.

Data Centers:

Cloud providers can offer IaaS-based IT resources from multiple geographically diverse data centers, which provides the following primary benefits:
• Multiple data centers can be linked together for increased resiliency. Each data center is placed in a different location to lower the chances of a single failure forcing all of the data centers to go offline simultaneously.
• Connected through high-speed communications networks with low latency, data centers can perform load balancing, IT resource backup and replication, and increase storage capacity, while improving availability and reliability. Having multiple data centers spread over a greater area further reduces network latency.
• Data centers that are deployed in different countries make access to IT resources more convenient for cloud consumers that are constricted by legal and regulatory requirements.

When an IaaS environment is used to provide cloud consumers with virtualized network environments, each cloud consumer is segregated into a tenant environment that isolates IT resources from the rest of the cloud through the Internet. VLANs and network access control software collaboratively realize the corresponding logical network perimeters.

Scalability and Reliability:

Within IaaS environments, cloud providers can automatically provision virtual servers via the dynamic vertical scaling type of the dynamic scalability architecture.
This can be performed through the VIM, as long as the host physical servers have sufficient capacity. The VIM can scale virtual servers out using resource replication as part of a resource pool architecture, if a given physical server has insufficient capacity to support vertical scaling. The load balancer mechanism, as part of a workload distribution architecture, can be used to distribute the workload among IT resources in a pool to complete the horizontal scaling process.

Manual scalability requires the cloud consumer to interact with a usage and administration program to explicitly request IT resource scaling. In contrast, automatic scalability requires the automated scaling listener to

Page 241

241monitor the workload and reactively scale the resource capacity. Thismechanism typically acts as a monitoring agent that tracks IT resourceusage in order to notify the resource management system when capacityhas been exceeded.Replicated IT resources can be arranged in high-availabilityconfiguration that forms a failover system for implementation via standardVIM features. Alternatively, a high-availability/high-performanceresource cluster can be created at the physical or virtual server level, orboth simultaneously. The multipath resource access architecture iscommonly employed to enhance reliability via the use of redundant accesspaths, and some cloud providers further offer the provisioning ofdedicated IT resources via the resource reservation architecture.Monitoring:Cloud usagemonitors in an IaaS environment can be implementedusing the VIM or specialized monitoring tools that directly compriseand/or interface with the virtualization platform. Several commoncapabilities of the IaaS platform involve monitoring:•Virtual Server Lifecycles:Recording and tracking uptime periodsand the allocation of IT resources, for pay-per-use monitors and time-based billing purposes.•Data Storage:Tracking and assigning the allocation of storagecapacity to cloud storage devices on virtual servers, for pay-per-usemonitors that record storage usage for billing purposes.•Network Traffic:For pay-per-use monitors that measure inbound andoutbound network usage and SLA monitors that track QoS metrics,such as response times and network losses.•Failure Conditions:For SLA monitors that track IT resource and QoSmetrics to provide warning in times of failure.•Event Triggers:For audit monitors that appraise and evaluate theregulatory compliance of select IT resources.Monitoring architectures within IaaS environments typicallyinvolve service agents that communicate directly with backendmanagement systems.Security:Cloud security mechanisms that are relevant for securing IaaSenvironments 
include:•encryption, hashing, digital signature, and PKImechanisms for overallprotection of data transmission•IAM and SSO mechanisms for accessing services and interfaces insecurity systems that rely on user identification, authentication, andauthorization capabilitiesmunotes.in

Page 242

242•cloud-based security groups for isolating virtual environments throughhypervisors and network segments via network management software•hardened virtual server images for internal and externally availablevirtual server environments•various cloud usage monitors to track provisioned virtual ITresourcesto detect abnormal usage patterns.10.2.2 Equipping PaaS Environments:PaaS environments typically need to be outfitted with a selectionof application development and deployment platforms in order toaccommodate different programming models, languages, and frameworks.A separate ready-made environment is usually created for eachprogramming stack that contains the necessary software to runapplications specifically developed for the platform.Each platform is accompanied by a matching SDK andIDE, whichcan be custom-built or enabled by IDE plugins supplied by the cloudprovider. IDE toolkits can simulate the cloud runtime locally within thePaaS environment and usually include executable application servers. Thesecurity restrictions that areinherent to the runtime are also simulated inthe development environment, including checks for unauthorized attemptsto access system IT resources.Cloud providers often offer a resource management systemmechanism that is customized for the PaaS platform so that cloudconsumers can create and control customized virtual server images withready-made environments. This mechanism also provides features specificto the PaaS platform, such as managing deployed applications andconfiguring multitenancy. 
Cloudproviders further rely on a variation ofthe rapid provisioning architecture known as platform provisioning, whichis designed specifically to provision ready-made environments.Scalability and Reliability:The scalability requirements of cloud servicesand applications thatare deployed within PaaS environments are generally addressed viadynamic scalability and workload distribution architectures that rely onthe use of native automated scaling listeners and load balancers. Theresource pooling architecture is further utilized to provision IT resourcesfrom resource pools made available to multiple cloud consumers.Cloud providers can evaluate network traffic and server-sideconnection usage against the instance’s workload, when determining howto scalean overloaded application as per parameters and cost limitationsprovided by the cloud consumer. Alternatively, cloud consumers canconfigure the application designs to customize the incorporation ofavailable mechanisms themselves.The reliability of ready-made environments and hosted cloudmunotes.in

Page 243

243services and applications can be supported with standard failover systemmechanisms (Figure 142), as well as the non-disruptive service relocationarchitecture, so as to shield cloud consumers from failover conditions. Theresource reservation architecture may also be in place to offer exclusiveaccess to PaaS-based IT resources. As with other IT resources, ready-made environments can also span multiple data centers and geographicalregions to further increase availability andresiliencyMonitoringSpecialized cloud usage monitors in PaaSenvironments are used to monitor the following:•Ready-Made Environment Instances:The applications of theseinstances are recorded by pay-per-use monitors for the calculation oftime-based usage fees.•Data Persistence:This statistic is provided by pay-per-use monitorsthat record the number of objects, individual occupied storage sizes,and database transactions per billing period.•Network Usage:Inbound and outbound network usage is tracked forpay-per-use monitors and SLA monitors that track network-relatedQoS metrics.•Failure Conditions:SLA monitors that track the QoS metrics of ITresources need to capture failure statistics.•Event Triggers:This metric is primarily used by audit monitors thatneed to respond to certain types of events.Security:The PaaS environment, by default, does not usually introduce the need fornew cloud security mechanisms beyond those that are already provisionedfor IaaS environments.10.2.3 OptimizingSaaS Environments:In SaaS implementations, cloud service architectures are generallybased on multitenant environments that enable and regulate concurrentcloud consumer access (Figure10.2).
Figure10.2The SaaS-based cloud service is hosted by a multitenant
Figure 10.2 The SaaS-based cloud service is hosted by a multitenant

environment deployed in a high-performance virtual server cluster. A usage and administration portal is used by the cloud consumer to access and configure the cloud service.

SaaS IT resource segregation does not typically occur at the infrastructure level in SaaS environments, as it does in IaaS and PaaS environments.

SaaS implementations rely heavily on the features provided by the native dynamic scalability and workload distribution architectures, as well as non-disruptive service relocation, to ensure that failover conditions do not impact the availability of SaaS-based cloud services.

However, it is vital to acknowledge that, unlike the relatively vanilla designs of IaaS and PaaS products, each SaaS deployment will bring with it unique architectural, functional, and runtime requirements. These requirements are specific to the nature of the business logic the SaaS-based cloud service is programmed with, as well as the distinct usage patterns it is subjected to by its cloud service consumers.

For example, consider the diversity in functionality and usage of the following recognized online SaaS offerings:
• Collaborative authoring and information-sharing (Wikipedia, Blogger)
• Collaborative management (Zimbra, Google Apps)
• Conferencing services for instant messaging, audio/video communications (Skype, Google Talk)
• Enterprise management systems (ERP, CRM, CM)
• File-sharing and content distribution (YouTube, Dropbox)
• Industry-specific software (engineering, bioinformatics)
• Messaging systems (e-mail, voicemail)
• Mobile application marketplaces (Android Play Store, Apple App Store)
• Office productivity software suites (Microsoft Office, Adobe Creative Cloud)
• Search engines (Google, Yahoo)
• Social networking media (Twitter, LinkedIn)

Now consider that many of the previously listed cloud services are offered in one or more of the following implementation mediums:
• Mobile application
• REST service
• Web service

Page 245

245Each of these SaaS implementation mediums provide Web-based APIs forinterfacing by cloud consumers. Examples of online SaaS-based cloudservices with Web-based APIs include:•Electronic payment services (PayPal)•Mapping and routing services (Google Maps)•Publishing tools (WordPress)Mobile-enabled SaaS implementations are commonly supported bythe multidevice broker mechanism, unless the cloud service is intendedexclusively for access by specific mobile devices.The potentially diverse nature of SaaS functionality, the variationin implementation technology, and the tendency to offer a SaaS-basedcloud service redundantly with multiple different implementationmediums makes the design of SaaS environments highly specialized.Though not essential to a SaaS implementation, specialized processingrequirements can prompt the need to incorporate architectural models,such as:•Service Load Balancing:for workload distribution across redundantSaaS-based cloud service implementations•Dynamic Failure Detection and Recovery:to establish a system thatcan automatically resolve some failure conditions without disruption inservice to the SaaS implementation.•Storage Maintenance Window:to allow for planned maintenanceoutages that do not impact SaaS implementation availability•Elastic Resource Capacity/Elastic Network Capacity:to establishinherent elasticity within the SaaS-based cloud service architecturethat enables it to automatically accommodate a range of runtimescalability requirements•Cloud Balancing:to instill broad resiliency within the SaaSimplementation, which can be especially important for cloud servicessubjected to extreme concurrent usage volumesSpecialized cloud usage monitors can be used in SaaS environments totrack the following types of metrics:•Tenant Subscription Period:This metric is used by pay-per-usemonitors to record and track application usage for time-based billing.This type of monitoring usually incorporates application licensing andregular assessments of 
leasing periods that extend beyond the hourlyperiods of IaaS and PaaS environments.•Application Usage:This metric, based on user or security groups, isused with pay-per-use monitors to record and track application usagefor billing purposes.munotes.in

Page 246

246•Tenant Application Functional Module:This metric is used by pay-per-use monitors for function-based billing. Cloud services can havedifferent functionality tiers according to whether the cloud consumer isfree-tier or a paid subscriber.10.3 CLOUD DELIVERY MODELS: THE CLOUDCONSUMER PERSPECTIVEThis section raises various considerations concerning the differentways in which cloud delivery models are administered and utilized bycloud consumers.10.3.1 Working with IaaS Environments:Virtual servers are accessed at the operating system level throughthe use of remote terminal applications. Accordingly, the type of clientsoftware used directly depends on the type of operating system that isrunning at the virtual server, of which two common options are:•Remote Desktop (or Remote Desktop Connection) Client:forWindows-based environments and presents a Windows GUI desktop•SSH Client:for Mac and other Linux-based environments to allow forsecure channel connections to text-based shell accounts running on theserver OSFigure10.3illustrates a typical usage scenario for virtual servers that arebeing offered as IaaS services after having been created with managementinterfaces
Figure10.3A cloud resource administration uses the Windows-basedRemote Desktop client to administrator a Windows-based virtualserver and the SSH client for the Linux-based virtual server.A cloud storage device can be attached directly to the virtualservers and accessed through the virtual servers’ functional interface formanagement by the operating system. Alternatively, a cloud storagedevice can be attached to an IT resource that is being hosted outside of the
cloud, such as an on-premise device over a WAN or VPN. In these cases, the following formats for the manipulation and transmission of cloud storage data are commonly used:
•Networked File System: file system-based storage access, whose rendering of files is similar to how folders are organized in operating systems (NFS, CIFS)
•Storage Area Network Devices: block-based storage access that collates and formats geographically diverse data into cohesive files for optimal network transmission (iSCSI, Fibre Channel)
•Web-Based Resources: object-based storage access by which an interface that is not integrated into the operating system logically represents files, which can be accessed through a Web-based interface (Amazon S3)
IT Resource Provisioning Considerations:
Cloud consumers have a high degree of control over how and to what extent IT resources are provisioned as part of their IaaS environments. For example:
•Controlling scalability features (automated scaling, load balancing)
•Controlling the lifecycle of virtual IT resources (shutting down, restarting, powering up of virtual devices)
•Controlling the virtual network environment and network access rules (firewalls, logical network perimeters)
•Establishing and displaying service provisioning agreements (account conditions, usage terms)
•Managing the attachment of cloud storage devices
•Managing the pre-allocation of cloud-based IT resources (resource reservation)
•Managing credentials and passwords for cloud resource administrators
•Managing credentials for cloud-based security groups that access virtualized IT resources through an IAM
•Managing security-related configurations
•Managing customized virtual server image storage (importing, exporting, backup)
•Selecting high-availability options (failover, IT resource clustering)
•Selecting and monitoring SLA metrics
•Selecting basic software configurations (operating system, pre-installed software for new virtual servers)
•Selecting IaaS resource instances from a number of available hardware-related configurations
and options (processing capabilities, RAM, storage)
•Selecting the geographical regions in which cloud-based IT resources should be hosted
•Tracking and managing costs
The management interface for these types of provisioning tasks is usually a usage and administration portal, but may also be offered via the use of command line interface (CLI) tools that can simplify the execution of many scripted administrative actions.
Even though standardizing the presentation of administrative features and controls is typically preferred, using different tools and user-interfaces can sometimes be justified. For example, a script can be made to turn virtual servers on and off nightly through a CLI, while adding or removing storage capacity can be more easily carried out using a portal.
10.3.2 Working with PaaS Environments:
A typical PaaS IDE can offer a wide range of tools and programming resources, such as software libraries, class libraries, frameworks, APIs, and various runtime capabilities that emulate the intended cloud-based deployment environment. These features allow developers to create, test, and run application code within the cloud or locally (on-premise) while using the IDE to emulate the cloud deployment environment. Compiled or completed applications are then bundled and uploaded to the cloud, and deployed via the ready-made environments. This deployment process can also be controlled through the IDE.
PaaS also allows for applications to use cloud storage devices as independent data storing systems for holding development-specific data (for example, in a repository that is available outside of the cloud environment). Both SQL and NoSQL database structures are generally supported.
IT Resource Provisioning Considerations:
PaaS environments provide less administrative control than IaaS environments, but still offer a significant range of management features. For example:
•Establishing and displaying service provisioning agreements, such as account conditions and usage terms
•Selecting software platforms and development frameworks for ready-made environments
•Selecting instance types, which are most commonly frontend or backend instances
•Selecting cloud storage devices for use in ready-made environments
•Controlling the lifecycle of PaaS-developed applications (deployment,
starting, shutdown, restarting, and release)
•Controlling the versioning of deployed applications and modules
•Configuring availability and reliability-related mechanisms
•Managing credentials for developers and cloud resource administrators using IAM
•Managing general security settings, such as accessible network ports
•Selecting and monitoring PaaS-related SLA metrics
•Managing and monitoring usage and IT resource costs
•Controlling scalability features such as usage quotas, active instance thresholds, and the configuration and deployment of the automated scaling listener and load balancer mechanisms
10.3.3 Working with SaaS Services:
Because SaaS-based cloud services are almost always accompanied by refined and generic APIs, they are usually designed to be incorporated as part of larger distributed solutions. A common example of this is Google Maps, which offers a comprehensive API that enables mapping information and images to be incorporated into Web sites and Web-based applications.
Many SaaS offerings are provided free of charge, although these cloud services often come with data collecting sub-programs that harvest usage data for the benefit of the cloud provider. When using any SaaS product that is sponsored by third parties, there is a reasonable chance that it is performing a form of background information gathering. Reading the cloud provider's agreement will usually help shed light on any secondary activity that the cloud service is designed to perform.
Cloud consumers using SaaS products supplied by cloud providers are relieved of the responsibilities of implementing and administering their underlying hosting environments. Customization options are usually available to cloud consumers; however, these options are generally limited to the runtime usage control of the cloud service instances that are generated specifically by and for the cloud consumer. For example:
•Managing security-related configurations
•Managing select availability and reliability options
•Managing usage costs
•Managing user accounts, profiles, and access authorization
•Selecting and monitoring SLAs
•Setting manual and automated scalability options and limitations
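As an illustration of how a SaaS service's Web-based API might be incorporated into a larger solution, the following sketch wraps a hypothetical mapping service behind a small client class. The endpoint, parameter names, and the stubbed transport are all assumptions made for illustration; they are not any real provider's API, and a real deployment would pass an HTTP-based transport instead of the stub.

```python
import json
import urllib.parse

class MapServiceClient:
    # hypothetical SaaS endpoint, invented for this sketch
    BASE_URL = "https://maps.example.com/api/route"

    def __init__(self, transport):
        # transport: callable taking a URL and returning a JSON string,
        # injected so the sketch stays runnable without network access
        self._transport = transport

    def route(self, origin, destination):
        query = urllib.parse.urlencode({"from": origin, "to": destination})
        raw = self._transport(f"{self.BASE_URL}?{query}")
        return json.loads(raw)

def fake_transport(url):
    # stand-in for a real HTTP call (e.g. urllib.request.urlopen)
    return json.dumps({"url": url, "distance_km": 12.4})

client = MapServiceClient(fake_transport)
result = client.route("A", "B")
```

Keeping the transport injectable is the design choice that lets the same client be exercised against a stub in tests and against the live SaaS endpoint in production.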
QUESTIONS
1. Explain building IaaS environments.
2. What are common capabilities of the IaaS platform involved in monitoring?
3. What are cloud security mechanisms that are relevant for securing IaaS environments?
4. Explain equipping PaaS environments.
5. How can SaaS environments be optimized? Explain with an example.
6. How do we work with IaaS environments?
7. How do we work with PaaS environments?
8. How do we work with SaaS services?
REFERENCES
•Mastering Cloud Computing: Foundations and Applications Programming, Rajkumar Buyya, Christian Vecchiola, S. Thamarai Selvi, MK Publications, ISBN: 978-0-12-411454-8
•Cloud Computing: Concepts, Technology & Architecture, Thomas Erl, Zaigham Mahmood, and Ricardo Puttini, The Prentice Hall Service Technology Series, ISBN-10: 9780133387520, ISBN-13: 978-0133387520
•Distributed and Cloud Computing: From Parallel Processing to the Internet of Things, 1st Edition, Kai Hwang, Jack Dongarra, Geoffrey Fox, ISBN-10: 9789381269237, ISBN-13: 978-9381269237
•https://www.ques10.com/p/47801/cloud-service-delivery-models-1/
*****
11
COST METRICS AND PRICING MODELS AND SERVICE QUALITY METRICS AND SLAS
Unit Structure
11.0 Objectives
11.1 Introduction
11.2 Business Cost Metrics
11.3 Cloud Usage Cost Metrics
11.3.1 Network Usage
11.3.2 Server Usage
11.3.4 Cloud Service Usage
11.4 Cost Management Considerations
11.4.1 Pricing Models
11.4.2 Additional Considerations
11.5 Service-Level Agreements (SLAs)
11.6 Service Quality Metrics
11.6.1 Service Availability Metrics
11.6.2 Service Reliability Metrics
11.6.3 Service Performance Metrics
11.6.4 Service Scalability Metrics
11.6.5 Service Resiliency Metrics
11.7 SLA Guidelines
11.8 Unit End Questions
11.0 OBJECTIVES
This chapter provides metrics, formulas, and practices to assist cloud consumers in performing accurate financial analysis of cloud adoption plans.
11.1 INTRODUCTION
Reducing operating costs and optimizing IT environments are pivotal to understanding and being able to compare the cost models behind provisioning on-premise and cloud-based environments. The pricing structures used by public clouds are typically based on utility-centric pay-per-usage models, enabling organizations to avoid up-front infrastructure
investments. These models need to be assessed against the financial implications of on-premise infrastructure investments and associated total cost-of-ownership commitments.
11.2 BUSINESS COST METRICS
This section begins by describing the common types of metrics used to evaluate the estimated costs and business value of leasing cloud-based IT resources when compared to the purchase of on-premise IT resources.
Up-Front and On-Going Costs:
Up-front costs are associated with the initial investments that organizations need to make in order to fund the IT resources they intend to use. This includes both the costs associated with obtaining the IT resources, as well as expenses required to deploy and administer them.
•Up-front costs for the purchase and deployment of on-premise IT resources tend to be high. Examples of up-front costs for on-premise environments can include hardware, software, and the labor required for deployment.
•Up-front costs for the leasing of cloud-based IT resources tend to be low. Examples of up-front costs for cloud-based environments can include the labor costs required to assess and set up a cloud environment.
On-going costs represent the expenses required by an organization to run and maintain IT resources it uses.
•On-going costs for the operation of on-premise IT resources can vary. Examples include licensing fees, electricity, insurance, and labor.
•On-going costs for the operation of cloud-based IT resources can also vary, but often exceed the on-going costs of on-premise IT resources (especially over a longer period of time). Examples include virtual hardware leasing fees, bandwidth usage fees, licensing fees, and labor.
Additional Costs:
To supplement and extend a financial analysis beyond the calculation and comparison of standard up-front and on-going business cost metrics, several other more specialized business cost metrics can be taken into account. For example:
•Cost of Capital: The cost of capital is a value that represents the cost incurred by raising required funds. For example, it will generally be more expensive to raise an initial investment of $150,000 than it will be
to raise this amount over a period of three years. The relevancy of this cost depends on how the organization goes about gathering the funds it requires. If the cost of capital for an initial investment is high, then it further helps justify the leasing of cloud-based IT resources.
•Sunk Costs: An organization will often have existing IT resources that are already paid for and operational. The prior investment that has been made in these on-premise IT resources is referred to as sunk costs. When comparing up-front costs together with significant sunk costs, it can be more difficult to justify the leasing of cloud-based IT resources as an alternative.
•Integration Costs: Integration testing is a form of testing required to measure the effort needed to make IT resources compatible and interoperable within a foreign environment, such as a new cloud platform. Depending on the cloud deployment model and cloud delivery model being considered by an organization, there may be the need to further allocate funds to carry out integration testing and additional labor to enable interoperability between cloud service consumers and cloud services. These expenses are referred to as integration costs. High integration costs can make the option of leasing cloud-based IT resources less appealing.
•Locked-in Costs: As explained in the Risks and Challenges section in Chapter 3, cloud environments can impose portability limitations. When performing a metrics analysis over a longer period of time, it may be necessary to take into consideration the possibility of having to move from one cloud provider to another. Due to the fact that cloud service consumers can become dependent on proprietary characteristics of a cloud environment, there are locked-in costs associated with this type of move. Locked-in costs can further decrease the long-term business value of leasing cloud-based IT resources.
11.3 CLOUD USAGE COST METRICS
The following sections describe a set of usage cost metrics for calculating costs associated with cloud-based IT resource usage measurements:
•Network Usage: inbound and outbound network traffic, as well as intra-cloud network traffic
•Server Usage: virtual server allocation (and resource reservation)
•Cloud Storage Device: storage capacity allocation
•Cloud Service: subscription duration, number of nominated users, number of transactions (of cloud services and cloud-based applications)
For each usage cost metric a description, measurement unit, and measurement frequency is provided, along with the cloud delivery model most applicable to the metric. Each metric is further supplemented with a brief example.
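Before turning to the individual usage cost metrics, the up-front and on-going business cost metrics from section 11.2 can be combined into a simple cumulative comparison. The sketch below contrasts a high up-front, lower on-going on-premise purchase against a low up-front, higher on-going cloud lease; every dollar figure is hypothetical.

```python
def total_cost(up_front, on_going_per_year, years):
    # cumulative cost = initial investment + yearly operating expense
    return up_front + on_going_per_year * years

# hypothetical figures for a three-year comparison
on_premise = total_cost(up_front=150_000, on_going_per_year=20_000, years=3)
cloud = total_cost(up_front=5_000, on_going_per_year=45_000, years=3)
```

Note that because cloud on-going costs often exceed on-premise on-going costs, the comparison can flip in favor of on-premise ownership if the evaluation horizon is extended far enough.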
11.3.1 Network Usage:
Defined as the amount of data that is transferred over a network connection, network usage is typically calculated using separately measured inbound network usage traffic and outbound network usage traffic metrics in relation to cloud services or other IT resources.
Inbound Network Usage Metric:
•Description: inbound network traffic
•Measurement: Σ, inbound network traffic in bytes
•Frequency: continuous and cumulative over a predefined period
•Cloud Delivery Model: IaaS, PaaS, SaaS
•Example: up to 1 GB free, $0.001/GB up to 10 TB a month
Outbound Network Usage Metric:
•Description: outbound network traffic
•Measurement: Σ, outbound network traffic in bytes
•Frequency: continuous and cumulative over a predefined period
•Cloud Delivery Model: IaaS, PaaS, SaaS
•Example: up to 1 GB free a month, $0.01/GB between 1 GB to 10 TB per month
Network usage metrics can be applied to WAN traffic between IT resources of one cloud that are located in different geographical regions in order to calculate costs for synchronization, data replication, and related forms of processing. Conversely, LAN usage and other network traffic among IT resources that reside at the same data center are typically not tracked.
Intra-Cloud WAN Usage Metric:
•Description: network traffic between geographically diverse IT resources of the same cloud
•Measurement: Σ, intra-cloud WAN traffic in bytes
•Frequency: continuous and cumulative over a predefined period
•Cloud Delivery Model: IaaS, PaaS, SaaS
•Example: up to 500 MB free daily and $0.01/GB thereafter, $0.005/GB after 1 TB per month
Many cloud providers do not charge for inbound traffic in order to encourage cloud consumers to migrate data to the cloud. Some also do not charge for WAN traffic within the same cloud.
Network-related cost metrics are determined by the following properties:
•Static IP Address Usage: IP address allocation time (if a static IP is required)
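The sample price template for the inbound network usage metric ("up to 1 GB free, $0.001/GB up to 10 TB a month") can be expressed as a small cost function. This is a minimal sketch that assumes usage stays below the 10 TB tier boundary; higher tiers would need additional rate rules.

```python
def inbound_network_cost(gb_used, free_gb=1.0, rate_per_gb=0.001):
    # monthly inbound traffic cost: first free_gb are free,
    # the remainder is billed at rate_per_gb (sample rates from the text)
    billable_gb = max(0.0, gb_used - free_gb)
    return billable_gb * rate_per_gb
```

For example, a month with 101 GB of inbound traffic has 100 billable GB, costing $0.10 under this template.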
•Network Load-Balancing: the amount of load-balanced network traffic (in bytes)
•Virtual Firewall: the amount of firewall-processed network traffic (as per allocation time)
11.3.2 Server Usage:
The allocation of virtual servers is measured using common pay-per-use metrics in IaaS and PaaS environments that are quantified by the number of virtual servers and ready-made environments. This form of server usage measurement is divided into on-demand virtual machine instance allocation and reserved virtual machine instance allocation metrics.
The former metric measures pay-per-usage fees on a short-term basis, while the latter metric calculates up-front reservation fees for using virtual servers over extended periods. The up-front reservation fee is usually used in conjunction with discounted pay-per-usage fees.
On-Demand Virtual Machine Instance Allocation Metric:
•Description: uptime of a virtual server instance
•Measurement: Σ, virtual server start date to stop date
•Frequency: continuous and cumulative over a predefined period
•Cloud Delivery Model: IaaS, PaaS
•Example: $0.10/hour small instance, $0.20/hour medium instance, $0.90/hour large instance
Reserved Virtual Machine Instance Allocation Metric:
•Description: up-front cost for reserving a virtual server instance
•Measurement: Σ, virtual server reservation start date to expiry date
•Frequency: daily, monthly, yearly
•Cloud Delivery Model: IaaS, PaaS
•Example: $55.10/small instance, $99.90/medium instance, $249.90/large instance
11.3.3 Cloud Storage Device Usage:
Cloud storage is generally charged by the amount of space allocated within a predefined period, as measured by the on-demand storage allocation metric. Similar to IaaS-based cost metrics, on-demand storage allocation fees are usually based on short time increments (such as on an hourly basis). Another common cost metric for cloud storage is I/O data transferred, which measures the amount of transferred input and output data.
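Using the sample small-instance figures above, the trade-off between on-demand and reserved allocation can be estimated. The $0.10/hour on-demand rate and $55.10 reservation fee come from the examples in the metrics; the discounted pay-per-usage rate of $0.04/hour is an assumption, since the text notes a discount exists but does not give one.

```python
def on_demand_cost(hours, hourly_rate=0.10):
    # pure pay-per-usage: no up-front fee
    return hours * hourly_rate

def reserved_cost(hours, reservation_fee=55.10, discounted_rate=0.04):
    # up-front reservation fee plus an assumed discounted hourly rate
    return reservation_fee + hours * discounted_rate
```

At 1,000 hours of uptime, on-demand allocation costs about $100 while the reserved option costs about $95.10 under these assumed rates, so reservation pays off once usage is long enough.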
On-Demand Storage Space Allocation Metric:
•Description: duration and size of on-demand storage space allocation in bytes
•Measurement: Σ, date of storage release/reallocation to date of storage allocation (resets upon change in storage size)
•Frequency: continuous
•Cloud Delivery Model: IaaS, PaaS, SaaS
•Example: $0.01/GB per hour (typically expressed as GB/month)
I/O Data Transferred Metric:
•Description: amount of transferred I/O data
•Measurement: Σ, I/O data in bytes
•Frequency: continuous
•Cloud Delivery Model: IaaS, PaaS
•Example: $0.10/TB
11.3.4 Cloud Service Usage:
Cloud service usage in SaaS environments is typically measured using the following three metrics:
Application Subscription Duration Metric:
•Description: duration of cloud service usage subscription
•Measurement: Σ, subscription start date to expiry date
•Frequency: daily, monthly, yearly
•Cloud Delivery Model: SaaS
•Example: $69.90 per month
Number of Nominated Users Metric:
•Description: number of registered users with legitimate access
•Measurement: number of users
•Frequency: monthly, yearly
•Cloud Delivery Model: SaaS
•Example: $0.90/additional user per month
Number of Transactions Metric:
•Description: number of transactions served by the cloud service
•Measurement: number of transactions (request-response message exchanges)
•Frequency: continuous
•Cloud Delivery Model: PaaS, SaaS
•Example: $0.05 per 1,000 transactions
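The on-demand storage and transaction-based examples above translate directly into cost functions. The rates are the sample figures from the metrics; linear proration of partial thousands of transactions is an assumption, since real templates may round up to the next billing unit.

```python
def storage_cost(gb_allocated, hours, rate_per_gb_hour=0.01):
    # on-demand storage space allocation at the sample $0.01/GB per hour
    return gb_allocated * hours * rate_per_gb_hour

def transaction_cost(transactions, rate_per_1000=0.05):
    # number-of-transactions fee at the sample $0.05 per 1,000 transactions
    return (transactions / 1000.0) * rate_per_1000
```

For example, a 10 GB allocation held for 24 hours costs $2.40, and 20,000 served transactions cost $1.00 under these templates.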
11.4 COST MANAGEMENT CONSIDERATIONS
Cost management is often centered around the lifecycle phases of cloud services, as follows:
•Cloud Service Design and Development: During this stage, the vanilla pricing models and cost templates are typically defined by the organization delivering the cloud service.
•Cloud Service Deployment: Prior to and during the deployment of a cloud service, the backend architecture for usage measurement and billing-related data collection is determined and implemented, including the positioning of pay-per-use monitor and billing management system mechanisms.
•Cloud Service Contracting: This phase consists of negotiations between the cloud consumer and cloud provider with the goal of reaching a mutual agreement on rates based on usage cost metrics.
•Cloud Service Offering: This stage entails the concrete offering of a cloud service's pricing models through cost templates, and any available customization options.
•Cloud Service Provisioning: Cloud service usage and instance creation thresholds may be imposed by the cloud provider or set by the cloud consumer. Either way, these and other provisioning options can impact usage costs and other fees.
•Cloud Service Operation: This is the phase during which active usage of the cloud service produces usage cost metric data.
•Cloud Service Decommissioning: When a cloud service is temporarily or permanently deactivated, statistical cost data may be archived.
Figure 11.1 Common cloud service lifecycle stages as they relate to cost management considerations.
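The usage cost metric data produced during the operation stage is ultimately priced through templates of the kind discussed next. The sketch below shows one hypothetical volume-discount rule that such a template might codify; the threshold and both rates are invented for illustration.

```python
def tiered_usage_fee(units, base_rate=0.10, discount_threshold=1000,
                     discounted_rate=0.07):
    # usage up to the threshold is billed at the base rate;
    # usage beyond it earns the (hypothetical) volume-discounted rate
    if units <= discount_threshold:
        return units * base_rate
    discounted_units = units - discount_threshold
    return discount_threshold * base_rate + discounted_units * discounted_rate
```

Under these assumed rates, 500 units cost $50.00, while 1,500 units cost $135.00 rather than $150.00 because the last 500 units are discounted.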
11.4.1 Pricing Models:
The pricing models used by cloud providers are defined using templates that specify unit costs for fine-grained resource usage according to usage cost metrics. Various factors can influence a pricing model, such as:
•Market competition and regulatory requirements
•Overhead incurred during the design, development, deployment, and operation of cloud services and other IT resources
•Opportunities to reduce expenses via IT resource sharing and data center optimization
Most major cloud providers offer cloud services at relatively stable, competitive prices even though their own expenses can be volatile. A price template or pricing plan contains a set of standardized costs and metrics that specify how cloud service fees are measured and calculated. Price templates define a pricing model's structure by setting various units of measure, usage quotas, discounts, and other codified fees. A pricing model can contain multiple price templates, whose formulation is determined by variables like:
•Cost Metrics and Associated Prices: These are costs that are dependent on the type of IT resource allocation (such as on-demand versus reserved allocation).
•Fixed and Variable Rates Definitions: Fixed rates are based on resource allocation and define the usage quotas included in the fixed price, while variable rates are aligned with actual resource usage.
•Volume Discounts: More IT resources are consumed as the degree of IT resource scaling progressively increases, thereby possibly qualifying a cloud consumer for higher discounts.
•Cost and Price Customization Options: This variable is associated with payment options and schedules. For example, cloud consumers may be able to choose monthly, semi-annual, or annual payment installments.
11.4.2 Additional Considerations:
•Negotiation: Cloud provider pricing is often open to negotiation, especially for customers willing to commit to higher volumes or longer terms. Price negotiations can sometimes be executed online via the cloud provider's Web site by submitting estimated usage volumes along with proposed discounts. There are even tools available for cloud consumers to help generate accurate IT resource usage estimates for this purpose.
•Payment Options: After completing each measurement period, the cloud provider's billing management system calculates the amount owed by a cloud consumer. There are two common payment options available to cloud consumers: pre-payment and post-payment. With
pre-paid billing, cloud consumers are provided with IT resource usage credits that can be applied to future usage bills. With the post-payment method, cloud consumers are billed and invoiced for each IT resource consumption period, which is usually on a monthly basis.
•Cost Archiving: By tracking historical billing information, both cloud providers and cloud consumers can generate insightful reports that help identify usage and financial trends.
11.5 SERVICE-LEVEL AGREEMENTS (SLAs)
Service-level agreements (SLAs) are a focal point of negotiations, contract terms, legal obligations, and runtime metrics and measurements. SLAs formalize the guarantees put forth by cloud providers, and correspondingly influence or determine the pricing models and payment terms. SLAs set cloud consumer expectations and are integral to how organizations build business automation around the utilization of cloud-based IT resources.
The guarantees made by a cloud provider to a cloud consumer are often carried forward, in that the same guarantees are made by the cloud consumer organization to its clients, business partners, or whomever will be relying on the services and solutions hosted by the cloud provider. It is therefore crucial for SLAs and related service quality metrics to be understood and aligned in support of the cloud consumer's business requirements, while also ensuring that the guarantees can, in fact, be realistically fulfilled consistently and reliably by the cloud provider. The latter consideration is especially relevant for cloud providers that host shared IT resources for high volumes of cloud consumers, each of which will have been issued its own SLA guarantees.
11.6 SERVICE QUALITY METRICS
SLAs issued by cloud providers are human-readable documents that describe quality-of-service (QoS) features, guarantees, and limitations of one or more cloud-based IT resources. SLAs use service quality metrics to express measurable QoS characteristics. For example:
•Availability: up-time, outages, service duration
•Reliability: minimum time between failures, guaranteed rate of successful responses
•Performance: capacity, response time, and delivery time guarantees
•Scalability: capacity fluctuation and responsiveness guarantees
•Resiliency: mean-time to switchover and recovery
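Service quality metrics of this kind become useful when measured values are checked against the guaranteed thresholds. The sketch below is a minimal illustration of that comparison, using two invented guarantees (an availability floor and a response-time ceiling); real SLA management systems track many more metrics, each with its own comparison direction.

```python
# hypothetical SLA guarantees: minimum availability, maximum response time
SLA_GUARANTEES = {
    "availability_pct": 99.5,
    "max_response_time_ms": 200.0,
}

def meets_sla(measurements, guarantees=SLA_GUARANTEES):
    # availability must meet or exceed the floor;
    # response time must not exceed the ceiling
    return (measurements["availability_pct"] >= guarantees["availability_pct"]
            and measurements["max_response_time_ms"]
                <= guarantees["max_response_time_ms"])
```

An SLA management system would run such checks against each measurement period's collected data to verify compliance.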
SLA management systems use these metrics to perform periodic measurements that verify compliance with SLA guarantees, in addition to collecting SLA-related data for various types of statistical analyses.
Each service quality metric is ideally defined using the following characteristics:
•Quantifiable: The unit of measure is clearly set, absolute, and appropriate so that the metric can be based on quantitative measurements.
•Repeatable: The methods of measuring the metric need to yield identical results when repeated under identical conditions.
•Comparable: The units of measure used by a metric need to be standardized and comparable. For example, a service quality metric cannot measure smaller quantities of data in bits and larger quantities in bytes.
•Easily Obtainable: The metric needs to be based on a non-proprietary, common form of measurement that can be easily obtained and understood by cloud consumers.
11.6.1 Service Availability Metrics:
Availability Rate Metric:
The overall availability of an IT resource is usually expressed as a percentage of up-time. For example, an IT resource that is always available will have an up-time of 100%.
•Description: percentage of service up-time
•Measurement: total up-time / total time
•Frequency: weekly, monthly, yearly
•Cloud Delivery Model: IaaS, PaaS, SaaS
•Example: minimum 99.5% up-time
Availability rates are calculated cumulatively, meaning that unavailability periods are combined in order to compute the total downtime (Table 11.1).

Availability | Downtime/Week (Seconds) | Downtime/Month (Seconds) | Downtime/Year (Seconds)
99.5         | 3,024                   | 12,960                   | 157,680
99.8         | 1,210                   | 5,184                    | 63,072
99.9         | 605                     | 2,592                    | 31,536
99.95        | 302                     | 1,296                    | 15,768
99.99        | 60.6                    | 259.2                    | 3,154
99.999       | 6.05                    | 25.9                     | 315.4
99.9999      | 0.605                   | 2.59                     | 31.5

Table 11.1 Sample availability rates measured in units of seconds
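The entries in Table 11.1 follow from applying the unavailability percentage to the number of seconds in each period; a 30-day month is assumed here:

```python
def downtime_seconds(availability_pct, period_seconds):
    # allowed downtime = unavailability fraction of the period
    return period_seconds * (100.0 - availability_pct) / 100.0

# period lengths in seconds: 7-day week, 30-day month, 365-day year
WEEK, MONTH, YEAR = 604_800, 2_592_000, 31_536_000
```

For instance, a 99.5% availability guarantee permits 3,024 seconds of downtime per week, and a 99.95% guarantee permits 15,768 seconds per year, matching the corresponding table rows.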


Outage Duration Metric:
This service quality metric is used to define both maximum and average continuous outage service-level targets.
• Description - duration of a single outage
• Measurement - date/time of outage end - date/time of outage start
• Frequency - per event
• Cloud Delivery Model - IaaS, PaaS, SaaS
• Example - 1 hour maximum, 15 minutes average

11.6.2 Service Reliability Metrics:

A characteristic closely related to availability, reliability is the probability that an IT resource can perform its intended function under pre-defined conditions without experiencing failure. Reliability focuses on how often the service performs as expected, which requires the service to remain in an operational and available state. Certain reliability metrics only consider runtime errors and exception conditions as failures, which are commonly measured only when the IT resource is available.

Mean-Time Between Failures (MTBF) Metric:
• Description - expected time between consecutive service failures
• Measurement - Σ normal operational period duration / number of failures
• Frequency - monthly, yearly
• Cloud Delivery Model - IaaS, PaaS
• Example - 90 day average

Reliability Rate Metric:
Overall reliability is more complicated to measure and is usually defined by a reliability rate that represents the percentage of successful service outcomes.
This metric measures the effects of non-fatal errors and failures that occur during up-time periods. For example, an IT resource's reliability is 100% if it has performed as expected every time it is invoked, but only 80% if it fails to perform every fifth time.
• Description - percentage of successful service outcomes under pre-defined conditions
• Measurement - total number of successful responses / total number of requests
• Frequency - weekly, monthly, yearly
• Cloud Delivery Model - SaaS
• Example - minimum 99.5%
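Both reliability metrics above reduce to simple arithmetic over recorded operational data. A hedged sketch (function names and sample figures are illustrative):

```python
def mtbf(operational_period_durations, number_of_failures):
    """Mean-Time Between Failures: the sum (the Σ in the metric) of normal
    operational period durations divided by the number of failures."""
    return sum(operational_period_durations) / number_of_failures

def reliability_rate(successful_responses, total_requests):
    """Reliability Rate: percentage of successful service outcomes."""
    return 100.0 * successful_responses / total_requests

# Three operational periods (in hours) separated by three failures.
print(mtbf([700, 650, 810], 3))   # 720.0 hours between failures

# As in the text: a resource that fails every fifth invocation is 80% reliable.
print(reliability_rate(4, 5))     # 80.0
```

Note that MTBF counts only normal operational periods, so time spent in outages is excluded from the numerator; the reliability rate, in turn, is measured only over requests made while the resource was available.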


11.6.3 Service Performance Metrics:

Service performance refers to the ability of an IT resource to carry out its functions within expected parameters. This quality is measured using service capacity metrics, each of which focuses on a related measurable characteristic of IT resource capacity. A set of common performance capacity metrics is provided in this section. Note that different metrics may apply, depending on the type of IT resource being measured.

Network Capacity Metric:
• Description - measurable characteristics of network capacity
• Measurement - bandwidth / throughput in bits per second
• Frequency - continuous
• Cloud Delivery Model - IaaS, PaaS, SaaS
• Example - 10 MB per second

Storage Device Capacity Metric:
• Description - measurable characteristics of storage device capacity
• Measurement - storage size in GB
• Frequency - continuous
• Cloud Delivery Model - IaaS, PaaS, SaaS
• Example - 80 GB of storage

Server Capacity Metric:
• Description - measurable characteristics of server capacity
• Measurement - number of CPUs, CPU frequency in GHz, RAM size in GB, storage size in GB
• Frequency - continuous
• Cloud Delivery Model - IaaS, PaaS
• Example - 1 core at 1.7 GHz, 16 GB of RAM, 80 GB of storage

Web Application Capacity Metric:
• Description - measurable characteristics of Web application capacity
• Measurement - rate of requests per minute
• Frequency - continuous
• Cloud Delivery Model - SaaS
• Example - maximum 100,000 requests per minute

Instance Starting Time Metric:
• Description - length of time required to initialize a new instance
• Measurement - date/time of instance up - date/time of start request


• Frequency - per event
• Cloud Delivery Model - IaaS, PaaS
• Example - 5 minute maximum, 3 minute average

Response Time Metric:
• Description - time required to perform a synchronous operation
• Measurement - Σ (date/time of response - date/time of request) / total number of requests
• Frequency - daily, weekly, monthly
• Cloud Delivery Model - SaaS
• Example - 5 millisecond average

Completion Time Metric:
• Description - time required to complete an asynchronous task
• Measurement - Σ (date of response - date of request) / total number of requests
• Frequency - daily, weekly, monthly
• Cloud Delivery Model - PaaS, SaaS
• Example - 1 second average

11.6.4 Service Scalability Metrics:

Service scalability metrics are related to IT resource elasticity capacity, which is the maximum capacity that an IT resource can achieve, together with measurements of its ability to adapt to workload fluctuations. For example, a server can be scaled up to a maximum of 128 CPU cores and 512 GB of RAM, or scaled out to a maximum of 16 load-balanced replicated instances.

The following metrics help determine whether dynamic service demands will be met proactively or reactively, as well as the impacts of manual or automated IT resource allocation processes.

Storage Scalability (Horizontal) Metric:
• Description - permissible storage device capacity changes in response to increased workloads
• Measurement - storage size in GB
• Frequency - continuous
• Cloud Delivery Model - IaaS, PaaS, SaaS
• Example - 1,000 GB maximum (automated scaling)

Server Scalability (Horizontal) Metric:
• Description - permissible server capacity changes in response to increased workloads


• Measurement - number of virtual servers in resource pool
• Frequency - continuous
• Cloud Delivery Model - IaaS, PaaS
• Example - 1 virtual server minimum, 10 virtual server maximum (automated scaling)

Server Scalability (Vertical) Metric:
• Description - permissible server capacity fluctuations in response to workload fluctuations
• Measurement - number of CPUs, RAM size in GB
• Frequency - continuous
• Cloud Delivery Model - IaaS, PaaS
• Example - 512 core maximum, 512 GB of RAM

11.6.5 Service Resiliency Metrics:

The ability of an IT resource to recover from operational disturbances is often measured using service resiliency metrics. When resiliency is described within or in relation to SLA resiliency guarantees, it is often based on redundant implementations and resource replication over different physical locations, as well as various disaster recovery systems.

The type of cloud delivery model determines how resiliency is implemented and measured. For example, the physical locations of replicated virtual servers that are implementing resilient cloud services can be explicitly expressed in the SLAs for IaaS environments, while being implicitly expressed for the corresponding PaaS and SaaS environments.

Resiliency metrics can be applied in three different phases to address the challenges and events that can threaten the regular level of a service:
• Design Phase: Metrics that measure how prepared systems and services are to cope with challenges.
• Operational Phase: Metrics that measure the difference in service levels before, during, and after a downtime event or service outage, which are further qualified by availability, reliability, performance, and scalability metrics.
• Recovery Phase: Metrics that measure the rate at which an IT resource recovers from downtime, such as the mean time for a system to log an outage and switch over to a new virtual server.

Two common metrics related to measuring resiliency are as follows:

Mean-Time to Switchover (MTSO) Metric:
• Description - the time expected to complete a switchover from a


severe failure to a replicated instance in a different geographical area
• Measurement - Σ (date/time of switchover completion - date/time of failure) / total number of failures
• Frequency - monthly, yearly
• Cloud Delivery Model - IaaS, PaaS, SaaS
• Example - 10 minutes average

Mean-Time System Recovery (MTSR) Metric:
• Description - time expected for a resilient system to perform a complete recovery from a severe failure
• Measurement - Σ (date/time of recovery - date/time of failure) / total number of failures
• Frequency - monthly, yearly
• Cloud Delivery Model - IaaS, PaaS, SaaS
• Example - 120 minutes average

11.7 SLA GUIDELINES

This section provides a number of best practices and recommendations for working with SLAs, the majority of which are applicable to cloud consumers:

• Mapping Business Cases to SLAs:
It can be helpful to identify the necessary QoS requirements for a given automation solution and to then concretely link them to the guarantees expressed in the SLAs for the IT resources responsible for carrying out the automation. This can avoid situations where SLAs are inadvertently misaligned, or perhaps unreasonably deviate in their guarantees, subsequent to IT resource usage.

• Working with Cloud and On-Premise SLAs:
Due to the vast infrastructure available to support IT resources in public clouds, the QoS guarantees issued in SLAs for cloud-based IT resources are generally superior to those provided for on-premise IT resources. This variance needs to be understood, especially when building hybrid distributed solutions that utilize both on-premise and cloud-based services, or when incorporating cross-environment technology architectures, such as cloud bursting.

• Understanding the Scope of an SLA:
Cloud environments are composed of many supporting architectural and infrastructure layers upon which IT resources reside and are integrated. It is important to acknowledge the extent to which a given IT resource guarantee applies.
For example, an SLA may be limited to the IT resource implementation but not its underlying hosting environment.


• Understanding the Scope of SLA Monitoring:
SLAs need to specify where monitoring is performed and where measurements are calculated, primarily in relation to the cloud's firewall. For example, monitoring within the cloud firewall is not always advantageous or relevant to the cloud consumer's required QoS guarantees. Even the most efficient firewalls have a measurable degree of influence on performance and can further present a point of failure.

• Documenting Guarantees at Appropriate Granularity:
SLA templates used by cloud providers sometimes define guarantees in broad terms. If a cloud consumer has specific requirements, the corresponding level of detail should be used to describe the guarantees. For example, if data replication needs to take place across particular geographic locations, then these need to be specified directly within the SLA.

• Defining Penalties for Non-Compliance:
If a cloud provider is unable to follow through on the QoS guarantees promised within the SLAs, recourse can be formally documented in terms of compensation, penalties, reimbursements, or otherwise.

• Incorporating Non-Measurable Requirements:
Some guarantees cannot be easily measured using service quality metrics, but are relevant to QoS nonetheless, and should therefore still be documented within the SLA. For example, a cloud consumer may have specific security and privacy requirements for data hosted by the cloud provider that can be addressed by assurances in the SLA for the cloud storage device being leased.

• Disclosure of Compliance Verification and Management:
Cloud providers are often responsible for monitoring IT resources to ensure compliance with their own SLAs.
In this case, the SLAs themselves should state what tools and practices are being used to carry out the compliance checking process, in addition to any legal-related auditing that may be occurring.

• Inclusion of Specific Metric Formulas:
Some cloud providers do not mention common SLA metrics or the metrics-related calculations in their SLAs, instead focusing on service-level descriptions that highlight the use of best practices and customer support. Metrics being used to measure SLAs should be part of the SLA document, including the formulas and calculations that the metrics are based upon.

• Considering Independent SLA Monitoring:
Although cloud providers will often have sophisticated SLA management systems and SLA monitors, it may be in the best interest of a cloud consumer to hire a third-party organization to perform independent monitoring as well, especially if there are suspicions that SLA guarantees are not always being met by the cloud provider (despite the results shown on periodically issued monitoring reports).
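Independent monitoring only helps if measured values can be checked against the SLA's guarantees programmatically. A minimal illustrative sketch of such a compliance check (the metric names, thresholds, and sample measurements below are hypothetical, not from any real SLA):

```python
# Hypothetical guarantees, as a cloud consumer might transcribe them from an SLA.
sla_guarantees = {
    "availability_pct": 99.5,   # minimum monthly up-time percentage
    "reliability_pct": 99.5,    # minimum percentage of successful responses
    "avg_response_ms": 5.0,     # maximum average response time
}

# Independently measured values for one month (sample data).
measured = {
    "availability_pct": 99.62,
    "reliability_pct": 99.41,
    "avg_response_ms": 4.2,
}

# For response time, lower is better; for the percentages, higher is better.
LOWER_IS_BETTER = {"avg_response_ms"}

def check_compliance(guarantees, observed):
    """Return the metrics whose measured values violate the SLA guarantees."""
    violations = {}
    for metric, target in guarantees.items():
        value = observed[metric]
        ok = value <= target if metric in LOWER_IS_BETTER else value >= target
        if not ok:
            violations[metric] = (value, target)
    return violations

print(check_compliance(sla_guarantees, measured))
# {'reliability_pct': (99.41, 99.5)}
```

A check like this is only meaningful when the SLA discloses the exact metric formulas, which is precisely why the guideline above recommends including them in the SLA document.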


• Archiving SLA Data:
The SLA-related statistics collected by SLA monitors are commonly stored and archived by the cloud provider for future reporting purposes. If a cloud provider intends to keep SLA data specific to a cloud consumer even after the cloud consumer no longer continues its business relationship with the cloud provider, then this should be disclosed. The cloud consumer may have data privacy requirements that disallow the unauthorized storage of this type of information. Similarly, during and after a cloud consumer's engagement with a cloud provider, it may want to keep a copy of historical SLA-related data as well. It may be especially useful for comparing cloud providers in the future.

• Disclosing Cross-Cloud Dependencies:
Cloud providers may be leasing IT resources from other cloud providers, which results in a loss of control over the guarantees they are able to make to cloud consumers. Although a cloud provider will rely on the SLA assurances made to it by other cloud providers, the cloud consumer may want disclosure of the fact that the IT resources it is leasing may have dependencies beyond the environment of the cloud provider organization.

QUESTIONS

1. Explain different types of metrics used to evaluate the estimated costs and business value of leasing cloud-based IT resources.
2. Explain network usage cost metrics for calculating costs associated with cloud-based IT resources.
3. Explain server usage cost metrics for calculating costs associated with cloud-based IT resources.
4. Explain cloud service usage cost metrics for calculating costs associated with cloud-based IT resources.
5. How are different cloud service lifecycle stages related to cost?
6. Explain the pricing model in cost management.
7. Write a short note on Service-Level Agreements.
8. Explain the different characteristics of a service quality metric.
9. Explain Service Availability Metrics.
10. Explain Service Reliability Metrics.
11. Explain Service Performance Metrics.
12. Explain Service Scalability Metrics.


13. Explain Service Resiliency Metrics.
14. What are different guidelines for Service-Level Agreements?

REFERENCES

• Mastering Cloud Computing: Foundations and Applications Programming, Rajkumar Buyya, Christian Vecchiola, S. Thamarai Selvi, MK Publications, ISBN: 978-0-12-411454-8
• Cloud Computing: Concepts, Technology & Architecture, Thomas Erl, Zaigham Mahmood, and Ricardo Puttini, The Prentice Hall Service Technology Series, ISBN-10: 9780133387520, ISBN-13: 978-0133387520
• Distributed and Cloud Computing: From Parallel Processing to the Internet of Things, 1st Edition, Kai Hwang, Jack Dongarra, Geoffrey Fox, ISBN-10: 9789381269237, ISBN-13: 978-9381269237
• https://www.studocu.com/in/document/srm-institute-of-science-and-technology/cloud-computing/unit-4-cloud/9139371
• https://www.coursehero.com/file/18595964/103940-Lec13/

*****