
Virtualizing the Application Lifecycle in the Cloud

Oct. 16, 2011
Cloud computing continues to grow, and organizations are looking to implement their own strategies to drive greater efficiency across the organization.

With the popularity of cloud computing continuing to grow, organizations are looking to implement their own strategies to drive greater efficiency across the organization. For companies that are just starting out, the cloud can be a daunting prospect. The application development/test lifecycle is a great initiation point for many enterprises whose goal is to use cloud computing to simplify the process of building, testing, and deploying applications. To begin a cloud implementation, it's helpful to look at cloud computing from two different points of view: an "inside-out" technology-centric view and an "outside-in" business-centric view.

Application Development in the Cloud Converges

From a technical perspective, cloud computing is the convergence of several pervasive technologies combining to simplify, standardize, and automate much of the IT infrastructure. At its core, cloud computing relies on virtualization, but is much more than just a virtualized data center.

Cloud computing includes an automated, self-service interface where users obtain services from standardized catalogs of resources such as virtual machine images. Generally, this is coupled with metering and billing systems allowing chargebacks based on the computing resources consumed per unit time (such as virtual-machine hours, GB of storage, or database transactions). The ease with which these resources can be provisioned and deprovisioned gives rise to the "elastic" nature of cloud computing. The billing mechanisms give users an incentive to deprovision unused resources; without that, cloud computing would be referred to as "expansive" rather than "elastic."
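To make the chargeback idea concrete, the sketch below computes one billing period's charges from metered usage. The rates, resource names, and usage figures are invented for illustration; real providers publish their own price lists.

# Minimal sketch of a usage-based chargeback calculation.
# Rates and usage figures below are illustrative, not a real price list.

RATES = {
    "vm_hours": 0.12,              # $ per virtual-machine hour
    "storage_gb_month": 0.10,      # $ per GB of storage per month
    "db_transactions": 0.0000005,  # $ per database transaction
}

def chargeback(usage: dict) -> float:
    """Sum the cost of each metered resource for one billing period."""
    return sum(RATES[resource] * quantity for resource, quantity in usage.items())

# A team that ran 4 VMs around the clock for a 30-day month, kept 200 GB
# of storage, and issued 10 million database transactions:
usage = {"vm_hours": 4 * 24 * 30, "storage_gb_month": 200, "db_transactions": 10_000_000}
print(f"Monthly chargeback: ${chargeback(usage):,.2f}")  # -> $370.60

Because the meter stops when resources are deprovisioned, teams have a direct incentive to release what they are not using, which is exactly the "elastic" behavior described above.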

From a business perspective, cloud computing is simply the delivery of IT-related services across the network (LAN or Internet) under a consumption-based charge model. This includes:

  • Infrastructure as a service (IaaS, such as virtual machines, network connectivity, and storage)
  • Platform as a service (PaaS, the runtime stack, including middleware and databases)
  • Software as a service (SaaS, including applications such as email and office productivity tools)
  • Business Processes as a service (BPaaS, such as financial transaction processing)

A key characteristic is that customers pay for the services they consume but are often unaware of the technology behind those services. For example, a SaaS consumer of email services might pay for a number of email accounts but neither know nor care what operating system the underlying systems run, or whether they are x86 virtual machines on KVM or Power Systems virtual machines on IBM PowerVM.

Regardless of which view of cloud computing an organization prefers, these services can be delivered through a mixture of owned or leased infrastructure, hosted on or off premises. They can be delivered by a third-party cloud service provider or by the customer's internal IT organization, and can run on infrastructure shared among several customers or dedicated to a single customer. These options are often described as a spectrum from "public" to "private" clouds, with the expected tradeoffs among privacy, control, and cost.

Whether viewed from the technology out or the consumption model in, and whether delivered via dedicated or shared infrastructure, cloud computing can help drive significant increases in efficiency across an IT organization. However, many organizations are still unclear about where to start applying cloud computing. Many companies realize that somewhere between the hype and the FUD (fear, uncertainty, and doubt), there must be an alternative. The application development/test lifecycle offers that alternative: infrastructure utilization is low, labor costs are high, and the lifecycle is highly automatable. Cloud computing can streamline the development lifecycle dramatically, reducing cost, improving quality, and reducing time-to-market.

Efficiencies Created in the Cloud

Studies have shown that in a typical IT organization, between 30% and 50% of all servers are dedicated to test, and most of those servers run at 10% utilization or less (see Industry Developments and Models - Global Testing Services: Coming of Age). With all of these underutilized servers, one would think that organizations have all the resources they need for testing. However, one of the top challenges in moving applications into production is the limited availability of servers on which to test, and the test backlog is often the single largest factor delaying new application deployments. With so many servers dedicated to test, sitting mostly idle, teams still cannot get their applications into test environments fast enough to be tested and deployed. Generally, the root cause is the difficulty of properly deploying and configuring test environments. Cloud computing can make test servers more readily available and speed the process of deploying applications.

The first step on the path of cloud computing for development and test is simply to use an Infrastructure as a Service (IaaS) cloud to obtain virtual machines for development and test environments. This is fairly straightforward: practitioners can directly request virtual machines, install software on them, and configure them as needed. The benefits are apparent: procurement times for these systems are nearly zero, and capital expenditures are eliminated in exchange for usage-based operational expenses. This directly addresses both the utilization and the availability of test servers.
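As an illustration of how direct such a request can be, the following sketch provisions a test server programmatically. It assumes AWS EC2 and the boto3 library purely as one concrete IaaS example; the image ID and instance type are placeholders.

# Hypothetical sketch: requesting a test VM from an IaaS cloud.
# AWS EC2 via boto3 is used as one concrete provider; the article itself
# is provider-neutral, and the IDs below are placeholders.
import boto3

ec2 = boto3.resource("ec2")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder image ID
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "purpose", "Value": "test-environment"}],
    }],
)

instance = instances[0]
instance.wait_until_running()   # provisioning takes minutes, not weeks
print(f"Test server ready: {instance.id}")

# When testing is done, deprovision to stop the usage-based charges:
# instance.terminate()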

The time to install and configure middleware and applications, however, can still be substantial. Some address this problem by saving virtual machine images after installing software on a running instance, so that users can subsequently reprovision the VM with the desired software already installed. While on the right track, this approach can lead to a disorganized collection of virtual machine images, creating confusion over which VM contains the right version of the software needed for a given task. There is a more structured approach: using cloud computing to automate application deployment and configuration.
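One lightweight defense against that image sprawl is to record what each saved image actually contains. The sketch below, again assuming EC2/boto3 as the provider, captures a configured instance as an image and tags it with its contents; the instance ID, image name, and tag keys are all illustrative.

# Hypothetical sketch: capturing a configured instance as a reusable
# image and tagging it so the catalog records what it contains.
import boto3

ec2 = boto3.client("ec2")

response = ec2.create_image(
    InstanceId="i-0123456789abcdef0",   # placeholder instance ID
    Name="rhel6-websphere7-build42",    # encode the contents in the name
)

ec2.create_tags(
    Resources=[response["ImageId"]],
    Tags=[
        {"Key": "os", "Value": "RHEL 6.1"},
        {"Key": "middleware", "Value": "WebSphere 7.0.0.19"},
        {"Key": "approved", "Value": "true"},
    ],
)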

A Structured Approach

First, organizations should identify a relatively small number of common application infrastructure and middleware patterns on which to standardize. Though this might require buy-in from the infrastructure, development, and operations teams, one way to "sell" this standardization to the development teams is to guarantee much faster turnaround of environment requests that conform to the standard catalog. Aiming for 100% coverage of the organization's applications is not realistic, so look to start with applications that are not the company's IT crown jewels, where conformance to a standard platform is a reasonable price to pay for reduced management costs and improved speed of deployment. It may even be helpful to start with one platform configuration that will cover a reasonable number of applications.

The next step is to capture the platform and its configuration in a set of virtual machine images and surface them in the cloud provider's catalog, private or public, depending on the organization's needs. Cloud management systems that let an organization orchestrate the deployment of multiple virtual machines together are particularly well suited to this. Next, automation scripts that configure the environments are created and linked into the cloud catalog, so the environments can be provisioned on demand from a self-service interface. Finally, scripts that deploy the application on top of the provisioned platform are written.
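What such a catalog entry might capture is sketched below as a simple data structure: the image for each virtual machine role, the configuration scripts run after provisioning, and the script that deploys the application last. The field names, image names, and script paths are hypothetical.

# Illustrative sketch of a self-service catalog entry tying together
# the VM images and the automation scripts that configure them.
CATALOG_ENTRY = {
    "name": "standard-java-web-platform",
    "version": "1.2",
    "virtual_machines": [
        {"role": "app-server", "image": "rhel6-websphere7-build42", "count": 2},
        {"role": "database",   "image": "rhel6-db2-build17",        "count": 1},
    ],
    # Configuration scripts run on each VM after it is provisioned:
    "post_provision_scripts": {
        "app-server": "scripts/configure_app_server.sh",
        "database":   "scripts/configure_database.sh",
    },
    # Deployed last, on top of the configured platform:
    "application_deploy_script": "scripts/deploy_application.sh",
}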

It also helps to think of the construction of the virtual machines in three stages. At the lowest level are the components that are "baked into" the image, typically the operating system and some middleware components. These are often governed by corporate standards, such as policies about operating systems and patch levels. They are large, complicated to install, and change relatively infrequently. Next are the common components that are laid down on top of the images after the instances are provisioned. These components often have different governance processes and rates of change than the underlying infrastructure components. They may include software components delivered by other teams in the organization or third-party libraries. To ensure correctness, it is useful to govern the catalog of components using a definitive software library and to ensure that the automation scripts draw the approved versions from that library.

At the highest level are the most rapidly changing pieces, which are the application bits. These components, and the automation scripts to install and configure them, are governed by the development processes. This keeps the application, and the scripts to deploy and configure it, in sync with each other so they can be deployed and tested together.
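A minimal sketch of this layered view follows, assuming the definitive software library can be modeled as a simple approved-version map that the automation scripts consult before laying components onto a freshly provisioned instance; the component names and versions are invented.

# Lowest layer: baked into the image, changes rarely.
BAKED_INTO_IMAGE = ["RHEL 6.1", "WebSphere 7.0"]

# Middle layer: common components, governed via a definitive software library.
DEFINITIVE_LIBRARY = {
    "logging-lib": "2.4.1",
    "payment-module": "1.0.3",
}

def resolve_common_components(requested: dict) -> list:
    """Return (name, version) pairs, refusing versions the library has not approved."""
    resolved = []
    for name, version in requested.items():
        approved = DEFINITIVE_LIBRARY.get(name)
        if approved != version:
            raise ValueError(f"{name} {version} is not the approved version ({approved})")
        resolved.append((name, version))
    return resolved

# Top layer: the application bits, deployed last by scripts that are
# configuration-managed together with the application itself.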

Defining these environments and the automation scripts to deploy and configure them is not easy, but there are technologies that can help users model the assemblage of virtual machines and the deployment of software onto those machines. In some cases, these tools can generate scripts from the models to automate the software deployment. Think of this as model-driven development of the application topologies themselves. To take the automation further, there are frameworks, both from commercial vendors and in the open-source world, that can simplify the configuration of middleware components.
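As a toy illustration of modeling a topology and deriving automation from the model, the sketch below declares virtual machines with dependencies and computes a deployment order from them. It is not any particular vendor's tool, just the shape of the idea.

# Toy sketch of model-driven deployment: declare the topology,
# then derive the order in which to deploy its pieces.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    depends_on: list = field(default_factory=list)

def deploy_order(nodes: list) -> list:
    """Topologically sort nodes so dependencies deploy first."""
    ordered, seen = [], set()
    def visit(node):
        if node.name in seen:
            return
        seen.add(node.name)
        for dep in node.depends_on:
            visit(dep)
        ordered.append(node.name)
    for n in nodes:
        visit(n)
    return ordered

db = Node("database-vm")
app = Node("app-server-vm", depends_on=[db])
lb = Node("load-balancer-vm", depends_on=[app])
print(deploy_order([lb, app, db]))
# -> ['database-vm', 'app-server-vm', 'load-balancer-vm']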

These are the building blocks of cloud-based automated application deployment through the lifecycle. The process of deploying a test environment can now be reduced to a self-service request through the cloud's Web portal. The cloud, after obtaining any approvals the organization deems necessary, automatically provisions the virtual machines, complete with OS and low-level components, and deploys the dependent components from the software library. The application, testing, or operations teams can then deploy the application bits, leveraging a set of automation scripts that are tested and configuration-managed together with the application. A highly manual, error-prone process that used to take weeks, or even months, can be reduced to hours or minutes.

Once a standard set of environments is defined, the developer and tester practitioner tools can be directly integrated with the cloud catalog. Developers can then deploy applications from the desktop onto dynamically provisioned cloud servers, and teams can automatically deploy applications directly from their build servers. Giving the team easy access to the same set of environments and automations reduces the likelihood of any given defect being due to misconfiguration, and simplifies defect reproduction. This improves the collaboration between the development and test teams.

This can extend to the operations team as well. However, not all operations teams will be able to use the same environments as the development and test teams; production systems often have specific requirements such as high availability and integration with monitoring and backup solutions. By drawing on the same definitive software library of components and the same automation scripts as development and test, organizations help ensure that applications get deployed and configured correctly despite differences in the underlying environments.

Best Practices for Collaborative Lifecycle Management

A set of best practices for using cloud computing to extend this collaborative lifecycle from development through to operations is emerging under the banner of "DevOps." Adopting a true DevOps approach requires more than standardized cloud catalogs and automations. However, the techniques outlined here can help IT organizations achieve some of the benefits of DevOps without undergoing the often major organizational and procedural changes needed for full adoption.

There are benefits at many levels to automating application deployment and configuration. Most obviously, application cycle times improve because of the ease with which test environments can be made available. Development and test teams are also less inclined to "hoard" servers if they know they can get properly configured environments on demand, especially if they are charged for the resources they use. This improves utilization, quality, and cost. With an estimated 30% of all defects attributable to misconfigured environments, this class of defects can be reduced, if not eliminated, freeing testers and developers to focus on real quality issues and improving test coverage. The time to set up and configure test environments is often the dominant factor in the test lifecycle; if deployment time drops from days or weeks to hours or minutes, testers spend more of their time actually testing the application. A standard set of virtual machine images also reduces the labor required to keep all of the test servers at the appropriate operating-system and middleware patch levels, cutting maintenance costs.

In addition to hosting the applications under development, the cloud can be used to provision development and testing tools as well. As a particular example, cloud computing can improve the speed and efficiency with which organizations do performance testing, which is quite resource-intensive. Agents, normally controlled via a performance-testing workbench or console, generate load against a server-based system under test by simulating a large number of client interactions. Dozens or even hundreds of agents can combine to simulate thousands of users interacting with a system, and each of those agents must be installed and configured. It is not generally known in advance how many agents will be needed to generate a given load; it depends on the nature of the load. Typically, testers start with a small number of agents and ramp up, adding agents as needed to achieve the desired load. Frequently, the available hardware, and the time involved in configuring the agents, is the main gate on an organization's ability to drive large-scale load tests. As a result, the infrastructure used for performance testing often sits idle between bursts of testing activity. These factors combine to make performance testing another ideal workload for cloud computing.

After first creating a virtual machine image that contains a properly set up and configured agent, it is relatively easy to provision additional instances of that image in the cloud as needed. The performance-testing workbench is then configured to drive each of those agents. This not only simplifies the setup of the test infrastructure but can also dramatically reduce the cost: instead of requiring a large capital expenditure to obtain the agent machines, they can be rented for a short period from an IaaS cloud.
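A sketch of that ramp-up pattern follows, again assuming EC2/boto3 as the IaaS provider; the agent image ID and instance type are placeholders.

# Hypothetical sketch: ramping up load-generation agents from a
# pre-built agent image, then releasing them when the burst is over.
import boto3

ec2 = boto3.resource("ec2")
AGENT_IMAGE = "ami-0agent00000000000"   # placeholder: image with the agent pre-installed

def provision_agents(count: int):
    """Launch a batch of identical agent VMs and wait for them to boot."""
    agents = ec2.create_instances(
        ImageId=AGENT_IMAGE,
        InstanceType="c5.large",
        MinCount=count,
        MaxCount=count,
    )
    for a in agents:
        a.wait_until_running()
    return agents

# Start small and ramp up until the target load is reached, then hand
# the agents' addresses to the performance-testing workbench.
agents = provision_agents(5)

# Relinquish the agents after the test burst so the charges stop:
for a in agents:
    a.terminate()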

There are some technical constraints to be aware of in this environment. For example, decisions about where to place the application under test relative to the cloud on which the agents run can have a dramatic impact on the test results. To include the latency of the network as part of the test run, do not place the system under test on the same cloud as the agents; the traffic would likely flow through high-bandwidth intra-cloud communication paths rather than over the Internet. On the other hand, to understand the maximum throughput of the system under test without regard to network bandwidth and latency, it may be wise to place the system under test on the same cloud as the agents.

Also, when using public cloud providers for the agents, some organizations may require the system under test to remain behind the enterprise firewall. This may require a virtual private network (VPN) so the agents can communicate with the system under test, which may introduce bandwidth constraints. In general, it is important to understand the interplay between the cloud, the network, and the application under test to ensure proper interpretation of the test results. Within those constraints, though, the elasticity of the cloud can be a powerful tool to reduce both the cost and complexity of performance testing. The same technique applies to other "bursty" testing tasks, such as automated functional or regression testing: agents can be dynamically provisioned as needed to perform the work and then relinquished when it completes.

Taking the Cloud Paradigm Further

Cloud computing helps organizations simplify the configuration, deployment, and testing of applications in enterprise environments. Though these steps drive down cost, improve quality, and speed time to market, the cloud computing paradigm can be taken much further. For example, the entire developer or tester practitioner suite, including the application lifecycle management tools, can be obtained through a cloud model. A fully cloud-based development organization can obtain its tools, share its work items and source code, build its applications, hand them off to be tested, and even have them provisioned into production without ever leaving the cloud. At a time when enterprises are demanding that their IT organizations deliver more value to the business with less expense, a carefully planned cloud implementation, starting with the development/test lifecycle, is worthy of serious consideration.

