Reports on Business Processes

Description
A business process or business method is a collection of related, structured activities or tasks that produce a specific service or product for a particular customer or customers. It can often be visualized with a flowchart as a sequence of activities with interleaving decision points, or with a Process Matrix as a sequence of activities with relevance rules based on the data in the process.

VIRTUAL SERVICE GRIDS: INTEGRATING IT WITH BUSINESS PROCESSES
CONTENTS
1) Introduction
2) History
3) What is GRID
4) Virtualization
5) Service Orientation
6) Uses of Virtual Service Oriented Grid
7) VSG Techniques to Modernize Data Centers
8) Benefits
9) Conclusion
10) References

Introduction
Scaling a business often involves OPM (other people's money), through partnerships or the issuing of stock through IPOs (initial public offerings). These relationships are carried out within a legal framework that took hundreds of years to develop. Scaling a computing system follows a similar approach, in the form of resource outsourcing: using other people's systems, or OPS. The use of OPS has a strong economic incentive: it does not make sense to spend millions of dollars on a large system that will see only occasional use. Even large-scale projects that require significant amounts of computing power usually start with small trial or development runs, with large runs few and far between. Because of the data gathering cycle, some projects reserve the largest and most spectacular runs for the end. A large system that lies idle during development would be a waste of capital.

The use of OPS is facilitated by the other two technologies covered in the book The Business Value of Virtual Service-Oriented Grids (published by Intel Press): virtualization and service orientation. Virtualization makes the sharing of a physical resource practical, whereas the application of service-oriented principles facilitates the reuse of those resources. Within an economic ecosystem, a group of people with skills in this area may decide to form a company that provides a grid service far more efficiently than a departmental grid can, where the staff tending it may have other jobs, lowering the overall cost to society of providing grid services. As an example, storage in company-hosted data centers may come to seem quaintly anachronistic. Storage will become a commodity, purchased by the terabyte, petabyte, or exabyte, depending on the most popular unit of the time, with an associated quality of service.

A Bit of History
It would be wonderful news if we could claim that virtualization, service orientation, and grids represent the latest and greatest in terms of emerging capabilities. Alas, we can't lay that claim. The best we can claim is that the trio is old wine in new bottles. For instance, let's look at virtualization. The first efforts at virtualization can be traced back at least five decades: to the virtual memory and machine virtualization research done at IBM in the early 1960s for the IBM 360/67, the demand paging research for the Atlas computer at the University of Manchester, and the segmented memory design of the Burroughs B5000, to list just a few of many examples. If we include assemblers and compilers as a form of virtualization, we can push the clock back by at least another decade. Compilers relieved computer programmers from the drudgery of using wire patches or writing programs in ones and zeros. A compiler presents an idealized, or should we say virtualized, logical view of a computer. For instance, real-life computers have architectural oddities imposed by limitations in the hardware design, such as memory regions that cannot be used, or registers that can be used only for certain purposes. A compiler presents a uniform view of memory to the programmer. The programmer does not need to worry about details of the computer architecture, such as whether the machine is stack or register oriented, details that may be irrelevant to the application under development.

Virtualization, Service Orientation, and Grids

One reason behind the increasing adoption of grids in the enterprise is the synergy with two powerful technology trends, namely, virtualization and service orientation. Let's explore each of these technologies and how they relate to each other.

What Is GRID
The origin of the term grid, as in grid computing, is shrouded in mystery and ambiguity. Because of the association of grids with utility computing, and the analogy of utility computing with electrical power systems, it is likely that the term was coined to capture the concept of an electrical power grid, but applied to computer systems.

An electrical power system consists of a number of transmission lines ending in bus bars. Bus bars may have generators, that is, power sources, attached to them, or they may carry electrical loads. The generic name for a bus bar is a node. The aggregation of transmission lines and nodes forms a network or mesh, albeit a very irregular and sparse one. (The set of reference lines in a map is also called a grid.) An electric power distribution system within a city is similar in structure to a transmission system that spans a state or even a country; the difference is that the distribution system is much more interconnected, because links run along every street rather than serving as intercity ties. Following this analogy, a grid computing system consists of a set of computers in a network, as illustrated in Figure 1. The computers in a grid are complete, functioning computers capable of working standalone.

Figure 1 - Structure of a Computing Grid

A cluster is a specialized kind of grid where the nodes are usually identical and co-located in the same room or building. The network may be a low-latency, high-bandwidth proprietary network. This definition of grid is recursive: for instance, a cluster within a grid may be represented as a single node.
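To make the recursive definition concrete, here is a minimal sketch in Python; the Node class and its fields are illustrative, not from the text. A node is either a standalone computer or a cluster that, viewed from outside, behaves as a single node:

    # A node is a standalone computer (empty members list) or a cluster of nodes
    # that presents itself to the rest of the grid as a single node.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Node:
        name: str
        cpus: int = 1
        members: List["Node"] = field(default_factory=list)  # empty for a standalone machine

        def capacity(self) -> int:
            """Total CPUs, counted recursively through nested clusters."""
            return self.cpus + sum(m.capacity() for m in self.members)

    # A cluster inside a grid is represented as a single node:
    cluster = Node("rack-cluster", cpus=0,
                   members=[Node(f"blade-{i}", cpus=8) for i in range(4)])
    grid = Node("campus-grid", cpus=0, members=[cluster, Node("workstation", cpus=4)])
    print(grid.capacity())  # 36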

Advantage of grid computing
A computing grid, reduced to its barest essentials, is a set of computing resources connected by a network. The computing resources comprise processing and storage capabilities. Networks are necessary because the computing resources are assumed to be distributed, within a room, across buildings, or even across cities and continents. The networks allow data to move across processing elements and between processing elements and storage. When it comes to pushing the envelope for performance, building a single, powerful computer becomes more expensive than building a group of smaller computers whose collective performance equals that of the single computer. Under these conditions it becomes cheaper to attain additional performance through replication. This dynamic takes place in multiple contexts. Examples:

1. In 2004 Intel found that a single-core successor to the Pentium® 4 processor would have run too hot and consumed too much power, at a time when power consumption was becoming an important factor for consumer acceptance.

2. To overcome this, the first generation of dual-core processors shipped in 2006, carrying two CPUs on one chip. Each CPU possesses a little less processing capability, but the combined performance of the two cores is larger than that of the prior-generation processor, and the power consumption is also smaller.

Virtualization

Virtualization uses the power inherent in a computer to emulate features beyond the computer's initial set of capabilities, up to and including complete machines, even machines of a different architecture.

A virtualized version of a machine requires billions of state changes and a significant amount of scoreboarding; that is, the host machine simulating a virtualized machine needs to remember the state of the virtualized machine at every step of the simulation. Fortunately, this scoreboarding is maintained by the host computer, and modern computers are good at this task. Virtualization is also justifiable in environments where the cost of the hardware is a small fraction of the delivered cost of the application, or in data centers where power or space constraints limit the number of physical servers that can be deployed.
Why We Need Virtualization

One of the first applications of virtualization was virtual memory, based on research done in the early 1960s. Virtualization allowed machines with limited physical memory to emulate a much larger virtual memory space. This is accomplished by keeping data from the virtual memory space in some other form of storage, such as a hard drive, and swapping it in and out of physical memory as it is used. One downside of virtual memory is that it is significantly slower than an equivalent physical memory scheme, as the sketch below illustrates.
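The following toy simulation, with assumed sizes and a deliberately simple FIFO eviction policy, illustrates the demand-paging idea: a small physical memory backed by a slower store, with pages loaded on access and the oldest resident page evicted when memory is full:

    # Toy demand-paging simulation: 3 frames of "RAM" backed by a larger "disk".
    from collections import OrderedDict

    PHYSICAL_FRAMES = 3          # pretend only 3 pages fit in physical memory
    ram = OrderedDict()          # page number -> contents (insertion order = FIFO)
    disk = {n: f"page-{n} data" for n in range(10)}  # the larger virtual space

    def access(page: int) -> str:
        if page in ram:                      # hit: the fast path
            return ram[page]
        if len(ram) >= PHYSICAL_FRAMES:      # page fault with full memory: evict oldest
            evicted, contents = ram.popitem(last=False)
            disk[evicted] = contents         # write the victim page back to disk
        ram[page] = disk[page]               # slow path: load the page from disk
        return ram[page]

    for p in [0, 1, 2, 0, 3, 1]:             # page 3 forces eviction of page 0
        access(p)
    print(list(ram))                          # [1, 2, 3]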

On the other hand, in spite of the slowdown, virtualization presents some undeniable operating advantages. The nodes in a grid are easier to manage if they are all identical, and they need to be only virtually identical, not physically identical.

Service Orientation
The notion of service orientation comes from the business world. Any business community or ecosystem is structured as a set of services. For instance, an automobile repair shop in turn uses services from power utility companies, telecommunications companies, accounting and legal firms, and so on. Services are composable: businesses providing a service use services provided by other businesses to build their own, through a process of integration or aggregation, as illustrated in Figure 2. Recursion is also depicted in Figure 2, with S4 supporting S1 but also making use of S1.

Figure 2 - Composite Service S1 with Supporting Services S2, S3, S4 and S1 (through Recursion)

IT applications and infrastructure expressly designed to function in and support this service business world are said to be service oriented. A service oriented architecture, or SOA, is any structured combination of services and technologies designed to support the service oriented model. Consensus has been building in the industry to align utility computing, at least for business applications, with the notion of service. A service in the generic sense is an abstraction for a business service, and a service is defined by its interface. From an interface perspective, it can be as simple as a credit card purchase authorization that takes an account number and a purchase amount and returns an authorization approval. Or it can be as complex as the processing of a mortgage loan or almost any other business transaction, all the way up in complexity to a corporate merger or transnational agreement. Service orientation first became known at the application level, and hence when we hear SOA we usually think of business applications. In fact, the effects are profound at every level of abstraction in a business.
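As a sketch of the simplest case mentioned above, the credit card authorization below is defined entirely by its interface: an account number and a purchase amount go in, an approval decision comes out. The function name, result type, and business rules are hypothetical placeholders, not any real payment API:

    # A service defined by its interface: callers see only inputs and outputs.
    from dataclasses import dataclass

    @dataclass
    class AuthResult:
        approved: bool
        reason: str = ""

    def authorize(account_number: str, amount: float) -> AuthResult:
        """Takes an account number and a purchase amount, returns an approval.
        A real implementation would sit behind this interface, invisible to callers."""
        if amount <= 0:
            return AuthResult(False, "invalid amount")
        if amount > 5_000:                   # placeholder business rule
            return AuthResult(False, "over limit")
        return AuthResult(True)

    print(authorize("4111-0000-0000-0000", 120.0))
    # AuthResult(approved=True, reason='')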

Virtualization + Service orientation + Grids = Virtual Service Oriented Grids
In a virtualized grid environment data is no longer bound to a particular machine, and hence protecting the data by locking down a single machine makes no sense. Data becomes disembodied; it is simply present when it is needed, where it is needed, and in a form appropriate to the device used to access it. Data might look to programs and users like a single entity, for instance a single file. In actuality the system will store, replicate, and migrate the data across multiple devices, while keeping up the illusion of a single logical entity.

Uses of virtual service oriented grids
1. A virtual service oriented grid environment will still be useful for processing home loans or performing credit card transactions.

2. The performance of these systems will be, for practical purposes, infinitely scalable: if the workload slows the system, there will be a way of adding more resources to bring response times back in line without hitting a wall.

3. New services can be put together by composing more primitive services using compatible protocols. If these primitive services are already available, the new services can be assembled almost automatically and in near real time.

4. The component services can be insourced or outsourced. The size or granularity of the component services is much smaller than that of a traditional outsourced application such as payroll. The negotiation or handshaking to bring in an outsourced component is automatic and machine driven, as opposed to the months of negotiation it would take a large company to contract for a payroll service.

5. The component services and the composite applications built from them are mutually interoperable.

In a virtual service oriented grid environment, the service components mentioned above may not be designed to behave as full-fledged applications, but rather are intended to be used as building blocks for applications. We will use the term servicelet, or microservice, to specifically denote such service building blocks (see the sketch below).
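The sketch below, with hypothetical servicelet names, shows how servicelets that speak a compatible protocol (here, a plain dictionary in and a plain dictionary out) can be composed into a larger service almost automatically, in the spirit of items 3 through 5 above:

    # Composing servicelets: each one is a small function with the same protocol,
    # so a composite service is just a chain of them.
    from typing import Callable, Dict

    Servicelet = Callable[[Dict], Dict]

    def credit_check(order: Dict) -> Dict:
        order["credit_ok"] = order["amount"] < 1_000   # placeholder rule
        return order

    def inventory_check(order: Dict) -> Dict:
        order["in_stock"] = True                       # stub: always in stock
        return order

    def compose(*steps: Servicelet) -> Servicelet:
        """Chain servicelets into one composite service; because they all share
        a protocol, assembly is near-automatic."""
        def composite(order: Dict) -> Dict:
            for step in steps:
                order = step(order)
            return order
        return composite

    process_order = compose(credit_check, inventory_check)
    print(process_order({"amount": 250}))
    # {'amount': 250, 'credit_ok': True, 'in_stock': True}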

An Industry Example
• Because of the expense associated with the negotiation and teardown of business applications, these applications today tend to have a long life. Teardown may be a complex process involving the removal of on-premises applications and equipment and negotiating a new contract with an alternative provider. A virtualized service environment will make dynamic behaviors more practical. Decision makers will be able to gather supporting data in a fraction of the time considered possible today. An application could more easily be built to address a specific question or support a short-duration campaign, providing increased business agility.

• Let's take the example of a food industry marketing analyst interested in performing a study on the purchase patterns of a certain line of products using business intelligence (BI) methods.

• The analyst licenses a customer loyalty database from a grocery chain company. The grocery chain is willing to grant such licenses to third parties as part of a strategy to create additional revenue streams from the intellectual property generated by its business operations.

• As the grocery chain example shows, virtualization and service orientation will lower the barriers for mature industries with a vision to rediscover the jewels of intellectual property they had all along, in the form of data and processes, and to find that they can enhance their revenue in the services marketplace.

Using Virtual Service Grid Techniques to Modernize Data Centers
Virtual Service Grid IT architecture and methodology include the following technologies and best practices:

1. Consolidated IT resources
2. Agile IT operations
3. Predictable high performance and scalability
4. Continuous availability

Not every IT department will adopt every grid computing technology or technique; however, many groups are already seeing dramatic benefits by using selected grid technologies and best practices.

1. Consolidating IT Resources
Consolidating IT resources can provide dramatic cost and energy savings. Forrester Research estimates that in data centers today, the average server utilization is only about 30 percent. Considering how many servers an enterprise can have, that inefficiency is staggering. Because application usage varies greatly by time of day or year, it provides an opportunity to apply grid techniques for a combination of better management, utilization, and overall efficiency. IT departments are significant users of electricity, so they must also consider energy costs in their data center operations. When metrics such as efficiency and operating margins are scrutinized, IDC estimates that power, cooling, and other management costs account for 70 percent of a server's lifetime cost. Many organizations are now putting energy efficiency and "green computing" initiatives into their buying criteria for technology components. With the power and space optimization that comes from consolidating resources into a grid infrastructure, organizations can have a greener data center. As organizations centralize and consolidate servers and storage, their overall server and storage utilization increases. The IT staff no longer needs to overprovision hardware and can improve overall energy efficiency. Grid computing enables the sharing of IT resources and the consolidation of servers, storage, applications, and even entire data centers; the results include reduced costs and lower power, cooling, and space requirements. Server virtualization and clustering are two key grid computing techniques that enable this sharing: they can make one resource look like many, and many resources look like one.
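A back-of-the-envelope calculation suggests the size of the consolidation opportunity implied by the 30 percent utilization figure. The fleet size and target utilization below are assumed purely for illustration:

    # Rough consolidation estimate: how many well-utilized hosts could absorb
    # a fleet of underused servers?
    N_SERVERS = 500             # assumed existing physical servers
    AVG_UTILIZATION = 0.30      # Forrester's average utilization estimate
    TARGET_UTILIZATION = 0.70   # assumed target, leaving headroom for peaks

    # Total useful work, in "fully busy server" equivalents:
    busy_equivalents = N_SERVERS * AVG_UTILIZATION             # 150.0
    hosts_needed = -(-busy_equivalents // TARGET_UTILIZATION)  # ceiling division
    print(f"{N_SERVERS} servers -> about {int(hosts_needed)} consolidated hosts")
    # 500 servers -> about 215 consolidated hosts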

Consolidating with Virtual Servers
Many customers are using hypervisor-based virtual machines (VMs) to consolidate multiple applications onto fewer centrally managed shared servers. A virtual machine (or virtual server) is software that simulates the operation of computer hardware and allows an application to run on it as if it were a physical computer. The advantage of this approach is that many virtual machines can run on a single physical computer, allowing the consolidation of multiple small servers onto one larger server. This approach helps establish a standardized computing environment in which to run applications, middleware servers, database servers, and storage servers. For example, the University of Massachusetts serves more than 60,000 students and 16,000 faculty and staff on five campuses. Previously, the University had 500 servers running at 20 percent capacity, except during the peak period of fall registration. Using VMs, the University of Massachusetts consolidated its 500 servers to fewer than 300 servers. The lower investment in hardware made possible by VMs will reduce the University's operational expenses.

Consolidating with Server and Storage Clustering

In addition to server virtualization, Oracle also offers server and storage clustering to allow the consolidation of even the largest application environments, including those that may span multiple servers. Clustering makes a group of physical computers operate as one, providing improved performance, availability, scalability, and cost. Oracle offers comprehensive clustering capabilities at the middleware, database, and storage layers. Server virtualization and clustering are complementary and can even work together on the same physical machine. For example, Oracle Database Real Application Clusters (Oracle RAC) can run within an Oracle VM environment to provide more flexibility and greater efficiency. With Oracle VM, capacity can be adjusted with a finer granularity, allowing development and test environments to share machines with production environments. Smaller applications can be made fault tolerant by running in VM environments on multiple host computers.

2. Keeping IT Operations Agile
Managing a grid computing system requires a new breed of agile, efficient systems management software that can provide rapid resource provisioning, real-time visibility into end-user service levels, proactive monitoring, and diagnostics. As the administrative workload associated with grid computing grows and evolves, the new systems management software must be able to automate administrative tasks so that IT staff can manage the growing complexity. In addition, the systems management software must be able to work within the underlying complexity of hardware and software infrastructures, and be able to configure and modify those infrastructures to meet the dynamic needs of the business. Manual management of grid computing infrastructures is not economically feasible. However, the comprehensive functionality and grid automation provided by Oracle Enterprise Manager make managing the grid possible. Oracle Enterprise Manager provides top-down, end-to-end application management with broad coverage across Oracle databases, middleware, and applications.

3. Delivering Predictable High Performance and Scalability
Recent trends toward more customer-facing Web applications, cloud computing, and service oriented architecture (SOA) are driving the need for predictable high performance and scalability.

• Customer-facing Web applications have significantly more users than internal applications and more-challenging requirements for response time, scalability, and availability.

• Cloud computing service providers often have service-level commitments for performance and availability. They must be able to scale quickly as new clients are added and workload usage peaks.

• SOA enables disparate applications and data sources to be integrated into loosely coupled, composite applications. As more and more applications are exposed as Web services by SOA, other programs are quickly consuming those services and creating greater, often unpredictable workloads. This is especially true if Web services are exposed outside the organization's firewall. Such exposure can put a strain on Web services and create spikes in response time and bottlenecks in throughput.

Grid computing leverages clustering and virtualization technologies at all layers (middleware, database, and storage) to deliver predictable high performance and scalability for applications. Enterprise grid computing delivers consistent high performance as workloads increase from initial rollout to full-scale deployment. In addition, grid architecture provides the ability to scale all levels of the technology stack by using clustering and virtualization technologies to add storage, network, and computing capacity as needed. Capacity can be added in smaller, less-expensive increments at any time, so organizations can take advantage of lower pricing opportunities.

4. Enabling Continuous Availability

Oracle continues to be at the forefront of developing high availability products and practices. From server failover with Oracle RAC and Oracle Application Server, to Oracle Active Data Guard and data replication, Oracle provides IT organizations with a comprehensive portfolio of solutions to keep the data center—and the business—running smoothly.

Server Failover
Server failover has been available for many years from both hardware and software manufacturers. The protection offered by a successful failover of a server is critical, and yet the hardware and software costs of setting up a standby server, used only if disaster strikes, can be prohibitive. With grid computing, standby resources can be used as active resources, resulting in higher utilization and lower costs. Grid computing enables continuous availability with replication, automatic failover, and disaster protection. Another consideration in server failover is failover time. Some applications are so critical that the business cannot afford to be without them even for a few minutes, while other applications can tolerate some interruption of operation. As a pioneer in server-failover techniques, Oracle offers automatic failover capabilities for several server types. For example, Oracle Database with Oracle RAC, Oracle WebLogic Server, Oracle Tuxedo, and Oracle Coherence clusters can withstand failures of several servers within a cluster and still remain in operation. IT departments simply remove the failed servers from service, repair or replace them, and then add them back to the server grid. Automatic migration and failover of services (or whole servers), along with load balancing, workload management, and overload protection, ensure that mission-critical applications stay up and running.
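A minimal, generic sketch of the failover idea follows (it is not Oracle's implementation; all names are illustrative). Every node serves live traffic rather than sitting idle as a cold standby, and a client that hits a failed node simply moves on to the next one:

    # Generic client-side failover: try each active replica until one succeeds.
    DOWN = {"node-a"}                          # simulate a failed node

    SERVERS = ["node-a", "node-b", "node-c"]   # all nodes actively serve requests

    def call(server: str, request: str) -> str:
        if server in DOWN:
            raise ConnectionError(f"{server} is down")
        return f"{server} handled {request!r}"

    def call_with_failover(request: str) -> str:
        last_error = None
        for server in SERVERS:                 # on failure, fail over to the next node
            try:
                return call(server, request)
            except ConnectionError as err:
                last_error = err               # skip the failed node and continue
        raise RuntimeError("all servers failed") from last_error

    print(call_with_failover("SELECT 1"))      # node-b answers after node-a fails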

Disaster Protection
Even clusters cannot survive a complete data center failure caused by natural disasters such as fires and floods. In these cases, failover to a remote location is required. An enterprise grid can be designed to encompass multiple locations, dynamically shifting workloads to other locations for the highest reliability. Oracle Active Data Guard provides the ability to create up-to-date replicas of the production database.

The Benefits of Virtual Service Grids in the IT Sector
1. Responding Quickly to Volatile Business Needs
Businesses today operate in an unpredictable, global environment. Staying on top of business demands, competitive threats, supply chain risks, and regulatory requirements is increasingly challenging. Businesses expect their IT groups to “turn on a dime” and be able to, for example, change a pricing model to beat a competitor, adjust the order management process to accommodate new regulatory requirements, and integrate acquired companies. With an underlying grid infrastructure, IT has the agility to respond quickly to these types of changing business needs.

2. Responding to Dynamic Workloads in Real Time
Most of today's applications are tied to specific software and hardware silos, limiting their ability to adapt to changing workloads. This costly and inefficient use of IT resources means that IT departments must overprovision their hardware so that each application can handle peak or worst-case workload scenarios. Grid computing lets IT professionals dynamically allocate and deallocate IT resources as needed, providing much better responsiveness to workloads that change on a global scale.
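The toy autoscaler below, with illustrative thresholds and capacities chosen for the example, captures the idea of allocating and deallocating resources as the workload changes, rather than provisioning each silo for its worst case:

    # Toy autoscaler: grow the pool before saturation, shrink it when idle.
    def rescale(servers: int, load: float, per_server: float = 100.0) -> int:
        """Return the new server count for the observed load (requests/sec)."""
        utilization = load / (servers * per_server)
        if utilization > 0.80:                 # scale out before saturation
            servers += 1
        elif utilization < 0.30 and servers > 1:
            servers -= 1                       # release idle capacity back to the grid
        return servers

    servers = 4
    for load in [120.0, 90.0, 400.0, 700.0, 650.0]:   # a fluctuating workload
        servers = rescale(servers, load)
        print(f"load={load:6.1f} req/s -> {servers} servers")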

3. Providing Predictable Service Levels
Through the use of service-level agreements (SLAs), organizations can tie business requirements to IT architecture to get demonstrable metrics and proactive monitoring and maintenance. This encourages a shared-service-bureau approach to IT, with the focus on measuring and meeting higher service levels and better aligning IT and business goals. A grid-based architecture eliminates single sources of failure and provides powerful, high-availability capabilities throughout the entire software stack, protection for valuable information assets, and business continuity. It lets IT groups eliminate expensive systems administration overhead, costly integration projects, and runaway budgets.

4. Reducing Costs with Improved Efficiency and Smarter Capacity Planning

Grid computing practices focus on operational efficiency and predictability. Easier grid workload management and resource provisioning put more power in the hands of the IT staff, letting them maintain current staffing levels even as computing demands skyrocket. With a new generation of server virtualization and clustering capabilities from Oracle, IT departments no longer have to overprovision to meet worst-case scenarios during peak periods. And because computing resources can be applied incrementally when needed, organizations enjoy much higher computing and storage capacity utilization at a reduced cost.

Conclusion
Oracle first introduced enterprise grid computing in 2003. The state-of-the-art technologies and new database and middleware capabilities helped change the way IT departments operate. At the time, data center projects such as server consolidation, SOA development, space and power optimization, and large-scale implementations of rack-mounted Linux servers seemed unrelated. We can now see that, taken together, these techniques can be described as a "grid computing approach to data center modernization." The customer examples in this white paper illustrate how the benefits derived from these techniques can be compounded. For example, server and storage consolidation increases utilization levels, which allows IT departments to save energy, reduce systems management costs, and get a better return on their hardware investments. Oracle has introduced a second generation of grid computing technology that builds on its strong foundation of scalable, fault-tolerant database and middleware clusters; virtualized computing and storage resources; and highly automated, end-to-end systems management. Today, with more than 10,000 Oracle customers deploying some level of grid computing technology, Oracle continues to lead the software industry in its commitment to grid computing solutions, products, and practices.


