ECR
Efficient Consumer Response (ECR) is a joint trade and industry body that works to make the grocery sector as a whole more responsive to consumer demand and to remove unnecessary costs from the supply chain. The ECR movement, which began in the mid-nineties, was characterized by the emergence of new principles of collaborative management along the supply chain. It was understood that companies can serve consumers better, faster and at less cost by working together with trading partners. Dramatic advances in information technology, growing competition, global business structures and consumer demand focused on better choice, service convenience, quality, freshness and safety made it apparent that a fundamental reconsideration of the most effective way of delivering the right products to consumers at the right price was needed. Non-standardized operational practices and the rigid separation of the traditional roles of manufacturer and retailer threatened to block the supply chain unnecessarily and failed to exploit the synergies offered by powerful new information technologies and planning tools. In other words, ECR allows companies to seek a competitive advantage by demonstrating their superior ability to work with trading partners to add value for the consumer. There are four focus areas under ECR: demand management, supply management, enablers and integrators, which are intended to be addressed as an integrated set. These form the basis of the ECR Global Scorecard. To better serve the consumer, ECR has set out to invert the traditional model and break down non-productive barriers. The impacts have been extensive and continue to resonate across the industry.
ECR Europe was launched in 1994. With its headquarters in Brussels, the organization works in close cooperation with national ECR initiatives in most European countries. Participation in projects at European and national levels is open to large and small companies in the grocery and Fast Moving Consumer Goods sectors, including retailers, wholesalers, manufacturers, suppliers, brokers and third-party service providers such as logistics operators. Every year ECR Europe organizes projects in which companies from all over Europe explore new areas of working together to fulfill consumer wishes better, faster and at lower cost, or deepen existing experience. The results of these projects are shared with a wide audience through publications and an annual ECR Europe conference that attracts thousands of top managers from all over the world.
The Hawthorne Experiments :
The deviation from rule-making at a higher level was documented for the first time in the Hawthorne studies (1924-1932) and called informal organization. At first this discovery was ignored and dismissed as the product of avoidable errors, until it finally had to be recognized that these unwritten laws of everyday working life often had more influence on the fate of the enterprise than those conceived on the organizational charts of the executive level. Numerous empirical studies in sociological organization research followed, proving this ever more clearly, particularly during the Human Relations Movement. It is important to analyze informal structures within an enterprise in order to make use of positive innovations, but also to be able to do away with bad habits that have developed over time. The Hawthorne effect is a form of reactivity. The term was coined in 1955 by Henry A. Landsberger when analyzing older experiments conducted from 1924 to 1932 at the Hawthorne Works (outside Chicago). Landsberger defined the Hawthorne effect as a short-term improvement caused by observing worker performance. Earlier researchers had concluded that the short-term improvement was caused by teamwork, when workers saw themselves as part of a study group or team. Others have broadened the definition to mean that people's behavior and performance change following any new or increased attention. Hence, the term Hawthorne effect no longer has a specific definition. The Hawthorne studies have had a dramatic effect on management in organizations and on how people react to different situations. Although research on workplace lighting formed the basis of the Hawthorne effect, other changes such as maintaining clean work stations, clearing floors of obstacles, and even relocating workstations also resulted in increased productivity for short periods of time. Thus the term is used to identify any type of short-lived increase in productivity. In short, people will be more productive when appreciated or when watched. Like the Hawthorne effect, the definition of the Hawthorne experiments also varies. Most industrial/occupational psychology and organizational behavior textbooks refer to the illumination studies, and usually to the relay assembly test room experiments and the bank wiring room experiments. Only occasionally are the rest of the studies mentioned. Illumination study: lighting intensity was altered to examine its effect on worker productivity. The findings were not significant. It seemed as if the workers tried harder when the lights went dim, just because they knew that they were in an experiment. This led to the idea of the Hawthorne effect: that people behave differently when they are being watched.
Relay assembly experiments: The researchers wanted to identify how other variables could affect productivity. They chose two women as test subjects and asked them to choose four other workers to join the test group. Together the women worked in a separate room over the course of five years (1927-1932) assembling telephone relays. Output was measured mechanically by counting how many finished relays each worker dropped down a chute. This measuring began in secret two weeks before the women were moved to an experiment room and continued throughout the study. In the experiment room they had a supervisor who discussed changes with them and at times used their suggestions. The researchers then spent five years measuring how different variables affected the group's and individuals' productivity. Some of the variables were:
• changing the pay rules so that the group was paid for overall group production, not individual production
• giving two 5-minute breaks (after a discussion with them on the best length of time), and then changing to two 10-minute breaks (not their preference); productivity increased, but when they received six 5-minute rests, they disliked it and reduced output
• providing food during the breaks
• shortening the day by 30 minutes (output went up); shortening it more (output per hour went up, but overall output decreased); returning to the earlier condition (where output peaked)
Changing a variable usually increased productivity, even if the variable was just a change back to the original condition. However, this has been attributed to the natural tendency of human beings to adapt to their environment without knowing the objective of the experiment taking place. Researchers concluded that the workers worked harder because they thought that they were being monitored individually. They hypothesized that choosing one's own coworkers, working as a group, being treated as special (as evidenced by working in a separate room), and having a sympathetic supervisor were the real reasons for the productivity increase. One interpretation, mainly due to Mayo, was that "the six individuals became a team and the team gave itself wholeheartedly and spontaneously to cooperation in the experiment."
NISG :
The National Institute for Smart Government (NISG) is a not-for-profit company incorporated in 2002 by the Government of India and NASSCOM, with its head office at Hyderabad, India. The Government of India is committed to e-Government in order to make India more competitive in the new global economy. Having a highly developed public service that is capable of delivering e-Government services to customers is an essential part of that strategy. The vision of NISG is "to establish itself as an institution of excellence in e-Governance and to leverage private sector resources through a Public-Private-Partnership mode in establishing eIndia." The mission of NISG is "to facilitate the application of public and private resources to e-Governance in the areas of architecture, consultancy and training."
SWAGAT :
State-Wide Attention on Public Grievance by Application of Technology (SWAGAT) is an innovative concept that enables direct communication between citizens and the Chief Minister. In Gandhinagar, the fourth Thursday of every month is a SWAGAT day on which the highest office in the administration attends to the grievances of the man on the street. Grievances are logged, transmitted and made available online to the officers concerned, who have to reply within 3 to 4 hours. The departments concerned then have to be ready with their replies before 3 p.m., when the Chief Minister holds video conferences with all the districts concerned. Applicants are called one by one and the Chief Minister examines each complaint in detail. The information sent by the department is also reviewed online in the presence of the complainant and the Collector, District Development Officer, Superintendent of Police and other officials concerned. Attempts are made to offer a fair and acceptable solution on the same day, and no applicant has ever left without a firm reply to his grievance. The record is then preserved in the SWAGAT database and a separate log is maintained for each case. Owing to this innovative use of technology, which injects accountability into the government machinery, international institutions such as the Commonwealth Telecommunications Organisation and the University of Manchester have considered SWAGAT an excellent model of e-transparency.
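As an illustration of the workflow just described, the sketch below models a single grievance record with its per-case log and the 3-4 hour reply window. It is a minimal hypothetical Python sketch, not the actual SWAGAT application; the class, field and function names are assumptions.

```python
# Illustrative only: a minimal grievance record of the kind a SWAGAT-style workflow
# implies. All names (Grievance, is_reply_overdue, the case-number format) are
# hypothetical and not taken from the real system.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

REPLY_WINDOW = timedelta(hours=4)   # departments must respond within 3-4 hours

@dataclass
class Grievance:
    case_id: str
    applicant: str
    district: str
    department: str
    description: str
    logged_at: datetime
    department_reply: str | None = None
    history: list[str] = field(default_factory=list)   # separate log kept per case

    def log(self, event: str) -> None:
        self.history.append(f"{datetime.now().isoformat()} {event}")

    def is_reply_overdue(self, now: datetime) -> bool:
        return self.department_reply is None and now - self.logged_at > REPLY_WINDOW

# Usage: flag cases still unanswered before the 3 p.m. video conference.
g = Grievance("SWG-0001", "A. Citizen", "Ahmedabad", "Revenue",
              "Pending land record correction", datetime(2024, 1, 25, 10, 0))
g.log("Logged at SWAGAT counter and forwarded to the district collector")
print(g.is_reply_overdue(datetime(2024, 1, 25, 15, 0)))   # True: > 4 hours, no reply
```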
Integrated Workflow and Document Management System (IWDMS)
e-Governance can never be effective without integrating citizen-centric delivery processes with the back-end processes within the Government. Over 80% of the day-to-day work executed by Government departments is routine in nature and workflow driven. The IWDMS project was conceptualized by the Department of Science & Technology, Government of Gujarat, to automate this day-to-day work and improve accountability, transparency and effectiveness in Government administration. The project has been implemented by the Government of Gujarat (GoG) to improve accountability, transparency and effectiveness by automating government functions and processes. GoG stressed change management and a top-down approach for successful implementation of the IWDMS project. IWDMS provides document management, workflow management, a collaborative environment and knowledge management in an integrated fashion and delivers an electronic workplace that results in productivity improvement in Government. The strategy adopted by the Government for implementing the IWDMS project was to follow a top-down approach. IWDMS was therefore implemented first in the Secretariat, to set an example and ensure continuous support from top leadership at all levels. Implementation at the Secretariat was considered critical because it encompasses vital areas including policy formulation, allocation of budgetary resources, and monitoring and supervision; moreover, the Secretariat is directly accountable to the Legislative Assembly. To ensure smooth implementation of the project across departments, the implementation strategy comprised two phases. With 12 selected departments in Phase I, kick-off presentations were carried out for all employees in those departments. These presentations highlighted the current scenario of document processing in a sample department and how IWDMS services would eliminate the repetition of processes. Presentations and training programmes have been conducted across all departments and are in progress in the Heads of Departments (HoDs). The Government also gave priority to rolling out core and common applications, setting the stage for the roll-out of department-specific applications thereafter. The project initially covered all employees of the Government of Gujarat at New Sachivalaya, Gandhinagar. It is now being extended to employees of the HoDs of the departments in Gandhinagar and Ahmedabad, and will be extended to all other HoDs in Gujarat in a phased manner, thus covering all Government offices/agencies with a single file management system.
Vision: "To provide better service to citizens by improving the efficiency of Government employees by automating and streamlining the Government processes"
Objectives: ensure accountable, transparent and effective administration; use IT as an enabler for an efficient workplace; create an automated office management system; move towards a less-paper office by automating routine tasks; enable prioritization of work; and enable policy-based processing.
IWDMS has eliminated several steps required from the inwarding of a physical correspondence to the creation of a file from it. IWDMS provides a central numbering system for all correspondences and files, which eliminates the registering of correspondences and files at each step for traceability and hence reduces the number of steps. All registers required to be kept are generated automatically by the system. Electronic drafts attached to files created in IWDMS can be edited at each level of submission while a record is kept of all the changes made by users at various levels. Moreover, the time required to transport a physical file by clerks and peons is reduced to a fraction of a second.
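The central numbering and automatically generated registers described above lend themselves to a simple illustration. The sketch below is a hypothetical, minimal model rather than the actual IWDMS software: it shows one registry issuing a single number per inward correspondence and recording every movement in a register, so a file never has to be re-registered at each desk. The numbering format and class names are invented.

```python
# Minimal sketch (not the real IWDMS implementation) of central numbering plus an
# auto-generated movement register for traceability.
import itertools
from datetime import datetime

class CentralRegistry:
    def __init__(self) -> None:
        self._seq = itertools.count(1)          # one sequence for the whole office
        self.register: list[dict] = []          # auto-generated register entries

    def inward(self, subject: str, department: str) -> str:
        number = f"IWD/{datetime.now().year}/{next(self._seq):06d}"   # hypothetical format
        self.register.append({"number": number, "event": "inward",
                              "subject": subject, "department": department,
                              "at": datetime.now().isoformat()})
        return number

    def move(self, number: str, from_desk: str, to_desk: str) -> None:
        # One register entry replaces re-registering the file at every desk.
        self.register.append({"number": number, "event": "moved",
                              "from": from_desk, "to": to_desk,
                              "at": datetime.now().isoformat()})

registry = CentralRegistry()
n = registry.inward("Budget proposal for road works", "Roads & Buildings")
registry.move(n, "Section Officer", "Deputy Secretary")
print(len(registry.register))   # 2 entries: one inward, one movement
```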
NIXI :
The National Internet Exchange of India (NIXI) is a non-profit company established in 2003 to provide neutral Internet Exchange Point services in the country. It was established with the Internet Service Providers Association of India (ISPAI) to become the operational meeting point of Internet service providers (ISPs) in India. Its main purpose is to facilitate the exchange of domestic Internet traffic between peering ISP members, rather than routing it through servers in the US or elsewhere. This enables more efficient use of international bandwidth and saves foreign exchange. It also improves the quality of service for the customers of member ISPs by avoiding multiple international hops and thus lowering delays. NIXI currently has seven operational nodes at centers in Delhi (Noida), Mumbai (Vashi), Chennai, Kolkata, Bangalore, Hyderabad and Ahmedabad. The NIXI services consist of:
1. Access to the layer-2 switched medium (Fast Ethernet).
2. One IP address on the LAN with a reverse DNS mapping in the "nixi.in" domain (illustrated in the sketch below).
3. 24x7 watch service, 24x7 hardware maintenance and a 24x7 helpdesk service on the NIXI switch.
4. Adding a new route within 2 working days of receiving a request, and handling complaints and problems within 3 hours of receiving them during a normal working day from 9.00 am to 5.30 pm; outside these hours, 6-8 hours will be required.
5. These services apply when the peering ISP enters into a membership agreement with NIXI and adheres to the governing rules and regulations, including housing its routers in the NIXI locations.
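As a small illustration of point 2 above, the hedged sketch below checks a reverse DNS (PTR) mapping from Python. The IP address used is a documentation placeholder, not a real NIXI allocation, and the lookup simply reports failure if no mapping exists.

```python
# Illustrative only: how a member ISP might confirm the reverse DNS mapping the
# exchange assigns for its peering address.
import socket

def reverse_dns(ip: str) -> str | None:
    try:
        hostname, _aliases, _addresses = socket.gethostbyaddr(ip)
        return hostname
    except OSError:
        # No PTR record, or no DNS reachable from this machine.
        return None

peer_ip = "192.0.2.10"               # placeholder address (RFC 5737 documentation range)
name = reverse_dns(peer_ip)
print(name if name else f"no PTR record found for {peer_ip}")
```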
NAS :
Network-attached storage (NAS) is file-level computer data storage connected to a computer network, providing data access to heterogeneous network clients. A NAS unit is essentially a self-contained computer connected to a network, with the sole purpose of supplying file-based data storage services to other devices on the network. The operating system and other software on the NAS unit provide data storage, file systems, access to files, and the management of these functions. The unit is not designed to carry out general-purpose computing tasks, although it may technically be possible to run other software on it. NAS units usually do not have a keyboard or display, and are controlled and configured over the network, often by connecting a browser to their network address. The alternative to NAS storage on a network is to use a computer as a file server. In its most basic form a dedicated file server is no more than a NAS unit with a keyboard and display and an operating system which, while optimised for providing storage services, can run other tasks; however, file servers are increasingly used to supply other functionality, such as database services, email services, and so on. A general-purpose operating system is not needed on a NAS device, and often minimal-functionality or stripped-down operating systems are used. For example, FreeNAS, which is free/open-source NAS software designed for use on standard computer hardware, is just a version of FreeBSD with all functionality not related to data storage stripped out. NASLite is a highly optimized Linux distribution running from a floppy disk for the sole purpose of serving as a NAS. Likewise, NexentaStor is based upon the core of NexentaOS, a free/open-source hybrid operating system with an OpenSolaris core and a GNU user environment. NAS systems contain one or more hard disks, often arranged into logical, redundant storage containers or RAID arrays (redundant arrays of inexpensive/independent disks). NAS removes the responsibility of file serving from other servers on the network. NAS uses file-based protocols such as NFS (popular on UNIX systems) or SMB/CIFS (Server Message Block/Common Internet File System, used with MS Windows systems). NAS units rarely limit clients to a single protocol. NAS provides both storage and a filesystem. This is often contrasted with SAN (Storage Area Network), which provides only block-based storage and leaves filesystem concerns on the "client" side. SAN protocols include SCSI, Fibre Channel, iSCSI, ATA over Ethernet (AoE) and HyperSCSI. Despite these differences, SAN and NAS are not mutually exclusive and may be combined in one solution: a SAN-NAS hybrid. The boundaries between NAS and SAN systems are starting to overlap, with some products making the obvious next evolution and offering both file-level protocols (NAS) and block-level protocols (SAN) from the same system. However, a SAN device is usually served through NAS as one large flat file, not as a true filesystem. An example of this is Openfiler, a free software product running on Linux. NAS is useful for more than just general centralized storage provided to client computers in environments with large amounts of data. NAS can enable simpler and lower-cost systems such as load-balanced and fault-tolerant email and web server systems by providing storage services. A potential emerging market for NAS is the consumer market, where there are large amounts of multimedia data. Such consumer market appliances are now commonly available. Unlike their rack-mounted counterparts, they are generally packaged in smaller form factors. The price of NAS appliances has plummeted in recent years, offering flexible network-based storage to the home consumer market for little more than the cost of a regular USB or FireWire external hard disk. Many of these home consumer devices are built around ARM, PowerPC or MIPS processors running an embedded Linux operating system. Examples include Buffalo's TeraStation and the Linksys NSLU2. More recently, home NAS devices have incorporated support for the Universal Plug and Play protocol, enabling them to serve the growing number of networked home media players.
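To make the "file-level" idea concrete, here is a minimal sketch of a client reading from a NAS share. It assumes the share has already been mounted by the operating system at a hypothetical path; the client then uses ordinary file APIs, and the OS translates them into NFS or SMB/CIFS requests behind the scenes.

```python
# Sketch only: file-level access to a NAS share through the mounted filesystem.
# The mount point is an assumed example path, not a real device on this machine.
from pathlib import Path

share = Path("/mnt/nas_share")   # e.g. mounted with: mount -t nfs nas:/export /mnt/nas_share

def list_reports(root: Path) -> list[str]:
    # The OS turns these directory and read operations into NFS/SMB requests.
    return sorted(p.name for p in root.glob("reports/*.csv"))

if share.is_dir():
    print(list_reports(share))
else:
    print("share not mounted on this machine")
```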
SAN :
A storage area network (SAN) is an architecture to attach remote computer storage devices (such as disk arrays, tape libraries, and optical jukeboxes) to servers in such a way that the devices appear as locally attached to the operating system. Although the cost and complexity of SANs are dropping, they are still uncommon outside larger enterprises. Network attached storage (NAS), in contrast to SAN, uses file-based protocols such as NFS or SMB/CIFS where it is clear that the storage is remote, and computers request a portion of an abstract file rather than a disk block.
Historically, data centers first created "islands" of SCSI disk arrays. Each island was dedicated to an application, and visible as a number of "virtual hard drives" (i.e. LUNs). Essentially, a SAN connects storage islands together using a high-speed network, thus allowing all applications to access all disks. Operating systems still view a SAN as a collection of LUNs, and usually maintain their own file systems on them. These local file systems, which cannot be shared among multiple operating systems/hosts, are the most reliable and most widely used. If two independent local file systems resided on a shared LUN, they would be unaware of this fact, would have no means of cache synchronization and would eventually corrupt each other. Thus, sharing data between computers through a SAN requires advanced solutions, such as SAN file systems or clustered computing. Despite such issues, SANs help to increase storage capacity utilization, since multiple servers share the storage space on the disk arrays. The common application of a SAN is for transactionally accessed data that requires high-speed block-level access to the hard drives, such as email servers, databases, and high-usage file servers. In contrast, NAS allows many computers to access the same file system over the network and synchronizes their accesses. Lately, the introduction of NAS heads has allowed easy conversion of SAN storage to NAS. Sharing storage usually simplifies storage administration and adds flexibility, since cables and storage devices do not have to be physically moved to shift storage from one server to another. Other benefits include the ability to allow servers to boot from the SAN itself. This allows for quick and easy replacement of faulty servers, since the SAN can be reconfigured so that a replacement server can use the LUN of the faulty server. This process can take as little as half an hour and is a relatively new idea being pioneered in newer data centers. There are a number of emerging products designed to facilitate and speed this up still further. Brocade, for example, offers an Application Resource Manager product which automatically provisions servers to boot off a SAN, with typical-case load times measured in minutes. While this area of technology is still new, many view it as the future of the enterprise datacenter. SANs also tend to enable more effective disaster recovery processes. A SAN could span a distant location containing a secondary storage array, enabling storage replication implemented by disk array controllers, by server software, or by specialized SAN devices. Since IP WANs are often the least costly method of long-distance transport, the Fibre Channel over IP (FCIP) and iSCSI protocols have been developed to allow SAN extension over IP networks. The traditional physical SCSI layer could only support a few meters of distance - not nearly enough to ensure business continuance in a disaster. Demand for this SAN application increased dramatically after the September 11th attacks in the United States and with the increased regulatory requirements associated with Sarbanes-Oxley and similar legislation. The economic consolidation of disk arrays has accelerated the advancement of several features, including I/O caching, snapshotting, and volume cloning (Business Continuance Volumes or BCVs).
Hybrid using DAS, NAS and SAN technologies.
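For contrast with the NAS sketch above, the following hedged example treats a SAN LUN the way the text describes it: as a locally attached block device that the host reads in fixed-size blocks (and usually formats with its own filesystem). The device path and block size are assumptions for illustration, and reading raw devices normally requires root privileges.

```python
# Sketch only: block-level access to a LUN presented by a SAN. On Linux such a LUN
# typically appears as a device like /dev/sdb once attached over Fibre Channel or
# iSCSI; the path here is hypothetical.
import os

DEVICE = "/dev/sdb"       # hypothetical LUN presented by the SAN
BLOCK_SIZE = 4096         # assumed block size for illustration

def read_block(device: str, block_number: int) -> bytes:
    fd = os.open(device, os.O_RDONLY)
    try:
        # Read one block at the requested offset; the host, not the SAN,
        # decides how these blocks are organized into a filesystem.
        return os.pread(fd, BLOCK_SIZE, block_number * BLOCK_SIZE)
    finally:
        os.close(fd)

if os.path.exists(DEVICE):
    try:
        print(f"read {len(read_block(DEVICE, 0))} bytes from block 0")
    except PermissionError:
        print("raw block devices normally require root privileges to read")
else:
    print("no such block device on this host")
```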
Windows Azure :
Microsoft's Azure Services Platform is a cloud platform (cloud computing platform as a service) offering that "provides a wide range of internet services that can be consumed from both on-premises environments and the internet". It is significant in that it is Microsoft's first step into cloud computing, following the recent launch of the Microsoft Online Services offering. Microsoft's push to compete directly in the software-as-a-service model offered by Google's Google App Engine and Amazon's EC2 is increasingly seen by them and others as an important next step in application development. In this model, software does not have to be installed and managed on the user's computer, and files and folders can be accessed from the web. The Azure Services Platform uses a specialized operating system, Windows Azure, to run its "fabric layer" — a cluster hosted at Microsoft's datacenters that manages the computing and storage resources of the computers and provisions those resources (or a subset of them) to applications running on top of Windows Azure. Windows Azure, which was known as "Red Dog" during its development, has been described as a "cloud layer" on top of a number of Windows Server systems, which use Windows Server 2008 and Hyper-V to provide virtualization of services.
The platform includes five services — Live Services, SQL Services, .NET Services, SharePoint Services and Dynamics CRM Services — which developers can use to build the applications that will run in the cloud. A client library, in managed code, and associated tools are also provided for developing cloud applications in Visual Studio. Scaling and reliability are controlled by the Windows Azure Fabric Controller, so the services and environment do not crash if one of the servers crashes within the Microsoft datacenter; the Fabric Controller also manages the user's web application, including memory resources and load balancing. The Azure Services Platform can currently run .NET Framework applications written in C#, and supports the ASP.NET application framework and associated deployment methods for deploying applications onto the cloud platform. Two SDKs have been made available for interoperability with the Azure Services Platform: the Java SDK for .NET Services and the Ruby SDK for .NET Services. These enable Java and Ruby developers to integrate with .NET Services.
Routing :
Routing is the process of selecting paths in a network along which to send network traffic. Routing is performed for many kinds of networks, including the telephone network, electronic data networks (such as the Internet), and transportation networks. This article is concerned primarily with routing in electronic data networks using packet switching technology. In packet switching networks, routing directs packet forwarding, the transit of logically addressed packets from their source toward their ultimate destination through intermediate nodes, typically hardware devices called routers, bridges, gateways, firewalls, or switches. General-purpose computers with multiple network cards can also forward packets and perform routing, though they are not specialized hardware and may suffer from limited performance. The routing process usually directs forwarding on the basis of routing tables, which maintain a record of the routes to various network destinations. Thus, constructing routing tables, which are held in the routers' memory, is very important for efficient routing. Most routing algorithms use only one network path at a time, but multipath routing techniques enable the use of multiple alternative paths. Routing, in a narrower sense of the term, is often contrasted with bridging in its assumption that network addresses are structured and that similar addresses imply proximity within the network. Because structured addresses allow a single routing table entry to represent the route to a group of devices, structured addressing (routing, in the narrow sense) outperforms
unstructured addressing (bridging) in large networks, and has become the dominant form of addressing on the Internet, though bridging is still widely used within localized environments. Small networks may involve manually configured routing tables, while larger networks involve complex topologies and may change rapidly, making the manual construction of routing tables unfeasible. Nevertheless, most of the public switched telephone network (PSTN) uses precomputed routing tables, with fallback routes if the most direct route becomes blocked (see routing in the PSTN). Adaptive routing attempts to solve this problem by constructing routing tables automatically, based on information carried by routing protocols, and allowing the network to act nearly autonomously in avoiding network failures and blockages. Dynamic routing dominates the Internet. However, the configuration of the routing protocols often requires a skilled touch; one should not suppose that networking technology has developed to the point of the complete automation of routing.
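The point about structured addresses — one routing table entry standing for a whole group of destinations — can be shown with a small longest-prefix-match sketch. The prefixes and next-hop labels below are invented for illustration; real routers implement the same idea in specialized data structures.

```python
# Sketch of structured (routed) addressing: forwarding picks the most specific
# (longest) prefix that matches the destination address.
import ipaddress

ROUTING_TABLE = {
    ipaddress.ip_network("0.0.0.0/0"): "default gateway",     # catch-all route
    ipaddress.ip_network("10.0.0.0/8"): "core router A",      # covers 16 million hosts
    ipaddress.ip_network("10.1.0.0/16"): "branch router B",   # more specific subset
}

def next_hop(destination: str) -> str:
    addr = ipaddress.ip_address(destination)
    # Longest-prefix match: among matching entries, prefer the most specific one.
    matches = [net for net in ROUTING_TABLE if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return ROUTING_TABLE[best]

print(next_hop("10.1.2.3"))     # branch router B (the /16 wins over the /8)
print(next_hop("192.0.2.44"))   # default gateway
```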
Data Consolidation :
Data migration and data consolidation are two very different operations which produce different results. In either case, however, the process for accomplishing these tasks is somewhat the same. All organizations at one time or another will have to face the issue of migrating or consolidating their data to accomplish various goals. Both of these functions are, at times, a great challenge. Whether data is being moved across the country or simply being moved off an older production cluster to a new environment, there is a certain amount of risk. These are the times when problems at any level can cause a production system to fail. Before beginning a project that creates the need to move data, an advance strategy for not only moving the data but also protecting it at each step is key to success and to reducing risk. Data migration is the most common activity undertaken by organizations and is usually related to moving data from an old technology platform to a new platform (i.e., old hardware to new hardware) or moving data from an older version of an application to a new version of the same application. (This is not to be confused with data conversion, which converts data from one application to a different application that uses the "same" data, though the basic tenet of the operation is the same.) Data consolidation is usually associated with moving data from remote locations to a central location, or combining data due to an acquisition or merger. In either case the first step in the process will always be to back up or protect your data, so that in the event of a problem you can restore the operation. DR Scout is well suited to either of these operations. In an environment where DR Scout is implemented, the process of protecting the data is already complete. In addition, the ability to perform application failover and provide data availability exists without adding any additional hardware or software to accomplish the migration or consolidation operation. If DR Scout is not installed and running in an environment, implementation is not complex and will, in a very short period of time, provide the data protection required by either of these processes. How it works: DR Scout provides non-disruptive Continuous Data Protection (CDP) to allow instant recovery back to any point in time or any business event. Unlike other host-only-based approaches that severely impact production hosts, DR Scout offloads retention, encryption and compression functions to an out-of-band server. The diagram below shows a typical implementation.
Unique features and benefits: Uninterrupted operations: If your organization is undertaking a technology refresh or switching from one storage environment to another, DR Scout can ensure uninterrupted operation using its application failover capability while the new hardware is installed and tested. While the application remains constantly available so that there is no lost productivity, the implementation team can ensure the new environment is correctly installed and thoroughly tested. Assured recoverability: While many updates and upgrades are performed "in place" and are considered "non-disruptive", unforeseen problems can and often do occur. Because of this, it is still best practice to ensure your data is backed up and protected prior to beginning the operation. DR Scout can ensure your data is protected using its patented CDP technology, providing the peace of mind that you can always recover your data to the point in time just prior to a failure. In addition, using the application failover capability of DR Scout will also alleviate the concern of lost or corrupted data. Non-intrusive operation: Consolidating data from remote sites can be a painstaking process that can disrupt the remote operation. DR Scout can be used to remove this disruption by capturing the data in a non-intrusive manner. DR Scout's ability to transmit the data over the WAN in a controlled fashion, using its bandwidth policy management tools, means little impact on the production environment and assured protection of all of the remote data at the central location. Heterogeneous support: Today, many organizations are growing through acquisitions and mergers. This activity puts a heavy burden on IT organizations because of the disparate operations of the two organizations. Bringing these differing operations together can be an overwhelming task if not properly planned and without proper tools. DR Scout and its flexible architecture provide superior support for heterogeneous environments. The capability to handle various platforms helps control costs and reduces, if not eliminates, the need for multiple software solutions to support the merging of two different organizations.
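The precaution this section keeps returning to — protect and verify the data before trusting a migration or consolidation — can be sketched generically. The example below is not DR Scout (whose internals are not described here) but a minimal copy-and-verify pass with placeholder paths; a checksum mismatch flags any file that should not be trusted on the new platform.

```python
# Generic sketch of a verified data migration: copy every file, then compare
# checksums before the old platform is retired. Paths are placeholders.
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):   # read in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

def migrate(source: Path, target: Path) -> list[Path]:
    """Copy every file under source to target; return files that failed verification."""
    failures = []
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        dst = target / src.relative_to(source)
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)                      # copy data and metadata
        if sha256(src) != sha256(dst):              # verify before trusting the copy
            failures.append(src)
    return failures

# Usage (placeholder paths): only decommission the old platform if failures == [].
# failures = migrate(Path("/data/old_platform"), Path("/data/new_platform"))
```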
? ? ? ? ? ?
doc_472184456.docx
This is a ppt about Hawthorne Experiments.
ECR
Efficient Consumer Response (ECR) is a joint trade and industry body working towards making the grocery sector as a whole more responsive to consumer demand and promote the removal of unnecessary costs from the supply chain. The ECR movement beginning in the mid-nineties was characterized by the emergence of new principles of collaborative management along the supply chain. It was understood that companies can serve consumers better, faster and at less cost by working together with trading partners. The dramatic advances in information technology, growing competition, global business structures and consumer demand focused on better choice, service convenience, quality, freshness and safety, made it apparent that a fundamental reconsideration of the most effective way of delivering the right products to consumers at the right price was much needed. Non-standardized operational practices and the rigid separation of the traditional roles of manufacturer and retailer threatened to block the supply chain unnecessarily and failed to exploit the synergies that came from powerful new information technologies and planning tools. In other words, ECR allows companies to seek a competitive advantage by demonstrating their superior ability in working together with trading partners to add value to the consumer. There are four focus areas under ECR; Demand management, Supply management, Enablers and Integrators, which are intended to be addressed as an integrated set. These form the basis of the ECR Global Scorecard. To better serve the consumer, ECR has set out to invert the traditional model and break down nonproductive barriers. The impacts have been extensive and continue to resonate across industry.
ECR Europe was launched in 1994. With its headquarters in Brussels, the organization works in close cooperation with national ECR initiatives in most European countries. Participation in projects at European and national levels is open to large and small companies in the grocery and Fast Moving Consumer Goods sectors – including retailers, wholesalers, manufacturers, suppliers, brokers and third-party service providers such as logistics operators. Every year ECR Europe organizes projects where companies from all over Europe explore new areas of working together to fulfill consumer wishes better faster and at less costs or deepen existing experiences. The results of these projects are published for a wide audience through publications and an annual ECR Europe conference which attracts thousands of (top) managers from all over the world.
The Hawthorne Experiments :
The deviation from rulemaking on a higher level was documented for the first time in the Hawthorne studies (1924-1932) and called informal organization. At first this discovery was ignored and dismissed as the product of avoidable errors, until it finally had to be recognized that these unwritten laws of work of everyday life often had more influence on the fate of the enterprise than those conceived on organizational charts of the executive level. Numerous empirical studies in sociological organization research followed, ever more clearly proving this, particularly during the Human Relations Movement. It is important to analyze informal structures within an enterprise to make use of positive innovations, but also to be able to do away with bad habits that have developed over time. The Hawthorne effect is a form of reactivity, The term was coined in 1955 by Henry A. Landsberger when analyzing older experiments from 1924-1932 at the Hawthorne Works (outside Chicago). Landsberger defined the Hawthorne effect as: a short-term improvement caused by observing worker performance. Earlier researchers had concluded the short-term improvement was caused by teamwork when workers saw themselves as part of a study group or team. Others have broadened the definition to mean that people's behavior and performance change following any new or increased attention. Hence, the term Hawthorne effect no longer has a specific definition. The Hawthorne studies have had a dramatic effect on management in organizations and how people react to different situations. Although illumination research of workplace lighting formed the basis of the Hawthorne effect, other changes such as maintaining clean work stations, clearing floors of obstacles, and even relocating workstations resulted in increased productivity for short periods of time. Thus the term is used to identify any type of short-lived increase in productivity. In short, people will be more productive when appreciated or when watched. Like the Hawthorne effect, the definition of the Hawthorne experiments also varies. Most industrial/occupational psychology and organizational behavior textbooks refer to the illumination studies, and usually to the relay assembly test room experiments and the bank wiring room experiments. Only occasionally are the rest of the studies mentioned. Illumination Study Lighting intensity was altered to examine its effect on worker productivity. The findings were not significant. It seemed as if the workers tried harder when the lights went dim, just because they knew that they were in an experiment. This lead to the idea of the Hawthorne Effect, that people will behave differently when they are being watched.
Relay assembly experiments The researchers wanted to identify how other variables could affect productivity. They chose two women as test subjects and asked them to choose four other workers to join the test group. Together the women worked in a separate room over the course of five years (1927-1932) assembling telephone relays. Output was measured mechanically by counting how many finished relays each dropped down a chute. This measuring began in secret two weeks before moving the women to an experiment room and continued throughout the study. In the experiment room, they had a supervisor who discussed changes with them and at times used their suggestions. Then the researchers spent five years measuring how different variables impacted the group's and individuals' productivity. Some of the variables were:
?
changing the pay rules so that the group was paid for overall group production, not individual production giving two 5-minute breaks (after a discussion with them on the best length of time), and then changing to two 10-minute breaks (not their preference). Productivity increased, but when they received six 5-minute rests, they disliked it and reduced output.
?
? ?
providing food during the breaks shortening the day by 30 minutes (output went up); shortening it more (output per hour went up, but overall output decreased); returning to the earlier condition (where output peaked).
Changing a variable usually increased productivity, even if the variable was just a change back to the original condition. However it is said that this is the natural process of the human being to adapt to the environment without knowing the objective of the experiment occurring. Researchers concluded that the workers worked harder because they thought that they were being monitored individually. Researchers hypothesized that choosing one's own coworkers, working as a group, being treated as special (as evidenced by working in a separate room), and having a sympathetic supervisor were the real reasons for the productivity increase. One interpretation, mainly due to Mayo, was that "the six individuals became a team and the team gave itself wholeheartedly and spontaneously to cooperation in the experiment."
NISG :
The National Institute for Smart Government (NISG) is a non-for profit company incorporated in 2002 by the Government of India and NASSCOM with its head office at Hyderabad, India. The Government of India is committed to eGovernment in order to make India more competitive in the new global economy. Having a highly developed public service that is capable of delivering eGovernment services to customers is an essential part of that strategy. The Vision of NISG is "to establish itself as an institution of excellence in e-Governance and to leverage private sector resources through a Public-Private-Partnership mode in establishing eIndia." The mission statement for NISG is The mission of NISG is to facilitate application of Public and Private resources to e-Governance in the areas of Architect, Consultancy and Training."
SWAGAT :
State-Wide Attention on Public Grievance by Application of Technology SWAGAT is an innovative concept that enables direct communication between the citizens and the chief minister. In Gandhinagar, the fourth Thursday of every month is a SWAGAT day wherein the highest office in administration attends to the grievances of the man on the street. Grievances are logged in, transmitted and made available online to the officers concerned who have to reply within 3 to 4 hours. The departments concerned then have to be ready with the replies, before 3 p.m., when the Chief Minister holds video conferences with all the districts concerned. Applicants are called one by one and the chief minister examines each complaint in detail. The information sent by the department is also reviewed online in the presence of the complainant and the Collector/District Development Officer/Superintendent of Police and other officials concerned. Attempts are made to offer a fair and acceptable solution on the same day and no applicant has ever left without any firm reply to his grievance. The record is then preserved in the ‘SWAGAT’ database and a separate log is maintained for each case. Owing to the innovative use of technology that injects in accountability in the government machinery, the International institutions such as the Commonwealth Telecom Organization and University of Manchester have considered SWAGAT as an excellent model of e-transparency.
Integrated Workflow and Document Management System (IWDMS)
e-Governance can never be effective without integrating the citizen centric delivery processes with the back end processes within the Government. Over 80% of the day-to-day work executed by Government departments is of routine in nature and workflow driven. IWDMS project was conceptualized by Department of Science & Technology, Govt. of Gujarat to automate the day-to-day work and improve upon the Accountability, Transparency & Effectiveness in Government administration. IWDMS project has been implemented by Government of Gujarat (GoG) to improve the Accountability, Transparency & Effectiveness in Government administration through automating the government functions and processes. GoG has given stress to Change Management and Top-down approach for successful implementation of IWDMS project. IWDMS provides Document Management, Workflow Management, Collaborative Environment and Knowledge Management in an integrated fashion and delivers an Electronic Workplace that result in productivity improvement in Government. The strategy adopted by Government for implementation of IWMDS project was to follow top-down approach. IWDMS was thereby implemented in Secretariat to set example and ensure continuous support from top leadership at all levels. Implementation at Secretariat was considered critical due to its vital encompassing areas including Policy Formulation, Allocation of Budgetary Resources, Monitoring & Supervision. Moreover, Secretariat is also directly accountable to Legislative assembly. To ensure smooth implementation of project across various departments the implementation strategy comprised of two phases. With 12 selective departments in Phase-I, kick-off presentations were carried out for all employees in the departments. These presentations highlighted current scenario of document processing in a sample department and how rendering IWDMS services would eliminate the repetition of processes. Presentations and training programmes have been conducted across all departments, and are in progress in HoDs. The Government also gave priority to roll out core and common applications and set stage for roll out of department specific applications thereafter. The project initially covered all employees of the Government of Gujarat at New Sachivalaya, Gandhinagar. It is now being extended to employees of the HoDs of the departments in Gandhinagar and Ahmedabad. IWDMS will be extended to all other HoDs in Gujarat in a phased manner thus covering all the Government offices/agencies by single file management system. Vision: “To provide better service to citizens by improving the efficiency of Government employees by automating and streamlining the Government processes” Objectives: Ensure accountable, transparent and effective administration Use IT as an enabler for efficient workplace Create an automated Office Management System Move towards less paper office by automation of routine tasks Enable Prioritization of work Enable Policy Based Processing IWDMS has facilitated to eliminate several steps required right from inwarding a physical correspondence till creating a file from it. IWDMS provides a central numbering system for all correspondences and files. This process eliminates registering of correspondences and files at
each step for traceability and hence reduces the number of steps. All registers required to be kept are automatically generated through the system. Electronic drafts which are attached to files created in IWDMS could be edited at each level of submission and at the same time track can be kept of all the changes done by users at various levels. Moreover the time required to transport the physical file by clerks and peons is reduced to merely fraction of a second.
NIXI :
The National Internet Exchange of India (NIXI) is a non-profit Company established in 2003 to provide neutral Internet Exchange Point services in the country. It was established with the Internet Service Providers Association of India (ISPAI) to become the operational meeting point of Internet service providers (ISPs) in India. Its main purpose is to facilitate handing over of domestic Internet traffic between the peering ISP members, rather than using servers in the US or elsewhere. This enables more efficient use of international bandwidth and saves foreign exchange. It also improves the Quality of Services for the customers of member ISPs, by being able to avoid multiple international hops and thus lowering delays. NIXI currently has seven operational nodes at the centers in Delhi (Noida), Mumbai (Vashi), Chennai, Kolkata, Bangalore, Hyderabad and Ahmedabad. The NIXI services consists of: 1. Access to the layer-2 switched medium (fast ethernet). 2. One IP address on the LAN with a reverse DNS mapping in the "nixi.in" domain. 3. 24x7 watch service, 24x7 hardware maintenance and 24x7 helpdesk service on the NIXI switch. 4. Adding a new route within 2 working days of receiving a request, handling complaints and problems within 3 hours of receiving them within a normal working day from 9.00 am to 5.30 pm. Outside of this 6–8 hours will be required. 5. This will be applicable when the peering ISP enters into a membership agreement with NIXI and adhere to the governing rules and regulations, including that of housing their routers in the NIXI locations.
NAS :
Network-attached storage (NAS) is file-level computer data storage connected to a computer network providing data access to heterogeneous network clients. A NAS unit is essentially a selfcontained computer connected to a network, with the sole purpose of supplying file-based data storage services to other devices on the network. The operating system and other software on the NAS unit provide the functionality of data storage, file systems, and access to files, and the management of these functionalities. The unit is not designed to carry out general-purpose computing tasks, although it may technically be possible to run other software on it. NAS units usually do not have a keyboard or display, and are controlled and configured over the network, often by connecting a browser to their network address. The alternative to NAS storage on a network is to use a computer as a file server. In its most basic form a dedicated file server is no more than a NAS unit with keyboard and display and an operating system which, while optimised for providing storage services, can run other tasks; however, file servers are increasingly used to supply other functionality, such as supplying database services, email services, and so on. A general-purpose operating system is not needed on a NAS device, and often minimalfunctionality or stripped-down operating systems are used. For example FreeNAS, which is Free / open source NAS software designed for use on standard computer hardware, is just a version of FreeBSD with all functionality not related to data storage stripped out. NASLite is a highly optimized Linux distribution running from a floppy disk for the sole purpose of a NAS. Likewise, NexentaStor is based upon the core of the NexentaOS, aFree / open source hybrid operating system with an OpenSolaris core and a GNU user environment. NAS systems contain one or more hard disks, often arranged into logical, redundant storage containers or RAID arrays (redundant arrays of inexpensive/independent disks). NAS removes the responsibility of file serving from other servers on the network. NAS uses file-based protocols such as NFS (popular on UNIX systems) or SMB/CIFS (Server Message Block/Common Internet File System) (used with MS Windows systems). NAS units rarely limit clients to a single protocol. NAS provides both storage and filesystem. This is often contrasted with SAN (Storage Area Network), which provides only block-based storage and leaves filesystem concerns on the "client" side. SAN protocols are SCSI, Fibre Channel, iSCSI, ATA over Ethernet (AoE),
or HyperSCSI. Despite differences SAN and NAS are not exclusive and may be combined in one solution: SANNAS hybrid The boundaries between NAS and SAN systems are starting to overlap, with some products making the obvious next evolution and offering both file level protocols (NAS) and block level protocols (SAN) from the same system. However a SAN device is usually served through NAS as one large flat file, not as a true filesystem. An example of this is Openfiler, a free software product running on Linux. NAS is useful for more than just general centralized storage provided to client computers in environments with large amounts of data. NAS can enable simpler and lower cost systems such as load-balancing and fault-tolerant email and web server systems by providing storage services. The potential emerging market for NAS is the consumer market where there is a large amount of multi-media data. Such consumer market appliances are now commonly available. Unlike their rackmounted counterparts, they are generally packaged in smaller form factors. The price of NAS appliances has plummeted in recent years, offering flexible network-based storage to the home consumer market for little more than the cost of a regular USB or FireWireexternal hard disk. Many of these home consumer devices are built around ARM, PowerPC or MIPS processors running an embedded Linux operating system. Examples include Buffalo'sTeraStation and Linksys NSLU2. More recently, home NAS devices have incorporated support for the Universal Plug and Play protocol, enabling them to serve the growing number of networked home media players.
SAN :
A storage area network (SAN) is an architecture to attach remote computer storage devices (such as disk arrays, tape libraries, and optical jukeboxes) to servers in such a way that the devices appear as locally attached to the operating system. Although the cost and complexity of SANs are dropping, they are still uncommon outside larger enterprises. Network attached storage (NAS), in contrast to SAN, uses file-based protocols such as NFS or SMB/CIFS where it is clear that the storage is remote, and computers request a portion of an abstract file rather than a disk block.
Historically, data centers first created "islands" of SCSI disk arrays. Each island was dedicated to an application, and visible as a number of "virtual hard drives" (i.e. LUNs). Essentially, a SAN connects storage islands together using a high-speed network, thus allowing all applications to access all disks. Operating systems still view a SAN as a collection of LUNs, and usually maintain their own file systems on them. These local file systems, which cannot be shared among multiple operating systems/hosts, are the most reliable and most widely used. If two independent local file systems resided on a shared LUN, they would be unaware of this fact, would have no means of cache synchronization and eventually would corrupt each other. Thus, sharing data between computers through a SAN requires advanced solutions, such as SAN file systems orclustered computing. Despite such issues, SANs help to increase storage capacity utilization, since multiple servers share the storage space on the disk arrays. The common application of a SAN is for the use of transactionally accessed data that require high-speed block-level access to the hard drives such as email servers, databases, and high usage file servers. In contrast, NAS allows many computers to access the same file system over the network and synchronizes their accesses. Lately, the introduction of NAS heads allowed easy conversion of SAN storage to NAS Sharing storage usually simplifies storage administration and adds flexibility since cables and storage devices do not have to be physically moved to shift storage from one server to another. Other benefits include the ability to allow servers to boot from the SAN itself. This allows for a quick and easy replacement of faulty servers since the SAN can be reconfigured so that a replacement server can use the LUN of the faulty server. This process can take as little as half an hour and is a relatively new idea being pioneered in newer data centers. There are a number of emerging products designed to facilitate and speed this up still further. Brocade, for example, offers an Application Resource Manager product which automatically provisions servers to boot off a SAN, with typical-case load times measured in minutes. While this area of technology is still new many view it as being the future of the enterprise datacenter. SANs also tend to enable more effective disaster recovery processes. A SAN could span a distant location containing a secondary storage array. This enables storage replication either implemented by disk array controllers, by server software, or by specialized SAN devices. Since IP WANs are often the least costly method of long-distance transport, the Fibre Channel over IP (FCIP) and iSCSI protocols have been developed to allow SAN extension over IP networks. The traditional physical SCSI layer could only support a few meters of distance - not nearly enough to ensure business continuance in a disaster. Demand for this SAN application has increased
dramatically after the September 11th attacks in the United States, and increased regulatory requirements associated with Sarbanes-Oxley and similar legislation. The economic consolidation of disk arrays has accelerated the advancement of several features including I/O caching, snapshotting, and volume cloning (Business Continuance Volumes or BCVs).
Hybrid using DAS, NAS and SAN technologies.
Windows Azure :
Microsoft's Azure Services Platform is a cloud platform (cloud computing platform as a service) offering that "provides a wide range of internet services that can be consumed from both onpremises environments or the internet". It is significant in that it is Microsoft's first step into cloud computing following the recent launch of the Microsoft Online Services offering. The idea and push from Microsoft to compete directly in the software as a service model that Google's Google App Engine and Amazon's EC2 have offered is increasingly seen by them and others as an important next step in application development. In this idea, software doesn't have to be installed and managed on the user's computer. It also allows files and folders to be accessed from the web. The Azure Services Platform uses a specialized operating system, Windows Azure, to run its "fabric layer" — a cluster hosted at Microsoft's datacenters that manages computing and storage resources of the computers and provisions the resources (or a subset of them) to applications running on top of Windows Azure. Windows Azure, which was known as "Red Dog" during its development, has been described as a "cloud layer" on top of a number of Windows Server systems, which use Windows Server 2008 and Hyper-V to provide virtualization of services.
The platform includes five services (Live Services, SQL Services, .NET Services, SharePoint Services and Dynamics CRM Services) which developers can use to build the applications that will run in the cloud. A client library, in managed code, and associated tools are also provided for developing cloud applications in Visual Studio. Scaling and reliability are controlled by the Windows Azure Fabric Controller, so that the services and environment do not fail if one of the servers within the Microsoft datacenter crashes; the Fabric Controller also manages the user's web application, including memory resources and load balancing. The Azure Services Platform can currently run .NET Framework applications written in C#, and it supports the ASP.NET application framework and its associated deployment methods for deploying applications onto the cloud platform. Two SDKs have been made available for interoperability with the Azure Services Platform: the Java SDK for .NET Services and the Ruby SDK for .NET Services. These enable Java and Ruby developers to integrate with .NET Services.
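As a purely conceptual illustration of the role ascribed above to the Fabric Controller (this is not the Windows Azure API; all names below are hypothetical), the sketch shows the kind of bookkeeping such a controller performs: keeping a requested number of service instances placed on healthy hosts and moving instances away from a host that fails.

# Conceptual sketch only; not Microsoft's implementation or API.
class FabricControllerSketch:
    def __init__(self, hosts):
        self.hosts = set(hosts)        # healthy hosts in the cluster
        self.placement = {}            # service name -> list of hosts

    def deploy(self, service, instance_count):
        """Spread the requested number of instances across healthy hosts."""
        hosts = sorted(self.hosts)
        self.placement[service] = [hosts[i % len(hosts)]
                                   for i in range(instance_count)]

    def host_failed(self, bad_host):
        """Reassign instances from a failed host so the service keeps running."""
        self.hosts.discard(bad_host)
        for service, assigned in self.placement.items():
            survivors = sorted(self.hosts)
            self.placement[service] = [
                h if h != bad_host else survivors[i % len(survivors)]
                for i, h in enumerate(assigned)
            ]

controller = FabricControllerSketch(["hostA", "hostB", "hostC"])
controller.deploy("web-role", instance_count=3)
controller.host_failed("hostB")
print(controller.placement)   # "web-role" instances now only on hostA/hostC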
Routing :
Routing is the process of selecting paths in a network along which to send network traffic. Routing is performed for many kinds of networks, including the telephone network, electronic data networks (such as the Internet), and transportation networks. This section is concerned primarily with routing in electronic data networks that use packet switching technology. In packet switching networks, routing directs packet forwarding, the transit of logically addressed packets from their source toward their ultimate destination through intermediate nodes, typically hardware devices called routers, bridges, gateways, firewalls, or switches. General-purpose computers with multiple network cards can also forward packets and perform routing, though they are not specialized hardware and may suffer from limited performance. The routing process usually directs forwarding on the basis of routing tables, which maintain a record of the routes to various network destinations. Constructing routing tables, which are held in the routers' memory, is therefore very important for efficient routing. Most routing algorithms use only one network path at a time, but multipath routing techniques enable the use of multiple alternative paths. Routing, in a narrower sense of the term, is often contrasted with bridging in its assumption that network addresses are structured and that similar addresses imply proximity within the network. Because structured addresses allow a single routing table entry to represent the route to a group of devices, structured addressing (routing, in the narrow sense) outperforms unstructured addressing (bridging) in large networks, and it has become the dominant form of addressing on the Internet, though bridging is still widely used within localized environments. Small networks may use manually configured routing tables, but larger networks involve complex topologies that may change rapidly, making the manual construction of routing tables infeasible. Nevertheless, most of the public switched telephone network (PSTN) uses precomputed routing tables, with fallback routes if the most direct route becomes blocked (see routing in the PSTN). Adaptive routing attempts to solve this problem by constructing routing tables automatically, based on information carried by routing protocols, allowing the network to act nearly autonomously in avoiding network failures and blockages. Dynamic routing dominates the Internet. However, the configuration of the routing protocols often requires a skilled touch; one should not suppose that networking technology has developed to the point of complete automation of routing.
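The advantage of structured addressing can be seen in a minimal longest-prefix-match lookup. The sketch below (Python, with illustrative prefixes and next hops) shows how a single routing table entry such as 10.0.0.0/8 covers a whole group of destinations, while a more specific prefix overrides it; real routers use optimized data structures such as tries or TCAM rather than a linear scan.

# A minimal sketch of longest-prefix-match forwarding over a hand-built
# IPv4 routing table. Prefixes and next hops are illustrative only.
import ipaddress

ROUTING_TABLE = {
    ipaddress.ip_network("10.0.0.0/8"):   "192.168.1.1",
    ipaddress.ip_network("10.20.0.0/16"): "192.168.1.2",
    ipaddress.ip_network("0.0.0.0/0"):    "192.168.1.254",  # default route
}

def next_hop(destination: str) -> str:
    """Return the next hop for the most specific (longest) matching prefix."""
    addr = ipaddress.ip_address(destination)
    matches = [net for net in ROUTING_TABLE if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return ROUTING_TABLE[best]

print(next_hop("10.20.5.9"))   # 192.168.1.2 (the /16 beats the broader /8)
print(next_hop("10.99.1.1"))   # 192.168.1.1
print(next_hop("8.8.8.8"))     # 192.168.1.254 via the default route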
Data Consolidation :
Data migration and data consolidation are two very different operations which produce different results. In either case, however, the process for accomplishing these tasks is much the same. All organizations at one time or another will have to face the issue of migrating or consolidating their data to accomplish various goals. Both of these functions are, at times, a great challenge. Whether the data is being moved across the country or simply off an older production cluster to a new environment, there is a certain amount of risk. These are the times when problems at any level can cause a production system to fail. Before beginning a project that requires moving data, a strategy for not only moving the data but also protecting it at each step is key to success and to reducing risk. Data migration is the most common activity undertaken by most organizations and is usually related to moving data from an old technology platform to a new platform (i.e., old hardware to new hardware) or moving data from an older version of an application to a new version of the same application. (This is not to be confused with data conversion, which converts data from one application to a different application that uses the "same" data, though the basic tenet of this operation is the same.) Data consolidation is usually associated with moving data from remote locations to a central location or combining data because of an acquisition or merger. In either case, the first step in the process will always be to back up or protect your data, so that in the event of a problem you can restore the operation. DR Scout is perfectly suited to either of these operations. In an environment where DR Scout is implemented, the process of protecting the data is already complete. In addition, the ability to perform application failover and provide data availability exists without adding any additional hardware or software to accomplish the migration or consolidation operation. If DR Scout is not installed and running in an environment, implementation is not complex and will, in a very short period of time, provide the data protection required by either of these processes. How it works: DR Scout provides non-disruptive Continuous Data Protection (CDP) to allow for instant recovery back to any point in time or any business event. Unlike other host-only approaches that severely impact production hosts, DR Scout offloads retention, encryption and compression functions to an out-of-band server. The diagram below shows a typical implementation.
Unique Features and Benefits: Uninterrupted Operations: If your organization is undertaking a technology refresh or switching from one storage environment to another, DR Scout can ensure uninterrupted operation using its application failover capability while new hardware is installed and tested. While the application remains constantly available, so that there is no lost productivity, the implementation team can ensure the new environment is correctly installed and thoroughly tested. Assured Recoverability: While many updates and upgrades are performed "in place" and are considered "non-disruptive", unforeseen problems can, and many times do, occur. Because of this, it is still best practice to ensure your data is backed up and protected prior to beginning the operation. DR Scout can ensure your data is protected using its patented CDP technology, providing the peace of mind that you can always recover your data to the point in time just prior to a failure. In addition, using the application failover capability of DR Scout will also alleviate the concern of lost or corrupted data. Non-Intrusive Operation: Consolidating data from remote sites can be a painstaking process that can disrupt the remote operation. DR Scout can be used to remove this disruption by capturing the data in a non-intrusive manner. DR Scout's ability to transmit the data over the WAN in a controlled fashion, using its bandwidth policy management tools, means little impact on the production environment and assured protection of all of the remote data at the central location. Heterogeneous Support: Today, many organizations are growing through acquisitions and mergers. This activity puts a heavy burden on IT organizations because of the disparate operations of the two organizations. Bringing these differing operations together can be an overwhelming task if not properly planned and without the proper tools. DR Scout and its flexible architecture provide superior support for heterogeneous environments. The capability to handle various platforms helps control costs and reduces, if not eliminates, the need for multiple software solutions to support the merging of two different organizations.
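As a rough illustration of the continuous data protection idea described above (a generic sketch, not DR Scout's implementation), the following Python fragment journals every write with a sequence number standing in for a timestamp, so that the protected data can be rolled back to any earlier recovery point.

# Conceptual CDP sketch: journal every write, restore to any recovery point.
class CdpJournalSketch:
    def __init__(self):
        self.journal = []          # (sequence_number, block_number, data)
        self._seq = 0

    def record_write(self, block_number, data):
        """Capture a write as it happens (a real product does this out of band)."""
        self._seq += 1
        self.journal.append((self._seq, block_number, data))
        return self._seq           # caller can remember this as a recovery point

    def restore_to(self, recovery_point):
        """Rebuild the protected data as it looked at the chosen recovery point."""
        volume = {}
        for seq, block, data in self.journal:
            if seq <= recovery_point:
                volume[block] = data   # later writes overwrite earlier ones
        return volume

cdp = CdpJournalSketch()
checkpoint = cdp.record_write(0, b"good header")
cdp.record_write(0, b"corrupted header")
print(cdp.restore_to(checkpoint)[0])   # b'good header', the state before the bad write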