International Journal of Advanced Technology & Engineering Research (IJATER)
www.ijater.com
ISSN No: 2250-3536 Volume 4, Issue 1, Jan. 2014 9
FRAMEWORK FOR PROJECT EVALUATION SYSTEM USING BUSINESS INTELLIGENCE OF COLLABORATIVE SCM

Anantha Keshava Murthy (1), R Venkataram (2), S G Gopalakrishna (3)
(1) Associate Professor, Dept of ME, EPCET, Bangalore, India
(2) Director Research, EPCET, Bangalore, India
(3) Principal, NCET, Bangalore, India
Abstract
Major corporate and manufacturing organizations face challenges in managing large numbers of activities involving assets, people and customers. Managing data so that enterprise applications are effectively integrated in real time poses a new challenge. To learn from the past and forecast the future, many manufacturing organizations are adopting Business Intelligence (BI) tools and systems, recognizing the importance of forecasting business trends and evolving suitable strategies in real time.
Modern enterprises [01] face ever-increasing challenges: shorter product lifecycles, increased outsourcing, mass-customization demands, more complex products, geographically dispersed design teams, inventories subject to rapid depreciation, and rapid fulfillment needs. To tackle these challenges effectively, new industrial capabilities are required in order to gain competitive advantage in today's Internet economy, where geographically scattered design teams and supply chain partners must collaboratively design products on a virtual basis; where static designs must give way to mass customization, often using predefined modules or building blocks to rapidly configure new product platforms that can be flexibly managed through their lifecycle; and where partners must exchange and control product information and perform real-time program/project management.
The present work develops an integrated predictive collaboration performance evaluation framework based on multiple-criteria decision making for Business-to-Business (B2B) settings, built on the 'SCOR' model. Secondly, a method for performance management of the collaboration process, covering monitoring and reporting, has been developed. This collaborative performance indicator combines the KPIs of the individual companies and includes the development of collaborative KPIs (cKPIs) and real-time process performance analysis. Finally, it is proposed to develop a BI system module for online project evaluation. So far, a framework has been developed for an integrated B2B predictive collaboration performance evaluation module that enables analysis of results at the macro level using a C&RT decision tree model and performance clustering based on K-Means construction. The model was deployed at a medium-scale machinery manufacturing company, where domain users were able to prepare their collaborative performance data in the input-output model format, feed it into the model, pose performance-improvement scenarios, and analyze the results in terms of practical feasibility; they could exploit KPI sensitivity analysis related to their expected collaborative performance, analyze the impact of sub-KPIs on that performance, and form sub-KPI improvement plans based on the model results and their long-term strategy.
One major analytics approach is to identify an impending change of trend in any Key Performance Indicator (KPI) before it accelerates. Such early-warning systems are very important and useful in scenarios such as vendor management for services and service-quality enhancement. However, using only a scalar value to compare multiple series, rank them and project them into the future is not appropriate, even though it is practically possible.
Introduction
Recently, many firms have been exposed to a sophisticated environment constituted by open markets [02], globalization of sourcing, intensive use of information technologies, and shrinking product lifecycles. This complexity is intensified by consumers who are becoming increasingly demanding in terms of product quality and service. Globalization has increased firms' internationalization, shifting them from local to global markets and intensifying competition [03]. Furthermore, the dynamic environment (competitors, suppliers' capacity, product variability and customers) has complicated business processes. To cope, many enterprises are forced to cooperate within a Supply Chain (SC) by forming a virtual enterprise: a network of agents typically consisting of suppliers, manufacturers, distributors,
retailers and consumers. Previous research [04] posits that a SC can be considered a network of autonomous and semi-autonomous business entities associated with one or more families of related products.
The ever-changing market, with prevailing volatility in the business environment and constantly shifting and rising customer expectations, creates two types of timeframe-based uncertainty that can affect the system: (i) short-term uncertainties and (ii) long-term uncertainties. Short-term uncertainties include day-to-day processing variations such as cancelled or rushed orders and equipment failures. Long-term uncertainties include material and product unit-price fluctuations and seasonal demand variations. Understanding these uncertainties informs planning decisions, so that the company can safeguard against threats and avoid their effects. Any failure to recognize demand fluctuations often has unpredictable consequences, such as losing customers, decreasing market share and increasing inventory-holding costs [05].
In the contemporary scenario of changing customer requirements and expectations, as well as changing technological requirements, manufacturers are forced to rely on agile supply chain capabilities in order to achieve competitive advantage. SC integration is often considered a vital tool for achieving competitive advantage [06], yet previous research has demonstrated implementation difficulties arising from factors such as lack of trust among partners and over-reliance on technology. Considering People, Processes and Technology, a BI, analytics and PM initiative involves three groups of participants: Analysts, Users and IT staff.
Analysts define and explore business models, mine and analyze data and events, produce reports and dashboards, provide insights into the organization's performance and support decision-making processes.
Users “consume” the information, analysis and insight
produced by applications and tools to make decisions or take
other actions that help the enterprise achieve its goals. Some
users may be more than just consumers, such as the top
executives who will help craft the performance metric
framework. Users may also include operational workers, in
addition to executives and managers. The users determine how
well BI, analytics and PM initiatives succeed. Considering
users? requirements from several perspectives:
IT Enablers help design, build and maintain the systems that users and analysts use. Traditional IT roles
such as project managers, data and system architects, and
developers remain important. But BI, analytics and PM
initiatives require more than simply building applications to fit
a list of requirements. Those applications also have to deliver
business results. Users have to want to use them. They have to
support analytic, business and decision processes. Thus, IT
enablers need business knowledge and the ability to work
collaboratively outside their traditional area of expertise.
With the growing number of large data warehouses
[07] for decision support applications, efficiently executing
aggregate queries is becoming increasingly important.
Aggregate queries are frequent in decision support
applications, where large history tables often are joined with
other tables and aggregated. Because the tables are large,
better optimization of aggregate queries has the potential to
result in huge performance gains. Unfortunately, aggregation operators behave differently from standard relational operators such as select, project, and join, so existing rewrite rules for optimizing queries almost never involve aggregation operators. To reduce the cost of executing aggregate queries in a data warehousing environment, frequently used aggregates are often precomputed and materialized. These materialized aggregate views are commonly referred to as summary tables. Summary tables can be used to help answer aggregate queries other than the view they represent, potentially resulting in huge performance gains. However, without algorithms for replacing base relations in aggregate queries with summary tables, the full potential of summary tables for answering aggregate queries has not been realized.
In this research work an attempt has been made to develop an integrated project evaluation module. The idea is to predict future collaborative performance in a systematic manner using a crosstab query based on a moving-average/regression forecasting model.
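The forecasting step mentioned above can be sketched in a few lines; the following is a minimal pure-Python illustration, assuming quarterly collaborative-performance scores per partner (partner names and figures are invented, not taken from the deployed model):

```python
# Sketch: moving-average forecast over a crosstab of partner x quarter
# performance scores. All names and numbers are hypothetical.

def moving_average_forecast(series, window=3):
    """Forecast the next value as the mean of the last `window` points."""
    if len(series) < window:
        window = len(series)
    return sum(series[-window:]) / window

# Crosstab: rows are partners, columns are quarterly KPI scores.
crosstab = {
    "PartnerA": [72.0, 75.0, 78.0, 80.0],
    "PartnerB": [60.0, 58.0, 61.0, 59.0],
}

forecasts = {p: moving_average_forecast(s) for p, s in crosstab.items()}
for partner, f in sorted(forecasts.items()):
    print(f"{partner}: next-quarter forecast = {f:.2f}")
```

A regression-based variant would replace the windowed mean with a fitted trend line, which is closer to the early-warning use case described in the abstract.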
Background-Literature Review
This section explores the basic concepts and literature most relevant to this work: supply chain management, the bullwhip effect, collaborative CRM processes, knowledge management, key findings on analytics and performance management frameworks, an overview of KPI analysis methodology, data warehousing and analytical processing, data mining, performance measurement, optimizing aggregations, Autoregressive Integrated Moving Average (ARIMA), the Granger causality test as an exploratory tool, etc.
Supply Chain Management: Supply Chain Management focuses on managing internal aspects of the supply chain. SCM is concerned with the integrated process of design, management and control of the supply chain for the purpose of providing business value to organizations by lowering cost and enhancing customer reachability. Further, SCM is the management of upstream and downstream relationships
among suppliers to deliver superior value at lower cost to the supply chain as a whole. Factors such as globalization and demand-uncertainty pressures have forced companies to concentrate their efforts on their core business [08], a process that leads many companies to outsource less profitable activities so as to gain cost savings and an increased focus on core business activities. As a result, most of these companies have opted for specialization and differentiation strategies. Moreover, many companies are attempting to adopt new business models built around the concept of networks in order to cope with this complexity in planning and prediction [09]. The changes in the business environment have shifted the focus of many companies towards mass customization instead of mass production, and have drawn their attention to markets and customer value rather than the product [10]. As noted in the International Journal of Managing Value and Supply, any single company often cannot satisfy all customer requirements in the face of fast-developing technologies, a variety of product and service requirements and shortened product lifecycles. Such new business environments have made companies look to the supply chain as an 'extended enterprise' to meet the expectations of end customers.
Bullwhip Effect: In a Supply Chain (SC), the uncertain market demands faced by individual firms are usually driven by macro-level, industry-related or economy-related environmental factors. Demand forecasts are individually managed, which causes the SC to become inefficient in three ways: (i) supply chain partners invest repeatedly in acquiring highly correlated demand information, which increases the overall cost of demand forecasting; (ii) the quality of individual forecasts is generally sub-optimal, since individual companies have only limited access to information sources and limited ability to process them, resulting in less accurate forecasts and inefficient decision making; and (iii) firms vary in their capability to produce good-quality forecasts.
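The demand-amplification at the heart of the bullwhip effect can be illustrated with a toy two-tier simulation; this is a hedged sketch with invented parameters, not the paper's supply chain model:

```python
import random
import statistics

def simulate_bullwhip(periods=200, seed=7):
    """Toy two-tier chain: the retailer orders on a naive trend
    extrapolation of noisy demand, amplifying variance upstream."""
    random.seed(seed)
    demand, orders = [], []
    prev = 100.0
    for _ in range(periods):
        d = 100.0 + random.gauss(0, 5)       # end-customer demand
        # Retailer over-reacts to the last observed change (noise included).
        order = d + 1.5 * (d - prev)
        demand.append(d)
        orders.append(order)
        prev = d
    return statistics.pvariance(demand), statistics.pvariance(orders)

var_demand, var_orders = simulate_bullwhip()
print(f"demand variance {var_demand:.1f} vs upstream order variance {var_orders:.1f}")
```

Even this crude rule multiplies the variance seen upstream several times over, which is the inefficiency the collaborative forecasting approach is meant to remove.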
Collaborative CRM Processes: CRM entails all aspects of the relationships a company has with its customers [11], from initial contact, presales and sales to after-sales service and support. Collaboration between firms can improve the intra-organizational business processes involved. The identification and definition of collaborative CRM core processes is still ambiguous. Collaborative business processes found in the literature include marketing campaigns, sales management, service management, complaint management, retention management, customer scoring, lead management, customer profiling, customer segmentation, product development, feedback and knowledge management.
Knowledge Management: The business benefits of enterprise-system investments have included transactional efficiency, internal process integration, back-office process automation, transactional status visibility, and reduced information-sharing costs. While some enterprises began to think in the direction of acquiring and preserving knowledge, the primary motivation for many of these investments was better control over day-to-day operations. Like knowledge itself, knowledge management is difficult to define [12]. However, defining what is understood by knowledge management is arguably simpler than defining knowledge on its own: the idea of 'management' gives a starting point, allowing one to consider, for example, the activities that make it up, explain the processes of creation and transfer, or state its main goals and objectives, without needing to define what is understood by knowledge. Consequently, the literature contains more ideas and definitions of knowledge management than of knowledge itself, although these are not always clear, as numerous terms are connected with the concept.
KPI Analysis Methodology: To improve supply chain management performance in a systematic way [13], a methodology of analyzing iterative KPI accomplishments is proposed. The framework consists of the following steps (see Figure.1). First, managers identify and define KPIs and their relationships. Then, the accomplishment costs of these KPIs are estimated and their dependencies are surveyed. Optimization calculation (e.g., structure analysis, computer simulation) is used to estimate the convergence of the total KPI accomplishment cost and to find the critical KPIs and their improvement patterns. Finally, the performance management strategy can be adjusted by interpreting the analysis results. The following sections discuss the details of this methodology. Identifying KPIs and modeling their relationships: managers in supply chains usually identify KPIs according to their objective requirements and practical experience, but to obtain a systematic or balanced performance measurement they often turn [14] to widely recognized models such as BSC and SCOR.
Conventional wisdom tells us a few things about
establishing key performance indicators. It goes something like
this: Determine corporate goals. Identify metrics to grade
progress against those goals. Capture actual data for those
metrics. Jam metrics into scorecards. Jam scorecards down the
throats of employees. Cross fingers. Hope for the best.
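The cost-and-dependency step of the methodology above can be sketched as follows; the KPI names, costs and dependency graph are purely hypothetical:

```python
# Sketch of the cost-propagation step: each KPI has a direct
# accomplishment cost and may depend on other KPIs; a KPI's total
# cost includes everything it (transitively) depends on. All names
# and numbers are invented, not from the paper's case study.

costs = {"on_time_delivery": 4.0, "forecast_accuracy": 2.5,
         "inventory_turns": 3.0, "order_fill_rate": 1.5}
depends_on = {"on_time_delivery": ["forecast_accuracy", "inventory_turns"],
              "inventory_turns": ["forecast_accuracy"],
              "forecast_accuracy": [], "order_fill_rate": []}

def total_cost(kpi, seen=None):
    """Direct cost plus the cost of all (transitive) prerequisites."""
    seen = set() if seen is None else seen
    if kpi in seen:            # guard against cyclic dependencies
        return 0.0
    seen.add(kpi)
    return costs[kpi] + sum(total_cost(d, seen) for d in depends_on[kpi])

ranked = sorted(costs, key=total_cost, reverse=True)
print("critical KPIs (by total accomplishment cost):", ranked)
```

The real methodology replaces this ranking with structure analysis or simulation, but the idea of surfacing the most cost-critical KPIs is the same.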
Figure.1- A research framework of improving supply chain KPIs
accomplishment
Data Warehouse-Analytical Processing: The construction of data warehouses involves data cleaning and data integration [15]. This can be viewed as an important pre-processing step
for data mining. Moreover, data warehouses provide analytical
processing tools for the interactive analysis of
multidimensional data of varied granularities, which facilitates
effective data mining. Furthermore, many other data mining
functions such as classification, prediction, association, and
clustering, can be integrated with analytical processing
operations to enhance interactive mining of knowledge at
multiple levels of abstraction.
Subject-oriented: A data warehouse [16] is organized around
major subjects, such as customer, vendor, product, and sales.
Rather than concentrating on the day-to-day operations and
transaction processing of an organization, a data warehouse
focuses on the modeling and analysis of data for decision
makers. Hence, data warehouses typically provide a simple
and concise view around particular subject issues by excluding
data that are not useful in the decision support process.
Integrated: A data warehouse is usually constructed by integrating multiple heterogeneous sources, such as relational databases, flat files, and on-line transaction records. Data cleaning and data integration techniques are applied to ensure consistency in naming conventions, encoding structures, attribute measures, and so on.
A data cube: A data cube [17] allows data to be modeled and
viewed in multiple dimensions. It is defined by dimensions
and facts. In general terms, dimensions are the perspectives or
entities with respect to which an organization wants to keep
records. For example, consider sales data from the XYZ company and a data warehouse that keeps records of the store's sales with respect to the dimensions time, item, branch, and location.
Fact table: Sales (facts are numerical measures, e.g., sales amount, number of units sold). The fact table contains the names of the facts, or measures, as well as keys to each of the related dimension tables.
Dimension tables: time, item, branch and location (these dimensions allow the store to keep track of things like monthly sales of items, and the branches and locations at which the items were sold).
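The fact table and dimension tables described above can be made concrete with a minimal star schema; the sketch below uses SQLite with invented rows, loosely following the text's naming (time, item, rupees sold, units sold):

```python
import sqlite3

# Minimal star schema: one fact table keyed to two dimension tables.
# Table names and sample rows are illustrative inventions.
conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("CREATE TABLE dim_time (time_key INTEGER PRIMARY KEY, quarter TEXT)")
c.execute("CREATE TABLE dim_item (item_key INTEGER PRIMARY KEY, item_name TEXT)")
c.execute("""CREATE TABLE fact_sales (
                 time_key INTEGER, item_key INTEGER,
                 rupees_sold REAL, units_sold INTEGER)""")
c.executemany("INSERT INTO dim_time VALUES (?, ?)", [(1, "Q1"), (2, "Q2")])
c.executemany("INSERT INTO dim_item VALUES (?, ?)", [(1, "Phone"), (2, "Fridge")])
c.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)",
              [(1, 1, 2045.0, 10), (2, 1, 2650.0, 12), (1, 2, 900.0, 3)])

# Aggregate query: total sales per quarter, joining fact to dimension.
rows = c.execute("""SELECT t.quarter, SUM(f.rupees_sold)
                    FROM fact_sales f JOIN dim_time t USING (time_key)
                    GROUP BY t.quarter ORDER BY t.quarter""").fetchall()
print(rows)   # [('Q1', 2945.0), ('Q2', 2650.0)]
```

Each additional dimension table (branch, location) would be joined in the same way, and each GROUP BY combination corresponds to one cuboid of the data cube discussed below.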
Data Mining (DM): Data mining has been broadly utilized and accepted in business and production since the 1990s [18]. Currently, data mining is used not only in business but also in many areas of supply chain and logistics engineering; examples include demand-forecasting system modeling, SC improvement roadmap rule extraction, quality assurance, scheduling, and decision support systems. Data mining techniques can normally be categorized into four types: association rules, clustering, classification, and prediction. At the turn of the century, such decision-making techniques were used in production management to choose suitable and agile solutions in real production. Data warehouse systems allow for the integration of a variety of application systems and support information processing by providing a solid platform of consolidated, historical data for analysis.
Performance Measurement (PM): In other research streams, the PM context [19] most commonly comprises multi-criteria decision analysis (MCDA) techniques, classified as hierarchical techniques, deployment approaches, scoring methods and objective programming. For example, performance improvement of freight logistics hub selection in Thailand was developed via coordinated simulation. K. A. Associates [20] found that PM among collaborative SC networks is vital for management. There have been many attempts to deploy and explore AI and data mining techniques to complement the typical techniques in optimizing PM in SCM with a better development roadmap.
Optimizing Aggregations: Viewing aggregation as an extension of duplicate-eliminating (distinct) projection provides very useful intuition for reasoning about aggregation operators in query trees. Rewrite rules for duplicate-eliminating projection can often be used as building blocks to derive rules for the more complex case of aggregation. Beyond this intuition, modeling both with one operator also makes sense from an implementation point of view: in existing query optimizers, aggregations and duplicate-eliminating projections are typically implemented in the same module. A set of query rewrite rules is presented for moving aggregation operators in a query tree. Other authors have previously given rewrite rules for pulling aggregations up a query tree and for pushing aggregations down a query tree [21]. This work unifies their results in a single intuitive framework, from which more powerful rewrite rules can be derived: new rules for pushing aggregation operators past selection conditions (and vice versa), showing how selection conditions with inequality comparisons can cause aggregate functions to be introduced into or removed from a query tree; rules for coalescing multiple aggregation operators in a query tree into a single aggregation operator; and, conversely, rules for splitting a single aggregation operator into two operators.
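The push-down idea above can be demonstrated on a small example; this sketch uses SQLite and invented data to show that aggregating the large table before the join returns the same result as aggregating after it, when the join key determines the group:

```python
import sqlite3

# Sketch of aggregation push-down: pre-aggregating `history` on the
# join key before joining to `item` is equivalent to aggregating
# after the join. Data is invented for illustration.
conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("CREATE TABLE history (item_key INTEGER, amount REAL)")
c.execute("CREATE TABLE item (item_key INTEGER PRIMARY KEY, name TEXT)")
c.executemany("INSERT INTO history VALUES (?, ?)",
              [(1, 10.0), (1, 20.0), (2, 5.0), (2, 7.0), (2, 8.0)])
c.executemany("INSERT INTO item VALUES (?, ?)", [(1, "Phone"), (2, "Fridge")])

# Plan A: aggregate after the join (the naive plan).
after = c.execute("""SELECT i.name, SUM(h.amount) FROM history h
                     JOIN item i USING (item_key)
                     GROUP BY i.name ORDER BY i.name""").fetchall()

# Plan B: aggregation pushed below the join.
before = c.execute("""SELECT i.name, s.total FROM
                      (SELECT item_key, SUM(amount) AS total
                       FROM history GROUP BY item_key) s
                      JOIN item i USING (item_key)
                      ORDER BY i.name""").fetchall()
assert after == before
print(after)
```

When the history table is large, plan B joins far fewer rows, which is exactly the performance gain the rewrite rules aim to expose to the optimizer.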
Autoregressive Integrated Moving Average (ARIMA): The ARIMA model was introduced by Box and Jenkins (hence also known as the Box-Jenkins model) in the 1960s for forecasting a variable [22]. ARIMA is an extrapolation method for forecasting and, like any other such method, requires only the historical time series of the variable being forecast. Among extrapolation methods it is one of the most sophisticated: it incorporates the features of all such methods, does not require the investigator to choose initial values of any variable or values of various parameters a priori, and is robust enough to handle any data pattern. As one would expect, it is quite a difficult model to develop and apply, as it involves transformation of the variable, identification of the model, estimation through non-linear methods, verification of the model and derivation of forecasts.
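The Box-Jenkins steps listed above (transformation/differencing, identification, estimation, forecasting) can be illustrated at toy scale; this hand-rolled ARIMA(1,1,0)-style sketch is only an illustration of the idea, not a full Box-Jenkins procedure, and the series is invented:

```python
# Sketch: difference the series once to remove trend, fit an AR(1)
# to the differences by ordinary least squares, forecast the next
# difference, then undo the differencing.

def arima_110_forecast(series):
    diffs = [b - a for a, b in zip(series, series[1:])]
    x, y = diffs[:-1], diffs[1:]            # lagged pairs of differences
    mx, my = sum(x) / len(x), sum(y) / len(y)
    # AR(1) coefficient by least squares on the lagged differences.
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    phi = num / den if den else 0.0
    next_diff = my + phi * (diffs[-1] - mx)
    return series[-1] + next_diff            # undo the differencing

history = [100, 103, 105, 109, 112, 114, 118]
print(f"next value forecast: {arima_110_forecast(history):.2f}")
```

A production model would also identify the model order from autocorrelation plots and verify residuals, which this sketch deliberately skips.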
Analytics and Performance Management
Framework: This framework defines the people, processes
and technologies [23] that need to be integrated and aligned to
take a more strategic approach to business intelligence (BI),
analytics and performance management (PM) initiatives.
• Most organizations use a combination of vendors, products
and services to provide BI, analytics and PM solutions.
• Successful managers recognize the diversity and interrelationships of the analytic processes within the enterprise and can address the needs of a diverse set of users without creating silos.
• A strategic view requires defining the business and decision
processes, the analytical processes, as well as the processes that
define the information infrastructure independently from the
technology.
• The technology and the complexity of skills associated with the strategic use of BI, analytics and PM increase dramatically as the scope of the initiative widens across multiple business processes.
• There is no single or right instantiation of the framework;
different configurations can be supported by the framework
based on business objectives and constraints.
Proposals:
• Use this framework to develop a strategy that surfaces key decisions, integration points, gaps, overlaps and biases that business and program managers may not otherwise have prepared for.
• A portfolio of BI, analytics and PM technologies will be needed to meet the diverse requirements of a large organization.
• Seek advice from program management specialists to balance investments across multiple projects, and consider bringing BI, analytics and PM initiatives within a formal program management framework.
Objective and Methodology
The purpose of this research is to develop a BI system module for project evaluation: an online project evaluation module used to evaluate Customer Relationship Management (CRM) and Supply Chain Management (SCM).
A Proposed Project Evaluation Module: We introduce an online project evaluation module used to evaluate Customer Relationship Management (CRM), with a prime focus on the user experience of driving CRM and Supply Chain Management (SCM) systems. These systems are used to coordinate the movement of products and services from suppliers to customers (including manufacturers, wholesalers, and retailers). They are also used to manage demand, warehouses, trade logistics, transportation, and other issues concerning facilities and the movement and transformation of materials on their way to customers. Components of SCM include supply chain optimization and supply chain event management.
SCM also comprises warehouse management, radio frequency identification (RFID), and transportation management. So far we have advanced this work up to the level of setting the priorities.

Table-1: Weighted Average (WA) Rating Values.
Full Support: requires the vendor to have a WA of 95% or above for the selected module (value 95)
Average: requires the vendor to have a WA of 75% or above for the selected module (value 75)
Basic Features: requires the vendor to have a WA of 55% or above for the selected module (value 55)
Minimal: requires the vendor to have a WA of 25% or above for the selected module (value 25)
Functionality of the project analysis module: Assign priorities for the modules as below; these priorities are relative to each other. To further refine the priority distribution for any module, click on the module name. If any priority is set as "Critical," the user will have the option to specify a minimum requirement for that module. By assigning minimum requirements, we can identify which vendors fail to meet minimum acceptable ratings. Minimum requirements are described below.
Table-2: Features and Functions Levels.
Definition of Minimum Requirements: Minimum requirements provide an efficient method for highlighting vendors who fail to meet key needs. Any vendor who cannot fulfill the minimum requirements will have a "disqualified criteria" indicator on the results screen. There are two ranges of minimum requirements, based on the level of depth in the knowledge tree.
For modules: modules link to criteria but are not rated themselves. The ratings below are based on the weighted scores of the criteria within the module; thus, the score is the minimum weighted average for that module.
For features and functions (the lowest level of the hierarchy): features and functions appear as the detailed levels of the hierarchy and do not link to any sub-criteria. The ratings below are based on actual vendor capabilities for the selected function; thus, the score value is the minimum level of support for that function.
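The scoring rules above can be sketched directly; the vendor names, criterion weights and ratings below are hypothetical, with 75 ("Average" in Table-1) taken as the minimum requirement:

```python
# Sketch of the module scoring rule: a module's score is the weighted
# average of its criterion ratings (rated on the Table-2 support
# levels), and a vendor below the module's minimum requirement is
# flagged as disqualified. All vendors and numbers are invented.

def weighted_average(ratings, weights):
    return sum(r * w for r, w in zip(ratings, weights)) / sum(weights)

minimum_required = 75          # hypothetical: the "Average" threshold
vendors = {
    "VendorX": {"ratings": [100, 95, 70], "weights": [0.5, 0.3, 0.2]},
    "VendorY": {"ratings": [70, 40, 60],  "weights": [0.5, 0.3, 0.2]},
}

results = {}
for name, v in vendors.items():
    score = weighted_average(v["ratings"], v["weights"])
    results[name] = (round(score, 1), score >= minimum_required)

for name, (score, ok) in sorted(results.items()):
    print(f"{name}: WA={score} {'qualified' if ok else 'disqualified'}")
```

The weights would come from the user-assigned module priorities; the ratings from the vendor capability levels of Table-2.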
Building Dimension Tables: A dimension table, for example the table item, contains the attributes item name, brand, and type. Consider a simple 2-D data cube which is, in fact, a table or spreadsheet of sales data for items sold per quarter in the city of Surat. Table-3 represents such a 2-D view of sales data for the XYZ company according to the dimensions time and item, where the sales are from branches located in the city of Surat. The measure displayed is rupees sold.
Figure.2: Lattice of cuboids, making up a 4-D data cube.
Figure.2 represents a lattice of cuboids making up a 4-D data cube for the dimensions time, item, location, and supplier. Each cuboid represents a different degree of summarization. Stars, snowflakes, and fact constellations are schemas for multidimensional databases; the entity-relationship data model is commonly used in the design of relational databases,
A multidimensional data model: A compromise between the star schema and the snowflake schema is to adopt a mixed schema where only the very large dimension tables are normalized. Normalizing large dimension tables saves storage space, while keeping small dimension tables unnormalized may reduce the cost and performance degradation due to joins on multiple dimension tables. Doing both may lead to an overall performance gain. However, careful performance tuning may be required to determine which dimension tables should be normalized and split into multiple tables. Fact constellation: sophisticated applications may require multiple fact tables to share dimension tables. This kind of schema can be viewed as a collection of stars, and hence is called a galaxy schema or a fact constellation.
Table-2 (Features and Functions Levels): rating values.
Supported as Delivered ("Out of the Box"): 100
Supported by Partner (via an integrated partner solution): 95
Add-on Partner (via add-on products offered by partners): 80
Modification (via modification: screen configurations, reports, GUI tailoring, etc.): 70
Third Party Support (via a third-party solution): 60
Customization (via customization, i.e., changes to source code): 40
Future Release (supported in a future release): 20
Not Supported / Unrated: 0

Table-3: A 2-D view of sales data for the XYZ company (rupees sold, by time and item type).
Time (quarter) | Home appliances | Fridge | Phone | Computer
Q1 | 600K | 900K | 2045K | 565K
Q2 | 800K | 1002K | 2650K | 570K
Q3 | 905K | 1020K | 4074K | 600K
Q4 | 950K | 1200K | 5085K | 700K
Figure.3: Fact constellation schema of a data warehouse for
sales and shipping.
Examples of defining star, snowflake, and fact constellation schemas: Just as a relational query language like SQL can be used to specify relational queries [24], a data mining query language can be used to specify data mining tasks. In particular, we examine an SQL-based data mining query language called DMQL, which contains language primitives for defining data warehouses and data marts. Language primitives for specifying other data mining tasks, such as the mining of concept/class descriptions, associations, classifications, and so on, are introduced later.
Data warehouses and data marts can be defined using two language primitives, one for cube definition and one for dimension definition. The cube definition statement has the following syntax:

define cube <cube_name> [<dimension_list>]: <measure_list>

The dimension definition statement has the following syntax:

define dimension <dimension_name> as (<attribute_or_subdimension_list>)
The star, snowflake and fact constellation schemas of Examples 2.1 to 2.3 can be defined using DMQL (DMQL keywords are displayed in sans serif font). Finally, a fact constellation schema can be defined as a set of interconnected cubes. Example 2.6: the fact constellation schema of Example 2.3 and Figure.3 is defined in DMQL as follows.

define cube sales [time, item, branch, location]:
    rupees_sold = sum(sales_in_rupees), units_sold = count(*)
define dimension time as (time_key, day, day_of_week, month, quarter, year)
define dimension item as (item_key, item_name, brand_type, supplier_type)
define dimension branch as (branch_key, branch_name, branch_type)
define dimension location as (location_key, street, city, state, country)

define cube shipping [time, item, shipper, from_location, to_location]:
    rupees_cost = sum(cost_in_rupees), units_shipped = count(*)
define dimension time as time in cube sales
define dimension item as item in cube sales
define dimension shipper as (shipper_key, shipper_name, location as location in cube sales, shipper_type)
define dimension from_location as location in cube sales
define dimension to_location as location in cube sales
A define cube statement is used to define data cubes for
sales and shipping, corresponding to the two fact tables of
the schema of Example 2.3. Note that the time, item, and
location dimensions of the sales cube are shared with the
shipping cube. This is indicated for the time dimension, for
example, by specifying the statement "define dimension time
as time in cube sales" under the define cube statement for
shipping. Instead of having users or experts explicitly define
data cube dimensions, dimensions can be automatically
generated or adjusted based on the examination of data
distributions. DMQL primitives for specifying such automatic
generation or adjustment are also possible.
Data preparation and analysis of a dyadic relation:
After gathering the questionnaire data on the relationship
between the enterprise and its direct customer, following the
model of R. Derrouiche et al. [25] for analyzing a dyadic
relation and evaluating its performance, the attribute ranking
algorithm using information gain based on ranker search was
applied to the two types of relationships.
Sub-KPI impact results from the attribute ranking
algorithm: These results are shown in Figure.4. In addition,
the questionnaire from R. Derrouiche et al. is able to
characterize a collaborative relation between two or
more partners in a supply chain and to evaluate their related
performances accordingly. The former level is the common
perspective, comprising relation climate, relation structure,
IT used and relation lifecycle; the latter level consists of the
perceived satisfaction with the relation and its perceived
effectiveness. These represent the macro view of the model.
For example, the macro view of relation climate has six
micro views, and each micro view also has two sub-micro
views.
Next, data cleaning and input-output formatting following
the C&RT and K-Means structure were conducted to prepare
the learning data.
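The attribute ranking step named above can be sketched as follows. This is a minimal, self-contained illustration of ranking attributes by information gain; the survey attributes, class labels and values are invented for illustration and are not taken from the study's data.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(attr_values, labels):
    """Entropy of the class labels minus the entropy after splitting on the attribute."""
    n = len(labels)
    gain = entropy(labels)
    for v in set(attr_values):
        subset = [l for a, l in zip(attr_values, labels) if a == v]
        gain -= (len(subset) / n) * entropy(subset)
    return gain

# Toy data: two hypothetical sub-KPI attributes vs. a perceived-satisfaction class.
labels = ["high", "high", "low", "low"]
attrs = {
    "trust": ["good", "good", "poor", "poor"],  # perfectly predictive
    "itc":   ["yes", "no", "yes", "no"],        # uninformative
}

# Rank attributes by descending information gain (the "ranker search" idea).
ranking = sorted(attrs, key=lambda a: information_gain(attrs[a], labels), reverse=True)
print(ranking)  # ['trust', 'itc']
```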
Figure.4: The sub-KPI impact results from the attribute
ranking algorithm using information gain, based on ranker
search.
The primary impact of each sub-KPI (i) for each relationship
type (j) was calculated from equation (1), and weight
definition was then performed accordingly.
(1)
Preparation algorithm for computing the ranking of
time series: A simple pattern-based approach has been
used in this research work to compare the time series data
[26]. The ranking of time series is done through automated
sorting of patterns. To sort the time series values,
the spread of each series is computed and compared with the
spread of all the series: a large variance suggests a very
different development, while a small variance indicates a
similar development pattern. Since the values of the series
differ widely in scale, the series cannot be compared
directly. To make the series comparable, each series is
normalized by dividing its individual values by the series
mean. Once the data are normalized, the sum of squared
differences between the individual values of each time series
and the overall mean vector is computed, which yields a
scalar value for each series. Ranking these scalars provides
statistically valid ranks for the time series.
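The normalize-then-rank procedure above can be sketched as follows, assuming the scalar score is the sum of squared differences from the overall mean vector; the series names and values are illustrative only.

```python
import statistics

# Illustrative KPI series on very different scales.
series = {
    "supplier_A": [100.0, 110.0, 120.0, 130.0],
    "supplier_B": [10.0, 10.5, 11.0, 11.5],
    "supplier_C": [50.0, 40.0, 60.0, 30.0],
}

# Normalize: divide each value by its series mean so the scales become comparable.
norm = {k: [v / statistics.mean(vals) for v in vals] for k, vals in series.items()}

# Overall mean vector across the normalized series (one mean per time point).
n = len(next(iter(norm.values())))
mean_vec = [statistics.mean(s[i] for s in norm.values()) for i in range(n)]

# Scalar score per series: sum of squared deviations from the mean vector.
scores = {k: sum((s[i] - mean_vec[i]) ** 2 for i in range(n)) for k, s in norm.items()}

# Smallest deviation = most similar development pattern.
ranked = sorted(scores, key=scores.get)
print(ranked)  # ['supplier_B', 'supplier_A', 'supplier_C']
```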
Forecasting algorithm for the KPI time series: A
standard approach to model forecasting is to use techniques
such as ARIMA or neural networks. However, several
problems limit their usefulness in practical analysis. Some of
them are:
• Simple models have proved to be effective in replicating
complex models such as ARIMA in time series forecasting
[27].
• Non-parametric models such as kernel regression, though
simple, require human evaluation, which limits their usage in
a dynamic setting such as interactive analytics.
• Though non-parametric methods such as neural networks
show promising results, their computational complexity is
prohibitive.
The prediction of the future value of a KPI is an important
function in analytics. In this research we adopt the
models proposed by Toskos and integrate them with my
analytical system. The equation used is the k-th
moving average:
ŷ(t+1) = (1/k) [y(t) + y(t−1) + … + y(t−k+1)]  (2)
The details of the derivations and a comparison of
the effectiveness of these models with standard as well as
best ARIMA models are discussed in Toskos [27]. For
brevity, the proposed set of models using the k-th series
is referred to as KMV (K-th series Moving average
Variants). The actual algorithm is given below in Figure.5.
Figure.5: Prediction with KMV model.
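As a minimal sketch of the idea behind the KMV family (the exact Toskos weighting variants are not reproduced here), a plain k-th moving average forecast predicts each next value as the mean of the last k observations:

```python
def kth_moving_average_forecast(values, k, horizon=1):
    """Iteratively forecast `horizon` future points with a k-th moving average."""
    history = list(values)
    for _ in range(horizon):
        # Next value = mean of the most recent k values (including prior forecasts).
        history.append(sum(history[-k:]) / k)
    return history[len(values):]

# Illustrative KPI history (invented numbers).
kpi = [10.0, 12.0, 11.0, 13.0, 12.0]
print(kth_moving_average_forecast(kpi, k=3, horizon=2))  # [12.0, 12.333333333333334]
```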
This analysis can be done in two modes: manual and
automatic. In both modes the process remains the same;
only the space in which the analysis is carried out differs.
In manual mode, the user selects the dimensions of interest,
whereas in the automatic mode a predefined structure for
hierarchical analysis is followed. The actual process consists
of:
• Selection of dimensional values and facts
• Forecasting the KPIs using the KMV model
• Ranking the time series of predicted values
Results and Discussion
Outcome of the proposed project evaluation module: As
explained in the methodology, progress has been made in
setting the priorities; it is further intended to advance in
some of the areas in the SCM and project estimation modules.
Setting a priority to "Very Important" from "Must have":
Project analysis from the functionality module is done in the
following manner. For example, if a module's priority is set
as "Very Important", there will then be an option to specify a
minimum requirement for that module. By assigning minimum
requirements, one can identify which vendors fail to meet the
minimum acceptable ratings.
Conclusion
Criteria Selection and Module Rating Value: The
graphs below were developed to display the standard
scores of the vendor solutions, as determined through the CRM
model. The graphs reflect functional, technical, and
business criteria. This comparison is based on average
weights and priorities.
Module | Total | Module Rating
Sales Force Automation: Account and Contact Management, Activity Management, Contract Management and Creation, Internet Sales, Opportunity Management, Partner Management, Project and Team Management, Quotes and Proposals, Sales Forecasting Management, Sales Lead Management, Sales Process Management, Territory Management, Team Selling, and Member Reassignment | 246 | 95.85
Marketing Automation: Campaign Execution and Management, Campaign Management, Campaign Planning, Collateral and Brand Management, Lead and List Management, Marketing Resource Management | 224 | 96.55
Customer Service and Support: Assigning Cases, Creating and Maintaining a Solutions Knowledge Base, Creating New Cases (Service Requests), Customer Self-Service Portal, Escalating Unresolved Cases, Solving and Closing Cases | 56 | 97.11
Analytics and Reporting: Analytics, Reporting | 43 | 94.00
Product Technology: Business Functionality, Ongoing CRM Solution Support, Technical Functionality | 619 | 89.98
Table - 4: Criteria selection and Module Rating Value.
From the project analysis modules for functionality
analysis, similar to CRM, further efforts are underway to
build a comfortable and sophisticated BI system that
enables analysis in the areas of Asset Management,
Data Management, Enterprise Resource Planning (ERP), and
Project and Process Management.
Graph - 3: Standard Scores of the Vendor Solutions
References
[01] X.G. Ming et al., "Collaborative process planning and
manufacturing in product lifecycle management", Computers
in Industry 59 (2008) 154–166. www.elsevier.com/locate/compind
[02]International Journal of Managing Value and Supply
Chains (IJMVSC) Vol. 3, No. 4, December 2012
[03] Alotaibi, K.F., Fawcett, S. E. and Birou, L. (1993)
“Advancing Competitive Position Through Global And JIT
Sourcing: Review And Directions”, 3(1/2):4-37
[04]Lee, H. L. & Billington, C. (1995). “The Evolution of
Supply-Chain-Management Models and Practice at Hewlett-
Packard”, Interfaces, 25:42-63. Vol. 3, No. 4, December
2012
[05] Gupta, A. and Maranas, C. D. (2003) “Managing
demand uncertainty in supply chain planning”, Computer
and Chemical Engineering, 27:1219-1227.
International Journal of Advanced Technology & Engineering Research (IJATER)
www.ijater.com
ISSN No: 2250-3536 Volume 4, Issue 1, Jan. 2014 18
[06] Yusuf, Y. Y., Gunsekaran, A. Adeleye, E. O. &
Sivayoganathan, K. (2004) “Agile supply chain capabilities:
Determinants of competitive objectives”, 1(59):379-392.
[07] A. Gupta, V. Harinarayan and D. Quass, "Aggregate-Query
Processing in Data Warehousing Environments", IBM Almaden
Research Center.
[08] Sanchez-Rodrigues, V., Potter, A. & Naim, M. M.
(2010) “Evaluating the causes of uncertainty in logistics
operations”, International Journal of Logistics Management,
21(1):45 – 64.
[09] Saikouk, T., Zouaghi, I. & Spalanzani, A. (2012)
“Mitigating Supply Chain System Entropy by the
Implementation of RFID”, CERAG, Vienna, Austria.
[10] Grenci, R. T. & Watts, C. A. (2007) “Maximizing
customer value via mass customized econsumer services”,
Business horizons, 50 (2):123-132.
[11] Usability of CRM Systems as Collaboration
Infrastructures in Business Networks Olaf Reinhold,
Germany [email protected], [email protected]
leipzig.de
[12] Earl M (2001) Knowledge management strategies:
toward a taxonomy. Journal of Management Information
Systems 18(1), 215–233.
[13] B.J. Angerhofer, M.C. Angelides, A model and a
performance measurement system for collaborative supply
chains, Decision Support Systems 42 (2006) 283–301.
[14] F.T.S. Chan, H.J. Qi, An innovative performance
measurement method for supply chain management, Supply
Chain Management: An International Journal 8 (3–4) (2003)
209–223.
[15] A. Berson and S. J. Smith. Data Warehousing, Data
Mining, and OLAP. New York: McGraw-Hill, 1997.
[16] S. Chaudhuri and U. Dayal. An overview of data
warehousing and OLAP technology. ACM SIGMOD
Record, 1997.
[17] P. Deshpande, J. Naughton, K. Ramasamy, A. Shukla,
K. Tufte, and Y. Zhao. Cubing algorithms, storage
estimation, and storage and processing alternatives for olap,
1997.
[18] J. Han and M. Kamber, Data mining: concepts and
techniques. Morgan Kaufmann Publishers, 2001.
[19] P.C. Fishburn, Methods of estimating additive utilities,
Management Science, vol. 13(7), pp. 435-453, 1967.
[20] K. A. Associates, A Guidebook for Developing a
Transit Performance-measurement System. Washington,
DC., 2003.
[21] W. P. Yan and P. A. Larson. Performing Group-By
Before Join. In ZCDE, 1994.
[22] Box, G.E.P., and G. M. Jenkins. 1970. Time series
analysis: forecasting and control. Holden Day, San Francisco,
CA.
[23] Gartner RAS Core Research Note G00166512, Gartner's
Business Intelligence, Analytics and Performance
Management Framework, 19 October 2009.
[24] E. Thomsen. OLAP Solutions: Building
Multidimensional Information Systems. John Wiley & Sons,
1997.
[25] Derrouiche, R. Neubert G. and Bouras A., Supply chain
management: a framework to characterize the
collaborative strategies, Vol. 21, Issue 4, June 2008 , pp.
426-439.
[26] KPI based analytics in e-Governance – A prototype
using segmentation and trend analysis M.N.Rao, Jay B.
Simha
[27] Toskos C.P., "K-th, weighted and exponential moving
averages for time series forecasting models", European
Journal of Pure and Applied Mathematics,
Vol. 3, No. 3, 2010, 406-416.
[28] Min, D.G., and Park, J. D., 2003, “Development of a
Performance-Based Supply Chain Management System,” IE
Interface, 16(3), 167-173.
[29] Kim, K. J., and Dennis K. J. Lin, 2000, “Simultaneous
optimization of mechanical properties of steel by
maximizing exponential desirability functions,” 49(3), 211-
325.
FRAME WORK FOR PROJECT EVALUATION SYSTEM
USING BUSINESS INTELLIGENCE OF
COLLABORATIVE SCM
Anantha Keshava Murthy [1], R Venkataram [2], S G Gopalakrishna [3]
[1] Associate Professor, Dept of ME, EPCET, Bangalore, India;
[2] Director Research, EPCET, Bangalore, India;
[3] Principal, NCET, Bangalore, India
Abstract
Major corporate and manufacturing organizations
are facing challenges in dealing with large numbers of
activities involving assets, people and customers. Data
management to effectively integrate enterprise applications in
real time poses a new challenge. To learn from the past and
forecast the future, many manufacturing organizations are
adopting Business Intelligence (BI) tools and systems,
recognizing the importance of forecasting business trends and
evolving suitable strategies in real time.
Modern enterprises [01] are facing ever increasing challenges:
shorter product lifecycles, increased outsourcing, mass
customization demands, more complex products,
geographically dispersed design teams, inventories subject to
rapid depreciation, and rapid fulfillment needs. To tackle
these challenges effectively, new industrial capabilities are
required in order to obtain competitive advantages in today's
Internet economy. Geographically scattered design teams and
supply chain partners need to collaboratively design products
on a virtual basis; static designs need to be replaced by mass
customization, often using predefined modules or building
blocks to rapidly configure new product platforms that can be
flexibly managed through their lifecycle; and partners need to
exchange and control product information and to perform
real-time program/project management.
The present work involves the development of an
integrated predictive collaboration performance evaluation
framework based on multi-criteria decision making for
Business to Business (B2B). This has been achieved by using
the SCOR model. Secondly, a method for performance
management of the collaboration process, covering monitoring
and reporting, has been developed. This collaborative
performance indicator combines the KPIs of individual
companies, including the development of cKPIs and real-time
process performance analysis. Finally, it is proposed to
develop a BI system module for online project evaluation. So
far, a framework has been developed for an integrated (B2B)
predictive collaboration performance evaluation module
which enables analysis of the results at the macro level using
the C&RT decision tree model and performance clustering
based on K-Means construction. The model developed was
deployed at a medium scale machinery manufacturing
company, and the domain users were able to prepare their
collaborative performance data according to the input-output
model format, feed it into the model, pose performance
improvement scenarios, analyze the results in terms of real
usage feasibility and how to take advantage of the KPI
sensitivity analysis related to their expected collaborative
performance, analyze the impact of sub-KPIs on their
expected collaborative performance, and form sub-KPI
improvement plans based on the model results and their long
term strategy.
One of the major approaches to analytics is to identify
an impending change in the trend of any Key Performance
Indicator (KPI) before it accelerates. Such early warning
systems are very important and useful in various scenarios
such as vendor management for services and service quality
enhancement. However, using only a scalar value to compare
multiple series, rank them and project the series into the
future is not appropriate, even though it is practically
possible.
Introduction
Recently, many firms are exposed to a sophisticated
environment constituted by open markets [02],
globalization of sourcing, intensive use of information
technologies, and decreasing product lifecycles. Moreover,
such complexity is intensified by consumers who are
becoming increasingly demanding in terms of product quality
and service. Globalization has increased firms'
internationalization, shifting them from local to global markets
with increasing competitiveness [03]. Furthermore, the
dynamic environment (consisting of competitors, suppliers'
capacity, product variability and customers) complicates the
business process. To that end, many enterprises are often
forced to cooperate within a Supply Chain (SC) by
forming a virtual enterprise, which is a network of agents
typically consisting of suppliers, manufacturers, distributors,
retailers and consumers. Previous research [04] posits that a SC
can be considered as a network of autonomous and semi-
autonomous business entities associated with one or more
families of related products.
The ever changing market, with prevailing volatility in the
business environment and constantly shifting and increasing
customer expectations, causes two types of timeframe-based
uncertainties that can affect the system: (i) short term
uncertainties and (ii) long term uncertainties. Short term
uncertainties include day-to-day processing variations, such as
cancelled/rushed orders and equipment failure. Long term
uncertainties include material and product unit price
fluctuations and seasonal demand variations. Understanding
uncertainties can lead to planning decisions through which a
company can safeguard against threats and avoid the effects of
uncertainties. As a result, any failure in recognizing demand
fluctuations often holds unpredictable consequences such as
losing customers, decreases in market share and increases in
the costs associated with holding inventories [05].
In order to achieve competitive advantage,
manufacturers are forced to rely on agile supply chain
capabilities in the contemporary scenario of changing customer
requirements and expectations as well as changing
technological requirements. SC integration is often considered
a vital tool to achieve competitive advantage [06]. Previous
research has shown implementation difficulties due to certain
factors such as lack of trust among partners and sole
dependence on technology. People, Processes and Technology
should be considered in a BI, analytics and PM initiative from
the perspective of three groups of participants: Analysts, Users
and IT staff.
Analysts define and explore business models, mine and analyze
data and events, produce reports and dashboards, provide
insights into the organization's performance and support the
decision-making processes.
Users "consume" the information, analysis and insight
produced by applications and tools to make decisions or take
other actions that help the enterprise achieve its goals. Some
users may be more than just consumers, such as the top
executives who help craft the performance metric
framework. Users may also include operational workers, in
addition to executives and managers. The users determine how
well BI, analytics and PM initiatives succeed, so their
requirements should be considered from several perspectives.
IT Enablers help design, build and maintain the systems
that users and analysts use (see Note 1). Traditional IT roles
such as project managers, data and system architects, and
developers remain important. But BI, analytics and PM
initiatives require more than simply building applications to fit
a list of requirements: those applications also have to deliver
business results; users have to want to use them; and they have
to support analytic, business and decision processes. Thus, IT
enablers need business knowledge and the ability to work
collaboratively outside their traditional area of expertise.
With the growing number of large data warehouses
[07] for decision support applications, efficiently executing
aggregate queries is becoming increasingly important.
Aggregate queries are frequent in decision support
applications, where large history tables often are joined with
other tables and aggregated. Because the tables are large,
better optimization of aggregate queries has the potential to
result in huge performance gains. Unfortunately, aggregation
operators behave differently from standard relational operators
like select, project, and join. Thus, existing rewrite rules for
optimizing queries almost never involve aggregation
operators. To reduce the cost of executing aggregate queries in
a data warehousing environment, frequently used aggregates
are often precomputed and materialized. These materialized
aggregate views are commonly referred to as summary tables.
Summary tables can be used to help answer aggregate queries
other than the view they represent, potentially resulting in
huge performance gains. However, no algorithms exist for
replacing base relations in aggregate queries with summary
tables so the full potential of using summary tables to help
answer aggregate queries has not been realized.
In this research work an attempt has been made to
develop an integrated project evaluation module. The idea
was to have a systematic means of predicting future
collaborative performance using a crosstab query based on a
moving average/regression forecasting model.
Background-Literature Review
This section explores some basic concepts
and the literature most closely related and essential to this
work, such as: supply chain management, the bullwhip effect,
collaborative CRM processes, knowledge management, key
findings on analytics and performance management
frameworks, an overview of KPI analysis methodology, data
warehousing and analytical processing, data mining,
performance measurement, optimizing aggregations,
Autoregressive Integrated Moving Average models, the
Granger causality test as an exploratory tool, etc.
Supply Chain Management: Supply Chain Management
focuses on managing internal aspects of the supply chain.
SCM is concerned with the integrated process of design,
management and control of the supply chain for the purpose
of providing business value to organizations by lowering cost
and enhancing customer reachability. Further, SCM is the
management of upstream and downstream relationships
among suppliers to deliver superior value at less cost to the
supply chain as a whole. Many factors such as globalization
and demand uncertainty have forced companies to
concentrate their efforts on their core business [08], a process
which leads many companies to outsource less profitable
activities so that they gain cost savings as well as increased
focus on core business activities. As a result, most of these
companies have opted for specialization and differentiation
strategies. Moreover, many companies are attempting to adopt
new business models around the concept of networks in order
to cope with such complexity in planning and prediction [09].
The new changes in the business environment have shifted the
concentration of many companies towards adopting mass
customization instead of mass production, and have drawn the
attention of many companies to focusing their effort on
markets and customer value rather than on the product [10].
A single company often cannot satisfy all customer
requirements in the face of fast-developing technologies, a
variety of product and service requirements and shortened
product lifecycles [02]. Such new business environments have
made companies look to the supply chain as an 'extended
enterprise' to meet the expectations of end-customers.
Bullwhip Effect: In a Supply Chain (SC), the uncertain
market demands of individual firms are usually driven by
macro-level, industry-related or economy-related
environmental factors. Demand forecasts are individually
managed, causing the SC to become inefficient in three
ways: (i) supply chain partners invest repeatedly in acquiring
highly correlated demand information, which increases the
overall cost of demand forecasting; (ii) the quality of individual
forecasts is generally sub-optimal, since individual companies
have only limited access to information sources and limited
ability to process them, resulting in less accurate forecasts and
inefficient decision making; (iii) firms vary in their capability
to produce good quality forecasts.
Collaborative CRM Processes: CRM entails all aspects
of the relationships a company has with its customers [11],
from initial contact, presales and sales to after-sales, service
and support. Collaboration between firms can improve the
involved intra-organizational business processes. The
identification and definition of collaborative CRM core
processes is still ambiguous. Collaborative business processes
found in the literature include marketing campaigns, sales
management, service management, complaint management,
retention management, customer scoring, lead management,
customer profiling, customer segmentation, product
development, feedback and knowledge management.
Knowledge Management: The business benefits of these
investments included transactional efficiency, internal process
integration, back-office process automation, transactional
status visibility, and reduced information sharing costs. While
some enterprises started thinking in the direction of
acquiring and preserving knowledge, the primary
motivation for many of these investments was to achieve
better control over day-to-day operations. The concept of
knowledge management: Just like knowledge itself,
knowledge management is difficult to define [12]. However,
it is believed that defining what is understood by knowledge
management may be somewhat simpler than defining
knowledge on its own. The idea of 'management' gives us a
starting point when considering, for example, the activities
that make it up, explaining the processes of creation and
transfer, or showing its main goals and objectives, without the
need to define what is understood by knowledge.
Consequently, the literature contains more ideas and
definitions on knowledge management than on knowledge
itself, although these are not always clear, as there are
numerous terms connected with the concept.
KPI Analysis Methodology: To improve supply chain
management performance in a systematic way [13], I propose a
methodology of analyzing iterative KPI accomplishments. The
framework consists of the following steps (see Figure.1). First,
the managers identify and define KPIs and their relationships.
Then, the accomplishment costs of these KPIs are estimated,
and their dependencies are surveyed. Optimization calculation
(e.g., structure analysis, computer simulation) is used to
estimate the convergence of the total KPI accomplishment cost,
and to find the critical KPIs and their improvement patterns.
Then the performance management strategy can be adjusted by
interpreting the analysis results. The following sections discuss
the details of this methodology. Identifying KPIs and modeling
their relationships: managers in supply chains usually identify
KPIs according to their objective requirements and practical
experience, but to get a systematic or balanced performance
measurement they often turn [14] to widely recognized
models, such as BSC and SCOR.
Conventional wisdom tells us a few things about
establishing key performance indicators. It goes something like
this: Determine corporate goals. Identify metrics to grade
progress against those goals. Capture actual data for those
metrics. Jam metrics into scorecards. Jam scorecards down the
throats of employees. Cross fingers. Hope for the best.
Figure.1- A research framework of improving supply chain KPIs
accomplishment
Data Warehouse-Analytical Processing: Construction
of data warehouses involves data cleaning and data integration
[15], which can be viewed as an important pre-processing step
for data mining. Moreover, data warehouses provide analytical
processing tools for the interactive analysis of
multidimensional data of varied granularities, which facilitates
effective data mining. Furthermore, many other data mining
functions, such as classification, prediction, association, and
clustering, can be integrated with analytical processing
operations to enhance interactive mining of knowledge at
multiple levels of abstraction.
Subject-oriented: A data warehouse [16] is organized around
major subjects, such as customer, vendor, product, and sales.
Rather than concentrating on the day-to-day operations and
transaction processing of an organization, a data warehouse
focuses on the modeling and analysis of data for decision
makers. Hence, data warehouses typically provide a simple
and concise view around particular subject issues by excluding
data that are not useful in the decision support process.
Integrated: A data warehouse is usually constructed by
integrating multiple heterogeneous sources, such as relational
databases, flat files, and on-line transaction records. Data
cleaning and data integration techniques are applied to ensure
consistency in naming conventions, encoding structures,
attribute measures, and so on.
A data cube: A data cube [17] allows data to be modeled and
viewed in multiple dimensions. It is defined by dimensions
and facts. In general terms, dimensions are the perspectives or
entities with respect to which an organization wants to keep
records. For example, consider sales data from company XYZ,
stored in a data warehouse that keeps records of the
store's sales with respect to the dimensions time, item, branch,
and location.
Fact table: Sales. (Facts are numerical measures, e.g., sales
amount, number of units sold.) The fact table contains the
names of the facts, or measures, as well as keys to each of the
related dimension tables.
Dimension tables: Time, item, branch and location. (These
dimensions allow the store to keep track of things like
monthly sales of items, and the branches and locations at
which the items were sold.)
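A minimal sketch of such a cube, with an illustrative fact table and a roll-up (group-by) over one dimension; all records and names here are invented for illustration:

```python
from collections import defaultdict

# Fact table: measures (rupees sold, units sold) keyed by the
# time, item, branch and location dimensions.
facts = [
    # (time, item, branch, location, rupees_sold, units_sold)
    ("2014-Q1", "valve", "B1", "Bangalore", 1200.0, 30),
    ("2014-Q1", "pump",  "B1", "Bangalore",  900.0, 10),
    ("2014-Q2", "valve", "B2", "Chennai",    600.0, 15),
]

def roll_up(facts, dim_index):
    """Aggregate both measures over one dimension (like SQL GROUP BY)."""
    totals = defaultdict(lambda: [0.0, 0])
    for row in facts:
        key = row[dim_index]
        totals[key][0] += row[4]  # rupees_sold
        totals[key][1] += row[5]  # units_sold
    return dict(totals)

by_time = roll_up(facts, 0)       # e.g. monthly/quarterly sales
by_location = roll_up(facts, 3)   # e.g. sales per location
print(by_time["2014-Q1"])  # [2100.0, 40]
```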
Data Mining (DM): Data mining was broadly utilized
and accepted in business and production during the 1990s
[18]. Currently, data mining is used not only in business
but also in many different areas of supply chain and
logistics engineering; a few examples are demand forecasting
system modeling, SC improvement roadmap rule extraction,
quality assurance, scheduling, and decision support systems.
Data mining techniques can normally be categorized into
four sorts: association rules, clustering, classification, and
prediction. At the turn of the century, such decision-making
techniques were used in production management to choose
suitable and agile solutions in real production. Data warehouse
systems allow for the integration of a variety of application
systems. They support information processing by providing a
solid platform of consolidated, historical data for analysis.
Performance Measurement (PM): In other research
streams, the PM context [19] comprising multi-criteria
decision attributes (MCDA) is most commonly accepted for
use. The classifications are as follows: hierarchical techniques,
deployment approaches, scoring methods and objective
programming. For example, performance improvement of the
selection of a freight logistics hub in Thailand was developed
by coordinated simulation. K. A. Associates [20] found that
PM among collaborative SC networks is vital for
management. There have been many attempts to deploy and
explore AI and data mining techniques to make up for the
typical techniques in optimizing PM in SCM with a better
development roadmap.
Optimizing Aggregations: Viewing aggregation as an extension of duplicate-eliminating (distinct) projection provides very useful intuition for reasoning about aggregation operators in query trees. Rewrite rules for duplicate-eliminating projection can often be used as building blocks to derive rules for the more complex case of aggregation. In addition to the intuition obtained by viewing aggregation as extended duplicate-eliminating projection, modeling both with one operator makes sense from an implementation point of view: in existing query optimizers, both aggregations and duplicate-eliminating projections are typically implemented in the same module. A set of query rewrite rules is presented for moving aggregation operators in a query tree. Other authors have previously given rewrite rules for pulling aggregations up a query tree and for pushing aggregations down a query tree [21]. The present work unifies their results in a single intuitive framework, and using this framework derives more powerful rewrite rules. New rules are presented for pushing aggregation operators past selection conditions (and vice versa), and it is shown how selection conditions with inequality comparisons can cause aggregate functions to be introduced into or removed from a query tree. Rules are also presented for coalescing multiple aggregation operators in a query tree into a single aggregation operator and, conversely, for splitting a single aggregation operator into two operators.
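The interchange of aggregation and selection described above can be checked on a small example; the table and column names are hypothetical, and the predicate refers only to a grouping column, which is what makes the two orderings equivalent:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (region TEXT, amount REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [("east", 10.0), ("east", 5.0), ("west", 7.0), ("west", 2.0)])

# Selection applied BEFORE aggregation (selection pushed below the aggregate)...
pushed = con.execute(
    "SELECT region, SUM(amount) FROM orders "
    "WHERE region = 'east' GROUP BY region").fetchall()

# ...is equivalent to aggregating first and filtering the groups afterwards,
# because the predicate mentions only the grouping column.
pulled = con.execute(
    "SELECT region, total FROM "
    "(SELECT region, SUM(amount) AS total FROM orders GROUP BY region) "
    "WHERE region = 'east'").fetchall()

assert pushed == pulled  # same result either way
```

Pushing the selection down is usually cheaper, since the aggregate then processes fewer rows; the rewrite rules formalize when such moves are valid.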
Autoregressive Integrated Moving Average (ARIMA): The ARIMA model was introduced by Box and Jenkins (hence also known as the Box-Jenkins model) in the 1960s for forecasting a variable [22]. ARIMA is an extrapolation method for forecasting and, like any other such method, it requires only the historical time series data on the variable being forecast. Among the extrapolation methods, it is one of the most sophisticated, for it incorporates the features of all such methods, does not require the investigator to choose initial values of any variable or values of various parameters a priori, and is robust enough to handle any data pattern. As one would expect, it is quite a difficult model to develop and apply, as it involves transformation of the variable, identification of the model, estimation through non-linear methods, verification of the model, and derivation of forecasts.
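A minimal sketch of two core Box-Jenkins steps, differencing a trended series and then fitting the autoregressive part by least squares; the sample series is invented and this is not a full ARIMA implementation:

```python
# Invented sample series with a mild trend.
series = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119]

# Step 1: first-order differencing removes the trend (the "I" in ARIMA).
diff = [b - a for a, b in zip(series, series[1:])]

# Step 2: fit x_t = phi * x_{t-1} by ordinary least squares (the "AR" part).
pairs = list(zip(diff, diff[1:]))
phi = sum(x * y for x, y in pairs) / sum(x * x for x, _ in pairs)

# One-step-ahead forecast: predicted difference added back to the last level.
forecast = series[-1] + phi * diff[-1]
print(round(phi, 3), round(forecast, 1))
```

A full ARIMA fit would also identify moving-average terms and verify residuals, which is exactly the model-identification and verification effort the text refers to.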
Analytics and Performance Management
Framework: This framework defines the people, processes
and technologies [23] that need to be integrated and aligned to
take a more strategic approach to business intelligence (BI),
analytics and performance management (PM) initiatives.
• Most organizations use a combination of vendors, products
and services to provide BI, analytics and PM solutions.
• Successful managers recognize the diversity and interrelationships of the analytic processes within the enterprise and can address the needs of a diverse set of users without creating silos.
• A strategic view requires defining the business and decision
processes, the analytical processes, as well as the processes that
define the information infrastructure independently from the
technology.
• The PM, technology and complexity of skills associated with the strategic use of BI, analytics and PM increase dramatically as the scope of the initiative widens across multiple business processes.
• There is no single or right instantiation of the framework;
different configurations can be supported by the framework
based on business objectives and constraints.
Proposals:
• Use this framework to develop a strategy that surfaces key decisions, integration points, gaps, overlaps and biases that business and program managers may not otherwise have prepared for.
• A portfolio of BI, analytic and PM technologies will be
needed to meet the diversity of requirements of a large
organization.
• Seek advice from program management specialists to balance investments across multiple projects, and consider bringing BI, analytics and PM initiatives within a formal program management framework.
Objective and Methodology
The purpose or objective of the research is to develop a BI system module for project evaluation: an online project evaluation module used to evaluate Customer Relationship Management (CRM) and Supply Chain Management (SCM).
A Proposed Project Evaluation Module: We introduce an online project evaluation module used to evaluate Customer Relationship Management (CRM); the prime focus is on the user experience in driving CRM and Supply Chain Management (SCM) systems. These are used to coordinate the movement of products and services from suppliers to customers (including manufacturers, wholesalers, and retailers). The systems are also used to manage demand, warehouses, trade logistics, transportation, and other issues concerning facilities and the movement and transformation of materials on their way to customers. Components of SCM include supply chain optimization and supply chain event management.
SCM also comprises warehouse management, radio frequency identification (RFID), and transportation management. So far, this work has been advanced up to the level of setting the priorities.

Table-1: Weighted average (WA) rating values.

Rating | Value
Full Support (requires the vendor to have a WA of 95% or above for the selected module) | 95
Average (requires the vendor to have a WA of 75% or above for the selected module) | 75
Basic Features (requires the vendor to have a WA of 55% or above for the selected module) | 55
Minimal (requires the vendor to have a WA of 25% or above for the selected module) | 25
Functionality of the project analysis module: Assign priorities for the modules as below. These priorities are relative to each other. To further refine the priority distribution for any module, click on the module name. If any priority is set as "Critical," the user will have the option to specify a minimum requirement for that module. By assigning minimum requirements, we can identify which vendors fail to meet minimum acceptable ratings. More about minimum requirements follows.
Table-2: Features and functions support levels.
Definition of Minimum Requirements: Minimum requirements provide an efficient method for highlighting vendors who fail to meet key needs. Any vendor who cannot fulfill the minimum requirements will have a "disqualified criteria" indicator on the results screen. There are two ranges of minimum requirements, based on the level of depth in the knowledge tree.
For Modules: Modules link to criteria but are not rated themselves. The ratings below are based on the weighted scores of the criteria within the module. Thus, the score is the minimum weighted average for that module.
For Features and Functions (lowest level of the hierarchy): Features and functions appear at the detailed levels of the hierarchy and do not link to any sub-criteria. The ratings below are based on actual vendor capabilities for the selected function. Thus, the score value is the minimum level of support for that function.
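A hedged sketch of the rating logic described above, assuming a module's score is the weighted average of its criterion ratings and that a vendor is disqualified when a module with a minimum requirement falls below its floor; the module names, ratings, weights, and floors are invented:

```python
def module_score(criteria):
    """criteria: list of (rating, weight) pairs for one module."""
    total_weight = sum(w for _, w in criteria)
    return sum(r * w for r, w in criteria) / total_weight

def evaluate(vendor_modules, minimums):
    """Return (module scores, modules where the vendor misses the floor)."""
    scores = {name: module_score(c) for name, c in vendor_modules.items()}
    failed = [name for name, floor in minimums.items()
              if scores.get(name, 0) < floor]
    return scores, failed

# Invented vendor data: two modules, each with (rating, weight) criteria.
vendor = {"SCM": [(95, 2), (75, 1)], "CRM": [(55, 1), (25, 1)]}
scores, failed = evaluate(vendor, {"SCM": 75, "CRM": 55})
print(scores, failed)
```

Here the CRM module's weighted average falls below its minimum of 55, so this vendor would carry the "disqualified criteria" indicator for CRM.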
Building Dimension Tables: A dimension table, for example the table Item, contains the attributes item name, brand, and type. Consider a simple 2-D data cube which is, in fact, a table or spreadsheet of sales data for items sold per quarter in the city of Surat. Table-3 represents a 2-D view of sales data for the XYZ company according to the dimensions time and item, where the sales are from branches located in the city of Surat. The measure displayed is rupees sold.

Figure.2: Lattice of cuboids, making up a 4-D data cube.

Figure.2 represents a lattice of cuboids making up a 4-D data cube for the dimensions time, item, location, and supplier. Each cuboid represents a different degree of summarization.
Stars, Snowflakes, and Fact Constellations: schemas for multidimensional databases. The entity-relationship data model is commonly used in the design of relational databases.
A multidimensional data model: A compromise between the star schema and the snowflake schema is to adopt a mixed schema where only the very large dimension tables are normalized. Normalizing large dimension tables saves storage space, while keeping small dimension tables unnormalized may reduce the cost and performance degradation due to joins on multiple dimension tables. Doing both may lead to an overall performance gain. However, careful performance tuning may be required to determine which dimension tables should be normalized and split into multiple tables. Fact constellation: Sophisticated applications may require multiple fact tables to share dimension tables. This kind of schema can be viewed as a collection of stars, and hence is called a galaxy schema or a fact constellation.
Table-2 ratings for features and functions:

Support level | Value
Supported as Delivered ("out of the box") | 100
Supported by Partner (via an integrated partner solution) | 95
Add-on Partner (via add-on products offered by partners) | 80
Modification (screen configurations, reports, GUI tailoring, etc.) | 70
Third Party Support (via a third-party solution) | 60
Customization (changes to source code) | 40
Future Release (supported in a future release) | 20
Not Supported / Unrated | 0

Table-3: A 2-D view of sales data for XYZ company.

Time (quarter) | Home appliances | Fridge | Phone | Computer
Q1 | 600K | 900K | 2045K | 565K
Q2 | 800K | 1002K | 2650K | 570K
Q3 | 905K | 1020K | 4074K | 600K
Q4 | 950K | 1200K | 5085K | 700K
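Each row of Table-3 can be rolled up along the item dimension to the coarser per-quarter cuboid of the lattice; the figures below restate the table (in thousands of rupees):

```python
# Table-3 values in thousands of rupees; item columns in order:
# home appliances, Fridge, Phone, Computer.
sales = {
    "Q1": [600, 900, 2045, 565],
    "Q2": [800, 1002, 2650, 570],
    "Q3": [905, 1020, 4074, 600],
    "Q4": [950, 1200, 5085, 700],
}

# Roll-up: summarize away the item dimension, leaving the (time) cuboid.
totals = {quarter: sum(values) for quarter, values in sales.items()}
print(totals)
```

Summarizing away further dimensions in the same way yields the coarser cuboids of Figure.2, down to the single-cell apex cuboid.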
Figure.3: Fact constellation schema of a data warehouse for
sales and shipping.
Examples for defining star, snowflake, and fact constellation schemas: Just as a relational query language like SQL is used to specify relational queries [24], a data mining query language can be used to specify data mining tasks. In particular, we examine an SQL-based data mining query language called DMQL, which contains language primitives for defining data warehouses and data marts. Language primitives for specifying other data mining tasks, such as the mining of concept/class descriptions, associations, classifications, and so on, are introduced later.
Data warehouses and data marts can be defined using two language primitives, one for cube definition and one for dimension definition. The cube definition statement has the following syntax:

define cube {cube_name} [{dimension_list}]: {measure_list}

The dimension definition statement has the following syntax:

define dimension {dimension_name} as ({attribute_or_subdimension_list})
The star, snowflake, and fact constellation schemas of Examples 2.1 to 2.3 can be defined using DMQL (DMQL keywords are displayed in sans serif font). Finally, a fact constellation schema can be defined as a set of interconnected cubes. Example 2.6: The fact constellation schema of Example 2.3 and Figure.3 is defined in DMQL as follows.
define cube sales [time, item, branch, location]:
    rupees_sold = sum(sales_in_rupees), units_sold = count(*)
define dimension time as (time_key, day, day_of_week, month, quarter, year)
define dimension item as (item_key, item_name, brand_type, supplier_type)
define dimension branch as (branch_key, branch_name, branch_type)
define dimension location as (location_key, street, city, state, country)
define cube shipping [time, item, shipper, from_location, to_location]:
    rupees_cost = sum(cost_in_rupees), units_shipped = count(*)
define dimension item as item in cube sales
define dimension shipper as (shipper_key, shipper_name, location as location in cube sales, shipper_type)
define dimension from_location as location in cube sales
define dimension to_location as location in cube sales
A define cube statement is used to define data cubes for sales and shipping, corresponding to the two fact tables of the schema of Example 2.3. Note that the time, item, and location dimensions of the sales cube are shared with the shipping cube. This is indicated for the time dimension, for example, as follows: under the define cube statement for shipping, the statement "define dimension time as time in cube sales" is specified. Instead of having users or experts explicitly define data cube dimensions, dimensions can be automatically generated or adjusted based on the examination of data distributions. DMQL primitives for specifying such automatic generation or adjustment are also possible.
Data preparation and analysis of a dyadic relation: After gathering the questionnaire data set on the relationship between an enterprise and its direct customer, following the R. Derrouiche et al. [25] model to analyze a dyadic relation and to evaluate its performance, the attribute ranking algorithm using information gain based on ranker search was calculated for the two types of relationships.
Sub-KPI impact results from the attribute ranking algorithm: These results are shown in Figure.4. In addition, the questionnaire from R. Derrouiche et al. is able to characterize the collaborative relation between two or more partners in a supply chain, evaluating their related performances accordingly. The former level is the common perspective, as follows: relation climate, relation structure, IT used, and relation lifecycle; the latter level consists of the perceived satisfaction of the relation and its perceived effectiveness.
These represent the macro view of the model. For example, the macro view of relation climate has six micro views, and each micro view also has two sub-micro views.
Next, data cleaning and input-output formatting following the C&RT and K-Means structure were conducted to prepare the learning data.
Figure.4: The sub-KPI impact results from the attribute ranking algorithm using information gain based on ranker search.
Primary impact of each sub-KPI (i) for each relationship type (j) was calculated from equation (1). Weight definition was then performed from these impacts.
(1)
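Since equation (1) is not reproduced in the text, the following is only a plausible sketch of the weighting step, assuming each sub-KPI's weight is its information-gain score normalized by the total score for that relationship type; the sub-KPI names and scores shown are invented:

```python
def normalize_weights(impacts):
    """impacts: {sub_kpi: information-gain score} for one relationship type.
    Returns weights that sum to 1 across the sub-KPIs."""
    total = sum(impacts.values())
    return {kpi: score / total for kpi, score in impacts.items()}

# Invented information-gain scores for three sub-KPIs.
weights = normalize_weights({"climate": 0.30, "structure": 0.18, "IT": 0.12})
print(weights)
```

Whatever the exact form of the paper's equations, the essential property is that the per-type weights are proportional to the attribute-ranking impacts and sum to one.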
Preparation algorithm for computing the ranking of time series: A simple pattern-based approach has been used in this research work to compare the time series data [26]. The ranking of time series is done through automated sorting of patterns. In order to sort the time series values, the spread of each series is computed and compared with the spread of all the series. Large variances suggest a very different development, while small variances indicate a similar development pattern. Since the values of each series are very different, it is not possible to compare the series values directly. To make the series comparable, each series is normalized by dividing its individual values by the series mean. Once the data are normalized, the square of the sum of differences of the individual values in the time series from the overall mean vector values is computed, which results in a scalar value for each series. Ranking this series of scalars provides statistically valid ranks for the time series.
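The ranking procedure just described can be sketched as follows (normalize each series by its own mean, score it by the squared distance from the overall mean vector, then rank, with smaller scores meaning a more typical development pattern); the input series are invented:

```python
def rank_series(series_list):
    """Rank time series by similarity to the overall development pattern."""
    # Normalize each series by dividing its values by the series mean.
    normalized = [[v / (sum(s) / len(s)) for v in s] for s in series_list]
    n = len(normalized[0])
    # Overall mean vector across the normalized series.
    mean_vec = [sum(s[i] for s in normalized) / len(normalized)
                for i in range(n)]
    # One scalar per series: squared distance from the mean vector.
    scores = [sum((s[i] - mean_vec[i]) ** 2 for i in range(n))
              for s in normalized]
    # Rank 1 = closest to the overall pattern.
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    return [order.index(i) + 1 for i in range(len(scores))]

print(rank_series([[10, 20, 30], [1, 2, 3], [5, 5, 5]]))
```

Note that the first two series, though on very different scales, normalize to the same pattern and therefore rank ahead of the flat third series.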
Forecasting the KPI time series: A standard approach for model forecasting is to use techniques like ARIMA or neural networks. However, several problems limit their usefulness in a practical analysis situation:
• Simple models have proved to be effective in replicating complex models like ARIMA for time series forecasting [27].
• Non-parametric models like kernel regression, though simple, require human evaluation, which limits their usage in a dynamic setting like interactive analytics.
• Though non-parametric methods like neural networks show promising results, their computational complexity is prohibitive.

The prediction of the future value of a KPI is an important function in analytics. In this research, the models proposed by Toskos are adopted and integrated with the analytical system. The equation used is the k-th moving average:
(2)
The details of the derivations and a comparison of the effectiveness of these models with standard as well as best ARIMA models are discussed in Toskos [27]. For brevity, the proposed set of models using the k-th series is referred to as KMV (K-th series Moving average Variants). The actual algorithm is given below in Figure.5.
Figure.5: Prediction with KMV model.
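Since equation (2) is not reproduced in the text, the sketch below takes the k-th moving average to mean the simple moving average applied k times in succession, with the last smoothed value used as the one-step-ahead forecast; the window size, k, and the sample KPI series are all assumptions:

```python
def moving_average(series, window):
    """Simple moving average over a fixed window."""
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

def kmv_forecast(series, window=3, k=2):
    """Apply the moving-average operator k times; forecast the next value
    as the last value of the k-th smoothed series (assumed reading of KMV)."""
    smoothed = series
    for _ in range(k):
        smoothed = moving_average(smoothed, window)
    return smoothed[-1]

# Invented KPI history.
kpi = [10, 12, 11, 13, 15, 14, 16]
print(kmv_forecast(kpi))
```

Each pass of the operator damps the noise further, which is why a small k already yields a stable forecast for short KPI histories.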
This analysis can be done in two modes: manual and automatic. In both modes the process remains the same; only the space in which the analysis is carried out differs. In manual mode, the user selects the dimensions of interest; in the automatic mode, a predefined structure for hierarchical analysis is followed. The actual process consists of:
• Selection of dimensional values and facts
• Forecasting the KPIs using the KMV model
• Ranking the time series of predicted values
Results and Discussion
Outcome of the Proposed Project Evaluation Module: As explained in the methodology, the work progressed through setting the priorities. It is further intended to advance some of the areas in the SCM and project estimation modules.
Priority set as Very Important or Must Have: Project analysis from the functionality module is done in the following manner. For example, if a priority is set as "Very Important," the user will have the option to specify a minimum requirement for that module. By assigning minimum requirements, one can identify which vendors fail to meet the minimum acceptable ratings.
Conclusion
Criteria Selection and Module Rating Value: The graphs below were developed to display the standard scores of the vendor solutions, as determined through the CRM model. The graphs reflect functional, technical, and business criteria. This comparison is based on average weights and priorities.
Table-4: Criteria selection and module rating values.

Module | Total | Module Rating
Sales Force Automation (account and contact management, activity management, contract management and creation, Internet sales, opportunity management, partner management, project and team management, quotes and proposals, sales forecasting management, sales lead management, sales process management, territory management, team selling, and member reassignment) | 246 | 95.85
Marketing Automation (campaign execution and management, campaign management, campaign planning, collateral and brand management, lead and list management, marketing resource management) | 224 | 96.55
Customer Service and Support (assigning cases, creating and maintaining a solutions knowledge base, creating new cases (service requests), customer self-service portal, escalating unresolved cases, solving and closing cases) | 56 | 97.11
Analytics and Reporting (analytics, reporting) | 43 | 94.00
Product Technology (business functionality, ongoing CRM solution support, technical functionality) | 619 | 89.98
From the project analysis modules for functionality analysis, similar to CRM, more efforts are underway to build a comfortable and sophisticated BI system that enables analysis in the areas of Asset Management, Data Management, Enterprise Resource Planning (ERP), and Project and Process Management.
Graph - 3: Standard Scores of the Vendor Solutions
References
[01] X.G. Ming et al., "Collaborative process planning and manufacturing in product lifecycle management", Computers in Industry 59 (2008) 154–166. www.elsevier.com/locate/compind
[02]International Journal of Managing Value and Supply
Chains (IJMVSC) Vol. 3, No. 4, December 2012
[03] Alotaibi, K.F., Fawcett, S. E. and Birou, L. (1993)
“Advancing Competitive Position Through Global And JIT
Sourcing: Review And Directions”, 3(1/2):4-37
[04] Lee, H. L. & Billington, C. (1995). "The Evolution of Supply-Chain-Management Models and Practice at Hewlett-Packard", Interfaces, 25:42-63.
[05] Gupta, A. and Maranas, C. D. (2003) “Managing
demand uncertainty in supply chain planning”, Computer
and Chemical Engineering, 27:1219-1227.
[06] Yusuf, Y. Y., Gunsekaran, A. Adeleye, E. O. &
Sivayoganathan, K. (2004) “Agile supply chain capabilities:
Determinants of competitive objectives”, 1(59):379-392.
[07] Ashish Gupta, Venky Harinarayan & Dallan Quass, "Aggregate-Query Processing in Data Warehousing Environments", IBM Almaden Research Center.
[08] Sanchez-Rodrigues, V., Potter, A. & Naim, M. M.
(2010) “Evaluating the causes of uncertainty in logistics
operations”, International Journal of Logistics Management,
21(1):45 – 64.
[09] Saikouk, T., Zouaghi, I. & Spalanzani, A. (2012)
“Mitigating Supply Chain System Entropy by the
Implementation of RFID”, CERAG, Vienna, Austria.
[10] Grenci, R. T. & Watts, C. A. (2007) “Maximizing
customer value via mass customized econsumer services”,
Business horizons, 50 (2):123-132.
[11] Olaf Reinhold, "Usability of CRM Systems as Collaboration Infrastructures in Business Networks", Germany.
[12] Earl M (2001) Knowledge management strategies:
toward a taxonomy. Journal of Management Information
Systems 18(1), 215–233.
[13] B.J. Angerhofer, M.C. Angelides, A model and a
performance measurement system for collaborative supply
chains, Decision Support Systems 42 (2006) 283–301.
[14] F.T.S. Chan, H.J. Qi, An innovative performance
measurement method for supply chain management, Supply
Chain Management: An International Journal 8 (3–4) (2003)
209–223.
[15] A. Berson and S. J. Smith. Data Warehousing, Data
Mining, and OLAP. New York: McGraw-Hill, 1997.
[16] S. Chaudhuri and U. Dayal. An overview of data warehousing and OLAP technology. ACM SIGMOD Record, 1997.
[17] P. Deshpande, J. Naughton, K. Ramasamy, A. Shukla,
K. Tufte, and Y. Zhao. Cubing algorithms, storage
estimation, and storage and processing alternatives for olap,
1997.
[18] J. Han and M. Kamber, Data mining: concepts and techniques. Morgan Kaufmann Publishers, 2001.
[19] P.C. Fishburn, Method for estimating additive utilities, Management Science, vol. 13-17, pp. 435-453, 1997.
[20] K. A. Associates, A Guidebook for Developing a
Transit Performance-measurement System. Washington,
DC., 2003.
[21] W. P. Yan and P. A. Larson. Performing Group-By
Before Join. In ZCDE, 1994.
[22] Box, G.E.P., and G. M. Jenkins. 1970. Time series
analysis: forecasting and control. Holden Day, San Francisco,
CA.
[23] Gartner RAS Core Research Note G00166512, Gartner's Business Intelligence, Analytics and Performance Management Framework, 19 October 2009.
[24] E. Thomsen. OLAP Solutions: Building
Multidimensional Information Systems. John Wiley & Sons,
1997.
[25] Derrouiche, R. Neubert G. and Bouras A., Supply chain
management: a framework to characterize the
collaborative strategies, Vol. 21, Issue 4, June 2008 , pp.
426-439.
[26] KPI based analytics in e-Governance – A prototype
using segmentation and trend analysis M.N.Rao, Jay B.
Simha
[27] Toskos C.P, “K-th, weighted and exponential moving
averages for time series forecasting models”, European
Journal of Pure and Applied Mathematics,
Vol.3,No.3,2010,406-416
[28] Min, D.G., and Park, J. D., 2003, “Development of a
Performance-Based Supply Chain Management System,” IE
Interface, 16(3), 167-173.
[29] Kim, K. J., and Dennis K. J. Lin, 2000, “Simultaneous
optimization of mechanical properties of steel by
maximizing exponential desirability functions,” 49(3), 211-
325.