International Journal of Culture, Tourism and Hospitality Research
Executive learning exercise and trainer's notes for importance-performance analysis (IPA): Confronting validity issues
Tzung-Cheng (T.C.) Huan and Jay Beaman
To cite this document: Tzung-Cheng (T.C.) Huan and Jay Beaman, (2007), "Executive learning exercise and trainer's notes for importance-performance analysis (IPA)", International Journal of Culture, Tourism and Hospitality Research, Vol. 1 Iss. 4, pp. 315-327
Permanent link to this document: http://dx.doi.org/10.1108/17506180710824208
Executive learning exercise
and trainer’s notes
for importance-performance
analysis (IPA)
Confronting validity issues
Tzung-Cheng (T.C.) Huan
Graduate Institute of Leisure Industry Management, National Chiayi University,
Taiwan, and
Jay Beaman
Auctor Consulting Associates and Colorado State University, Cheyenne,
Wyoming, USA
Abstract
Purpose – Conceptual aspects of this research aim to review issues and to introduce new ways to
employ importance-performance analysis (IPA), also called action-grid analysis (AGA), in formulating
valid research. The purpose of the exercises is to facilitate understanding of how a variety of matters bear on the validity of research.
Design/methodology/approach – IPA/AGA, different types of IPA/AGA, and validity issues for
these are introduced. Pursuing two types of IPA/AGA, based on different assumptions and thus distinct
validity criteria, reinforces the need for new thinking regarding valid applications of IPA/AGA.
Practically oriented training exercises reinforce understanding of the concepts introduced. Possible answers
to exercises encourage thinking about matters that directly affect validity of actual research.
Findings – Unless IPA/AGA research is well conceived, properly executed, and soundly analysed,
implications derived may be misleading. Training exercises show the reader the values and pitfalls of
considering IPA/AGA in formulating practically oriented research.
Research limitations/implications – A limitation of the research is that detailed results are presented for only two of at least five types of IPA/AGA.
Originality/value – This paper contributes to the overall understanding of the valid use of
IPA/AGA as a tool in research. The paper also facilitates using IPA/AGA in teaching about research.
Keywords Learning, Value analysis, Action research
Paper type Conceptual paper
Since the inception of importance-performance analysis (IPA), also called action-grid analysis (AGA) (Blake et al., 1978; Martilla and James, 1977), the literature has continued to expand. Oh's (2001) literature review finds that IPA lacks theoretical development and
raises numerous validity issues with IPA/AGA applications. The version of IPA/AGA
Martilla and James (1977) introduce is invalid except under special conditions. Matzler
et al. (2004, p. 271) state, “it is shown empirically that the managerial implications
derived from an IPA are misleading. Consequently, the traditional IPA needs to be
revised.” Reference to “traditional IPA” is important because the existence of disparate
Importance-performance analysis
315
Received January 2007; Revised April 2007; Accepted May 2007
International Journal of Culture, Tourism and Hospitality Research, Vol. 1 No. 4, 2007, pp. 315-327
© Emerald Group Publishing Limited, 1750-6182
DOI 10.1108/17506180710824208
versions of IPA affects the recognition of valid IPA applications. As of 2007, no literature exists suggesting that IPA is an amorphous collection of analysis methods linked by some form of action grid being critical in the analysis (Beaman and Huan, 2008).
The following exercises prompt thinking about types of research in which
IPA/AGA may be used. Whether research is valid, whether or not IPA/AGA is used, is an issue. The training exercises guide the reader to think about whether to use IPA/AGA and how to develop and use attribute importance-performance ratings, or similar information. Also, the exercises provide data collection and analysis suggestions so that research goals are achieved.
IPA/AGA background for the exercises
IPA/AGA can be a useful tool for improving a company’s performance (Martilla and
James, 1977). Thinking about whether or not to attend an event or visit a destination is
similar to deciding to purchase a product. To decide on how to modify a product’s
attributes, assume company executives have purchasers’ ratings of importance and
performance for modifiable product attributes a1 to a5. In Figure 1, product attributes (e.g. a2 = FOOD/MEALS and a3 = PARKING) are associated with points on the graph. Assume survey data are collected from purchasers that include performance ratings for the five product attributes. Also, assume the data collected include:
- rating responses for importance; and/or
- a response or responses on overall performance affecting purchase behavior.
Obtaining a “response or responses on overall performance affecting purchase
behavior” allows the importance values of the attributes to be calculated (Mount, 1997;
or Oh, 2001, for discussion and references on calculated importance).
Figure 1. Example importance-performance action grid.
Axes: Performance (horizontal, 1-6) and Importance (vertical, 1-6); crosshairs centered at the overall mean/median point (p̄, ī). Quadrants: Q1 "Keep up the good work" (upper right), Q2 "Concentrate here" (upper left), Q3 "Low priority" (lower left), Q4 "Possible overkill" (lower right). Points (p_a1, i_a1) to (p_a5, i_a5) are plotted, with (p_a2, i_a2) = (p_F, i_F) for FOOD/MEALS (dot at the (4,5) response) and (p_a3, i_a3) = (p_p, i_p) for PARKING.
D
o
w
n
l
o
a
d
e
d

b
y

P
O
N
D
I
C
H
E
R
R
Y

U
N
I
V
E
R
S
I
T
Y

A
t

2
2
:
0
4

2
4

J
a
n
u
a
r
y

2
0
1
6

(
P
T
)
For Figure 1, assume the following. Survey respondents rated both importance and performance of attributes a1-a5 (e.g. 1 = lowest to 6 = highest). For example, a person's responses for FOOD/MEALS could be "5" for importance (extremely important) and "4" for performance (meaning OK). In the figure, the (4,5) point is plotted as the dot marked FOOD/MEALS.
To form an action grid such as Figure 1, several steps occur. For Step 1, the means/medians are computed for attribute importance and performance ratings. In Step 2, for attributes ak (k = 1 to 5), let the mean/median ratings for performance and importance be p_ak and i_ak. Points (p_ak, i_ak) are plotted. The overall mean/median point (p̄, ī) is the means/medians of all performance and of all importance responses (e.g. p̄ is the mean/median of all performance ratings). Step 3 involves plotting (p̄, ī) to define an alternative axis system, the crosshair axis system (see crosshairs in Figure 1). In Step 4, the labels for quadrants are specified based on the crosshairs. Martilla and James (1977) maintain that "<" and ">" properties of points in an action-grid quadrant imply a course of action. The action for a quadrant is specified by the respective quadrant label. In Figure 1, PARKING's importance, i_p, is above the overall mean/median ī (i_p > ī) but its performance, p_p, is below p̄ (p_p < p̄). Therefore, the PARKING point is in the "Concentrate here" quadrant (upper left). This quadrant's label reflects the need to improve performance given relatively high importance and relatively poor performance. In other words, for Martilla and James (1977), i_p > ī while p_p < p̄ implies that performance should be improved. The point for FOOD/MEALS, (p_F, i_F), is in the "high-high" quadrant (upper right), "Keep up the good work." This title is based on i_F > ī while p_F > p̄. These relations are taken to imply that improving performance is not a priority. The "Possible overkill" quadrant (lower right) is predicated on doing better not being important if performance is high when importance is low (p_a > p̄ but i_a < ī). Finally, "Low priority" for modifying an attribute corresponds with low performance and low importance (lower left). Given that, for example, profit will not suffer from reducing resources for attributes with points in the "Possible overkill" quadrant, such attributes are candidates for reallocating resources to improve performance on a "Concentrate here" attribute.
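The four steps above can be sketched in code. The following is a minimal illustration only (the attribute names and mean ratings are hypothetical, and means rather than medians are used for the crosshairs):

```python
# Minimal sketch of Martilla and James (1977)-style action-grid
# classification; attribute names and ratings are hypothetical.
def classify(ratings):
    """ratings: dict mapping attribute -> (mean performance, mean importance)."""
    # Steps 1-3: the crosshair center is the overall mean of the
    # per-attribute performance means and importance means.
    p_bar = sum(p for p, _ in ratings.values()) / len(ratings)
    i_bar = sum(i for _, i in ratings.values()) / len(ratings)
    # Step 4: assign each attribute the label of its quadrant.
    labels = {}
    for name, (p, i) in ratings.items():
        if i >= i_bar:
            labels[name] = "Concentrate here" if p < p_bar else "Keep up the good work"
        else:
            labels[name] = "Low priority" if p < p_bar else "Possible overkill"
    return labels

grid = classify({"PARKING": (2.0, 5.0), "FOOD/MEALS": (4.0, 5.0),
                 "a1": (2.5, 2.0), "a4": (5.0, 1.5), "a5": (3.0, 4.5)})
```

With these hypothetical numbers, PARKING (high importance, low performance) falls in "Concentrate here" and FOOD/MEALS in "Keep up the good work", mirroring Figure 1.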
IPA/AGA evolved rapidly after being introduced (see Beaman, 2007 for references).
The reason importance and performance are referenced in italics is that some
IPA/AGA applications replace them with alternative variables. Also, some IPA/AGA
research uses concepts other than importance-performance. For example, IPA/AGA
research may involve two or more action grids; and other forms employ extended or
alternative designs of Figure 1 (Beaman and Huan, 2008; Mount, 2000; Oh, 2001).
Regardless of the variety of IPA/AGA applications that occur, as of 2007 IPA/AGA applications are not classified based on underlying theory. Nevertheless, very different types of IPA/AGA applications exist. Beaman and Huan (2008) identify five distinct types, AGA-1 to AGA-5. For example, AGA-1 is analysis based directly on Martilla and James (1977). In applying AGA-1, one accepts that the actions associated with the location of points in quadrants of action grids are appropriate actions to consider.
Validity concerns arise when applying IPA/AGA; the results can yield incorrect conclusions even without statistical variability. Invalidity that is not caused by statistical variability arises because assumptions behind an application are flawed.
The rationale for Martilla and James' (1977) quadrant labels only shows the plausibility of the actions suggested. Appropriate action based on a quadrant label is not a
logical deduction. Matzler et al. (2004) show that quadrant titles only recommend
appropriate action to take under special conditions.
When thinking about Figure 1, a problem with AGA-1 emerges. In terms of absolute or relative return on investment, improvements on FOOD/MEALS can have greater impact on profit than better PARKING. Given a profit goal, decisions should be based on profit/loss in relation to annualized improvement costs. Simply making "improvements" based on attributes' positions in an action grid could result in a high cost to achieve a loss. Also, the common practice of considering satisfaction as the criterion for rating importance-performance (e.g. how important for satisfaction and how satisfying was performance) is problematic. Firstly, no sound quantitative theory exists to explain how a change of Δ_a in a mean/median satisfaction rating for an attribute relates to a change in volume of visits or in profit/loss. Furthermore, if some purchasers do not need parking (e.g. they are local or come on a tour bus), aggregate arguments that increased satisfaction leads to consequences such as more purchases or profit are flawed. At most, increased satisfaction from improved PARKING applies to those using parking. For those needing parking, ideas presented by Matzler et al. (2004) suggest that once PARKING meets a threshold, improvement on PARKING may be a low priority compared to better performance in other areas (e.g. in FOOD/MEALS). For a given level of importance, i_a, improving low performance does not necessarily have a greater consequence (e.g. increased profitability) than improving performance on an attribute with high performance.
McKillip and Cox (1998) provide a non-AGA-1 example of IPA/AGA; Beaman and Huan (2008) classify this application as AGA-4. For AGA-4, the objective is improving how resources are devoted to unique components, ak, of, for example, a training program. For AGA-4, performance measures, p_ak, are based on components' resources (e.g. performance is credit/contact hours allocated to a component of training), not on ratings by respondents in a survey. A desirable pattern for the (p_ak, i_ak) for a tour guide certification program's five elements could extend from the third to the first quadrant (Figure 2), showing that resource allocation increases with importance. Managers and decision makers might want to consider whether or not the position (0,0) should be a point in the pattern. For an AGA-4 action grid, Beaman and Huan (2008) give reasons for showing the variability of i_ak (e.g. by lines like the high-low lines in a stock-performance chart) and for considering that variability in analysis.
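The AGA-4 idea of checking whether resource allocation tracks importance can be sketched as follows. This is a hypothetical illustration, not the McKillip and Cox (1998) procedure: the components, hours, and importance ratings are invented, and "misallocation" is operationalized simply as a mismatch between a component's rank by hours and its rank by mean importance:

```python
# Hypothetical AGA-4-style check: does resource allocation (hours)
# increase with rated importance across training components?
components = {            # component: (hours allocated, mean importance rating)
    "c1": (12, 5.5), "c2": (9, 4.8), "c3": (7, 3.0),
    "c4": (5, 4.9), "c5": (3, 2.0),
}

def misallocated(comps):
    """Return components whose rank by hours differs from rank by importance."""
    by_hours = sorted(comps, key=lambda c: comps[c][0], reverse=True)
    by_imp = sorted(comps, key=lambda c: comps[c][1], reverse=True)
    return sorted(c for c in comps if by_hours.index(c) != by_imp.index(c))

flagged = misallocated(components)
```

Here c4 is rated more important than c2 and c3 yet receives fewer hours, so c2, c3, and c4 are flagged for review; a real AGA-4 analysis would also weigh the variability of the importance ratings before reallocating.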
Recognizing that products are viewed differently by customer segments and seen
differently under various conditions has implications for IPA/AGA. User/purchaser
segment recognition based on expected unique importance-performance response
patterns is a priori segmentation (Wade and Eagles, 2003). Segmentation also can be
created by a computational procedure identifying people based on survey responses
having a pattern (Vaske et al., 1996). Differences in segments’ response patterns (e.g.
importance values) result in unique action grids. If grids for different segments suggest
different actions, selecting best actions falls outside the domain of IPA/AGA. Analysis
must focus on how, or if, actions suggested by segment-specific action grids contribute
to making sound decisions to achieve a goal or goals.
A complication in using IPA/AGA is that endogenous conditions (e.g. crowded
parking, inadequate supply of desirable food, poor entertainment, or long lines) or
exogenous conditions (e.g. weather, or road construction) can drastically affect
importance-performance ratings. Extremely poor/low ratings (outlier responses) typically are caused by endogenous and exogenous influences. Creating variables that permit outlier responses to be linked to the conditions causing them (e.g. date and duration of a visit), and collecting data on conditions (e.g. weather data by day), can allow the use of non-IPA/AGA analyses to understand how various factors are affecting goal achievement (e.g. influencing profit). In other words, information for decision making can be obtained when the influences behind outlier responses are examined. Furthermore, outlier responses can be a catalyst for data collection about service/facility problems (Beaman and Huan, 2008).
Computer-assisted interviewing (CAI) facilitates collection of information about
why outlier responses are given. For example, a questionnaire can be programmed so
exceptionally “negative” responses prompt questions asking what caused the
response. Using CAI, a verbal reply can even be recorded (see descriptive material and
tutorial at Techneos Systems Inc., 2007). A respondent may report problems with staff
or some other matter that can be addressed but the issue cannot be recognized from
importance-performance ratings of particular attributes.
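The CAI logic described above, exceptionally negative responses triggering a follow-up question, can be sketched minimally. The threshold, attribute names, and prompt wording below are all hypothetical; a real CAI system (e.g. the Techneos tools cited) would handle this within its questionnaire scripting:

```python
# Hedged sketch of CAI-style outlier follow-up: if a performance rating
# is exceptionally negative, queue an open-ended "why" question.
# The threshold and attribute names are hypothetical.
OUTLIER_THRESHOLD = 2  # on a 1-6 scale, treat 1 or 2 as exceptionally low

def followups(responses):
    """responses: dict mapping attribute -> performance rating (1-6).
    Returns the follow-up prompts a CAI script would ask."""
    return [f"You rated {attr} very low - what caused this?"
            for attr, rating in responses.items()
            if rating <= OUTLIER_THRESHOLD]

prompts = followups({"PARKING": 1, "FOOD/MEALS": 5, "ENTERTAINMENT": 2})
```

The open-ended replies gathered this way can surface issues (e.g. a staff problem) that the attribute ratings themselves cannot reveal.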
Training exercises for tourism research analysts and executives
Research can fail due to a lack of thought about how objectives will be achieved. Just thinking about the use of IPA helps define good research. However, research actually undertaken may not involve using IPA. In that regard, these two exercises focus on making prudent decisions about doing research in which IPA/AGA might be used. Scenarios define situations from which lessons about doing research can be learned. When reading the scenarios, emphasize comprehension rather than trying to critique the scenarios. Scenario details are important in answering the questions.
Figure 2. Example action grid for AGA-4, assessing resource allocation against importance.
Axes: Performance (horizontal; e.g. credit hours or days of training) and Importance (vertical; e.g. to career success), with scales starting at 0; crosshairs centered at (p̄, ī). Quadrants: Q1 "High requirement" (upper right), Q2 "Possible under-resourcing" (upper left), Q3 "Low requirement" (lower left), Q4 "Possibly excess resources" (lower right). Points (p_a1, i_a1) to (p_a5, i_a5) are plotted.
Although previous discussion provides background for answering exercise questions,
successful completion of the exercises is facilitated by reading references such as Oh
(2001).
Training exercise 1: AGA-4, adjusting an intern training program to optimize
achievement
Consider that your agency/organization regularly receives interns. You are concerned
with effectiveness of a particular intern training program. After training, these interns
work in four areas (e.g. of hotel/resort operation). Some participants will be offered
employment at the end of the 13-week program.
In the program, interns undergo a one-week orientation and classroom training program that has six components, with training hours allocated as follows: c1 = 12 hrs; c2 = 9 hrs; c3 = 7 hrs; c4 = 5 hrs; c5 = 3 hrs; and c6 = 2 hrs. This training may be modified. Assume c5 is "Using the organization's computer system" and c6 is "Overview of the organization's structure and function." Training starts with c6. Then c5 follows, since what is learned in c5 is critical to success in other components. Components c1-c4 can occur in any order. Each intern is trained to work in the four distinct areas in which all interns are expected to work (e.g. for three weeks each in areas A1-A4). An intern must "pass/succeed" in components to remain in the intern program. Signs, study materials, and exams/quizzes on each study component identify training components by names that interns are expected to know.
About 70 percent of incoming interns complete the one-week training program and begin the 12 weeks of work experience. Interns work in A1-A4, but they are randomly selected to start in a particular area. An intern moves sequentially through areas (e.g. A3 → A4 → A1 → A2). Although the intent is for each intern to have broad and challenging experiences in each work area, work area managers are responsible for job assignments. Area management prepares performance reports on interns, and they can recommend to the training centre that an intern be discharged from the training program.
Assume you have read McKillip and Cox (1998) and some other material on
IPA/AGA. Therefore, you are considering using AGA-4, IPA for training assessment
and adjustment, to see if the intern training can be improved.
Your Training Division Director asks your organization’s Research and Statistics
Division (RSD) to examine the training program using AGA-4. RSD is given literature
on AGA-4. RSD knows about the intern training program from regular participation in
c6 and from company documents on the training. RSD is to report to your Director on
using AGA-4 or other research to meet training division objectives. In training, interns
learn that RSD regularly collects performance and other information without
compromising employee identity. This exercise involves examining and reacting to
RSD’s suggestions.
RSD recommends carrying out a survey of a random sample of 100 interns. They
are to be employed by the organization for six months or more. The sample is
recommended because the population is clearly identi?ed, easily and cheaply drawn
from personnel records, and non-response should not be a problem. As one knows from
polls, by having 100 interviews and given roughly 100 percent cooperation in
responding, estimates will be accurate within 10 percent with 95 percent certainty. RSD
proposes that estimates that are within 10 percent are more than adequate for the
research.
IJCTHR
1,4
320
Informants are to receive an e-mail “asking” them to rate the importance of the six
training components to them being successful in working for the organization. To
assure respondents are anonymous, the data set does not have any information to
identify individuals. Respondents can print a paper questionnaire to place in a
collection bin, rather than making an electronic submission.
Training exercise 1: questions
(1) Which of the following responses is least likely to be a valid critique of the research plan?
- The "accurate within 10 percent" statement does not apply to rating data (i.e. to this research).
- Nearly 100 percent response should not be expected.
- Only c1-c4 should be used in an action grid when applying AGA-4.
- Obtaining meaningful results is compromised by using ratings based on being successful in working for the organization.
(2) Which of the following is the most questionable statement?
- Both c5 and c6 should be rated on providing needed background for success in c1-c4.
- Interns must do well in all components to remain interns.
- High variability in ratings for c1-c4 can imply the desirability of adjusting training to interns' backgrounds.
- A high failure rate on c5 has no implication for adjusting the training program.
(3) Sample-selection bias occurs when, for example, successful companies are studied to see why they are successful. A study that also includes failed companies can change research conclusions (Denrell, 2005). Which two of the following are least questionable?
- other than sample size, no problems exist with the sample proposed;
- RSD should have asked for a clear objective statement for intern training;
- sample-selection bias is not an issue in this research; and
- just using AGA-4 in analysis is not appropriate.
(4) Advanced training exercise. Write two- or three-paragraph e-mails to RSD and/or your boss regarding what should happen with the research. Consider how you think the organization expects you to relate to RSD. You may need to write a note to your boss on why an e-mail to RSD is needed. Also, write a draft e-mail message that your boss can revise and send to RSD proposing how to proceed.
Training exercise 2: AGA-1 to increase profit
You run a facility for tourists. The destination is isolated, so travel costs are substantial. At the destination visitors only have access to what you offer/sell. About 35 percent of your visitors declare they are likely having a one-time-only visit experience. About 25 percent are first-time visitors who say they are checking out the destination to see if they want to come "regularly"; these are checking-it-out visitors.
About 40 percent of visitors are repeat visitors. Transport in/out, accommodation, food for three meals and a late snack, happy-hour drinks, and use of the beach and pools for swimming and sunbathing are included in the fee charged for visiting (e.g. a transport in/out charge plus a per-person per-day charge based on accommodation type). Accommodation types are 1 = economy to 4 = luxury.
Some services are influenced by accommodation type. All visitors eat at the same dining facility. They seat themselves and order from one menu. The resort welcomes families and offers full-day professionally qualified child care at a Child Center and certified care workers for in-accommodation evening care, for a fee. However, the "per-person per-day charge based on accommodation type" applies to any person. Furthermore, other than at happy hour, people receiving types 1-2 accommodation must pay for their liquor, wine, and beer. A person receiving types 3-4 accommodation gets some alcoholic products free. Although free group instruction is available for activities (e.g. tennis, golf, spa, windsurfing, and riding), visitors must pay for related court/course/equipment if in types 1-3 accommodation.
The Marketing, Research and Performance Evaluation Director (MRPED) suggests using IPA/AGA (AGA-1) as a first step in deciding about changing attributes to increase profit. Having done other work based on Martilla and James (1977), the Director proposes generating a random sample of checkout parties, using random numbers to select parties at checkout. A party selected will be asked if the 16+ person with the birthday nearest to the checkout date can be phoned in roughly two weeks to answer 21 importance-performance questions. A paper questionnaire will be handed to the person at the checkout desk that can be completed by the appropriate respondent prior to departure.
The importance-performance questions are about services. Ten questions are about services that apply equally to all visitors (e.g. transport in/out, dining services, room service). Five questions are about services paid for by some visitors. Finally, three questions are about child services and three about fees/charges. A respondent distributes up to 210 points across the performance questions. Distributing points is done to cause respondents to think about performance differences as tradeoffs. Evidence shows that distributing points creates an appropriate spread in response values (Aigbedo and Parameswaran, 2004; Oh, 2001; Beaman and Huan, 2008). For some activities, performance on some attributes is not relevant to all guests (e.g. child-related attributes when a party has no involvement with children); such responses are left blank (non-response). When a response is left blank, ten fewer points are to be distributed (e.g. for 18 responses distribute 180 points). Importance is to be determined by regression analysis. Questions that relate to the company's profit objective are to be asked, and responses will be dependent variables used in computing importance (e.g. repeat visitors will indicate likelihood of returning on a line such as in Figure 3).
Figure 3. "Line" for recording overall impact of performance on future behavior.
Notes: On the line, mark how you feel performance of the destination has influenced making return visits, and put a numeric value in the box provided. Interpret 0 as "not coming back"; 25% as about a 1-in-4 chance of returning; 50% and 75% as "will come half and three-quarters as much"; 1 as coming at the same rate; and 1.25, 1.5, and 2 as showing the increase in coming.
0-----25%-----50%-----75%-----1-----1.25-----1.5-----2
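Determining importance by regression (calculated importance; Mount, 1997; Oh, 2001) can be sketched as below. This is a synthetic-data illustration of the idea only, not the MRPED's actual analysis: the three attributes, their "true" weights, and the no-intercept ordinary-least-squares fit are all assumptions made for the sketch:

```python
import numpy as np

# Hedged sketch of "calculated importance": regress an overall outcome
# (e.g. likelihood-of-return responses) on attribute performance ratings;
# the fitted coefficients serve as importance values. Data are synthetic.
rng = np.random.default_rng(0)
n = 200
perf = rng.uniform(1, 6, size=(n, 3))        # performance ratings, 3 attributes
true_weights = np.array([0.5, 0.3, 0.05])    # assumed "true" importance
outcome = perf @ true_weights + rng.normal(0, 0.1, n)

# Ordinary least squares (no intercept here, for simplicity; a real
# analysis would include one and check model fit).
importance, *_ = np.linalg.lstsq(perf, outcome, rcond=None)

# Attributes can then be ranked by calculated importance.
ranking = np.argsort(importance)[::-1]
```

The point of the sketch is that importance is inferred from how performance relates to a profit-relevant outcome, rather than asked for directly; this is one reason calculated importance is argued to better reflect how decisions are made.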
To assess the impact of return frequency, repeat visitors and checking-it-out visitors
will be asked about likely frequency of visiting in the next ten years.
Using the data collected, the MRPED will prepare action grids for three visitor
segments (one-time-only, checking-it-out, and repeat) and will present results and
recommendations to management using the action grids. Medians will be used in forming action grids so that a few very low ratings (outliers) do not overly influence the analysis.
Training exercise 2: questions
(1) Question 2.1. Assume that about 1,500 parties are approached. About 1,000 agree to participate and provide 950 usable questionnaires. Which two of the following responses are least questionable?
- The action grids proposed for analysis are appropriate.
- Computing respondents' ratings of importance better reflects how decisions are made than asking for importance ratings.
- For a given level of importance, increasing performance on an attribute has more impact if performance is low.
- Segmentation is not adequate for the analysis to produce valid results.
(2) Question 2.2. Which of the following is least questionable?
- By looking at action grids for segments with different grids, one can draw sound conclusions on attribute-modification actions to take.
- A particular segment's action grid would give valid information for modifying attributes if one only had visitors from that segment.
- For all segments, getting a quantitative response about the respondent's likely visit frequency in the next ten years is important.
- The researcher should ask one-time-only visitors about expected performance as motivation for coming, rather than asking questions to assess a future relationship with the destination (e.g. asking about performance influencing intention to recommend the destination).
(3) Question 2.3. Now assume that the data collection and the analysis strategy are improved by a priori stratification. Which two of the following are least questionable?
- Data must include accommodation type (1-4) and variables on using child-related services.
- The sampling strategy is good for the analysis that will be needed.
- Since sample-selection bias may occur, keeping data on parties refusing to participate in the survey is desirable.
- Outlier responses are of little value in research on modifying attributes to increase profit.
(4) Question 2.4. Which response is most questionable?
- For the repeat segment, one should collect data that allow calculating how attribute change affects volume of visits.
.
Recording information (e.g. date of arrival and length of stay) to link
endogenous factors (e.g. service problems) and exogenous factors (e.g.
weather) to responses can be as valuable as recording
importance-performance responses for attributes.
.
Having outlier responses trigger questions on why an outlier response is
given (e.g. kids do not like the place) can yield key information for decision
making.
.
Importance-performance ratings for satisfaction such as
1 = unimportant/unsatisfactory performance to 5 = very
important/performance very satisfying allow respondents to adequately
express their views on attributes.
(5) Question 2.5. Advance training exercise. Write an essay about issues involved in
using AGA-1 that are raised by answering Questions 2.1-2.4. The essay should
build on the justifications for responses given in the answer material. For those
expected to read articles such as Oh (2001), the answer should include a
discussion of how the exercise material relates to matters covered by Oh.
Trainer’s note – discussion and exercise solutions
Training exercise 1 response: AGA-4
Question 1.1. – (a) is valid since “poll” accuracy does not apply to ratings (Beaman
et al., 2004). Response (b) is the correct response (least likely valid). Given the
circumstances, most employed interns should respond. Choice (c) is valid since c5 and
c6 are training needed for c1-c4. Finally, response (d) is valid given that working with
the organization does not necessarily have anything to do with a respondent feeling
“successful.” A respondent can detest a job that is being done and see no opportunity;
however, she may need the work. Conclusions should be drawn on appropriate
modifications to the training.
Question 1.2. – Option (a) is reasonable given that c5 and c6 are prerequisites for
c1-c4. Choice (b) is reasonable given what training is to achieve. For choice (c),
variability in ratings can imply some interns come knowing what training covers.
Discussion can be on other reasons for variability. Finally, choice (d) is the correct
response (most questionable). High failure can have implications for adjusting the
training program. Some people who would be good employees may fail training
because of a lack of background/knowledge. Such a deficiency could possibly be
eliminated in a few hours. A topic for discussion can be getting and using information
on reasons for non-completion of training/internship.
Question 1.3. – For choice (a), serious problems may exist with the sample because
only “now-employed” interns are in the population (see Question 1.2). Choice (b) is a
least questionable response. A reasonable assumption is that RSD did not request an
objective statement or hold discussions with Training Center personnel since these are
not mentioned. In option (c), sample selection bias may be an issue in this research.
This concern is a topic hinted at in these responses. Discussion can involve considering
what intern training objectives should be and how these objectives impact who should
be interviewed. Finally, choice (d) is also a least questionable response. AGA-4 may be
used but analysis by AGA-4 is clearly not adequate on its own. Discussion should
focus on non-AGA issues and on doing more with a survey than executing AGA-4 (e.g.
as suggested in these exercises).
Question 1.4. Advance training exercise. – The e-mail should communicate
the research issues covered in Questions 1.1-1.3. The material in Exercise 1
questions’ responses should be used as a guideline. The message should build a case for
how the research should be done, and/or for soliciting professional advice on doing the
research. An e-mail to the Training Division Director might outline issues and suggest,
for example, that the Training Division Project Officer meet with the RSD Project
Analyst and discuss research options (e.g. because this course of action is the best way
to avoid controversy and get a good research product). A draft e-mail/“memo” to the
Director RSD from the Training Division Director must be consistent with
recommendations to this Director.
Training exercise 2 responses: AGA-1
Question 2.1. – Choice (a) is false since the action grids proposed are not appropriate
(e.g. accommodation and child-related questions apply to subsegments). Option (b) is a
less-questionable response. Importance ratings of attributes regarding returning (how
important is performance on attributes X to Z to returning) are wild guesses but
computed importance values have a quantitative basis. Choice (c) is false since the
condition need not hold (Oh, 2001). Finally, choice (d) is less questionable (see 2.1.a).
Question 2.2. – Choice (a) is questionable. For example, even if grids suggest valid
actions for segments, analysis is required to determine which option or combination of
options to pursue to maximize profit. The grids at best suggest options to consider in
further analysis. Choice (b) also is questionable (Matzler et al., 2004). Also, choice (c) is
questionable. The “trick” is that the information on frequency of returning should not
be requested from one-time-only visitors. Finally, choice (d) is least questionable.
Analysis of answers in relation to motivation to come is more likely to lead to sound
conclusions about attributes influencing profit than analysis of likelihood-of-
recommending ratings (e.g. 5 = “very likely” to 1 = “no way”). This question opens
a good topic for discussion.
Question 2.3. – Choice (a) is least questionable. For example, segments must reflect
accommodation and a “relation” to child services to understand the relevance of
responses. Choice (b) is questionable since:
.
people 16+ are suggested to be the population, but parties with few members
aged 16+ tend to be oversampled; and
.
random sampling likely means many segments/strata have few observations.
Option (c) is least questionable (e.g. refusals may be largely dissatisfied parties).
Finally, choice (d) is questionable. For example, outlier responses provide valuable
information on service failures.
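The value of keeping data on refusing parties (choice (c)) can be sketched as a simple check. The counts below are invented for illustration; comparing the share of some recorded characteristic (here, parties with children) between respondents and refusals gives a rough selection-bias flag:

```python
from math import sqrt, erf

# Hypothetical interviewer tallies (all counts invented): parties with
# children among those who completed the survey vs. those who refused,
# recorded per the "keep data on refusals" recommendation.
resp_total, resp_with_kids = 400, 120      # 30% of respondents
refuse_total, refuse_with_kids = 150, 75   # 50% of refusals

def two_prop_z(x1, n1, x2, n2):
    """z statistic for comparing two proportions (pooled variance)."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z = two_prop_z(resp_with_kids, resp_total, refuse_with_kids, refuse_total)
# Two-sided p-value from the standard normal CDF, Phi(x) = (1 + erf(x/sqrt(2)))/2.
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
print(f"z = {z:.2f}, p = {p_value:.4f}")  # a small p flags likely selection bias
```

A significant difference does not prove the respondent-only results are wrong, but it warns that conclusions may not generalize to the full visitor population.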
Question 2.4. – For option (a), data should be collected that allows calculating how
attribute change affects volume of visits. Without knowing something about volume,
what will be known about change in profit? For choice (b), knowing the context
associated with responses is important in valid interpretation. In choice (c), outliers
signal circumstances that one should learn about. Finally, choice (d) is the most
questionable response. The kind of importance-performance ratings identified and
getting them for satisfaction do not allow any meaningful quantitative inferences
about the impact of attribute change. For a public service, “increased satisfaction” could
be a goal. This exercise has to do with profit.
Question 2.5. Advance training exercise. – The essay should be founded on issues
that are raised by answering Questions 2.1-2.4. Organization of material should be by
topic (not a collection of reactions to responses). For those expected to read articles
such as Oh (2001), a reasonable option is organizing one part of the essay based on Oh’s
topics and another section can introduce new matters.
Conclusions
Though applying IPA/AGA has been common, serious problems exist with many
applications. However, this concern does not mean that IPA/AGA application should be
avoided. For example, information collected about the importance of and performance
on attributes of a product should reflect expectations, observations, and experience.
For a particular purchase, how do the views of others (e.g. family members) on
attributes’ importance and performance influence what a person does (the decision to
purchase, repurchase or not purchase)? Expectations of people reacting to the views of
others should guide research formulation. Recognizing the role of others in a purchase
decision helps to design appropriate survey questions.
These exercises stress the importance of thinking about attribute ratings as being
segment specific. Clearly, some attributes may not apply to all segments, while
different responses can be appropriate for specific segments. In other words, logical
consideration of using importance and performance information in analysis can
elucidate matters that should be considered so that research yields valid results.
Considering what is expected to influence behavior is important whether or not some
version of IPA/AGA is actually used in research.
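As one illustration of segment-specific grids, the sketch below (all segment names and means invented) classifies the same attributes into the classic Martilla and James (1977) quadrants separately per segment; an attribute such as a kids’ club can be “concentrate here” for families yet “low priority” for couples:

```python
from statistics import mean

# Hypothetical segment-level means (values invented for illustration):
# attribute -> (mean importance, mean performance), per segment.
segments = {
    "families": {"kids_club": (4.6, 2.1), "nightlife": (1.8, 3.9),
                 "cleanliness": (4.8, 4.5)},
    "couples": {"kids_club": (1.2, 2.1), "nightlife": (4.4, 3.9),
                "cleanliness": (4.7, 4.4)},
}

def quadrant(imp, perf, imp_cut, perf_cut):
    """Classic Martilla and James (1977) action-grid quadrant labels."""
    if imp >= imp_cut:
        return "concentrate here" if perf < perf_cut else "keep up the good work"
    return "low priority" if perf < perf_cut else "possible overkill"

for seg, attrs in segments.items():
    # Grand means of this segment's own ratings serve as crosshair cut-points.
    imp_cut = mean(i for i, _ in attrs.values())
    perf_cut = mean(p for _, p in attrs.values())
    for attr, (imp, perf) in attrs.items():
        print(seg, attr, "->", quadrant(imp, perf, imp_cut, perf_cut))
```

As the exercises caution, such grids only suggest options; which attribute changes actually pay off requires further analysis of their effect on visit volume and profit.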
While only pursuing some specific matters, this material focuses on logical thinking
with IPA/AGA as an element in the process. To develop specific IPA/AGA
applications, managers must be prepared to address the kinds of theoretical
deficiencies this paper highlights as well as the concerns raised in Oh (2001). Also,
Matzler et al. (2004) is critical reading for potential users of AGA-1. The payoff is worth
the effort because the correct implementation of IPA/AGA research provides managers
with useful information for making decisions while faulty implementation results in
wasted resources and can result in bad decisions.
References
Aigbedo, H.O. and Parameswaran, R. (2004), “Importance-performance analysis for improving
quality of campus food service”, International Journal of Quality & Reliability
Management, Vol. 21 No. 8, pp. 876-96.
Beaman, J.G. (2007), “IPA reference material”, available at: http://members.ispwest.com/
jaybman/jaybman/ (accessed 20 January).
Beaman, J.G. and Huan, T.C. (2008), “Importance performance analysis (IPA): confronting
validity issues”, in Woodside, A.G. and Martin, D. (Eds), Advances in Tourism
Management, CABI Publishing, Cambridge, MA.
Beaman, J.G., Huan, T.C. and Beaman, J.P. (2004), “Sample size and reliability in measuring
relative change and magnitude”, Journal of Travel Research, Vol. 43 No. 1, pp. 67-74.
Blake, B.F., Schrader, L.F. and James, W.L. (1978), “New tools for marketing research: the action
grid”, Feedstuff, Vol. 50 No. 19, pp. 38-9.
Denrell, J. (2005), “Selection bias and the perils of benchmarking”, Harvard Business Review,
reprint R0504H, April, available at: http://mahalanobis.twoday.net/stories/682756/ (critical
content of the article accessed 1 February, 2007).
McKillip, J. and Cox, C. (1998), “Strengthening the criterion-related validity of professional
certifications”, Evaluation and Program Planning, Vol. 21 No. 2, pp. 191-7.
Martilla, J.A. and James, J.C. (1977), “Importance-performance analysis”, Journal of Marketing,
Vol. 41 No. 1, pp. 77-9.
Matzler, K., Bailom, F., Hinterhuber, H.H., Renzl, B. and Pichler, J. (2004), “The asymmetric
relationship between attribute-level performance and overall customer satisfaction: a
reconsideration of the importance-performance analysis”, Industrial Marketing
Management, Vol. 33 No. 4, pp. 271-7.
Mount, D.J. (1997), “Introducing relativity to traditional importance-performance analysis”,
Journal of Hospitality and Tourism Research, Vol. 21 No. 2, pp. 111-9.
Mount, D.J. (2000), “Determination of significant issues: applying a quantitative method to
importance-performance analysis”, The Journal of Quality Assurance in Hospitality and
Tourism, Vol. 1 No. 3, pp. 49-63.
Oh, H. (2001), “Revisiting importance-performance analysis”, Tourism Management, Vol. 22
No. 6, pp. 617-27.
Techneos Systems Inc. (2007), “Introduction to questionnaire design”, available at: www.
techneos.com/resources/product_demos.asp (accessed 1 January).
Vaske, J.J., Beaman, J.G., Stanley, R. and Grenier, M. (1996), “P-I and segmentation: where do we
go from here?”, Journal of Tourism and Marketing Research, Vol. 5 No. 3, pp. 225-40.
Wade, D.J. and Eagles, P.F.J. (2003), “The use of importance-performance analysis and market
segmentation for tourism management in parks and protected areas: an application to
Tanzania’s national parks”, Journal of Ecotourism, Vol. 2 No. 3, pp. 196-212.
Corresponding author
Tzung-Cheng (T.C.) Huan can be contacted at: [email protected]