A
REPORT ON
"PREDICTION OF CORPORATE FAILURE IN
INDIA: A MULTIPLE DISCRIMINANT
ANALYSIS APPROACH"
PREFACE
Corporate failure is a major economic and social problem, which has an adverse impact on
entrepreneurship, production and supply of goods, prices, employment and so on. Corporate
failure directly concerns the shareholders, employees, bankers, customers and others who have a
direct or indirect stake in the company. Industrial sickness is as much a national problem as it is
a problem for the company and industry experiencing it, and hence the ability to predict this
sickness becomes very important.
Given its importance, corporate failure has been widely researched by eminent finance scholars
all over the world, but the amount of research carried out on Indian companies remains
woefully inadequate. Predictive models developed in India concentrate predominantly on
companies which have defaulted on their bond payments. While the inability to service
bondholders is one of the primary indicators of a company going sick, it should be kept in mind
that the trading volume in the Indian equity market is almost 10 times that in the debt market,
which shows that the debt markets in India are highly underdeveloped. This formed our basis
for researching companies which have been delisted from the BSE for winding up.
Globally, such models have been created using multiple discriminant analysis with certain key
financial ratios as inputs. Our model has been created along similar lines, taking the economic
cycle into account as well. The results have also been compared with those obtained from a
logistic regression model. The results show that the model is accurate in predicting the failure of
companies based on publicly available information such as the cash flow statement, balance
sheet and profit and loss account of the company.
TABLE OF CONTENTS
CHAPTER 1 - INTRODUCTION
1. Corporate Failure in India: The Signals, Symptoms and Causes
1.1 The Symptoms
1.2 The Causes
CHAPTER 2: LITERATURE REVIEW
CHAPTER 3 - RESEARCH METHODOLOGY
3.1 Objectives
3.1.1 Primary Research Objective
3.1.2 Secondary Research Objectives
3.2 Scope of Research
3.3 Universe of Study
3.4 Sample of Study
3.5 Data Collection
3.6 Limitations of Study
3.7 Tools Used
3.7.1 Multiple Discriminant Analysis
3.7.2 Logit Model
CHAPTER 4 - ANALYSIS AND INTERPRETATION
4.1 Analysis
4.2 Classification Statistics
4.3 Model Testing
4.4 Logistic Regression Data Analysis
CHAPTER 5 - CONCLUSION AND FINDINGS
5.1 Conclusion & Findings
REFERENCES
APPENDIX
TABLE OF TABLES
TABLE 2.1: COMMONLY USED FINANCIAL RATIOS TO PREDICT CORPORATE FAILURE
TABLE 2.2: PREDICTIVE POWERS OF STATIC AND DYNAMIC MODELS
TABLE 3.1: INPUTS FOR THE MDA
TABLE 4.1: LOG DETERMINANTS
TABLE 4.2: BOX'S M
TABLE 4.3: STEPWISE RESULTS
TABLE 4.4: VARIABLES IN THE ANALYSIS
TABLE 4.5: WILKS' LAMBDA
TABLE 4.6: EIGENVALUES
TABLE 4.7: WILKS' LAMBDA AND CHI-SQUARE STATISTICS
TABLE 4.8: STANDARDIZED CANONICAL DISCRIMINANT FUNCTION COEFFICIENTS
TABLE 4.9: CANONICAL DISCRIMINANT FUNCTION COEFFICIENTS
TABLE 4.10: FUNCTIONS AT GROUP CENTROIDS
TABLE 4.11: CLASSIFICATION PROCESSING SUMMARY
TABLE 4.12: CLASSIFICATION RESULTS (A)
TABLE 4.13: TESTING WITH DATA
TABLE 4.14: CASE PROCESSING SUMMARY
TABLE 4.15: DEPENDENT VARIABLE ENCODING
TABLE 4.16: CLASSIFICATION TABLE
TABLE A1: ANALYSIS CASE PROCESSING SUMMARY
TABLE A2: GROUP STATISTICS
TABLE A3: STRUCTURE MATRIX
TABLE OF FIGURES
FIGURE 1: CANONICAL DISTRIBUTION FUNCTION
FIGURE 2: CANONICAL DISTRIBUTION FUNCTION
CHAPTER 1 - INTRODUCTION
Research in corporate failure prediction has been gaining significant importance amongst
academics and practitioners since 1966, when Beaver made the first attempt to forecast corporate
failure. The research has gained further importance with the economic bust of 2008. As the
corporate failure problem still persists in modern economies, with significant economic and
social implications, and as an accurate and reliable method for predicting the failure event has
not yet been found for Indian companies, research interest is likely to continue.
Beaver's approach was 'univariate' in that each ratio was evaluated in terms of how it alone
could be used to predict failure, without consideration of the other ratios. Altman (1968) tried to
improve on Beaver's study by applying multivariate linear discriminant analysis (LDA), a method
that has been shown to suffer from certain limitations. Researchers, however, seemed to have
ignored these limitations and continued extending Altman's model, hoping to achieve higher
classification accuracy. Some examples of these attempts include, among others: 1) discriminant
analysis for Indian sick companies (Edward I. Altman and Paul Narayanan (1996)); 2) principal
component analysis including macroeconomic factors in France (Eric Bataille, Catherine
Bruneau, Alexis Flageollet and Frederic Michaud); 3) a structural model for evaluating each
firm's default risk based on Merton's model (Jorge A. Chan Lau and Toni Gravelle); 4) a
dynamic ratio-based model for signalling corporate collapse (Ghassan Hossari (2002)); 5) a
modified minimum sum of deviations model using data from private Turkish commercial banks
(International Research Journal of Finance and Economics, ISSN 1450-2887, Issue 12 (2007));
6) identification of the causes behind the failures of virtual banks using the Probit methodology;
7) LPM, Logit, Probit and discriminant analysis for predicting corporate failure of problematic
firms in Greece (Demetrios Ginoglou, Konstantinos Agorastos, Thomas Hatzigagios); and 8) a
conditional probability analysis approach for UK industries (Applied Financial Economics; L.
Lin and J. Piesse, Department of Banking and Finance, National Chi-Nan University).
Nevertheless, none of these attempts achieved statistically better results than Altman's earlier
work and, moreover, in the majority of cases the practical application of these models presented
difficulties due to their complexity.
Nonetheless, failure prediction researchers did not give up and continued to employ various
classification techniques, always hoping for the discovery of the 'perfect' model. The most
popular of these techniques are recursive partitioning, survival analysis, neural networks and the
human information processing approach. Their results indicated that no superior method has
been found, even though the failure prediction accuracy varied depending on the prediction
method applied.
This study employs two techniques to predict corporate failures in India. The first is Multiple
Discriminant Analysis, which offers an intuitive representation of statistical results, making it
possible to interpret results easily without a deep understanding of the underlying statistical
principles. The second is Logistic Regression. Finally, the predictive capabilities of both models
are compared to find the most appropriate model with the highest prediction accuracy. Work by
many researchers has shown that distress prediction models are fundamentally unstable, in that
the coefficients of a model will vary according to the underlying health of the economy (Moyer,
1977; Mensah, 1984), stressing the need for the model derivation to be as close in time as
possible to the period over which predictions are to be made (Keasey and Watson, 1991). A
recent data set (1995-2005), i.e. the boom period of Indian industrial companies (both failed and
healthy), is therefore used.
The study proceeds as follows. Chapter 2 reviews the literature in this field; the research
methodology is explained in Chapter 3; Chapter 4 covers analysis and interpretation; and
Chapter 5 reports the conclusions and empirical findings of the study.
1. Corporate Failure in India: The Signals, Symptoms and Causes
Corporate failure is a major economic and social problem, which has an adverse impact on
entrepreneurship, production and supply of goods, prices, employment and so on. Corporate
failure directly concerns the shareholders, employees, bankers, customers and others who have a
direct or indirect stake in the company. Industrial sickness is as much a national problem as it is
a problem for the company and industry experiencing it. In developing countries like India it is
very important that the economy is supported by strong and stable industrial growth. Corporate
failures not only hurt the interests of the many parties concerned but also undermine foreign
investors' confidence in economic growth. It is in everyone's interest that such corporate
failures should not take place, and for that proper forecasting models have to be developed.
A company does not fail all of a sudden. The signals of sickness should be identified as early as
possible so that counter measures can be taken. The warning signals of sickness may differ from
enterprise to enterprise depending upon the stage of its development, but the people around it
can certainly discern them. The signals then develop into symptoms of sickness, which lead to
failure. The symptoms of sickness are related to various causes. These signals and symptoms are
a great source of information to companies and financial institutions for the prediction,
prevention and control of sickness.
Among the various studies in this area, Argenti's (1976) 'A Study on Corporate Failures' is an
analysis of failure. The study has a dynamic approach and traces the firm's path from health to
failure. According to the study there are three trajectories of failure. Type I refers to a small
business whose performance never rises beyond poor and which fails within 2-3 years, mainly
due to serious cost estimation errors. Type II failures are young companies growing at a
supernormal pace which do not have time to stabilize themselves, and Type III failures refer to
mature companies which have been operating for decades.(1)

(1) Tracing the Trajectories of Sickness – a diagnostic tool for Corporate Turnaround, Dr. S. Pardhasaradhi, Associate Professor, Dept. of
Business Management, O.U.
1.1 The Symptoms
There can be many symptoms of corporate failure, such as:(2)
- Delay or Default in Payment to Suppliers
- Irregularity in Bank Account
- Delay or Default in Payment to Banks
- Frequent Requests for Credit
- Decline in Capacity Utilization
- Low Turnover of Assets
- Poor Maintenance of Plant & Machinery
- Inability to Take Trade Discount
- Excessive Manpower Turnover
- Extension of Accounting Period
- Misrepresentation in Financial Statements
- Decline in Price of Shares & Debentures
The signals and symptoms are a great source of information to companies and financial
institutions when it comes to the prediction and prevention of sickness.(3) Firstly, signals from
the sick companies need to be identified. Srivastava (1986) states that a large number of signals
are displayed by failing units initially in several functional areas, viz., short-term liquidity
problems, revenue losses, operating losses and overuse of external credit, until the unit reaches a
stage where it is overburdened with debt and unable to muster sufficient funds to meet its
obligations. These signals then merge with the symptoms related to the root cause of the
problem. Identifying signals and symptoms is part of the process of identifying the root causes
of the failure.

(2) Prediction of Corporate Failure: Formulation of an Early Warning Model, Scholar Bharat Tiwari, Jamia Millia Islamia University,
2004
(3) Srivastava, S.S. and Yadav, R.A., Management and Monitoring of Industrial Sickness, Concept Publishing Company, New Delhi,
1986.
1.2 The Causes
The causes of sickness are basically related to disorder in one or more of the functional systems
within the unit, viz., production, finance, marketing and personnel. External constraints may also
adversely affect the functioning of these four main functional systems if the corporate
management is unable to tackle the adverse changes. Some of these factors can be identified as
follows:
(i) External Factors
- Competition
- Change in Govt. Regulations
- Scarcity of Inputs
- New Technology
- Shift in Consumer Preferences, etc.
(ii) Internal Factors
- Managerial Incompetence
- Structural Rigidity
- Lack of Leadership, etc.
The external factors are not much under the company's control, but during the course of
business they have to be given due cognizance. The internal factors, however, are entirely within
the company's control and can be drilled down for a proper assessment of the root cause as follows:
Managerial Incompetence
In terms of production, the following factors can be identified:
- Improper Location
- Wrong Technology
- Uneconomic Plant Size
- Unsuitable Plant & Machinery
- Inadequate R&D
- Poor Maintenance
In terms of marketing, the following factors can be identified:
- Inaccurate Demand Projections
- Improper Product Mix
- Wrong Product Positioning
- Irrational Price Structure
- Inadequate Sales Promotion
- High Distribution Costs
- Poor Customer Service
In terms of finance, the following factors can be identified:
- Wrong Capital Structure
- Bad Investment Decisions
- Weak Budgetary Control
- Inadequate MIS
- Poor Mgt. of Receivables
- Bad Cash Management
- Strained Relations with Capital Suppliers
- Improper Tax Planning
In terms of personnel, the following factors can be identified:
- Ineffective Leadership
- Bad Labour Relations
- Inadequate Human Relations
- Over Staffing
- Weak Employee Commitment
- Irrational Compensation Structure
Out of all these reasons, four major ones have been identified:
a.) Life Cycle Decline: Every company goes through the different phases of a life cycle, i.e.
introduction, growth, maturity and decline. As new technologies emerge, growth patterns
shift and new industries and firms appear and prosper. At the same time, the older ones
become less competitive and lose their real or relative advantage. They lose their
dynamism and their potential to generate an adequate return on investment as they
eventually slow down; they are merged into other companies, are bought out or stop
operating altogether.
b.) Trapped by Past Success: The things that drive success, i.e. being focused, tried and true
strategies, confident leadership, galvanized corporate cultures and especially the interplay
of all these elements, also cause decline if not channelized properly in the interest of the
company.
c.) Inappropriate Mental Models: One such model is to consider only present information
and ignore environmental changes; e.g. IBM focussed attention on mainframe computers
but lost business to Apple and Compaq in personal computing. A second model is to
treat environmental change as a temporary fad; e.g. Singer's sewing machine sales
dipped because the company refused to believe the environment had changed.
d.) Rigidity in Response to Crises: A rigid posture decreases the chances of successful
adaptation and survival. The company's management should be flexible and ready to
accept changes, as well as incorporate them in the way the company is operated and
managed, to avoid corporate failure.
CHAPTER 2: LITERATURE REVIEW
A literature review is a body of text that aims to review the critical points of current knowledge
and/or methodological approaches on a particular topic. Literature reviews are secondary
sources and, as such, do not report any new or original experimental work.
Most often associated with academic-oriented literature, such as theses, a literature review
usually precedes a research proposal and results section. Its ultimate goal is to bring the reader
up to date with the current literature on a topic, and it forms the basis for another goal, such as
future research that may be needed in the area.
Edward I. Altman and Paul Narayanan (1996), "Business Failure Classification Models: An
International Survey", developed a discriminant analysis model for identifying sick companies
in India. They referred to sick companies as companies that were kept in operation even after
incurring losses, and used the IDBI definition, under which sick companies suffered from:
- Cash losses for a period of 2 years, or a continuous erosion of net worth
- 4 successive defaults on their debt service obligations
- Persistent irregularity in the use of credit lines
- Tax payments in arrears for one to two years
Altman et al. carried out research on 18 sick and 18 healthy companies, all of which were
publicly traded. The data used were from the period between 1976 and 1995. The companies
were drawn from the cement, electrical, engineering, glass, paper and steel industries. The
discriminant analysis model was developed from the significant financial ratios calculated for
each of the firms in question. According to the resulting model, Cash Flow/Total Debt turned
out to be the most important factor, whereas Sales/Total Assets turned out to be the least
important variable.
Eric Bataille, Catherine Bruneau, Alexis Flageollet and Frederic Michaud, "Business Cycle
and Corporate Failure in France: Is There a Link?", aimed to extract cyclical factors from
companies' data used to build the default score functions, and then from the functions
themselves. The method used by Bataille et al. was principal component analysis, in the context
of a large number of variables and small time periods. The factorial structure was used to
immunize the score functions, and the related decisions, against cyclical variations in the state of
the economy. This was because any linear classification model is developed on a cross-section
of one year and, in order to be robust, needs to be adjusted over a period of time. In certain
cases, a complete re-estimation of the model might also become necessary, for instance where
the nature of the corporate sector has changed significantly or related structural changes in the
economy have not been included in the model. In other cases the score function remains valid
but the discriminant threshold needs adjustment. Three macroeconomic series were chosen by
the authors: annual GDP of France by value, the output gap of French GDP by volume obtained
with the Hodrick-Prescott filter, and industrial production capacity utilization; their effect on
both failing and non-failing firms was tested. The results indicated a very strong similarity of the
common factors, indicating that the state of the economy influences failing and non-failing firms
in a similar way, whatever the sector chosen. The lesson from this paper is that any scoring
model we develop needs to be adjusted for business cycles too, as a static analysis will not be of
much use in predicting future defaults.
Jorge A. Chan Lau and Toni Gravelle, "END: A New Indicator of Financial and Non-
Financial Corporate Sector Vulnerability", discussed the END (Expected Number of Defaults)
as an indicator of corporate sector vulnerability. This indicator was based on forward-looking
information embedded in equity prices instead of historical information such as financial ratios.
Because equity prices are updated on a daily basis, implementation of the END indicator
allowed for real-time monitoring of potential distress conditions in the corporate sector. The
model had been successfully applied in Korea, Malaysia and Thailand, among others. The END
indicator was constructed using a two-step approach. In the first step, a structural model for
evaluating each firm's default risk was developed based on Merton's model. The Merton model
is based on the observation that the shareholders hold a call option on the asset value of the firm:
when the asset value of the firm falls below the face value of its debt, the firm is insolvent and
the shareholders' call option is out of the money. Hence, it is possible to use basic option
pricing techniques to value the debt and equity issued by a firm. Furthermore, readily available
balance-sheet information and equity prices can be used to infer the risk-neutral default
probability and the distance-to-default of the firm, a normalized measure of the gap between the
firm's asset value and the nominal value of its liabilities. While appealing, Merton's original
model was unable to capture short-term default risk, since continuity assumptions on the asset
value stochastic process ruled out the possibility of jump-like default events. However, the rapid
demise of "fallen angels", that is, investment-grade corporations that went bankrupt in a matter
of days, suggested that default events might be better characterized by jump processes than by
continuous ones. The practitioner's model adopted in the authors' approach corrects for this
deficiency by introducing uncertain recovery rates in order to capture jump-like default events.
The second step was to assess the probability that a subset of the firms analyzed would default
during a specified time horizon. During crisis periods, it seems reasonable to assume that a large
number of defaults must be driven by a common negative shock affecting the corporate sector
rather than by firm-specific factors. An underlying assumption of the structural model estimated
to calculate the END was that corporate valuations were driven by an unobserved common
factor, with each firm's value correlated to this common factor to varying degrees over time. In
order to measure the individual correlation of each firm's estimated probability of default with
this common factor, the authors used principal components analysis. This method assumes that
a limited number of unobserved variables (or factors) explain the total variation of the larger set
of variables. That is, the higher the degree of co-movement across all individual firm default
probability time series, the fewer the number of principal components (factors) needed to
explain a large portion of the variance of the original series. In the case where the original
variables are identical (perfectly collinear), the first principal component would explain 100
percent of the variation in the original series. Alternatively, if the series were orthogonal to one
another (i.e., uncorrelated), it would take as many principal components as there are series to
explain all the variance in the original series; in that case, no advantage would be gained by
looking at common factors, as none exist. After computing the default probabilities for each
firm, the authors computed the amount of variation explained by the first two principal
components for 125 firms in Korea, 148 firms in Malaysia, and 79 firms in Thailand during the
sample period.
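For reference, the distance-to-default in the standard Merton framework (the formula is not spelled out in the paper summarized above, so this is the textbook form) is, with V the firm's asset value, D the face value of its debt, μ the asset drift, σ the asset volatility and T the horizon:

DD = [ ln(V/D) + (μ − σ²/2)T ] / (σ√T)

and the corresponding default probability is N(−DD), where N is the standard normal distribution function.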
Mohamed Ariff (University of Tokyo & Bond University) and J. Ratnatunga (Monash
University) (2008), "Do Accounting and Finance Tools Serve Governance?", presented a brief
review of the literature on corporate governance and proposed a corporate governance
framework. One of the objectives of this paper was the more ambitious one of addressing the
role of the accounting and finance disciplines in serving corporate governance. The use of some
accounting and finance tools was tested empirically to see whether they would have alerted
management, auditors and regulators, as well as investors, to the impending collapse of failed
firms ahead of time. The model tested was developed by Edward Altman. According to the
model, discriminant analysis was applied to two groups of financial ratios: one group derived
from the last set of accounts of companies prior to failure, and the other from the accounts of on-
going companies. The statistical procedure was then designed to produce a single score (Z-score)
which could be used to classify a company as belonging to the failed group or the on-going
group (see Robertson and Mills, 1991). The final model consisted of four ratios that, when
combined in a specific manner, were able to discriminate between the bankrupt and the non-
bankrupt companies in his study. The variables, together with their respective weights, are
shown as follows:
Z-Score = 6.56 (X1) + 3.26 (X2) + 6.72 (X3) + 1.05 (X4)
where X1 = Working Capital/Total Assets
X2 = Retained Earnings/Total Assets
X3 = Profit Before Interest and Tax/Total Assets
X4 = Net Worth/Total Liabilities
To make the model operational, Altman combined the failed group and the on-going group and
ordered the companies according to their individual Z-scores. It was then possible to specify two
limits as follows:
- An upper limit, above which no failed companies were misclassified
- A lower limit, below which no on-going companies were misclassified
The area between the upper (2.60) and lower (1.10) limits is what Altman described as the
'zone of ignorance' or the 'grey area', where a number of failed companies and/or on-going
companies could be misclassified.
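As a minimal illustration of how the score is applied, the Python sketch below computes the four-ratio Z-score quoted above and classifies a firm against the 1.10 and 2.60 limits; the ratio values used are purely hypothetical.

def z_score(x1, x2, x3, x4):
    """Four-variable Z-score with the weights quoted in the text."""
    return 6.56 * x1 + 3.26 * x2 + 6.72 * x3 + 1.05 * x4

def classify(z, lower=1.10, upper=2.60):
    """Below the lower limit -> failed group; above the upper limit ->
    on-going group; in between -> Altman's 'zone of ignorance'."""
    if z < lower:
        return "failed group"
    if z > upper:
        return "on-going group"
    return "zone of ignorance (grey area)"

z = z_score(x1=0.12, x2=0.05, x3=0.08, x4=0.60)  # hypothetical ratios
print(round(z, 2), "->", classify(z))            # 2.12 -> zone of ignorance (grey area)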
Ghassan Hossari (2002), "A Dynamic Ratio-Based Model for Signalling Corporate
Collapse", included only those companies that had appointed an administrator, filed for
bankruptcy, gone into liquidation or receivership, failed to lodge listing fees, or been wound up.
As a result, 37 such companies were identified among the total of 413 that were delisted from
the Australian Stock Exchange (ASX). A similar number of non-collapsed companies was
identified. Hossari used 28 ratios:
Profit/Total Assets | Retained Earnings/Total Assets | Total Equity/Total Assets | Long Term Loan/Total Assets
Current Assets/Current Liabilities | Sales/Total Assets | Quick Assets/Total Assets | Cash Flow/Current Liabilities
Total Liabilities/Total Assets | Cash/Total Assets | Total Equity/Total Liabilities | Current Liabilities/Total Assets
Working Capital/Total Assets | Current Assets/Total Assets | Cash/Current Liabilities | Current Liabilities/Total Equity
EBIT/Total Assets | Quick Assets/Current Liabilities | EBIT/Total Equity | Investments/Working Capital
Cash Flow/Total Liabilities | Cash Flow/Total Assets | Fixed Assets/Total Assets | Long Term Loan/Total Equity
Total Liabilities/Total Equity | Profit/Total Equity | Fixed Assets/Total Equity | Sales/Total Equity
Table 2.1: Commonly used financial ratios to predict Corporate Failure
Static Model:
The assumption was that the same financial ratios were capable of signalling corporate collapse
over multiple time periods. Therefore, the same model was used to signal collapse for each year
in the sample period.
Dynamic Model:
A suitable model was one that reflected a heuristic behavioural framework; specifically, it was
dynamic in the sense that it did not rely on a single fixed assortment of financial ratios for
signalling the event of collapse over multiple time periods. A separate formulation was used for
each year in the sample period 1996 to 2001.
Summary of the overall predictive power and occurrence of Type I error for both the Static and
Dynamic MDA-based models (1996 to 2001):

Period | Static: Overall | Static: Type I Error | Dynamic: Overall | Dynamic: Type I Error
1996 | 72.7% | 45.5% | 100% | 0%
1997 | 66.7% | 45.8% | 70.8% | 58.3%
1998 | 43.3% | 70% | 78.3% | 13.3%
1999 | 67.9% | 42.9% | 85.7% | 21.4%
2000 | 74.6% | 33.8% | 86.2% | 18.5%
2001 | 90.9% | 18.2% | 100% | 0%
Table 2.2: Predictive powers of Static and Dynamic Models
A very high Type I error rate is undesirable, because the erroneous classification of a collapsed
company as non-collapsed is a costly mistake, whereas the erroneous classification of a
non-collapsed company as collapsed is far less so. It was expected that the occurrence of Type I
error would be reduced by using a dynamic model.
International Research Journal of Finance and Economics, ISSN 1450-2887, Issue 12 (2007),
"Bank Failure Prediction Using Modified Minimum Deviation Model", applied a new model,
the modified minimum sum of deviations model, to data from private Turkish commercial
banks, compared the results with those of the classical minimum deviation model (in which
factors formed via factor analysis were used), and discussed the validity of the models. In this
model, N firms are evaluated using m independent variables and a binary classification is made.
It is expected that the weighted average of the independent variables of successful firms will be
greater than the break point determined in the model, and that of unsuccessful firms will be
smaller. The classifications obtained at the conclusion of the analysis may sometimes differ from
the groupings determined at the beginning of the discriminant analysis. Misclassification of a
successful unit means that the weighted average value calculated for the related unit is smaller
than the break point; in other words, the conditions stated in equations 1 and 3 are violated for a
misclassified successful unit. In order to prevent this violation, the equation should be
rearranged, i.e. the possibility of misclassification should be added to the equation. For this
purpose, 0 is added to the equation in the case of a correct classification, and in the case of
misclassification a deviation variable equal to the distance between the related unit and the
break point is added. The aim of the linear programming model so formed is to minimize the
sum of these deviation variables. The solution of the model gives the optimum break point, the
value of the deviation variable for each unit, and the optimum values of the weights of the
independent variables. The method suggested in this study was the determination of the ratios to
be selected within the mathematical model defined above. Some arrangements are necessary for
the model to perform this function. Firstly, the ratios to be used should be grouped. For example,
in bank failure prediction, a proper classification of the ratios would be: capital adequacy ratios,
profitability ratios, liquidity ratios, ratios related to income and expenditure structure, and ratios
related to the quality of assets. After the ratios are divided into m groups, constraints which
enable the model to select the most proper ratio from each ratio group should be added.
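To make the construction concrete, the following is a minimal Python sketch of such a minimum-sum-of-deviations classifier using scipy.optimize.linprog; the normalisation constraint and the toy data are our own assumptions for illustration, not the paper's exact formulation.

import numpy as np
from scipy.optimize import linprog

def min_deviation_model(X_success, X_fail):
    """Find ratio weights w and break point c minimising sum(d_i),
    where d_i measures how far a misclassified unit sits past c."""
    m = X_success.shape[1]
    n_s, n_f = len(X_success), len(X_fail)
    n = n_s + n_f
    # Decision vector: [w (m, free), c (free), d (n, >= 0)]
    n_var = m + 1 + n
    cost = np.zeros(n_var)
    cost[m + 1:] = 1.0                      # minimise the sum of deviations
    A_ub, b_ub = [], []
    for i, x in enumerate(X_success):       # want w.x + d_i >= c
        row = np.zeros(n_var)
        row[:m] = -x; row[m] = 1.0; row[m + 1 + i] = -1.0
        A_ub.append(row); b_ub.append(0.0)
    for j, x in enumerate(X_fail):          # want w.x - d_j <= c
        row = np.zeros(n_var)
        row[:m] = x; row[m] = -1.0; row[m + 1 + n_s + j] = -1.0
        A_ub.append(row); b_ub.append(0.0)
    # Normalisation (our assumption) to rule out the trivial w = 0 solution.
    A_eq = [np.concatenate([np.ones(m), np.zeros(1 + n)])]
    b_eq = [1.0]
    bounds = [(None, None)] * (m + 1) + [(0, None)] * n
    res = linprog(cost, A_ub, b_ub, A_eq, b_eq, bounds=bounds)
    return res.x[:m], res.x[m]              # weights, break point

# Toy example with two ratios per bank (hypothetical numbers).
ok = np.array([[0.9, 0.7], [0.8, 0.9]])
bad = np.array([[0.2, 0.3], [0.3, 0.1]])
w, c = min_deviation_model(ok, bad)
print("weights:", w.round(3), "break point:", round(c, 3))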
Krishnan Dandapani and Edward R. Lawrence, Department of Finance and Real Estate,
College of Business Administration, Florida International University, Miami, Florida, USA
(2001), "Virtual Bank Failures: An Investigation", identified the causes behind the failures of
virtual banks. Using the Probit methodology, Dandapani and Lawrence examined the
components of the standard bank net income model (net income, interest income (II), interest
expense (IE), the provision for loan losses (PLL), non-interest income (NII) and non-interest
expense (NIE)) for the surviving virtual banks and those which failed. They found that the NII
and NIE of the successful banks and the failed banks were statistically different in the time
period before the failures. They also ran the probit analysis on the failed virtual banks versus the
failed brick and mortar banks and found that the IIs of the two were significantly different. They
further explored the NII and NIE of the surviving banks and the failed banks. In line with
previous research, they found that the brick and mortar banks failed due to bad asset quality, but
the failure of virtual banks was mainly due to high NIEs. To investigate what causes some
virtual banks to fail, they took bank failure as the dependent variable and regressed it on the
constituents of net income, i.e. the II, the IE, the PLL, the NII and the NIE. Since the dependent
variable could only take the values 0 (for the banks which had failed) and 1 (for the active
banks), they used the following probit regression model for the parametric analysis:

Active/inactive bank = w1 + w2*II + w3*IE + w4*PLL + w5*NII + w6*NIE

The significance of parameter wi (i = 2 to 6) indicated that the independent variable i was
statistically different for the active banks and the failed banks. When comparing the failed
virtual banks and the failed brick and mortar banks, they used 0 as the dependent variable for the
failed virtual banks and 1 for the failed brick and mortar banks. The results of the probit analysis
showed a statistically significant difference in the NIE and NII of the surviving virtual banks and
the virtual banks that failed. In the period March 2000 to 2002, during which most of the
currently inactive banks failed, the net II as a percentage of total assets of the currently inactive
banks was higher than that of the active banks, indicating that the net II was not responsible for
the failure of the currently inactive brick and mortar and virtual banks; indeed, on this measure
the currently inactive banks performed better than the active banks for almost the entire period
of study. A plot of the burden for the active and currently inactive banks showed the burden for
the currently inactive virtual banks to be higher than that for the currently inactive brick and
mortar banks and the currently active virtual banks, especially in the period from March 2000 to
2002, which witnessed most of the virtual bank failures. A plot of the NII and NIE for the active
and currently inactive virtual banks showed that the NII of the currently inactive banks as a
percentage of total assets was higher than that of the now-active banks; however, the NIEs of the
failed banks were substantially higher than those of the now-active banks. Even though the
failed banks generated high NIIs, these could not compensate for the losses due to very high
NIEs. The losses due to high NIE led to an increase in the PLL for the currently inactive banks.
A plot of the PLL for the active and currently inactive banks showed that the PLL as a
percentage of total assets was substantially higher for the currently inactive banks in the period
from March 2000 to 2002, when most of the currently inactive virtual banks failed. It also
showed that the PLL for the currently inactive brick and mortar banks was higher than that for
the currently inactive virtual banks, indicating a large accumulation of bad debts. They
concluded that the failure of the brick and mortar banks was due to poor asset quality, whereas
the virtual banks failed due to very high NIEs.
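A minimal sketch of such a probit regression, using the statsmodels library on synthetic data; the column names follow the paper's abbreviations, while the data-generating assumptions (e.g. that high NIE drives failure) are ours, chosen only to mirror the paper's finding:

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "II": rng.normal(4.0, 1.0, n),    # interest income (% of assets, synthetic)
    "IE": rng.normal(2.0, 0.5, n),
    "PLL": rng.normal(0.5, 0.2, n),
    "NII": rng.normal(1.0, 0.4, n),
    "NIE": rng.normal(3.0, 1.0, n),
})
# Synthetic status: higher non-interest expense raises failure risk (0 = failed).
latent = 1.5 - 0.8 * df["NIE"] + 0.5 * df["NII"] + rng.normal(0, 1, n)
df["active"] = (latent > 0).astype(int)

X = sm.add_constant(df[["II", "IE", "PLL", "NII", "NIE"]])
model = sm.Probit(df["active"], X).fit(disp=0)
print(model.summary())   # z-tests show which components differ significantly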
Demetrios Ginoglou, Ph.D., Konstantinos Agorastos, Ph.D., and Thomas Hatzigagios, Ph.D.,
University of Macedonia Economic and Social Sciences, Thessaloniki, Greece, "Predicting
Corporate Failure of Problematic Firms in Greece with LPM, Logit, Probit and
Discriminant Analysis", used Logit and Probit models of corporate failure to generate the
probability of failure as a financial risk measure in Greece. This study examined how reasonable
the division of firms into healthy and problematic was, predicted business failures with LPM,
Logit and Probit models, and compared them. The study used the Morrison model for
discriminant analysis, for example for classification into bankrupt and non-bankrupt firms; this
classification was based on the chosen financial ratios and the linear combination of these that
best discriminates between the two groups. A Linear Probability Model was also used, which is
in effect a regression of a dummy (dichotomous) dependent variable on a set of explanatory
variables. The model was of the form:

Y = b0 + b1X1 + b2X2 + ... + bnXn + u

where X is the set of explanatory variables, Y = 0 for healthy firms and Y = 1 for bankrupt
firms. Y is then the conditional probability of the firm going bankrupt given the set of
explanatory variables; thus E(Y/X) gives the probability of failure for a firm whose financial
ratios are represented by the set of X's. The Logit and Probit models were used to model the
conditional probability of bankruptcy as a function of the firm's debt-equity ratio. The SPSS
statistical package was used to estimate the models. The discriminant analysis results identified
Net Profit/Total Assets, Gross Profit/Total Assets, Total Debt/Stockholder's Equity and
(Current Assets - Short-term Debt)/Total Assets as the significant variables. The methods used
in the study were successful in classifying problematic and healthy firms to the tune of more
than 75 percent. MDA turned out to be more advanced, but also more complicated, than LPM.
But MDA suffers from the drawback that it checks the variables only after they have been used
in the model. Also, in a country like Greece, where economic conditions were not stable, the
variables used in a model were also not very stable; hence a control check of the variables
before they were used in each model was necessary. Discriminant analysis did not allow such
control, and hence the Logit model gave better results in such cases.
L. Lin and J. Piesse (Department of Banking and Finance, National Chi-Nan University, 1
University Road, Pu-Li, Nantou 545, Taiwan; Management Centre, School of Social Science and
Public Policy, King's College London, 150 Stamford St, London SE1 9NN, UK; and University
of Stellenbosch, Republic of South Africa) (2004), "Identification of Corporate Distress in UK
Industrials: A Conditional Probability Analysis Approach", Applied Financial Economics,
found that bankruptcy prediction models depend on three factors: the model, the variable
selection criteria and the optimal cut-off probability. The variables selected reflected five
features that are generally accepted in the literature as contributing to the explanation of
corporate failure.
F1: Management inefficiency. Two ratios that reflect this were retained earnings/total assets and
profit after tax/total assets. Of these, the former was considered a better guide to a company's
cumulative longer-term profitability and the latter a short-term indicator.
F2: Capital structure. Capital structure in the form of gearing ratios is used extensively as a
measure of corporate risk, and thus a gearing ratio, total liabilities/total assets, was included in
the study.
F3: Insolvency. A direct cause of corporate failure is the inability of a company to meet debt
obligations. The choice between a cash-based and a working capital-based liquidity ratio was not
conclusive, so cash/current liabilities, change in net cash/total liabilities and working
capital/total assets were all used as surrogates for solvency.
F4: Adverse economic effects. In this paper, the annual FTSE all-share index (FTSE) was used
as a measure of general economic conditions. It was thus possible to examine whether the
failing firms in the sample were alone in performing badly in any particular period, or whether
there was an overall economic effect in that year that would have resulted in bankruptcy for the
more vulnerable firms.
F5: Income volatility. Given the short history of many companies, the standard deviation of past
income was not very robust. Instead, a measure of income stability can be constructed, defined as:
(Income(t) – Income(t-1)) / (Income(t) + Income(t-1))
The choice of an optimal cut-off point required knowledge of (i) the costs of Type I and Type II
errors, and (ii) the prior probabilities of failure and survival. The study first developed a
misclassification cost model. Of the variables, the two that had the greatest impact on predicting
bankruptcy were long-term profitability (the negative effect of retained earnings/total assets)
and gearing (the positive effect of total liabilities/total assets); the apparently counter-intuitive
signs reflect the fact that it is failure that is being modelled here. The estimated coefficients on
income volatility and the market-based ratios were not significantly different from zero at the
95% level. Two models were found to achieve high levels of accuracy: one that emphasized
classification based on short-term accounting criteria, and a second based on longer-term
financial performance.
CHAPTER 3 - RESEARCH METHODOLOGY
Research is the systematic process of collecting and analyzing information (data) in order to
increase our understanding of the phenomenon with which we are concerned. Method is the
systematic collection of data (facts) and their theoretical treatment through proper observation,
experimentation and interpretation. Research methodology thus seeks to find the method best
suited to analyzing the topic concerned.
3.1 Objectives
3.1.1 Primary Research Objective
To determine the model for predicting corporate failures in India, using multiple discriminant
analysis.
3.1.2 Secondary Research Objectives
- To determine the definition of default.
- To determine the sectors in which the maximum defaults have occurred.
- To find the defaulting companies in these sectors and their financial data.
- To find comparable successful companies operating in these sectors and their financial data.
- To determine the inputs for the MDA and develop the model.
3.2 Scope of Research
Corporate failures are a well researched area in most of the countries. But, surprisingly there is
very little work done in this area in India. The literature review uncovered the fact that such
study has primarily been conducted on companies which have defaulted on their principal or
coupon payments of their bonds.
However, majority of India?s investment in securities goes into the equity markets. Considering
this fact, it becomes important to conduct a similar study on listed firms which wind up causing
huge losses to its shareholders. Taking into consideration the mass appeal of this research, it is
paramount that the model uses information which is freely available to shareholders.
Thus, the scope of research encompasses:
- To develop a model specific to Indian scenario.
- Model to predict corporate failure (delisted for winding up).
- The input data for model to be readily available to shareholders.
3.3 Universe of Study
The universe consists of all the firms which were delisted from the BSE for winding up and
their comparable firms listed on the BSE. The delisted firms' data can be obtained from the BSE
website (http://www.bseindia.com/about/datal/delist/a-delist.asp). This gives a list of 211
companies delisted since the 1970s.
3.4 Sample of Study
Studies in various countries indicate that financial ratios alone are not reliable indicators of
corporate failures because the business cycle also is a major contributor. Hence, it was decided to
carry out the study during Indian economy?s boom period ranging between 1995-2005. This
reduced the universe of companies from 211 to around 50.
Further, financial data of all these 50 companies was not available. Based on availability of data,
the sample set was reduced to 21 companies.
Comparables are decided based upon the industry, firm size and period of operation. Each failed
company has been paired with its comparable taking our sample size to 42.
3.5 Data Collection
Data was collected using BSE website (List of companies that wound up) and „Capitaline Plus?
Database (Financial Information of sample companies). For the purpose of analysis, 29 ratios
were calculated and given as input for Multiple Discriminant Analysis. 29 ratios were included
to ensure that none of the financial parameters is ignored.
Profit/Total Assets | Reserves & Surplus/Total Assets | Total Equity/Total Assets | Long Term Loan/Total Assets
Current Assets/Current Liabilities | Sales/Total Assets | Quick Assets/Total Assets | Cash Flow/Current Liabilities
Total Liabilities/Total Assets | Cash/Total Assets | Total Equity/Total Liabilities | Current Liabilities/Total Assets
Working Capital/Total Assets | Current Assets/Total Assets | Cash/Current Liabilities | Current Liabilities/Total Equity
EBIT/Total Assets | Quick Assets/Current Liabilities | EBIT/Total Equity | Investments/Working Capital
Cash Flow/Total Liabilities | Cash Flow/Total Assets | Fixed Assets/Total Assets | Long Term Loan/Total Equity
Total Liabilities/Total Equity | Profit/Total Equity | Fixed Assets/Total Equity | Sales/Total Equity
Table 3.1: Inputs for the MDA
3.6 Limitations of Study
1. Selection of comparable firms: even though it has been ensured that the comparables are
from the same industry, of the same size and operational during the same period, there is
no way to know whether the number of years for which a comparable firm has been
operational has any effect on the failure prediction.
2. Period: The sample selected comes from the boom period of the Indian economy, i.e.
after liberalization, between 1995 and 2005. While this may broadly be classified as a
boom period, ten years is a long time for an economy to remain in the same state.
3. Qualitative factors: There could be qualitative factors, such as the integrity of top
management or the CSR demonstrated by the organization, which may act as predictors
of failure. These factors are difficult to quantify, and we have made the rough
assumption that they are already reflected in the financials of the company.
3.7 Tools Used
The objective of the research is to formulate a model for predicting corporate failure in India
using Multiple Discriminant Analysis. We have also developed a Logit model and compared the
results, to demonstrate the relative accuracy of MDA and Logit in correctly predicting failures.
Hence the basic tools used are:
- Multiple Discriminant Analysis
- Logit Analysis
3.7.1 Multiple Discriminant Analysis
Multiple discriminant analysis (MDA) is also termed Discriminant Factor Analysis and
Canonical Discriminant Analysis. It adopts a perspective similar to Principal Components
Analysis, but PCA and MDA differ mathematically in what they maximize: MDA maximizes
the differences between the values of the dependent variable, whereas PCA maximizes the
variance in all the variables accounted for by the factor.
Geometrically, the rows of the data matrix can be considered as points in a multidimensional
space, as can the group mean vectors. Discriminating axes are determined in this space in such a
way that optimal separation of the predefined groups is attained. The first discriminant function
maximizes the differences between the values of the dependent variable. The second function is
orthogonal to it (uncorrelated with it) and maximizes the differences between values of the
dependent variable, controlling for the first factor; and so on. Each discriminant function is a
dimension which differentiates a case into categories of the dependent variable based on its
values on the independent variables. The first function will be the most powerful differentiating
dimension, but later functions may also represent additional significant dimensions of
differentiation.
Thus Discriminant Analysis is a technique for analyzing data when the criterion or dependent
variable is categorical and the predictor or independent variables are interval in nature. For
example, the dependent variable may be the choice of a brand of personal computer (A, B or C)
and the independent variables may be ratings of the attributes of PCs. The objectives of
discriminant analysis are as follows:
1. Development of discriminant functions, or linear combinations of the predictor or
independent variables, which best discriminate between the categories of the criterion or
dependent variable.
2. Examination of whether significant differences exist among the groups in terms of the
predictor variables.
3. Determination of which predictor variables contribute most of the intergroup differences.
4. Classification of cases to one of the groups based on the values of the predictor variables.
5. Evaluation of the accuracy of classification.
Discriminant analysis techniques are described by the number of categories possessed by the
criterion variable. When the criterion variable has two categories, the technique is known as
two-group discriminant analysis. When three or more categories are involved, the technique is
referred to as multiple discriminant analysis. Since in our case there are only two categories,
namely the successful and the failed companies, this research is in fact best described as a
two-group discriminant analysis rather than a multiple discriminant analysis.
3.7.1.1 Discriminant Analysis Model
The discriminant analysis model involves a linear combination of the following form:

D = b0 + b1X1 + b2X2 + b3X3 + ... + bnXn

where D is the discriminant score,
the b's are the discriminant coefficients or weights, and
the X's are the predictors or independent variables.
The coefficients or weights are estimated so that the groups differ as much as possible on the
values of the discriminant function. This occurs when the ratio of the between-group sum of
squares to the within-group sum of squares for the discriminant scores is at a maximum; any
other linear combination will result in a smaller ratio.
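As an illustration, the Python sketch below estimates such a discriminant function with scikit-learn on synthetic data for 20 failed and 20 healthy firms; the three ratio columns and their distributions are assumptions made purely for the example.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
# Hypothetical ratios (e.g. Profit/TA, TL/TA, CF/TA) for failed and healthy firms.
failed = rng.normal([-0.05, 0.9, 0.02], 0.1, size=(20, 3))
healthy = rng.normal([0.10, 0.5, 0.15], 0.1, size=(20, 3))
X = np.vstack([failed, healthy])
y = np.array([0] * 20 + [1] * 20)            # 0 = failure, 1 = success

lda = LinearDiscriminantAnalysis().fit(X, y)
print("weights b1..bn:", lda.coef_.ravel())  # discriminant coefficients
print("intercept b0  :", lda.intercept_)
D = lda.decision_function(X)                 # discriminant score for each firm
print("new firm classified as:", lda.predict([[0.08, 0.6, 0.1]]))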
Variables and Data
D is a classification into 2 or more groups, and is therefore a grouping variable in the
terminology of discriminant analysis. That is, groups are formed on the basis of existing data
and are coded as 0 and 1 according to whether the company is a failure or a success, similar to
dummy variable coding. The independent variables are continuous scale variables and are used
as predictors of the group to which the objects will belong. Therefore, to be able to use
discriminant analysis, we need some data on D and the X variables from past records. That is,
discriminant analysis is a supervised learning technique, where the model is built from existing
data, unlike clustering, which is an unsupervised learning technique.
Predicting the group membership for a new data point
A model is built as a linear equation of the form shown earlier, and the coefficients of the
equation are used to calculate the discriminant score D. Depending on the cutoff score for D,
which is usually the midpoint of the mean discriminant scores of the two groups, new points are
classified.
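Continuing the illustrative sketch above (reusing lda, D and y), the midpoint cutoff rule reads:

# Midpoint of the two groups' mean discriminant scores, as described above.
cut_off = (D[y == 0].mean() + D[y == 1].mean()) / 2
new_score = lda.decision_function([[0.08, 0.6, 0.1]])[0]   # hypothetical new firm
group = "success" if new_score > cut_off else "failure"
print(f"score {new_score:.2f} vs cutoff {cut_off:.2f} -> {group}")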
Accuracy of Classification
The confusion matrix in the output tells us the percentage of the existing data points which are
correctly classified by the model. This percentage is somewhat similar to the coefficient of
determination in a regression model, but it needs to be noted that it is based on applying the
model to the same data on which the model was built; when applied to other data, this
percentage will generally go down.
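Again continuing the same sketch, the confusion matrix and hit rate can be obtained as follows; as cautioned above, the percentage here is computed on the data the model was built on.

from sklearn.metrics import confusion_matrix, accuracy_score

pred = lda.predict(X)
print(confusion_matrix(y, pred))                  # rows: actual, columns: predicted
print("correctly classified:", f"{accuracy_score(y, pred):.0%}")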
Stepwise/ Fixed Model
Stepwise discriminant analysis is analogous to stepwise multiple regression in that the predictors
are entered sequentially based on their ability to discriminate between the groups. An F ratio is
calculated for each predictor by conducting a univariate analysis of variance in which the groups
are treated as the categorical variable and the predictor as the criterion variable. The predictor
with the highest F ratio is the first to be selected for inclusion in the discriminant function. A
second predictor is added based on the highest adjusted or partial F ratio, taking into account the
predictor already selected. Each predictor thus selected is tested for retention based on its
relationship with the other predictors selected.
The selection of the stepwise procedure is based on the optimizing criterion adopted; the
Mahalanobis procedure, for instance, is based on maximizing a generalized measure of the
distance between the two closest groups.
Relative Importance of the Independent Variables
The coefficients of the predictors in the discriminant function should ideally tell us which
predictor is more important in discriminating between the groups. But because the predictors are
measured in different units, comparing the absolute values of the coefficients would make no
sense. To overcome this problem, we must compare standardized discriminant coefficients,
which are adjusted for the different bases and can be compared directly. The higher the
standardized coefficient of a predictor, the higher the importance of that variable in predicting
failure.
A Priori Probability of Classification into Groups
The discriminant analysis algorithm requires us to assign an a priori probability of a given case
belonging to one of the groups. There are two ways of doing this:
1. An equal probability can be assigned to all the groups. Thus in a two-group discriminant
analysis, a probability of 0.5 can be assigned to both groups.
2. From experience, if we know which group has a higher probability, we can assign that
probability to the group.
Since in this research any company has an equal probability of belonging to either of the two
groups, an equal figure of 0.5 has been assigned to both groups.
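In software this choice surfaces as a prior-probabilities setting. For instance, in scikit-learn's linear discriminant analysis (shown only as a hedged illustration of the idea, not as the tool used in this study), equal priors can be requested explicitly:

    # Hedged illustration: equal a priori probabilities for the two groups,
    # mirroring the 0.5 assigned to the failed and successful groups here.
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    lda = LinearDiscriminantAnalysis(priors=[0.5, 0.5])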
3.7.2 Logit Model
When the dependent variable is binary and there are several independent variables that are
metric, one can use, in addition to two-group discriminant analysis, OLS regression and the logit
and probit models for estimation. The data preparation for running OLS regression, logit and
probit is similar in that the dependent variable is coded 0 or 1. The probit model is less
commonly used than the logit model.
Discriminant analysis deals with the issue of which group an observation is likely to belong to.
The binary logit, on the other hand, deals with how likely an observation is to belong to each
group: it estimates the probability of an observation belonging to a particular group. The logit
model thus falls somewhere between regression and discriminant analysis in application. We can
estimate the probability of a binary event taking place using the binary logit model, also called
logistic regression. The probability of success may be modeled using the logit model as:
P = exp(a0X0 + a1X1 + ... + akXk) / [1 + exp(a0X0 + a1X1 + ... + akXk)]

where the summations run over i = 0, 1, ..., k,
P is the probability of success,
Xi is the independent variable i, and
ai is the parameter to be estimated.
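A minimal Python sketch of this probability (assuming X0 = 1 carries the intercept a0, and using illustrative parameter values rather than estimates from this study) is:

    # Hedged sketch of the binary logit probability:
    # P = exp(sum of ai*Xi) / (1 + exp(sum of ai*Xi)), with X0 = 1.
    import math

    def logit_probability(a, x):
        """a and x include the intercept term (x[0] == 1)."""
        z = sum(ai * xi for ai, xi in zip(a, x))
        return math.exp(z) / (1.0 + math.exp(z))

    p = logit_probability(a=[0.2, 1.5, -0.8], x=[1.0, 0.4, 0.6])  # illustrative
    print(p, "-> Y = 1" if p > 0.5 else "-> Y = 0")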
Model Fit
In logit, commonly used measures of model fit are based on the likelihood function: the Cox and
Snell R² and the Nagelkerke R². Both measures are similar to the R² in multiple regression.
The Cox and Snell R² is constrained in such a way that it cannot equal 1, even if the model
perfectly fits the data. This limitation is overcome by the Nagelkerke R².
If the estimated probability for a data point is greater than 0.5, the predicted value of Y is one;
otherwise Y is set to zero. The predicted values of Y can then be compared with the
corresponding actual values of Y to determine the percentage of correct predictions.
Significance Testing
The testing of individual estimated parameters or coefficients for significance is similar to that in
multiple regression. In this case, the significance of the estimated coefficients is based on the
Wald statistic. This statistic tests the significance of a logistic regression coefficient based
on the asymptotic normality property of maximum likelihood estimates. The Wald statistic is
Chi-square distributed with 1 degree of freedom if the variable is metric, and with (number of
categories - 1) degrees of freedom if the variable is nonmetric.
Interpretation of Coefficients
The interpretation of the coefficients is similar to that in multiple regression. The log odds are a
linear function of the estimated parameters: if Xi changes by one unit, the log odds change by
ai units, the effect of the other independent variables being held constant. The sign of ai
determines whether the probability increases or decreases with Xi.
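As a worked illustration (with a value chosen for exposition, not an estimate from this study): if ai = 0.5, a one-unit increase in Xi raises the log odds of success by 0.5, which multiplies the odds themselves by e^0.5 ≈ 1.65, the other independent variables being held constant.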
Software Used
All the data analysis has been carried out in SPSS and Excel.
Chapter 4 -ANALYSIS AND INTERPRETATION
Data do not "speak for themselves"; they reveal only what the analyst can detect. Proper analysis
is therefore what leads the analyst to results, and those results acquire significance only when
they are interpreted. Analysis and interpretation of the study should relate to the study objectives
and research questions. One often-helpful strategy is to begin by imagining or even outlining the
manuscript(s) to be written from the data.
4.1 Analysis
Input data for 42 Indian corporates were fed into the SPSS software and a stepwise multiple
discriminant analysis was performed. The resulting output, analyzed and interpreted table by
table, follows.
The path followed in SPSS is Analyze → Classify → Discriminant Analysis.
Stepwise procedures select the most correlated independent variable first, remove the variance it
explains in the dependent, then select the second independent variable that most correlates with
the remaining variance in the dependent, and so on, until selecting an additional variable no
longer increases the R-squared (in DA, the canonical R-squared) by a significant amount
(usually significance = .05). As in multiple regression, there are both forward (adding variables)
and backward (removing variables) stepwise versions.
In SPSS there are several available criteria for entering or removing variables at each step:
Wilks' lambda, unexplained variance, Mahalanobis distance, smallest F ratio, and Rao's V. The
researcher typically sets the critical significance level by setting the "F to remove" value in most
statistical packages.
V2                     Rank   Log Determinant
0                      4      -3.796
1                      4      -4.396
Pooled within-groups   4      -2.455
Table 4.1: Log Determinants
The larger the log determinant in the table above, the more that group's covariance matrix
differs. The "Rank" column indicates the number of independent variables, 4 in this case. Since
discriminant analysis assumes homogeneity of covariance matrices between groups, we would
like to see the determinants be relatively equal. Box's M, next, tests this homogeneity-of-
covariances assumption.
In a two-group model, the log determinant values give an indication of which group's
covariances differ more. In our case the log determinant of the successful companies (V2 = 1) is
the smaller of the two, so the groups' covariance matrices appear unequal even before the
formal test.
Box's M              65.620
F        Approx.     5.848
         df1         10
         df2         7649.402
         Sig.        .000
Table 4.2: Box's M
Analysis:
Box's M tests the assumption of homogeneity of covariance matrices. The test is also very
sensitive to the companion assumption of multivariate normality. Discriminant function
analysis is robust even when the homogeneity-of-variances assumption is not met, provided the
data do not contain important outliers. For our data the test is significant, so we conclude that
the groups differ in their covariance matrices, violating an assumption of DA. Note that when n
is large, as it is here, even small deviations from homogeneity will be found significant, which
is why Box's M must be interpreted in conjunction with inspection of the log determinants
above.
Box's M statistic tests the null hypothesis of equal population covariance matrices; its
significance is based on an F transformation. The hypothesis of equal covariance matrices is
rejected here, as the significance level is .000 (less than .10).
Variables Entered/Removed (a,b,c,d)

                                       Wilks' Lambda                       Exact F
Step  Variable Entered                 Statistic  df1  df2  df3       Statistic  df1  df2     Sig.
1     Working Capital/ total Assets    .586       1    1    40.000    28.218     1    40.000  .000
2     EBIT/ Total Assets               .486       2    1    40.000    20.611     2    39.000  .000
3     Long Term loan / Total Assets    .413       3    1    40.000    17.970     3    38.000  .000
4     Fixed Assets / Total assets      .354       4    1    40.000    16.908     4    37.000  .000

At each step, the variable that minimizes the overall Wilks' Lambda is entered.
a Maximum number of steps is 56.
b Minimum partial F to enter is 3.84.
c Maximum partial F to remove is 2.71.
d F level, tolerance, or VIN insufficient for further computation.
Table 4.3: Stepwise Results
Analysis:
This table displays the statistics at each step at which variables are entered or removed. The
statistics displayed depend on the chosen method of stepwise selection. Here we have chosen
to enter at each step the variable that minimizes the overall Wilks' lambda. Wilks' lambda is a
measure of the extent of misfit of the discriminant solution. Values vary from 0 to 1: values
close to 0 indicate that the groups created are distinctly different, whereas values close to 1
indicate that the groups overlap.
For an acceptable discriminant solution, λ should be less than 0.5:
λ = 1 - [Variance(among groups) / Variance(total)]
  = Variance(within groups) / Variance(total)
Here Wilks' lambda falls to .354 by the final step, well below 0.5, which shows the good
discriminating power of the model.
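A hedged Python sketch of this ratio, computed on made-up discriminant scores rather than this study's data, is:

    # Hedged sketch: Wilks' lambda as within-group over total variation
    # of the discriminant scores, for two made-up groups.
    import numpy as np

    failed = np.array([-1.8, -1.1, -1.5, -0.9])   # illustrative scores, group 0
    success = np.array([1.2, 0.8, 1.6, 1.0])      # illustrative scores, group 1

    scores = np.concatenate([failed, success])
    ss_total = np.sum((scores - scores.mean()) ** 2)
    ss_within = (np.sum((failed - failed.mean()) ** 2)
                 + np.sum((success - success.mean()) ** 2))

    wilks_lambda = ss_within / ss_total   # close to 0 => well-separated groups
    print(round(wilks_lambda, 3))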
Step  Variable                         Tolerance   F to Remove   Wilks' Lambda
1     Working Capital/ total Assets    1.000       28.218
2     Working Capital/ total Assets    .985        25.949        .810
      EBIT/ Total Assets               .985        8.038         .586
3     Working Capital/ total Assets    .972        24.217        .677
      EBIT/ Total Assets               .984        6.894         .488
      Long Term loan / Total Assets    .986        6.683         .486
4     Working Capital/ total Assets    .920        26.902        .611
      EBIT/ Total Assets               .984        5.415         .405
      Long Term loan / Total Assets    .656        13.131        .479
      Fixed Assets / Total assets      .654        6.260         .413
Table 4.4: Variables in the Analysis
Analysis:
These are the statistics for the variables in the analysis at each step. Tolerance is used to
determine how strongly the independent variables are linearly related to one another
(multicollinearity). A variable with very low tolerance contributes little information to a model
and can cause computational problems. Here the tolerances are high, so the variables contribute
significantly to the model. The F-to-enter (3.84) and F-to-remove (2.71) thresholds describe
what happens if a variable is entered into or removed from the current model (given that the
other variables remain).
Step  Number of Variables  Lambda  df1  df2  df3   Exact F Statistic  df1  df2     Sig.
1     1                    .586    1    1    40    28.218             1    40.000  .000
2     2                    .486    2    1    40    20.611             2    39.000  .000
3     3                    .413    3    1    40    17.970             3    38.000  .000
4     4                    .354    4    1    40    16.908             4    37.000  .000
Table 4.5: Wilks' Lambda
Analysis:
The number of variables indicates how many variables are in the model at each step. Lambda
values close to 0 indicate that the group means are different. For the F statistic, a small
significance value (less than, say, 0.10) indicates that the group means differ, which is the case
here.
Function   Eigenvalue   % of Variance   Cumulative %   Canonical Correlation
1          1.828(a)     100.0           100.0          .804
a First 1 canonical discriminant functions were used in the analysis.
Table 4.6: Eigenvalues
The table above shows the eigenvalues. The larger the eigenvalue, the more of the variance in the
dependent variable is explained by that function. Since the dependent variable in this example has
only two categories, there is only one discriminant function; if there were more categories, we
would have multiple discriminant functions and this table would list them in descending order of
importance. The second column lists the percent of variance explained by each function, and the
third column the cumulative percent of variance explained. The last column is the canonical
correlation; the squared canonical correlation is the percent of variation in the dependent
discriminated by the independents in DA. Sometimes this table is used to decide how many
functions are important (e.g., eigenvalues over 1, percent of variance more than 5%, cumulative
percentage of 75%, canonical correlation of .6). This issue does not arise here since there is only
one discriminant function, though we may note that its canonical correlation of .804 clears the .6
benchmark.
The square root of each eigenvalue provides an indication of the length of the corresponding
eigenvector. The % of variance column allows us to evaluate which canonical variable accounts
for most of the spread; here function 1 by itself accounts for 100% of the variance. Since the
derived eigenvalue is 1.828 (> 1), it indicates a significant model.
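As a worked check, the squared canonical correlation is .804² ≈ .646, so roughly 65% of the variation in the failed/successful grouping is accounted for by this single discriminant function.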
Test of Function(s)   Wilks' Lambda   Chi-square   df   Sig.
1                     .354            39.502       4    .000
Table 4.7: Wilks' Lambda and Chi-Square Statistics
This second appearance of Wilks's lambda serves a different purpose from its use in the stepwise
tables above: here it tests the significance of the eigenvalue of each discriminant function. In this
example there is only one function, and it is significant.
                                 Function 1
Working Capital/ total Assets    .841
EBIT/ Total Assets               .448
Fixed Assets / Total assets      .585
Long Term loan / Total Assets    -.786
Table 4.8: Standardized Canonical Discriminant Function Coefficients
Analysis:
The standardized discriminant function coefficients in the table above serve the same purpose as
beta weights in multiple regression: they indicate the relative importance of the independent
variables in predicting the dependent. When variables are measured in different units, the
magnitude of an unstandardized coefficient provides little indication of the relative contribution
of the variable to the overall discrimination. The coefficients of the canonical variable are used
to compute a canonical variable score for each case.
                                 Function 1
Working Capital/ total Assets    2.562
EBIT/ Total Assets               1.000
Fixed Assets / Total assets      .361
Long Term loan / Total Assets    -.510
(Constant)                       .018
Table 4.9: Canonical Discriminant Function Coefficients
The coefficients displayed in this table are the coefficients of the canonical variable; they are
used to compute a canonical variable score for each case. Here the score is
2.562 (Working Capital/Total Assets) + 1.000 (EBIT/Total Assets) + 0.361 (Fixed Assets/Total
Assets) - 0.510 (Long Term Loan/Total Assets) + 0.018
V2   Function 1
0    -1.319
1    1.319
Table 4.10: Functions at Group Centroids
This table displays the canonical variable means by group. Within-group means are computed for
each canonical variable.
The centroids enable us to determine a cut-off score for applying the model to the financial data
of a company. The cut-off score is calculated as:
{(Centroid of failure * No. of failed companies) + (Centroid of success * No. of successful
companies)} / {No. of failed companies + No. of successful companies}
This gives us a cut-off score of
{(-1.319 * 21) + (1.319 * 21)} / {21 + 21} = 0
Cases which evaluate on the function above the cutting point are classified as "Successes," while
those evaluating below the cutting point are classified as "Failures" (the failed-group centroid
lies at -1.319, below the cutoff).
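Putting Table 4.9 and this cutoff together, the scoring rule can be sketched in Python as follows (the coefficients are those reported above; the ratio values for the company are illustrative, not drawn from the sample):

    # Hedged sketch: score a hypothetical company with the estimated
    # discriminant function (Table 4.9) and classify it at the zero cutoff.
    COEFFS = {
        "working_capital_to_total_assets": 2.562,
        "ebit_to_total_assets": 1.000,
        "fixed_assets_to_total_assets": 0.361,
        "long_term_loan_to_total_assets": -0.510,
    }
    CONSTANT = 0.018
    CUTOFF = 0.0   # {(-1.319 * 21) + (1.319 * 21)} / 42

    def z_score(ratios):
        return CONSTANT + sum(COEFFS[k] * v for k, v in ratios.items())

    company = {   # illustrative ratio values only
        "working_capital_to_total_assets": -0.20,
        "ebit_to_total_assets": -0.10,
        "fixed_assets_to_total_assets": 1.10,
        "long_term_loan_to_total_assets": 2.00,
    }
    z = z_score(company)
    print(z, "-> Success" if z > CUTOFF else "-> Failure")

The five already-failed firms reported in Table 4.13 in section 4.3 were evaluated in exactly this way.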
4.2 Classification Statistics
Processed                                                 42
Excluded    Missing or out-of-range group codes           0
            At least one missing discriminating variable  0
Used in Output                                            42
Table 4.11: Classification Processing Summary
Analysis:
This table shows the number of cases processed, excluded from, and used in the classification.
                            Predicted Group Membership
V2                          0        1          Total
Original   Count   0        18       3          21
                   1        0        21         21
           %       0        85.7     14.3       100.0
                   1        .0       100.0      100.0
a 92.9% of original grouped cases correctly classified.
Table 4.12: Classification Results (a)
This measures the degree of classification success on this sample. In our case 85.7% (18 of 21)
of the failed companies are correctly classified, while 100% of the successful companies are
correctly classified. Overall, the model classifies (18 + 21) / 42 ≈ 92.9% of the cases correctly.
4.3 Model Testing:
Successful development of a model is incomplete without putting it through a test. To test the
prediction capability of the discriminant model, the following companies, all of which had
already failed, were evaluated by plugging their data into the model equation, with the following
results:
Company                           Year of failure   Z-score
SIV Industries Ltd                2003              -0.5738
Skyline Leather Industries Ltd    1999              -0.89849
Southern Herbals Ltd              2001              -0.00889
Nortech India Ltd.                1996              -0.32645
Reil Products Ltd                 1999              -0.03281
Table 4.13: Testing with data
Since the z-scores of all the above companies are less than 0, we can conclude that the model is
capable of successfully predicting corporate failures in India.
4.4 Logistic Regression Data Analysis
On the same set of data, a logistic regression was run and the results are analyzed below.
The path followed in SPSS is Analyze → Regression → Binary Logistic.
Unweighted Cases (a)                        N     Percent
Selected Cases    Included in Analysis      42    100.0
                  Missing Cases             0     .0
                  Total                     42    100.0
Unselected Cases                            0     .0
Total                                       42    100.0
a If weight is in effect, see classification table for the total number of cases.
Table 4.14: Case Processing Summary
Dependent Variable Encoding
The Dependent Variable Encoding table below shows that the dependent variable, success, is
coded with the reference category 1 = "yes", while the failure category is coded 0. This is
conventional for logistic analysis.

Original Value   Internal Value
0                0
1                1
Table 4.15: Dependent Variable Encoding
Block 0: Beginning Block

                                   Predicted
                                   V2               Percentage
Observed                           0       1        Correct
Step 0    V2                  0    0       21       .0
                              1    0       21       100.0
          Overall Percentage                        50.0
a Constant is included in the model.
b The cut value is .500
Table 4.16: Classification Table
This constant-only (Block 0) model classifies 50% of the cases correctly, which with equal group
sizes is simply the chance rate; it serves as the baseline against which the fitted logit model is
judged.
CHAPTER 5 - CONCLUSION AND FINDINGS
5.1 Conclusion & Findings
Assessing the financial position of a firm and its propensity for bankruptcy is of great interest to
all stakeholders of the firm. This study investigates how to improve the assessment method, and
what variables or combination of variables convey more useful information for bankruptcy
prediction. This research contributes to accounting, finance, and information systems research in
multiple ways. We find that carefully selecting explanatory variables (financial ratios) from
publicly available accounting statements can improve future bankruptcy predictions.
- Model:
Z = 0.018 + 2.562*(Working capital/Total assets) + 1.000*(EBIT/Total assets) +
0.361*(Fixed assets/Total assets) - 0.510*(Long term loan/Total assets)
- The significant variables include:
o Long term Loan / Total Assets
o Fixed Assets/Total Assets
o EBIT/Total Assets
o Working Capital/Total Assets
- The cut-off for a company's z-score to be classified as either successful or failed is 0, i.e. if the
score totals below 0 the company is classified as failed (at a 95% confidence level), and if it
totals above 0 the company is classified as successful.
- The model has an overall capability to classify a corporate correctly as successful or failed
92.9% of the time.
- To further establish the credibility of the model, it was applied to 5 already-failed firms, and
all were correctly classified by the model as failed.
- On comparison with the logit model, discriminant analysis proved to have the better
classification ability.
REFERENCES
- Altman, E.I. (1968), "Financial Ratios, Discriminant Analysis and the Prediction of Corporate
Bankruptcy", Journal of Finance (September): 589-609.
- Lin, L. and Piesse, J. (2004), "Identification of Corporate Distress in UK Industrials: A
Conditional Probability Analysis Approach", Applied Financial Economics.
- Beaver, W. (1968), "Market Prices, Financial Ratios, and the Prediction of Failure", Journal of
Accounting Research.
- Boyd, H.W., Westfall, R. and Stasch, S.F., Marketing Research – Text and Cases, 7th Edition,
All India Traveller Book Seller, New Delhi, Chapter 16.
- Kim, C.N. (2001), "A Neural Network Approach to Compare Predictive Value of Accounting
versus Market Data", Department of Business Administration, University of Seoul.
- Ginoglou, D., Agorastos, K. and Hatzigagios, T., "Predicting Corporate Failure of Problematic
Firms in Greece with LPM, Logit, Probit and Discriminant Analysis", University of Macedonia
Economic and Social Sciences, Thessaloniki, Greece.
- Pardhasaradhi, S. (2001), "Tracing the Trajectories of Sickness – A Diagnostic Tool for
Corporate Turnaround", Dept. of Business Management, O.U.
- Altman, E.I. and Narayanan, P. (1996), "Business Failure Classification Models: An
International Survey".
- Bataille, E., Bruneau, C., Flageollet, A. and Michaud, F., "Business Cycle and Corporate
Failure in France: Was There a Link?"
- Hossari, G. (2002), "A Dynamic Ratio-Based Model for Signalling Corporate Collapse".
- "Bank Failure Prediction Using Modified Minimum Deviation Model", International Research
Journal of Finance and Economics, ISSN 1450-2887, Issue 12 (2007).
- Chan Lau, J.A. and Gravelle, T., "END: A New Indicator of Financial and Non-Financial
Corporate Sector Vulnerability".
- Dandapani, K. and Lawrence, E.R. (2001), "Virtual Bank Failures: An Investigation",
Department of Finance and Real Estate, College of Business Administration, Florida
International University, Miami, Florida, USA.
- Malhotra, N.K. (2007), Marketing Research – An Applied Orientation, 5th Edition, Pearson
Prentice Hall, New Delhi, Chapter 18.
- Ariff, M. and Ratnatunga, J. (2008), "Do Accounting and Finance Tools Serve Governance?",
University of Tokyo, Bond University and Monash University.
- Nargundkar, R., Marketing Research – Text and Cases, 3rd Edition, Tata McGraw Hill, New
Delhi, Chapter 11.
- Tiwari, B. (2004), "Prediction of Corporate Failure: Formulation of an Early Warning Model",
Jamia Millia Islamia University.
- Srivastava, S.S. and Yadav, R.A. (1986), Management and Monitoring of Industrial Sickness,
Concept Publishing Company, New Delhi.
- Shin, S.W. and Kilic, S.B. (2006), "Using PCA-Based Neural Network Committee Model for
Early Warning of Bank Failure".
- www.capitaline.com
- www.bseindia.com
- www.ebscohost.com
APPENDIX
Unweighted Cases                                             N     Percent
Valid                                                        42    100.0
Excluded    Missing or out-of-range group codes              0     .0
            At least one missing discriminating variable     0     .0
            Both missing or out-of-range group codes and
            at least one missing discriminating variable     0     .0
            Total                                            0     .0
Total                                                        42    100.0
Table A1: Analysis Case Processing Summary
V2 = 0 (failed companies); every variable has Valid N = 21 (unweighted), 21.000 (weighted)

Variable                                Mean                  Std. Deviation
Profit / Total Assets                   -1.61552339059985     5.735876951381770
Current Assets/ Current Liabilities     .57937300812612       .709178690789817
Total Liabilities/ Total Assets         2.78330809993698      2.126770848602280
Working Capital/ total Assets           -.19610125493701      .404296679561029
EBIT/ Total Assets                      -.16341732074443      .414458646763401
Cash Flow/ Total Liabilities            -.12833074512445      .356818549651258
Total Liability/ Total Equity           -1.50285682539987     3.275065206270175
Sales/ Total assets                     .92343848286923       2.261682882795814
Cash / Total assets                     .31547522806505       .656950619269704
Current Assets/ Total Assets            .18132081581027       .184662683325422
Quick Assets/ Current Liabilities       .48445777044052       .687687716200810
Cash Flow/ Total Assets                 -.21963883594029      .462895494850094
Profit / Total Equity                   -.69984762902113      3.916280835721906
Total equity/ Total Assets              -.70635714805390      1.445285895533746
Quick assets / Total Assets             .34151645352316       .646371546874880
Total equity/ Total Liability           -.19763310514872      .561613382502813
Cash / current Liability                .35457460572963       .646696469548380
EBIT/ Total equity                      .05710957719548       .292990056829022
Fixed Assets / Total assets             1.15828349077464      1.196283612130356
Fixed assets/ Total equity              -.67708964057991      2.133376390127143
Long Term loan / Total Assets           2.13890668651078      1.962362811626104
Cash flow/ Current Liability            -.36667917931090      .649071989947628
Current liability/ Total assets         .64440141342621       .477284373136002
Current liability /Total Equity         -.24986209828079      .753930401733076
Investments/ Working Capital            .44426426036847       2.054817000897042
Long Term Loan / Total Equity           -1.25299472711908     2.740449830871464
Sales/ Total Equity                     .13918038006864       1.786834641052883
Reserves/ Total Assets                  -12.20809654727844    49.001919787220300

V2 = 1 (successful companies); every variable has Valid N = 21 (unweighted), 21.000 (weighted)

Variable                                Mean                  Std. Deviation
Profit / Total Assets                   .22090397184415       .547955719331049
Current Assets/ Current Liabilities     5.34739635267341      12.802796729957100
Total Liabilities/ Total Assets         1.15877854761375      1.172592817930064
Working Capital/ total Assets           .34225395615118       .228541692801171
EBIT/ Total Assets                      .26052346418487       .479053524041069
Cash Flow/ Total Liabilities            -.22145425458228      2.202943171804353
Total Liability/ Total Equity           .63846483956997       2.381203360495803
Sales/ Total assets                     4.51058291900548      7.685007454368340
Cash / Total assets                     2.67768068255796      8.399741450254340
Current Assets/ Total Assets            .82942588127880       1.660522047649173
Quick Assets/ Current Liabilities       4.76802716585735      9.543952480744060
Cash Flow/ Total Assets                 -.22936596597211      2.190903016205385
Profit / Total Equity                   .08680168257244       .445860321939009
Total equity/ Total Assets              15.94173406552124     62.899193530197200
Quick assets / Total Assets             2.82284981759940      8.352848555372290
Total equity/ Total Liability           16.44369225302105     62.791016546825200
Cash / current Liability                2.97071516678499      8.349597035570880
EBIT/ Total equity                      .19443757018592       .516942430722622
Fixed Assets / Total assets             1.41086936924302      1.955476420919347
Fixed assets/ Total equity              .53451498589407       1.115535343361420
Long Term loan / Total Assets           .67804496595206       .950499802378909
Cash flow/ Current Liability            -.52275677780966      3.871717210954797
Current liability/ Total assets         .48073358166169       .390140416643157
Current liability /Total Equity         .27413222352313       .668576168309544
Investments/ Working Capital            .09901417913191       .269384383113473
Long Term Loan / Total Equity           .36433261604683       1.762943590463086
Sales/ Total Equity                     2.33413814414805      3.218655834502136
Reserves/ Total Assets                  -.24717552931953      2.165808684664851

Total; every variable has Valid N = 42 (unweighted), 42.000 (weighted)

Variable                                Mean                  Std. Deviation
Profit / Total Assets                   -.69730970937785      4.130262156502496
Current Assets/ Current Liabilities     2.96338468039976      9.274931148163310
Total Liabilities/ Total Assets         1.97104332377537      1.884940795986955
Working Capital/ total Assets           .07307635060709       .423600096408274
EBIT/ Total Assets                      .04855307172022       .491699039451294
Cash Flow/ Total Liabilities            -.17489249985337      1.559366802275825
Total Liability/ Total Equity           -.43219599291495      3.028598826601965
Sales/ Total assets                     2.71701070093736      5.882178199879110
Cash / Total assets                     1.49657795531151      6.004743987033820
Current Assets/ Total Assets            .50537334854453       1.212124695686751
Quick Assets/ Current Liabilities       2.62624246814893      7.025846044063990
Cash Flow/ Total Assets                 -.22450240095620      1.563981463854688
Profit / Total Equity                   -.30652297322435      2.781552997449649
Total equity/ Total Assets              7.61768845873367      44.742656255977670
Quick assets / Total Assets             1.58218313556128      5.984545469588590
Total equity/ Total Liability           8.12302957393616      44.658145222454890
Cash / current Liability                1.66264488625731      5.997039411186720
EBIT/ Total equity                      .12577357369070       .420785336242317
Fixed Assets / Total assets             1.28457643000883      1.606158470612116
Fixed assets/ Total equity              -.07128732734292      1.789727660704931
Long Term loan / Total Assets           1.40847582623142      1.692844205909554
Cash flow/ Current Liability            -.44471797856028      2.742997979204289
Current liability/ Total assets         .56256749754395       .438441350247716
Current liability /Total Equity         .01213506262117       .752087961695944
Investments/ Working Capital            .27163921975019       1.457933653481030
Long Term Loan / Total Equity           -.44433105553612      2.418556608405625
Sales/ Total Equity                     1.23665926210834      2.800861295939622
Reserves/ Total Assets                  -6.22763603829899     34.788470733013890

Table A2: Group Statistics
                                          Function 1
Working Capital/ total Assets             .621
Total Liabilities/ Total Assets (a)       -.401
Long Term loan / Total Assets             -.359
EBIT/ Total Assets                        .359
Sales/ Total assets (a)                   .335
Current liability/ Total assets (a)       -.310
Quick Assets/ Current Liabilities (a)     .274
Current liability /Total Equity (a)       .256
Cash / current Liability (a)              .251
Quick assets / Total Assets (a)           .238
Cash / Total assets (a)                   .238
Profit / Total Equity (a)                 -.214
Reserves/ Total Assets (a)                -.212
Total equity/ Total Assets (a)            .176
Total equity/ Total Liability (a)         .173
Profit / Total Assets (a)                 -.146
Total Liability/ Total Equity (a)         .138
Current Assets/ Current Liabilities (a)   .119
Cash flow/ Current Liability (a)          -.116
Investments/ Working Capital (a)          -.107
Cash Flow/ Total Liabilities (a)          -.099
Sales/ Total Equity (a)                   .093
Long Term Loan / Total Equity (a)         .092
Current Assets/ Total Assets (a)          .079
Cash Flow/ Total Assets (a)               -.070
Fixed assets/ Total equity (a)            .069
Fixed Assets / Total assets               .059
EBIT/ Total equity (a)                    -.008
Table 3A: Structure Matrix
The structure matrix contains the within-group correlations of each predictor variable with the
canonical function and provides a means of studying the usefulness of each variable in the
discriminant function. The strongest correlations with function 1 occur for the variables entered
earlier in the stepwise analysis, led by Working Capital/Total Assets.
Figure 1: Canonical Distribution Function
Figure 2: Canonical Distribution Function
doc_948919332.docx
presents the report of corporate failure in India using multiple discriminant analysis using certain key financial ratios as inputs. Results prove that the model is accurate in predicting the failure of companies based on publicly available information like the Cash Flow, Balance sheet and the Profit and Loss account of the company
A
REPORT ON
„PREDICTION OF CORPORATE FAILURE IN
INDIA: A MULTIPLE DISCRIMINANT
ANALYSIS APPROACH?
PREFACE
Corporate failure is a major economic and social problem, which has an adverse impact on
entrepreneurship, production and supply of goods, prices, employment and so on. Corporate
failure directly concerns the shareholders, employers, bankers, customers and others who have
direct or indirect stake in the company. Industrial sickness is as much of a national problem as
for a company and industry experiencing this menace and hence being able to predict this
sickness becomes very important.
Owing to the criticality of this research, this has been widely researched by eminent finance
scholars all over the world. But the amount of research carried out on the Indian companies
remains woefully inadequate. Predictive models developed in India primarily concentrate on
companies which have defaulted on their bond payments predominantly. While being unable to
service the bondholders is one of the primary indicators of a company going sick, it needs to be
kept in mind that the trading volume in Indian equity market is almost 10 times of the trading
volumes in the debt market. This shows that the debt markets in India are highly
underdeveloped. This has formed our basis for researching on companies which have been
delisted for BSE for winding up.
Globally such models have been created by using multiple discriminant analysis using certain
key financial ratios as inputs. Our model has been created along similar lines, taking into account
the economic cycle too. The results have also been compared with the results obtained by the
logistic regression model. The results prove that the model is accurate in predicting the failure of
companies based on publicly available information like the Cash Flow, Balance sheet and the
Profit and Loss account of the company.
TABLE OF CONTENTS
CHAPTER 1- INTRODUCTION ......................................................................................................................... 5
1. Corporate Failure in India: The Signals, Symptoms and Causes ...................................................... 7
1.1 The Symptoms ................................................................................................................................ 8
1.2 The Causes ................................................................................................................................ 9
CHAPTER 2: LITERATURE REVIEW .................................................................................................... 12
CHAPTER 3- RESEARCH METHODOLOGY ........................................................................................ 23
3.1Objectives .......................................................................................................................................... 23
3.1.1Primary Research Objective ........................................................................................................ 23
3.1.2.Secondary Research Objectives ................................................................................................. 23
3.2 Scope of Research ............................................................................................................................. 23
3.3.Universe of study .............................................................................................................................. 24
3.4 Sample of Study ................................................................................................................................ 24
3.5 Data Collection ................................................................................................................................. 24
3.6 Limitations of Study ......................................................................................................................... 25
3.7 Tools Used ........................................................................................................................................ 26
3.7.1 Multiple Discriminant Analysis ................................................................................................. 26
3.7.2 Logit Model ............................................................................................................................... 30
Chapter 4 -ANALYSIS AND INTERPRETATION .................................................................................. 32
4.1 Analysis............................................................................................................................................. 32
4.2 Classification Statistics ..................................................................................................................... 40
4.3 Model Testing: .................................................................................................................................. 40
4.4 Logistic Regression Data Analysis ................................................................................................... 41
CHAPTER 5 - CONCLUSION AND FINDINGS ..................................................................................... 43
5.1 Conclusion & Findings ..................................................................................................................... 43
REFERENCES ................................................................................................................................................ 45
APPENDIX .................................................................................................................................................... 47
TABLE OF TABLES
TABLE 2.1: COMMONLY USED FINANCIAL RATIOS TO PREDICT CORPORATE FAILURE .................... 16
TABLE 2.2: PREDICTIVE POWERS OF STATIC AND DYNAMIC MODELS ............................................ 17
TABLE 3.1: INPUTS FOR THE MDA ................................................................................................. 25
TABLE 4.1 : LOG DETERMINANTS .................................................................................................. 33
TABLE 4.2 : BOX?S M ..................................................................................................................... 33
TABLE 4.3: STEPWISE RESULTS ..................................................................................................... 34
TABLE 4.4 : VARIABLES IN THE ANALYSIS ..................................................................................... 35
TABLE 4.5 : WILK?S LAMBDA ........................................................................................................ 36
TABLE 4.6 : EIGENVALUES ............................................................................................................. 36
TABLE 4.7 WILK?S LAMBDA AND CHI SQUARE STATISTICS ........................................................... 37
TABLE 4.8 : STANDARDIZED CANONICAL DISCRIMINANT FUNCTION COEFFICIENTS ..................... 38
TABLE 4.9 : CANONICAL DISCRIMINANT FUNCTION COEFFICIENTS ............................................... 38
TABLE 4.10: FUNCTIONS AT GROUP CENTROIDS ............................................................................ 39
TABLE 4.11: CLASSIFICATION PROCESSING SUMMARY .................................................................. 40
TABLE 4.12: CLASSIFICATION RESULTS (A) ................................................................................... 40
TABLE 4.13: TESTING WITH DATA .................................................................................................. 41
TABLE 4.14: CASE PROCESSING SUMMARY ................................................................................... 41
TABLE 4.15: DEPENDANT VARIABLE ENCODING ........................................................................... 42
TABLE 4.16: CLASSIFICATION TABLE ............................................................................................ 42
TABLE A1: ANALYSIS CASE PROCESSING SUMMARY .................................................................... 47
TABLE A2: GROUP STATISTICS ...................................................................................................... 53
TABLE 3A: STRUCTURE MATRIX ................................................................................................... 55
TABLE OF FIGURES
FIGURE 1: CANONICAL DISTRIBUTION FUNCTION .......................................................................... 56
FIGURE 2: CANONICAL DISTRIBUTION FUNCTION .......................................................................... 57
CHAPTER 1- INTRODUCTION
Research in the corporate failure predictions has been gaining significant importance amongst
academics and practitioners since 1966 when Beaver made his first attempt to forecast corporate
failure in UK. The research has gained more importance with the recent bust in the economy in
2008. However, as the corporate failure problem still persists in modern economies, having
significant economic and social implications, and as an accurate and reliable method for
predicting the failure event has not yet been found for Indian companies, research interest is
likely to continue.
Beaver?s approach was „univariate? in that each ratio was evaluated in terms of how it alone
could be used to predict failure without consideration of the other ratios. Altman (1968) tried to
improve Beaver?s study by applying multivariate linear discriminant analysis (LDA), a method
that has been proved to suffer from certain limitations1. Researchers, however, seemed to have
ignored these limitations and continued extending Altman?s model, hoping to achieve higher
classification accuracy. Some examples of these attempts include among others: 1) discriminant
analysis for Indian sick companies(Edward I. Altman and Paul Narayanan (1996)), 2) principal
component analysis including macro economic factors in France(Eric Bataille, Catherine
Bruneau, Alexis Flageollet and Frederic Michaud), 3) developing structural model for
evaluating each firm?s default risk based on Merton?s model(Jorge A Chan Lau and Toni
Gravelle),4)developing Dynamic Ratio-Based Model for signalling Corporate Collapse(Ghassan
Hossari (2002)), 5) developing modified minimum sum of deviations model using the data from
Private Turkish Commercial Banks(International Research Journal of Finance and Economics
ISSN 1450-2887 Issue 12 (2007)), 6) identifying the causes behind the failures of virtual banks
using the Probit methodology, 7) using LPM, Logit, Probit and Discriminant Analysis for
predicting Corporate failure of problematic firms in Greece (Demetrios Ginoglou, Konstantinos
Agorastos, Thomas Hatzigagios), 8) conditional probability analysis approach for UK
industries(Applied Financial Economics, L. LIN and J. PIESSE Department of Banking and
Finance, National Chi-Nan University). Nevertheless, none of these attempts accomplished
higher statistically significant results than Altman?s earlier work and moreover, in the majority of
cases, the practical application of these models presented difficulties due to their complexity.
Nonetheless, failure prediction researchers did not give up and continued to employ various
classification techniques, always hoping for the discovery of the „perfect? model. The most
popular of these techniques are recursive partitioning, survival analysis, neural networks and the
human information processing approach. Their results indicated that no superior method has
been found even though the failure prediction accuracy varied depending on the prediction
method applied.
This study employs two techniques to predict corporate failures in India. The first one is Multiple
Discriminant Analysis. Multiple Discriminant Analysis offers an intuitive representation of
statistical results, thus making it possible to easily interpret results without a deep understanding
of the statistical underlying principles. The second one is Logistic Regression. Finally the
predictive capabilities of both the models are compared to find out the most appropriate model
with highest accuracy for prediction. As work by many researchers has proved that distress
prediction models are fundamentally unstable, in that the coefficients of a model will vary
according to the underlying health of the economy (Moyer, 1977; Mensah, 1984), hence
stressing the need that the model derivation should be as close in time as possible to the period
over which predictions are to be made (Keasy and Watson, 1991), a recent data set (1995-2005)
i.e. the boom period of Indian industrial companies (both failed and healthy) is used.
The study proceeds as follows. Chapter 2 provides literature review of the studies already done
in this field; research methodology is explained in chapter 3; chapter 4 includes analysis and
interpretation; chapter 5 reports conclusion and empirical findings of the study.
1. Corporate Failure in India: The Signals, Symptoms and Causes
Corporate failure is a major economic and social problem, which has an adverse impact on
entrepreneurship, production and supply of goods, prices, employment and so on. Corporate
failure directly concerns the shareholders, employers, bankers, customers and others who have
direct or indirect stake in the company. Industrial sickness is as much of a national problem as
for a company and industry experiencing this menace. In developing countries like India it is
very important that the economy is supported by strong and stable industrial growth. Corporate
failures not only counter the interest of many parties concerned but also lead to lack of
confidence in economic growth for foreign investments. It is in the interest of everyone that these
corporate failures should not take place and for that proper forecasting models have to be
developed.
A company cannot fail all of a sudden. The signals of sickness should be identified as soon as
possible to take counter measures. The warning signals of sickness may differ from enterprise to
enterprise depending upon the stage of its development but the people around can certainly
discern these. The signals then go on to become symptoms of sickness which lead to the failure.
The symptoms of sickness are related to various causes. These signals and symptoms are a great
source of information to companies and financial institutions for prediction, prevention and
control of sickness.
There have been various studies like Argenti?s (1976) „A Study on Corporate Failures? is
analysis of failure. The study has a dynamic approach and traces the firm?s path from health to
failure. According to the study there are three trajectories of failure. Type I refers to a small
business whose performance does not rise beyond poor and fails between 2-3 years of time,
mainly due to serious cost estimation errors. Type II are young companies failing which are
growing at supernormal pace and does not have time to stabilize themselves and Type III failures
refer to mature companies which have been operating since decades.
(1)
(1) Tracing the Trajectories of Sickness – a diagnostic tool for Corporate Turnaround, Dr. S. Pardhasaradhi, Associate Professor, Dept. of
Business Management, O.U.
1.1 The Symptoms
There can be many symptoms of corporate failure like
:
(2)
- Delay or default in Payment to Suppliers
- Irregularity in bank Account
- Delay or default in Payment to Banks
- Frequent Requests for Credit
- Decline in Capacity Utilization
- Low Turnover of Assets
- Poor Maintenance of P & Machinery
- Inability to take trade discount
- Excessive Manpower Turnover
- Extension Of Accounting Period
- Misrepresentation in Fin. statements
- Decline In price of Shares & Debenture
The signals and symptoms are a great source of information to companies and financial
institutions when it comes to prediction and prevention of sickness.
(3)
Firstly, signals from the
sick companies need to be identified. Srivastava (1986) states that a large number of signals are
displayed by failing units initially in several functional areas, viz., short term liquidity problems,
revenue losses, operating losses and overuse of external credit until it reaches a stage where it is
over burdened with debt and not being able to muster sufficient funds to meet its obligation.
These signals then merge with the symptoms related to the root cause of the problem.
Identifying signals and symptoms is a part of process to identify the root causes of the failure.
?
(2) Prediction of corporate failure: Formulation of an Early Warning Model, Scholor Bharat Tiwari, Jamia Millia Islamia University,
2004
?
(3) Srivatasava, S.S. and Yadav, R.A., Management and Monitoring of Industrial Sickness, Concept Publishing Company, New Delhi,
1986.
1.2 The Causes
The causes of sickness are basically related to the disorder in any one or more of the functional
systems within the unit, viz., Production, Finance, Marketing and Personnel. Again external
constraints may also adversely affect the functioning of the four main functional systems, if the
corporate management is unable to tackle the adverse changes. Some of these factors can be
identified as follows:
(i) External Factors
- Competition
- Change In Govt. Regulations
- Scarcity Of Inputs
- New technology
- Shift In Consumers Preference etc.
(ii) Internal Factors
- Managerial Incompetence
- Structural Rigidity
- Lack of Leadership etc.
The external factors are not much under our control but during the course of business they have
to be given due cognizance. But the internal factors are totally under our control. The internal
factors can be drilled down for a proper assessment of the root cause as follows:
Managerial Incompetence
In terms of production following factors can be identified:
? Improper Location
? Wrong Technology
? Uneconomic Plant Size
? Unsuitable P & Machinery
? Inadequate R&D
? Poor Maintenance
In terms of marketing following factors can be indentified:
? Inaccurate Demand Projections
? Improper Product Mix
? Wrong Product Positioning
? Irrational Price Structure
? Inadequate Sales Promotion
? High Distribution Costs
? Poor Customer Service
In terms of finance following factors can be identified:
? Wrong Capital Structure
? Bad Investment Decisions
? Weak Budgetary Control
? Inadequate MIS
? Poor Mgt. Of Receivables
? Bad Cash Management
? Strained Relations with
? Capital Suppliers
? Improper Tax Planning
In terms of personnel following factors are identified:
? Ineffective Leadership
? Bad Labour Relations
? Inadequate Human Relations
? Over Staffing
? Weak Employee Commitment
? Irrational Compensation Structure
Out of all the reasons 4 major reasons have been identified:
a.) Life Cycle Decline: Every company goes through different phases of life cycle i.e.
introduction, growth, maturity and decline. As new technologies emerge, the growth
pattern shift and new industries and firm appears and prosper. At the same time, the older
ones become less competitive and lose their real or relative advantage They lose their
dynamism and their potential to generate adequate return on investment as they
eventually slow down, they are merged into other companies, are bought out or stop
operating altogether.
b.) Trapped by Past Success: The things that drive success i.e. being focused, tried and true
strategies, confident leadership, galvanized corporate cultures and especially the interplay
of all these elements also cause decline if not channelized properly in the interest of
company.
c.) Inappropriate Mental Models: One of the models is to just consider present information
and ignore environmental changes e.g. IBM focussed attention on mainframe computers
but lost business to Apple and Compaq in personal computing. The second model is
considering environmental changes as temporary fad. E.g. Singer sewing machines sales
dipped because they could not believe the environment change.
d.) Rigidity in Response to Crises: Rigid Posture decreases chances of successful adaptation
and survival. The company management should be flexible and ready to accept the
changes as well as incorporate them in the way company is being operated and managed
to avoid any corporate failures.
CHAPTER 2: LITERATURE REVIEW
A literature review is a body of text that aims to review the critical points of current knowledge
and or methodological approaches on a particular topic. Literature reviews are secondary
sources, and as such, do not report any new or original experimental work.
Most often associated with academic-oriented literature, such as theses, a literature review
usually precedes a research proposal and results section. Its ultimate goal is to bring the reader
up to date with current literature on a topic and forms the basis for another goal, such as future
research that may be needed in the area.
Edward I. Altman and Paul Narayanan (1996) “Business Failure Classification Models: An
international Survey” developed a Discriminant analysis model for identifying Sick companies
in India. They referred to sick companies as companies that were kept in operation even after
incurring losses and had used the IDBI definition which said sick companies suffered from:
? Cash losses for a period of 2 years, or if there was a continuous erosion of net worth
? 4 successive defaults on it?s on its debt service obligations
? Persistent irregularity in the use of credit lines
? Tax payments in arrears for one to two years
Altman et al carried out research on 18 sick and 18 healthy companies all of which were publicly
traded. Data used were from period between 1976 and 1995. The companies were drawn from
cement, electrical, engineering, glass, paper and steel industries. The Discriminant analysis
model had been developed based on the significant financial ratios calculated for each of the firm
in question. According to the output model, Cash Flow/Total Debt turned out to be the most
important factor whereas Sales/Total Assets turned out to be the least important variable.
Eric Bataille, Catherine Bruneau, Alexis Flageollet and Frederic Michaud” Business cycle
and Corporate Failure in France: Was there a link?” aimed to extract cyclical factors from
companies? data used to build the default score functions and then from the functions
themselves. The method used by the Bataille et al was the “Principal Component analysis” in the
context of large number of variables and small time periods. Factorial structure was used to
immunize the score functions and related decisions against the cyclical variations in the state of
the economy. This was because any linear classification model was developed with a cross
section of one year and in order to be robust, it needed to be adjusted over a period of time. In
certain cases, a complete re estimation of the model also might become necessary. In some cases,
the nature of the corporate might have changed significantly or related structural changes in the
economy had not been included in the model. In some other cases the score function remained
valid but the Discriminant threshold needed adjustment. Three macroeconomic series were
chosen by the authors: Annual GDP of France by value, output gap of French GDP by volume
obtained by Hodrick-Prescott filter, and industrial production capacity utilization and their effect
on both failing and non-failing firms were tested by the authors. The results indicated a very
strong similarity of the common factors indicating that the state of economy influences failing
and non-failing firms in a similar way, whatever was the sector chosen. Hence it was learnt from
this paper, that any scoring model that we develop need to be adjusted for the business cycles too
as static analysis will not be of much use in predicting the future defaults.
Jorge A. Chan-Lau and Toni Gravelle, “END: A New Indicator of Financial and Non-
Financial Corporate Sector Vulnerability”, proposed END (Expected Number of Defaults)
as an indicator of corporate sector vulnerability. The indicator was based on
forward-looking information embedded in equity prices rather than on historical information such as
financial ratios. Because equity prices were updated on a daily basis, the END
indicator allowed real-time monitoring of potential distress conditions in the corporate sector,
and it had been applied successfully in Korea, Malaysia and Thailand, among others. The END
indicator was constructed using a two-step approach. In the first step, a structural model for
evaluating each firm's default risk was developed based on Merton's model. The Merton model rests
on the observation that shareholders hold a call option on the asset value of the firm:
when the asset value of the firm falls below the face value of its debt, the firm is insolvent and
the shareholders' call option is out of the money. Hence, basic option pricing techniques can be
used to value the debt and equity issued by a firm. Furthermore, readily available
balance-sheet information and equity prices can be used to infer the risk-neutral default
probability and the distance-to-default of the firm, a normalized measure of the gap between
the firm's asset value and the nominal value of its liabilities. While appealing, Merton's original
model was unable to capture short-term default risk, since continuity assumptions on the asset
value stochastic process ruled out jump-like default events. However, the rapid
demise of “fallen angels”, that is, investment-grade corporations that went bankrupt in a matter
of days, suggested that default events might be better characterized by jump processes than by
continuous ones. The practitioner's model adopted in their approach corrected for this deficiency by
introducing uncertain recovery rates in order to capture jump-like default events. The second step
was to assess the probability that a subset of the firms analyzed would default during a specified time
horizon. During crisis periods, it seemed reasonable to assume that a large number of defaults
must be driven by a common negative shock affecting the corporate sector rather than by firm-
specific factors. An underlying assumption of the structural model estimate used to calculate the END
was that corporate valuations were driven by an unobserved common factor, to which each firm's
value was correlated to varying degrees. To measure the correlation of each firm's estimated
probability of default with this common factor, the authors used principal components analysis.
This method assumes that a limited number of unobserved variables (or factors) explain the total
variation of a larger set of variables: the higher the degree of co-movement across the individual
firm default probability time series, the fewer the principal components (factors) needed to explain
a large portion of the variance of the original series. In the case where the original variables were
identical (perfectly collinear), the first principal component would explain 100 percent of the
variation in the original series. Alternatively, if the series were orthogonal to one another (i.e.,
uncorrelated), it would take as many principal components as there were series to explain all the
variance, and no advantage would be gained by looking at common factors, as none existed.
After computing the default probabilities for each firm, the authors computed the amount of
variation explained by the first two principal components of the 125 firms
in Korea, the 148 firms in Malaysia, and the 79 firms in Thailand during the sample period.
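As an illustration of the first step, the distance-to-default under Merton's framework can be computed from equity data alone. The following is a minimal Python sketch, assuming lognormal asset dynamics and a one-year horizon; the function name, inputs and solver choice are ours, not the authors' implementation.

```python
# A minimal sketch of the Merton-style first step described above: inferring
# a firm's asset value and volatility from its equity value and equity
# volatility, then computing the distance-to-default. Inputs are illustrative.
import numpy as np
from scipy.optimize import fsolve
from scipy.stats import norm

def distance_to_default(E, sigma_E, D, r=0.05, T=1.0):
    """E: market value of equity, sigma_E: equity volatility,
    D: face value of debt, r: risk-free rate, T: horizon in years."""
    def equations(x):
        V, sigma_V = x
        d1 = (np.log(V / D) + (r + 0.5 * sigma_V**2) * T) / (sigma_V * np.sqrt(T))
        d2 = d1 - sigma_V * np.sqrt(T)
        # Equity as a call option on assets, plus the volatility link.
        eq1 = V * norm.cdf(d1) - D * np.exp(-r * T) * norm.cdf(d2) - E
        eq2 = V * norm.cdf(d1) * sigma_V - sigma_E * E
        return [eq1, eq2]

    V, sigma_V = fsolve(equations, x0=[E + D, sigma_E * E / (E + D)])
    dd = (np.log(V / D) + (r - 0.5 * sigma_V**2) * T) / (sigma_V * np.sqrt(T))
    return dd, norm.cdf(-dd)   # distance-to-default, risk-neutral PD

# e.g. equity of 40, equity volatility 60%, debt face value 60:
dd, prob_default = distance_to_default(E=40.0, sigma_E=0.6, D=60.0)
```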
Mohamed Ariff (University of Tokyo & Bond University) and J. Ratnatunga (Monash
University) (2008), “Do accounting and finance tools serve governance?”, presented a brief
review of the literature on corporate governance and proposed a corporate governance
framework. One of the objectives of the paper, and the more ambitious one, was to address the
role of the accounting and finance disciplines in serving corporate governance. The use of some
accounting and finance tools was tested empirically to see whether they would have alerted
management, auditors and regulators, as well as investors, to the impending collapse of failed
firms ahead of time. The model tested was developed by Edward Altman in 1968. In that
model, discriminant analysis was applied to two groups of financial ratios: one group derived
from the last set of accounts of companies prior to failure, and the other from the accounts of on-
going companies. The statistical procedure was designed to produce a single score (the Z score)
which could be used to classify a company as belonging to the failed group or the on-going
group (see Robertson and Mills, 1991). The final model consisted of four ratios that, when
combined in a specific manner, were able to discriminate between the bankrupt and the non-
bankrupt companies in his study. The variables, together with their respective weights, were
as follows:
Z-Score = 6.56 X1 + 3.26 X2 + 6.72 X3 + 1.05 X4
where X1 = Working capital / Total assets
X2 = Retained earnings / Total assets
X3 = Profit before interest and tax / Total assets
X4 = Net worth / Total liabilities
To make the model operational, Altman combined the failed group and the on-going
group and ordered the companies by their individual Z scores. It was then possible to specify two
limits:
- an upper limit, above which no failed companies were misclassified;
- a lower limit, below which no on-going companies were misclassified.
The area between the upper (2.60) and lower (1.10) limits was what Altman described as the
“zone of ignorance” or “grey area”, where a number of failed companies and/or on-going
companies could be misclassified.
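For illustration, the scoring and zoning logic above is simple to express in code. The following minimal Python sketch uses the published weights and limits; the function names and the sample inputs are hypothetical.

```python
# The four-variable Z-score above, with Altman's upper (2.60) and lower
# (1.10) limits used to flag the "grey area". Names and inputs are ours.
def z_score(wc_ta, re_ta, ebit_ta, nw_tl):
    """X1 = Working capital/Total assets, X2 = Retained earnings/Total assets,
    X3 = EBIT/Total assets, X4 = Net worth/Total liabilities."""
    return 6.56 * wc_ta + 3.26 * re_ta + 6.72 * ebit_ta + 1.05 * nw_tl

def classify(z):
    if z > 2.60:
        return "on-going"        # above the upper limit
    if z < 1.10:
        return "failed"          # below the lower limit
    return "zone of ignorance"   # grey area: either group possible

# A hypothetical firm; z is about 1.78, so it lands in the grey area.
print(classify(z_score(0.10, 0.05, 0.08, 0.40)))
```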
Ghassan Hossari (2002), “A Dynamic Ratio-Based Model for Signalling Corporate
Collapse”, included only those companies that had appointed an administrator, filed for
bankruptcy, gone into liquidation or receivership, failed to lodge listing fees, or been wound up. As a
result, 37 such companies were identified among the 413 that were de-listed from the
Australian Stock Exchange (ASX), and a similar number of non-collapsed companies was identified.
Hossari used 28 ratios:
Profit / Total Assets; Retained Earnings / Total Assets; Total Equity / Total Assets; Long Term Loan / Total Assets
Current Assets / Current Liabilities; Sales / Total Assets; Quick Assets / Total Assets; Cash Flow / Current Liabilities
Total Liabilities / Total Assets; Cash / Total Assets; Total Equity / Total Liabilities; Current Liabilities / Total Assets
Working Capital / Total Assets; Current Assets / Total Assets; Cash / Current Liabilities; Current Liabilities / Total Equity
EBIT / Total Assets; Quick Assets / Current Liabilities; EBIT / Total Equity; Investments / Working Capital
Cash Flow / Total Liabilities; Cash Flow / Total Assets; Fixed Assets / Total Assets; Long Term Loan / Total Equity
Total Liabilities / Total Equity; Profit / Total Equity; Fixed Assets / Total Equity; Sales / Total Equity
Table 2.1: Commonly used financial ratios to predict Corporate Failure
Static Model:
The assumption was that the same financial ratios were capable of signaling corporate collapse
over multiple time periods; therefore, the same model was used to signal collapse for each year
in the sample period.
Dynamic Model:
A suitable model was one that reflected a heuristic behavioral framework. Specifically, it was a
dynamic model: dynamic in the sense that it did not rely on a fixed assortment of financial
ratios for signaling the event of collapse over multiple time periods. A separate formulation was
estimated for each year in the sample period, 1996 to 2001.
A summary of the overall predictive power and occurrence of Type I error for both the Static
and Dynamic MDA-based models (1996 to 2001):

Period | Static: Overall | Static: Type I Error | Dynamic: Overall | Dynamic: Type I Error
1996 | 72.7% | 45.5% | 100% | 0%
1997 | 66.7% | 45.8% | 70.8% | 58.3%
1998 | 43.3% | 70% | 78.3% | 13.3%
1999 | 67.9% | 42.9% | 85.7% | 21.4%
2000 | 74.6% | 33.8% | 86.2% | 18.5%
2001 | 90.9% | 18.2% | 100% | 0%
Table 2.2: Predictive powers of Static and Dynamic Models
A very high Type I error was undesirable, because the erroneous classification of a
collapsed company as non-collapsed was a costly mistake, whereas the erroneous classification
of a non-collapsed company as collapsed was far less so. It was expected that the occurrence of
Type I error would be reduced by using a dynamic model; the results above bear this out in every
year except 1997.
International Research Journal of Finance and Economics, ISSN 1450-2887, Issue 12 (2007),
“Bank Failure Prediction Using Modified Minimum Deviation Model”, applied a new model,
the modified minimum sum of deviations model, using data from private Turkish commercial
banks, compared the results with those of the classical minimum deviation model (in which
factors formed by factor analysis were used), and discussed the validity of the models. In this
model, N firms were evaluated using m independent variables and a binary classification was
made. It was expected that the weighted averages of the independent variables of successful
firms would be greater than the break point determined in the model, and those of unsuccessful
firms would be smaller. Classifications obtained at the conclusion of the analysis might
sometimes differ from the groupings determined at the beginning of the discriminant analysis.
Misclassification of a successful unit meant that the weighted average value calculated for the
related unit would be smaller than the break point; in other words, the condition stated in equations
1 and 3 would be violated for a misclassified successful unit. In order to prevent this violation,
the equation had to be rearranged, that is, the possibility of misclassification had to be
added to the equation. For this purpose, 0 was added to the equation in the case of correct
classification and, in the case of misclassification, a deviation variable equal to the
distance between the related unit and the break point was added. The aim of the linear
programming model to be formed was to minimize the sum of these deviation variables. The solution
of the model gives the optimum break point, the value of the deviation variable for each unit, and
the optimum values of the weights of the independent variables. The method suggested in
this study was to determine the ratios to be selected within the mathematical model defined
above. Some arrangements were necessary for the model to perform this function. Firstly, the
ratios to be used had to be grouped; for example, in bank failure prediction, a proper
classification of the ratios would be: capital adequacy ratios, profitability ratios,
liquidity ratios, ratios related to income and expenditure structure, and ratios related to the quality
of assets. After the ratios were divided into m groups, constraints enabling the model
to select the most proper ratio from each ratio group were added.
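The linear program described above can be sketched directly. The following Python formulation, assuming scipy is available, minimizes the sum of the deviation variables subject to the break-point constraints; the normalization constraint on the weights is our addition (to rule out the trivial all-zero solution), and the variable layout is illustrative rather than the paper's exact model.

```python
# A minimal sketch of the minimum-sum-of-deviations classifier: variable
# vector is [w_1..w_m, c, d_1..d_N], where w are ratio weights, c the
# break point, and d the per-firm deviation variables.
import numpy as np
from scipy.optimize import linprog

def msd_classifier(X_good, X_bad):
    """Fit weights w and break point c so that w.x >= c for successful
    firms and w.x <= c for failing firms, minimizing total deviation."""
    X_good, X_bad = np.asarray(X_good), np.asarray(X_bad)
    m = X_good.shape[1]
    n_g, n_b = len(X_good), len(X_bad)
    n_dev = n_g + n_b
    n_var = m + 1 + n_dev

    # Objective: minimize the sum of the deviation variables.
    cost = np.zeros(n_var)
    cost[m + 1:] = 1.0

    # Successful firms: -w.x + c - d_i <= 0  (i.e. w.x + d_i >= c).
    A_g = np.hstack([-X_good, np.ones((n_g, 1)), np.zeros((n_g, n_dev))])
    for i in range(n_g):
        A_g[i, m + 1 + i] = -1.0
    # Failing firms: w.x - c - d_i <= 0  (i.e. w.x - d_i <= c).
    A_b = np.hstack([X_bad, -np.ones((n_b, 1)), np.zeros((n_b, n_dev))])
    for i in range(n_b):
        A_b[i, m + 1 + n_g + i] = -1.0
    A_ub = np.vstack([A_g, A_b])
    b_ub = np.zeros(n_g + n_b)

    # Normalization (our addition) to rule out the trivial w = 0 solution.
    A_eq = np.zeros((1, n_var)); A_eq[0, :m] = 1.0
    b_eq = np.array([1.0])

    bounds = [(None, None)] * (m + 1) + [(0, None)] * n_dev
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.x[:m], res.x[m]   # weights, break point

# Synthetic example: two well-separated groups of firms.
rng = np.random.default_rng(1)
w, c = msd_classifier(rng.normal(1.0, 1.0, (30, 5)),
                      rng.normal(-1.0, 1.0, (30, 5)))
```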
Krishnan Dandapani and Edward R. Lawrence (Department of Finance and Real Estate,
College of Business Administration, Florida International University, Miami, Florida,
USA) (2001), “Virtual bank failures: An Investigation”, identified the causes behind the failures
of virtual banks. Using the probit methodology, Dandapani and Lawrence examined the
components of the standard bank net income model (net income, interest income
(II), interest expense (IE), the provision for loan losses (PLL), non-interest income (NII) and
non-interest expense (NIE)) for the surviving virtual banks and those which failed.
They found that the NII and NIE of the successful banks and the failed banks were
statistically different in the period before the failures. They ran the probit analysis on the failed
virtual banks and the failed brick and mortar banks and found that the IIs of the two were
significantly different. They further explored the NII and NIE of the surviving banks and the
failed banks. Consistent with previous research, they found that the brick and mortar banks failed
due to bad asset quality, but the failure of the virtual banks was mainly due to high NIEs. To
investigate what caused some of the virtual banks to fail, they treated bank failure as the
dependent variable and regressed it on the constituents of net income, i.e. the II, the IE, the
PLL, the NII and the NIE. Since the dependent variable could only take the values 0 (for
the banks which had failed) and 1 (for the active banks), they used the following probit
regression model for the parametric analysis:

Active/inactive bank = w1 + w2*II + w3*IE + w4*PLL + w5*NII + w6*NIE

The significance of a parameter wi (i = 2 to 6) indicated that independent variable i was
statistically different for the active banks and the failed banks. When comparing the failed
virtual banks with the failed brick and mortar banks, they used 0 as the dependent variable for the
failed virtual banks and 1 for the failed brick and mortar banks. The probit results
showed a statistically significant difference between the NIE and NII of the surviving virtual banks
and the virtual banks that failed. In the period March 2000 to 2002, during which most of the
currently inactive banks failed, the net II as a percentage of total assets of the currently inactive
banks was higher than that of the active banks, indicating that the net II was not responsible for
the failure of the currently inactive brick and mortar and virtual banks. Indeed, on this measure
the currently inactive banks performed better than the active banks over almost the entire study
period. A plot of the burden for the active and currently inactive banks
showed the burden for the currently inactive virtual banks to be higher than the burden for
the currently inactive brick and mortar banks and the currently active virtual banks, especially in
the period from March 2000 to 2002, which witnessed most of the failures of virtual banks.
A plot of the NII and NIE for the active and currently inactive virtual banks showed that the NII of
the currently inactive banks as a percentage of total assets was higher than that for the now active
banks; however, the NIEs of the failed banks were substantially higher than those of the now active
banks. Even though the failed banks generated high NIIs, these could not compensate for the losses
due to very high NIEs. The losses due to high NIE led to an increase in the PLL for the currently
inactive banks. A plot of the PLL for the active and currently inactive banks showed that the PLL
as a percentage of total assets was substantially higher for the currently inactive banks in the
period from March 2000 to 2002, when most of the currently inactive virtual banks
failed; it also showed that the PLL for the currently inactive brick and mortar banks was
higher than the PLL for the currently inactive virtual banks, indicating a large accumulation of bad
debts. They concluded that the failure of the brick and mortar banks was due to poor asset quality,
whereas the virtual banks failed due to very high NIEs.
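A hedged sketch of this probit specification, using statsmodels in place of the authors' software, is shown below; the DataFrame, column names and synthetic data are illustrative assumptions, not the paper's data.

```python
# Probit of bank status on the net income components described above.
# y is 0 for failed banks and 1 for active banks, as in the paper.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_bank_probit(df: pd.DataFrame):
    """df holds II, IE, PLL, NII, NIE (e.g. scaled by total assets)
    and a 0/1 'active' indicator."""
    X = sm.add_constant(df[["II", "IE", "PLL", "NII", "NIE"]])
    result = sm.Probit(df["active"], X).fit(disp=0)
    # A small p-value marks a component that differs statistically
    # between failed and active banks.
    return result.params, result.pvalues

# Example with synthetic data: failures driven by high NIE.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(100, 5)),
                  columns=["II", "IE", "PLL", "NII", "NIE"])
df["active"] = (df["NIE"] + rng.normal(size=100) < 0).astype(int)
params, pvals = fit_bank_probit(df)
```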
Demetrios Ginoglou, Ph.D., Konstantinos Agorastos, Ph.D., and Thomas Hatzigagios, Ph.D.
(University of Macedonia Economic and Social Sciences, Thessaloniki, Greece), “Predicting
Corporate Failure of Problematic Firms in Greece with LPM, Logit, Probit and
Discriminant Analysis”, used logit and probit models of corporate failure to generate the
probability of failure as a financial risk measure in Greece. The study examined how reasonable
the division of firms into healthy and problematic was, predicted business failures with the
LPM, logit and probit models, and compared the three. The study used the Morrison model for
discriminant analysis, for example for classification into bankrupt and non-bankrupt firms; this
classification was based on the chosen financial ratios and the linear combination of these that best
discriminates between the two groups. A linear probability model (LPM) was also used, which
is a regression of a dummy (dichotomous) dependent variable on a set of
explanatory variables:

Y = b0 + b1X1 + b2X2 + ... + bnXn + u

where X is the set of explanatory variables, Y = 0 for healthy firms and Y = 1 for bankrupt
firms. E(Y|X) is then the conditional probability of a firm going bankrupt, given the set of
explanatory variables represented by the X's. The logit and probit models were used to model the
conditional probability of bankruptcy as a function of the firm's debt-equity ratio. The SPSS
statistical package was used to estimate the models. The discriminant analysis results
identified Net Profit/Total Assets, Gross Profit/Total Assets, Total Debt/Stockholders' Equity and
(Current Assets - Short-term Debt)/Total Assets as the significant variables. The methods used
in the study were successful in classifying problematic and healthy firms in more
than 75 percent of cases. MDA turned out to be more advanced, but also more complicated, than
the LPM. MDA, however, suffers from the drawback that it checks the variables only after they
have been used in the model. Also, in a country like Greece, where economic conditions were not
stable, the variables used in a model were also not very stable; hence a control check of the
variables before they were used in each model was necessary. Discriminant analysis
did not allow such control, and hence the logit model gave better results in such cases.
L. Lin and J. Piesse (Department of Banking and Finance, National Chi-Nan University,
Taiwan; Management Centre, School of Social Science and Public Policy, King's College
London, UK; and University of Stellenbosch, Republic of South Africa) (2004),
Applied Financial Economics, “Identification of corporate distress in UK industrials: a
conditional probability analysis approach”, found that bankruptcy prediction models depend on
three factors: the model, the variable selection criteria and the optimal cut-off probability. The
variables selected reflected five features generally accepted in the literature as contributing to the
explanation of corporate failure.

F1: management inefficiency. Two ratios that reflect this were retained earnings/total assets and
profit after tax/total assets. Of these, the former was considered a better guide to a company's
cumulative longer-term profitability and the latter a short-term indicator.

F2: capital structure. Capital structure in the form of gearing ratios was used extensively as a
measure of corporate risk, and thus a gearing ratio, total liabilities/total assets, was included in
the study.

F3: insolvency. A direct cause of corporate failure was the inability of a company to meet debt
obligations. The choice between a cash-based and a working capital-based liquidity ratio was not
conclusive, and cash/current liabilities, change in net cash/total liabilities and working
capital/total assets were all used as surrogates for solvency.

F4: adverse economic effects. In this paper, the annual FTSE all-share index (FTSE) was used as
a measure of general economic conditions. It was thus interesting to examine whether the
failing firms in the sample were alone in performing badly in any particular period, or whether
there was an overall economic effect in that year that would have resulted in bankruptcy for the
more vulnerable firms.

F5: income volatility. Given the short history of many companies, the standard deviation of past
income was not very robust. Instead, a measure of income stability can be constructed, defined as:

(Income_t - Income_{t-1}) / (Income_t + Income_{t-1})

The choice of an optimal cut-off point required knowledge of (i) the costs of Type I and Type
II errors and (ii) the prior probabilities of failure and survival. The study first developed a
misclassification cost model. Of these variables, the two that had the greatest impact on
predicting bankruptcy were long-term profitability (the negative effect of retained earnings/total
assets) and gearing (the positive effect of total liabilities/total assets), noting that the counter-
intuitive signs reflected the fact that it was failure that was being modelled. The estimated
coefficients on income volatility and the market-based ratios were not significantly different
from zero at the 95% level. Two models were found to achieve high levels of accuracy: one that
emphasized classification based on short-term accounting criteria, and a second based on longer-
term financial performance.
CHAPTER 3- RESEARCH METHODOLOGY
Research is the systematic process of collecting and analyzing information (data) in order to
increase our understanding of the phenomenon about which we are concerned or interested.
Method is the systematic collection of data (facts) and their theoretical treatment through proper
observation, experimentation and interpretation. Thus research methodology explores to find out
the best suited method for analyzing the concerned topic.
3.1 Objectives
3.1.1 Primary Research Objective
To determine the model for predicting corporate failures in India, using multiple discriminant
analysis.
3.1.2 Secondary Research Objectives
- To determine the definition of default.
- To determine the sectors in which the maximum defaults have occurred.
- To find the defaulting companies in these sectors and their financial data.
- To find comparable successful companies operating in these sectors and their financial data.
- To determine the inputs for the MDA and develop the model.
3.2 Scope of Research
Corporate failure is a well-researched area in most countries, but surprisingly little work has
been done in this area in India. The literature review uncovered the fact that such studies have
primarily been conducted on companies which have defaulted on the principal or coupon
payments of their bonds.
However, the majority of India's investment in securities goes into the equity markets. Considering
this fact, it becomes important to conduct a similar study on listed firms which wind up, causing
huge losses to their shareholders. Given the broad relevance of this research, it is
paramount that the model uses information which is freely available to shareholders.
Thus, the scope of the research encompasses:
- developing a model specific to the Indian scenario;
- a model to predict corporate failure (delisting for winding up);
- input data for the model that are readily available to shareholders.
3.3 Universe of Study
The universe consists of all the firms which were delisted from the BSE for winding up, together
with their comparable listed BSE firms. The delisted firms' data can be obtained from the BSE
website (http://www.bseindia.com/about/datal/delist/a-delist.asp), which gives a list of 211
companies delisted since the 1970s.
3.4 Sample of Study
Studies in various countries indicate that financial ratios alone are not reliable indicators of
corporate failure, because the business cycle is also a major contributor. Hence, it was decided to
carry out the study over the Indian economy's boom period, from 1995 to 2005. This
reduced the universe of companies from 211 to around 50.
Further, financial data were not available for all of these 50 companies; based on the availability
of data, the sample set was reduced to 21 companies.
Comparables were decided based upon industry, firm size and period of operation. Each failed
company has been paired with its comparable, taking the sample size to 42.
3.5 Data Collection
Data were collected from the BSE website (the list of companies that wound up) and the
'Capitaline Plus' database (financial information on the sample companies). For the purpose of
analysis, 28 ratios were calculated and given as input to the Multiple Discriminant Analysis,
chosen to ensure that none of the financial parameters was ignored.
Profit / Total Assets; Reserves & Surplus / Total Assets; Total Equity / Total Assets; Long Term Loan / Total Assets
Current Assets / Current Liabilities; Sales / Total Assets; Quick Assets / Total Assets; Cash Flow / Current Liabilities
Total Liabilities / Total Assets; Cash / Total Assets; Total Equity / Total Liabilities; Current Liabilities / Total Assets
Working Capital / Total Assets; Current Assets / Total Assets; Cash / Current Liabilities; Current Liabilities / Total Equity
EBIT / Total Assets; Quick Assets / Current Liabilities; EBIT / Total Equity; Investments / Working Capital
Cash Flow / Total Liabilities; Cash Flow / Total Assets; Fixed Assets / Total Assets; Long Term Loan / Total Equity
Total Liabilities / Total Equity; Profit / Total Equity; Fixed Assets / Total Equity; Sales / Total Equity
Table 3.1: Inputs for the MDA
3.6 Limitations of Study
1. Selection of comparable firms: even though it has been ensured that they are from the same
industry, of the same size and operational during the same period, there is no way to know
whether the number of years for which a comparable firm has been operational has any effect
on failure prediction.
2. Period: the sample comes from the boom period of the Indian economy after liberalization,
i.e. 1995 to 2005. While this may broadly be classified as a boom period, ten years is a long
time for an economy to remain in the same state.
3. Qualitative factors: qualitative factors such as the integrity of the top management or the
CSR demonstrated by the organization may also act as predictors of failure. These factors are
tough to quantify, and we have made the rough assumption that they are already reflected in
the financials of the company.
3.7 Tools Used
The objective of the research is to formulate a model for predicting corporate failure in India
using Multiple Discriminant Analysis. But we have also developed a Logit model for prediction
and compared the results, to demonstrate the relative performance of MDA and Logit in
correctly predicting failures. Hence the basic tools used are:
- Multiple Discriminant Analysis
- Logit Analysis
3.7.1 Multiple Discriminant Analysis
Multiple discriminant analysis (MDA) is also termed Discriminant Factor Analysis and
Canonical Discriminant Analysis. It adopts a perspective similar to Principal Components
Analysis, but PCA and MDA differ mathematically in what they maximize: MDA maximizes the
differences between values of the dependent variable, whereas PCA maximizes the variance in all
the variables accounted for by the factor.
Geometrically, the rows of the data matrix can be considered as points in a multidimensional
space, as can the group mean vectors. Discriminating axes are determined in this space in such
a way that optimal separation of the predefined groups is attained. The first discriminant function
maximizes the differences between the values of the dependent variable. The second function is
orthogonal to it (uncorrelated with it) and maximizes the differences between values of the
dependent variable, controlling for the first factor; and so on. Each discriminant function is a
dimension which differentiates a case into categories of the dependent variable based on its values
on the independent variables. The first function will be the most powerful differentiating
dimension, but later functions may also represent additional significant dimensions of
differentiation.
Thus discriminant analysis is a technique for analyzing data when the criterion (dependent)
variable is categorical and the predictor (independent) variables are interval in nature. For
example, the dependent variable may be the choice of a brand of personal computer (A, B or C)
and the independent variables may be ratings of the attributes of PCs. The objectives of
discriminant analysis are as follows:
1. Development of discriminant functions, or linear combinations of the predictor
(independent) variables, which best discriminate between the categories of the criterion
(dependent) variable.
2. Examination of whether significant differences exist among the groups in terms of the
predictor variables.
3. Determination of which predictor variables contribute most to the intergroup differences.
4. Classification of cases into one of the groups based on the values of the predictor variables.
5. Evaluation of the accuracy of classification.
Discriminant analysis techniques are described by the number of categories possessed by the
criterion variable. When the criterion variable has two categories, the technique is known as two-
group discriminant analysis; when three or more categories are involved, it is referred to as
multiple discriminant analysis. Since in our case there are only two categories, namely the
successful and the failed companies, this research is best described as a two-group discriminant
analysis rather than a multiple discriminant analysis.
3.7.1.1 Discriminant Analysis Model
The discriminant analysis model involves a linear combination of the following form:

D = b0 + b1X1 + b2X2 + b3X3 + ... + bnXn

where D is the discriminant score, the b's are the discriminant coefficients or weights, and the
X's are the predictors or independent variables.
The coefficients or weights are estimated so that the groups differ as much as possible on the
values of the discriminant function. This occurs when the ratio of the between-group sum of
squares to the within-group sum of squares for the discriminant scores is at a maximum; any
other linear combination will result in a smaller ratio.
Variables and Data
D is a classification into two or more groups, and is therefore a grouping variable in the
terminology of discriminant analysis. That is, groups are formed on the basis of existing data and
are coded 0 or 1 according to whether the company is a failure or a success, similar to dummy
variable coding. The independent variables are continuous-scale variables and are used as
predictors of the group to which an object will belong. Therefore, to be able to use discriminant
analysis, we need data on D and the X variables from past records. Discriminant analysis is thus
a supervised learning technique, where the model is based on existing data, unlike clustering,
which is an unsupervised learning technique.
Predicting the group membership for a new data point
A model is built as a linear equation of the form shown earlier, and the coefficients of the
equation are used to calculate the discriminant score D. Depending upon the cutoff score for D,
which is usually the midpoint of the mean discriminant scores of the two groups, new points
are classified, as in the sketch below.
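The following is a compact sketch of this rule, using scikit-learn's LinearDiscriminantAnalysis rather than SPSS; the array names are illustrative, with rows as companies and columns as ratios.

```python
# Fit a two-group discriminant model and classify new cases using the
# midpoint of the two group centroids as the cutoff.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fit_and_classify(X_train, y_train, X_new):
    """y_train is 0 (failed) or 1 (successful); X_new holds new cases."""
    lda = LinearDiscriminantAnalysis()
    lda.fit(X_train, y_train)
    # Discriminant scores on the single canonical function.
    scores = lda.transform(X_new).ravel()
    # Group centroids on the discriminant function; their midpoint is the
    # usual cutoff when group sizes and priors are equal.
    centroids = lda.transform(lda.means_).ravel()
    cutoff = centroids.mean()
    hi_class = lda.classes_[int(np.argmax(centroids))]
    lo_class = lda.classes_[int(np.argmin(centroids))]
    return np.where(scores > cutoff, hi_class, lo_class), scores
```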
Accuracy of Classification
The confusion matrix in the output tells us the percentage of the existing data points which are
correctly classified by the model. This percentage is somewhat similar to the coefficient of
determination in a regression model, but it needs to be noted that it is based on applying the
model to the same data on which it was built; when applied to other data, this percentage will
generally go down.
Stepwise/Fixed Model
Stepwise discriminant analysis is analogous to stepwise multiple regression in that the predictors
are entered sequentially based on their ability to discriminate between the groups. An F ratio is
calculated for each predictor by conducting a univariate analysis of variance in which the
groups are treated as the categorical variable and the predictor as the criterion variable. The
predictor with the highest F ratio is the first to be selected for inclusion in the discriminant
function. A second predictor is added based on the highest adjusted or partial F ratio, taking into
account the predictor already selected. Each predictor thus selected is tested for retention based
on its relationship with the other predictors already selected.
The selection of the stepwise procedure is based on the optimizing criterion adopted. The
Mahalanobis procedure, for instance, is based on maximizing a generalized measure of the
distance between the two closest groups.
Relative Importance of the Independent Variables
The coefficients of the predictors in the discriminant function should ideally tell us which
predictor is more important in discriminating between the groups. But because the predictors are
measured in different units, comparing the absolute values of the coefficients would make no
sense. To overcome this problem, we must compare standardized discriminant coefficients.
These coefficients are adjusted for the different bases and can be compared directly: the higher
the standardized coefficient of a predictor, the higher the importance of that variable in
predicting failure. A sketch of one common standardization follows.
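One common standardization, offered here as a sketch rather than as SPSS's documented computation, multiplies each raw coefficient by the pooled within-group standard deviation of its predictor:

```python
# Standardize raw discriminant coefficients so they are comparable
# across predictors measured in different units.
import numpy as np

def standardize_coefficients(b, X_failed, X_success):
    """b: raw discriminant coefficients (without the constant);
    X_*: (cases x predictors) arrays for the two groups."""
    n1, n2 = len(X_failed), len(X_success)
    pooled_var = (((n1 - 1) * X_failed.var(axis=0, ddof=1)
                   + (n2 - 1) * X_success.var(axis=0, ddof=1))
                  / (n1 + n2 - 2))
    return b * np.sqrt(pooled_var)
```

Applied to the raw coefficients in Table 4.9 and the group standard deviations in Table A2, this reproduces the standardized values in Table 4.8: for example, 2.562 × 0.328 ≈ .841 for Working Capital/Total Assets.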
A Priori Probability of Classification into Groups
The discriminant analysis algorithm requires us to assign an a priori probability of a given case
belonging to one of the groups. There are two ways of doing this:
1. An equal probability can be assigned to all the groups; thus, in a two-group discriminant
analysis, a probability of 0.5 can be assigned to both groups.
2. If we know from experience that one group has a higher probability, we can assign that
probability to the group.
Since in this research any company has an equal probability of belonging to either of the two
groups, an equal figure of 0.5 has been assigned to both groups.
3.7.2 Logit Model
When the dependent variable is binary and there are several independent variables that are
metric, then in addition to two-group discriminant analysis one can also use OLS regression, or
the logit and probit models, for estimation. The data preparation for OLS regression, logit and
probit is similar in that the dependent variable is coded 0 or 1. The probit model is less
commonly used than the logit model.
Discriminant analysis deals with the question of which group an observation is likely to belong
to; the binary logit, on the other hand, deals with how likely an observation is to belong to each
group. It estimates the probability of an observation belonging to a particular group, so the logit
model falls somewhere between regression and discriminant analysis in application. We can
estimate the probability of a binary event taking place using the binary logit model, also called
logistic regression. The probability of success may be modeled as:
P = exp(a0 + a1X1 + ... + akXk) / [1 + exp(a0 + a1X1 + ... + akXk)]

where P is the probability of success, Xi is independent variable i, and ai is the parameter to be
estimated.
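Written out directly, the formula is a one-liner; the following minimal Python sketch uses illustrative names, with X[0] = 1 carrying the intercept.

```python
# The logit probability P = exp(sum a_i X_i) / (1 + exp(sum a_i X_i)).
import numpy as np

def logit_probability(a, X):
    """a: parameter vector; X: regressor vector with X[0] = 1."""
    z = np.dot(a, X)
    return np.exp(z) / (1.0 + np.exp(z))

# e.g. an intercept of 0.018 and a single ratio of 0.3:
p = logit_probability(np.array([0.018, 2.5]), np.array([1.0, 0.3]))
```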
Model Fit
In logit, commonly used measures of model fit are based on the likelihood function: the Cox and
Snell R² and the Nagelkerke R². Both measures are similar to the R² in multiple regression. The
Cox and Snell R² is constrained in such a way that it cannot equal 1, even if the model perfectly
fits the data; this limitation is overcome by the Nagelkerke R².
If the estimated probability for a data point is greater than 0.5, then the predicted value of Y is
one; otherwise Y is set to zero. The predicted values of Y can then be compared with the
corresponding actual values of Y to determine the percentage of correct predictions.
Significance Testing
The testing of individual estimated parameters or coefficients for significance is similar to that in
multiple regression. In this case, the significance of the estimated coefficients is based on the
Wald statistic, a test of significance of the logistic regression coefficient based on the asymptotic
normality property of maximum likelihood estimates. The Wald statistic is Chi-square
distributed with 1 degree of freedom if the variable is metric, and with (number of categories - 1)
degrees of freedom if the variable is non-metric.
Interpretation of Coefficients
The interpretation of the coefficients is similar to that in multiple regression. The log odds are a
linear function of the estimated parameters: if Xi changes by one unit, the log odds change by ai
units, holding the other independent variables constant. The sign of ai determines whether the
probability increases or decreases.
Software Used
All the data analysis has been carried out in SPSS and Excel.
Chapter 4 - ANALYSIS AND INTERPRETATION
Data do not “speak for themselves”; they reveal only what the analyst can detect. Proper analysis
leads the analyst to results, and these results acquire significance when they are interpreted.
Analysis and interpretation of the study should relate to the study objectives and research
questions. One often-helpful strategy is to begin by imagining, or even outlining, the
manuscript(s) to be written from the data.
4.1 Analysis
Input data for the 42 Indian companies were fed into the SPSS software and a stepwise multiple
discriminant analysis was performed. The following output was derived, and is analyzed and
interpreted table by table below.
The path followed in SPSS is: Analyze → Classify → Discriminant Analysis
Stepwise procedures select the most correlated independent variable first, remove the variance in
the dependent variable that it accounts for, then select the second independent variable which
most correlates with the remaining variance in the dependent, and so on, until selecting an
additional independent variable does not increase the (canonical) R-squared by a significant
amount (usually significance = .05). As in multiple regression, there are both forward (adding
variables) and backward (removing variables) stepwise versions.
In SPSS there are several available criteria for entering or removing new variables at each step:
Wilks' lambda, unexplained variance, Mahalanobis distance, smallest F ratio, and Rao's V. The
researcher typically sets the critical significance level by setting the "F to remove" in most
statistical packages.
V2 | Rank | Log Determinant
0 | 4 | -3.796
1 | 4 | -4.396
Pooled within-groups | 4 | -2.455
Table 4.1: Log Determinants
The larger the difference in log determinants in the table above, the more the groups' covariance
matrices differ. The "Rank" column indicates the number of independent variables, 4 in this case.
Since discriminant analysis assumes homogeneity of covariance matrices between groups, we
would like to see the determinants be relatively equal. Box's M, next, tests this homogeneity-of-
covariances assumption.
In the multi-group model, the log determinant values indicate which group's covariances differ
the most. Here the two group values (-3.796 and -4.396) are of broadly similar magnitude, and
Box's M below tests formally whether the covariance matrices can be treated as equal.
Box's M: 65.620
F Approx.: 5.848
df1: 10
df2: 7649.402
Sig.: .000
Table 4.2: Box's M
Analysis:
Box's M tests the assumption of homogeneity of covariance matrices. The test is very sensitive
to the assumption of multivariate normality also being met. Discriminant function analysis is
robust even when the homogeneity-of-variances assumption is not met, provided the data do not
contain important outliers. For our data the test is significant, so we conclude that the groups
differ in their covariance matrices, violating an assumption of DA. Note that when n is large, as
it is here, small deviations from homogeneity will be found significant, which is why Box's M
must be interpreted in conjunction with inspection of the log determinants above.
Box's M tests the null hypothesis of equal population covariance matrices; its significance is
based on an F transformation. The hypothesis of equal covariance matrices is rejected here, as
the significance level is .000 (less than .10).
Variables Entered/Removed (a,b,c,d)

Step | Variable Entered | Wilks' Lambda: Statistic | df1 | df2 | df3 | Exact F: Statistic | df1 | df2 | Sig.
1 | Working Capital/ total Assets | .586 | 1 | 1 | 40.000 | 28.218 | 1 | 40.000 | .000
2 | EBIT/ Total Assets | .486 | 2 | 1 | 40.000 | 20.611 | 2 | 39.000 | .000
3 | Long Term loan / Total Assets | .413 | 3 | 1 | 40.000 | 17.970 | 3 | 38.000 | .000
4 | Fixed Assets / Total assets | .354 | 4 | 1 | 40.000 | 16.908 | 4 | 37.000 | .000

At each step, the variable that minimizes the overall Wilks' Lambda is entered.
a. Maximum number of steps is 56.
b. Minimum partial F to enter is 3.84.
c. Maximum partial F to remove is 2.71.
d. F level, tolerance, or VIN insufficient for further computation.
Table 4.3: Stepwise Results
Analysis:
This table displays the statistics at each step at which variables are entered or removed. The
statistics displayed depend on the chosen method of stepwise selection; here we have chosen to
enter at each step the variable that minimizes Wilks' lambda. Wilks' lambda is a measure of the
extent of misfit of the discriminant solution. Values vary from 0 to 1: values close to 0 indicate
that the groups created are distinctly different, whereas values close to 1 indicate that the groups
overlap.
For an acceptable discriminant solution, λ should be less than 0.5:

λ = 1 - [Variance(between) / Variance(total)] = Variance(within) / Variance(total)

Here the final Wilks' lambda (.354) is below 0.5, showing the good discriminating power of the
model. A numeric check of the formula is sketched below.
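For the two-group, one-function case used here, lambda can be checked numerically from the discriminant scores; the following short Python sketch implements the formula above, with illustrative argument names.

```python
# Wilks' lambda for a two-group discriminant solution, computed directly
# as within-group sum of squares over total sum of squares.
import numpy as np

def wilks_lambda(scores_failed, scores_success):
    all_scores = np.concatenate([scores_failed, scores_success])
    ss_total = np.sum((all_scores - all_scores.mean()) ** 2)
    ss_within = (np.sum((scores_failed - scores_failed.mean()) ** 2)
                 + np.sum((scores_success - scores_success.mean()) ** 2))
    return ss_within / ss_total   # equals 1 - SS_between / SS_total
```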
Step | Variable | Tolerance | F to Remove | Wilks' Lambda
1 | Working Capital/ total Assets | 1.000 | 28.218 |
2 | Working Capital/ total Assets | .985 | 25.949 | .810
2 | EBIT/ Total Assets | .985 | 8.038 | .586
3 | Working Capital/ total Assets | .972 | 24.217 | .677
3 | EBIT/ Total Assets | .984 | 6.894 | .488
3 | Long Term loan / Total Assets | .986 | 6.683 | .486
4 | Working Capital/ total Assets | .920 | 26.902 | .611
4 | EBIT/ Total Assets | .984 | 5.415 | .405
4 | Long Term loan / Total Assets | .656 | 13.131 | .479
4 | Fixed Assets / Total assets | .654 | 6.260 | .413
Table 4.4: Variables in the Analysis
Analysis:
These are the statistics for the variables that are in the analysis at each step. Tolerance is used to
determine how strongly the independent variables are linearly related to one another
(multicollinearity). A variable with very low tolerance contributes little information to the model
and can cause computational problems. Here the tolerances are high, and hence the variables
contribute significantly to the model. The F-to-enter (3.84) and F-to-remove (2.71) thresholds
describe what happens if a variable is entered into or removed from the current model (given
that the other variables remain).
Step | Number of Variables | Lambda | df1 | df2 | df3 | Exact F: Statistic | df1 | df2 | Sig.
1 | 1 | .586 | 1 | 1 | 40 | 28.218 | 1 | 40.000 | .000
2 | 2 | .486 | 2 | 1 | 40 | 20.611 | 2 | 39.000 | .000
3 | 3 | .413 | 3 | 1 | 40 | 17.970 | 3 | 38.000 | .000
4 | 4 | .354 | 4 | 1 | 40 | 16.908 | 4 | 37.000 | .000
Table 4.5: Wilks' Lambda
Analysis:
The number of variables indicates how many variables are in the model at each step. Lambda
values close to 0 indicate that the group means are different. For the F statistic, a small
significance value (less than, say, 0.10) indicates that the group means differ, which is the case
here.
Function | Eigenvalue | % of Variance | Cumulative % | Canonical Correlation
1 | 1.828 (a) | 100.0 | 100.0 | .804
a. First 1 canonical discriminant functions were used in the analysis.
Table 4.6: Eigenvalues
The table above shows the eigenvalues. The larger the eigenvalue, the more of the variance in the
dependent variable is explained by that function. Since the dependent variable in this study has only
two categories, there is only one discriminant function; with more categories there would be multiple
discriminant functions, listed in descending order of importance. The second column lists the percent
of variance explained by each function, and the third column the cumulative percent. The last column
is the canonical correlation, whose square is the percent of variation in the dependent variable
discriminated by the independents in DA. Sometimes this table is used to decide how many functions
are important (e.g., eigenvalues over 1, percent of variance more than 5%, cumulative percentage of
75%, canonical correlation of .6). This issue does not arise here, since there is only one discriminant
function, and we may note that its canonical correlation (.804) is reasonably high.
The square root of each eigenvalue provides an indication of the length of the corresponding
eigenvector, and the % of variance column allows one to evaluate which canonical variable accounts
for most of the spread; here function 1 by itself describes 100% of the variance. Since the derived
eigenvalue is 1.828 (> 1), it indicates a significant model.
Test of Function(s) | Wilks' Lambda | Chi-square | df | Sig.
1 | .354 | 39.502 | 4 | .000
Table 4.7: Wilks' Lambda and Chi-Square Statistics
This second appearance of Wilks' lambda serves a different purpose than its use in the stepwise
table above: here it tests the significance of the eigenvalue for each discriminant function. In this
example there is only one function, and it is significant.
Function 1
Working Capital/ total Assets: .841
EBIT/ Total Assets: .448
Fixed Assets / Total assets: .585
Long Term loan / Total Assets: -.786
Table 4.8: Standardized Canonical Discriminant Function Coefficients
Analysis:
The standardized discriminant function coefficients in the table above serve the same purpose as
beta weights in multiple regression: they indicate the relative importance of the independent
variables in predicting the dependent.
When variables are measured in different units, the magnitude of an unstandardized coefficient
provides little indication of the relative contribution of the variable to the overall discrimination.
The coefficients of the canonical variable are used to compute a canonical variable score for each
case.
Function 1
Working Capital/ total Assets: 2.562
EBIT/ Total Assets: 1.000
Fixed Assets / Total assets: .361
Long Term loan / Total Assets: -.510
(Constant): .018
Table 4.9: Canonical Discriminant Function Coefficients
The coefficients displayed in this table are the coefficients of the canonical variable, used to
compute a canonical variable score for each case. Here the score is:
D = 2.562 (Working Capital / Total Assets) + 1.000 (EBIT / Total Assets)
+ 0.361 (Fixed Assets / Total Assets) - 0.510 (Long Term Loan / Total Assets) + 0.018
V2 | Function 1
0 | -1.319
1 | 1.319
Table 4.10: Functions at Group Centroids
This table displays the canonical variable means by group; within-group means are computed for
each canonical variable.
The centroids enable us to determine a cut-off score for applying the model to the financial data
of a company. The cut-off score is calculated as:
{(Centroid of failure × No. of failed companies) + (Centroid of success × No. of successful
companies)} / {No. of failed companies + No. of successful companies}
This gives a cut-off score of {(-1.319 × 21) + (1.319 × 21)} / {21 + 21} = 0.
Cases which score above the cutting point on the function are classified as successful, while
those scoring below it are classified as failed. A sketch of this rule follows.
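For reference, the fitted function of Table 4.9 and the zero cut-off can be packaged as a small Python function; the input names are ours, and the example ratios are roughly the failed-group means from Table A2.

```python
# The fitted discriminant function with the zero cut-off derived above:
# scores below 0 indicate the failed group (centroid -1.319).
def predict_failure(wc_ta, ebit_ta, fa_ta, ltl_ta):
    """Returns the discriminant score and the predicted class."""
    d = (2.562 * wc_ta + 1.000 * ebit_ta + 0.361 * fa_ta
         - 0.510 * ltl_ta + 0.018)
    return d, ("failed" if d < 0 else "successful")

# Roughly the failed-group mean ratios from Table A2; the score is about
# -1.33, close to the failed-group centroid of -1.319.
score, label = predict_failure(-0.20, -0.16, 1.16, 2.14)
```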
4.2 Classification Statistics

Processed: 42
Excluded (missing or out-of-range group codes): 0
Excluded (at least one missing discriminating variable): 0
Used in Output: 42
Table 4.11: Classification Processing Summary
Analysis:
This table shows the number of cases processed, used in the output, and excluded from the
classification.
V2 | Predicted 0 | Predicted 1 | Total
Original Count: 0 | 18 | 3 | 21
Original Count: 1 | 0 | 21 | 21
Original %: 0 | 85.7 | 14.3 | 100.0
Original %: 1 | .0 | 100.0 | 100.0
a. 92.9% of original grouped cases correctly classified.
Table 4.12: Classification Results (a)
This measures the degree of classification success on the sample: 85.7% of the failed companies
are correctly classified, while 100% of the successful companies are correctly classified. Overall,
the model classifies 92.9% of cases correctly.
4.3 Model Testing
Successful development of a model is incomplete without putting it through a test. To test the
prediction capability of the discriminant model, data for the following companies, all of which
had already failed, were plugged into the model equation, with the following results:

Company | Year of failure | Z-score
SIV Industries Ltd | 2003 | -0.5738
Skyline Leather Industries Ltd | 1999 | -0.89849
Southern Herbals Ltd | 2001 | -0.00889
Nortech India Ltd. | 1996 | -0.32645
Reil Products Ltd | 1999 | -0.03281
Table 4.13: Testing with data

Since the z-score for each of the above companies is less than 0, we can conclude that the model
is capable of successfully predicting corporate failures in India.
4.4 Logistic Regression Data Analysis
On the same set of data, a logistic regression was run and the results are analyzed below.
The path followed in SPSS is: Analyze → Regression → Binary Logistic
Unweighted Cases (a) | N | Percent
Selected Cases: Included in Analysis | 42 | 100.0
Selected Cases: Missing Cases | 0 | .0
Selected Cases: Total | 42 | 100.0
Unselected Cases | 0 | .0
Total | 42 | 100.0
a. If weight is in effect, see the classification table for the total number of cases.
Table 4.14: Case Processing Summary
Dependent Variable Encoding
The Dependent Variable Encoding table below shows that the dependent variable, success, is
coded with the reference category 1 = "yes", and the failure category is coded 0. This is
conventional for logistic analysis.

Original Value | Internal Value
0 | 0
1 | 1
Table 4.15: Dependent Variable Encoding
Block 0: Beginning Block

Observed | Predicted: 0 | Predicted: 1 | Percentage Correct
Step 0: V2 = 0 | 0 | 21 | .0
Step 0: V2 = 1 | 0 | 21 | 100.0
Step 0: Overall Percentage | | | 50.0
a. Constant is included in the model.
b. The cut value is .500.
Table 4.16: Classification Table
This baseline block includes only the constant and has 50% classification accuracy; a sketch of
the full logit fit outside SPSS follows.
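The same exercise can be reproduced outside SPSS. The following hedged Python sketch, assuming statsmodels, fits the binary logit on the ratio matrix and reports in-sample classification accuracy; the variable names are illustrative.

```python
# Fit a binary logistic regression and compute in-sample classification
# accuracy at the 0.5 cut value, mirroring the SPSS setup above.
import statsmodels.api as sm

def fit_logit(X, y):
    """X: (cases x ratios) array; y: 0/1 failure coding."""
    result = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
    predicted = (result.predict(sm.add_constant(X)) > 0.5).astype(int)
    accuracy = (predicted == y).mean()
    return result, accuracy
```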
CHAPTER 5 - CONCLUSION AND FINDINGS
5.1 Conclusion & Findings
Assessing the financial position of a firm and its propensity for bankruptcy is of great interest to
all stakeholders of the firm. This study investigates how to improve the assessment method, and
which variables, or combinations of variables, convey the most useful information for bankruptcy
prediction. The research contributes to accounting, finance, and information systems research in
multiple ways. We find that carefully selecting explanatory variables (financial ratios) from
publicly available financial statements can improve future bankruptcy predictions.
- Model:
Z = 2.562 (Working Capital/Total Assets) + 1.000 (EBIT/Total Assets)
+ 0.361 (Fixed Assets/Total Assets) - 0.510 (Long Term Loan/Total Assets) + 0.018
- The significant variables are:
o Long Term Loan / Total Assets
o Fixed Assets / Total Assets
o EBIT / Total Assets
o Working Capital / Total Assets
- The cut-off for a company's z-score to be classified as either successful or failed is 0: if the
score totals below 0, the company is classified as failed (at a 95% confidence level), and if
the score totals above 0, it is classified as successful.
- The model has an overall capability to correctly classify a company as successful or failed
92.9% of the time.
- To further establish the credibility of the model, it was applied to 5 already-failed firms, and
all were correctly classified by the model as failed.
- On comparison with Logit, discriminant analysis was shown to have better classification
ability.
REFERENCES
- Altman, E.I. (1968), "Financial Ratios, Discriminant Analysis and the Prediction of
Corporate Bankruptcy", Journal of Finance, September: 589-609.
- Lin, L. and Piesse, J. (2004), "Identification of corporate distress in UK industrials: a
conditional probability analysis approach", Applied Financial Economics.
- Beaver, W. (1968), "Market prices, financial ratios, and the prediction of failure", Journal of
Accounting Research.
- Boyd, W. Harper, Westfall, Ralph and Stasch, F. Stanley, Marketing Research - Text and
Cases, 7th Edition, All India Traveller Book Seller, New Delhi, Chapter 16.
- Choong Nyoung Kim, Department of Business Administration, University of Seoul (2001),
"A Neural Network approach to compare predictive value of accounting versus market data".
- Ginoglou, Demetrios, Agorastos, Konstantinos and Hatzigagios, Thomas, University of
Macedonia Economic and Social Sciences, Thessaloniki, Greece, "Predicting Corporate
Failure of Problematic Firms in Greece with LPM, Logit, Probit and Discriminant Analysis".
- Pardhasaradhi, S., Dept. of Business Management, O.U. (2001), "Tracing the Trajectories of
Sickness - a diagnostic tool for Corporate Turnaround".
- Altman, Edward I. and Narayanan, Paul (1996), "Business Failure Classification Models: An
International Survey".
- Bataille, Eric, Bruneau, Catherine, Flageollet, Alexis and Michaud, Frederic, "Business Cycle
and Corporate Failure in France: Was There a Link?"
- Hossari, Ghassan (2002), "A Dynamic Ratio-Based Model for Signalling Corporate
Collapse".
- "Bank Failure Prediction Using Modified Minimum Deviation Model", International
Research Journal of Finance and Economics, ISSN 1450-2887, Issue 12 (2007).
- Chan-Lau, Jorge A. and Gravelle, Toni, "END: A New Indicator of Financial and
Non-Financial Corporate Sector Vulnerability".
- Dandapani, Krishnan and Lawrence, Edward R., Department of Finance and Real Estate,
College of Business Administration, Florida International University, Miami, Florida, USA
(2001), "Virtual bank failures: An Investigation".
- Malhotra, K. Naresh (2007), Marketing Research - An Applied Orientation, 5th Edition,
Pearson Prentice Hall, New Delhi, Chapter 18.
- Ariff, Mohamed (University of Tokyo & Bond University) and Ratnatunga, J. (Monash
University) (2008), "Do accounting and finance tools serve governance?"
- Nargundkar, Rajendra, Marketing Research - Text and Cases, 3rd Edition, Tata McGraw Hill,
New Delhi, Chapter 11.
- Tiwari, Bharat, Jamia Millia Islamia University (2004), "Prediction of Corporate Failure:
Formulation of an Early Warning Model".
- Srivastava, S.S. and Yadav, R.A. (1986), Management and Monitoring of Industrial Sickness,
Concept Publishing Company, New Delhi.
- Shin, Sung Woo and Kilic, Suleyman Biljin (2006), "Using PCA-Based Neural Network
Committee Model for Early Warning of Bank Failure".
- www.capitaline.com
- www.bseindia.com
- www.ebscohost.com
APPENDIX
Unweighted Cases | N | Percent
Valid | 42 | 100.0
Excluded: missing or out-of-range group codes | 0 | .0
Excluded: at least one missing discriminating variable | 0 | .0
Excluded: both missing or out-of-range group codes and at least one missing discriminating variable | 0 | .0
Excluded: Total | 0 | .0
Total | 42 | 100.0
Table A1: Analysis Case Processing Summary
Group statistics by V2 (mean and standard deviation of each ratio; N = 21 in each group, 42 in
total; values rounded to four decimal places):

V2 = 0 (failed companies):
Ratio | Mean | Std. Deviation
Profit / Total Assets | -1.6155 | 5.7359
Current Assets/ Current Liabilities | 0.5794 | 0.7092
Total Liabilities/ Total Assets | 2.7833 | 2.1268
Working Capital/ total Assets | -0.1961 | 0.4043
EBIT/ Total Assets | -0.1634 | 0.4145
Cash Flow/ Total Liabilities | -0.1283 | 0.3568
Total Liability/ Total Equity | -1.5029 | 3.2751
Sales/ Total assets | 0.9234 | 2.2617
Cash / Total assets | 0.3155 | 0.6570
Current Assets/ Total Assets | 0.1813 | 0.1847
Quick Assets/ Current Liabilities | 0.4845 | 0.6877
Cash Flow/ Total Assets | -0.2196 | 0.4629
Profit / Total Equity | -0.6998 | 3.9163
Total equity/ Total Assets | -0.7064 | 1.4453
Quick assets / Total Assets | 0.3415 | 0.6464
Total equity/ Total Liability | -0.1976 | 0.5616
Cash / current Liability | 0.3546 | 0.6467
EBIT/ Total equity | 0.0571 | 0.2930
Fixed Assets / Total assets | 1.1583 | 1.1963
Fixed assets/ Total equity | -0.6771 | 2.1334
Long Term loan / Total Assets | 2.1389 | 1.9624
Cash flow/ Current Liability | -0.3667 | 0.6491
Current liability/ Total assets | 0.6444 | 0.4773
Current liability /Total Equity | -0.2499 | 0.7539
Investments/ Working Capital | 0.4443 | 2.0548
Long Term Loan / Total Equity | -1.2530 | 2.7404
Sales/ Total Equity | 0.1392 | 1.7868
Reserves/ Total Assets | -12.2081 | 49.0019

V2 = 1 (successful companies):
Ratio | Mean | Std. Deviation
Profit / Total Assets | 0.2209 | 0.5480
Current Assets/ Current Liabilities | 5.3474 | 12.8028
Total Liabilities/ Total Assets | 1.1588 | 1.1726
Working Capital/ total Assets | 0.3423 | 0.2285
EBIT/ Total Assets | 0.2605 | 0.4791
Cash Flow/ Total Liabilities | -0.2215 | 2.2029
Total Liability/ Total Equity | 0.6385 | 2.3812
Sales/ Total assets | 4.5106 | 7.6850
Cash / Total assets | 2.6777 | 8.3997
Current Assets/ Total Assets | 0.8294 | 1.6605
Quick Assets/ Current Liabilities | 4.7680 | 9.5440
Cash Flow/ Total Assets | -0.2294 | 2.1909
Profit / Total Equity | 0.0868 | 0.4459
Total equity/ Total Assets | 15.9417 | 62.8992
Quick assets / Total Assets | 2.8228 | 8.3528
Total equity/ Total Liability | 16.4437 | 62.7910
Cash / current Liability | 2.9707 | 8.3496
EBIT/ Total equity | 0.1944 | 0.5169
Fixed Assets / Total assets | 1.4109 | 1.9555
Fixed assets/ Total equity | 0.5345 | 1.1155
Long Term loan / Total Assets | 0.6780 | 0.9505
Cash flow/ Current Liability | -0.5228 | 3.8717
Current liability/ Total assets | 0.4807 | 0.3901
Current liability /Total Equity | 0.2741 | 0.6686
Investments/ Working Capital | 0.0990 | 0.2694
Long Term Loan / Total Equity | 0.3643 | 1.7629
Sales/ Total Equity | 2.3341 | 3.2187
Reserves/ Total Assets | -0.2472 | 2.1658

Total (N = 42):
Ratio | Mean | Std. Deviation
Profit / Total Assets | -0.6973 | 4.1303
Current Assets/ Current Liabilities | 2.9634 | 9.2749
Total Liabilities/ Total Assets | 1.9710 | 1.8849
Working Capital/ total Assets | 0.0731 | 0.4236
EBIT/ Total Assets | 0.0486 | 0.4917
Cash Flow/ Total Liabilities | -0.1749 | 1.5594
Total Liability/ Total Equity | -0.4322 | 3.0286
Sales/ Total assets | 2.7170 | 5.8822
Cash / Total assets | 1.4966 | 6.0047
Current Assets/ Total Assets | 0.5054 | 1.2121
Quick Assets/ Current Liabilities | 2.6262 | 7.0258
Cash Flow/ Total Assets | -0.2245 | 1.5640
Profit / Total Equity | -0.3065 | 2.7816
Total equity/ Total Assets | 7.6177 | 44.7427
Quick assets / Total Assets | 1.5822 | 5.9845
Total equity/ Total Liability | 8.1230 | 44.6581
Cash / current Liability | 1.6626 | 5.9970
EBIT/ Total equity | 0.1258 | 0.4208
Fixed Assets / Total assets | 1.2846 | 1.6062
Fixed assets/ Total equity | -0.0713 | 1.7897
Long Term loan / Total Assets | 1.4085 | 1.6928
Cash flow/ Current Liability | -0.4447 | 2.7430
Current liability/ Total assets | 0.5626 | 0.4384
Current liability /Total Equity | 0.0121 | 0.7521
Investments/ Working Capital | 0.2716 | 1.4579
Long Term Loan / Total Equity | -0.4443 | 2.4186
Sales/ Total Equity | 1.2367 | 2.8009
Reserves/ Total Assets | -6.2276 | 34.7885

Table A2: Group Statistics
Function 1
Working Capital/ total Assets: .621
Total Liabilities/ Total Assets (a): -.401
Long Term loan / Total Assets: -.359
EBIT/ Total Assets: .359
Sales/ Total assets (a): .335
Current liability/ Total assets (a): -.310
Quick Assets/ Current Liabilities (a): .274
Current liability /Total Equity (a): .256
Cash / current Liability (a): .251
Quick assets / Total Assets (a): .238
Cash / Total assets (a): .238
Profit / Total Equity (a): -.214
Reserves/ Total Assets (a): -.212
Total equity/ Total Assets (a): .176
Total equity/ Total Liability (a): .173
Profit / Total Assets (a): -.146
Total Liability/ Total Equity (a): .138
Current Assets/ Current Liabilities (a): .119
Cash flow/ Current Liability (a): -.116
Investments/ Working Capital (a): -.107
Cash Flow/ Total Liabilities (a): -.099
Sales/ Total Equity (a): .093
Long Term Loan / Total Equity (a): .092
Current Assets/ Total Assets (a): .079
Cash Flow/ Total Assets (a): -.070
Fixed assets/ Total equity (a): .069
Fixed Assets / Total assets: .059
EBIT/ Total equity (a): -.008
Table A3: Structure Matrix
The structure matrix contains the within-group correlations of each predictor variable with the
canonical function, and provides a means of assessing the usefulness of each variable in the
discriminant function. Among the strongest correlations with function 1 are those of the
variables entered into the model, with Working Capital/Total Assets the strongest of all.
Figure 1: Canonical distribution, Function 1
Figure 2: Canonical distribution, Function 1