Veri zarflama analizi ve bankacılık sektöründe bir uygulama
Data envelopment analysis and an application in the banking sector
- Thesis No: 66608
- Advisor: ASSOC. PROF. DR. TUFAN V. KOÇ
- Thesis Type: Master's
- Subjects: Industrial and Industrial Engineering
- Keywords: Not specified.
- Year: 1997
- Language: Turkish
- University: İstanbul Teknik Üniversitesi
- Institute: Fen Bilimleri Enstitüsü
- Department: Industrial Engineering
- Discipline: Industrial Engineering
- Page Count: 113
Abstract
Data Envelopment Analysis is an empirical method that measures the relative efficiency of organizations using non-parametric (semi-parametric) techniques and that indicates how this efficiency will change for given rates of increase (or decrease) in an organization's inputs and outputs. In recent years it has found wide use among firms in many branches of industry and continues to develop. In this study, the history and the concept of Data Envelopment Analysis are presented first; its models and theory, its use in static and dynamic settings, and the types of efficiency are then described, and efficiency distributions are discussed. In the third chapter the basic Data Envelopment Analysis methods are examined through examples, and comparisons of these examples are presented. The fourth chapter examines how the analysis has to be revised when the data are used in different forms. In the final chapter an efficiency analysis is carried out for the banks listed on the Istanbul Stock Exchange (IMKB); a total of 10 banks are analysed. The banks are evaluated with total assets, loans, equity, time deposits, demand deposits, personnel expenses and number of branches as inputs, and period profit as the output. The model is formulated as an output-maximization problem and solved with the Lindo linear programming package.
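The abstract does not reproduce the exact Lindo formulation; as a sketch, a standard output-oriented, constant-returns envelopment model consistent with the description above (10 banks, seven inputs, period profit as the single output) would look as follows, where the notation and index assignments are assumptions of this note rather than the thesis's own:

$$\max_{\phi,\lambda}\ \phi \quad \text{subject to} \quad \sum_{j=1}^{10} \lambda_j x_{ij} \le x_{i0}\ (i=1,\dots,7), \qquad \sum_{j=1}^{10} \lambda_j y_j \ge \phi\, y_0, \qquad \lambda_j \ge 0,$$

with $x_{ij}$ the seven inputs (total assets, loans, equity, time deposits, demand deposits, personnel expenses, number of branches) of bank $j$, $y_j$ its period profit, and bank 0 the bank under evaluation; an optimal $\phi^{*}>1$ would indicate the proportional profit expansion the evaluated bank should be able to achieve relative to the best-practice banks.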
Abstract (Translation)
SUMMARY: DATA ENVELOPMENT ANALYSIS AND AN APPLICATION IN THE BANKING SECTOR

People have been counting and recording results since time immemorial. It is a natural inclination to want to know "How many?" and "How much?". Once the answers are recorded, it is a straightforward step to begin to compare one set of numbers with another. This ability to compare is such an integral part of our daily life - "Which brand is cheaper?", "Which marrow is longer?", "Who scored more goals?" - that we barely give it a thought. In government and business, on the other hand, a great deal of effort goes into this process of counting, measuring and comparing. Governments use comparisons to detect changes from one period to another in such measures as the number of registered unemployed, the value of exports and imports, and the ethnic mix in the population. They also need to be able to compare the relative effectiveness of alternative tax proposals. Companies likewise need to compare measures on a period-by-period basis, both to investigate deviations from plan and to determine whether they are doing better or worse than their competitors.

Once we get beyond such global statistics and begin to compare the results for smaller groupings, we inevitably encounter the problem of determining an equitable basis for the comparisons. There will be "unquantifiable factors" and the "human element" to consider, and even where these can somehow be taken into the equation there still remains the problem of accounting for a variety of characteristics that can sometimes represent conflicting objectives. In the private sector there is one overriding measure of performance - profit. However, the calculation of profit is hardly ever straightforward, since it depends on a set of accounting conventions concerning the treatment of such factors as long-term investment, depreciation, tax deferments and monies set aside for bad debts. In addition, the measurement of profit per se gives no indication of the potential for improvement within an organization, even in profit terms. Further, the attainment of profit is in turn dependent on a number of other financial and operational outcomes, and management has to encourage and monitor performance across a wide front. In the public sector, where profit is not (or only very rarely) an objective, it is necessary to delve more deeply to define outcome measures that can be used in an assessment of performance.

Data envelopment analysis (DEA) developed originally as a set of techniques for measuring the relative efficiency of a set of decision-making units (DMUs) when the price data for inputs and outputs are either unavailable or unknown. These techniques are nonparametric in the sense that they are entirely based on the observed input-output data. The statistical aspects of the data set are almost ignored by the traditional DEA models and in this sense they are far from nonparametric. Over the last two decades DEA models have been widely applied in the management science and operations research literature, and the theoretical formulations of DEA have been generalized in several directions:
1. Various types of DEA models have been formulated which clarify the concepts of technical and allocative efficiency and their link with the concept of Pareto efficiency in economic theory;
2. Log-linear and nonlinear formulations have extended the linear DEA models and generalized the concepts of increasing, decreasing or constant returns to scale as applied to multiple-output and multiple-input cases; and
3. Sources of inefficiency identified through the DEA models have been incorporated in regression models in various ways.

The essential characteristic of the DEA model as originally formulated by Charnes, Cooper and Rhodes (1978), later referred to as CCR, is the reduction of the multi-output, multi-input situation for each DMU to that of a single virtual output and a single virtual input. For a particular DMU the ratio of this single virtual output to the single virtual input provides a measure of efficiency which can be compared with the other DMUs in the system. This comparison, usually performed by a sequence of linear programming (LP) formulations, yields a ranking of the different DMUs in the system on a scale of relative efficiency from the lowest to the highest, where the latter is 100% efficient. The variables or multipliers used to convert multiple inputs and multiple outputs to a single virtual input and a single virtual output have been interpreted in the literature in three different ways:
1. They are shadow prices of inputs and outputs, also called "transformation rates" in the CCR approach; these shadow prices are the optimal values of the appropriate Lagrange multipliers associated with the LP formulations of the corresponding DEA models.
2. They are suitable nonnegative weights, as in the theory of index numbers, which can be profitably used in comparing relative efficiencies; this view has been widely applied in developing efficiency measures for various public sector organizations such as public schools, county hospitals and clinics, and even prison systems.
3. The multipliers are the parameters of a suitable production frontier implicit in the data set; this view, originally proposed by Farrell (1957) for the single-output case, may be said to have laid the foundation for all nonparametric models of efficiency measurement developed later. The CCR model generalized Farrell's approach to multiple outputs and developed the modern concept of multivariate production and hence a cost frontier.

From an economic viewpoint the production frontier interpretation is the most important, since it permits the introduction of the dynamic and stochastic components of efficiency. The dynamic components arise through technological progress or regress, learning by doing and shifts in the production frontier over time. The stochastic components are due to the observed deviations from the "best practice" production function and the various errors in managerial production policies. Whereas the econometric models of the production frontier utilize the statistical distribution of the error structure to estimate the parameters, the DEA models in their LP formulations assume either a deterministic data structure or a situation where the stochastic assumptions are implicitly satisfied. Thus it is clear that the stochastic framework of DEA models needs to be generalized and extended before they can be applied to stochastic input-output data systems. This is particularly so when time series or panel data are involved.

DEA involves an alternative principle for extracting information about a population of observations. In contrast to parametric approaches, whose object is to optimize a single regression plane through the data, DEA optimizes on each individual observation with the objective of calculating a discrete piecewise frontier determined by the set of Pareto-efficient DMUs. Both the parametric and nonparametric (mathematical programming) approaches use all the information contained in the data.
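For reference, the CCR construction described above can be written compactly. In generic notation (a standard sketch, not the thesis's own formulation), the efficiency of the evaluated unit DMU 0 is the ratio of its single virtual output to its single virtual input, maximized subject to the same ratio not exceeding one for any DMU:

$$\max_{u,v}\ h_0 = \frac{\sum_{r=1}^{s} u_r y_{r0}}{\sum_{i=1}^{m} v_i x_{i0}} \quad \text{subject to} \quad \frac{\sum_{r=1}^{s} u_r y_{rj}}{\sum_{i=1}^{m} v_i x_{ij}} \le 1 \ (j=1,\dots,n), \qquad u_r, v_i \ge \varepsilon > 0.$$

Fixing the virtual input of DMU 0 at one, $\sum_i v_i x_{i0} = 1$ (the Charnes-Cooper transformation), turns this fractional program into a linear program that is solved once for each DMU; this is the sequence of LP formulations referred to above.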
In parametric analysis, the single optimized regression equation is assumed to apply to each DMU. DEA, in contrast, optimizes the performance measure of each DMU. This results in a revealed understanding about each DMU instead of the depiction of a mythical "average" DMU. In other words, the focus of DEA is on the individual observations, as represented by the n optimizations (one for each observation) required in a DEA analysis, in contrast to the focus on averages and the estimation of parameters that is associated with single-optimization statistical approaches. The parametric approach requires the imposition of a specific functional form (a regression equation, a production function, etc.) relating the independent variables to the dependent variables. The functional form selected also requires specific assumptions about the distribution of the error terms and many other restrictions, such as factors earning the value of their marginal product. In contrast, DEA does not require any assumption about the functional form. DEA calculates a maximal performance measure for each DMU relative to all other DMUs in the observed population, with the sole requirement that each DMU lie on or below the extremal frontier. Each DMU not on the frontier is scaled against a convex combination of the DMUs on the frontier facet closest to it.

For each inefficient DMU (one that lies below the frontier), DEA identifies the sources and level of inefficiency for each of the inputs and outputs. The level of inefficiency is determined by comparison to a single referent DMU or a convex combination of other referent DMUs located on the efficient frontier that utilize the same level of inputs and produce the same or a higher level of outputs. This is achieved by requiring solutions to satisfy inequality constraints that can increase some outputs (or decrease some inputs) without worsening the other inputs or outputs. The calculation of potential improvement for each inefficient DMU does not necessarily correspond to the observed performance of any actual DMU making up the piecewise production frontier, or to a deterministic projection of an inefficient DMU onto the efficient frontier. The calculated improvements (in each of the inputs and outputs) for inefficient DMUs are indicative of the potential improvements obtainable, because the projections are based on the revealed best-practice performance of "comparable" DMUs that are located on the efficient frontier.

DEA relative-efficiency solutions were of interest to operations analysts, management scientists and industrial engineers largely because of three features of the method:
1. characterization of each DMU by a single summary relative-efficiency score;
2. DMU-specific projections for improvements based on observable referent revealed best-practice DMUs; and
3. obviation by DEA of the alternative and indirect approach of specifying abstract statistical models and making inferences based on residual and parameter coefficient analysis.
The attraction of DEA to traditional frontier econometricians emerged from the new insights obtained in production frontier analysis concerning the existence of frontiers and the variance around them. For example, the BCC model (see Chapter 3) relaxed the constant-returns-to-scale requirement of the original CCR ratio model and made it possible to investigate local returns to scale.
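For a DMU found inefficient, the referent point and the input- and output-specific adjustments described above are read off the optimal solution of the envelopment program; as a sketch in standard input-oriented, constant-returns DEA notation (the symbols are assumptions of this note, not taken from the thesis):

$$\hat{x}_{i0} = \theta^{*} x_{i0} - s_i^{-*} = \sum_{j=1}^{n} \lambda_j^{*} x_{ij}, \qquad \hat{y}_{r0} = y_{r0} + s_r^{+*} = \sum_{j=1}^{n} \lambda_j^{*} y_{rj},$$

where the DMUs with $\lambda_j^{*}>0$ are the referent best-practice units and the slacks $s^{-*}, s^{+*}$ give the remaining non-proportional improvements. The BCC relaxation mentioned above amounts to appending the convexity constraint $\sum_j \lambda_j = 1$ to the same envelopment system, which is what makes local (variable) returns to scale visible.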
The orientation of DEA towards deriving the best-practice frontier and towards optimizing each individual DMU affords new ways of organizing and analyzing data and can result in new managerial and theoretical insights. It should be noted that DEA calculations:
1. focus on individual observations in contrast to population averages;
2. produce a single aggregate measure for each DMU in terms of its utilization of input factors to produce desired outputs;
3. can simultaneously utilize multiple outputs and multiple inputs, with each being stated in different units of measurement;
4. can adjust for exogenous variables;
5. can incorporate categorical variables;
6. are value-free and do not require specification or knowledge of a priori weights or prices for the inputs and outputs;
7. place no restriction on the functional form of the production relationship;
8. can accommodate judgment when desired;
9. produce specific estimates for desired changes in inputs and/or outputs for projecting DMUs below the efficient frontier onto the efficient frontier;
10. are Pareto optimal;
11. focus on revealed best-practice frontiers rather than on central-tendency properties of frontiers; and
12. satisfy strict equity criteria.
DEA is a body of concepts and methodologies that have now been incorporated in a collection of models with accompanying interpretive possibilities, as follows:
1. The CCR ratio model (1978) (i) yields an objective evaluation of overall efficiency and (ii) identifies the sources and estimates the amounts of the thus-identified inefficiencies.
2. The BCC model (1984) distinguishes between technical and scale inefficiencies by (i) estimating pure technical efficiency at the given scale of operation and (ii) identifying whether increasing, decreasing or constant returns-to-scale possibilities are present for further exploitation.
3. The Multiplicative models provide (i) a log-linear envelopment or (ii) a piecewise Cobb-Douglas interpretation of the production process.
4. The Additive model (i) relates DEA to the earlier Charnes-Cooper (1959) inefficiency analysis and, in the process, (ii) relates the efficiency results to the economic concept of Pareto optimality.
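As an illustration of item 4, the additive model can be sketched in generic notation (an assumption of this note, not the thesis's own formulation): it maximizes the total input and output slacks of the evaluated DMU relative to a convex combination of the observed DMUs,

$$\max_{\lambda,\,s^{-},\,s^{+}}\ \sum_{i} s_i^{-} + \sum_{r} s_r^{+} \quad \text{subject to} \quad \sum_{j} \lambda_j x_{ij} + s_i^{-} = x_{i0}, \quad \sum_{j} \lambda_j y_{rj} - s_r^{+} = y_{r0}, \quad \sum_{j} \lambda_j = 1, \quad \lambda, s^{-}, s^{+} \ge 0,$$

and DMU 0 is Pareto-efficient exactly when the optimal objective value is zero, i.e. when no input can be reduced and no output increased without worsening some other input or output.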
Similar Theses
- Veri zarflama analizi ve bankacılık sektöründe bir uygulama
Data envelopment analysis and an application in banking sector
GÖZDE ÖZCAN
Master's
Turkish
2007
Industrial and Industrial Engineering, Dumlupınar Üniversitesi, Industrial Engineering Department
ASST. PROF. DR. SEMA BEHDİOĞLU
- Toplam etkinlik ölçümü: Veri zarflama analizi ve bankacılık sektöründe bir uygulama
Total efficiency measurement: Data envelopment analysis and an application in the banking sector (no registered translated title)
BAHAR ALTINOK
Master's
Turkish
2002
Econometrics, Gazi Üniversitesi, Econometrics Department
ASST. PROF. DR. ŞENOL ALTAN
- Risk yönetimi ve Türk bankacılık sektöründe bir uygulama
Risk management and an application on Turkish banking sector
MURAT ATAN
- Şirket birleşmelerinin etkinlik açısından değerlendirilmesi ve Türk bankacılık sektöründe bir uygulama
The evaluation of mergers in terms of efficiency and a practical study in Turkish banking sector
İBRAHİM EREM ŞAHİN
- Veri zarflama analizi ile etkinlik ölçümü: Bankacılık sektöründe bir uygulama
Efficiency measurement by data envelopment analysis: An application in the banking sector
MERT GÜZELÇAY
Master's
Turkish
2019
Banking, Dokuz Eylül Üniversitesi, Econometrics Department
PROF. DR. İPEK DEVECİ KOCAKOÇ