Capacity Issues and Efficiency Drivers in Brazilian Bulk Terminals

This paper presents an efficiency analysis of Brazilian bulk terminals built upon the conjoint use of Data Envelopment Analysis and the bootstrapping technique. Confidence intervals and bias-corrected central estimates were used as cornerstone tools, not only to test for significant differences in efficiency scores and their reciprocals, but also in the returns-to-scale indicators provided by different DEA models. The results of the study suggest that most Brazilian bulk terminals present increasing returns to scale, that is, they are too small in size relative to the tasks performed, indicating a capacity shortfall. Results also suggest paths for improving efficiency levels in a scenario of low investments and capacity constraints: privatization and cargo specialization. A final contribution to the literature lies in the development of a simple methodology to assess returns to scale based on bootstrap results.


INTRODUCTION
There is a consensus that ports are a vital link in the trading chain due to their contribution to a nation's international competitiveness in the globalization scenario. In order to support trade-oriented economic development, port authorities around the globe have been under pressure to improve port efficiency (TONGZON, 1989; CHIN; TONGZON, 1998). In Brazil, accelerated economic growth has increased this demand for port services.
Between 2006 and 2010, the physical aggregate throughput handled by Brazilian ports, measured in tons/year, grew at an average rate of 10% per year (CEL, 2009). This increasing demand for reliable services has put enormous pressure on the infrastructure of Brazilian ports.
Port operations management in Brazil was, until the mid-1990s, completely regulated and controlled by the federal government. Any investment in port infrastructure could be performed solely by Companhia Docas, a state-owned company. Only in 1993, when the Brazilian Federal Law 8630, also known as the "Port Modernization" Law, was enacted, did the path for port privatization, leasing of terminals, installation of local port authorities, and labor deregulation start to be paved (CURCINO, 2007). Although investments in capacity expansion have been minimal from that period to the present day, the comparison of several ports in terms of their overall efficiency has become an essential part of the Brazilian microeconomic reform agenda for sustaining economic growth based on foreign trade (FLEURY; HIJJAR, 2008).
In 2006, when a federal authority linked to the Transportation Ministry was created to allocate investments within the sector, the performance measurement of ports and terminals started to be conducted in a more systematic way. Traditionally, the performance of ports and terminals has been evaluated by attempts at calculating and seeking to optimize the operational productivity of cargo handling at the berth and in the terminal area (CULLINANE et al., 2006). In recent years, approaches such as Data Envelopment Analysis (DEA) have been increasingly utilized to analyze the production and performance of ports and terminals.
DEA is a non-parametric model based on linear programming (LP) used to address the problem of calculating relative efficiency for a group of Decision Making Units (DMUs) by using multiple measures of inputs and outputs. Applied studies that have used DEA, however, have typically presented point estimates of inefficiency, with no measure or even discussion of the uncertainty surrounding these estimates (CESARO et al., 2009). To solve these problems, bootstrap techniques have been introduced into DEA analysis (CESARO et al., 2009), allowing the sensitivity of efficiency scores relative to the sampling variation of the frontier to be analyzed and avoiding problems of asymptotic sampling distributions.
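To make the LP formulation concrete, the following sketch estimates the input-oriented CCR efficiency of one DMU. It is a minimal illustration, not the exact model used in the paper; the toy data (one input and two outputs, mirroring the loading-hours/throughput/shipments structure described later) are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_input_efficiency(X, Y, o):
    """Input-oriented CCR (CRS) efficiency of DMU o.

    X: (m, n) input matrix, Y: (s, n) output matrix; columns are DMUs.
    Solves: min theta  s.t.  X @ lam <= theta * x_o,  Y @ lam >= y_o,  lam >= 0.
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                   # minimise theta
    A_in = np.hstack([-X[:, [o]], X])             # X @ lam - theta * x_o <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])     # -Y @ lam <= -y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    bounds = [(None, None)] + [(0.0, None)] * n   # theta free, lam >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[0], res.x[1:]                    # efficiency score, lambdas

# Hypothetical terminals: 1 input (loading hours), 2 outputs (throughput, shipments)
X = np.array([[2.0, 4.0, 8.0]])
Y = np.array([[2.0, 4.0, 6.0],
              [1.0, 2.0, 3.0]])
theta, lam = ccr_input_efficiency(X, Y, 2)
```

Here DMU 2 obtains theta = 0.75: under CRS it could produce its observed outputs with 75% of its observed input, since DMUs 0 and 1 attain a higher output/input ratio.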
Inspired by the current debate in the Brazilian port sector, in which anecdotal evidence suggests a capacity shortfall (AGÊNCIA BRASIL, 2004; DOCTOR, 2003; SALES, 2001), this paper presents an analysis of Brazilian bulk terminals built upon the bootstrapping technique.
The basic idea is to use confidence intervals and bias-corrected central estimates as cornerstone tools to assess the efficiency issue in bulk terminals, not only to test for significant differences in efficiency scores and their reciprocals (that is, their distance functions), but also in the returns-to-scale indicators provided by different DEA models.
The results of this study are twofold. First, they corroborate that Brazilian bulk terminals are running short of capacity, highlighting the issue of how to achieve productivity gains in the short-to-middle term in the absence of capacity expansion. Second, they shed some light on this issue, not only demonstrating that public terminals tend to be less efficient than private ones, but also that riverine terminals tend to be more efficient than maritime ones, due to the specialization provided by handling and moving soybeans on from producers. This second aspect reinforces the role of port deregulation/privatization and cargo specialization in the quest for higher efficiency levels in Brazilian terminals.
The remainder of the paper unfolds as follows. In Section 2, previous studies regarding efficiency measurement in Brazilian ports/terminals are presented. Section 3 provides the data to be analyzed, as well as additional information on the methodology used, such as DEA, returns-to-scale characterization, and bootstrapping in DEA. Section 4 presents the results of the methodology applied to a sample of 53 bulk terminals in Brazil. Conclusions are given in Section 5.

PREVIOUS STUDIES
A growing number of studies have used DEA to benchmark port efficiency. The comprehensive literature review presented in Panayides et al. (2009) indicates that the number of ports/terminals researched in each study ranges from 6 to 104 (mean 28). According to Martín and Roman (2001), although DEA obtains a single, dimensionless, overall index of efficiency, its essential differences from parametric approaches, such as Stochastic Frontier Analysis (SFA), are found in the very nature of the analytical approach: whereas SFA is stochastic and parametric, DEA uses linear programming techniques. Bootstrapping, however, is one of the most attractive solutions to address a major DEA drawback, that is, the absence of statistical properties (ASSAF, 2010).

BBR, Vitória, v. 11, n. 5, Art. 4, p. 72-98, set.-out. 2014, www.bbronline.com.br

In recent years, as far as the Brazilian case is concerned, only two DEA-based studies have appeared in international peer-reviewed journals. Both addressed the issues of capacity constraints and the impact of contextual variables on efficiency estimates. Rios and Maçada (2006) point out that, at the time of their paper, no studies developed in Brazil had thus far been conducted. The authors analyzed the relative efficiency of 20 container terminals located in Mercosur during 2002, 2003, and 2004 by means of an input-oriented BCC model. Results indicate that 60% of the terminals were managerially efficient in this three-year period, probably reflecting the fact that the Brazilian terminals had reached record rates of cargo traffic, including higher value-added products such as automobiles.
According to these authors, container traffic increased 23.1% during the period. In Argentina, the container sector had an increase of almost 17%. No further international peer-reviewed studies on the efficiency of Brazilian ports/terminals were conducted from 2006 to 2010.
More recently, Wanke, Barbastefano and Hijjar (2011) analyzed a mix of 25 major Brazilian container/bulk terminals (based on 2008 data). The authors found that the vast majority of Brazilian terminals presented increasing returns to scale, and that bulk terminals appeared to be proportionally smaller than container terminals. Additionally, terminals controlled by the private sector tended (although not statistically significantly at 0.05) to be more efficient than those controlled by the government. Statistical tests with efficiency levels were also performed against railroad connectivity and labor force qualification, albeit with inconclusive results, despite previous studies such as Turner, Windle and Dressner (2004), Cullinane and Song (2003), and Doctor (2003).
On the other hand, Barros, Felício and Fernandes (2012) also examined the productivity of Brazilian seaports.

As implied in Cooper et al. (2001), the number of DMUs should be at least three times higher than the number of inputs and outputs in order to attain good discriminatory power in the efficiency estimates. The single input collected from each terminal is the aggregate number of loading hours (per year, all berths considered). As regards the outputs, two variables were collected: aggregate throughput per year (in tons) and number of loaded shipments per year. Contextual variables relate to the terminal ownership, whether public (1) or private (0), and to its geographical location, whether riverine (1) or maritime (0). With respect to the choice of the input/output variables used, readers should recall one of the aims of the paper, which is to assess different possibilities for increasing efficiency levels in a scenario of low investments and capacity constraints, considering all physical assets to be fixed in the short-to-middle term. Put in other words, the idea is to efficiently use all the available shipment capacity in the short term in order to relieve system pressure. Therefore, inputs and outputs were chosen in order to better understand how berth time is being used to achieve higher levels of production, both in terms of loaded shipments and aggregate throughput. Essentially, one should select an orientation according to which quantities (inputs or outputs) the decision-makers have most control over (COELLI, 1996).
However, given that LP does not suffer from such statistical problems as simultaneous-equation bias, the choice of an appropriate orientation is not as crucial as it is in the econometric estimation case (COELLI, 1996). Furthermore, the choice of orientation has only minor influences upon the scores obtained and their relative ranks (COELLI; PERELMAN, 1999).
Compared with the stochastic parametric frontier approach, DEA imposes neither a specific functional relationship between production outputs and inputs, nor any assumptions on the specific statistical distribution of the error terms (CULLINANE et al., 2006). The efficient frontier is the boundary of a convex polytope created in the space of inputs and outputs, in which each vertex is an efficient DMU (DULÁ; HELGASON, 1996). Another feature of DEA is that the relative weights of the inputs and the outputs do not need to be known a priori; that is, these weights are determined as part of the solution of the linear problem (ZHU, 2003). Stepwise procedures, such as the application of DEA to reduced models, can be used to rank the effect of variables on efficiency scores (WAGNER; SHIMSHAK, 2007).

RETURN TO SCALES CHARACTERIZATION
Scale inefficiency is due to either increasing or decreasing returns-to-scale (RTS).
Although the constraint on $\sum_{j=1}^{n} \lambda_j$ actually determines the prevalent RTS type of an efficient frontier (ZHU, 2003), whether CRS or VRS, scale inefficiency at a given DMU can be assessed under both models. As pointed out by Cooper, Seiford and Tone (2007), while the CCR model simultaneously evaluates RTS and technical inefficiency, the BCC model evaluates technical efficiency separately.
As noted by Odeck and Alkadi (2001), the term $\sum_{j=1}^{n} \lambda_j$ is also known as the Scale Indicator ($SI_o$) within the CCR model. So, even though the term CRS is used to characterize the CCR model, this model may be used to determine whether increasing, decreasing, or constant RTS prevail at a given DMU, by making the input and output slacks explicit in the LP formulation. For instance, if its "input saving" efficiency is greater than its "output increasing" efficiency, increasing RTS prevails (ODECK; ALKADI, 2001). Now, with respect to the BCC model, since its efficient frontier is strictly concave, the optimal solution will necessarily designate a given DMU as being in the region of constant, decreasing, or increasing RTS.
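This scale-indicator reading can be sketched as follows; the λ vectors in the example are hypothetical. Under the CCR envelopment model, $SI_o < 1$ signals increasing RTS, $SI_o > 1$ decreasing RTS, and $SI_o = 1$ constant RTS; strictly, when multiple optimal λ vectors exist, the extremes of $SI_o$ over all optima should be examined, a caveat this sketch ignores.

```python
import numpy as np

def rts_from_scale_indicator(lam, tol=1e-6):
    """Classify returns to scale from the CCR scale indicator SI_o = sum(lambda_j).

    Caveat: with multiple optimal lambda vectors, the max and min of SI_o over
    all optima should be examined; this sketch uses a single optimum.
    """
    si = float(np.sum(lam))
    if si < 1.0 - tol:
        return "IRS"   # smaller than the most productive scale size
    if si > 1.0 + tol:
        return "DRS"   # larger than the most productive scale size
    return "CRS"

# Hypothetical optimal lambda vectors from three CCR runs
print(rts_from_scale_indicator([0.4, 0.3]))   # -> IRS
print(rts_from_scale_indicator([1.2, 0.3]))   # -> DRS
print(rts_from_scale_indicator([0.5, 0.5]))   # -> CRS
```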
Although the choice of orientation has only minor influences upon the efficiency scores obtained and their relative ranks (COELLI; PERELMAN, 1999), it should be noted that input- and output-oriented models may give different results in their RTS findings (BANKER et al., 2004). Thus the result secured may depend on the orientation used (RAY, 2010). Increasing RTS may result from an input-oriented model, for example, while a different characterization may result from an output-oriented one (COOPER; SEIFORD; TONE, 2007), where the "real life" violation of the convexity assumption may or may not be involved.

BOOTSTRAPPING METHOD
According to Simar and Wilson (2004), none of the theoretical models presented are actually observed, including the efficient frontier (CCR, BCC, or FDH) and its respective distance function to each DMU. Thus, all these elements must be estimated.
Estimators are necessarily random variables upon which statistical tests, or at least confidence intervals (CIs), can be built to derive useful conclusions.
The importance of bootstrap-based approaches, such as those presented in Simar and Wilson (2004) and Wilson (2008), for estimation of the efficiency frontier should therefore be put into perspective. The discussions on RTS in different DEA models have been confined to "qualitative" characterizations in the form of identifying whether they are increasing, decreasing, or constant (BANKER et al., 2004; COOPER; SEIFORD; TONE, 2007). These bootstrap approaches, however, which are also useful to deal with the asymptotic distribution of DEA/FDH estimators, can be used to implement statistical tests of constant versus varying returns to scale, convexity, among other things (WILSON, 2009). For example, Daraio and Simar (2007) developed several conditional measures of efficiency, which also provide indicators of the type of RTS. The bootstrap methodology used in this study is detailed next.
The method used in this study departs from the one developed by Simar and Wilson (2004), also presented in Bogetoft and Otto (2010), which adapted the bootstrap methodology to the case of DEA/FDH efficiency estimators and uses a Gaussian kernel density function for random data generation. The seven-step algorithm is detailed next. Once the pseudo datasets of inputs and outputs have been generated with this algorithm, additional pseudo estimates can be computed from them. In order to evaluate the adequacy of the convexity assumption imposed by DEA models and to characterize the prevalent RTS within the sample of Brazilian bulk terminals, the methodological framework presented in this section was applied. More precisely, 95% CIs were determined not only for the set of distance-function estimators, but also for the efficiency scores and the RTS indicators.
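The confidence-interval construction step of the algorithm can be sketched as follows. This shows only the final step: the full Simar–Wilson procedure also smooths the bootstrap replicates with a Gaussian kernel and reflects them at the frontier, which is omitted here, and the replicate values below are hypothetical.

```python
import numpy as np

def bias_corrected_ci(theta_hat, theta_star, alpha=0.05):
    """Bias-corrected central estimate and basic-bootstrap CI.

    theta_hat: original DEA efficiency estimate for one DMU.
    theta_star: array of B bootstrap replicates of that estimate.
    """
    theta_star = np.asarray(theta_star, dtype=float)
    bias = theta_star.mean() - theta_hat          # bootstrap bias estimate
    theta_bc = theta_hat - bias                   # bias-corrected estimate
    # basic (reflected-percentile) interval; generally asymmetric
    q_lo, q_hi = np.quantile(theta_star - theta_hat, [alpha / 2, 1 - alpha / 2])
    return theta_bc, (theta_hat - q_hi, theta_hat - q_lo)

# Hypothetical: 1,000 replicates biased upward relative to theta_hat = 0.60
rng = np.random.default_rng(0)
replicates = 0.60 + np.abs(rng.normal(0.0, 0.05, size=1000))
theta_bc, (lo, hi) = bias_corrected_ci(0.60, replicates)
```

Because DEA replicates cannot fall below the estimated frontier, the resulting interval is asymmetric around the central estimate, exactly the pattern the paper reports for its CIs.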

INITIAL ESTIMATES
The efficiency rankings calculated using the DEA/FDH input-oriented models indicate that the capacity of the typical bulk terminal is too small relative to the tasks that it performs.

BOOTSTRAPPED EFFICIENCY SCORES AND CONVEXITY ASSUMPTION
The bootstrapped CCR and BCC efficiency scores, as well as their respective 95% CIs, are presented in Figures 2 and 3 for each DMU. The procedures for computing these estimates, based upon 1,000 bootstrap replications for each efficient frontier, followed the discussions detailed in Simar and Wilson (2004) and Curi, Gitto and Mancuso (2011).
Readers can easily note that public terminals tend to be less efficient than private ones (median of CCR bias-corrected estimates: 0.05 against 0.11; median of BCC bias-corrected estimates: 0.16 against 0.17), thus corroborating previous studies. The opposite holds for riverine terminals, which tend to be more efficient than maritime ones (median of CCR bias-corrected estimates: 0.34 against 0.08; median of BCC bias-corrected estimates: 0.40 against 0.16). Most of them are specialized in handling and moving soybeans on from producers, located in middle-western inland states, to the closest road/railway in order to reach the major export terminals, located at the ports of Santos and Paranaguá. The asymmetric nature of the CIs should also be noted, as their lower and upper bounds are not symmetrical around the central estimate.
With respect to the convexity assumption, the upper bounds of the 95% CIs for the FDH and BCC distance functions are given in Figure 4. The upper bounds for the CCR distance function were omitted in order to improve readability. It should be noted that taking reciprocals of the CI estimates, for the case of analyzing input distance functions instead of efficiency scores, requires reversing the order of the bounds; that is, the reciprocal of the upper bound for the distance function measure gives the lower bound for the efficiency score measure, and vice versa (WILSON, 2009). The convexity assumption is not statistically supported in seven DMUs when comparing, against each other, the upper bounds of the 95% CIs for the FDH, BCC, and CCR distance functions. More precisely, the convexity assumption does not hold, at 5% significance, in DMUs 6, 10, 19, 24, 28, 38, and 47. The "statistical rejection" of the convexity assumption appears to be related to a heterogeneous group of DMUs in terms of their efficiency scores, regardless of their size and contextual variables (cf. Figure 5). This group encompasses the two previously mentioned cases (6 and 24), where the original RTS characterizations were found to be discrepant. Further analyses dealing with both discrepant RTS characterizations are discussed in the next section.
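As a tiny worked example of this bound reversal (the numbers are hypothetical): if an input distance function has a 95% CI of [1.25, 2.0], the implied CI for the efficiency score, its reciprocal, is [0.5, 0.8].

```python
def reciprocal_ci(lo, hi):
    """CI for the reciprocal of a positive quantity.

    Because x -> 1/x is decreasing on (0, inf), the order of the
    bounds must be reversed.
    """
    assert 0 < lo <= hi
    return 1.0 / hi, 1.0 / lo

print(reciprocal_ci(1.25, 2.0))   # -> (0.5, 0.8)
```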

CONCLUSIONS
Inspired by the current debate in the Brazilian port sector, in which anecdotal evidence suggests a capacity shortfall (AGÊNCIA BRASIL, 2004; DOCTOR, 2003; SALES, 2001), this paper presented an analysis of Brazilian bulk terminals built upon the bootstrapping technique. The basic idea of the study was to use confidence intervals and bias-corrected central estimates as cornerstone tools, not only to test for significant differences in efficiency scores and their reciprocals (that is, their distance functions), but also in the returns-to-scale indicators provided by different DEA models.
The results of this study suggest that most Brazilian bulk terminals are running short of capacity. According to Odeck and Alkadi (2001) and Ross and Droge (2004), a DMU may be scale inefficient if it experiences decreasing returns to scale by being too large in size, or if it is failing to take full advantage of increasing returns to scale by being too small. Therefore, it can be suggested that the capacity of the bulk terminals in Brazil is too small relative to the tasks that they perform.
It can also be noted that public terminals tend to be less efficient than private ones, corroborating previous studies. As for riverine terminals, the results suggest they tend to be more efficient than maritime ones. Most of them are specialized in handling and moving soybeans on from producers, located in middle-western inland states, to the closest road/railway in order to reach the major export terminals, located at the ports of Santos and Paranaguá.
Putting it in a broader perspective, the purposes of the analysis conducted within Brazilian bulk terminals are threefold. First, it was useful to corroborate empirical evidence that these terminals are running short of capacity, i.e., that increasing returns to scale prevail within this industry. Although Wanke, Barbastefano and Hijjar (2011) reached the same conclusions, it is important to pinpoint the major methodological differences between the two papers: here the analysis focused solely on bulk terminals, with input/output variables deliberately selected to assess different possibilities for increasing efficiency levels in a starting-point scenario of low investments, capacity constraints, and fixed assets in the short term. Also, the convexity assumption, very common in DEA studies, was not taken for granted here: the bootstrap was used to test this assumption among FDH, BCC, and CCR distance functions, also allowing the returns-to-scale characterization to be probabilistically evaluated under both types of frontiers.
Second, as mentioned before, the analysis revealed paths for increasing terminal efficiency within this capacity-constrained scenario: simple non-parametric tests revealed substantial differences between public and private terminals, and between riverine (more specialized) and maritime (less specialized) terminals, in moving grains on from producers to international markets. This suggests action plans for public authorities and private decision makers in order to better deal with the capacity shortage.
Third, it served as a basis to illustrate some operational thresholds that may emerge during similar analyses. One of them is related to the impact of rejecting the convexity assumption on finding that only one returns-to-scale characterization (either CCR or BCC) is statistically significant at a given DMU. The other relates to the additional analysis that should be performed when both returns-to-scale characterizations significantly diverge, or when neither of them is found to be significant at a given DMU. In such cases, respectively, the minimal confidence level at which only one RTS classification remains significant, or the maximal confidence level below which the first RTS classification becomes significant, should be determined. For both cases, a simple methodology, presented as a flowchart, was developed, constituting another contribution of the paper.
Future research should still address the capacity issue in Brazilian ports, possibly adopting a longitudinal perspective and involving testing for the most influential variables, in order to provide a full map of the efficiency drivers in this environment. Possible approaches in DEA, or even SFA, could also deal with the issue of efficiency decomposition in both financial and operational terms, taking into account handling costs, waiting time in queues, and service levels, issues that are critical for the competitiveness of Brazilian ports.

Figure 1 - Map of the Brazilian Bulk Terminals Researched. Source: The authors.

3.2 DATA ENVELOPMENT ANALYSIS

DEA is a non-parametric model first introduced by Charnes, Cooper and Rhodes (1978). It is based on linear programming (LP) and is used to address the problem of calculating relative efficiency for a group of Decision Making Units (DMUs) by using multiple measures of inputs and outputs. Given a set of DMUs, inputs and outputs, DEA determines for each DMU a measure of efficiency obtained as a ratio of weighted outputs to weighted inputs.
Source: The authors. The BCC model differs from the CCR model only in the adjunction of the convexity constraint $\sum_{j=1}^{n} \lambda_j = 1$. Regardless of the model orientation, whether input- or output-oriented, the two measures provide the same scores under constant returns to scale (CRS), but are unequal when variable returns to scale (VRS) are assumed for the efficient frontier (COOPER; SEIFORD; ZHU, 2004). Random data generation is based on iid (independent and identically distributed) draws from the probability density function used to define the respective kernel function. As a matter of fact, once the B pseudo datasets of inputs and outputs for the n DMUs have been obtained, it is straightforward to estimate CIs for a given $DMU_o$, not only for the actual distance functions, but also for the efficiency scores and the RTS indicators. Table 4 summarizes how the rejection of the convexity assumption impacts the RTS characterization under the same input orientation. These analyses were implemented in Maple 12, with 1,000 bootstrap replications, generated upon Gaussian kernel density functions, for each efficient frontier. Their results are discussed next.

Figure 7 - Simple Methodology to Assess RTS Characterization Based upon CIs. Source: The authors.

Table 1.
All these data relate to 2011, and their descriptive statistics are presented in Table 2.

Table 2 - Summary Statistics for the Sample (Year: 2011). Columns: descriptive statistics for the input measured, (I) loading hours (per year); the outputs measured, (O) loaded shipments (per year) and (O) aggregate throughput (tons/yr); and the contextual variables.
Source: The authors.
Table 3 summarizes the envelopment models with respect to the orientations and frontier types (ZHU, 2003), where $DMU_o$ represents one of the n DMUs under evaluation, and $x_{io}$ and $y_{ro}$ are the i-th input and r-th output of $DMU_o$, respectively.

Table 3 - DEA Envelopment Models
Although the weights are not crucial a priori (JENKINS; ANDERSON, 2003), DEA results rely heavily on the set of inputs and outputs used. The more variables (inputs and outputs) in the DEA, the less discerning the analysis is (JENKINS; ANDERSON, 2003). This fact demands greater care in the variable selection process, given the large number of initial potential variables.

Table 5
As discussed in Ross and Droge (2004), the BCC model features variable returns to scale, which are more flexible and reflect managerial efficiency apart from purely technical limits. The vast majority (51 out of 53) of the Brazilian bulk terminals analyzed seems to be unambiguously experiencing IRS under both RTS characterizations. No terminal appears to be unambiguously experiencing DRS. Discrepancies between RTS characterizations were found in only two cases (DMU 6, the large iron ore terminal of Vale at Tubarão Port, and DMU 24, a relevant public riverine terminal for soybean transportation from producers located in the inland states of Mato Grosso and Rondônia); both are scale efficient, that is, located at the MPSS. According to Odeck and Alkadi (2001) and Ross and Droge (2004), a DMU may be scale inefficient if it experiences decreasing returns to scale by being too large in size, or if it is failing to take full advantage of increasing returns to scale by being too small. So far, these results suggest that most Brazilian bulk terminals are running short of capacity; put in other words, their capacity is too small relative to the tasks they perform. The CCR model yields lower average efficiency estimates than the BCC model, with respective average values of 0.15 and 0.27. Also, the CCR model identifies more inefficient terminals (51 vs. 50) than the BCC model does. This result is not surprising, as the CCR model fits a linear production technology (WILSON, 2009). Finally, it should be recalled that the reciprocal of the upper bound for the distance function measure gives the lower bound for the efficiency score measure, and vice versa (WILSON, 2009).
The lower and upper bounds for the 95% CIs for the $SI_o$ and $u_o$ RTS indicators, as well as their respective bias-corrected central estimates, are given in Figure 6. The methodology used to analyze these results is synthesized in Figure 7. Within the CCR case, a given RTS characterization is considered statistically significant only if the lower and upper bounds of the confidence interval for the $SI_o$ indicator are both greater than 1 (DRS) or both smaller than 1 (IRS). When the BCC case is considered, the characterization is significant only if the lower and upper bounds of the confidence interval for the $u_o$ indicator are both greater than 0 (DRS) or both smaller than 0 (IRS). Both bounds equal to 1 or 0, respectively, strongly suggest CRS at a given significance level. As argued by Bogetoft and Otto (2010), since the connection between a given RTS characterization and its estimates is uncertain or stochastic, the hypothesis of a given characterization should be rejected if at least one of the estimated scale indicators falls outside such critical values.
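The decision rule just described can be sketched as follows; the CI bounds in the example are hypothetical.

```python
def rts_from_ci(lo, hi, neutral):
    """Significant RTS characterization from a CI on a scale indicator.

    neutral = 1 for the CCR SI_o indicator, 0 for the BCC u_o indicator.
    Both CI bounds above the neutral value -> DRS; both below -> IRS;
    otherwise no characterization is significant (CRS remains plausible).
    """
    if lo > neutral:
        return "DRS"
    if hi < neutral:
        return "IRS"
    return "not significant"

# CCR case (neutral value 1) and BCC case (neutral value 0)
print(rts_from_ci(0.42, 0.87, neutral=1))   # -> IRS
print(rts_from_ci(-0.05, 0.12, neutral=0))  # -> not significant
```

A DMU whose characterization comes back "not significant" at 95% would then be re-examined at lower confidence levels, following the flowchart in Figure 7.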