Tuesday, June 2, 2020
Study on Robeco and Its Investment Strategy - Finance Essay
[Problem definition, relevance and motivation]

Robeco's investment strategy relies on identifying and exploiting market inefficiencies, which result from predictable patterns in investor behavior. We believe we can outperform the market by locating these inefficiencies. Central to Robeco's investment strategy is its proprietary stock-selection model, which has been in use for decades and was extensively back-tested in historical simulations. The output of this model is a cross-sectional ranking of stocks based on their expected future returns. To generate alpha, Robeco overweights certain high-ranked stocks relative to their benchmark weight and underweights certain low-ranked stocks. The ranking process is based on four factors: price momentum, earnings revisions, management and value.

Management + Value + Earnings Revisions + Price Momentum = Stock ranking

Each of these themes consists of multiple variables (e.g. the earnings-to-price ratio (E/P) and the book-to-price ratio (B/P) are variables within Value). These variables determine how attractive a stock is and may contain valuable information for predicting stock returns. The themes are combined with equal weights.

[Problem definition]

Robeco's stock-ranking process focuses on the average effect of earnings estimates, regardless of the quality and life of those estimates. We aim to add extra predictive power to the earnings estimates by investigating different aspects:
- different databases
- different forms: revisions, predicted surprise, growth level, recommendations
- different levels: consensus, individual analyst level, extreme estimates
- different horizons and the life of estimates

We try to find out which database is better, which form performs better, and so on. We will look at the earnings estimates in more detail with the goal of adding extra predictive power to the estimates currently used. The overall question we try to answer in this thesis is: how do we detect which analysts are better and which estimates have better predictive quality? We will examine the predictive power of several candidates (based on these different angles) and back-test them extensively; first we test the single variables, and later we test the added value of the new candidates to the existing selection model.

[Relevance]

Robeco continuously attempts to improve the core stock-selection model, and it is therefore relevant to examine the predictive power of earnings estimates from different angles. We start by defining candidates based on estimates at the individual analyst level instead of the consensus earnings estimates. A growing number of studies focus on the individual analyst level and illustrate the importance of analyst characteristics for stock prices. Candidates based on individual analyst-level estimates include those that focus on the past accuracy of earnings estimates, the age of the estimates, leader analysts, analyst true calls, and tenure. We also define other candidates that may add extra predictive power to the earnings estimates; these are based on relative earnings growth and on changes in buy/sell recommendations. Furthermore, we consider different horizons and incorporate data from other financial measures in addition to earnings.

[Motivation]

A key feature distinguishing our study from the previous literature is that we look at the predictive power of earnings estimates from several different angles, while most previous studies only explain the abnormal returns associated with earnings forecasts.
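To make the equal-weighted combination of themes concrete, the sketch below shows one possible implementation in pandas. It is only an illustration: the theme names, the assumption that each theme score is already a cross-sectional z-score, and the helper combine_themes are ours, not Robeco's actual model.

import pandas as pd

def combine_themes(scores: pd.DataFrame) -> pd.Series:
    """Equal-weighted combination of theme scores into a stock ranking.

    `scores` has one row per stock and one column per theme (management,
    value, earnings revisions, price momentum), each assumed to be
    standardized cross-sectionally.
    """
    combined = scores.mean(axis=1)  # equal weight on each theme
    # Rank 1 = most attractive stock (highest combined score).
    return combined.rank(ascending=False).astype(int)

# Hypothetical scores for four stocks
scores = pd.DataFrame(
    {"management": [0.5, -0.2, 1.1, 0.0],
     "value": [1.2, 0.3, -0.4, 0.1],
     "earnings_revisions": [0.8, -1.0, 0.2, 0.4],
     "price_momentum": [-0.1, 0.6, 0.9, -0.3]},
    index=["Stock A", "Stock B", "Stock C", "Stock D"])
print(combine_themes(scores))

In this sketch a rank of 1 marks the stock that would be overweighted most relative to the benchmark.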
Data

We will examine different databases with detailed information about earnings estimates:
- I/B/E/S Detailed database: this database is a good foundation for our research, as it offers both consensus-level and detailed analyst-by-analyst earnings forecasts. I/B/E/S began collecting earnings estimates for U.S. companies around 1976, while the international edition starts in 1987.
- FactSet Estimates database: FactSet also provides consensus and detailed-level earnings estimates. They claim their estimates are of higher quality, and therefore we will examine this database as well.

[TODO: better data description - universe, threshold market capitalisation, country, sector, etc.]

Methodology

We will define a number of candidates to improve the current stock-selection model. First, we examine the predictive power of past accuracy, with which we try to predict the direction of future estimate revisions. We then define further candidates based on specific analyst characteristics: Robeco currently focuses on the average effect of earnings estimates, regardless of the quality of the analysts who produce them, so we examine the predictive power of candidates built on analyst characteristics. Finally, we examine a relative earnings growth candidate that does not rely on analyst characteristics but has shown predictive power in the literature and is easy to construct from the I/B/E/S dataset. We construct a top-minus-bottom strategy for each candidate and back-test both the single variables and the added value of the new candidates to the existing stock-selection model.

Candidate list

We start by describing a list of candidate factors that may help in predicting stock returns. Defining these candidates up front reduces data mining.

Candidates based on analyst characteristics

There are several candidates based on analyst characteristics that we will examine.

Candidate 1: Past accuracy

We start by simply looking at the predictive power of past accuracy. For each analyst on each stock we measure the analyst's historical accuracy, using the same measure as Brown (2001). Brown (2001) shows that, for distinguishing more accurate from less accurate earnings forecasts, a simple model of past accuracy performs as well as a more complex model based on five analyst characteristics. Past accuracy (PAt) is defined as the individual analyst's forecast error that year (FEt) minus the mean forecast error of all analysts following the company that year, scaled by that same mean forecast error:

PAt = (FEt - mean(FEt)) / mean(FEt)

The forecast error is the absolute value of the difference between the actual annual earnings (A0t) and the last forecast made by the analyst for that year (LA1t):

FEt = |A0t - LA1t|

We have to examine the database and the distribution of the estimates before deciding which weighting scheme to use. We put more weight on analysts with more accurate estimates in the past. Alternatively, we can order the estimates and take the median of the ordered set of past-accuracy values.

Candidate 2: Forecast age

Recent estimates are more important than stale estimates. We use the same variable as Brown (2001). The forecast age (AGEt) is defined as the number of calendar days between the analyst's last annual forecast and the fiscal year-end, minus the average forecast age of all analysts following the company that year. We should give more weight to the analysts with the most recent estimates.
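As a rough illustration of Candidates 1 and 2, the sketch below computes Brown-style past accuracy and relative forecast age from a detail-level estimates table. The column names (stock, fiscal_year, actual, last_forecast, forecast_date, fiscal_year_end) are placeholders for this sketch, not the actual I/B/E/S field names.

import pandas as pd

def accuracy_and_age(detail: pd.DataFrame) -> pd.DataFrame:
    """Per-analyst past accuracy (PA) and relative forecast age (AGE).

    `detail` holds one row per (stock, fiscal_year, analyst), with the actual
    annual earnings, the analyst's last forecast for that year, the forecast
    date and the fiscal year-end (the last two as datetimes).
    """
    out = detail.copy()
    by_firm_year = ["stock", "fiscal_year"]

    # FEt = |A0t - LA1t|
    out["fe"] = (out["actual"] - out["last_forecast"]).abs()
    mean_fe = out.groupby(by_firm_year)["fe"].transform("mean")
    # PAt = (FEt - mean(FEt)) / mean(FEt); negative means better than average
    out["pa"] = (out["fe"] - mean_fe) / mean_fe

    # Forecast age in days, de-meaned per firm-year; negative means more recent
    out["age_days"] = (out["fiscal_year_end"] - out["forecast_date"]).dt.days
    out["age"] = out["age_days"] - out.groupby(by_firm_year)["age_days"].transform("mean")
    return out

Analysts with negative pa (more accurate than the firm-year average) and negative age (more recent than average) would then receive more weight in whatever weighting scheme is chosen after inspecting the distributions.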
Again, we first have to examine the database and the distribution of the estimates before we can decide which weighting scheme to use.

Candidate 3: Lead analyst

The timeliness of analysts' forecasts can be used as a proxy for unobservable skill in collecting information, since a leader analyst should be able to release earnings forecasts before competing analysts do. Cooper et al. (2001) use a leader-follower ratio (LFR), which measures the extent to which an analyst is a leader. We will also use this ratio to rank the stocks followed by such leader analysts. The leader-follower ratio is the cumulative time by which an analyst's forecast revisions lead other revisions, divided by the cumulative time by which they follow other revisions:

LFR = (sum over i of ti,1) / (sum over i of ti,0)

where ti,1 and ti,0 are the lengths of time by which a given forecast revision leads or follows another forecast revision, respectively. If the LFR is greater than 1, the analyst is a leader. Cooper et al. (2001) show that forecast revisions by lead analysts are positively correlated with recent changes in stock prices. This may indicate that lead analysts have predictive power, and therefore we will examine this candidate. We would expect to observe excess stock returns as investors respond to the release of revised forecasts by follower analysts. We will focus only on these lead analysts. Again, we first examine the database.

Next we define variables based on conflicts of interest. If we know the incentives analysts have to issue biased earnings forecasts, we can generate an abnormal return by identifying these biases.

Candidate 4: Analyst true call

The J.P. Morgan (2009) research report focuses not only on analyst forecasts that strongly deviate from the consensus, but on what it calls "analyst true calls": earnings forecasts that are already away from consensus and are then moved even further away. More weight should be given to these forecasts, as the analysts making them are evidently very confident: they move even further away from the consensus. We use the same method as described in the J.P. Morgan (2009) report:
1. Find the highest and lowest earnings forecasts for the next fiscal year, stock by stock. We focus on the analysts who are already away from consensus.
2. Starting with these analysts, filter to include only the stocks where the highest earnings forecast has been further increased, or the lowest earnings forecast has been further decreased, over the previous month. Thus, we focus on the analysts who revise a forecast even further away from consensus.
3. Create two universes: positive analyst true calls (the highest earnings forecast is further increased) and negative analyst true calls (the lowest earnings forecast is further decreased).
4. Rank the stocks in the two universes. We buy the stocks at the top of the positive universe and sell the stocks at the bottom of the negative universe.

A disadvantage of this test is that the filter is very strict: an analyst will not move further up or down every month, so we should test this discreteness. In the J.P. Morgan (2009) report the procedure is not repeated every month.
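Before turning to the more robust variant described below, here is a minimal sketch of the basic monthly screen behind Candidate 4. The table layout and the simple one-month comparison are assumptions made for illustration; they are not the exact J.P. Morgan procedure.

import pandas as pd

def true_call_universes(detail: pd.DataFrame):
    """Split stocks into positive and negative 'analyst true call' universes.

    `detail` holds one row per (stock, analyst) with the current
    next-fiscal-year estimate ('estimate') and the same analyst's estimate
    one month earlier ('estimate_1m_ago'). Column names are placeholders.
    """
    positive, negative = [], []
    for stock, grp in detail.groupby("stock"):
        top = grp.loc[grp["estimate"].idxmax()]      # most optimistic analyst
        bottom = grp.loc[grp["estimate"].idxmin()]   # most pessimistic analyst
        # Positive true call: the highest estimate was raised further this month.
        if top["estimate"] > top["estimate_1m_ago"]:
            positive.append(stock)
        # Negative true call: the lowest estimate was cut further this month.
        if bottom["estimate"] < bottom["estimate_1m_ago"]:
            negative.append(stock)
    return positive, negative

The two lists would then be ranked further, buying from the top of the positive universe and selling from the bottom of the negative universe.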
We can make this strategy more robust:
(i) Find the x% of earnings forecasts in the bottom quintile and the x% in the highest quintile. After a revision, select the estimates that moved further away from consensus.
(ii) From the estimates found in step (i), select the x% that are again in the bottom or the highest quintile.
(iii) Create two universes: positive analyst true calls and negative analyst true calls.
(iv) Rank the stocks in the two universes. We buy the stocks at the top of the positive universe and sell the stocks at the bottom of the negative universe.

This second approach is more suitable for ranking, because not all analysts revise their earnings forecasts each month.

Candidate 5: Tenure

Brown (2009) shows that a strategy of buying a portfolio of firms followed by low-tenure analysts and selling a value-weighted portfolio of firms followed by high-tenure analysts earns abnormal returns. We use almost the same definition of tenure as Brown (2009), but in months instead of years:

TENi,t = the number of months since the analyst first made an estimate of year-ahead earnings in the I/B/E/S database.

We follow the approach defined in Brown (2009) and rank all stocks in the analyst forecast sample based on the median value of analyst tenure. Consider the ordered tenure values of the k analysts covering stock i at a certain point in time t: TEN1,i,t <= TEN2,i,t <= ... <= TENk,i,t. The median of this ordered set is:

MEDIANi,t[TEN] = ( TEN⌊(k+1)/2⌋,i,t + TEN⌈(k+1)/2⌉,i,t ) / 2

where ⌊x⌋ is the largest integer not greater than x and ⌈x⌉ is the smallest integer not less than x. Stocks followed by high-tenure analysts go into the top portfolio and stocks followed by low-tenure analysts go into the bottom portfolio. We can use this candidate in addition to our current earnings revisions strategy.

Candidate 6: Star analyst

Fang and Yasuda (2008) show that recommendation changes by star analysts are profitable. We will examine the predictive power of the earnings forecasts of these star analysts and use this candidate in addition to the current earnings revisions strategy. Fang and Yasuda (2008) measure analyst reputation by the All-American title granted by Institutional Investor magazine. The magazine publishes rankings throughout the year and has been the leading source of survey-based rankings identifying top analysts; it covers equity markets in Asia, Europe, Japan, Latin America, Russia and the U.S. An analyst retains star status for 12 months after publication in Institutional Investor; AA elections occur in October of every year. We should match the names of the AA analysts from the Institutional Investor listings with the I/B/E/S dataset.

[TODO: additional information is needed about the availability of these rankings - is a membership required for access to this magazine?]

Other candidates

Candidate 7: Relative earnings growth

According to Da and Warachka (2009), stocks with optimistic and pessimistic long-term analyst forecasts relative to the short-term implied growth have negative and positive risk-adjusted returns, respectively. We need the earnings of the previous year (A0t), the earnings forecast for the current fiscal year (A1t) and the long-term growth forecast (LTGt) from the I/B/E/S Detailed database. We use the same definition of implied short-term growth (ISTGt) as Da and Warachka (2009): the short-term earnings growth implied by the current-year forecast A1t relative to the previous year's realized earnings A0t. The difference LTGt - ISTGt is an appropriate measure of the relative optimism or relative pessimism of analysts at the portfolio level. Da and Warachka (2009) conduct the analysis on an earnings-per-share basis, which is also available in the I/B/E/S Detailed database.
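A small sketch of the relative-optimism measure behind Candidate 7, under the assumption that implied short-term growth is computed as (A1t - A0t) / |A0t|; the exact scaling in Da and Warachka (2009) should be checked, and the column names are illustrative.

import pandas as pd

def relative_optimism(df: pd.DataFrame) -> pd.Series:
    """Long-term growth forecast minus implied short-term growth, per stock.

    `df` has columns 'a0' (previous-year earnings), 'a1' (current fiscal-year
    earnings forecast) and 'ltg' (long-term growth forecast, expressed in the
    same growth-rate units as ISTG).
    """
    # Assumed form of implied short-term growth: growth from A0 to A1.
    istg = (df["a1"] - df["a0"]) / df["a0"].abs()
    # Positive values indicate relatively optimistic long-term forecasts.
    return df["ltg"] - istg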
In the paper they explain that for some firms the earnings forecast for the current fiscal year is near zero; therefore they construct a Slope variable as the difference between the rankings of LTG and ISTG. We will use the same approach: rank the stocks on the Slope variable into deciles from 1 to 10 in descending order, buy the stocks at the top and sell the stocks at the bottom.

Candidate 8: Changes in buy/sell recommendations

Jegadeesh and Titman (2004) and Jha et al. (2003) show that changes in analyst buy/sell recommendations provide a meaningful signal, as they confirm the earnings revisions. Jegadeesh and Titman (2004) examine the relation between analyst recommendations and other concurrently available public information, and find that the quarterly change in consensus recommendations is a robust return predictor that appears to contain information orthogonal to this range of other predictive variables. Therefore, we will use changes in buy/sell recommendations as signals of future stock performance.

Combining the previous candidates

Finally, we combine the previous candidates into a more accurate estimate, investigate multiple horizons, incorporate data from other financial measures in addition to earnings, and use changes in buy/sell recommendations. The StarMine (2007) research paper reports that a small group of analysts usually leads the peer group and releases forecasts of higher quality; by following the earnings revisions of these analysts we can improve on the outperformance obtained from the consensus. They measure analysts' historical accuracy to better predict the direction of future earnings revisions, and their model puts more weight on the most accurate and most recent estimates. They also investigate multiple horizons, incorporate data from other financial measures in addition to earnings, and use changes in buy/sell recommendations. We can use the basic idea of this model.

How can we measure earnings accuracy? We can use past accuracy, the timeliness of the estimate (tenure) and how extreme the estimate is (analyst true call). We first have to test the predictive power of the single candidates and the accuracy of these variables. [TODO: other ideas?]

If we have a measure of the accuracy of analyst forecasts, we can calculate a weighted-average estimate that is better than the consensus estimate: because we identify the individual analysts that are more likely to be accurate in the future, we can construct an estimate that beats the consensus. We should also look at the age of the earnings estimate. StarMine (2007) excludes analysts with stale forecasts from the analysis; after examining the dataset we can do the same, or use a weighting scheme (for example exponential decay). Once we have an estimate that is better than the consensus, we can add other aspects to it, as is done in StarMine (2007): we combine the predicted surprises (the percentage difference between this weighted-average estimate and the consensus) with consensus changes in EPS, EBITA and revenue for the current fiscal quarter, the current fiscal year and the next fiscal year. Then we can combine this revisions component score with the recommendation revisions component.

[Figure: screenshot of a video on the StarMine website]

Back test

We will first test the predictive power of each single candidate using a back test. The basic idea of the back test is to sort the universe into deciles based on the candidate characteristic.
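As a sketch of the decile back test, assuming a monthly panel with one candidate score and the subsequent one-month return per stock (column names are illustrative):

import pandas as pd

def decile_backtest(panel: pd.DataFrame) -> pd.DataFrame:
    """Average next-month return per candidate decile, plus top minus bottom.

    `panel` holds one row per (date, stock) with the candidate score
    ('score') and the subsequent one-month return ('fwd_return').
    Decile 10 contains the highest scores.
    """
    panel = panel.copy()
    # Assign deciles 1..10 cross-sectionally at each date.
    panel["decile"] = (panel.groupby("date")["score"]
                            .transform(lambda s: pd.qcut(s, 10, labels=False) + 1))
    # Average forward return per (date, decile), one column per decile.
    by_decile = (panel.groupby(["date", "decile"])["fwd_return"]
                      .mean()
                      .unstack("decile"))
    by_decile["top_minus_bottom"] = by_decile[10] - by_decile[1]
    return by_decile

Averaging the top_minus_bottom column over time gives a first impression of a candidate's stand-alone predictive power; the added value to the existing model can then be judged by running the same sort on the candidate after controlling for the current model's ranking.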
Analyze results

We select the most promising factors for inclusion in the current stock-selection model.

Time schedule (March - August)

- Literature review and description of the methodology: 2 months
- Download data and data check: 2 months
- Test the predictive power of the single variables: 2 months
- Test the added value of the new candidates to the existing selection model: 2 months
- Further improvements: 2 months
- Writing the thesis: 3 months