Follow up on gold hedging: Western Areas, South Deep and Gold Fields

Gold Fields purchased Western Areas (through a share swap) and thus inherited the notoriously “toxic” hedge-book of Western Areas. This event is worth considering in the light of my previous blog on hedging. Let’s apply some analysis and critical thinking here.
First, some real-world imperfections. The hedge book was created in the time of the equally notorious Brett Kebble’s involvement in Western Areas. The structure, the banks involved and the behind-closed-doors-dealings that went into it are not the subject of this post. Definitely scope for some difficult questions here though.

Ok, but what about the hedge itself? Why was it terminated? Ian Cockerill, Chief Executive Officer of Gold Fields said:

  1. “We terminated the Western Areas hedge book because we believe in gold.”
  2. “The hedge book was significantly under water and was a crippling liability to the South Deep mine. Now we can bring the asset to account in a transparent manner.”
  3. “Gold Fields is of the view that the price of gold remains firmly in a long-term upward trend and, with that outlook, it does not make any sense whatsoever to be hedged.”
  4. “It also ensures that Gold Fields remains fully transparent to investors, and that its balance sheet remains simple to understand.”

Let’s take each of these statements in turn.

  1. So Mr Cockerill is stating quite clearly that Gold Fields’ view is that gold is a good investment, and that they expect to profit from increases in the price of gold over time. Fair enough. And since they are in the gold mining industry, perhaps they will have a more informed view than the average Joe. However, since they are in the gold mining industry, maybe they have a biased view of gold. Most management teams are notoriously optimistic about their company and their industry, and can never understand why their share prices are so far below fair value! Also, this doesn’t address my major point that shareholders can easily adjust their exposure to gold in any case. And it doesn’t present any arguments for operational improvements or similar efficiencies from terminating the hedge book.
  2. A crippling liability? Raising cash to pay off a liability simply accelerates the cost to now. Not necessarily a bad thing, but not clearly a good thing either. This probably makes sense within the context of point 4 below.
  3. Hmmm, a rehash of point 1 then. Except Mr Cockerill takes it further. “It makes absolutely no sense to hedge.” Well, as I described before, there is more to the decision to hedge than a simple view on the prospects of the gold price. One wonders whether Mr Cockerill couldn’t have expanded on the reasoning that led him to conclude there was absolutely no sense in it.
  4. Ah. Yes! A very valid point, and possibly the only valid point we’ve seen so far. Hedge books are complicated derivative structures and an excellent mining analyst should know about mines and mining and minerals and prices and not necessarily a thing about fancy derivative structures. Fully agree on this one.

So this leaves us with 1/4 or 25% relevancy score. Ok, this is a bit harsh, but it does support my view that hedging decisions are made more on emotion and rhetoric than on rationality and facts.

Now, there is another side to the story. From http://www.mineweb.co.za:

Western Areas also stated that “the hedge banks may be able to terminate the derivative structure as a result of existing circumstances or as a result of an acquisition of control of Western Areas by Gold Fields and in that event, Western Areas may need to make a material payment to the hedge banks and would seek to raise this amount from shareholders, in proportion to their shareholdings. If Gold Fields acquires 100% of Western Areas, then Gold Fields will need to deal with this issue with Western Areas and the hedge banks”.

So, in other words, regardless of what Mr Cockerill and his management team think about gold, they were probably all but forced to close out the hedge book in any case. Let’s hope this was a happy coincidence and not pure spin.

Some more background on the story:

yahoo business
www.mineweb.net

Gill Marcus on Moneyweb in early 2006

Botoxing a bling deal, also from Moneyweb

The place of analysis for entrepreneurs

Entrepreneurs are hailed as saviours of modern society. The current international paradigm (and one which is growing ever stronger in South Africa) is biased towards the idea that career success requires one to create something new and build up something from scratch – head out on one’s own and conquer the earth.

OK, this doesn’t apply to everyone, but the “start work at a large company, rise through the ranks to senior management, then retire and die” career path doesn’t have the same allure as it used to. I was discussing this with some very bright friends recently. We agreed (in our pop-psychology way) that one of the causes of this change in attitude was probably the decrease in loyalty offered by large companies to their employees over the last 30 years. It might simply be a matter of survival that has created this idea that you’ve only made it once you’ve made it your own way.

So increasing numbers of engineers are heading out on their own to form small engineering consulting businesses. Even actuaries are departing the safe and comfortable world of life insurance or pensions for the wild wastelands of “entrepreneurial world-saving activities”. These are entrepreneurs with extensive theoretical training, rigorous mathematical and problem-solving abilities, and (particularly actuaries) a huge array of analysis tools and a deep-seated understanding of risk. Are these skills a benefit or a hindrance to these professionals in their new-found living-on-the-edge lives?

Jawwad Farid (an actuary in Pakistan with extensive education in the States, including an MBA) takes a view on this in his blog on the new Image of the Actuary site. (The tagline for the Society of Actuaries is “Risk is Opportunity”, which I quite like as a slogan for actuaries, since we are used to using risk rather than just getting rid of it.) I get the impression that Mr Farid is not a typical actuary in his risk-taking exploits, but he does explain the twin concepts that reduce actuaries’ willingness to take risks:

  1. Possibly higher risk aversion (the obvious, but not necessarily all-powerful, factor)
  2. Higher opportunity cost of taking risk (more to leave behind in terms of virtually guaranteed good income)

Factor 1 comes from several possible areas. Maybe only risk-averse people are attracted to studying actuarial science. Maybe only the risk-averse ones make it. Maybe the very process of writing (and passing…) the exams kills off part of the soul of actuaries, leaving them nervous shells of their former beings in the process. While I’m sure these all make a contribution, there can be no question that virtually all actuaries start out risk averse. In their career choice (made mostly at school-leaving age) they choose to study a course that is difficult (has a high cost) but offers reasonably certain rewards (provided they make it). Thus, they are paying a high price to reduce risk. This is a classic definition of risk aversion. As it turns out, many probably underestimate the amount of risk involved in actually getting through the exams. Let it not be said that actuarial students don’t back themselves to meet challenges! Overestimation of one’s abilities is one of the important components of Behavioural Finance and related behavioural studies. (Interestingly, many of these studies show that the extent of overestimation of one’s abilities increases as one’s abilities increase. The frightening implication may be that as actuaries and other “experts” become more qualified, they become more susceptible to overestimating their skills.)

But all those last few paragraphs are something of a digression. The point of this blog is to pose the question: “Does analysis have a place for entrepreneurs?” If it does, then actuaries (and similarly other analytical professionals) should have an advantage, even if it is muted by the dangers of analysis paralysis. If, on the other hand, the advantage of analysis is slight in the face of huge uncertainty and the need for brave, gut-based decisions, then actuaries would do well to steer clear.

There is no doubt in my mind that analysis is key to success in business, but I’m an actuary after all.

A bit of the science behind Google and PageRank

I’ve said before that I’m not an expert on Search Engine Optimisation, but it certainly is an interesting area where businesses of all sizes can reap definite benefits from applying a little analysis, maths and science to the problem.

Have a look at this article explaining, in reasonably simple terms, some of the background behind PageRank.
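
For anyone who wants to get a feel for the mechanics, here is a minimal sketch (my own illustration, not taken from the article above) of the power-iteration idea at the heart of PageRank, run on a tiny made-up link graph:

```python
# Minimal power-iteration sketch of the PageRank idea (illustrative only).
# The four-page link graph below is made up for the example.
import numpy as np

links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}   # page -> pages it links to
n = len(links)
damping = 0.85                                 # commonly quoted damping factor

rank = np.full(n, 1.0 / n)                     # start with equal rank everywhere
for _ in range(100):
    new_rank = np.full(n, (1 - damping) / n)   # "random surfer" teleport share
    for page, outlinks in links.items():
        share = damping * rank[page] / len(outlinks)
        for target in outlinks:
            new_rank[target] += share          # pass rank along each outgoing link
    if np.allclose(new_rank, rank, atol=1e-10):
        break
    rank = new_rank

print({page: round(r, 3) for page, r in enumerate(rank)})
```

Page 2 ends up with the highest rank because everything points to it, directly or indirectly, which is essentially the intuition the article builds on.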

This economy of ours!

Economic data out for South Africa is painting a good, if slightly confusing, picture. The good news is that the economy is still growing healthily (not compared with China’s official growth though) and inflation has dipped down very slightly. Neither of these is truly unexpected though. Exchange rate appreciation and a relaxation in the dollar oil price will have had a muting impact on inflation.

CPIX (the basket excluding things like mortgage repayments and a few other items) increased by 5% year-on-year in October. We only have October’s figure available now because of the delays in collecting, collating and analysing the data. This was slightly above forecasts, but nobody’s panicking yet. The same figure for September was 5.1%.
Meanwhile, growth in GDP is also continuing (4.7% annualised in the third quarter), without showing much impact of the interest rate increases Tito Mboweni has put in place this year. Having said that, if one digs down into the sector details, the interest rate increases have not been completely ignored, with property-related sectors showing markedly reduced growth from earlier levels.
Now for the confusing part. The previous quarter’s GDP growth has been revised up to 5.5% from 4.9%. First-quarter figures have been revised up substantially from 4% to 5%, but last year’s growth was adjusted only slightly from 4.9% to 5.1%.

So why all the changes? Well, before everyone goes on about “Lies, Damned Lies, and Statistics”, one should understand that estimating GDP is a tricky task in a developed economy, let alone in one with a significant contribution from an informal sector with limited records and reporting. As it is, there are discrepancies between information such as VAT receipts, money flowing through banks and the official GDP figures. While these measurements will be affected differently by different things, they should have a strong relationship to each other.

I’m going to dig into this as well over the next few months, but any comments or inputs are very welcome!

Urban legends and cocktail conversation

I’ve had an idea to analyse some of the many topics that come up in conversation time and time again. Chances of winning the lotto, FNB’s Million a Month account, randomness of the iPod shuffle to name a few. I’ll try to get hold of some interesting datasets and perform some basic analysis. My aim?

  1. To find out whether any of the often-claimed techniques might actually work
  2. And so discover what sort of randomness is hidden within these events
  3. And also see which of these datasets I can easily get my hands on. Any thoughts on my chances of getting FNB to chat to me about how they select winners?

Anybody have some data they’d like me to look at? Any more questions to add to this list? I’ve been away from blogging for a while, so let me know when your office Christmas party is so I can prepare your answers in time!
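
As a small taste of the sort of calculation involved, the odds of matching all six numbers in a 6-from-49 draw work out as below (the 6-from-49 format is an assumption for illustration; I haven’t checked the exact FNB or Lotto mechanics):

```python
# Odds of matching all six numbers in a 6-from-49 lotto draw.
# The 6-from-49 format is an assumption for illustration.
from math import comb

tickets = comb(49, 6)            # number of distinct six-number combinations
print(f"1 in {tickets:,}")       # 1 in 13,983,816
```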

Additional Analysis of SEOmoz web popularity data

SEOmoz.org provide some great resources on search engine optimisation (“SEO”). Recently, they performed a really interesting analysis comparing actual site traffic for 25 sites that volunteered their data against indicators from a range of competitive intelligence metrics from sources such as Google PageRank, Technorati Rank, Alexa Rank and SEOmoz.org’s very own Page Strength Tool. The stated goal of the project is described in this quote from their page:

This project’s primary objective is to determine the relative levels of accuracy for external metrics (from sites like Technorati, Alexa, Compete, etc.) in comparison to actual visitor traffic data provided by analytics programs. 25 unique sites, all in the search & website marketing niche, generously contributed data to this project. Through the statistics provided, we can also get a closer look at how the blog ecosphere in the search marketing space receives and sends traffic

You can find the commentary on their updated analysis and also the original article (updated too, I understand).

Now, I’m not yet an expert on SEO, but I do know a few things about data analysis. While their results indicate that none of the measures is particularly useful, I have three points to add:

1 Significance of correlation coefficients

A correlation coefficient does not need to be 0.9 or 0.95 to be significant, as suggested in this quote from their analysis:

Technorati links is actually an almost usable option at this point, though any scientific analysis would tell you that correlations below 90-95% shouldn’t be used.

Roughly speaking, correlation coefficients greater than about 0.7 or 70% explain approximately half the variability in the observed variable (actual page visits). Whether or not this is “significant” depends on the amount of data used to measure the correlation. There are some very specific tests for measures of significance for correlation coefficients – I have summarised the results of one of the standard tests here:

SEOmoz data Correlation Significance Table
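
For readers who want to reproduce this, the standard test is a simple t-test; a rough sketch looks like the following (the r value is a placeholder rather than one of the actual SEOmoz correlations, while n = 25 matches the number of sites in the project):

```python
# Standard t-test for whether a Pearson correlation differs from zero.
# r is a placeholder value; n = 25 matches the number of sites in the project.
from math import sqrt
from scipy import stats

r, n = 0.7, 25
t = r * sqrt(n - 2) / sqrt(1 - r**2)        # test statistic with n-2 degrees of freedom
p = 2 * stats.t.sf(abs(t), df=n - 2)        # two-sided p-value
print(f"t = {t:.2f}, p = {p:.4f}, variance explained = {r**2:.0%}")
```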

Beyond the technical statistical tests though, I would imagine that there is a great deal of value in estimating a large part of the practical popularity of a website (and presumably page visits is a sensible measure of this) through freely available “competitive intelligence metrics”. On the other hand, if you are looking for a near-exact replica of actual visits, then a much higher correlation coefficient is required.

2 Extending analysis to multiple regression rather than single correlations

OK, this does take the analysis beyond the original stated goal, but it is interesting to see how good a model of actual site popularity we can develop based on freely available “competitive intelligence metrics”. But first, it is useful to consider the correlation matrix between all variables (the “dependent variable” and all independent variables). In an ideal regression model, the independent variables will be uncorrelated with each other. On the other hand, if these metrics are any good, we would expect them to be strongly correlated with each other.
SEOmoz data Correlation Matrix
As can be seen from the table above, there are several strong correlations between the independent variables. This can lead to problems with “multicollinearity” for multiple regression techniques, but since I am trying to keep this post non-technical, I’ll leave that alone for now. It is also interesting that while all the large correlations (loosely defined here as greater than 70% or less than -70%) are positive, there are many negative correlations as well. Thus, some measures appear to be using different information or approaches to provide the metrics. Most interesting to me is that TR Rank and TR Link have a correlation coefficient of -50%. This is a hint of what is to come in the multiple regression results…
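
(For readers who would rather not build the matrix by hand, the correlation matrix and a standard multicollinearity check can be sketched in a few lines of Python; the file name and column names below are hypothetical stand-ins for the SEOmoz variables, not the actual headings.)

```python
# Sketch: correlation matrix plus variance inflation factors (a standard
# multicollinearity check). The file name and column names are hypothetical
# stand-ins for the SEOmoz variables.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

metrics = pd.read_csv("seomoz_metrics.csv")[
    ["technorati_links", "tr_rank", "alexa_rank", "page_strength"]
]

print(metrics.corr())    # pairwise Pearson correlations between the metrics

exog = sm.add_constant(metrics)              # constant term needed for sensible VIFs
vif = pd.Series(
    [variance_inflation_factor(exog.values, i) for i in range(1, exog.shape[1])],
    index=metrics.columns,
)
print(vif)               # values much above 5-10 hint at multicollinearity trouble
```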
I decided to use only very basic tools for the analysis so interested readers can perform the same analysis on their own with only MS Excel (generally a fairly weak statistics platform, even with the Data Analysis add-in activated). My aim was to find a model that explained more of the Average Visits than Technorati Links alone by combining several variables together. I had to exclude Compete Rank and Ranking Rank due to the limitations of Excel’s regression tools. I judged “good” models by a high adjusted R-squared, together with significant and sensible estimates for the individual variables. The results of a “good” model (although not necessarily the best, since I did fairly quick and dirty model selection) are given below:

SEOmoz data Multiple Regression Results

SEOmoz data Multiple Regression Results Summary

The model has a “Multiple R” (which is intuitively analogous to the normal Pearson correlation coefficient) of 89%, and the model explains 80% of the variability in Average Visits. Other measures of goodness of fit include a high adjusted R-squared (relative to other models fitted) of 71%, an F-statistic for overall model significance of 9.5, which gives a significance level or p-value of 0.00008, and low p-values for most independent variables included in the model. The intercept itself is not significant, but we leave it in to improve the overall fit of the model. Similarly, while the significance level for Alexa Page Views is relatively high at 17%, it does add to the overall model in terms of fitting the data well.

SEOmoz data Multiple Regression fitted model

Again very interestingly, but not surprisingly by now, many of the coefficients are negative. This implies that, at least after adjusting for the other variables, these measures are associated with lower rather than higher Average Visits. This suggests more analysis and more data are needed to understand the dynamics here properly!
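
(For anyone who wants to push past Excel’s regression tools, the same sort of model can be fitted in a few lines with statsmodels; again, the file and column names below are hypothetical stand-ins for the SEOmoz variables rather than the actual headings.)

```python
# Sketch: fitting a comparable multiple regression outside Excel.
# File and column names are hypothetical stand-ins for the SEOmoz data.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("seomoz_metrics.csv")
X = sm.add_constant(df[["technorati_links", "alexa_page_views", "tr_rank"]])
y = df["average_visits"]

model = sm.OLS(y, X).fit()
print(model.summary())   # R-squared, adjusted R-squared, F-statistic and
                         # per-coefficient p-values in a single report
```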

3 Quality and quantity of data
This leads me to my final comment. Twenty-five websites, while great to have even this much data, is not anywhere close to enough data to analyse this problem properly. This isn’t because of the small size of 25 sites in relation to the total number of websites on the ’net, but rather to do with the spread of sites across the different types of websites and the potential to fit the model too closely to the exact data provided rather than to some underlying reality. Again, this is a difficult area to discuss correctly and thoroughly without becoming very technical, so I’ll leave that well alone too.

Final comments

This analysis and presentation of results is very light for something this interesting. There is an enormous amount more that could be done with time, energy, more data and, for my part, a better understanding of how each of these competitive intelligence metrics is intended to work. I’d welcome any comments on what analysis would be desired (time-series? Non-linear models? More detailed regression? Rank correlation?) and whether there is any chance of getting more data. I’d be very happy to dig deeper and post the results here and/or directly on SEOmoz.org.

Models: there’s wrong and then there is Wrong

One of my favourite quotes is by George Box: “All models are wrong, but some are useful.” If you work with models and understand their place in the universe, you may already agree with this. However, there is more than one type of wrong, and while it is not always possible to tell which is which once the milk has been spilt, the difference is important.

Models are always wrong in that they aren’t a perfect replica of the “real thing” being modelled. Some may argue exceptions and that some models do perfectly model the underlying reality – I haven’t been convinced yet. The fundamental point is: if the model is the same as reality, what is the need for the model?

The purpose of most models is to provide a useful way of understanding an extremely complex system. Extremely complex systems are difficult to understand in their entirety. Economists are regularly getting bashed for throwing dangerous phrases like ceteris paribus around in their commentary and conclusions. Why the insistence on holding all other things equal? Because their model is only complex enough to understand a few components of reality and so is wrong when it comes to those other areas. This is problematic when those other things turn out to be important and unequal. The technical term for these models is “not useful”. I’ll give George the credit for this term too.

Nobody said it was going to be easy…

To build a useful model, that is. Understanding the benefits of modelling specific components requires an in-depth, often intuitive feel for the problem at hand. A consultant brought in from the outside won’t necessarily have this unless the problem is a common or generic one. A good consultant will spend a significant amount of time listening and understanding the problem, the environment and the broader issues that will influence the real benefit drivers. Recognising the costs of modelling individual pieces of the problem is more of a technical matter. Knowledge of model-building approaches, computer systems and applications, statistical techniques and actuarial projections, database management and data-mining, logical thought and system building all come into the process. Knowledge is required, but there’s often little substitute for experience. Throw in some serious academic training too and we can start to hit Excel.

But what about the other Wrong?

The wrong I’ve discussed so far is a pretty mild sort of wrong: intended, required, carefully thought through and ultimately useful. But what about Wrong in a simpler form? Wrong because a mistake was made? Wrong because a spreadsheet included errors? The real-world experience of model errors, small and very, very large, is compelling. Mistakes do happen. This post doesn’t deal with how to prevent or reduce errors (plan, document, independent review etc.), but rather with how one classifies an error once it has been discovered.

A recent example I experienced was where a mistake had been made. Unfortunately for everyone it was one of the large, conspicuous and nasty types. The cause of the mistake could have been anything from incorrect proprietary models to incompetence, with lack of judgement, lack of review, weak control processes and lack of ownership of risk management protocols floating around somewhere in between. It is impossible to tell what was intended at the date the mistake was originally made, since there is no record of what was intended, why it was done, how the decision was made, what checks were performed and who gave the thumbs up to go ahead. Unobserved and unrecorded history makes for compelling spy stories and thrillers, but doesn’t work so well in dry high-school textbooks.

The little-known Wrong before other wrongs

Given the story above, the Wrong seems to be the lack of a clear objective, understood and documented at the outset. So often, the simple act of framing a problem correctly makes giant leaps towards its resolution. This is often the Wrong that precedes other wrongs:

  • Know what you are trying to do;
  • Make sure you understand why; and
  • Be clear and specific about describing it so that you and everyone else are on the same page.

Another of my favourite quotes is by George Bernard Shaw: “The single biggest problem in communication is the illusion that it has taken place.” That last bullet above isn’t as simple as it seems.

Reference for this post

Box, G.E.P. (1979). Robustness in the strategy of scientific model building. In R.L. Launer and G.N. Wilkinson (Eds.), Robustness in Statistics. New York: Academic Press.

Measuring marketing for law firms

I haven’t discussed measurable marketing initiatives yet, which is a shame because it’s one of the most important “fuzzy business decisions” that most companies assume don’t require hard analysis, measurement and good business judgement. There is a long-standing (and important) debate around whether creative advertisements achieve the stated objective of the advertising campaign. I don’t know the answer to this question, and I suspect the answer is frustratingly along the lines of “it depends”. However, an easier debate is:

Are the stated objectives of marketing campaigns sufficiently closely related to critical business goals?

The answer is often “What business goals? This is just marketing.” This answer simply isn’t good enough.

Larry Bodine, who writes a blog that attempts to coax and drag law firms into the New Age of marketing professional services, had an interesting piece on the importance of measurable results from Chief Marketing Officers for law firms. His post is written more from the perspective of the budding new CMO, but it’s an interesting read for anyone looking at measuring marketing success. His article made me realise that I need to give this topic some more attention over the next few weeks. More on this later then.