Category Archives: communication

Make A Million competition encourages financial meltdown

[There is an additional post for the 2010 competition on how to not lose money in the make a million competition.]

The Make a Million competition is in its fourth year. The aim? Take a personal investment of R10,000 and compete with other “investors” to earn the highest return over a short period of a few months.

The competition itself has always struck me as a little strange. No doubt the purpose is to raise the profile of PSG and encourage new clients to begin share trading with them. However, it is also positioned as an educational programme where newbies can learn to invest in the stock market.

Invest? Oh, you mean speculate like crazy?

The irony is the heavy use of the word invest in all of this. Investing is taking carefully considered long-term positions in great companies at reasonable or cheap valuations, to benefit from the improving prospects of the underlying business and to be rewarded for providing capital to it.

The nature of this competition can be likened to an exotic type of binary option. Whoever makes the highest return in this short period (far too short for the economic and business fundamentals involved in real investing to play a serious part) wins the “pay-off”. The sensible strategy is to find the most volatile assets possible and plough all the cash into these few positions in the hope that they skyrocket. The downside is limited to R10,000 (ok, R10,650 if one includes the entrance fee) and the upside runs to the seven digits of a million rand.

The competition is at least partially marketed at those new to trading. This is NOT the right way to introduce investors to the stock market.

Introducing MaM4 with Single Stock Futures

This year, the competition has outdone itself in promoting irresponsible speculation by introducing Single Stock Futures (SSFs). With these, entrants can multiply the volatility of their positions several-fold, increasing the possible (not expected!) value of their portfolios. Again, the downside is limited, so strategies that ramp up volatility (at the expense of expected return) can be shown to be optimal.

The sneaky thing is that as participants increase the volatility of their portfolios, the expected value of the winning portfolio increases! The average return across all portfolios is unchanged, and may quite likely even decrease. However, the winning portfolio (which both I and Nassim Taleb would call the luckiest portfolio) is likely to be more impressive: as the volatility of individual portfolios increases, the range of outcomes (positive and negative) widens, and since the largest portfolio wins, the expected size of the winning portfolio grows. A misleading, if attractive, advert for stock speculation and PSG.
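
A minimal Monte Carlo sketch of this effect (the entrant numbers and volatilities are made up, not the competition’s actual figures):

```python
# Higher volatility leaves the *average* final portfolio unchanged but
# inflates the expected value of the *winning* (i.e. luckiest) portfolio.
import numpy as np

rng = np.random.default_rng(42)
n_entrants, n_sims = 1_000, 2_000   # hypothetical competition sizes
start = 10_000                      # rand per entrant

for vol in (0.10, 0.30, 0.80):      # low, medium and SSF-style volatility
    # lognormal outcomes calibrated so every portfolio has the same mean
    finals = start * rng.lognormal(mean=-vol**2 / 2, sigma=vol,
                                   size=(n_sims, n_entrants))
    print(f"vol {vol:.0%}: average portfolio R{finals.mean():,.0f}, "
          f"average winning portfolio R{finals.max(axis=1).mean():,.0f}")
```

The average entrant finishes on roughly R10,000 in every case; only the winner’s expected prize-giving photo improves with volatility.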

All sounds horribly like the cause of the current financial meltdown

The current global financial disaster was largely created through excessive gearing, poor understanding and management of risks, arguably weak regulation, and performance incentives that directly motivate risk taking for decision makers.

One of the key lessons of economics is that incentives drive action. Executives and traders were given huge upside potential in terms of bonuses and stock options for good performance, with the relatively limited downside of a still-significant salary and maybe a polite “moved on to new challenges”. We reward upside performance and ignore risk in the process.

Nassim Taleb’s core point (his books are if anything more relevant now than a few years ago: “Fooled by Randomness”, which I strongly recommend, and “Black Swan”, which is just ok) is that there is so much randomness and uncertainty in investment results that it is nearly impossible to identify real skill.

So the old Make a Million competition promoted risk-blind speculation over short time horizons. The new one takes this several steps further into the irresponsible.

Change Make A Million, save your soul before it’s too late

PSG, Moneyweb, Galileo Capital, JSE, ABSA Capital, Satrix and Deutsche Bank, do the right thing. Change the competition to one that is responsible.

Some interesting ideas for an alternative, less unconscionable competition:

  • Be based on performance over at least a full year (I accept a 10-year competition isn’t viable!)
  • Allow investors to choose a portfolio at the start and NOT trade. (oh, this doesn’t encourage brokerage and adrenalin and excitement? Pity.)
  • Not allow investors to use geared products such as warrants and SSFs. Serious investments in underlying equities only please.
  • Change the prize to be split amongst all investors who make a return of inflation + 15% over the period. This way, participants aim to generate very good returns with high probability (this target is still difficult for a real-world asset manager!) rather than go all-out for maximum return.
  • Alternative performance targets could also be offered: best risk-adjusted return, with the risk adjustment based on traditional standard deviation, downside deviation or maximum drawdown (a rough sketch of these measures follows this list). You could also consider performance relative to the index as a whole, which would prevent scenarios where nearly everyone or hardly anyone wins the prize because of movements in the overall market.
  • Abolish the abominable practice of allowing more than one account, more than one entry, and the ability to re-enter if one’s funds drop low during the competition. What were you thinking anyway?
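
For what it’s worth, here is a rough sketch (with illustrative monthly figures, not any official scoring rules) of how the risk-adjusted measures suggested above could be computed:

```python
import numpy as np

def competition_scores(monthly_returns, monthly_index_returns):
    """Alternative scoring measures for a portfolio's monthly returns."""
    r = np.asarray(monthly_returns, dtype=float)
    growth = np.cumprod(1 + r)                       # cumulative portfolio value
    total = growth[-1] - 1                           # total return over the period
    vol = r.std(ddof=1)                              # traditional standard deviation
    downside = r[r < 0].std(ddof=1) if (r < 0).sum() > 1 else np.nan
    peaks = np.maximum.accumulate(growth)
    max_drawdown = ((growth - peaks) / peaks).min()  # worst peak-to-trough fall
    vs_index = total - (np.prod(1 + np.asarray(monthly_index_returns)) - 1)
    return {"total": total,
            "per unit volatility": total / vol,
            "per unit downside vol": total / downside,
            "max drawdown": max_drawdown,
            "relative to index": vs_index}

print(competition_scores([0.04, -0.02, 0.06, -0.05, 0.08, 0.01],
                         [0.02, -0.01, 0.03, -0.04, 0.05, 0.00]))
```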

These are just a few thoughts I had. Virtually anything must be better than what you’re doing at the moment.

5 Mistakes you make when you leave the science out of marketing

Marketing is naively thought to be mostly art and very little science. While it is true that there are elements of inspiration and creativity and passion involved, the balance of an effective strategic marketing role is heavily in favour of science.

As a further point to consider, I put forward the proposition that much of really great science involves inspiration, creativity and passion in more than equal measure to a successful marketing decision. Newton’s development of the laws of motion and gravity, Copernicus’s sun-centred model of the heavens, and Pasteur’s painstaking experiments to support and understand germ theory are all well-known examples of brilliance and flair combined with method and rigour.

But where does science contribute to marketing? Is it possible to reap the benefits of logic and analysis and rigour without damaging the creative process?

The answer is “absolutely without a doubt” for numerous reasons. I will touch on just one in the next few paragraphs to demonstrate the idea.

Introducing analytics

The most commonly thought of analytics when it comes to marketing is customer analytics. Better understanding of customer behaviour, preferences and ultimately buying decisions is enormously valuable. Take what was done in the past, compare the success rates of the different initiatives, and stop doing the ones that don’t work.
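
As a hedged illustration (the campaign numbers below are invented), comparing the success rates of two past initiatives is often just a two-proportion test:

```python
from statistics import NormalDist

def conversion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a difference in two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# e.g. email campaign: 120 sales from 2,000 mails; banner: 80 from 2,000 views
print(conversion_p_value(120, 2_000, 80, 2_000))  # ~0.004: probably a real difference
```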

Any organisation can benefit from understanding what works and what doesn’t, and shifting resources to those functions that work. Good organisations also understand the value of play and experimentation, and will continue to allow an element of trial and error. Truly excellent organisations combine experimentation with analytics to understand, on a measurable level, which experiments work and which should be tossed.

A real life example of the place of analytics

Let’s consider a very specific example. An old university friend of mine has started a new venture with a unique offering that clearly means a great deal to him. He has the passion, and presumably the product, to make his idea a success. He also had the good sense to plug into social networking platforms such as Facebook to spread the word of his new website and associated content. So far we have an excellent platform for success.

Mistake #1: Ignoring pre-existing science and analytical results

However, the design of the website appears to have been performed without understanding the hard, measurable evidence from a range of pre-existing studies and material. The website makes it difficult to buy. A long slide-show intro precedes access to the main page, frustrating regular visitors to the page (the intro cannot be skipped) and severely damaging the ability of his site to be spidered and highly ranked by search engines.

So several mistakes have been made by disregarding the clear evidence that has been accumulated through analysing customer behaviour on similar projects.

Mistake #2: Not performing analytics on web page at the outset

An excellent first step in understanding how customers will interact with your sales channel is to watch customers interact with your sales channel. Before a site goes live, invite some representatives from your target market to try it out (friends and family will do in a pinch if budget is tight, as long as you are confident they will give honest feedback).

Watch them interact with your website (or other sales and information channel). Where are they confused? Do they ask many questions? You won’t be there in person for most of your customers. What do they say is good? What do they say is ugly? If one guinea pig says something doesn’t work, that could be personal preference. If all 3 or 4 give similar feedback, the scientific evidence is mounting and a wise marketer would make changes.

This can be a very quick and easy, but amazingly valuable way to understand the strengths and weaknesses of your approach. Don’t assume you are just like your customers.

Mistake #4: Not starting to collect analytics and data from the start

It is so easy to collect useful information if you plan it in from the start. Once the system is set up and the process is working, invaluable information will flow with every visit, every call, every surf and every purchase.

Not setting up to collect data is usually the first sign that the marketer doesn’t understand the value of understanding the customer.
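
To make the point concrete, here is a deliberately tiny sketch of planning data collection in from the start (the event fields are illustrative, not a prescribed schema):

```python
import csv
import datetime

def log_event(event_type, page, referrer, path="events.csv"):
    """Append one customer event so it can be analysed later."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.datetime.now(datetime.timezone.utc).isoformat(),
             event_type, page, referrer])

log_event("visit", "/mousetraps", "facebook.com")
log_event("purchase", "/checkout", "facebook.com")
```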

Mistake #5: Thinking science cheapens the experience

Perhaps this should be mistake #1. Many people with great ideas feel that their ideas should sell on their own merit. They view a logical, analytical understanding of customers as beneath them. But if the product is good, if customers will benefit from purchasing the good or service from you, then you owe it to them to make it as easy as possible for as many of them as possible to move effortlessly from oblivious potential customer to satisfied repeat customer.

If your aim is to build the perfect mousetrap, perhaps it is worth finding out what customers want in a mousetrap, where they like to buy it and how they like to buy it.

What is your total property return?

Measurement is a tricky thing. There are so many ways to get it wrong, and many of these incorrect or misleading measures are applied to property returns.

The 30 second view from a backward looking bird

The South African residential housing market has been entertaining to watch for several years now. At first, a sensible readjustment as confidence in our country and economy improved. Then, a further strong surge off the back of lower interest rates, themselves a result of imported deflation through a strengthening currency. The currency’s strength, hindsight shows us, was again due to improved confidence in our country, but with a large dose of commodity cycle boom. See Australia’s economy and currency for a similar pattern; their economy has a similar base to our own.

But then, with time, the property boom whizzed a little out of hand. Everybody (and their shoe-shine boy) was investing in multiple properties, any properties, as much as heavily geared loans of 100% and more could buy.

Interestingly, well over a year ago First National Bank began tightening its credit-granting criteria and decreasing the concessions offered below prime. This was a clear decision to give up precious market share to maintain margins and risk levels. Well done FirstRand.

The more frightening eyes-forward reality check

And now the property market is looking rather sorry. Except for a few ridiculous estate agents punting the strong demand and limited supply (and nobody ever really believes them anyway), the market is on its way down. Standard Bank’s property measure (the median of their sales) has been significantly down for several months now. ABSA’s (the mean of their portfolio) is still positive, but only just.

Mean vs median vs trimmed mean

Standard Bank uses the median since it is more stable than the mean and less subject to outliers. This is true, but I would be interested to see the results with a few other measures. The trimmed mean is the average of values within a certain range (typically the 10th to the 90th percentile, but different ranges are possible). It provides a good measure of overall central tendency while limiting the effect of outliers.
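
A quick illustration with made-up sale prices (in R’000) shows how the three measures react to a single outlier:

```python
import numpy as np

prices = np.array([450, 520, 580, 610, 640, 700, 760, 820, 900, 9_500])

def trimmed_mean(x, cut=0.10):
    """Mean after dropping the top and bottom `cut` fraction of values."""
    x = np.sort(x)
    k = int(len(x) * cut)
    return x[k:len(x) - k].mean()

print(np.mean(prices))        # 1548.0 -- dragged upwards by the outlier
print(np.median(prices))      # 670.0  -- robust, but ignores magnitudes entirely
print(trimmed_mean(prices))   # 691.25 -- a robust middle ground
```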

Weaving baskets

Both ABSA and Standard Bank use the ratio of the average (according to their respective measures) price of houses within their portfolios acquired in month x to that of houses acquired in month (x−t) to measure the change in house prices over the period t from (x−t) to x. This captures three effects:

  1. Changes in average house prices over the period
  2. Changes in the mix of houses sold nationally over the period
  3. Changes in the make-up of the sales to each bank over the period

Item 1 is what we want to understand. Items 2 and 3 are distortions to the numbers. Measurement errors, if you like.

Fortunately, since these portfolios are both significant portions of total sales, the baskets are fairly representative of national sales. However, as economic conditions change, activity in different segments of the market would be expected to change. So this is not only a source of increased measurement error, but quite likely a source of bias too.

Then, as touched on earlier in this post, banks will have different risk-appetites for mortgages at different points.

Incidentally, the competition commission would do well to notice these clear signs of competition between banks. These differences are the stuff of real cut-throat rivalry, with banks holding out as long as they dare before making a stand for profit margins, hoping that others will also breathe a sigh of relief and follow suit. This doesn’t reflect cartel-like behaviour, but rather tight competition that pushes all participants in the great market game to the limit.

So as banks make decisions at different times, their share of the total property market will shift and change. This again introduces not only additional error but potential bias. Of course, it is conceivable that taking some sort of appropriately weighted average across all providers of mortgages could remove this problem. However, large as ABSA and Standard Bank are, they are by no means the entire market.

Glitch in the matrix

Standard Bank representatives have highlighted another specific problem area.

Unusual property market activity preceded the National Credit Act. The view put forward is that property prices experienced an upward blip as market participants (blame spread between buyers, sellers, estate agents, mortgage originators and banks, but just not equally) rushed to process their purchases before the new rules came into play and possibly restricted their transactions. The evidence would suggest that buyers were prepared to pay a little more to push through the deals. Or estate agents worked an extra few hours to earn their large commissions, since they had the most to lose with lower volumes of sales.

Like-for-like sales, a better alternative

Property market indices in some other countries have the luxury of more data. Their indices are constructed by examining sales of the same house over time.

This may be best explained with an example. Two houses, A and B, are sold for ZAR500,000 each in 2005. In 2006 (exactly a year later), house A is resold for ZAR550,000. A fair estimate of property price inflation over the year is 10%. In the second year, house B is sold for ZAR616,000. The two-year return is measured as (616,000 / 500,000) − 1 = 23.2%. The one-year return over the second year is (1 + 23.2%) / (1 + 10%) − 1 = 12%. For those of you familiar with forward rates and spot rates, this should be ringing a bell about now.

Example 1 for measuring property price changes
House   | Sale t=0 | Sale t=1 | Sale t=2 | Holding period return
A       | 500,000  | 550,000  |          | 10%
B       | 500,000  |          | 616,000  | 23.2%
Average | 500,000  | 550,000  | 616,000  |

Now, for this example as provided, we could also take the ratio of sales in each year (year one: 550,000/500,000; year two: 616,000/550,000) and get the same growth. The two methods diverge when the house prices themselves are different from the outset. Let’s consider the scenario of houses C and D.

Example 2 for measuring property price changes
House   | Sale t=0 | Sale t=1 | Sale t=2 | Holding period return
C       | 500,000  | 550,000  |          | 10%
D       | 200,000  |          | 246,400  | 23.2%
Average | 350,000  | 550,000  | 246,400  |

With the figures from the above table, the results look very different under the two methods. The like-for-like method gives us returns identical to Example 1: 10% and 12% for years one and two respectively. However, the naive average of sale prices in each period gives nonsensical results of 57.1% in year one and −55.2% in year two.

This example gives particularly poor results because the mix of properties sold changes so drastically from year to year.
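
The whole of Example 2 fits in a few lines of Python, if you want to check the arithmetic:

```python
# House C sells at t=0 and t=1; house D sells at t=0 and t=2.
sales = {"C": {0: 500_000, 1: 550_000},
         "D": {0: 200_000, 2: 246_400}}

# Like-for-like: chain the holding-period returns of the same house.
year1 = sales["C"][1] / sales["C"][0] - 1              # 10.0%
two_year = sales["D"][2] / sales["D"][0] - 1           # 23.2%
year2 = (1 + two_year) / (1 + year1) - 1               # 12.0%

# Naive method: ratio of the average prices of whatever happened to sell.
avg = {0: (500_000 + 200_000) / 2, 1: 550_000, 2: 246_400}
naive1 = avg[1] / avg[0] - 1                           # +57.1%
naive2 = avg[2] / avg[1] - 1                           # -55.2%

print(f"like-for-like: {year1:.1%}, {year2:.1%}")
print(f"naive average: {naive1:.1%}, {naive2:.1%}")
```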

Almost the conclusion

So we have already explored some of the difficulties in measuring the actual change in property prices. The remaining step is to understand how property is measured as an investable asset in the financial press.

A typical approach is to consider the total capital growth or decline in property prices over the period. Sometimes commentators deduct CPIX off this nominal growth figure to determine the real capital growth or decline. If this result is negative, investors are deemed to have lost money in real terms. However, this is naive:

  • It ignores rental cashflows that would have been received over the period. Rental yields fluctuate, but can range anywhere from 1.5% for high-priced residential houses to 10% for industrial rentals in unattractive areas. One must add this to the capital return to determine whether an investor has made a profit or loss over the period under consideration.
  • On the other hand, direct property is an expensive asset class to invest in. There can be an array of expenses incurred in the management of the investment. These should be deducted off the gross return calculated so far.
  • A more complex area is that of financing. Those who say property is a fantastic investment usually highlight the fact that “nobody is making any more land” (which is not strictly true, given that Cape Town’s foreshore is reclaimed land, as are parts of the Netherlands, Beirut and developments around Dubai, to name just a few of which I am aware) and that one gets to use the bank’s money to make money. If one considers the effects of gearing, one must also consider whether the financing is fixed or variable rate, and possibly venture into the realms of risk-adjusted returns (the sketch after this list includes a simple geared example).
  • For taxable investors, one might want to consider tax and the extent to which expenses incurred are tax deductible, and the split between income tax on rentals and capital gains tax on, well, capital gains.
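
A back-of-envelope sketch pulling these pieces together (all inputs hypothetical, and tax ignored for simplicity):

```python
def total_property_return(capital_growth, rental_yield, expense_ratio,
                          loan_to_value=0.0, borrowing_rate=0.0):
    """One-period return on the investor's own equity in a property."""
    gross = capital_growth + rental_yield - expense_ratio
    if loan_to_value == 0:
        return gross
    # Gearing: returns are earned on the whole property,
    # but interest is paid on the borrowed portion.
    equity = 1 - loan_to_value
    return (gross - loan_to_value * borrowing_rate) / equity

# 2% capital decline, 6% rental yield, 1.5% expenses:
print(total_property_return(-0.02, 0.06, 0.015))               # ungeared: +2.5%
print(total_property_return(-0.02, 0.06, 0.015, 0.80, 0.12))   # 80% geared: -35.5%
```

Note how the same modestly positive ungeared year becomes a severe loss once gearing and the cost of financing enter the picture.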

So the real conclusion is that measurement of returns to investors in property is a complex area. Although we may all feel we have a good understanding of property since many of us own property, that doesn’t provide instant certification to discuss the property market in the financial press and on financial websites.

Telkom, SBC and a few things suddenly making sense

Business Report is running a story about the shareholder agreement between government and SBC that impacted South Africa’s telecommunications environment.

Ann Crotty (from Business Report) writes:

The shareholders’ agreement signed by the government when it sold a 30 percent stake in Telkom to the Thintana Communications consortium placed both companies above South Africa’s laws, according to a US academic journal.

As the story goes, when Telkom was privatised, a shareholder agreement was created that allowed the new partners (notably SBC, or “Southwestern Bell Corporation”) to ignore any regulations that contravened their shareholder agreement. Parts of the story also indicate that SBC lawyers may have had a strong hand in writing the telecoms legislation itself – not exactly what I would call a disinterested, objective bystander!

Seems like, in the heady days of early democracy in South Africa, someone slipped up and let the litigious monster of SBC have more of a say in our country’s telecoms policy than 45 million people or so. I haven’t yet read the underlying academic paper referenced, but if even some of this is true it is an amazing revelation.

Some comments from Slashdot readers:

Well if you set up a monopoly it will be abused; you need very strong regulators to keep anything clean. Doesn’t matter if it’s a state-run monopoly (NHS, BT (before privatisation), British Rail etc.) or a granted monopoly.


You should blame the politicians who voted to allow the monopoly deal in the first place. Do you believe for one second that they did not know what they were doing?


A company with a “government granted” monopoly abused it. Shocking!

Incidentally, any true monopoly must be government granted. Without the government’s force to keep competition away, it’s merely a really effective competitor in an open market, like Wal-Mart.

A monopoly, whether government owned (e.g. the US Post Office) or government granted (e.g. AT&T and the Baby Bells in the US, before cellphones, cable company phone service, etc.), is not required to innovate and improve to retain customers, like a free-market business is. Because of this they will tend to deliver a lower quality product at a higher price.


This shows why private monopolies and back-room arrangements are bad. Public monopolies (public utilities, private utilities with public reporting requirements, etc.) are not shown to be bad by this case.

Liberal economic policies help in a lot of things, but utilities are one of the cases where it’s an infrastructure investment that still is most efficiently done cooperatively, particularly since you have to deal with public rights-of-way and all that. Services on top of the infrastructure should be liberalized, of course.

We really do need to get people to think beyond left and right more these days and more on what works best for the particular situation.

Deja vu and the myopia of our spirit

Amongst the stormy seas of markets recently (off the back of a credit and liquidity crunch apparently initiated by ongoing and deepening problems with sub-prime loans in the US and the related CDOs) bob the grey and bloated bodies of a clichéd failure.

Unwavering belief in trends, normal market conditions and trading rules developed out of a less than infinite history of prices have again resulted in burnt fingers and an abundance of flotsam and jetsam on the high seas of international markets. Computer and algorithm-driven “quant funds” have apparently taken a beating in the “unusual” market conditions of late. These systems are usually calibrated to a period of history, to identify profitable trading strategies based on complicated models, multiple factors and supposedly rigorous statistical analysis.

High volatility and correlation across markets took down LTCM (read When Genius Failed: The Rise and Fall of Long-Term Capital Management) before. So-called “programme trading” or “portfolio insurance” was blamed (not uncontroversially, and never fully substantiated) for the 1987 market crash. Portfolio insurance is still alive and well in the form of delta hedging; it turns out the old name had a rather negative taint to it. Don’t get the wrong idea, I’m not against delta hedging, or any specific trading strategy. I’m just not convinced that the results are all they’re cracked up to be. A system that works well some of the time and then fails spectacularly every now and then is not my idea of a good night’s sleep, or of a sustainable long-term strategy.

Goldman Sachs apparently still believes in the system. Then again, they have to say that, don’t they?

The developers of these systems would do well to look at the past from a human and historical view rather than just a limited slice of a time-series. It’s too easy to consider recent history as representative of the future. We all do it, but the trick is to maintain some scepticism and not get carried away by hope, greed and fear.

Measures, targets and Alchemy

When a measure becomes a target, it ceases to be a good measure.

I first heard this quote when dealing with performance measurement and remuneration structures for senior management. In that scenario, the danger is that you get exactly what you measure rather than the good behaviours related to or driven by the metrics chosen. The measure starts as Earnings Per Share (EPS) growth, which is generally a good thing. However, once management do the maths, they realise that reducing dividends to zero will boost EPS growth, even if it means pursuing projects with a return lower than shareholders’ cost of capital. The measure becomes a target; the measure ceases to be useful.
More on that some other time – it is an interesting point itself.
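
A stylised sketch of the trap (all numbers hypothetical): cut the dividend to zero, reinvest at a return below the cost of capital, and watch EPS “grow”:

```python
earnings, shares = 100.0, 100.0          # EPS starts at 1.00
cost_of_capital, project_return = 0.12, 0.06

for year in range(1, 4):
    retained = earnings                  # dividend cut to zero: retain everything
    earnings += retained * project_return
    print(f"year {year}: EPS = {earnings / shares:.3f}")

# EPS grows 6% a year even though each retained rand earns half the 12%
# shareholders require -- value is destroyed while the metric shines.
```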

Now on to the magic and mystery, and science and great skill, and analysis and mathematics and theories, and occasional quack and snake-oil salesman – Search Engine Optimisation. Isn’t there an argument to say that, if the aim of search engines with their great yet imperfect algorithms is to reward fresh, relevant and useful content for relevant search terms, then the best long-term strategy would be to continue to write and publish fresh, relevant and useful content? No quick wins, and with less of the alchemy involved SEO companies wouldn’t get as many customers, but why isn’t this the best advice for long-term traffic and search engine ranking? Rather than pursuing loopholes and quirks in any particular (temporary) search system, the measure should match something more fundamental – being a useful website. It is difficult for that approach to need changing when Google introduces “nofollow” links, omits duplicate stories or starts recognising your “invisible white-on-white text”.
Ok, before the backlash begins: there are practical lessons that can help with search engines. “Obvious” things like “search engines are not people and therefore will struggle to read text if it is really a picture embedded in a fancy Flash animation”. This is probably a bad example – I expect fresh, useful and relevant content appears less often in glitzy Flash clips.

So isn’t it time to take the difficult medicine, and build a brand and loyalty and readership and customers and repeat business and structure value and goodwill by actually earning it?

Urban legends and cocktail conversation

I’ve had an idea to analyse some of the many topics that come up in conversation time and time again. Chances of winning the lotto, FNB’s Million a Month account, randomness of the iPod shuffle to name a few. I’ll try to get hold of some interesting datasets and perform some basic analysis. My aim?

  1. To find out whether any of the often-claimed techniques might actually work
  2. And so discover what sort of randomness is hidden within these events
  3. And also see which of these datasets I can easily get my hands on. Any thoughts on my chances of getting FNB to chat to me about how they select winners?

Anybody have some data they’d like me to look at? Any more questions to add to this list? I’ve been away from blogging for a while, so let me know when your office Christmas party is so I can prepare your answers in time!

Models: there’s wrong and then there is Wrong

One of my favourite quotes is by George Box: All models are wrong, but some are useful. If you work with models and understand their place in the universe, you may already agree with this too. However, there is more than one type of wrong, and while it is not always possible to tell which is which when the milk has been spilt, the difference is important.

Models are always wrong in that they aren’t a perfect replica of the “real thing” being modelled. Some may argue exceptions and that some models do perfectly model the underlying reality – I haven’t been convinced yet. The fundamental point is: if the model is the same as reality, what is the need for the model?

The purpose of most models is to provide a useful way of understanding an extremely complex system. Extremely complex systems are difficult to understand in their entirety. Economists are regularly getting bashed for throwing dangerous phrases like ceteris paribus around in their commentary and conclusions. Why the insistence on holding all other things equal? Because their model is only complex enough to understand a few components of reality and so is wrong when it comes to those other areas. This is problematic when those other things turn out to be important and unequal. The technical term for these models is “not useful”. I’ll give George the credit for this term too.

Nobody said it was going to be easy…

To build a useful model, that is. Understanding the benefits of modelling specific components requires an in-depth, often intuitive feel for the problem at hand. A consultant brought in from the outside won’t necessarily have this unless the problem is a common or generic one. A good consultant will spend a significant amount of time listening and understanding the problem, the environment and the broader issues that will influence the real benefit drivers. Recognising the costs of modelling individual pieces of the problem is more of a technical matter. Knowledge of model-building approaches, computer systems and applications, statistical techniques and actuarial projections, database management and data mining, logical thought and system building all come into the process. Knowledge is required, but there’s often little substitute for experience too. Throw in some serious academic training and we can start to hit Excel.

But what about the other Wrong?

The wrong I’ve discussed so far is a pretty mild sort of wrong: intended, required, carefully thought through and ultimately useful. But what about Wrong in a simpler form? Wrong because a mistake was made? Wrong because a spreadsheet included errors? The real-world experience of model errors, small and very, very large, is compelling. Mistakes do happen. This post doesn’t deal with how to prevent or reduce errors (plan, document, independent review etc.), but rather with how one classifies an error once it has been discovered.

A recent example I experienced was where a mistake had been made. Unfortunately for everyone, it was one of the large, conspicuous and nasty types. The cause of the mistake could have been anything from incorrect proprietary models to incompetence, with lack of judgement, lack of review, weak control processes and lack of ownership of risk management protocols floating around somewhere in between. It is impossible to tell what was intended at the date the mistake was originally made, since there is no record of what was intended, why it was done, how the decision was made, what checks were performed and who gave the thumbs up to go ahead. Unobserved and unrecorded history makes for compelling spy stories and thrillers, but it is not so great for dry high school textbooks.

The little-known Wrong before other wrongs

Given the story above, the Wrong seems to be the lack of a clear objective stated at the outset, with clear understanding and documentation of this objective from the start. So often, the simple act of framing a problem correctly makes giant leaps towards its resolution. This is often the Wrong that precedes other wrongs:

  • Know what you are trying to do;
  • Make sure you understand why; and
  • Be clear and specific about describing it so that you and everyone else are on the same page.

Another of my favourite quotes is by George Bernard Shaw: “The single biggest problem in communication is the illusion that it has taken place”. That last bullet above isn’t as simple as it seems.

Reference for this post

Box, G.E.P. (1979). “Robustness in the strategy of scientific model building”, in Launer, R.L. and Wilkinson, G.N. (eds), Robustness in Statistics. New York: Academic Press.