Sunday, June 03, 2007

Guh, enough about A/B testing already...

Sorry, I can't stay away from it. I've seen a few articles over the years, but this one is still one of my favourites, by a chap called John Quarto-von Tivadar. It's very technical, but to me it gets many points across quite clearly.

I like A/B and multivariate testing. I enjoy the fact it's brought a feeling of scientific methodology into developing websites.

The two main problems for me have always been that:

a) It only works for the average customer - you may be able to increase revenue through testing, but it always works on the basis that you have one type of customer (or customers all centred around one type of behaviour). You have to set the test up very cleverly to be able to understand why different customer segments behave in different ways.
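To make the "average customer" point concrete, here's a minimal sketch (all the data and lift figures are made up for illustration) of why an overall A/B comparison can hide, or even reverse, what's happening inside each segment - the variant can win with one group and lose with another while the blended number shrugs:

```python
import random

# Hypothetical click log: each record is (variant, segment, converted).
# Assumed behaviour: variant B lifts conversion for new visitors but
# depresses it for returning ones.
random.seed(42)
log = []
for _ in range(2000):
    segment = random.choice(["new", "returning"])
    variant = random.choice(["A", "B"])
    base = {"new": 0.05, "returning": 0.20}[segment]
    lift = 0.0
    if variant == "B":
        lift = 0.05 if segment == "new" else -0.05
    log.append((variant, segment, random.random() < base + lift))

def conversion_rate(records):
    """Fraction of records that converted; 0.0 for an empty slice."""
    if not records:
        return 0.0
    return sum(1 for _, _, converted in records if converted) / len(records)

# The overall number blends two opposite effects; the per-segment
# breakdown is where the actual story lives.
for variant in ("A", "B"):
    rows = [r for r in log if r[0] == variant]
    print(variant, "overall:", round(conversion_rate(rows), 3))
    for seg in ("new", "returning"):
        seg_rows = [r for r in rows if r[1] == seg]
        print("  ", seg, round(conversion_rate(seg_rows), 3))
```

Unless the test is designed to record and report by segment from the start, the blended rate is all you get.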

b) You can't polish a turd - if a design is bad, small incremental modifications are not going to make the great leaps you require. How do you get great leaps? Combinations of usability studies, common and business sense, understanding what your customer wants, and seeing what your customer does.

When I look at the way mainstream vendor technology has changed over the years, it's interesting to note the evolution in this area. It's worth highlighting that these testing techniques were snapped up by the traditional web vendors, those whose software only really analysed per page, not per customer (i.e. the data is generally very OLAP-y, structured only for reporting in certain dimensions to be drilled into, rather than on a complete customer-usage basis). There is a world of clever statistics and data mining that has been left behind: even though some vendors talk the data mining / analysis talk, most of their tools do not have data architectures that would support the smarter, customer-centric view needed to move on. Not only that, but apart from multivariate testing, most vendors have no clear way of deploying and making a difference with all those lovely metrics they provide you with, let alone any provided by fancier analytics. There is, for me, a clear gap in the building of applications in this area.

I think I may elaborate on this in the next post. One of the current hot topics I have some experience in is recommendation engines. These have previously been based on things that are either too black-boxy or lack a fundamental emphasis on feeding back the calculations they perform to help a user make their business better. Algorithms and applications are great, but without providing feedback on how they work, it's like entrusting your business to an idiot savant - pretty impressive at some things, but not something that copes with a lot of variety.

They are also usually pretty limited dimensionally - which products were bought, which products were seen - with not usually much concept of customer segment or customer actions. What they need is an architecture that combines the clever counting most current recommendation engines do with some insight into customer behaviour and business segments, and then a healthy understanding of the risk and impact of offering any one product over another. This is not easy, because many different things impact the final recommendation, and an application has to take on board the risk level the business is prepared to accept - but for me the most important thing is to feed back its success in as clear a manner as possible. To make things more complicated, there then has to be a layer on top of all this for the business rules required (to push stock, react to triggers not captured in models, react to new circumstances) - but again, the business has to see the impact of the rules they deploy.
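The layers described above can be sketched as a toy scorer (every table, product name, and weight here is a hypothetical illustration, not a real design): a counting signal, a segment-affinity adjustment, a risk weighting the business can tune, a rules layer on top, and crucially, a per-layer breakdown in the output so the business can see why each product scored the way it did:

```python
# All data is invented for illustration.
co_purchase = {"bookA": {"bookB": 30, "bookC": 5}}            # counting layer
segment_affinity = {"bargain": {"bookB": 0.8, "bookC": 1.5}}  # behaviour layer
margin_risk = {"bookB": 1.0, "bookC": 0.6}                    # risk/impact layer
push_stock = {"bookC": 2.0}                                   # business-rules layer

def recommend(product, segment, risk_appetite=1.0):
    """Score candidate products and report each layer's contribution,
    so the business can see *why* something was recommended."""
    scored = []
    for candidate, count in co_purchase.get(product, {}).items():
        affinity = segment_affinity.get(segment, {}).get(candidate, 1.0)
        risk = margin_risk.get(candidate, 1.0) ** risk_appetite
        rule = push_stock.get(candidate, 1.0)
        score = count * affinity * risk * rule
        scored.append((candidate, score,
                       {"count": count, "affinity": affinity,
                        "risk": risk, "rule": rule}))
    return sorted(scored, key=lambda item: item[1], reverse=True)

for candidate, score, breakdown in recommend("bookA", "bargain"):
    print(candidate, round(score, 1), breakdown)
```

Here bookB still wins (30 × 0.8 × 1.0 × 1.0 = 24) despite the stock-push rule doubling bookC's score (5 × 1.5 × 0.6 × 2.0 = 9) - and because the breakdown is returned alongside the score, the business can see exactly how much that rule moved the needle rather than trusting a black box.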

Actually, I'll stop there; this has already gone on too long. I'll pick this up another time, as I reckon I'm going a bit off topic for what I planned to be a quick post. Toodle-loo.
