I enjoy doing interviews – listening to other people’s questions gives me a different perspective compared to the questions I ask of myself (and of data) all the time… This question came from Manu Jeevan as part of his interview for the Big Data Made Simple blog. It made me think about something that has been troubling me for some time.
Data spam has always been present within web analytics reports. However, over the past year or so it has become a real PITA. By data spam, I am referring to spammers and scammers polluting your Google Analytics reports with their junk links in the hope you will say – “Oh, what is that? Let’s visit the referral site that is sending us traffic and see who they are”. Of course, the purpose is to drive traffic to their own site for ad impressions, or to push malware down your throat. Just like email spam, it’s annoying and a time-waster – but unlike email spam, it is not so in your face. That means referral spam lies below the radar, buried in your reports. The result is that its data-distorting effects can often go unnoticed.
In this post I show you how to eradicate it.
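The core of the approach is matching referring hostnames against a blacklist of known spam domains. Here is a minimal sketch in Python – the two domains listed are illustrative examples of the kind of referral spammers seen in this period, and a real blacklist would be much longer and need regular maintenance:

```python
import re

# Illustrative examples of referral-spam domains; a production
# blacklist would be longer and regularly updated.
SPAM_REFERRERS = [
    r"(^|\.)semalt\.com$",
    r"(^|\.)buttons-for-website\.com$",
]
spam_pattern = re.compile("|".join(SPAM_REFERRERS), re.IGNORECASE)

def is_spam_referrer(hostname: str) -> bool:
    """Return True if the referring hostname matches a known spam domain."""
    return bool(spam_pattern.search(hostname))

print(is_spam_referrer("semalt.com"))       # True
print(is_spam_referrer("blog.semalt.com"))  # True (subdomains too)
print(is_spam_referrer("www.example.com"))  # False
```

The same alternation pattern (`domain1\.com|domain2\.com|…`) can be pasted into a Google Analytics exclude filter on the Referral field, which is how the cleanup is typically applied in practice.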
If you follow me you know I have been writing my latest book these past 12 months. Happily that process is complete and the new book is launched – Successful Analytics: Gain Business Insights by Managing Google Analytics – Feb 2015 (both print and ebook versions). Note, if you have read my previous book series, this […]
I have tracked this issue since the beginning – plotting the percentage of organic traffic impacted by not provided. At first, only visitors logged into their Google account were affected, and hence tech-related websites (attracting a more tech-savvy audience) were disproportionately impacted. However, Google has since applied this to pretty much all visitors using Google organic search.
The position now: Not provided impacts 80-90% of organic searches…
Assuming you have no other “macro” drivers on your site – for example, no e-commerce facility, lead generation request form, store finder information, or advertisement click-throughs – how can you measure content engagement?
Here is my list of 10 tangible goals:
1. Show a snippet/summary first and then require a click to expand for more information
2. Use ratings e.g. rate this page/article, did this answer your question (y/n)?
A post to clarify a commonly misunderstood problem when setting permissions to enable the linking of your Google Analytics account with your AdWords account.
Quote: [Google Analytics] “Content Experiments sucks and I will never use it for any of my clients….run away”
The above snippet came from a post by Michael Whitaker (smart thinker, worth following) who asked for feedback on comments made at the Imagine 2013 conference earlier this year. My initial response was “hmmm – poor comments indeed. Whether you like a G product or not, to say that Google’s stats methods are unreliable, or reporting doesn’t work really is silly and lacks credibility.”
I am actually no big fan of the Google Analytics Content Experiments either, but I wish to put my views into context based on the following simple A/B test.
Here’s the problem… The default Return on Investment (ROI) displayed by Google Analytics is misleading for two reasons.
Issue 1: Google Analytics combines revenue from your transactions and goals. That can lead to double counting if, for example, an add-to-cart click is a monetised goal.
Issue 2: Google Analytics has no idea about what profit margins you operate under – how can it? Google therefore has to assume that *ALL* revenue generated by your visitors is 100% profit.
In this post I show you how to avoid these issues and calculate your AdWords REAL ROI. Its purpose is to take you to the next level – allowing you to move beyond adjusting bids simply based on conversions. Instead, you can go after the highest-value converters.
Figure 2 – How big a difference is the default ROI versus the REAL ROI?
As you can see in Figure 2, we are not tweaking the edges here!
Avinash Kaushik is a great measurement thought provoker (up there with the likes of Tufte imho), all-round nice guy and friend of mine. I always come away from his posts challenged and stimulated – quite a feat to achieve for your peers in a niche industry. The following post from him – Multi-Channel Attribution Modeling: The Good, Bad and Ugly Models – is a great reference read, though I disagree on a couple of items. Once you have digested Avinash’s thoughts, here is my input…
My thoughts on why the Guardian and the Washington Post are barking up the wrong tree with their constant side-stories. It is disappointing to see the story degrade in this way.
“Analysing this type of meta-data is exactly what companies such as Google, Yahoo, Twitter, Facebook etc. openly do.”
Seriously… what is the problem with collecting and analysing meta-data?