
In a nutshell: we can record what people do with their mouse / trackpad, and we can quantify that information. There are many tools that enable you to do this stuff, and some of them call the same things by different names. Some tools – like Crazyegg, for instance – call a click map a "heat map". So always check the tool's documentation to see what they call what and how they define things. Very important: like with A/B testing, you need enough sample size per page / screen before you can trust any results. A rough ballpark would be 2,000–3,000 pageviews per design screen. If the heat map is based on something like 34 users, do not trust any of it.

What is a heat map? It's a graphical representation of data where the individual values contained in a matrix are represented as colors. Red equals lots of action, and blue equals no action. When people say "heat map", they typically mean a hover map. It shows you the areas that people have hovered over with their mouse cursor – the idea is that people look where they hover, so it's kind of like poor man's eye tracking. The accuracy of this is always questionable. People might be looking at stuff they don't hover over, and might be hovering over stuff that gets very little attention – and hence the heat map is inaccurate. That's why I typically ignore these types of heat maps. I do look at the info if it's there – to see if it confirms my own observations / suspicions (or not) – but I don't put much weight on it.

There are also tools that algorithmically analyze your user interface and generate heat maps off of that. They take into account things like colors, contrast, and the size of elements. While I don't fully trust these either (they're not based on actual users), I don't think they're any less trustworthy than your hover maps. Using algorithmic tools is an especially good idea if you lack traffic. Check out Attention Insight, Feng GUI, and EyeQuant. But again, this is not magic – it's an algorithm, and it's not based on your actual users' behavior.

A click map is a visual representation of aggregated data showing where people click. You can also see where people click with Google Analytics – and I actually prefer that. If you go to Behavior -> Site Content -> All Pages and click on a URL, you can open up the Navigation Summary for any URL – where people came from, and where they went after. Provided that you have enhanced link attribution turned on and set up, the Google Analytics overlay is great (but some people prefer to see it on a click map type of visual).

[Screenshot: Google Analytics click map]

There is one useful bit here I like – you can see clicks on non-links. If there's an image or text that people think is a link, or want to be a link, they'll click on it. If you discover something (an image, a sentence, etc.) that people want to click on but that isn't a link, that's a signal worth acting on.

Some tools – like SessionCam, for instance – provide attention maps. These show which areas of the page have been viewed the most within the user's browser, with full consideration of horizontal and vertical scrolling activity. What makes this useful is that it takes into account different screen sizes and resolutions. Understanding attention can help you assess the effectiveness of the page design, especially the above-the-fold area. You can see if key pieces of information – both in terms of text and visuals – are in the area that's visible to almost all users. I consider this far more useful than any mouse movement or click heat map.

A scroll map shows you scroll depth – how far down people scroll. It's absolutely normal that the longer the page, the fewer people make it all the way down. Once you acknowledge this, it becomes easier to prioritize content: what's must-have and what's nice-to-have.
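If you want a rough, do-it-yourself read on scroll depth before paying for a dedicated tool, you can instrument it in a few lines of browser code. Here's a minimal TypeScript sketch; `trackEvent` is a hypothetical stand-in for whatever analytics call you actually use (a Google Analytics event, your own endpoint, etc.).

```typescript
// Minimal scroll-depth tracking sketch.
// `trackEvent` is a placeholder, not a real library function – swap in
// your own analytics call here.
function trackEvent(category: string, action: string, label: string): void {
  console.log(`[analytics] ${category} / ${action} / ${label}`);
}

const thresholds = [25, 50, 75, 100];
const reached = new Set<number>();

function currentScrollDepth(): number {
  const scrollTop = window.scrollY;
  const viewport = window.innerHeight;
  const pageHeight = document.documentElement.scrollHeight;
  // Percentage of the page the bottom of the viewport has passed.
  return Math.min(100, Math.round(((scrollTop + viewport) / pageHeight) * 100));
}

window.addEventListener(
  "scroll",
  () => {
    const depth = currentScrollDepth();
    for (const t of thresholds) {
      // Fire each threshold only once per pageview.
      if (depth >= t && !reached.has(t)) {
        reached.add(t);
        trackEvent("Scroll Depth", "reached", `${t}%`);
      }
    }
  },
  { passive: true }
);
```

Firing only at fixed thresholds (25/50/75/100%) keeps the event volume sane while still telling you how far down people actually get.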

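Likewise, the "clicks on non-links" idea from the Google Analytics discussion above is easy to approximate yourself: listen for clicks that don't land inside a link or button and record what was clicked. A rough sketch – here it only logs to the console; in practice you'd forward it to your own analytics endpoint.

```typescript
// Sketch: record clicks on elements that are NOT links or buttons,
// to spot things users expect to be clickable.
document.addEventListener("click", (event) => {
  const target = event.target as HTMLElement | null;
  if (!target) return;

  // Ignore clicks that happen inside a real link or button.
  if (target.closest("a, button, [role='button']")) return;

  // A rough description of what was clicked (tag, id, classes).
  const descriptor = [
    target.tagName.toLowerCase(),
    target.id ? `#${target.id}` : "",
    target.className ? `.${String(target.className).split(" ").join(".")}` : "",
  ].join("");

  console.log(`[non-link click] ${descriptor} on ${location.pathname}`);
});
```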