Tuesday, August 13, 2013

The Technology Research 2x2 Matrix Fallacy

Vinay Bhagat, co-founder & CEO, TrustRadius

I’m new to the technology research space. Perhaps that makes me naïve. I like to think it affords me a fresh perspective. One of the things that’s always baffled me is the obsession with framing every product category in a 2x2 matrix. Gartner does this with their Magic Quadrant, with axes for completeness of vision and ability to execute. Per Wikipedia, these component scores lead to a vendor position in one of four quadrants:

- Leaders are said to score higher on both criteria: the ability to execute and completeness of vision. These are said to be typically larger, mature businesses.
- Challengers are said to score higher on the ability to execute and lower on the completeness of vision. Typically larger, settled businesses with what Gartner claims to be minimal future plans for that industry.
- Visionaries are said to score lower on the ability to execute and higher on the completeness of vision. Typically smaller companies.
- Niche players are said to score lower on both criteria: the ability to execute and completeness of vision. Typically new additions to the Magic Quadrant.

Forrester Research has the Wave, also a 2x2, which compares strategy against strength of current offering. While this approach is simple and perhaps an interesting way to assess investment opportunities, I have always struggled to see its value for a technology buyer:

- Frequently, products that are only vaguely related are analyzed in the same chart. A recent Forrester analysis placed GetSatisfaction, a provider of online customer community software, on the same chart as Bazaarvoice, a provider of software and services for ratings and reviews. The Magic Quadrant for Social CRM from Gartner put Bazaarvoice, Visible Technologies (a social media monitoring platform) and GetSatisfaction in the same chart.
- The methodology is biased toward large enterprise products. It’s nigh-on impossible to get into the “leader” category (i.e. top right) if you have a strong, focused application that excels at one specific purpose (e.g. social media analytics); if you are a smaller company with a great product well suited to small and medium enterprises; or if you have a product fine-tuned for a single vertical market.
- Most importantly, the approach fails to factor in use case – what’s right for one company and context may be completely wrong for another. For example, within social media management there are at least five distinct use cases - listening, publishing, campaigning, customer care, and curation - and different products are optimized for different ones. Another factor is the size and complexity of your enterprise: are you a Fortune 500 company, a mid-market enterprise, or an SMB? Another common use-case factor is which primary ERP or CRM you need to integrate with, as that can drive a different selection preference.
- Lastly, only a small subset of potential vendors is typically analyzed.

As “crowd-sourced” review sites emerge in the business technology field to contest the traditional analyst model, let’s remember not to make the same mistake of trying to jam all products in one broad category into a 2x2. Let’s instead take the time to learn the differences in potential use case, and provide mechanisms to help buyers truly find the best solutions to match their unique use case.


  1. Analyst rankings of vendors are completely flawed at this stage of the business and tech universe. The major vendors with deep pockets absolutely influence how solutions are ranked, even if they can't "pay for" a ranking directly. (I know this from experience.) The up-and-comers get ignored or relegated to some kind of unwarranted rookie status, even though their solution may be better than the big dog's, simply because they don't pay to play. This is fueled by vendors who thump their chests over inclusion and by others who say the analysis is flawed because they were neglected. In the end, the customers are the ones who get screwed by flawed analysis.

    Part of the problem is how F500 CIOs use analyst reports to justify purchase decisions.

    Independent review sites are also prone to issues. I've seen plenty of sites that pull together a terrible ranking of vendors simply to give their site some SEO juice. (My focus is mostly Marketing Automation & CRM)

    It would be great to see a Consumer Reports-style code of ethics: reviews conducted in detail and without bias, and any vendor that uses a review in advertising subject to being sued.

    Brian Hansford
    Tw. @RemarkMarketing

    1. Thanks for your comments Brian.

      It's a bold mission, but that's much of what we're trying to accomplish with TrustRadius. We ask reviewers to follow a structured survey (online interview) to rate a product on different dimensions and substantiate their opinions with comments. Every reviewer is authenticated. Every review is vetted by a researcher to ensure balance, clarity and completeness.

      As our review volume grows, we'll curate individual reviews into different synopses and sector overviews.

  2. Great post Vinay. I think this is particularly true in the world of open architecture and open source. The enterprise has the choice of integrating best-of-breed applications to form an end-to-end solution. The 2x2 was relevant when enterprises pursued "one neck to choke" investments and looked for that "completeness of vision". Now the enterprise is looking for strength in your domain, great execution, and openness for a low cost of integration and ownership.

    1. Excellent points Banafsheh. Thank you for taking the time to share them.

      Open source in particular is a domain very poorly covered by traditional analysts. You're absolutely right that completeness of vision emphasizes broad suite offerings that may play quite poorly with other solutions.
