Am I wrong, or is the conversation about social media metrics stuck in the mud? We are confronted by a preponderance of data pointing to social media as something a lot of people use. To wit, on Friday the Pew Research Center’s Internet & American Life Project reported that “the percentage of U.S. adults ages 50 and older nearly doubled from 22% in April 2009 to 42% in May 2010… During the same timeframe, the percentage of adults from 18 to 29 years old using social media rose from 76% to 86%.”
Not too surprising. In fact, I would be bold enough to say that we can officially declare social media “used” by our various constituencies, in the broadest sense. Where we may very well be missing the boat is in the balance of output- and acquisition-based vs. outcome- and conversion-based metrics used to measure the effectiveness of social media efforts. Reflecting on the results of the CASE/mStoner/Slover Linett Strategies social media study, the three most frequently cited tools for measuring social media effectiveness were the number of friends who make a post, the sheer number of participants, and click-throughs. Lower down on the list were things like event participation, donations, and, the golden ticket in my opinion, surveys. How can we know whether we are changing opinions and attitudes and inspiring action without testing the waters?
Having a strategy, complete with goals and associated metrics, behind your social media program is essential. Further, tying that strategy into your overall communication, engagement and institutional initiatives is critical to the internal relevance of your program. If someone at your institution asks “How’s our social media program working?” we need not only the tools at hand to provide an informed answer, but also the analysis to back it up and a plan to repeat the parts that have been successful.
Pointing again to the social media survey, the biggest challenge we have is likely resource-based. Nearly every respondent noted that they are using in-house resources to measure the effectiveness of their programs. This means that, in all likelihood, staff are being asked to fit measurement into their already busy schedules (which was probably also the case when they were asked to take on social media responsibilities!) and have very little time to take a thoughtful approach to measurement. Sound familiar?