If you've run any kind of search on your name, your company or your brand, you are likely well acquainted with the good (compliments) and the bad (complaints) of online reputation monitoring (ORM). The ugly, simply put, is all the irrelevant spam floating around the Web (splogs, bloatware, flooding, phishing and malware scripts) that turns up in your vanity searches or feeds. I've covered my views on the "ugly" in past posts and comments, but I want to expand on the matter here.
For those using free search tools to perform ORM, the task of sifting through the "ugly" grows more tedious with the online visibility of the brands they safeguard. It is a tedious but often necessary task, one that most would agree helps keep track of any and all conversation that could make or break a company's brand, or threaten its risk and security protection programs.
As the focus moves more and more towards depth of sourcing and covering the entire online terrain (blogs and comments, forums/message boards, social networking sites, consumer/advocacy/review sites, gripe sites, video and images), filtering through all this content demands a considerable investment of time and effort.
With the understanding that comes from tracking relevant online noise, what role does the ugly play in ORM? More specifically, can we afford to overlook and cast aside the "ugly" content that lingers online? It is this author's opinion that ignoring such content could be even more harmful than burying one's head in the sand when a negative online post surfaces.
Bloatware and flooding scripts often fool even the most diligent ORM trackers by scraping relevant online content from mainstream media, blogs and other online sources, pulling people's names, company names, products and brands into their posts. Why manage it if it's only junk? It all comes back to search, and the very real potential that the "ugly" can turn up in a client's or potential client's search results.
One recommendation is to actually consider managing the "ugly" on behalf of the client. If you already manage brands and reputations for clients and provide summaries or reports of online activity, make sure you deactivate the links in those reports, especially for the more serious cases of malicious scripting. Prudent avoidance should not be taken to mean that you need not report or apprise your client of the existence of any of the "ugly" stuff.
Our RepuTrace™ tool offers an add-on service that stores and organizes spam content in a "quarantined" area of its online console, for two main reasons. The first is to manage the associations a posting makes with a company or brand: links to external sites, and the way the content, images or video connect with the mention of the company or brand, in order to determine whether its existence has any negative impact on reputation. The second is to surface trends and patterns over time, helping to track any online behaviour or activity that could give rise to a full-blown malware issue (e.g., phishing scripts or browser-hijacking scripts).
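To make those two reasons concrete, here is a minimal sketch of what quarantine-style bookkeeping could look like. This is purely illustrative: the class, field names and sample data are my own assumptions, not RepuTrace™ internals.

```python
from collections import Counter
from datetime import date

class Quarantine:
    """Hypothetical store for suspect posts: kept (not deleted) so their
    brand associations can be reviewed and patterns tracked over time."""

    def __init__(self):
        self.items = []

    def add(self, post_id, brand, external_links, day):
        self.items.append({"post": post_id, "brand": brand,
                           "links": external_links, "day": day})

    def associations(self, brand):
        """Reason 1: which external sites a brand is being linked to
        from quarantined posts."""
        links = Counter()
        for item in self.items:
            if item["brand"] == brand:
                links.update(item["links"])
        return links

    def weekly_trend(self, brand):
        """Reason 2: quarantined mentions per ISO week; a rising trend
        can flag a brewing phishing or malware campaign."""
        weeks = Counter()
        for item in self.items:
            if item["brand"] == brand:
                weeks[item["day"].isocalendar()[1]] += 1
        return weeks

# Illustrative data only
q = Quarantine()
q.add("p1", "AcmeCo", ["evil-pharma.example"], date(2009, 3, 2))
q.add("p2", "AcmeCo", ["evil-pharma.example", "warez.example"], date(2009, 3, 9))
print(q.associations("AcmeCo"))
print(q.weekly_trend("AcmeCo"))
```

The point of the sketch is simply that the "ugly" is retained and indexed rather than discarded, so both its associations and its velocity stay visible.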
The Evolution of Human Analysis and Review
Nathan Gilliat's original post and visual have inspired me to revisit the topic of Human vs. Machine analysis. As acceptance of reputation monitoring services keeps pouring in, the bar seems to be set higher in terms of delivering on vendor promises. While depth of sourcing has become a central focus in recent years, more vendors have also started to include some level of human review in their offerings. The goal: to attain the highest possible relevance, precision and quality of monitoring.
While algorithms and "smart" filtering technology can work well in filtering out splogs, bloatware and spam, it is still computer-generated "machine monitoring." To put it simply, it will always lack the sophistication and precision to weed out irrelevant results, interpret nuances in language (e.g., sarcasm) and, more importantly, keep up with the cat-and-mouse game played by overzealous SEO players using sometimes questionable search-positioning tactics.
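A toy example shows why surface-level machine filtering struggles with the scraped content described earlier. The scoring heuristic, keyword list and thresholds below are entirely made up for illustration; they stand in for whatever signals a real filter might use.

```python
import re

# Illustrative spam lexicon; a real filter would use far richer signals.
SPAM_WORDS = {"viagra", "casino", "free-download", "warez"}

def spam_score(text):
    """Naive 'machine monitoring': score a post on keyword density
    plus a small penalty per outbound link."""
    words = re.findall(r"[a-z\-]+", text.lower())
    if not words:
        return 0.0
    spam_hits = sum(1 for w in words if w in SPAM_WORDS)
    link_count = text.lower().count("http")
    return spam_hits / len(words) + 0.05 * link_count

# Obvious junk is easy to catch...
print(spam_score("casino casino free-download http://x http://y"))

# ...but a splog that scrapes a legitimate article about your brand is
# mostly relevant prose, so surface signals let it slip through.
print(spam_score("AcmeCo announced strong quarterly results, said the CEO."))
```

The second post scores as clean even though it may live on a splog, which is exactly the gap that human review is meant to close.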
The resulting task is to monitor one's reputation effectively while prudently avoiding the bloatware, flooding scripts, splogs, spam and other junk that could pose a malware issue for the client. While the above monitoring meter conveys my own biases and experiences with ORM tools as both a maker and end user, it ought to illustrate the gradual evolution from machine monitoring to services that offer both machine and human analysis.
The graphic also illustrates a numerical equivalent matching each type of monitoring: machine monitoring scores between 1-3 for precision, monitoring that combines machine and human analysis scores 3-8, and monitoring services that include some layer of risk interpretation score 8-10.
It is this author's opinion that, on a scale of 1-10 (1 = least precise, 10 = most precise), the right combination of machine monitoring and human review can strike the right balance in managing one's online reputation. While the scale is meant to illustrate the technological limitations when brand and reputation monitoring is placed squarely in the lap of machine analysis, the numerical score attached to each monitoring range may also serve as a confidence gauge for the precision, relevance and quality of data by analysis type.
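The ranges from the graphic can be written down as a simple lookup table. The labels are my own shorthand for the three analysis types; only the numeric ranges come from the text.

```python
# Confidence gauge matching the 1-10 precision scale in the graphic:
# machine-only ~1-3, machine + human review ~3-8, and monitoring with
# a risk-interpretation layer ~8-10.
PRECISION_RANGES = {
    "machine": (1, 3),
    "machine_plus_human": (3, 8),
    "machine_human_risk": (8, 10),
}

def confidence_range(analysis_type):
    """Return the (low, high) precision band for an analysis type."""
    return PRECISION_RANGES[analysis_type]

low, high = confidence_range("machine_plus_human")
print(f"precision {low}-{high} on a 1-10 scale")
```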
Some services have started to include a layer of risk interpretation in their monitoring services. While assigning a risk score and value to determine the seriousness of an online post may seem the next gradual step in the evolution of monitoring analysis, it isn't a given that the vendor will offer "corrective" measures or suggestions as part of their service offering.
Both our RepuTrace™ and RepuTrack™ services include something we call "risk alerts," which work as an additional online reputation safeguard for clients who might knowingly or unknowingly be targets of an online activist cause. More recently, we have seen this level of risk occurring under the flag of "green" activism, specifically in the categories of carbon or coal finance. Combining machine monitoring and human review, posts of interest are flagged by machine analysis, reviewed by a human and cross-referenced against specific client parameters, at which point the client is contacted as required (usually by email and/or phone).
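The flag, review, cross-reference and notify steps above can be sketched as a simple pipeline. Every function name, field and sample post here is hypothetical; this is not the RepuTrace™/RepuTrack™ implementation, just the shape of the flow.

```python
def machine_flag(posts, watch_terms):
    """Stage 1: machine analysis flags posts mentioning watch terms."""
    return [p for p in posts if any(t in p["text"].lower() for t in watch_terms)]

def human_review(flagged):
    """Stage 2: an analyst confirms relevance (stubbed as a field here)."""
    return [p for p in flagged if p.get("analyst_confirmed")]

def cross_reference(reviewed, client_params):
    """Stage 3: match confirmed posts against client-specific risk topics."""
    topics = client_params["risk_topics"]
    return [p for p in reviewed if any(t in p["text"].lower() for t in topics)]

def notify(client_params, matches):
    """Stage 4: contact the client (e.g. by email and/or phone)."""
    return [f"alert {client_params['contact']}: {m['text'][:40]}" for m in matches]

# Illustrative data only
posts = [
    {"text": "Activists target AcmeCo over coal finance", "analyst_confirmed": True},
    {"text": "AcmeCo opens new office", "analyst_confirmed": True},
    {"text": "Random spam about AcmeCo", "analyst_confirmed": False},
]
params = {"contact": "client@example.com", "risk_topics": ["coal finance", "carbon"]}

alerts = notify(params,
                cross_reference(human_review(machine_flag(posts, ["acmeco"])),
                                params))
print(alerts)
```

Note how each stage narrows the set: three mentions become one alert, because only one post survives both human confirmation and the client's risk-topic parameters.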
While our services have incorporated this level of "risk interpretation" for several years, we anticipate that more vendors will include some level of risk scoring and valuation with their monitoring services, or as part of an overall reputation monitoring and management strategy. The question remains whether other vendors will achieve this objective with human review, or take a shot at doing it solely with machine analysis.
RepuTrace™ is the All-in-One Corporate Intelligence Tool which can also be used to assist in the areas of brand and reputation monitoring, investigations, competitive intelligence gathering, market intelligence analysis and research or even to protect against counterfeit brand issues.
To schedule a free online-demonstration of RepuTrace™, click here.
Below are some links to product or company mentions in mainstream media:
Protecting the firm’s name on the web | Law Times
Safeguard Your Brand Reputation Online | Inc. Technology
They’ve got their eyes on you—are your ears burning? | ComputerWorld Canada
Blog author threatens to go "on a killing spree" | CNW Group
Blog author threatens to go "on a killing spree" | PR Newswire
Tips on Safeguarding Your Online Reputation | WSJ Startup Journal