International Reporting Students,

Here's a so-called "News Quality" chart reflecting one keen observer's plotting of major U.S. and UK media. It's drawn attention online amid growing international concerns over the spread of misinformation and disinformation. I'm not endorsing any of the judgments, merely sharing it for your consideration of the methodology. 

The chart was produced by Denver, Colorado, attorney Vanessa Otero. You may visit her website here: http://www.allgeneralizationsarefalse.com/ . She plots news organizations on a two-way scale of sophistication and perceived political slant. Here's her explanation of the reasoning behind her chart and its methodology:

Why I Created It

I am frustrated by the reality that people don’t like to read. I LOVE to read and write. I have an English degree and a law degree, and I read and write every day for work. As a hobby, I read the great articles that are out there on the topic of media bias and accuracy. All of you who are reading this know that there is an abundance of great journalism out there—truly more than ever. I have the pleasure and privilege of reading a lot of this stuff, as do you.

But I know that the medium of a well-written article just doesn’t reach people who don’t read long things. In this post, I refer to such people as “non-readers” or “infrequent readers.” I am fully aware that the website MediaBiasFactCheck, and the organization Pew Research, and media research departments at many universities have large sets of empirical data available to review, and that those sources are more reputable than *just me*. But non/infrequent readers don’t read those sources. What do they read? Memes, which are often just two juxtaposed pictures with a pithy, terrible, one-sentence argument placed on top in large white letters. Tweets in which arguments are limited to 140 characters. They also prefer to watch videos, like YouTube “documentaries,” no matter how deceptively edited or spun.

Memes and tweets and YouTube videos spread quickly. They don’t take any effort to read, and people are convinced by them. They base their viewpoints upon them IN PLACE of basing their opinions upon long written pieces. To the extent that infrequent readers read, they prefer short articles that confirm their biases. Because they read very little, their comprehension skills and their ability to distinguish good writing from bad are low. This is true for infrequent readers across the political spectrum. All of this is extremely disturbing to me.

Many non/infrequent-readers prefer easily digestible, visual information. I wanted to take the landscape of news sources that I was highly familiar with and put it into an easily digestible, visual format. I wanted it to be easily shareable, and more substantive than a meme, but less substantive than an article. I cite the fact that it has been shared over 20,000 times on Facebook (that I know of) and viewed 3 million times on Imgur as evidence that I accomplished the goal of it being shareable. In contrast, maybe one one-millionth as many people will read this boring-ass article about my methodology behind it.

Many non/infrequent readers are quite bad at distinguishing between decent news sources and terrible news sources. I wanted to make this chart in the hopes that if non/infrequent readers saw it, they could use it to avoid trash. For those of you who can discern between the partisan leanings of The Economist and the Wall Street Journal, I have to say this chart was not primarily made for your benefit. You are already good at reading and distinguishing news sources.

The fact that the chart is shareable does not necessarily make it TRUE. Having heard feedback from all corners of the internet, I know that many people disagree with my placements of news sources upon it. However, even people who disagree with the placements find the taxonomy helpful, because it provides a baseline for a discussion about media sources, which are inherently difficult to classify. Often, verbal and written discussions about news sources are limited to descriptions of sources as “good” and “bad,”  and “biased” and “unbiased.” This chart allows for a few more dimensions to the conversation. However, as discussed below, there are many metrics on which to evaluate and classify media, and this chart doesn’t include them all.

In creating the chart, I had to make (mostly) subjective decisions regarding four particular aspects, explained below.

Choosing the Vertical Categories

First, I considered what makes a news source generally “high quality” or “low quality.” “Quality” itself is an incredibly subjective metric. I figured a good middle category to start with would be journalism that regularly meets recognized ethics standards of the profession, such as those set by the Society of Professional Journalists (http://www.spj.org/ethicscode.asp). Above and beyond that, I determined that factors that can make a particular article or broadcast “higher quality” include 1) a high level of detail, 2) the presence of analysis, and 3) a discussion of implications and/or complexity. So I created the categories of “Analytical” for sources that have 1) detail and 2) analysis, and “Complex” for sources that regularly have the discussions of 3) implications and/or complexity. To read the “Complex” and “Analytical” sources, you often have to be familiar with facts learned from sources ranked lower on the vertical axis. However, complexity is not always a good thing. Sometimes, real issues get obscured by complex writing.

Then, I considered what makes a news source “lower quality.” One of the factors is simplicity. Simplicity CAN lead to “low quality” if a deep issue is only covered at a very surface level. Simplicity is fine for stories like “a man robbed a liquor store,” but it’s often bad for, say, coverage of a complex bill being considered by your state legislature. There are sources that cover complex stories (e.g., Hillary e-mail stories, Trump foundation stories, and really, most political stories) in a VERY simple format, and I think that decreases civic literacy. Therefore, I created a below-average quality category called “Basic AF.” However, simplicity is not necessarily a bad thing. Sometimes you need “just the story.”

I have strong feelings about what factors really lower the quality of a source, and those are 1) sensationalism and 2) self-promotion in the form of “clickbait” headlines. Sources that engage in these actions are often geared toward attracting the attention of the non/infrequent reader. Sensationalism plays upon the worst emotions in us, such as fear and anger. Clickbait online articles have headlines that are rife with hyperbole. Then, the content of the articles themselves is loaded with adjectives and adverbs (e.g., “clearly,” “obviously,” “desperately,” “amazing,” “terrific”) that are hallmarks of poor persuasive writing. That category definitely went at the bottom.

Few people quibble with the vertical categories as I have selected them, but as stated above, “complex” is not necessarily good and “basic” is not necessarily bad. Therefore, the “journalistic quality” arrow does not correlate perfectly with the vertical categories, and as a result, I myself find it to be an imperfect way to rank journalistic quality. However, they correlate enough that the ranking still makes sense, minus a few outliers. In particular, USA Today and CNN get pretty harsh vertical rankings due to my categories. I think USA Today is a pretty high quality publication, even though most of its stories are basic.

Note that the vertical categories do not take into consideration the presence of “truth” in a source. For example, the Wall Street Journal near the top, and CNN near the bottom, both generally report on things that are “true.” The vertical categories also do not differentiate between whether sources are more fact or opinion based. For example, both The National Review (near the top) and The Blaze (at the bottom) write very opinionated pieces.

 

Choosing the Horizontal Categories

Sorting sources based on partisan bias was a bit more straightforward, but I wanted to differentiate between the level of partisan bias. The categories are fairly self-explanatory. They are also the most highly debatable. Good arguments can be made as to whether a source is minimally partisan, “skews” partisan, or is “hyper” partisan. The “Utter Garbage/Conspiracy Theories” category is for those sources that “report” things that are demonstrably false and for which no apology or retraction is issued in the wake of publishing such a false story. These stories may include, for example, how the Obamas’ children were stolen from another family (on the right), or that the government is purposely poisoning us and changing the weather with chemtrails from airplanes (on the left). For the most part, even the “hyper-partisan” sites try to base their stories on truth (e.g., Occupy Democrats, Red State), and are held to account if they publish something demonstrably false. Generally, the closer a source is to the middle on this chart, the more they are taken to task by their peers for publishing or reporting something false.

The categorization of a source in the hyper-partisan or even utter garbage category does not mean that every story published there is false. Many articles may just be very opinionated versions of the truth, or half-truths. And occasionally, sometimes a hyper-partisan or garbage site will stumble upon an actual scoop, due to their willingness to publish stories that haven’t been sourced or verified. Their classification in these categories is mainly because they are widely recognized by other journalists as regularly falling short of standard journalism ethics and practices.

Lots of people have a problem with the category of “mainstream/minimally partisan.” To clarify, the category is called “minimally partisan,” not “non-partisan.” Because journalists are human beings, they have opinions, and these opinions can make their way into their reporting. However, they also have professional standards and are held to account by their peers. Further, one can police one’s own biases to a certain extent if one is cognizant of them. The difference between “minimally partisan” and “skews partisan” is most easily drawn from the intent of the organization. If they mean to be objective, that counts as minimally partisan here. If they mean to present a progressive point of view (MSNBC), or mean to present a conservative point of view (FOX News), that’s at least skewing partisan.

Choosing the News Sources to Include

The sources I initially chose include those I read most often and those I am exposed to most often through aggregators or other sources. They also include sources which I have reason to believe many others are exposed to most often. For people who get their news on the internet, their default browser home page is often a starting point for where to find news, and these home pages are often news aggregators. Yahoo, MSN, and the Microsoft Windows Edge Browser home page all present particular news sources. Many people also get their news from Facebook and Twitter (an alarming 40%, according to one recent survey I have seen, ONLY get their news from Facebook). Another aggregator is the Apple News App. Between these sources, I selected some of the most popular, making sure to include some in each category, and an approximately equal number of left and right partisan sources.

Note that I did not quantitatively determine how many sites are out there on each partisan side. Some people object to this and believe there are far more trash websites on one side or the other. I do not have the time or resources to conduct such a quantitative measure, so I did not conduct one. Some believe that because this measure is omitted, I am promoting a false equivalency between the sides. This may be true, if there is truly one partisan side that has significantly more garbage news sources. However, I believe there is value in presenting partisan balance within the chart so that more people across the spectrum are willing to take it seriously.

Many sources are not on here. That’s because there are hundreds of them. I could add twice as many easily, but then it would lose its readability. Remember, some people don’t like to read. For many, the words on the chart were too much.

 

Factors for Placing the News Sources on the Chart

I could have taken a number of empirical and quantitative approaches, but as stated earlier, I did not set out to first conduct such a wide-ranging study and then publish the results thereof. I just wanted to visually present a concept that many of us already hold in our heads. I am not affiliated with any research organizations that do this kind of work. I was actually very surprised that this chart was so widely shared, because I am not an authority on this subject, and literally nothing I have ever written or drawn has attracted so much attention and scrutiny.

I am, however, experienced in defending my positions with facts and arguments, and I place value on the notion that assertions must be supported. I have outlined my support for these placements below.

One way to analyze sets of complex facts is the approach used in our courts. There are some legal questions which our courts have determined are best answered through a multi-factor test. These multi-factor tests are appropriate for factual scenarios where there are many considerations to weigh. For example, in trademark law, to determine whether consumers are likely to be confused by competing trademarks, there is a 13-factor test. In patent law, to determine a reasonable amount of royalties to be paid for patent infringement, there is a 15-factor test. As a lawyer, I am comfortable with this multi-factor test approach, so I created one and applied it.

Given the popularity of this chart, though, I think it would be valuable to take my taxonomy and multi-factor test for placement and use it as a starting point for an actual study. A good empirical, data-driven study would probably look like a large panel of well-regarded journalists, writers, academics, and media observers poring over voluminous amounts of writing, spanning tens of thousands of articles and at least thousands of individual news sources, with the help of research assistants. It would probably use software to count and categorize words used in these articles and require cross-checking for verification of facts. As noted below in my list of factors, some just require a yes or no answer, but some are truly measurable and quantifiable. For each of the factors that are quantifiable here, I note that in my own evaluation, I only quantified these factors very generally, based on my observation and reading of headlines and articles. That is, I did not precisely count everything that could be measured. A real study could precisely quantify each of these factors, which would result in more precise placement of news sources. However, even in a quantitative study, certain aspects to placement will still be subjective; namely, the weight given to a particular factor in determining the ultimate ranking. It appears that any high-quality study of media sources requires both subjective and objective aspects, given that it is an analysis of written and spoken words.
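To make the multi-factor idea concrete, here is a minimal sketch of how a future study might combine per-factor scores into a single placement score. The factor names, scores, and weights below are hypothetical illustrations, not Otero's actual values; as she notes, the weighting itself remains a subjective choice.

```python
# Hypothetical sketch of a weighted multi-factor scoring scheme, in the
# spirit of the multi-factor legal tests described above. All factor
# names, scores, and weights are invented for illustration only.

def weighted_score(scores, weights):
    """Combine per-factor scores (here, -1.0 = far left, +1.0 = far right)
    into a weighted average using subjectively chosen weights."""
    total_weight = sum(weights[f] for f in scores)
    return sum(scores[f] * weights[f] for f in scores) / total_weight

# Example: partisan-lean score for a hypothetical source.
scores = {
    "reputation_among_peers": -0.4,   # cf. factor 6 (subjective)
    "opinion_vs_reporting":   -0.2,   # cf. factor 8 (quantifiable)
    "hyperbole_in_headlines": -0.6,   # cf. factor 13 (quantifiable)
}
weights = {
    "reputation_among_peers": 2.0,    # the weights are themselves subjective
    "opinion_vs_reporting":   1.0,
    "hyperbole_in_headlines": 1.5,
}

print(round(weighted_score(scores, weights), 3))  # prints -0.422
```

A real study would replace the hand-assigned scores with measured quantities for the quantifiable factors, but the subjective weighting step would remain.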

Here are the factors I considered for each source, in no particular order. Below each factor is a note regarding what categories the factor weighted a source toward, and why. The notes also indicate whether a factor is quantifiable and could be more precisely measured in a future study for a future version of the chart.

1. Whether it exists in print

A “yes” answer weighted sources heavily toward “mainstream/minimal partisan bias” for several reasons. Print publication costs much more money, time, and effort to build than an internet one. Most print publications have significant numbers of staff members, including professional journalists. In order to have built a successful print publication, an organization will have had to spend time and effort building credibility among a significant audience. Reputation is necessary in order to have people buy newspapers for the purposes of getting the news. As a result of the above reasons, most print publications have longevity.

2. Whether it exists on TV, and if so, whether it existed before cable

A “yes” answer weighted sources heavily toward “mainstream/minimal partisan bias” for similar reasons as factor 1 (print). Cable lowered barriers to entry for TV broadcast news.

3. Whether it exists on radio, and if so, whether it existed before satellite radio

A “yes” answer weighted sources heavily toward “mainstream/minimal partisan bias” for similar reasons as factors 1 (print) and 2 (TV). Satellite radio lowered barriers to entry for radio broadcast news.

4. Length of time established

Greater longevity weighted sources somewhat toward “mainstream/minimal partisan bias.” Longevity allows for the establishment of reputation (even a changing one) over time. However, newer sources can still be reputable and high-quality.

5. Readership/Viewership

This is a quantifiable factor. Greater readership and viewership weighted heavily toward “mainstream/minimal partisan bias” and somewhat toward the middle category of “meets high standards.”

6. Reputation for a partisan point of view among other news sources

“Reputation” is a highly subjective term, just like “quality.” Reputation varies and is fuzzy, but no one denies that it exists. Reputation testimony is admissible in court as evidence, so I included a few specific kinds of reputation as valid factors here. Other news sources talk about each other. If a large, established newspaper calls an internet website “left-wing,” or “right-wing,” and if these same internet websites call the large, established newspaper “the mainstream media,” they are in agreement as to each other’s partisan point of view.

7. Whether the source actively differentiates between opinion and reporting pieces

A “yes” answer weighted sources heavily toward “mainstream/minimal partisan bias” and was a determinative factor in whether the source was categorized at least in part as “mainstream” or fell completely into “skews partisan.” For example, the Washington Post, New York Times, and Wall Street Journal all have labeled opinion sections, while MSNBC, FOX, and Vox do not.

8. Proportion of opinion pieces to reporting pieces

This measure is also quantifiable. Greater percentages of reporting pieces weighted heavily toward “mainstream” and somewhat toward the middle category of “meets high standards.”

9. Proportion of world news coverage to American political coverage

This measure is also quantifiable. Greater international news coverage weighted sources heavily upward. However, this measure is also subjective. I am of the opinion that if a source spends more time on world news, that indicates that it views itself as responsible for delivering all major news, rather than just focusing on ones that drive website traffic, like political gossip.

10. Repetition of same news stories

High repetition, in view of the medium, weighted sources heavily into the lowest vertical category for sensationalism. This was a main reason for CNN’s ranking toward sensationalism.

11. Reputation for a partisan point of view among my peers on social media

This factor sounds the most biased and subjective of all the factors, and it probably is. It is also typically the MAIN criterion upon which most people would rank these sources on the chart. There is some validity to using this measure; if your known conservative friend likes a source, it likely has a conservative point of view, and if your known liberal friend likes a source, it likely has a liberal point of view. There are obvious drawbacks to using this measure given the “echo chamber” nature of our social media feeds. If most of your friends have the same viewpoint as you, and you are all ideologically very partisan, then if they call a particular partisan source credible, that impairs your impartiality.

This factor was somewhat determinative of the placement of sources along the partisan spectrum, and hardly determinative of placement vertically.

12. Party affiliation of regular contributors/interviewees

This factor is also quantifiable. A balance of party affiliation weighted somewhat toward mainstream, and imbalance weighed to the partisan sides proportionally.

13. Presence of hyperbole in titles of articles

This factor is also quantifiable. The presence of hyperbole weighted heavily away from the center for partisanship, and weighted heavily downward for quality. I correlated more hyperbole with more partisanship and less quality.

14. Presence of adjectives in persuasive writing

This factor is also quantifiable. The presence of many adjectives weighted heavily away from the center for partisanship, and weighted heavily downward for quality. I correlated more adjectives with more partisanship and less quality.
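Factors 13 and 14 are among the most readily quantifiable. As a rough sketch of how a future study might measure them, one could count how often headlines contain loaded or hyperbolic words. The word list below is a tiny hypothetical sample for illustration, not a validated lexicon.

```python
import re

# Tiny hypothetical lexicon of loaded/hyperbolic words, for illustration only.
LOADED_WORDS = {"clearly", "obviously", "desperately", "amazing",
                "terrific", "shocking", "destroys", "epic"}

def loaded_word_rate(headlines):
    """Return the fraction of headlines containing at least one loaded word."""
    hits = 0
    for h in headlines:
        words = set(re.findall(r"[a-z']+", h.lower()))
        if words & LOADED_WORDS:
            hits += 1
    return hits / len(headlines) if headlines else 0.0

# Invented sample headlines; the first and third contain loaded words.
sample = [
    "Senator DESTROYS opponent in shocking debate moment",
    "State legislature passes budget bill",
    "Amazing new poll results released",
    "Court hears arguments in trademark case",
]
print(loaded_word_rate(sample))  # prints 0.5 (2 of 4 headlines)
```

A real study would need a validated lexicon and far larger samples, but the measurement itself is straightforward to automate.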

15. Quality of grammar, spelling, punctuation, capitalization, and font size

Mistakes in grammar, spelling, and punctuation weighted sources heavily downward for quality. Improper capitalization also weighted sources heavily downward for quality. Excessive capitalization (e.g., all caps) and excessive font size weighted heavily horizontally for partisanship and somewhat downward for quality. For example, the enormous, daily, all caps top headline on HuffPo pushed it well into the hyper-partisan category, but only down a little for quality.

16. Presence of an ideological reference or party affiliation in the title of the publication

Presence of reference or affiliation weighted sources heavily to the edges for partisanship and downward for quality (e.g., Occupy Democrats, Red State).

17. Effects of trying to actively control for my own known bias

I tried to evaluate my own bias and take it into account by first defining what my bias is and then making adjustments to correct for it. This exercise is difficult but crucial. It is imprecise and highly subjective. However, anyone who tries to make placements on this chart should engage in it.

I submit that a first way to evaluate your partisan bias is to categorize yourself on a number of political issues upon which there is consensus of what constitutes left, right, and center. Therefore, I started by evaluating my own views on what I think is “correct” and “true” on the issues of civil rights, taxes, business regulation, and the role of government in general. I am pretty adamant about civil rights and equality for all, especially for people of color, women, immigrants, and the LGBTQ community. I believe that places me in a somewhat left-of center category. On taxes and business regulation, I believe that neither “the government” nor “corporations” are all good or all bad. On the whole, I believe government does good things about 70-90% of the time and messes things up 10-30% of the time. I believe corporations do good things about 70-90% of the time and mess things up 10-30% of the time. As a result, I fall quite squarely in the middle, ideologically, on issues of taxes, business regulation, and the role of government.

In view of these evaluations, it would be fair to call me a left-leaning moderate.

To correct for this bias, I had to consider that there is a decent chance I am just wrong on what “the truth” or “the correct answer” is on one or more (or all) political issues. The likelihood that any one of us is completely right on all the issues is quite low. I have to acknowledge that there exists consensus about certain issues to the right of where I stand on them. That is, because approximately 46% of voters consider FOX News reputable and conservative principles acceptable, I cannot simply discount their likelihood of being right on the bet that I am right and they are wrong. As a result, I ranked Fox News higher on quality and less extreme on partisanship than I probably would have otherwise. I also ranked hyper-partisan left wing sites lower on the quality scale than I would have otherwise, and ranked complex/analytical conservative sources more centrally and higher than I would have otherwise.

Questions of bias, truth, and whether there is a center get philosophical and existential very quickly. All any of us can do is try to recognize and control for our biases.

Overall, this factor pushed conservative sources up and to the center, and liberal sources down and to the left in relation to where I might have ranked them purely on my ideological stances. It also pushed the sources into a relative balance that some argue does not exist.

A future study would benefit from having a somewhat equal number of left-leaning and right-leaning moderates arriving at a consensus to control for bias.

Factors Not Considered

I did not weigh the role of money from advertisers, ownership of sources, or corporate structure as factors in any meaningful way. I believe those factors are more closely related to the issue of media focus as opposed to media partisanship and journalistic quality. This chart was about partisanship and quality. It intersects with the topic of media focus only tangentially. I think the factors of money from advertisers, ownership of sources, and corporate structure can and do influence the topics that media sources focus upon.

Complaints about mainstream media focus are valid, but this is a whole complex topic in and of itself. Examples of these complaints include “Why did it take so long to get mainstream coverage of the Standing Rock/Dakota Access Pipeline protests?” “Why did it take so long to get mainstream coverage about Bernie Sanders?” “Why all the obsession with Hillary’s e-mails?” “Why the all-consuming coverage of all things Trump?” People point to money from advertisers, ownership of sources, and corporate structure as the root of these problems of misplaced focus, but I think it is more complex than that. Factors related to human psychology and attention, as well as modern technology, likely play a role. Therefore, I left out the factors of money and corporations because they are an altogether different inquiry, and not necessary to resolve now in order to rank sources according to partisanship and quality. I believe factors 1-17 are sufficient to meaningfully place news sources along the continuum of this particular chart.

Edits, Arguments, and Future Versions

Based on thoughtful and legitimate feedback, I would likely make some edits on placement in my original chart. These include moving The Economist to the left of the midline, splitting CNN into TV and Internet versions, and ranking the CNN Internet version in the middle circle while leaving the CNN TV version where it is. I would consider moving the Washington Post A LITTLE to the left, but I’d like to engage in a discussion about that.

I would be happy to have arguments about each of the listed factors above, and would entertain suggestions for other factors. I am also considering suggestions for future versions.

If others are inclined to take on the work of gathering data for the factors identified as quantifiable, I would be interested in supporting such work in some way.

Thanks for reading and thinking.

Last modified: Wednesday, 31 May 2017, 12:15