Do audiences and critics really disagree about movies?


There has always been a divide between audiences and critics when it comes to movie reviews, but will 2022 mark the biggest divide yet between critics and fans? Thanks to the rise of review aggregators and audience scoring systems such as Rotten Tomatoes and IMDb, audience and critic opinions have been distilled into simple averages that compete directly with each other.


Audiences and critics have reacted very differently to many high-profile movies and shows in recent years, highlighting the differences in how the two groups evaluate what they watch. Review data from Rotten Tomatoes, Metacritic, and IMDb is commonly used to track the opinions of audiences and critics, and it suggests that the disagreement between the two is getting worse. But how reliable is that data, and is that really what the numbers mean?


RELATED: Rotten Tomatoes Audience Scores Influence Movie Performance More Than Critics

Clearly, there is a divide between critics and audiences, and arguments between the two groups on social media can get heated. But does the data really point to a widening gap, and is the data itself even reliable enough to be used this way? We take a look at how audiences and critics review movies to assess exactly what this data means and how it affects our perception of movie reviews.


Audience and critic movie review data say different things

First, it’s important to note that audience review data and critic review data are not really the same thing. The process for a critic to be approved by Rotten Tomatoes involves establishing a significant body of work published by reputable outlets. Each time a review is submitted, the critic must write a full review for their publication and provide specific scoring information to Rotten Tomatoes so it can be collated into the aggregate score. To submit an audience review, on the other hand, someone simply registers with an email address, clicks on a number of stars, and submits a rating.

That ease of submission means audience reviews on Rotten Tomatoes are often simple star scores with no written text to accurately reflect any nuanced evaluation. Setting aside whether the subjective opinions of audiences are any less valid than the subjective opinions of critics, the behavioral differences between the two groups, coupled with the very different submission processes, mean that audience review data and critic review data represent different things and are therefore not directly comparable.
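To see why the two aren’t interchangeable, consider how differently a percent-positive score and an average star rating can summarize the exact same set of opinions. The short Python sketch below is purely illustrative: the eight ratings and the 3.5-star cutoff for counting a review as “positive” are assumptions for demonstration, not real Rotten Tomatoes data.

```python
# Two aggregation methods applied to the same hypothetical ratings.
star_ratings = [5.0, 4.0, 3.5, 3.0, 3.0, 2.5, 2.0, 1.0]

# Critic-style score: the percentage of reviews counted as "positive".
POSITIVE_THRESHOLD = 3.5  # assumed cutoff, purely for illustration
percent_positive = 100 * sum(r >= POSITIVE_THRESHOLD for r in star_ratings) / len(star_ratings)

# Audience-style score: a plain average of the same star ratings.
average_stars = sum(star_ratings) / len(star_ratings)

print(f"Percent positive: {percent_positive:.0f}%")  # -> 38%
print(f"Average rating: {average_stars:.1f} / 5")    # -> 3.0 / 5
```

The same eight opinions read as a grim 38% on a percent-positive scale but a middling 3.0 out of 5 as an average, which is exactly why placing the two aggregate numbers side by side invites a comparison the underlying data cannot support.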

Audience review data is deeply flawed

Not only are the data sets behind audience reviews and Rotten Tomatoes critic reviews vastly different, but much of the audience review data is also deeply flawed. Critic review data has its own set of problems, but because submissions are accepted only from approved critics, because every score is derived from a written review for a publication, and because critics have a professional obligation to submit quality work, the data can safely be assumed to accurately reflect the reviewing behavior of approved critics. Of course, all reviews are subjective, so critic review data doesn’t say as much about the movies being reviewed as it does about the behavior of the reviewers themselves.

RELATED: Why Don’t Look Up’s Rotten Tomatoes Scores Are So Strange

Audience review data, by contrast, is very messy. Many audience members are no doubt astute and articulate reviewers; some may even be experts who simply haven’t been approved by Rotten Tomatoes or the aggregator in question. But that quickly becomes irrelevant once their reviews are posted into the same bucket as those of disgruntled fans, overly enthusiastic fans, trolls, and casual viewers. With no way to separate out the bad apples, the whole bucket is spoiled.
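As a rough illustration of the bad-apples problem, here is a toy Python example, with entirely invented numbers, showing how a small minority of bad-faith ratings can drag down an unmoderated average while leaving no trace in the final score.

```python
# 90 genuine viewers who liked the film vs. 10 review-bombing trolls.
genuine = [4.0] * 90        # honest ratings (invented for illustration)
review_bomb = [0.5] * 10    # minimum-score troll ratings

honest_avg = sum(genuine) / len(genuine)
bombed_avg = sum(genuine + review_bomb) / (len(genuine) + len(review_bomb))

print(f"Average without trolls: {honest_avg:.2f} / 5")   # -> 4.00 / 5
print(f"Average with 10% trolls: {bombed_avg:.2f} / 5")  # -> 3.65 / 5
```

A reader sees 3.65 rather than 4.00 with no way to tell whether the drop reflects genuine lukewarm opinion or a coordinated review bomb.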

Measuring audience data against itself over time to establish trends and shifts in whatever it does accurately represent is one thing. Measuring it against critic data, which is collected and validated through an entirely different process, is another thing altogether.

Thanks to the Internet, disagreements between critics and audiences are more visible than ever

While there is certainly disagreement between audiences and critics, the available data cannot impartially determine whether that split is getting better or worse. What we can say is that the split is more visible than ever. Thirty years ago, it would have been difficult to prove that audiences and critics disagreed at all, beyond weighing box office performance against reviews. If an audience member disagreed with a critic, the best way to express it to a wider audience was to write to the editor of the publication in question, with no guarantee the letter would actually be printed. Thanks to the internet, audiences can now publish reviews of their own, and with the rise of social media, it’s easier than ever for viewers to voice their disagreements directly to critics.

None of this proves that disagreements are increasing, only that, with critics no longer the only ones with access to public platforms, disagreements are more apparent than ever. The lack of clean audience data makes it impossible to determine whether audiences are actually becoming more positive or negative about movies, but they are certainly louder. While review bombs can’t be said to reflect viewers’ feelings about the quality of a movie or show, they can certainly be used as an indicator of viewers’ passion or investment.

RELATED: How Rotten Tomatoes Scores Compare For The Lord of the Rings And The Hobbit Movies

Giving audience reviews the same level of vetting as critic reviews would be an extremely cumbersome process, and it would also fundamentally change the nature of the data collected. If audience reviews had to pass the same rigor as critic reviews to be posted on platforms like Rotten Tomatoes, trolls and review bombs might be eliminated, but the result would effectively be another set of critic data, far from representative of the wider audience’s consensus.

Rather than comparing critic review data with audience review data, the best way to measure the divide between critics and audiences may be to weigh reviews against box office and viewership metrics. Critic and audience reviews might match perfectly, yet audience review data can’t reflect how audiences feel if they never show up to watch in the first place. Using viewership data in place of audience review data has its own inherent flaws, but its legitimacy is far less questionable, and it comes closer than any review score ever will to answering the most basic question: “Who wants to see this?”
