Using Data Science to Predict TV Viewer Behavior and Formulate a Hit TV Show

Media companies and TV executives are beginning to recognize the importance of data science for understanding viewership. By combining unstructured data (e.g., text, video) with traditional data sources, data scientists are using machine learning to understand how production decisions influence ratings and which decisions have the greatest effect.

In a recent engagement with a global media conglomerate, the Pivotal Data Labs team investigated what makes viewers tune in to, and tune out of, certain TV shows. The challenge at the outset was that large and diverse datasets for broadcast shows aren't usually available, since most of the data is still collected manually. As a result, we had to be creative about which additional datasets could be used to feed a predictive model.

Previous attempts, which used manually collected metadata, had reached a ceiling in predictive performance. Although these models were sophisticated, they were all fed with features derived from structured data. To improve upon them, we explored augmenting the dataset with unstructured sources such as video, transcripts, audio, and social media. Ultimately, we chose to use transcript data in our modeling efforts because it was by far the most readily accessible. In this way, we could measurably improve upon the existing models and deliver actionable insights that TV show creators could put to use immediately.

In this blog, we will describe our approach, the tools that we used, and some lessons learned.

Background: Adding More Data (Science) to Traditional Ratings

Historically, media companies have been limited in their understanding of viewers, relying on third-party data sources, such as Nielsen TV ratings, to monitor and evaluate audience size and composition. Nielsen collects data from both diaries and television-connected devices to determine viewing habits across demographics like region, economic class, race, gender, and age. However, for a television show producer, this data doesn't provide specific feedback about how to improve an episode or an individual broadcast.

Unlike in the digital world, using data to drive decisions is not typical in the television community and is considered rather innovative. The only businesses doing something similar are the newer media companies such as Netflix and Amazon, which have been mining detailed viewing data to determine which shows are likely to be successful, House of Cards with Kevin Spacey being a well-known example. These online-focused companies analyze metadata about as many as thirty million plays each day to figure out what will be a hit, what viewers like, and what keeps them watching.

Goals: Bringing Pivotal Data Science into the Picture

To help our client improve their understanding of viewer behavior, we delivered an end-to-end solution: a framework to ingest and transform the unstructured transcripts, predictive models, and a means of interacting with the data and the models.

While many commercial solutions are specialized and proprietary, we were able to build an open solution on the Pivotal platform that lays the groundwork for future advanced analytics work. Furthermore, the solution was designed to scale both in terms of number of shows (i.e., every show in the network) and number of broadcasts (i.e., every episode that has ever aired).

The project deliverables included:

  • A text analytics framework—ingesting, transforming and modeling transcript data in a scalable way
  • In-database machine learning models—using predictive toolsets, like MADlib or Python libraries via PL/Python
  • An application—incorporating the data and models into a lightweight application to explore the data and provide what-if simulations
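To make the in-database scoring piece concrete, here is a minimal sketch of the kind of function that could be registered in HAWQ via PL/Python. The function body is ordinary Python; the names, coefficients, and SQL call shown are illustrative assumptions, not the actual model from the engagement.

```python
def predict_rating(intercept, coefficients, feature_values):
    # Body of a hypothetical PL/Python UDF: apply a fitted linear model
    # (e.g., coefficients exported from MADlib) to one broadcast's features.
    return intercept + sum(c * x for c, x in zip(coefficients, feature_values))

# Registered in-database, this could be invoked with something like:
#   SELECT predict_rating(0.5, ARRAY[0.1, -0.2], ARRAY[topic_1, topic_2])
#   FROM broadcasts;
score = predict_rating(0.5, [0.1, -0.2], [3.0, 1.0])  # 0.5 + 0.3 - 0.2 = 0.6
```

Keeping the scoring logic next to the data this way avoids moving large feature tables out of the database just to apply a model.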

Data, Platform, and Approach

Data

Three data sources were made available for the project: Nielsen ratings data, manually collected metadata, and show transcripts.

Each data source differed in quality and format. The Nielsen data was provided in report form and required little work to load into the Pivotal platform for analysis. The manually collected metadata was also in report form; as with most manually collected data, it contained a great deal of entry errors and is generally unreliable for modeling purposes. Lastly, the show transcripts were in plain text and held little to no consistent structure from one broadcast to the next.

Platform

The final model was deployed on a Pivotal Hadoop/HAWQ instance exposed to Pivotal Cloud Foundry as a service for production use. A prototype Node.js application was pushed to the same Cloud Foundry instance, exposing the analytical insights to end users and allowing them to interact with the model results.

Approach

As with most data science projects, and text analytics in particular, the vast majority of effort is spent cleaning and manipulating data. Part of this effort was building a framework that could take the inconsistently formatted transcripts and prepare them for any number of advanced NLP techniques and algorithms. For this project, we used a topic model to produce features for the overall model. The same framework could also have been used to derive additional features based on tone, language complexity, and more.

The text framework included the following steps:

  • Data Clean Up: Matching up spoken text with speakers in non-standardized text
  • In-database Text Transformation: Parsing, Tokenization, Lemmatization, and TF-IDF
  • Corpus Reduction: Defining the dictionary of interest
  • Text Modeling: LDA modeling to identify the underlying topics within the transcripts

The output from the text framework was combined with other features and fed into a series of supervised models built for each viewer population.
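The transformation steps above can be sketched in plain Python. This is a toy, illustrative version only: in the actual engagement, tokenization, lemmatization, and TF-IDF ran in-database at scale, and topic modeling used LDA. The two-line corpus and function names here are hypothetical.

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Parse and tokenize: lowercase, keep word characters, drop punctuation.
    return re.findall(r"[a-z']+", text.lower())

def tf_idf(corpus):
    # corpus: list of raw transcript strings -> list of {term: tf-idf} dicts.
    docs = [tokenize(doc) for doc in corpus]
    n_docs = len(docs)
    # Document frequency over the dictionary of interest.
    df = Counter(term for doc in docs for term in set(doc))
    scores = []
    for doc in docs:
        tf = Counter(doc)
        scores.append({
            term: (count / len(doc)) * math.log(n_docs / df[term])
            for term, count in tf.items()
        })
    return scores

transcripts = [
    "HOST: Welcome back to the show.",
    "GUEST: Thanks for having me on the show.",
]
weights = tf_idf(transcripts)
# "welcome" appears in only one transcript, so it gets a positive weight;
# "show" appears in both, so its idf (and thus its tf-idf) is zero.
```

The resulting per-broadcast term weights play the role of the TF-IDF step in the pipeline; an LDA model would then reduce them to a handful of topic features.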

Overall Model

The modeling phase began by narrowing down the tens of thousands of features created to those found to be most predictive of viewership metrics. Using MADlib's parallelized implementation of linear regression, a regression was run for each feature to estimate its individual influence on ratings. The most relevant features were then filtered further for multicollinearity. Several algorithms were then compared to identify the most performant model for the data, with elastic net regression yielding the highest predictive accuracy.
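The per-feature screening step can be illustrated with a pure-Python sketch. In the engagement this ran in-database with MADlib's parallelized linear regression over tens of thousands of features; the feature names, toy data, and R² threshold below are assumptions for illustration.

```python
def univariate_r2(xs, ys):
    # R^2 of a simple linear regression of ys on a single feature xs;
    # in the univariate case this equals the squared Pearson correlation.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    if vx == 0 or vy == 0:
        return 0.0  # constant feature or constant target: no signal
    return cov * cov / (vx * vy)

def screen_features(features, ratings, threshold=0.5):
    # features: {name: [value per broadcast]} -> names worth keeping.
    return {name for name, xs in features.items()
            if univariate_r2(xs, ratings) >= threshold}

ratings = [1.0, 2.0, 3.0, 4.0]
features = {
    "speaker_turns": [2.0, 4.0, 6.0, 8.0],   # perfectly correlated with ratings
    "noise": [5.0, 5.0, 5.0, 5.0],           # constant, no predictive power
}
kept = screen_features(features, ratings)    # {"speaker_turns"}
```

The surviving features would then be checked pairwise for multicollinearity before fitting the final elastic net model.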

The Insights and Results

It's a commonly held belief that show structure (and specifically commercial breaks) has the greatest measurable effect on viewership. However, we found it's really a mix of a show's people, content, and format. These elements also vary with the demographics of the audience.

Unexpectedly important variables included:

  • Speaker characteristics
  • Number of people shown on screen at a time
  • Broadcast topics

Though we included thousands of features based on the manually collected metadata, the great majority of them dropped out of the final model because they had no predictive power. Instead, the most relevant variables were derived from the transcript data. This analysis delivered a distinct perspective on the drivers of show viewership and popularity changes over time, bringing new and significant value to decision making.

In approximately eight days, the project was delivered, demonstrating the potential of unstructured data and the extensibility of the Pivotal platform. Equipped with training, the platform, and the code via knowledge transfer, the business has taken the next steps toward becoming a data-driven enterprise: building an application that leverages a broad range of data and data science to deliver actionable insights directly to TV broadcast decision makers.

Source: This blog is part of a series with joint work performed by Jarrod Vawdrey and Noelle Sio.

Megha is an Editor/Writer/Social Media lover of all that is 'quaint'; her favourite things include dogs, cats, butter popcorn, and Northanger Abbey. She has previously worked for News18 and CNN-IBN.
