Should you license your old YouTube uploads? Here’s how your current analytics can help you decide.

By Matt Gielen | 04/05/2023

At Electric Monster and our agency business Little Monster, we are regularly asked to provide viewership projections for our YouTube properties and those of our clients.

More and more, we are being approached to sell and/or license our video libraries or are asked by our clients if they should do the same due to the increasing presence of companies like Jellysmack, Spotter, and others looking to license or acquire content libraries on YouTube. To evaluate these deals, it is important to be able to forecast the future monthly revenue of these libraries.

Over time, we have developed a sophisticated model that aims to predict the performance of library content, new content, RPM, and expected revenue.




In this writeup, we will describe how the model is built and why, and how you can use this approach to estimate the future performance of your videos so that you may make an informed decision as to whether or not you should license or sell your video library.

Making a model that grows with creators

Historically, our models used a combination of the growth and performance of “New Content” (i.e. content released in the current month) and “Library Content” (i.e. content released in every month prior).

At a high level, modeling performance in this way makes sense. Once a video is posted, it becomes part of the library. Most (non-Kids) videos get their largest viewership in the first month. Videos posted in a given month generally account for a large percentage of the overall views for that month. Ergo, we separated these two periods out (the first month, then everything thereafter).

These models were serviceable: they gave a decent general sense of performance but offered little insight into what was actually happening under the surface.

Where these models fail, though, is in modeling library growth or decay. If we project increasing views on New Content, reason holds that Library Content viewership would also increase, though not at the same rate, since new, better-performing content was becoming library content after its first month. If videos do better in their first month, they tend to get more views in their second month than previous videos, and so on. The same is true in the other direction.

While this can ultimately prove out, it lacks meaningful precision, because it lumps together content that is one, seven, 30, 60, or 90 days old with content that is sometimes many years old.

This form of modeling also does a poor job of capturing what is happening in each individual video’s viewership. In most cases, a video generates a significant amount of viewership in its first days on the platform and then slowly decays over time. Yes, there will be pops and resurgences here and there, but the average video decays at a fairly predictable rate. The former model type cannot accurately represent that phenomenon or predict how content will perform over time.

It is especially insufficient when you are being offered cash to license or sell your videos.

Given the lack of precision of this type of model, we needed a new model foundation that would give us more accurate predictions for the future of the library as well as better and clearer goals for new content performance.

We rebuilt our models based on a few factors:

  • Monthly cohorts
  • Library decay rate
  • Seasonality
  • New Content performance
  • New Content growth rate

These five factors form the foundation of our model.

There are two things worth noting before moving on from our foundation:

  • Kids’ content (for ages zero to seven)
  • The impact of new content on library

Kids’ content can perform very differently from non-kids’ content for two primary reasons. First, the audience behavior of kids is significantly different from that of older audiences. Second, the YouTube Kids platform behaves wildly differently from the core product. Ultimately, however, the model still generally works for channels and content with this designation. What we have found is that instead of a “decay rate,” made-for-kids content tends to have an acceleration rate, where monthly viewership on content can increase over time.

The second aspect that our model has generally not taken into consideration is the impact of new content on your library. Without more data and more channels to assess, we do not have a clear picture of how the rest of the library would be affected if New Content began to greatly increase or decrease in performance. Anecdotally, we have observed correlations in the same direction, but given the complexity, we have not built this into our model.

How it works

Our model starts by dividing videos into cohorts based on the month in which they were released (Monthly Cohorts). We chose to group videos in this manner because it is far easier to manually pull this data and because it tends to balance high-performing videos versus underperforming videos.

Next, we create a Library Decay Rate by looking at the rate at which each Monthly Cohort decays over time. For example, if a Cohort generated 1,000,000 views in its first month and then generated 600,000 in its second month, it would have a 40% first-month decay rate. We then average this across all the months for which we have data. In the example below, we have data going back nearly five years. When broken out, it looks like this:
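The decay-rate calculation described above can be sketched in a few lines of Python. The view counts here are illustrative, not real channel data:

```python
# Hypothetical monthly view totals for one cohort (month 1, month 2, ...).
cohort_views = [1_000_000, 600_000, 450_000, 380_000]

def decay_rates(views):
    """Month-over-month decay: the fractional drop from each month to the next."""
    return [(prev - cur) / prev for prev, cur in zip(views, views[1:])]

rates = decay_rates(cohort_views)
print(rates[0])  # 0.4 -> the 40% first-month decay rate from the example above
```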

This table shows the raw view data of our cohorts, summed and averaged based on the number of data points we have (the “N” column). From there, we average the decline over a span of twelve months to smooth out the curve, so that month-to-month variance doesn’t wreak havoc on the projections. For example, if you look at months 16 to 17, you’ll see that performance swings from -13% to +1%. This swing would cause outsized peaks and valleys in our model down the line, so we smooth the average decay over a 12-month period and use that rate in the out months.
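One simple way to implement that smoothing is a trailing 12-month average of the observed decay rates, which is then applied to every projected month beyond the data. A minimal sketch with made-up rates:

```python
def smoothed_rate(rates, window=12):
    """Average the last `window` month-over-month decay rates to damp out
    single-month swings (e.g. -13% one month, +1% the next)."""
    tail = rates[-window:]
    return sum(tail) / len(tail)

# Illustrative noisy decay rates for the trailing twelve months.
noisy = [0.13, -0.01, 0.09, 0.11, 0.08, 0.10, 0.07, 0.12, 0.06, 0.09, 0.10, 0.08]
avg_decay = smoothed_rate(noisy)
# `avg_decay` becomes the assumed decay rate for all projected out-months.
```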

Ultimately, this data looks like this:

What the above table shows is that by month 10 a cohort would be getting 91% of the views it got in month 9. As a slope, it looks like this:


This analysis gives the foundation for predicting the viewership on each Monthly Cohort, as well as the fall-off rate of new monthly cohorts.
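Projecting forward from this curve amounts to multiplying each month by a retention factor (one minus the decay rate): a 9% decay means month 10 gets roughly 91% of month 9’s views. A small sketch with hypothetical numbers:

```python
def project(last_month_views, retention, months):
    """Roll a cohort's views forward by a constant month-over-month retention.
    E.g. retention=0.91 means each month keeps 91% of the prior month's views."""
    out = []
    views = last_month_views
    for _ in range(months):
        views *= retention
        out.append(round(views))
    return out

print(project(100_000, 0.91, 3))  # [91000, 82810, 75357]
```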

Theoretically, a person could stop here and get a solid estimate as to how many views their library will generate over time. However, stopping here would be short-sighted as Seasonality can have a significant impact on library content.

Seasonality is essentially how we incorporate the month-to-month variance of different times of the year into the model. This is essential to get a more precise model because there are many macro factors at play on viewership. These factors include things like school schedules, advertising load, etc.

For example, we can take a look at one of our more educationally focused channels which sees a decrease on average of about 9% over the summer months:

To determine seasonality, we take a given month’s views and divide that by that year’s average monthly views. For example, if in August we generated 860,000 views and for the year we generated on average 1,000,000 monthly views, we look at August as having a seasonality factor of 86% (or -14%). We then average these percentages across the years that we have data for to determine our seasonality factor. For the channel in this example, this is what the curve would look like:

Once we have our seasonality multiplier, we incorporate it into the model by multiplying it by the expected viewership in a given month for each monthly cohort in the library.
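The seasonality math described above (a month’s views divided by that year’s average monthly views, averaged across years, then applied as a multiplier) can be sketched like this; the figures are illustrative only:

```python
def seasonality_factors(views_by_year):
    """views_by_year: {year: [monthly view totals]}. Returns one factor per
    calendar month, averaged across all years of data."""
    per_year = []
    for months in views_by_year.values():
        yearly_avg = sum(months) / len(months)
        per_year.append([m / yearly_avg for m in months])
    n_months = len(per_year[0])
    n_years = len(per_year)
    return [sum(y[i] for y in per_year) / n_years for i in range(n_months)]

# A two-month toy year: average is 100, so the factors are 0.86 and 1.14,
# mirroring the August example above (860k views vs. a 1M monthly average).
factors = seasonality_factors({2022: [86, 114]})

# Applying the multiplier: expected cohort views in a month are scaled
# by that month's seasonality factor.
adjusted = 500_000 * factors[0]  # 500,000 expected views * 0.86
```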

Seasonality also plays a significant role in our projections of New Content, which we define as the videos posted in a given future month. Each video released is put into the Monthly Cohort for the month in which it is released. To estimate the performance of New Content, we look at the average views per new video. This is an example of the table we use for that calculation:

Given this analysis, we can now begin to project how future monthly cohorts will perform based on the volume of videos we plan to release, and the growth (or decline) rate of views on New Content.
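That projection reduces to compounding an assumed growth rate on views per new video and multiplying by planned upload volume. A sketch with hypothetical inputs:

```python
def project_new_content(views_per_video, videos_per_month, growth, months):
    """First-month views for future cohorts: compound `growth` month over month
    on average views per new video, times planned uploads per month."""
    projections = []
    v = views_per_video
    for _ in range(months):
        v *= 1 + growth
        projections.append(round(v * videos_per_month))
    return projections

# E.g. 100k views per video today, 10 uploads/month, 10% monthly growth.
print(project_new_content(100_000, 10, 0.10, 3))
```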

In terms of actually making the projections though, new content performance can vary wildly. In the example above, the month-to-month growth or decline sometimes swings by as much as 100 percentage points. On average in 2022, this channel grew by 14.5% month over month (not factoring Seasonality).

This is where the context of understanding where a channel is in its lifecycle comes into play.

This particular channel has grown tremendously over two years, but the programming on the channel radically changed in that same period. So while we would love to project 500%+ year-over-year growth, that is not realistic, especially as we can see the rate of growth slowing:

For our modeling purposes, the average growth rate over the last 6 months made the most sense.

In reality, over the four months since this model was built, average first-month views on new content on the channel have grown by an average of 11% month over month:

On a similar note, this model predicted 10.0mm views on Library Content in October, and 8.89mm on Library Content in November. In reality, we hit 10.4 and 8.7, a difference of 4% and 2% respectively.

The final element of our model is RPM projections. RPM is revenue per mille, or revenue per 1,000 views. To create our projections for RPM we currently look at the RPM for the channel as a whole. This creates a blended number that we utilize for future revenue predictions.

For the purposes of determining expected revenue from Library Content versus New Content, we create two separate RPM predictions, because older videos often differ from newer content in ways that affect RPM. Both RPM numbers, however, are modeled in the same way.

First, we aggregate the data over the time period we have. Next we average the RPM for each year. We then utilize the average number to create a seasonality factor. Finally, we average out the seasonality impact across the data set we have. This averaging across years has the impact of smoothing out what can be very lumpy growth or decay (especially over the last few years), while simultaneously factoring in the macro seasonality impacts of increased spending on advertising throughout the year. Ultimately, we create a table that looks like this:

We then repeat the process for the following years by applying the annual average growth rate to our baseline, then layering in the seasonality to give us monthly RPM predictions. Dividing our views by 1,000, and then multiplying by the RPM gives us our revenue.
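The final revenue step is simple arithmetic: divide projected views by 1,000 and multiply by the projected RPM. A sketch with illustrative numbers:

```python
def revenue(views, rpm):
    """RPM is revenue per 1,000 views, so revenue = views / 1000 * RPM."""
    return views / 1000 * rpm

# E.g. 10mm projected library views at a blended $4.50 RPM.
print(revenue(10_000_000, 4.50))  # 45000.0
```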

In aggregate, when predicting expected revenue from Library Content, our model creates a summary that looks like this:

From this, you will have a solid estimate of how a channel’s Library Content is likely to perform over the following months and years. You can compare this estimate to what you’re being offered and see how it matches up.

Whether or not to license or sell your channel’s content is a bigger decision than just how the offered price stacks up against the estimate from this model. You must weigh many factors, including your business goals, tax implications, the macroeconomic environment, and more.

Tracking New Content and making predictions

There are three outcomes from this modeling that we particularly enjoy at Electric Monster and Little Monster.

  • Visualizing month over month performance of new content
    • Our model (when updated on a monthly basis) gives us great insight into how well we did from an execution standpoint on a month to month basis. For example, this is a chart of the average views per new video, and the mo/mo growth rate for one of our channels:
  • Visualizing monthly cohort annual performance
    • Similarly, we really enjoy looking at how our content is performing over a year, and how that corresponds to views in both the short and long term. The first graph shows how each month has impacted viewership over time, and the second graph shows how this year’s viewership is stacking up against 2021:

  • Visualizing the long tail
    • The final useful chart shows us how annual cohorts stack up to the past and gives a good general sense of how this year’s performance will translate into viewership for years to come:

While not perfect, this tool does a very good job of projecting viewership on both new and library content. When we add in the RPM component, we are able to produce a much more precise projection of expected revenue over a given time period.

In turn, this data can help a channel owner determine many things, including whether or not a particular deal is good for them, how much they need to grow new content viewership to reach their goals, and what they could expect from their channel in the future.

Matt Gielen is the founder and CEO of Little Monster Media Co., a video agency specializing in production and audience development on YouTube. Founded in the summer of 2016 Little Monster has already helped dozens of clients big and small grow their audiences including MovieClips, Condé Nast, Viacom, CBSi, and NBCu. Formerly, Matt was Vice President of Programming and Audience Development at Frederator Networks where he oversaw the building of the audiences for Cartoon Hangover, Channel Frederator and the Channel Frederator Network.

You can read more of Matt’s articles on Tubefilter here, and follow Matt on Twitter.
