A Junior Quant's Guide to Time-Series Momentum
Same signals, dumber markets.
"You’re not predicting the future. You’re betting yesterday keeps happening."
Not long ago, we set out to understand how quantitative trading strategies really work — and since then, we’ve been sticking to a pretty reliable formula:
“Start with a broad strategy class, form a clearly-explainable market view, find the optimal expression, lever up, and go home.”
As we covered in our Junior Quant’s Guide to Ultra-Leveraged Trading, we realized that generating alpha isn’t a matter of rote-copying a methodology; it’s about taking the high-level concept as baseline inspiration and then applying a proprietary angle to make it yours.
We made a decent bit of progress with our relative value approach, but naturally, we wanted more.
So, with the blueprint for strategy discovery in mind, our first step was to find a big-picture strategy class, then once we understood the core mechanics, we’d get our hands dirty and see what we could do differently.
After some searching, we landed on a crowded, but interesting corner of the market — momentum.
Now, we’ve briefly touched on momentum in the past, but we’ve upgraded our tools and knowledge substantially since then. So, with fresh eyes, let’s see just how much we can pull out of this corner.
But before we try to outsmart it, let’s see how this thing actually works.
The Basics of Momentum
Core idea: Stocks with strong momentum continue to have strong momentum — the best performers continue to do well, the worst performers continue to do poorly.
Typical application: The most cited momentum paper (Jegadeesh and Titman, 1993) suggests a basic 12-1 monthly return sort: take the returns of the last 12 months, excluding the most recent one, and use that as your ranking metric.
You’d buy the top decile of performers and short the bottom decile, creating a market-neutral momentum portfolio.
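As a rough sketch, the 12-1 ranking metric can be computed from a table of month-end closes. The `monthly_prices` frame and tickers below are hypothetical:

```python
import pandas as pd

def momentum_12_1(monthly_prices: pd.DataFrame) -> pd.Series:
    """Trailing 12-month return, excluding the most recent month."""
    # With 13 month-end closes, price[-13] -> price[-2] spans months t-12..t-1.
    return monthly_prices.iloc[-2] / monthly_prices.iloc[-13] - 1

# Hypothetical two-ticker example: a steady riser vs. a steady decliner.
prices = pd.DataFrame({
    "AAA": [100 * 1.02 ** i for i in range(13)],
    "BBB": [100 * 0.99 ** i for i in range(13)],
})
scores = momentum_12_1(prices).sort_values(ascending=False)
```

Sorting by this score descending gives the ranking; the top of the sort is the long candidate pool, the bottom the short pool.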
Simple enough: just buy the winners and short the losers — easy, right?
Well, as we’ll demonstrate shortly, the high-level concept is always the simplest part. Actually implementing it effectively is where things get tricky.
To see how, let’s first address the problem of creating our initial investment universe.
An investment “universe” is just the set of stocks that we’re choosing from. While we only want to have exposure to a few stocks, there are thousands in the available universe.
This introduces the first real problem: how do you narrow down the total universe?
Well, one approach is to start with just those that have liquid, weekly option expirations.
Options availability is directly tied to trader interest and liquidity — if a stock has a weekly option expiration cycle, it’s likely a stock that “matters”. This simple heuristic helps weed out penny stocks, illiquid names, and anything the market isn’t seriously watching.
However, this kind of data isn’t just sitting around in pre-packaged apps. If we want to narrow it down properly, we’ll need to do it ourselves.
Getting Our Hands Dirty
We’re going to be working with historical universes, so it’s crucial that we avoid survivorship bias.
A rookie mistake is to pull the stocks that are available right now and then use that as the universe for historical testing.
This is problematic because it ignores stocks that were delisted or became illiquid, and instead focuses only on those that survived to the present day. If a stock made it to today, odds are it performed decently—but you wouldn’t have known that at the time.
So, first things first, we head over to the polygon.io API and we pull all of the stocks that were actively traded on a given day:
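For reference, a minimal pull might look like the sketch below, using polygon.io’s grouped-daily aggregates endpoint. The API key and date are placeholders, and the response field names (`T`, `vw`, `v`) are our reading of the current docs, so double-check the schema before relying on it:

```python
import json
import urllib.request

import pandas as pd

POLYGON_KEY = "YOUR_API_KEY"  # placeholder: your own polygon.io key

def parse_grouped_daily(payload: dict) -> pd.DataFrame:
    """Flatten a grouped-daily response into ticker/vwap/volume rows."""
    rows = [{"ticker": r["T"], "vwap": r.get("vw"), "volume": r["v"]}
            for r in payload.get("results", [])]
    return pd.DataFrame(rows)

def active_universe(date: str) -> pd.DataFrame:
    """All U.S. stocks that traded on `date` (YYYY-MM-DD)."""
    url = ("https://api.polygon.io/v2/aggs/grouped/locale/us/market/stocks/"
           f"{date}?adjusted=true&apiKey={POLYGON_KEY}")
    with urllib.request.urlopen(url, timeout=30) as resp:
        return parse_grouped_daily(json.load(resp))
```

One call returns the entire market for that session, which is exactly what a point-in-time universe needs.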
Naturally, this will yield thousands of tickers, the majority of which are unsuitable for us.
So, one thing we can do is sort by those with the highest notional volume that day.
Notional volume simply refers to the $ amount of shares traded, generally calculated as the volume-weighted average price (VWAP) multiplied by the volume of the day.
10 shares traded at ~$10 a share gives a notional volume of $100 (10 × $10).
We don’t want to go too complex early on, so we’ll just apply a simple heuristic of needing at least $50m worth of shares traded on that day.
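Sketched in code, assuming a DataFrame with `vwap` and `volume` columns for the day (the sample rows are made up):

```python
import pandas as pd

MIN_NOTIONAL = 50_000_000  # at least $50m traded that day

def liquid_names(day: pd.DataFrame) -> pd.DataFrame:
    """Keep tickers whose notional volume (VWAP x shares) clears $50m."""
    out = day.assign(notional=day["vwap"] * day["volume"])
    return out[out["notional"] >= MIN_NOTIONAL].sort_values(
        "notional", ascending=False)

# Hypothetical day: one liquid name, one that misses the cutoff.
day = pd.DataFrame({
    "ticker": ["BIG", "TINY"],
    "vwap": [150.0, 5.0],
    "volume": [2_000_000, 100_000],  # $300m vs. $0.5m notional
})
survivors = liquid_names(day)
```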
This alone narrows down our list from ~5000 to less than 800, bringing some of the more recognizable names to the forefront:
From this smaller selection, we then want to query what the options chain for those stocks looked like on that day.
We want to see at least six consecutive weekly option listings; this demonstrates that these are stocks that “mattered” at that point in time. If the consecutive option listings are non-weekly (e.g., Jan 17 → Mar 17 → Jun 17), we skip the ticker.
Although pinging the API for each stock to check the dates sounds intensive, the full pass only takes ~5 minutes on a moderately fast machine.
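The check itself is simple once you have a stock’s listed expiration dates (fed in directly here; in practice they would come from polygon.io’s options reference data). This sketch requires six expirations spaced roughly a week apart, with some slack for holiday-shifted listings:

```python
from datetime import date, timedelta

def has_weekly_chain(expirations: list, required: int = 6) -> bool:
    """True if the nearest `required` expirations are ~1 week apart."""
    exps = sorted(set(expirations))
    if len(exps) < required:
        return False
    gaps = [(b - a).days for a, b in zip(exps, exps[1:required])]
    # Allow 5-8 day gaps so holiday-shifted expirations still count.
    return all(5 <= g <= 8 for g in gaps)

weekly = [date(2024, 1, 5) + timedelta(weeks=i) for i in range(6)]
monthly = [date(2024, 1, 19), date(2024, 2, 16), date(2024, 3, 15),
           date(2024, 4, 19), date(2024, 5, 17), date(2024, 6, 21)]
```

The 5-8 day tolerance is an assumption on our part; tighten or loosen it to taste.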
Once we’ve cycled through our selection for the given date, we arrive at our final universe:
Starting with a messy list of ~5,000 stocks, we narrowed it down by a factor of 14, leaving us with some of the most liquid and actively traded names as they existed at that historical moment: no lookahead bias, no survivorship bias.
So, now that we have a smaller, workable universe, we need a methodology to determine which stocks to long and which to short.
If you’re already a paid subscriber, truly — thank you. ❤️ Your support powers better data, better tools, and better research.
If you’ve been enjoying the work and want to support what we’re building, consider becoming a paid subscriber. It means more than you think and helps us keep doing it right. 🫡
A Sharpe Dose of Beta
First, we need a fair way to determine whether a stock was a strong or weak performer.
If we just use average historical monthly returns, things can get skewed. For example, if a stock doubled in one month but stayed completely flat for the other 11, the average monthly return would be about 8.3% (100 ÷ 12), which is misleading.
So instead, what if we simply used the Sharpe ratio?
The Sharpe ratio is just a measure of risk-adjusted returns: if the stock went up in a straight line with minimal volatility, the Sharpe ratio will be high; if it went up with major volatility, it’ll be lower:
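As a minimal sketch (monthly returns, risk-free rate assumed zero, annualized by √12; the return series are made up):

```python
import numpy as np

def sharpe_ratio(monthly_returns) -> float:
    """Annualized Sharpe ratio from monthly returns (risk-free rate = 0)."""
    r = np.asarray(monthly_returns, dtype=float)
    return float(np.sqrt(12) * r.mean() / r.std(ddof=1))

# A near-straight-line riser vs. a choppy name.
smooth = sharpe_ratio([0.02] * 11 + [0.021])
choppy = sharpe_ratio([0.30, -0.20, 0.25, -0.15, 0.20, -0.10,
                       0.15, -0.05, 0.10, 0.00, 0.05, -0.02])
```

Both series end up positive overall, but the smooth one scores dramatically higher, which is exactly the behavior described above.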
Now, while the Sharpe ratio is great for describing the overall “shape” of the returns, it doesn’t give us much insight beyond that. A stock that went from $10 to $11 in a year with no volatility would have a very high Sharpe ratio, but it wouldn’t exactly be the most suitable pick if we’re chasing outperformance.
We need a way to walk the line between chasing low volatility and high absolute returns.
To do that, we introduce another metric — beta.
Beta essentially just represents the sensitivity of a stock relative to a benchmark like the S&P 500: a beta of 3 implies that for every 1% the S&P 500 moves, the stock tends to move about 3% in the same direction.
So, using the same lookback period (the last 12 months, excluding the most recent), we calculate the beta of each stock against the S&P 500. Once we’ve got Sharpe and beta in hand, we can start to build a high-level view of our dataset:
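The beta estimate itself can be sketched as covariance over variance; the benchmark series below is made up for illustration:

```python
import numpy as np

def beta(stock_returns, benchmark_returns) -> float:
    """Slope of the stock's monthly returns against the benchmark's."""
    s = np.asarray(stock_returns, dtype=float)
    b = np.asarray(benchmark_returns, dtype=float)
    return float(np.cov(s, b, ddof=1)[0, 1] / np.var(b, ddof=1))

spx = np.array([0.01, -0.02, 0.03, 0.015, -0.01, 0.02,
                0.005, -0.015, 0.025, 0.01, -0.005, 0.02])
levered = 3.0 * spx  # hypothetical stock that moves 3x the index
```

A stock that mechanically moves 3x the index recovers a beta of exactly 3, and the index against itself recovers 1, which is a handy sanity check.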
We create a metric, mom_score (momentum score), that’s essentially the beta multiplied by the Sharpe ratio.
This way, we get a single, sortable metric that’s clearly interpretable:
High mom score = high beta, high Sharpe → strong positive returns
Low mom score = high beta, deeply negative Sharpe → strong negative returns
Using this metric, we can now sort the available stocks into top and bottom deciles. These two deciles form the bedrock of our sample momentum strategy:
Each month, we pull the available universe and use historical data to calculate the momentum score for each stock.
We purchase the 10 stocks with the highest scores and simultaneously short the 10 with the lowest scores.
We hold this long/short basket for one month.
Repeat.
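The selection step above can be sketched as follows, assuming a stats frame with one row per ticker and precomputed `sharpe` and `beta` columns (the tickers and values here are synthetic):

```python
import numpy as np
import pandas as pd

def select_baskets(stats: pd.DataFrame, n: int = 10):
    """Rank by mom_score = beta * Sharpe; return (longs, shorts) tickers."""
    scored = stats.assign(mom_score=stats["beta"] * stats["sharpe"])
    scored = scored.sort_values("mom_score", ascending=False)
    return list(scored.index[:n]), list(scored.index[-n:])

# Hypothetical 25-name universe with made-up stats.
stats = pd.DataFrame(
    {"beta": 1.0, "sharpe": np.arange(25, dtype=float) - 12},
    index=[f"TKR{i}" for i in range(25)],
)
longs, shorts = select_baskets(stats)
```

Run this once per month on the refreshed universe and hold the two baskets until the next rebalance.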
To get a clearer picture of what this looks like in practice, here’s a sample long-short basket from our dataset:
Top Decile
Bottom Decile
As demonstrated, this sample portfolio performed as expected, with the net average return of the long basket outperforming that of the short basket over the same period.
Now, let’s zoom out and see how this approach performed in aggregate:
A very crude implementation — but so far, not bad.
Now, as always, we have to address certain realities — there are a few things that make this not as easy as it seems:
The bottom basket, while predictable, is often difficult to short due to high costs-to-borrow.
The option market also prices this in, so if you wanted to buy puts, you would be paying a substantial volatility premium.
Sometimes, the borrow rate can exceed 100% on an annualized basis, so if you made a short bet, it would also be a bet on volatility, since you’re betting that the downside move within the next month will be fast enough to exceed the borrowing cost of the period.
Momentum is crowded.
These top/bottom decile stocks are from a very small universe (there’s only so many liquid stocks) — when times are stable, the heavy flows exacerbate the momentum and the returns are great. However, when times are bad, the unwinding can be ugly:
These long-short portfolios are run with leverage, so when there’s a systemic shock, the goal is to de-lever across the board — buying back the short basket, selling off the long basket.
This means that the perceived market-neutral benefits can quickly disappear as you start losing money in both directions.
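To put rough numbers on the borrow-cost drag mentioned above, assume the fee is a simple annualized rate accrued over roughly 21 trading days of holding:

```python
# Rough breakeven math on a hard-to-borrow short.
annual_borrow = 1.00      # 100% annualized borrow rate
trading_days_held = 21    # roughly one month
monthly_cost = annual_borrow * trading_days_held / 252

# The stock must fall more than ~8.3% within the month just for the
# short to break even against the borrow fee alone.
print(f"monthly borrow cost: {monthly_cost:.1%}")
```

Actual borrow fees accrue on calendar-day conventions that vary by broker, so treat this as an order-of-magnitude sketch rather than a precise cost model.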
Now, when looking at these problems, a clever quant may start to see some opportunity:
In theory, if you wanted to run a tail-risk strategy, you’d want to buy cheap convexity (exponential payoff structures):
Upside risk on the short basket will be cheap (via calls) — it’s unlikely to pay off without a systemic shock, but these are the stocks that will be going up sharply in the short term given a market crash.
Downside risk on the long basket will be cheap (puts) — implied volatility might still be a bit rich since these stocks are known for high realized volatility, but nevertheless, these will go down violently in the event of a systemic shock.
Of course, that only works if you can time the systemic event. But even if not, these decile picks tend to have some of the most predictable short-term movements in risk-off regimes.
Also, to be fair, this experiment was done with a relatively large basket — long 10, short 10.
The stocks in these baskets often experience extreme, explosive moves that get diluted with diversification, so if you increase the concentration to, say, just the top/bottom 3 of each, you can substantially increase your volatility.
At that point, you could probably get away with a naive approach of flipping cheap weekly options on each stock as opposed to just going the long-short portfolio route — after all, you already “know” the direction and that these stocks will be volatile.
Sharpe Today, S**tcoins Tomorrow
So far, we’ve shown how this strategy works in the world of stocks—but not long ago, we came across a very interesting crypto product: perpetual futures.
To quickly recap, perpetual futures are basically the rawest way of getting exposure to crypto — you can lever up to 100x, long or short, on a wide variety of tokens.
Which brings us to the question:
If momentum in stocks is too crowded, can we replicate the same approach, but with smaller cryptocurrencies?
Well, the answer is yes.
In fact, we already are.
The nuances of profitably implementing this in crypto are even more peculiar, so we’re going to have a dedicated post for how we do this in crypto — what our alternative data sources are, where we’re going for leverage, and what our results have been.
This will be posted on Quant Galore Crypto, a new publication solely focused on putting you in the loop on high-signal crypto research.
We’ll keep this publication centered on traditional quant finance—but for crypto alpha you won’t find anywhere else, that’s where to go.
Follow us on Medium to stay in the loop.
While you’re at it, feel free to join our Discord community and follow us on X/Twitter (Our X account is a bit inactive, but we’re booting it back up shortly 😉).
Final Thoughts
This is another example of how building a strategy tends to follow a familiar, reliable process: Start with a high-level idea, add your own nuances, find the right instruments to express your view, lever up — and go home.
We don’t want to mislead you or hype you up, but you were just shown a real, systematic way of getting exposure to the U.S. stocks that are often the top over/under-performers. These are often the stocks you hear about making absurd retail-driven moves, but now you know how to find them beforehand.
You can treat this as a cool new detail you know about the space, but if you really wanted to, you could make a lot of money from being able to get exposure to this momentum factor with such precision.
In the spirit of The Quant’s Playbook, we’re attaching the code and instructions so you can replicate every piece of this, from the backtest to the production version you can run in real time, now and going forward.
A major shout out to polygon.io for the data that powers all of this (we receive no compensation). You can use code QUANTGALORE for a 10% discount.
Code
We’ve kept the code and instructions as low-friction as possible—but we know that not everyone has experience with things like storing data in a SQL table or working hands-on with large historical datasets.
That’s why we created the Volatility Trading Bible, a full guide that walks you through how we run these experiments. It covers everything — from setting up the IDE we use, to creating your own private database, to deploying real-world strategies step by step.
As part of our broader revamp of the Quant Galore ecosystem, we’ll be updating the guide to ensure everything still works — and we may add new features. But it’s fully functional today, and support is available if you get stuck.
👉 Check out the Volatility Trading Bible
For both backtesting and production, you will need to create and store your list of liquid, optionable stocks at the available points in time.
Navigate to our Time-Series Momentum repository, and download the “point-in-time-options” file.
To save time, you can just use the “historical-liquid-tickers” csv provided, but this won’t be updated on a forward basis.
Replace the SQL engine credentials with your own and run the file outright. Once run, the data will be sent to the newly created "historical_liquid_tickers_polygon" table in your database.
Once you have the liquid, optionable ticker list saved, you are ready to move into backtesting/production:
Backtesting
From the same repository, download the “mtum-backtest-public” file.
Swap in your SQL credentials so the script can connect to the dataset you created earlier.
You can just run the file outright and the corresponding plot will generate, but we encourage you to inspect the code line-by-line to get an understanding of how it works and see what changes you can make.
Production
From the same repository, download the “mtum-prod-public” file.
Swap in your SQL credentials one last time.
You can just run the file out of the box.
The top decile of the month will be stored in the top_decile variable and the bottom decile in the bot_decile variable.
The portfolio is rebalanced monthly, so the output will not change until the first trading day of each month.
Happy trading! 🫡🫡
Still needing to scratch that mental itch? If you enjoyed this post, you’ll probably love some of our others just like it: