You're an avid data scientist and experimenter. You know that randomisation sits at the summit of Mount Evidence Credibility, and you also know that when you can't randomise, you resort to observational data and causal inference methods. At your disposal are various techniques for spinning up a control group: difference-in-differences, inverse propensity score weighting, and others. With an assumption here or there (some shakier than others), you estimate the causal effect and drive decision-making. But if you thought it couldn't get more exciting than "vanilla" causal inference, read on.
Personally, I've often found myself in at least two scenarios where "just doing causal inference" wasn't straightforward. The common denominator in both? A missing control group, at first glance, that is.
First, the cold-start scenario: the company wants to break into an uncharted opportunity space. Often there is no experimental data to learn from, nor has there been any change (read: "exogenous shock") from the business or product side to leverage in the more common causal inference frameworks like difference-in-differences (and its cousins in the pre-post paradigm).
Second, the unfeasible-randomisation scenario: the organisation is fully intentional about testing an idea, but randomisation is not feasible, or not even wanted. Even emulating a natural experiment can be constrained legally, technically, or commercially (especially when pricing is involved), or by interference bias arising in the marketplace.
These situations open up the space for a "different" kind of causal inference. Although the method we'll focus on here is not the only one suited for the job, I'd love for you to tag along on this deep dive into Regression Discontinuity Design (RDD).
In this post, I'll give you a crisp view of how and why RDD works. Inevitably, this involves a bit of math (a pleasant sight for some), but I'll do my best to keep it accessible with classic examples from the literature.
We'll also see how RDD can tackle a thorny causal inference challenge in e-commerce and online marketplaces: the impact of listing position on listing performance. In this practical section we'll cover key modelling considerations that practitioners often face: parametric versus non-parametric RDD, choosing the bandwidth parameter, and more. So, grab yourself a cup of coffee and let's jump in!
How and why RDD works
Regression Discontinuity Design exploits cutoffs (thresholds) to recover the effect of a treatment on an outcome. More precisely, it looks for a sharp change in the probability of treatment assignment along a 'running' variable. If treatment assignment depends solely on the running variable, and the cutoff is arbitrary, i.e. exogenous, then we can treat the units around it as randomly assigned. The difference in outcomes just above and below the cutoff gives us the causal effect.
For example, a scholarship awarded only to students scoring above 90 creates a cutoff based on test scores. That the cutoff is 90 is arbitrary; it could have been 80 for that matter; the line just had to be drawn somewhere. Moreover, scoring 91 vs. 89 makes all the difference as far as the treatment goes: either you get it or you don't. But in terms of ability, the two groups of students who scored 91 and 89 are not really different, are they? And those who scored 89.9 versus 90.1, if you insist?
Making the cutoff may come down to randomness when it's only about a few points. Maybe the student drank too much coffee right before the test, or too little. Maybe they got bad news the night before, were thrown off by the weather, or anxiety hit at the worst possible moment. It's this randomness that makes the cutoff so instrumental in RDD.
Without a cutoff, you don't have an RDD, just a scatterplot and a dream. But the cutoff on its own is not equipped with all it takes to identify the causal effect. Why it works hinges on one core identification assumption: continuity.
The continuity assumption, and parallel worlds
If the cutoff is the cornerstone of the approach, then its significance comes entirely from the continuity assumption. The idea is a simple, counterfactual one: had there been no treatment, there would have been no effect.
To ground the idea of continuity, let's jump straight into a classic example from public health: does legal alcohol access increase mortality?
Imagine two worlds where everyone and everything is the same, except for one thing: a law that sets the minimum legal drinking age at 18 years (we're in Europe, folks).
In the world with the law (the factual world), we'd expect alcohol consumption to jump right after age 18. Alcohol-related deaths should jump too, if there's a link.
Now take the counterfactual world where there is no such law; there should be no such jump. Alcohol consumption and mortality would likely follow a smooth trend across age groups.
And that's a good thing for identifying the causal effect; the absence of a jump in deaths in the counterfactual world is the crucial condition for interpreting a jump in the factual world as the impact of the law.
Put simply: if there is no treatment, there shouldn't be a jump in deaths. If there is one, then something other than our treatment is causing it, and the RDD is not valid.
The continuity assumption can be written in the potential outcomes framework as:
\begin{equation}
\lim_{x \to c^-} \mathbb{E}[Y_i(0) \mid X_i = x] = \lim_{x \to c^+} \mathbb{E}[Y_i(0) \mid X_i = x]
\label{eq:continuity_po}
\end{equation}
where \(Y_i(0)\) is the potential outcome, say, the risk of death of subject \(i\), under no treatment.
Notice that the right-hand side is a quantity of the counterfactual world; not one that can be observed in the factual world, where subjects are treated whenever they fall above the cutoff.
Unfortunately for us, we only have access to the factual world, so the assumption cannot be tested directly. Luckily, we can proxy it. We'll see how placebo groups achieve this later in the post. But first, let's identify what can break the assumption:
- Confounders: something other than the treatment happens at the cutoff that also affects the outcome. For instance, adolescents turning to alcohol to relieve the crushing pressure of being an adult now: something that has nothing to do with the law on the minimum drinking age (in the no-law world), but that does confound the effect we're after, because it happens at the same age, the cutoff, that is.
- Manipulation of the running variable: when units can influence their position relative to the cutoff, the units who did so may be inherently different from those who didn't. Hence, cutoff manipulation can result in selection bias, a form of confounding. Especially if treatment assignment is binding, subjects may try their best to get one version of the treatment over the other.
Hopefully it's now clear what constitutes an RDD: the running variable, the cutoff, and most importantly, reasonable grounds to defend that continuity holds. With that, you have yourself a neat and effective causal inference design for questions that can't be answered by an A/B test, nor by some of the more common causal inference methods like diff-in-diff, nor with stratification.
In the next section, we continue shaping our understanding of how RDD works: how does RDD "control" confounding relationships? What exactly does it estimate? Can't we just control for the running variable instead? These are the questions we tackle next.
RDD and instruments
If you are already familiar with instrumental variables (IV), you may see the similarities: both RDD and IV leverage an exogenous variable that doesn't cause the outcome directly, but does influence the treatment assignment, which in turn may influence the outcome. In IV this is a third variable Z; in RDD it is the running variable that serves as the instrument.
Wait. A third variable, maybe. But an exogenous one? That's less clear.
In our alcohol consumption example, it isn't hard to imagine that age, the running variable, is a confounder. As age increases, so might tolerance for alcohol, and with it the level of consumption. That's a stretch, maybe, but not implausible.
Since treatment (the legal minimum age) depends on age, and only units above 18 are treated, treated and untreated units are inherently different. If age also influences the outcome, through a mechanism like the one sketched above, we've got ourselves an apex confounder.
Still, the running variable plays a key role. To understand why, we need to look at how RDD and instruments leverage the front-door criterion to identify causal effects.
Backdoor vs. frontdoor
Perhaps almost instinctively, one might answer: control for the running variable; that's what stratification taught us. The running variable is a confounder, so we include it in the regression and close the backdoor. But doing so would cause some trouble.
Remember, treatment assignment depends on the running variable such that everyone above the cutoff is treated with certainty, and no one below it is. So, if we control for the running variable, we run into two closely related problems:
- Violation of the positivity assumption: this assumption says that treated units should have a non-zero probability of receiving the opposite treatment, and vice versa. Intuitively, conditioning on the running variable is like saying: "Let's estimate the effect of being above the minimum drinking age, while holding age fixed at 14." That doesn't make sense. At any given value of the running variable, treatment is either always 1 or always 0, so there is no variation in treatment, conditional on the running variable, to support such a question.
- Perfect collinearity at the cutoff: in estimating the treatment effect, the model has no way to separate the effect of crossing the cutoff from the effect of being at a particular value of X. The result? No estimate, or a variable forcefully dropped from the design matrix. Singular design matrix, not of full rank: these should sound familiar to most practitioners. The short simulated sketch below illustrates the point.
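A minimal simulated sketch (toy numbers, not the marketplace data): once the running variable is fully controlled for, the treatment indicator is a linear combination of those controls, and lm() cannot estimate it.

```r
set.seed(1)
x <- rep(1:100, each = 5)                 # running variable, cutoff at 50
d <- as.numeric(x > 50)                   # sharp treatment assignment
y <- 0.2 * d + 0.01 * x + rnorm(length(x), sd = 0.1)

# Saturating the model in the running variable makes d redundant: it equals the
# sum of the dummies for x = 51..100, so the design matrix is rank-deficient
# and the coefficient on d comes back as NA.
m <- lm(y ~ factor(x) + d)
coef(m)["d"]
```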
So no: conditioning on the running variable does not turn it into the exogenous instrument we're after. Instead, the running variable becomes exogenous by pushing it to the limit, quite literally. Where the running variable approaches the cutoff from either side, the units are the same with respect to the running variable, yet falling just above or below makes all the difference for getting treated or not. This makes the running variable a valid instrument, provided that treatment assignment is the only thing that happens at the cutoff. Judea Pearl refers to instruments as meeting the front-door criterion.

LATE, not ATE
So, in essence, we are controlling for the running variable, but only near the cutoff. That's why RDD identifies the local average treatment effect (LATE), a special flavour of the average treatment effect (ATE). The LATE looks like:
$$\delta_{SRD} = \mathbb{E}\big[Y_i^1 - Y_i^0 \mid X_i = c_0\big]$$
The "local" bit refers to the partial scope of the population we're estimating the ATE for: the subpopulation around the cutoff. In fact, the further away a data point is from the cutoff, the more the running variable acts as a confounder, working against the RDD instead of in its favour.
Back to the minimum legal drinking age example. Adolescents who are 17 years and 11 months old are really not that different from those who are 18 years and 1 month old, on average. If anything, a month or two of difference in age is not what sets them apart. Isn't that the essence of conditioning on, or holding constant, a variable? What does set them apart is that the latter group can legally consume alcohol for being above the cutoff, while the former cannot.
This setup enables us to estimate the LATE for the units around the cutoff and, with that, the effect of the minimum-age policy on alcohol-related deaths.
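To spell the identification step out: continuity (of both potential outcome regressions) lets us replace the unobservable quantities at the cutoff with observable one-sided limits, so the jump in the observed outcome is exactly the LATE defined above.
\begin{equation}
\lim_{x \downarrow c_0} \mathbb{E}[Y_i \mid X_i = x] - \lim_{x \uparrow c_0} \mathbb{E}[Y_i \mid X_i = x] = \mathbb{E}\big[Y_i^1 - Y_i^0 \mid X_i = c_0\big] = \delta_{SRD}
\end{equation}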
We've seen how the continuity assumption has to hold to make the cutoff an interesting point along the running variable for identifying the causal effect of a treatment on the outcome: specifically, by letting the jump in the outcome variable be entirely attributable to the treatment. If continuity holds, the treatment is as-good-as-random near the cutoff, allowing us to estimate the local average treatment effect.
In the next section, we walk through the practical setup of a real-world RDD: we identify the key ingredients (the running variable and cutoff, treatment, outcome, covariates), then estimate the RDD after discussing some important modelling choices, and end the section with a placebo test.
RDD in Action: Search Ranking and Listing Performance
In e-commerce and online marketplaces, the starting point of the buyer experience is searching for a listing. Think of a buyer typing "Nikon F3 analogue camera" into the search bar. Upon that action, algorithms frantically sort through the inventory looking for the best-matching listings to populate the search results page.
Time and attention are two scarce resources. So it's in the interest of everyone involved (the buyer, the seller, and the platform) to reserve the most prominent positions on the page for the matches with the highest expected likelihood of becoming successful trades.
Moreover, position effects in consumer behaviour suggest that users infer higher credibility and desirability from items "ranked" at the top. Think of high-tier products being placed at eye height or above in supermarkets, and of highlighted items at the top of an e-commerce platform's homepage.
So the question becomes: how does position on the search results page influence a listing's chances of being sold?
Hypothesis:
If a listing is ranked higher on the search results page, then it will have a higher probability of being sold, because higher-ranked listings get more visibility and attention from users.
Intermezzo: business or theory?
As with any good hypothesis, we need a bit of theory to ground it. Good for us, we aren't searching for the cure for cancer. Our theory is about well-understood psychological phenomena and behavioural patterns, to put it plainly.
Think of the primacy effect, anchoring bias, and the resource theory of attention. These are well-known concepts in behavioural and cognitive psychology that back up our plan here.
Kicking off the conversation with a product manager is more fun this way. Personally, I also get excited when I have to brush up on some psychology.
But I've learned through and through that theory is really secondary to any initiative in my industry (tech), aside from a research team and project, arguably. And it's fair to say it helps us stay on-purpose: what we're doing is to bring the business forward, not mother science.
Knowing the answer has real business value. Product and commercial teams could use it to design new paid features that help sellers get their listings into higher positions: a win for both the business and the user. It could also clarify the value of on-site real estate like banner positions and ad slots, helping drive growth in B2B advertising.
The question is one of incrementality: would listing \(j\) have been sold had it been ranked 1st on the results page instead of 15th? So we want to make a causal statement. That's hard for at least two reasons:
- A/B testing comes with a cost, and;
- there are confounders we need to deal with if we resort to observational methods.
Let's expand on that.
The cost of A/B testing
One experimental design could randomise the fetched listings across the page slots, independent of listing relevance. By breaking the inherent link between relevance and position, we could learn the effect of position on listing performance. It's an interesting idea, but a costly one.
While it's a reasonable design for statistical inference, this setup is kind of horrible for the user and the business. The user might have found what they needed, maybe even made a purchase. But instead, because of our experiment, perhaps only half of the inventory they saw was remotely a good match. This suboptimal user experience likely hurts engagement in both the short and the long run, especially for new users who are yet to see what value the platform holds for them.
Can we think of a way to mitigate this loss? Still committed to A/B testing, one could expose a smaller set of users to the experiment. While that would scale down the harm, it would also stand in the way of reaching sufficient statistical power by cutting the sample size. Moreover, even small audiences can account for substantial revenue for some companies, those with millions of users. So shrinking the exposed audience is not a silver bullet either.
Naturally, the way to go is to leave the platform and its users undisturbed, and still find a way to answer the question at hand. Causal inference is the right mindset for this, but the question remains: how do we do that exactly?
Confounders
Listings don't just make it to the top of the page on a good day; it's their quality, relevance, and the seller's reputation that promote a listing's ranking. Let's call these three variables W.
What makes W tricky is that it influences both the ranking of the listing and the probability that the listing gets clicked, a proxy for performance.
In other words, W affects both our treatment (position) and outcome (click), helping itself to the status of confounder.

Therefore, our task is to find a design that is fit for purpose; one that effectively controls for the confounding effect of W.
You don't choose regression discontinuity: it chooses you
Not all causal inference designs are just sitting around waiting to be picked. Sometimes they show up when you least need them, and sometimes you get lucky when you need them most, like today.
It looks like we can use the page cutoff to identify the causal impact of position on click-through rate.
Abrupt cutoff in search results pagination
Let's unpack the listing recommendation mechanism to see exactly how. Here's what happens under the hood when a results page is generated for a search:
- Fetch listings matching the query: a coarse set of listings is pulled from the inventory, based on filters like location, radius, category, etc.
- Score listings on personal relevance: this step uses user history and listing-quality proxies to predict what the user is most likely to click.
- Rank listings by score: higher scores get higher ranks. Business rules mix ads and commercial content in with the organic results.
- Populate pages: listings are slotted by absolute relevance score. A results page ends at the k-th listing, so the (k+1)-th listing appears at the top of the next page. This is going to be crucial to our design.
- Impressions and user interaction: users see the results in order of relevance. If a listing catches their eye, they may click and view more details: one step closer to the trade.
Practical setup and variables
So, what exactly is our design? Next, we walk through the reasoning behind, and the identification of, its key components.
The running variable
In our setup, the running variable is the relevance score \(s_j\) of listing j. This score is a continuous, complex function of both user and listing properties:
$$s_j = f(u_i, l_j)$$
The listing's rank \(r_j\) is simply a rank transformation of \(s_j\), defined as:
$$r_i = \sum_{j=1}^{n} \mathbf{1}(s_j \leq s_i)$$
Practically speaking, this means that for analytical purposes (fitting models, making local comparisons, or identifying cutoff points), knowing a listing's rank conveys nearly the same information as knowing its underlying relevance score, and vice versa.
Details: relevance score vs. rank
The relevance score \(s_j\) reflects how well a listing matches a specific user's query, given parameters like location, price range, and other filters. But the score is relative: it only has meaning within the context of the listings returned for that particular search.
In contrast, rank (or position) is absolute. It directly determines a listing's visibility. I think of rank as a standardising transformation of \(s_j\). For example, Listing A in search Z might have the highest score of 5.66, while Listing B in search K tops out at 0.99. Those raw scores aren't comparable across searches, but both listings are ranked first in their respective result sets. That makes them equivalent in terms of what really matters here: how visible they are to users.
The cutoff, and treatment
If a listing just misses the first page, it doesn't fall to the bottom of page two; it's artificially bumped to the top. That's a lucky break. Normally, only the most relevant listings appear at the top, but here a listing of merely moderate relevance ends up in a prime slot (albeit on the second page) purely because of the arbitrary position of the page break. Formally, the treatment assignment \(D_j\) goes like:
$$D_j = \begin{cases} 1 & \text{if } r_j > 30 \\ 0 & \text{otherwise} \end{cases}$$
(Note on global rank: rank 31 isn't just the first listing on page two; it's still the 31st listing overall.)
The strength of this setup lies in what happens near the cutoff: a listing ranked 30 may be nearly identical in relevance to one ranked 31. A small scoring fluctuation, or a high-ranking outlier, can push a listing over the threshold, flipping its treatment status. This local randomness is what makes the setup valid for RDD. A minimal sketch of how this translates into data columns follows below.
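A small sketch of the bookkeeping, assuming a raw overall-rank column (the name `rank_overall` is hypothetical): we centre the rank at the page break so that the cutoff sits at 0, which gives the running variable (`ad_position_idx`) and the treatment indicator (`D`) used in the models later on.

```r
page_break <- 30

# Centred running variable: 0 = last slot of page one, 1 = first slot of page two
df_listing_level$ad_position_idx <- df_listing_level$rank_overall - page_break

# Treatment: landing on page two
df_listing_level$D <- as.numeric(df_listing_level$ad_position_idx > 0)
```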
The outcome: impression-to-click
Finally, we operationalise the outcome of interest as the click-through rate from impressions to clicks. Remember that all listings are 'impressed' when the page is populated. The click is the binary indicator of the desired user behaviour.
In summary, this is our setup:
- Outcome: impression-to-click conversion
- Treatment: landing on the first vs. the second page
- Running variable: listing rank; page cutoff at 30
Next, we walk through how to estimate the RDD.
Estimating RDD
In this section, we'll estimate the causal parameter, interpret it, and connect it back to our core hypothesis: how position affects listing visibility.
Here's what we'll cover:
- Meet the data: intro to the dataset
- Covariates: why and how to include them
- Modelling choices: parametric RDD vs. not; choosing the polynomial degree and bandwidth
- Placebo testing
- Density continuity testing
Meet the data
We're working with impressions data from one of Adevinta's (ex-eBay Classifieds Group) marketplaces. It's real data, which makes the whole exercise feel grounded. That said, values and relationships are censored and scrambled where necessary to protect their strategic value.
An important note for how we interpret the RDD estimates and drive decisions is how the data was collected: only those searches where the user saw both the first and the second page were included.
This way we partial out the page fixed effect, if any, but the reality is that many users don't make it to the second page at all. So there is a big volume gap. We discuss the repercussions in the analysis recap.
The dataset consists of these variables:
- Clicked: 1 if the listing was clicked, 0 otherwise – binary
- Position: the rank of the listing – numeric
- D: treatment indicator, 1 if position > 30, 0 otherwise – binary
- Category: product category of the listing – nominal
- Organic: 1 if organic, 0 if from a professional seller – binary
- Boosted: 1 if it was paid to be at the top, 0 otherwise – binary
| click | rel_position | D | category | organic | boosted |
|-------|--------------|---|----------|---------|---------|
| 1 | -3 | 0 | A | 1 | 0 |
| 1 | -14 | 0 | A | 1 | 0 |
| 0 | 3 | 1 | C | 1 | 0 |
| 0 | 10 | 1 | D | 0 | 0 |
| 1 | -1 | 0 | K | 1 | 1 |
Covariates: how to include them to increase accuracy?
The running variable, the cutoff, and the continuity assumption give you all you need to identify the causal effect. But including covariates can sharpen the estimator by reducing variance, if done right. And oh, is it easy to do it wrong.
The easiest thing to "break" about the RDD design is the continuity assumption. At the same time, that's the last thing we want to break (I've already rambled long enough about this).
Therefore, the main quest in adding covariates is to do it in such a way that we reduce variance while keeping the continuity assumption intact. One way to formulate that is to assume continuity both without and with covariates:
\begin{equation}
\lim_{x \to c^-} \mathbb{E}[Y_i(0) \mid X_i = x] = \lim_{x \to c^+} \mathbb{E}[Y_i(0) \mid X_i = x] \quad \text{(no covariates)}
\end{equation}
\begin{equation}
\lim_{x \to c^-} \mathbb{E}[Y_i(0) \mid X_i = x, Z_i] = \lim_{x \to c^+} \mathbb{E}[Y_i(0) \mid X_i = x, Z_i] \quad \text{(covariates)}
\end{equation}
where \(Z_i\) is a vector of covariates for subject i. Less mathy: two things should remain unchanged after adding covariates:
- the functional form of the running variable, and;
- the (absence of a) jump in treatment assignment at the cutoff.
I didn't figure out the above myself; Calonico, Cattaneo, Farrell, and Titiunik (2018) did. They developed a formal framework for incorporating covariates into RDD. I'll leave the details to the paper. For now, some modelling guidelines can keep us going:
- Model covariates linearly, so that the treatment effect remains the same with and without covariates, thanks to a simple and smooth partial effect of the covariates;
- Keep the model terms additive, so that the treatment effect remains the LATE and does not become conditional on covariates (CATE), and to avoid adding a jump at the cutoff;
- The above implies that there should be no interactions with the treatment indicator, nor with the running variable. Doing either can break continuity and invalidate our RDD design.
Our target model may look like this:
\begin{equation}
Y_i = \alpha + \tau D_i + f(X_i - c) + \beta^\prime Z_i + \varepsilon_i
\end{equation}
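In R, a sketch of what this additive specification could look like with the dataset's own covariates (no interactions with D or with the running variable; the kernel weighting discussed later is left out for now):

```r
# Additive inclusion of the covariates next to the treatment and the
# centred running variable
mod_with_covariates <- lm(click ~ D + ad_position_idx + organic + boosted + category,
                          data = df_listing_level)
```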
If we instead let the covariates interact with the treatment indicator, we get the kind of model we want to avoid:
\begin{equation}
Y_i = \alpha + \tau D_i + f(X_i - c) + \beta^\prime (Z_i \cdot D_i) + \varepsilon_i
\end{equation}
Now, let's distinguish between two ways of practically including covariates:
- Direct inclusion: add them directly to the outcome model alongside the treatment and the running variable.
- Residualisation: first regress the outcome on the covariates, then use the residuals in the RDD.
We'll use residualisation in our case. It's an effective way to reduce noise, it produces cleaner visualisations, and it protects the strategic value of the data.
The snippet below defines the outcome de-noising model and computes the residualised outcome, click_res. The idea is simple: once we strip out the variance explained by the covariates, what remains is a less noisy version of our outcome variable, at least in theory. Less noise means more accuracy.
In practice, though, the residualisation barely moved the needle this time. We can see that by checking the change in standard deviation: SD(click_res) / SD(click) - 1 gives us about -3%, which is small, practically speaking.
```r
# De-noising clicks: regress the outcome on the covariates and keep the residuals
mod_outcome_model <- lm(click ~ l1 + organic + boosted,
                        data = df_listing_level)
df_listing_level$click_res <- residuals(mod_outcome_model)

# The impact on the variance is limited: ~ -3%
sd(df_listing_level$click_res) / sd(df_listing_level$click) - 1
```
Even though the de-noising didn't have much effect, we're still in a good spot. The original outcome variable already has low conditional variance, and the patterns around the cutoff are visible to the naked eye, as we can see below.

We move on to a few other modelling choices that often have a bigger impact: choosing between parametric and non-parametric RDD, the polynomial degree, and the bandwidth parameter (h).
Modelling choices in RDD
Parametric vs non-parametric RDD
You might wonder why we even need to choose between parametric and non-parametric RDD. The answer lies in how each approach trades off bias and variance in estimating the treatment effect.
Choosing parametric RDD is essentially choosing to reduce variance. It assumes a specific functional form for the relationship between the outcome and the running variable, \(\mathbb{E}[Y \mid X]\), and fits that model across the entire dataset. The treatment effect is captured as a discrete jump in an otherwise continuous function. The typical form looks like this:
$$Y = \beta_0 + \beta_1 D + \beta_2 X + \beta_3 D \cdot X + \varepsilon$$
Non-parametric RDD, on the other hand, is about reducing bias. It avoids strong assumptions about the global relationship between Y and X and instead estimates the outcome function separately on either side of the cutoff. This flexibility allows the model to capture more accurately what is happening right around the threshold. The non-parametric estimand is:
\(\tau = \lim_{x \downarrow c} \mathbb{E}[Y \mid X = x] - \lim_{x \uparrow c} \mathbb{E}[Y \mid X = x]\)
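A minimal sketch of the non-parametric idea on our data: fit a local linear regression on each side of the cutoff and difference the two fits at the cutoff (the window of 10 positions per side anticipates the bandwidth h discussed below).

```r
h <- 10

# Observations within h positions of the cutoff, split by side
left  <- subset(df_listing_level, ad_position_idx >= -h & ad_position_idx <= 0)
right <- subset(df_listing_level, ad_position_idx > 0 & ad_position_idx <= h)

# Separate local linear fits on each side
fit_left  <- lm(click_res ~ ad_position_idx, data = left)
fit_right <- lm(click_res ~ ad_position_idx, data = right)

# The jump at the cutoff: difference of the two fits evaluated at 0
tau_hat <- predict(fit_right, newdata = data.frame(ad_position_idx = 0)) -
  predict(fit_left, newdata = data.frame(ad_position_idx = 0))
tau_hat
```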
So, which should you choose? Honestly, it can feel arbitrary. And that's okay. This is the first in a series of judgment calls that practitioners often call the fun part of RDD. It's where modelling becomes as much an art as a science.
I'll walk through how I approach that choice. But first, let's look at two key tuning parameters (especially for non-parametric RDD) that will guide our final decision: the polynomial degree and the bandwidth, h.
Polynomial degree
The relationship between the outcome and the running variable can take many forms, and capturing its true shape is crucial for estimating the causal effect accurately. If you're lucky, everything is linear and there's no need to consider polynomials. If you're a realist, you probably want to learn how they can serve you in the process.
In selecting the right polynomial degree, the goal is to reduce bias without inflating the variance of the estimator. So we want to allow for flexibility, but not more than necessary. Take the examples in the image below: with an outcome of low enough variance, the linear form naturally invites the eye to estimate the outcome at the cutoff. But the estimate becomes biased with an only slightly more complex form, if we enforce a linear shape in the model. Insisting on a linear form in such a complex case is like fitting your feet into a glove: it kind of works, but it's very ugly.
Instead, we give the model more degrees of freedom with a higher-degree polynomial, and estimate \(\tau = \lim_{x \downarrow c} \mathbb{E}[Y \mid X = x] - \lim_{x \uparrow c} \mathbb{E}[Y \mid X = x]\) with lower bias.

The bandwidth parameter: h
Working with polynomials in the way described above doesn't come free of worries. Two things are required, and they pose a challenge at the same time:
- we need to get the modelling right for the whole range, and;
- the whole range needs to be relevant for the task at hand, which is estimating \(\tau = \lim_{x \downarrow c} \mathbb{E}[Y \mid X = x] - \lim_{x \uparrow c} \mathbb{E}[Y \mid X = x]\).
Only then do we reduce bias as intended; if either of these does not hold, we risk adding more of it.
The thing is that modelling the whole range properly is harder than modelling a smaller range, especially if the form is complex. So it's easier to make mistakes. Moreover, the whole range is almost certain not to be relevant for estimating the causal effect; the "local" in LATE gives it away. How do we work around this?
Enter the bandwidth parameter, h. The bandwidth helps the model leverage the data that is closer to the cutoff, dropping the global-data idea and bringing things back to the local scope that RDD estimates the effect for. It does so by weighting the data with some function \(w(X)\), so that more weight is given to observations near the cutoff and less to those further away.
For example, with h = 10, the model considers a range of total length 20: 10 on each side of the cutoff.
The effective weight depends on the function \(w\). A bandwidth function with hard-boundary behaviour is known as a square, or uniform, kernel. Think of it as a function that gives weight 1 when the data is within the bandwidth, and 0 otherwise. The Gaussian and triangular kernels are two other kernels frequently used by practitioners. The key difference is that they weight the observations less abruptly than the square kernel. The image below visualises the behaviour of the three kernel functions, and the snippet after it sketches them in R.

Everything put together: non- vs. parametric RDD, polynomial degree and bandwidth
To me, choosing the final model boils down to the question: what is the simplest model that does a good job? Indeed, the principle of Occam's razor never goes out of style. In practice, this means:
- Non- vs. parametric: is the functional form simple on both sides of the cutoff? Then a single fit, pooling data from both sides, will do. Otherwise, non-parametric RDD gives the flexibility needed to embrace two different dynamics on either side of the cutoff.
- Polynomial degree: when the function is complex, I opt in for higher degrees to follow the trend more flexibly.
- Bandwidth: if I just picked a high polynomial degree, then I'll let h be larger too. Otherwise, lower values of h usually go well with lower polynomial degrees, in my experience*, **.
* This brings us to the generally accepted recommendation in the literature: keep the polynomial degree lower than 3. In most use cases 2 works well enough. Just make sure you pick mindfully.
** Also, note that h fits especially well within the non-parametric mentality; I see these two choices as co-dependent.
Back to the listing position scenario. This is the final model to me:
```r
# Modelling the residualised (de-noised) outcome: a jump at the cutoff plus
# separate slopes on each side, with triangular kernel weights (h = 10)
mod_rdd <- lm(click_res ~ D * ad_position_idx,
              weights = triangular_kernel(x = ad_position_idx, c = 0, h = 10),
              data = df_listing_level)
```
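To read off the estimated jump at the cutoff, and optionally pair it with heteroskedasticity-robust standard errors (using the sandwich and lmtest packages is my own suggestion here, not something the analysis above depends on):

```r
coef(mod_rdd)["D"]   # the estimated jump at the cutoff, i.e. the LATE

library(sandwich)
library(lmtest)
coeftest(mod_rdd, vcov = vcovHC(mod_rdd, type = "HC1"))["D", ]
```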
Interpreting RDD results
Let's look at the model output. The image below shows the model summary. If you're familiar with that, it all comes down to interpreting the parameters.
The first thing to look at is that treated listings have a roughly 1 percentage point higher probability of being clicked than untreated listings. To put that in perspective, that's a +20% change if the click rate of the control is 5%, and roughly a +1% increase if the control is at 80%. In terms of the practical significance of this causal effect, those two uplifts are day and night. I'll leave this open-ended with a few questions to take home: when would you and your team label this impact an opportunity to jump on? What other data or answers do we need to declare this path worth following?
The remaining parameters don't really add much to the interpretation of the causal effect, but let's go over them quickly anyway. The second estimate (x) is the slope below the cutoff; the third one (D × x) is the additional [negative] amount added to that slope to reflect the slope above the cutoff. Finally, the intercept is the average for the units right below the cutoff. Because our outcome variable is residualised, the value -0.012 is the demeaned outcome; it is no longer on the scale of the original outcome.

Different choices, different models
I've put this image together to show a collection of other possible models, had we made different choices in bandwidth, polynomial degree, and parametric-versus-not. Although hardly any of these models would have put the decision maker on an entirely wrong path for this particular dataset, each model comes with its own bias and variance properties. This does colour our confidence in the estimate.

Placebo testing
In any causal inference method, the identification assumption is everything. If one thing is off, the whole analysis crumbles. We can pretend everything is fine, or we can put our methods to the test ourselves (believe me, it's better if you break your own analysis before it goes out there).
Placebo testing is one way to corroborate the results. It checks the validity of the findings by using a setup identical to the real one, minus the actual treatment. If we still see an effect, it signals a flawed design: continuity can't be assumed, and causal effects can't be identified.
Good for us, we have a placebo group. The 30-listing page break only exists on the desktop version of the platform. On mobile, infinite scroll makes it one long page: no pagination, no page jump. So the effect of "going to the next page" shouldn't appear, and it doesn't.
I don't think we need to do much inference here. The graph below already tells the whole story: without pages, going from the 30th position to the 31st is no different from going from any other position to the next. More importantly, the function is smooth at the cutoff. This finding lends a great deal of credibility to our analysis by showing that continuity holds in this placebo group.

The placebo test is one of the strongest checks in an RDD. It tests the continuity assumption almost directly, by treating the placebo group as a stand-in for the counterfactual.
Of course, this relies on a new assumption: that the placebo group is valid; that it is a sufficiently good counterfactual. So the test is powerful only if that assumption is more credible than assuming continuity without evidence. For those who want to go beyond the visual check, the same specification can be fitted on the placebo group, as sketched below.
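A minimal sketch, assuming a mobile-side data frame prepared the same way as the desktop one (the name `df_listing_level_mobile` is hypothetical), with the pseudo-cutoff kept at position 30; the coefficient on D should be indistinguishable from zero.

```r
# Same specification as mod_rdd, but on the placebo (mobile) data where no
# page break exists
mod_placebo <- lm(click_res ~ D * ad_position_idx,
                  weights = triangular_kernel(x = ad_position_idx, c = 0, h = 10),
                  data = df_listing_level_mobile)
summary(mod_placebo)$coefficients["D", ]
```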
Which means we should be open to the possibility that there is no proper placebo group. How do we stress-test our design then?
No-manipulation and the density continuity test
Quick recap. There are two related sources of confounding, and hence of violating the continuity assumption:
- direct confounding from a third variable at the cutoff, and
- manipulation of the running variable.
The first can't be tested directly (except with a placebo test). The second can.
If units can shift their running variable, they self-select into treatment. The comparison stops being fair: we're now comparing manipulators to those who couldn't or didn't manipulate. That self-selection becomes a confounder if it also affects the outcome.
For instance, think of students who didn't make the cut for a scholarship, but go on to successfully smooth-talk their institution into letting them pass with a higher score. That silver tongue may also help them land better salaries, and so it acts as a confounder when we study the effect of scholarships on future earnings.

So, what are the signs that we're in such a scenario? An unexpectedly high number of units just above the cutoff, and a dip just below (or vice versa). We can see this as another continuity question, but this time in terms of the density of the samples.
While we can't test the continuity of the potential outcomes directly, we can test the continuity of the density of the running variable at the cutoff. The McCrary test is the standard tool for this, exactly testing:
\(H_0: \lim_{x \to c^-} f(x) = \lim_{x \to c^+} f(x) \quad \text{(no manipulation)}\)
\(H_A: \lim_{x \to c^-} f(x) \neq \lim_{x \to c^+} f(x) \quad \text{(manipulation)}\)
where \(f(x)\) is the density function of the running variable. If \(f(x)\) jumps at x = c, it suggests that units have sorted themselves just above or below the cutoff, violating the assumption that the running variable was not manipulable at that margin.
The internals of this test are something for a different post, because luckily we can rely on rddensity::rddensity to run it off the shelf.
```r
library(rddensity)

density_check_obj <- rddensity(X = df_listing_level$ad_position_idx,
                               c = 0)
summary(density_check_obj)

# for the plot below
rdplotdensity(density_check_obj, X = df_listing_level$ad_position_idx)
```

The test shows marginal evidence of a discontinuity in the density of the running variable (T = 1.77, p = 0.077). Binomial counts are unbalanced across the cutoff, suggesting fewer observations just below the threshold.
Normally, this would be a red flag, as it could pose a threat to the continuity assumption. This time, however, we know that continuity actually holds (see the placebo test).
Moreover, ranking is done by the algorithm: sellers have no means to manipulate the rank of their listings at all. That's something we know by design.
Hence, a more plausible explanation is that the discontinuity in the density is driven by platform-side impression logging (not ranking), or by my own filtering in the SQL query (which is elaborate, and missing values on the filter variables are not uncommon).
Inference
The results will do for this time around. But Calonico, Cattaneo, and Titiunik (2014) highlight a few issues with OLS RDD estimates like ours. Specifically: 1) the bias in estimating the expected outcome at the cutoff, which is no longer really at the cutoff once we take samples further away from it, and 2) the bandwidth-induced uncertainty that is left out of the model (as h is treated as a hyperparameter, not a model parameter).
Their methods are implemented in rdrobust, an R and Stata package. I recommend using that software in analyses that are meant to drive real-life decisions. A sketch of what that could look like on our data follows below.
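A minimal sketch, assuming the rdrobust package is installed: by default it uses a triangular kernel, a local-linear fit, and a data-driven bandwidth, and it reports robust bias-corrected confidence intervals.

```r
library(rdrobust)

rd_out <- rdrobust(y = df_listing_level$click_res,
                   x = df_listing_level$ad_position_idx,
                   c = 0)
summary(rd_out)
```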
Analysis recap
We looked at how a listing's spot in the search results affects how often it gets clicked. By focusing on the cutoff between the first and second page, we found a clear (though modest) causal effect: listings at the top of page two got more clicks than those stuck at the bottom of page one. A placebo test backed this up: on mobile, where there's infinite scroll and no real "pages", the effect disappears. That gives us more confidence in the result. Bottom line: where a listing shows up matters, and prioritising top positions could boost engagement and create new commercial opportunities.
But before we run with it, a couple of important caveats.
First, our result is local: it only tells us what happens near the page-two cutoff. We don't know whether the same effect holds at the top of page one, which probably signals even more value to users. So this might be a lower-bound estimate.
Second, volume matters. The first page gets far more eyeballs. So even if a top slot on page two gets more clicks per view, a lower spot on page one might still win overall.
Conclusion
Regression Discontinuity Design is not your everyday causal inference method: it's a nuanced approach best saved for when the stars align and randomisation isn't possible. Make sure you have a good grip on the design, and be thorough about the core assumptions: try to break them, and then try harder. Once you have what you need, it's an incredibly satisfying design. I hope this read serves you well the next time you get the opportunity to apply this method.
It's great to see that you got this far into the post. If you want to read more, that's possible, just not here: check out the reference section below for some deep reads.
Happy to connect on LinkedIn, where I discuss more topics like this one. Also, feel free to bookmark my personal website, which is much cosier than here.
All images in this post are my own. The dataset I used is real, and it is not publicly available. Moreover, the values extracted from it are anonymised: changed or omitted to avoid revealing strategic insights about the company.
References
Calonico, S., Cattaneo, M. D., Farrell, M. H., & Titiunik, R. (2018). Regression Discontinuity Designs Using Covariates. Retrieved from http://arxiv.org/abs/1809.03904v1
Calonico, S., Cattaneo, M. D., & Titiunik, R. (2014). Robust nonparametric confidence intervals for regression-discontinuity designs. Econometrica, 82(6), 2295–2326. https://doi.org/10.3982/ECTA11757