Estimating the effect of multiple treatments#

[1]:
from dowhy import CausalModel
import dowhy.datasets

import warnings
warnings.filterwarnings('ignore')
[2]:
data = dowhy.datasets.linear_dataset(10, num_common_causes=4, num_samples=10000,
                                     num_instruments=0, num_effect_modifiers=2,
                                     num_treatments=2,
                                     treatment_is_binary=False,
                                     num_discrete_common_causes=2,
                                     num_discrete_effect_modifiers=0,
                                     one_hot_encode=False)
df=data['df']
df.head()
[2]:
X0 X1 W0 W1 W2 W3 v0 v1 y
0 1.767708 0.055920 -0.240717 -0.843351 1 3 6.726900 5.318777 295.611567
1 -1.470650 -0.938818 -0.975773 -0.355186 3 3 16.839656 12.739667 -1120.844957
2 -0.342821 1.699236 2.115395 -1.886431 1 1 3.640529 6.192848 194.403694
3 0.083615 0.197779 1.158983 -2.716997 1 3 5.658150 2.749601 100.307163
4 1.376552 0.300785 -0.641883 -1.958623 3 0 12.339354 10.039195 784.314906
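
The dataset dictionary also records which columns play which causal role, which is what the CausalModel constructor below relies on. A minimal sketch for inspecting it (the common_causes_names and effect_modifier_names keys are assumptions, consistent with other dowhy.datasets examples; treatment_name and outcome_name are used later in this notebook):

# Inspect the roles assigned by linear_dataset
print(data["treatment_name"])         # e.g. ['v0', 'v1'] -- two treatments
print(data["outcome_name"])           # e.g. 'y'
print(data["common_causes_names"])    # e.g. ['W0', 'W1', 'W2', 'W3'] (key name is an assumption)
print(data["effect_modifier_names"])  # e.g. ['X0', 'X1'] (key name is an assumption)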
[3]:
model = CausalModel(data=data["df"],
                    treatment=data["treatment_name"], outcome=data["outcome_name"],
                    graph=data["gml_graph"])
[4]:
model.view_model()
from IPython.display import Image, display
display(Image(filename="causal_model.png"))
(Figure: the causal graph of the model, rendered from causal_model.png)
[5]:
identified_estimand= model.identify_effect(proceed_when_unidentifiable=True)
print(identified_estimand)
Estimand type: EstimandType.NONPARAMETRIC_ATE

### Estimand : 1
Estimand name: backdoor
Estimand expression:
    d
─────────(E[y|W1,W3,W0,W2])
d[v₀  v₁]
Estimand assumption 1, Unconfoundedness: If U→{v0,v1} and U→y then P(y|v0,v1,W1,W3,W0,W2,U) = P(y|v0,v1,W1,W3,W0,W2)

### Estimand : 2
Estimand name: iv
No such variable(s) found!

### Estimand : 3
Estimand name: frontdoor
No such variable(s) found!
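
The identified estimand adjusts for the common causes W0-W3 via the backdoor criterion; no instrumental or frontdoor variables exist in this dataset. To inspect the adjustment set programmatically, something like the following should work (a minimal sketch):

# List the backdoor adjustment variables found during identification
print(identified_estimand.get_backdoor_variables())  # expected: ['W1', 'W3', 'W0', 'W2'] (order may vary)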

Linear model#

Let us first look at an example with a linear model. When the treatment is multi-dimensional, control_value and treatment_value can be provided as a tuple or list.

The interpretation is the change in y when (v0, v1) is changed from (0,0) to (1,1).

[6]:
linear_estimate = model.estimate_effect(identified_estimand,
                                        method_name="backdoor.linear_regression",
                                        control_value=(0,0),
                                        treatment_value=(1,1),
                                        method_params={'need_conditional_estimates': False})
print(linear_estimate)
*** Causal Estimate ***

## Identified estimand
Estimand type: EstimandType.NONPARAMETRIC_ATE

### Estimand : 1
Estimand name: backdoor
Estimand expression:
    d
─────────(E[y|W1,W3,W0,W2])
d[v₀  v₁]
Estimand assumption 1, Unconfoundedness: If U→{v0,v1} and U→y then P(y|v0,v1,W1,W3,W0,W2,U) = P(y|v0,v1,W1,W3,W0,W2)

## Realized estimand
b: y~v0+v1+W1+W3+W0+W2+v0*X0+v0*X1+v1*X0+v1*X1
Target units: ate

## Estimate
Mean value: 34.04612866069517
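
The estimate object also exposes the point estimate directly, and a different contrast can be requested simply by changing the tuples. A minimal sketch (the second call re-runs the estimator on the same data):

# Numeric value of the estimate above
print(linear_estimate.value)

# Effect of moving (v0, v1) from (0, 0) to (2, 1) instead of (1, 1)
alt_estimate = model.estimate_effect(identified_estimand,
                                     method_name="backdoor.linear_regression",
                                     control_value=(0, 0),
                                     treatment_value=(2, 1),
                                     method_params={'need_conditional_estimates': False})
print(alt_estimate.value)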

You can also estimate conditional effects, based on the effect modifiers.

[7]:
linear_estimate = model.estimate_effect(identified_estimand,
                                        method_name="backdoor.linear_regression",
                                        control_value=(0,0),
                                        treatment_value=(1,1))
print(linear_estimate)
*** Causal Estimate ***

## Identified estimand
Estimand type: EstimandType.NONPARAMETRIC_ATE

### Estimand : 1
Estimand name: backdoor
Estimand expression:
    d
─────────(E[y|W1,W3,W0,W2])
d[v₀  v₁]
Estimand assumption 1, Unconfoundedness: If U→{v0,v1} and U→y then P(y|v0,v1,W1,W3,W0,W2,U) = P(y|v0,v1,W1,W3,W0,W2)

## Realized estimand
b: y~v0+v1+W1+W3+W0+W2+v0*X0+v0*X1+v1*X0+v1*X1
Target units:

## Estimate
Mean value: 34.04612866069517
### Conditional Estimates
__categorical__X0  __categorical__X1
(-3.415, -0.17]    (-3.972, -1.109]     -70.424952
                   (-1.109, -0.514]     -33.177203
                   (-0.514, -0.0098]    -14.653352
                   (-0.0098, 0.551]       4.920673
                   (0.551, 3.39]         37.852659
(-0.17, 0.411]     (-3.972, -1.109]     -38.279800
                   (-1.109, -0.514]      -5.334605
                   (-0.514, -0.0098]     15.731057
                   (-0.0098, 0.551]      36.055460
                   (0.551, 3.39]         67.421581
(0.411, 0.919]     (-3.972, -1.109]     -20.080857
                   (-1.109, -0.514]      14.305287
                   (-0.514, -0.0098]     34.978155
                   (-0.0098, 0.551]      54.666784
                   (0.551, 3.39]         87.340856
(0.919, 1.521]     (-3.972, -1.109]      -1.487976
                   (-1.109, -0.514]      33.019920
                   (-0.514, -0.0098]     53.622393
                   (-0.0098, 0.551]      73.352510
                   (0.551, 3.39]        106.654895
(1.521, 4.496]     (-3.972, -1.109]      29.589954
                   (-1.109, -0.514]      64.460886
                   (-0.514, -0.0098]     82.707948
                   (-0.0098, 0.551]     104.178150
                   (0.551, 3.39]        133.896098
dtype: float64
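
The group-wise values printed above can also be accessed on the estimate object, e.g. via its conditional_estimates attribute (assumed here to hold the pandas Series shown above), which can be reshaped for easier reading:

# Conditional (group-wise) estimates; attribute name is an assumption
cond = linear_estimate.conditional_estimates
print(cond.unstack())  # rows: X0 bins, columns: X1 bins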

More methods#

You can also use methods from the EconML and CausalML libraries that support multiple treatments. For examples, see the conditional treatment effects notebook: https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html

Propensity score-based methods currently do not support multiple treatments.
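
For instance, EconML's DML estimator can be called through DoWhy for the two continuous treatments above. A minimal sketch, assuming econml and scikit-learn are installed (the nuisance and final models are illustrative choices):

from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LassoCV

# Double ML with flexible nuisance models; the final model is a linear fit on the treatments
dml_estimate = model.estimate_effect(
    identified_estimand,
    method_name="backdoor.econml.dml.DML",
    control_value=(0, 0),
    treatment_value=(1, 1),
    method_params={
        "init_params": {
            "model_y": GradientBoostingRegressor(),
            "model_t": GradientBoostingRegressor(),
            "model_final": LassoCV(fit_intercept=False),
        },
        "fit_params": {},
    },
)
print(dml_estimate)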