Estimating effect of multiple treatments#
[1]:
from dowhy import CausalModel
import dowhy.datasets
import warnings
warnings.filterwarnings('ignore')
[2]:
data = dowhy.datasets.linear_dataset(10, num_common_causes=4, num_samples=10000,
                                     num_instruments=0, num_effect_modifiers=2,
                                     num_treatments=2,
                                     treatment_is_binary=False,
                                     num_discrete_common_causes=2,
                                     num_discrete_effect_modifiers=0,
                                     one_hot_encode=False)
df = data['df']
df.head()
[2]:
| | X0 | X1 | W0 | W1 | W2 | W3 | v0 | v1 | y |
|---|---|---|---|---|---|---|---|---|---|
| 0 | 1.492408 | 1.864732 | 1.451726 | -1.378493 | 0 | 1 | 7.337298 | 8.468493 | 427.680664 |
| 1 | -0.225128 | 1.330722 | 0.682820 | -1.514373 | 2 | 3 | 15.294786 | 21.219314 | 592.476997 |
| 2 | 1.168789 | 1.615064 | 2.678809 | 1.136191 | 1 | 1 | 21.903209 | 18.012904 | 1758.642539 |
| 3 | 0.381273 | -0.541439 | 1.185083 | 0.538530 | 2 | 0 | 11.717049 | 13.636377 | 307.295432 |
| 4 | -0.766450 | -0.815868 | 0.759406 | -0.885587 | 0 | 0 | 4.165663 | 1.338965 | 46.802171 |
[3]:
model = CausalModel(data=data["df"],
                    treatment=data["treatment_name"], outcome=data["outcome_name"],
                    graph=data["gml_graph"])
[4]:
model.view_model()
from IPython.display import Image, display
display(Image(filename="causal_model.png"))
[5]:
identified_estimand = model.identify_effect(proceed_when_unidentifiable=True)
print(identified_estimand)
Estimand type: EstimandType.NONPARAMETRIC_ATE
### Estimand : 1
Estimand name: backdoor
Estimand expression:
    d
─────────(E[y|W0,W3,W2,W1])
d[v₀  v₁]
Estimand assumption 1, Unconfoundedness: If U→{v0,v1} and U→y then P(y|v0,v1,W0,W3,W2,W1,U) = P(y|v0,v1,W0,W3,W2,W1)
### Estimand : 2
Estimand name: iv
No such variable(s) found!
### Estimand : 3
Estimand name: frontdoor
No such variable(s) found!
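For reference, the backdoor estimand printed above corresponds to the standard adjustment formula, where W = (W0, W1, W2, W3) are the identified common causes of the treatments and the outcome:

```latex
E[y \mid do(v_0 = a, v_1 = b)] \;=\; \sum_{w} P(W = w)\; E[y \mid v_0 = a,\, v_1 = b,\, W = w]
```

Differentiating the right-hand side with respect to the joint treatment (v0, v1) yields the derivative expression shown in the estimand output.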
Linear model#
Let us first look at an example with a linear model. When the treatment is multi-dimensional, control_value and treatment_value can each be provided as a tuple or list.
The interpretation is the change in y when (v0, v1) is changed from (0, 0) to (1, 1).
[6]:
linear_estimate = model.estimate_effect(identified_estimand,
                                        method_name="backdoor.linear_regression",
                                        control_value=(0,0),
                                        treatment_value=(1,1),
                                        method_params={'need_conditional_estimates': False})
print(linear_estimate)
*** Causal Estimate ***
## Identified estimand
Estimand type: EstimandType.NONPARAMETRIC_ATE
### Estimand : 1
Estimand name: backdoor
Estimand expression:
    d
─────────(E[y|W0,W3,W2,W1])
d[v₀  v₁]
Estimand assumption 1, Unconfoundedness: If U→{v0,v1} and U→y then P(y|v0,v1,W0,W3,W2,W1,U) = P(y|v0,v1,W0,W3,W2,W1)
## Realized estimand
b: y~v0+v1+W0+W3+W2+W1+v0*X1+v0*X0+v1*X1+v1*X0
Target units: ate
## Estimate
Mean value: 57.418041871079026
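For intuition, the same joint-effect computation can be reproduced outside DoWhy with a plain least-squares fit: in a linear model, the effect of moving (v0, v1) from (0, 0) to (1, 1) is the sum of the two treatment coefficients. A minimal sketch on synthetic data (the coefficients 3.0, 5.0, and 2.0 are illustrative, not taken from the dataset above):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
w = rng.normal(size=n)                      # a single common cause
v0 = w + rng.normal(size=n)                 # treatment 1, confounded by w
v1 = -w + rng.normal(size=n)                # treatment 2, confounded by w
y = 3.0 * v0 + 5.0 * v1 + 2.0 * w + rng.normal(size=n)

# Regress y on both treatments plus the confounder (backdoor adjustment).
X = np.column_stack([v0, v1, w, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Effect of changing (v0, v1) from (0, 0) to (1, 1)
# = sum of the two treatment coefficients, close to 3.0 + 5.0 = 8.0.
joint_effect = coef[0] + coef[1]
```

This is exactly what the `backdoor.linear_regression` estimator does internally, plus the effect-modifier interaction terms visible in the realized estimand.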
You can also estimate conditional effects, based on the effect modifiers.
[7]:
linear_estimate = model.estimate_effect(identified_estimand,
                                        method_name="backdoor.linear_regression",
                                        control_value=(0,0),
                                        treatment_value=(1,1))
print(linear_estimate)
*** Causal Estimate ***
## Identified estimand
Estimand type: EstimandType.NONPARAMETRIC_ATE
### Estimand : 1
Estimand name: backdoor
Estimand expression:
    d
─────────(E[y|W0,W3,W2,W1])
d[v₀  v₁]
Estimand assumption 1, Unconfoundedness: If U→{v0,v1} and U→y then P(y|v0,v1,W0,W3,W2,W1,U) = P(y|v0,v1,W0,W3,W2,W1)
## Realized estimand
b: y~v0+v1+W0+W3+W2+W1+v0*X1+v0*X0+v1*X1+v1*X0
Target units:
## Estimate
Mean value: 57.418041871079026
### Conditional Estimates
__categorical__X1  __categorical__X0
(-3.054, -0.411]   (-3.093, -0.353]     -15.114353
                   (-0.353, 0.237]       15.478462
                   (0.237, 0.749]        35.691136
                   (0.749, 1.339]        54.287082
                   (1.339, 3.949]        86.083691
(-0.411, 0.179]    (-3.093, -0.353]      -2.360913
                   (-0.353, 0.237]       29.599009
                   (0.237, 0.749]        49.306862
                   (0.749, 1.339]        69.013179
                   (1.339, 3.949]        98.947739
(0.179, 0.7]       (-3.093, -0.353]       6.199982
                   (-0.353, 0.237]       37.590758
                   (0.237, 0.749]        57.524360
                   (0.749, 1.339]        77.263439
                   (1.339, 3.949]       109.140521
(0.7, 1.276]       (-3.093, -0.353]      14.011759
                   (-0.353, 0.237]       45.222825
                   (0.237, 0.749]        65.813071
                   (0.749, 1.339]        85.281400
                   (1.339, 3.949]       118.259892
(1.276, 4.274]     (-3.093, -0.353]      27.833026
                   (-0.353, 0.237]       59.190924
                   (0.237, 0.749]        79.831669
                   (0.749, 1.339]        99.855317
                   (1.339, 3.949]       131.344970
dtype: float64
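The conditional estimates above are reported per quantile bin of the continuous effect modifiers X0 and X1 (five bins each, giving 25 groups). The grouping step can be sketched with pandas; the per-unit effect below is an illustrative linear function of the modifiers, standing in for the fitted interaction terms v0*X0, v0*X1, etc.:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "X0": rng.normal(size=1000),
    "X1": rng.normal(size=1000),
})
# Illustrative unit-level effect that varies with the modifiers.
df["effect"] = 57.4 + 20.0 * df["X0"] + 15.0 * df["X1"]

# Bin each modifier into quintiles and average the unit-level effects,
# mirroring the grouped conditional-estimate table printed by DoWhy.
conditional = (
    df.groupby([pd.qcut(df["X1"], 5), pd.qcut(df["X0"], 5)], observed=True)["effect"]
      .mean()
)
```

As in the DoWhy output, the averaged effect increases across X0 bins within each X1 bin, because the (synthetic) unit-level effect grows with both modifiers.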
More methods#
You can also use methods from the EconML or CausalML libraries that support multiple treatments. For examples, see the conditional effect notebook: https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html
Propensity-based methods do not currently support multiple treatments.
