Estimating the effect of multiple treatments
[1]:
from dowhy import CausalModel
import dowhy.datasets
import warnings
warnings.filterwarnings('ignore')
[2]:
data = dowhy.datasets.linear_dataset(beta=10,
                                     num_common_causes=4,
                                     num_samples=10000,
                                     num_instruments=0,
                                     num_effect_modifiers=2,
                                     num_treatments=2,
                                     treatment_is_binary=False,
                                     num_discrete_common_causes=2,
                                     num_discrete_effect_modifiers=0,
                                     one_hot_encode=False)
df = data['df']
df.head()
[2]:
|   | X0 | X1 | W0 | W1 | W2 | W3 | v0 | v1 | y |
|---|----|----|----|----|----|----|----|----|----|
| 0 | 0.688028 | -0.570179 | -0.787887 | 0.479003 | 3 | 3 | 9.679098 | 6.204148 | 108.190491 |
| 1 | -0.288296 | -0.603436 | 0.162024 | -0.651557 | 0 | 0 | 2.372026 | -3.495385 | 12.209979 |
| 2 | 1.905892 | -0.833302 | 1.485347 | -1.040674 | 2 | 1 | 13.177795 | -0.545013 | 139.956383 |
| 3 | 0.621439 | -0.585854 | 1.408494 | 1.183161 | 2 | 0 | 10.331170 | 7.882728 | 95.378611 |
| 4 | 0.101068 | -0.273072 | -0.117280 | -0.515313 | 2 | 3 | 11.525165 | 0.560455 | 127.060069 |
[3]:
model = CausalModel(data=data["df"],
                    treatment=data["treatment_name"],
                    outcome=data["outcome_name"],
                    graph=data["gml_graph"])
[4]:
model.view_model()
from IPython.display import Image, display
display(Image(filename="causal_model.png"))
[5]:
identified_estimand = model.identify_effect(proceed_when_unidentifiable=True)
print(identified_estimand)
Estimand type: EstimandType.NONPARAMETRIC_ATE
### Estimand : 1
Estimand name: backdoor
Estimand expression:
d
─────────(E[y|W2,W0,W3,W1])
d[v₀ v₁]
Estimand assumption 1, Unconfoundedness: If U→{v0,v1} and U→y then P(y|v0,v1,W2,W0,W3,W1,U) = P(y|v0,v1,W2,W0,W3,W1)
### Estimand : 2
Estimand name: iv
No such variable(s) found!
### Estimand : 3
Estimand name: frontdoor
No such variable(s) found!
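The backdoor estimand above says that conditioning on the common causes {W0, W1, W2, W3} is sufficient to identify the effect. As a minimal sketch of why this works (using synthetic data and scikit-learn, not DoWhy), the example below shows that regressing the outcome on the treatment alone gives a biased coefficient when a common cause is present, while including the backdoor variable in the regression recovers the true effect:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 50_000
W = rng.normal(size=n)                        # common cause of treatment and outcome
v = 2.0 * W + rng.normal(size=n)              # treatment depends on W
y = 3.0 * v + 5.0 * W + rng.normal(size=n)    # true treatment effect is 3

# Naive regression of y on v alone is confounded by W
# (in expectation it gives 3 + 5*cov(v, W)/var(v) = 5).
naive = LinearRegression().fit(v.reshape(-1, 1), y).coef_[0]

# Adjusting for the backdoor set {W} recovers the true effect of 3.
adjusted = LinearRegression().fit(np.column_stack([v, W]), y).coef_[0]
```

This is the same adjustment that the estimators below perform, generalized to the two treatments v0, v1 and the four common causes in the dataset.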
Linear model
Let us first look at an example using a linear model. When the treatment is multi-dimensional, control_value and treatment_value can be provided as a tuple or list.
The interpretation is the change in y when (v0, v1) is changed from (0,0) to (1,1).
[6]:
linear_estimate = model.estimate_effect(identified_estimand,
method_name="backdoor.linear_regression",
control_value=(0,0),
treatment_value=(1,1),
method_params={'need_conditional_estimates': False})
print(linear_estimate)
*** Causal Estimate ***
## Identified estimand
Estimand type: EstimandType.NONPARAMETRIC_ATE
### Estimand : 1
Estimand name: backdoor
Estimand expression:
d
─────────(E[y|W2,W0,W3,W1])
d[v₀ v₁]
Estimand assumption 1, Unconfoundedness: If U→{v0,v1} and U→y then P(y|v0,v1,W2,W0,W3,W1,U) = P(y|v0,v1,W2,W0,W3,W1)
## Realized estimand
b: y~v0+v1+W2+W0+W3+W1+v0*X0+v0*X1+v1*X0+v1*X1
Target units: ate
## Estimate
Mean value: 5.078224345610501
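In a linear model, the effect of moving both treatments from (0,0) to (1,1) is simply the sum of the two treatment coefficients. A minimal sketch of this, using synthetic data and scikit-learn rather than DoWhy (all coefficient values below are illustrative assumptions, not the dataset's parameters):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 20_000
W = rng.normal(size=(n, 2))                       # common causes
v0 = W @ [1.0, 0.5] + rng.normal(size=n)          # first treatment
v1 = W @ [0.5, 1.0] + rng.normal(size=n)          # second treatment
y = 2.0 * v0 + 3.0 * v1 + W @ [1.0, 1.0] + rng.normal(size=n)

# Regress y on both treatments and the common causes.
coefs = LinearRegression().fit(np.column_stack([v0, v1, W]), y).coef_

# Moving (v0, v1) from (0, 0) to (1, 1) changes y by coef(v0) + coef(v1),
# which should be close to 2 + 3 = 5 here.
joint_ate = coefs[0] + coefs[1]
```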
You can also estimate conditional effects based on the effect modifiers.
[7]:
linear_estimate = model.estimate_effect(identified_estimand,
method_name="backdoor.linear_regression",
control_value=(0,0),
treatment_value=(1,1))
print(linear_estimate)
*** Causal Estimate ***
## Identified estimand
Estimand type: EstimandType.NONPARAMETRIC_ATE
### Estimand : 1
Estimand name: backdoor
Estimand expression:
d
─────────(E[y|W2,W0,W3,W1])
d[v₀ v₁]
Estimand assumption 1, Unconfoundedness: If U→{v0,v1} and U→y then P(y|v0,v1,W2,W0,W3,W1,U) = P(y|v0,v1,W2,W0,W3,W1)
## Realized estimand
b: y~v0+v1+W2+W0+W3+W1+v0*X0+v0*X1+v1*X0+v1*X1
Target units:
## Estimate
Mean value: 5.078224345610501
### Conditional Estimates
__categorical__X0 __categorical__X1
(-3.758, -0.64] (-4.402, -1.361] -72.578624
(-1.361, -0.783] -39.306583
(-0.783, -0.286] -19.224005
(-0.286, 0.334] 1.070074
(0.334, 3.166] 35.627126
(-0.64, -0.0764] (-4.402, -1.361] -57.395848
(-1.361, -0.783] -23.361290
(-0.783, -0.286] -3.911810
(-0.286, 0.334] 17.008063
(0.334, 3.166] 49.812228
(-0.0764, 0.412] (-4.402, -1.361] -47.951547
(-1.361, -0.783] -15.183702
(-0.783, -0.286] 4.696040
(-0.286, 0.334] 25.565515
(0.334, 3.166] 57.882635
(0.412, 0.997] (-4.402, -1.361] -38.293553
(-1.361, -0.783] -6.556265
(-0.783, -0.286] 13.596805
(-0.286, 0.334] 34.149412
(0.334, 3.166] 68.228800
(0.997, 4.057] (-4.402, -1.361] -24.095849
(-1.361, -0.783] 7.787794
(-0.783, -0.286] 27.864159
(-0.286, 0.334] 48.219762
(0.334, 3.166] 83.308854
dtype: float64
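The conditional estimates above vary with the effect modifiers X0 and X1 because the realized estimand includes treatment-modifier interaction terms. A minimal sketch of the same idea with a single treatment and modifier, using synthetic data and scikit-learn rather than DoWhy (the coefficient values are illustrative assumptions): fit a regression with an interaction term, then evaluate the implied effect within quantile bins of the modifier.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 20_000
X0 = rng.normal(size=n)                          # effect modifier
v = rng.normal(size=n)                           # treatment (unconfounded here)
y = (2.0 + 4.0 * X0) * v + rng.normal(size=n)    # effect of v is 2 + 4*X0

# Fit a model with a treatment-modifier interaction term.
feats = np.column_stack([v, X0, v * X0])
model = LinearRegression().fit(feats, y)
b_v, _, b_int = model.coef_

# Conditional effect within quintiles of X0: b_v + b_int * mean(X0 in bin).
df = pd.DataFrame({"X0": X0})
cates = b_v + b_int * df.groupby(pd.qcut(df["X0"], 5), observed=True)["X0"].mean()
```

The resulting series of per-bin effects increases across the quintiles, mirroring the pattern in the table above.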
More methods
You can also use methods from the EconML and CausalML libraries that support multiple treatments. For examples, see the conditional effect notebook: https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html
Propensity-based methods do not currently support multiple treatments.