Mixed models: How to generate predictions "fixed" (marginalised?) over the random factors #2390
-
Maybe I'm misunderstanding, but do you just want μ_intercept + μ_x * x[i, 1]? If you want to marginalize over the participants, then you need a notion of what the distribution of the participants is (which it doesn't seem like you have here, IIUC).
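To make the point above concrete, here is a sketch (mine, not from the thread; the model, the names `σ_participant`, `α`, `x_new`, and all priors are made up for illustration) of what "a notion of the distribution of the participants" buys you: with an explicit population distribution for the participant effects, you can marginalise a prediction by drawing a fresh participant effect from that distribution for every posterior sample.

```julia
using Turing, LinearAlgebra

# Hypothetical random-intercept model; all names and priors are illustrative.
@model function mixed(y, x, participant, n_participants)
    μ_intercept ~ Normal(0, 0.5)
    μ_x ~ Normal(1, 0.5)
    σ_participant ~ truncated(Normal(0, 1); lower=0)
    α ~ filldist(Normal(0, 1), n_participants)  # non-centered participant offsets
    σ ~ truncated(Normal(0, 1); lower=0)
    μ = μ_intercept .+ σ_participant .* α[participant] .+ μ_x .* x
    y ~ MvNormal(μ, σ^2 * I)
end

participant = repeat(1:5, inner=4)
chain = sample(mixed(randn(20), randn(20), participant, 5), NUTS(), 200)

# Marginal prediction at a new x: per posterior draw, add a *fresh*
# participant effect drawn from the population distribution Normal(0, σ_participant),
# rather than reusing any of the fitted α's.
x_new = 0.5
draws_int = vec(chain[:μ_intercept])
draws_x = vec(chain[:μ_x])
draws_σp = vec(chain[:σ_participant])
pred_marginal = draws_int .+ draws_x .* x_new .+ randn(length(draws_σp)) .* draws_σp
```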
-
Super sorry to keep annoying you with this, Tor, but using the vectorized syntax suggested above, it looks like the problem persists. Here's an MWE:

```julia
using Turing, StatsFuns, DataFrames, Random

@model function mod(y, x1, x2)
    μ_intercept ~ Normal(0, 0.5)
    μ_x1 ~ Normal(1, 0.5)
    μ_x2 ~ Normal(2, 0.5)
    σ ~ truncated(Normal(0.0, 1), lower=0)
    μ = μ_intercept .+ μ_x1 .* x1 .+ μ_x2 .* x2
    y .~ Normal.(μ, σ)
end

fit = mod(rand(10), rand(10), rand(10))
chain = sample(fit, NUTS(), 400)
pred = predict(mod([missing for i in 1:10], rand(10), rand(10)), chain)
```
My guess is that in this example we should use MvNormal, but is there a way to make it work for likelihood distributions that don't have a multivariate form?
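One way this is often handled (an assumption on my part, not a confirmed answer from the thread): rewrite the likelihood with `MvNormal` so that `y` enters as a single multivariate node; and for likelihoods with no multivariate counterpart, use an explicit loop with `y` passed as a `Vector{Union{Missing, Float64}}`, so that each `y[i]` is its own node that `predict` can fill in. The model names `mod_mv` / `mod_loop` are mine.

```julia
using Turing, LinearAlgebra

# Multivariate form of the MWE's likelihood:
@model function mod_mv(y, x1, x2)
    μ_intercept ~ Normal(0, 0.5)
    μ_x1 ~ Normal(1, 0.5)
    μ_x2 ~ Normal(2, 0.5)
    σ ~ truncated(Normal(0.0, 1); lower=0)
    μ = μ_intercept .+ μ_x1 .* x1 .+ μ_x2 .* x2
    y ~ MvNormal(μ, σ^2 * I)
end

# Loop form for likelihoods without a multivariate version:
@model function mod_loop(y, x1, x2)
    μ_intercept ~ Normal(0, 0.5)
    μ_x1 ~ Normal(1, 0.5)
    μ_x2 ~ Normal(2, 0.5)
    σ ~ truncated(Normal(0.0, 1); lower=0)
    μ = μ_intercept .+ μ_x1 .* x1 .+ μ_x2 .* x2
    for i in eachindex(μ)
        y[i] ~ Normal(μ[i], σ)
    end
end

chain = sample(mod_loop(rand(10), rand(10), rand(10)), NUTS(), 200)
# Concretely typed missing vector, so predict can impute the y[i]:
y_mis = Vector{Union{Missing, Float64}}(missing, 10)
pred = predict(mod_loop(y_mis, rand(10), rand(10)), chain)
```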
-
Let's start with a simple linear mixed model that takes `participant` as a random factor. I can easily generate predictions on the original data, useful for instance for posterior predictive checks.
However, for visualization or postprocessing purposes (e.g., marginal effects computation), we often need to generate predictions on new data, for instance on a data "grid" with specific values. While these values are typically chosen for the fixed effects, we are often interested in "marginalising" over the random effects, i.e., obtaining the average prediction for the average random factor.
I thought that one way to go about it would be to also set `participant` as `missing`, but that doesn't seem to be the correct way. Any advice on how to achieve something like that?
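For what it's worth, "the average prediction for the average random factor" can mean two different things, and they can be sketched without any model fitting. The snippet below is illustrative only: the posterior draws are faked with made-up numbers standing in for a fitted chain, and all names (`σ_participant`, `x_new`) are hypothetical.

```julia
using Random, Statistics
Random.seed!(1)

n_draws = 400
μ_intercept = 0.2 .+ 0.1 .* randn(n_draws)      # fake posterior draws
μ_x = 1.0 .+ 0.1 .* randn(n_draws)
σ_participant = abs.(0.5 .+ 0.2 .* randn(n_draws))

x_new = 0.5

# 1. "Average participant": set the random effect to its population mean (0).
pred_conditional = μ_intercept .+ μ_x .* x_new

# 2. Marginal over participants: integrate out the random effect by drawing
#    a fresh one per posterior sample; the interval widens because it now
#    also reflects between-participant variance.
pred_marginal = pred_conditional .+ randn(n_draws) .* σ_participant
```

Which one you want depends on the question: (1) describes the typical participant, (2) describes a new, unseen participant.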