Description
I have a dataset of about 40k cells with only two conditions (condition1 and condition2). When I run testNhoods as follows, with the model.contrasts = "condition1 - condition2" argument, table(da_results$SpatialFDR < 0.1) comes back all FALSE:
da_results <- testNhoods(milor,
                         design = ~ 0 + condition + batch,
                         design.df = design,
                         reduced.dim = "PCA",
                         fdr.weighting = "graph-overlap",
                         model.contrasts = "condition1 - condition2")
The p-value histogram of that result is attached above.
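For reference, this is the exact check that comes back all FALSE (the count is a placeholder for however many neighbourhoods were tested):

table(da_results$SpatialFDR < 0.1)
## FALSE
## <number of nhoods>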
However, when I run the same function without the model.contrasts = "condition1 - condition2" argument, I get a result that looks more expected, with a mix of significant and non-significant neighbourhoods. [Attached: p-value histogram of the result without model.contrasts.]
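That is, the identical call with only the contrast argument dropped:

da_results <- testNhoods(milor,
                         design = ~ 0 + condition + batch,
                         design.df = design,
                         reduced.dim = "PCA",
                         fdr.weighting = "graph-overlap")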
Also, if I switch the design argument to design = ~ 0 + condition (which doesn't account for batch), or reorder it to design = ~ 0 + batch + condition, the result of table(da_results$SpatialFDR < 0.1) comes back either all TRUE or all FALSE.
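In case it matters, I compared the coefficient names and ordering that each formula produces with base R's model.matrix. My understanding from the docs is that testNhoods tests the last coefficient of the design when model.contrasts is not supplied, so the formula order would change which coefficient gets tested, but I may be misreading that:

# Rebuilding the design data frame shown at the bottom of this issue
design <- data.frame(
    condition = factor(rep(c("condition1", "condition2"), 3)),
    batch     = factor(c(1, 1, 2, 2, 1, 1))
)

colnames(model.matrix(~ 0 + condition + batch, data = design))
## "conditioncondition1" "conditioncondition2" "batch2"

colnames(model.matrix(~ 0 + batch + condition, data = design))
## "batch1" "batch2" "conditioncondition2"

colnames(model.matrix(~ 0 + condition, data = design))
## "conditioncondition1" "conditioncondition2"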
Is the model.contrasts argument necessary when there are only two conditions to choose from? I did read through the link on categorical variables provided in the vignette, but I am still confused about the stark difference between the two results. Which one is the correct one to go with? We don't expect to see extensive changes, but having everything return as non-significant, or everything as significant, would also be pretty unexpected.
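A related question: does the contrast string have to match the model-matrix coefficient names exactly (i.e. "conditioncondition1 - conditioncondition2" rather than "condition1 - condition2")? Assuming testNhoods resolves contrasts the same way limma's makeContrasts does, which is only my guess, the name-matched form would look like this:

library(limma)

# mm is built from the reconstructed design data frame above
mm <- model.matrix(~ 0 + condition + batch, data = design)

# Contrast written against the actual model-matrix column names;
# adjust these if your coefficient names come out differently
makeContrasts(conditioncondition1 - conditioncondition2, levels = mm)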
My design matrix looks like the following:
library(dplyr)  # for distinct()

design <- data.frame(colData(milor))[, c("orig.ident", "condition", "batch")]
design$batch <- as.factor(design$batch)
design <- distinct(design)               # collapse to one row per sample
rownames(design) <- design$orig.ident
# contrast <- c("condition1 - condition2")
> design
        orig.ident  condition batch
sample1    sample1 condition1     1
sample2    sample2 condition2     1
sample3    sample3 condition1     2
sample4    sample4 condition2     2
sample5    sample5 condition1     1
sample6    sample6 condition2     1
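For completeness, here is the condition-by-batch breakdown of the samples (just a base R cross-tab of the design above):

table(design$condition, design$batch)
##              1 2
##   condition1 2 1
##   condition2 2 1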