Commit 7a46534
dp: Fix DP scheduler locking
When at least two DP modules are running, each on a separate core, using irq_lock() may leave interrupts disabled for a very long time: when module_is_ready_to_process() always returns true, the DP task runs in a loop all the time, except for periods when it is preempted by higher-priority threads.

irq_lock() disables interrupts globally, so using it on multiple cores can lead to unbalanced double locks with no unlock in between. Consider this sequence:

1. Core 1 calls irq_lock().
2. Core 2 also calls flags = irq_lock(); since interrupts were already globally disabled by core 1, flags now records the "interrupts disabled" state.
3. Core 1 calls irq_unlock(); interrupts are re-enabled.
4. Core 2 calls irq_unlock(flags) to restore the saved state, which actually disables interrupts again.
5. On the next loop iteration, core 1 calls flags = irq_lock() and again saves the "disabled" state; with two DP threads constantly running, interrupts may stay disabled forever.

This fixes a regression in multicore DP tests. The issue was triggered by commit 4225c27, which allows the DP task to run for all the available time instead of being triggered by LL for every cycle.

Signed-off-by: Serhiy Katsyuba <[email protected]>
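To make the hazard concrete, below is a minimal sketch of the broken interleaving under the old code. irq_lock()/irq_unlock() are the real Zephyr API; the two worker functions and the numbered interleaving (1)-(4) are hypothetical, and irq_lock() is assumed to affect the global interrupt state as described above:

#include <zephyr/irq.h>

/* Numbered comments mark one possible cross-core interleaving:
 * (1) -> (2) -> (3) -> (4).
 */
void dp_worker_core1(void)
{
	unsigned int flags = irq_lock();	/* (1) interrupts disabled globally;
						 * flags records "enabled" */
	/* ... critical section ... */
	irq_unlock(flags);			/* (3) restores "enabled":
						 * interrupts back on */
}

void dp_worker_core2(void)
{
	unsigned int flags = irq_lock();	/* (2) runs between (1) and (3):
						 * flags records "disabled" */
	/* ... critical section ... */
	irq_unlock(flags);			/* (4) restores "disabled": interrupts
						 * stay off; core 1's next irq_lock()
						 * saves that state again, and the
						 * system can stay locked forever */
}

The per-core k_sem in the patch below avoids this entirely: a semaphore blocks only the calling thread and carries no cross-core interrupt state.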
Parent: 03044fc

1 file changed (+6, -26 lines)

src/schedule/zephyr_dp_schedule.c

Lines changed: 6 additions & 26 deletions
@@ -52,11 +52,6 @@ struct task_dp_pdata {
 	uint32_t ll_cycles_to_start; /* current number of LL cycles till delayed start */
 };
 
-#ifdef CONFIG_USERSPACE
-/* Single CPU-wide lock
- * The irq_lock is not available for USERSPACE (non-privileged) threads.
- * Therefore semaphore is used to control critical section.
- */
 #define DP_LOCK_INIT(i, _) Z_SEM_INITIALIZER(dp_lock[i], 1, 1)
 #define DP_LOCK_INIT_LIST LISTIFY(CONFIG_MP_MAX_NUM_CPUS, DP_LOCK_INIT, (,))
 
@@ -67,6 +62,10 @@ struct task_dp_pdata {
 static
 STRUCT_SECTION_ITERABLE_ARRAY(k_sem, dp_lock, CONFIG_MP_MAX_NUM_CPUS) = { DP_LOCK_INIT_LIST };
 
+/* Each per-core instance of DP scheduler has separate structures; hence, locks are per-core.
+ *
+ * TODO: consider using cpu_get_id() instead of supplying core as a parameter.
+ */
 static inline unsigned int scheduler_dp_lock(uint16_t core)
 {
 	k_sem_take(&dp_lock[core], K_FOREVER);
@@ -80,29 +79,10 @@ static inline void scheduler_dp_unlock(unsigned int key)
 
 static inline void scheduler_dp_grant(k_tid_t thread_id, uint16_t core)
 {
+#if CONFIG_USERSPACE
 	k_thread_access_grant(thread_id, &dp_lock[core]);
-}
-
-#else /* CONFIG_USERSPACE */
-
-static inline void scheduler_dp_grant(k_tid_t thread_id, uint16_t core)
-{
-}
-
-/* Single CPU-wide lock
- * as each per-core instance if dp-scheduler has separate structures, it is enough to
- * use irq_lock instead of cross-core spinlocks
- */
-static inline unsigned int scheduler_dp_lock(uint16_t core)
-{
-	return irq_lock();
-}
-
-static inline void scheduler_dp_unlock(unsigned int key)
-{
-	irq_unlock(key);
-}
 #endif
+}
 
 /* dummy LL task - to start LL on secondary cores */
 static enum task_state scheduler_dp_ll_tick_dummy(void *data)
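For context, a minimal usage sketch of the per-core semaphore lock after this change. The caller function and its body are hypothetical, struct task and SOF_TASK_STATE_QUEUED are SOF scheduler identifiers assumed for illustration, and scheduler_dp_lock() is assumed to return a key that scheduler_dp_unlock() accepts; only the two helper names and their signatures come from the patch above:

/* Hypothetical caller sketching the post-patch locking pattern. */
static void dp_update_task_state(struct task *task, uint16_t core)
{
	unsigned int lock_key;

	/* Takes the per-core semaphore; unlike irq_lock(), this blocks only
	 * the calling thread and touches no global interrupt state, so two
	 * cores locking concurrently cannot corrupt each other's key. */
	lock_key = scheduler_dp_lock(core);

	task->state = SOF_TASK_STATE_QUEUED;	/* protected update */

	scheduler_dp_unlock(lock_key);
}

Because each per-core DP scheduler instance has separate structures, the semaphore only serializes threads touching the same core's dp_lock[] entry; it never globally stalls the other cores the way the old irq_lock() did.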
