9 changes: 7 additions & 2 deletions .github/workflows/twister.yml
@@ -11,7 +11,7 @@ jobs:
name: Run Twister patch series (PR)
runs-on:
- runs-on=${{ github.run_id }}
- runner=4cpu-linux-x64
- runner=64cpu-linux-x64
# Keep aligned with target NCS version
container: ghcr.io/nrfconnect/sdk-nrf-toolchain:v3.1.0
defaults:
@@ -47,7 +47,12 @@ jobs:
apt-get update
apt-get install -y build-essential ninja-build gcc-multilib g++-multilib ruby

- name: Run Twister
- name: Twister Build Only (all)
working-directory: nrf-bm
run: |
west twister -T tests --build-only

- name: Run Twister (native_sim)
working-directory: nrf-bm
run: |
if [ "${{ github.event_name }}" == "workflow_dispatch" ]; then
38 changes: 25 additions & 13 deletions doc/nrf-bm/libraries/bm_zms.rst
@@ -101,7 +101,7 @@ Registering the user callback handler

BM_ZMS can be used with asynchronous and synchronous storage APIs.

To use it with asynchronous APIs, it is recommended to register a callback handler that will be called when an operation (write/init) has finished.
To use it with asynchronous APIs, it is recommended to register a callback handler that will be called when an operation (mount, clear, write, or delete) has finished.
Read operations are synchronous and do not require a callback handler.

Mounting the storage system
@@ -114,17 +114,19 @@ To mount the file system, the following members in the struct :c:struct:`bm_zms_

.. code-block:: c

struct bm_zms_fs_config {
/** File system offset in non-volatile memory **/
off_t offset;

/** Storage system is split into sectors, each sector size must be multiple of
* erase-blocks if the device has erase capabilities
*/
uint32_t sector_size;
/** Number of sectors in the file system */
uint32_t sector_count;
};
/** Configuration for Zephyr Memory Storage file system structure initialization. */
struct bm_zms_fs_config {
/** File system offset in non-volatile storage. */
off_t offset;
/** Storage system is split into sectors. The sector size must be a multiple of
* `erase-block-size` if the device has erase capabilities.
*/
uint32_t sector_size;
/** Number of sectors in the file system. */
uint32_t sector_count;
/** Event handler for propagating events. */
bm_zms_evt_handler_t evt_handler;
};
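
A minimal mount sketch could look as follows; the partition offset macro ``STORAGE_OFFSET`` and the handler name ``zms_evt_handler`` are placeholders for illustration, not part of the API:

.. code-block:: c

   static struct bm_zms_fs fs;

   /* Called by BM_ZMS when a mount, clear, write, or delete operation finishes. */
   static void zms_evt_handler(struct bm_zms_evt const *evt)
   {
           /* Inspect the event here. */
   }

   int storage_init(void)
   {
           const struct bm_zms_fs_config config = {
                   .offset = STORAGE_OFFSET, /* Placeholder partition offset. */
                   .sector_size = 4096,      /* Must be a multiple of erase-block-size. */
                   .sector_count = 4,
                   .evt_handler = zms_evt_handler,
           };

           return bm_zms_mount(&fs, &config);
   }

A return value of ``0`` only means the mount was queued successfully; completion is reported through the event handler.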

Initialization
==============
@@ -135,9 +137,17 @@ To do this, it looks for a closed sector followed by an open one.
Then, within the open sector, it finds (recovers) the last written ATE.
After that, it checks that the sector following this one is empty; if it is not, the library erases it.

If this initialization is successful, the library sets the flag ``bm_zms_fs.init_flags.initialized`` to true.
If this initialization is successful, the library will propagate a :c:enum:`BM_ZMS_EVT_MOUNT` event to the configured event handler.
For asynchronous storage backends, you must wait for the initialization to finish before triggering a write or read operation.
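
One way to wait is to have the event handler set a flag that the application polls before its first operation. The sketch below assumes the event type is carried in the ``id`` member of :c:struct:`bm_zms_evt`; verify the exact layout against :file:`include/bm/fs/bm_zms.h`:

.. code-block:: c

   static volatile bool zms_mounted;

   static void zms_evt_handler(struct bm_zms_evt const *evt)
   {
           if (evt->id == BM_ZMS_EVT_MOUNT) {
                   zms_mounted = true;
           }
   }

   void wait_for_mount(void)
   {
           while (!zms_mounted) {
                   /* Busy-wait for illustration; sleep or idle in a real application. */
           }
   }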

Clearing the storage system
===========================

Call the :c:func:`bm_zms_clear` function to clear the storage and uninitialize it.

If this uninitialization is successful, the library will propagate a :c:enum:`BM_ZMS_EVT_CLEAR` event to the configured event handler.
For asynchronous storage backends, you must wait for the uninitialization to finish before reinitializing the storage system.

BM_ZMS ID/data write
====================

@@ -153,6 +163,8 @@ If BM_ZMS still has some queued write operations to process, it sets the ``bm_zm
If the sector is full (cannot hold the current data + ATE), BM_ZMS moves to the next sector, garbage collects the sector after the newly opened one, and then erases it.
Data whose size is smaller than or equal to 8 bytes is written within the ATE.

When a write operation has completed, the library will propagate a :c:enum:`BM_ZMS_EVT_WRITE` event to the configured event handler.
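
For example, queuing a write of a counter value might look like this; the sketch uses only the :c:func:`bm_zms_write` signature from :file:`include/bm/fs/bm_zms.h`, and ``KEY_ID`` is an arbitrary example ID:

.. code-block:: c

   #define KEY_ID 1

   int save_counter(struct bm_zms_fs *fs, uint32_t counter)
   {
           /* Queues the write; completion is signalled by BM_ZMS_EVT_WRITE. */
           ssize_t ret = bm_zms_write(fs, KEY_ID, &counter, sizeof(counter));

           return (ret < 0) ? (int)ret : 0;
   }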

BM_ZMS ID/data read (with history)
==================================

4 changes: 3 additions & 1 deletion doc/nrf-bm/release_notes/release_notes_changelog.rst
@@ -134,7 +134,6 @@ Libraries

* Updated:

* The :c:func:`bm_zms_register` function to return ``-EFAULT`` instead of ``-EINVAL`` when the input parameters are ``NULL``.
* The :c:func:`bm_zms_mount` function to return ``-EFAULT`` when the input parameter ``fs`` is ``NULL``.
* The :c:func:`bm_zms_clear` function to return ``-EFAULT`` when the input parameter ``fs`` is ``NULL``.
* The :c:func:`bm_zms_write` function to return ``-EFAULT`` when the input parameter ``fs`` is ``NULL``.
@@ -154,6 +153,9 @@ Libraries
* The ``CONFIG_BM_ZMS_MAX_USERS`` Kconfig option.
Now the library expects at most one callback for each instance of the struct :c:struct:`bm_zms_fs`.
* The ``bm_zms_init_flags.cb_registred`` member as it was not used anymore.
* The ``bm_zms_register`` function.
The event handler configuration is now done with the struct :c:struct:`bm_zms_fs_config`.
* The selection of the :kconfig:option:`CONFIG_EXPERIMENTAL` Kconfig option.

* :ref:`lib_peer_manager` library:

38 changes: 21 additions & 17 deletions include/bm/fs/bm_zms.h
@@ -65,18 +65,20 @@ struct bm_zms_evt {
uint32_t id;
};

/* Init flags. */
/** Init flags. */
struct bm_zms_init_flags {
volatile bool initialized; /* true when the storage is initialized. */
volatile bool initializing; /* true when initialization is ongoing. */
/** true when the storage is initialized. */
volatile bool initialized;
/** true when initialization is ongoing. */
volatile bool initializing;
} __packed;

/**
* @brief Bare Metal ZMS event handler function prototype.
*
* @param evt The event.
*/
typedef void (*bm_zms_cb_t)(struct bm_zms_evt const *evt);
typedef void (*bm_zms_evt_handler_t)(struct bm_zms_evt const *evt);

/** Zephyr Memory Storage file system structure */
struct bm_zms_fs {
@@ -106,8 +108,8 @@ struct bm_zms_fs {
struct bm_storage zms_bm_storage;
/** Number of writes currently handled by the storage system. */
atomic_t ongoing_writes;
/** User callback for propagating events. */
bm_zms_cb_t user_cb;
/** Event handler for propagating events. */
bm_zms_evt_handler_t evt_handler;
#if CONFIG_BM_ZMS_LOOKUP_CACHE
/** Lookup table used to cache ATE addresses of written IDs. */
uint64_t lookup_cache[CONFIG_BM_ZMS_LOOKUP_CACHE_SIZE];
@@ -124,6 +126,8 @@ struct bm_zms_fs_config {
uint32_t sector_size;
/** Number of sectors in the file system. */
uint32_t sector_count;
/** Event handler for propagating events. */
bm_zms_evt_handler_t evt_handler;
};

/**
@@ -136,20 +140,12 @@ struct bm_zms_fs_config {
* @{
*/

/**
* @brief Register a callback to BM_ZMS for handling events.
*
* @param fs Pointer to the file system structure.
* @param cb Pointer to the event handler callback.
*
* @retval 0 on success.
* @retval -EFAULT if @p fs or @p cb are NULL.
*/
int bm_zms_register(struct bm_zms_fs *fs, bm_zms_cb_t cb);

/**
* @brief Mount a BM_ZMS file system.
*
* @note Once the mount operation is completed, a @ref BM_ZMS_EVT_MOUNT event will be propagated
* to the configured event handler.
*
* @param fs Pointer to the file system.
* @param config Pointer to the configuration for file system initialization.
*
@@ -166,6 +162,9 @@ int bm_zms_mount(struct bm_zms_fs *fs, const struct bm_zms_fs_config *config);
* @brief Clear the BM_ZMS file system from the device. The BM_ZMS file system must be re-mounted
* after this operation.
*
* @note Once the clear operation is completed, a @ref BM_ZMS_EVT_CLEAR event will be propagated
* to the configured event handler.
*
* @param fs Pointer to the file system.
*
* @retval 0 if the clear operation is queued successfully.
@@ -181,6 +180,8 @@ int bm_zms_clear(struct bm_zms_fs *fs);
* @note When the `len` parameter is equal to `0` the entry is effectively removed (it is
* equivalent to calling @ref bm_zms_delete()). It is not possible to distinguish between a
* deleted entry and an entry with data of length 0.
* Once the write operation is completed, a @ref BM_ZMS_EVT_WRITE event will be propagated
* to the configured event handler.
*
* @param fs Pointer to the file system.
* @param id ID of the entry to be written.
@@ -200,6 +201,9 @@ ssize_t bm_zms_write(struct bm_zms_fs *fs, uint32_t id, const void *data, size_t
/**
* @brief Delete an entry from the file system.
*
* @note Once the delete operation is completed, a @ref BM_ZMS_EVT_DELETE event will be propagated
* to the configured event handler.
*
* @param fs Pointer to the file system.
* @param id ID of the entry to be deleted.
*
7 changes: 1 addition & 6 deletions lib/bluetooth/peer_manager/modules/peer_data_storage.c
@@ -345,16 +345,11 @@ uint32_t pds_init(void)
/* Check for re-initialization if debugging. */
__ASSERT_NO_MSG(!module_initialized);

err = bm_zms_register(&fs, bm_zms_evt_handler);
if (err) {
LOG_ERR("Could not initialize NVM storage. bm_zms_register() returned %d.", err);
return NRF_ERROR_INTERNAL;
}

struct bm_zms_fs_config config = {
.offset = PEER_MANAGER_PARTITION_OFFSET,
.sector_size = CONFIG_PM_BM_ZMS_SECTOR_SIZE,
.sector_count = (PEER_MANAGER_PARTITION_SIZE / CONFIG_PM_BM_ZMS_SECTOR_SIZE),
.evt_handler = bm_zms_evt_handler,
};

err = bm_zms_mount(&fs, &config);
2 changes: 1 addition & 1 deletion samples/Kconfig
@@ -8,7 +8,7 @@

# All these samples shall run without multithreading
config MULTITHREADING
default n if !UNITY
default n if (!UNITY && !(ZTEST && BOARD_NATIVE_SIM))

# Software ISR table is not needed if multithreading is not used
config GEN_SW_ISR_TABLE