Memory issue when continuously calling the lambda function #5

@jogando

Description

Hi, when I call the Lambda function repeatedly, memory usage increases with each call.

This is how I'm debugging the code:

```js
console.log("imgToTensor: memory before: " + JSON.stringify(tf.memory()));
const tensor = await tf.tidy(() => tf.tensor3d(values, [height, width, 3]));
console.log("imgToTensor: memory after: " + JSON.stringify(tf.memory()));
```

The first time I call the function, I get this:

```
imgToTensor: memory before: {"unreliable":true,"numTensors":263,"numDataBuffers":263,"numBytes":47349088}
imgToTensor: memory after: {"unreliable":true,"numTensors":264,"numDataBuffers":264,"numBytes":76663648}
```

The second time I call the function, I get the following:

```
imgToTensor: memory before: {"unreliable":true,"numTensors":264,"numDataBuffers":264,"numBytes":76663648}
imgToTensor: memory after: {"unreliable":true,"numTensors":265,"numDataBuffers":265,"numBytes":105978208}
```

It looks like the statement

```js
const tensor = await tf.tidy(() => tf.tensor3d(values, [height, width, 3]));
```

is leaking memory: if you look at the `numTensors` property, it increases after each function call.

After 5 Lambda executions, my function fails with:

```
Error: Runtime exited with error: signal: killed
```

Is there a way to free the resources left over from the previous Lambda invocation?

Thanks!
