
Conversation

@hertschuh
Collaborator

  • `standardize_shape` would allow int-like objects (e.g. NumPy scalars) in the shape, but would not resolve them to actual ints, which could cause issues later. With this change, any object that can be cast to int is cast to int; this covers NumPy scalars as well as TensorFlow constant or eager tensors (a minimal sketch follows this list).
  • `standardize_shape` had custom code to handle `torch.Size`. Generalized it to turn any iterable into a plain tuple.
  • Added unit tests
  • Removed duplicate unit tests
  • Added a TensorFlow-specific unit test
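
A minimal sketch of the behavior described above, assuming this is the full contract; it is not the actual Keras implementation, and the error messages are illustrative:

```python
def standardize_shape(shape):
    if shape is None:
        raise ValueError("Undefined shapes are not supported.")
    if not hasattr(shape, "__iter__"):
        raise ValueError(f"Cannot convert '{shape}' to a shape.")
    standardized = []
    for e in shape:
        if e is None:
            # `None` marks a dynamic (unknown) dimension.
            standardized.append(None)
            continue
        if isinstance(e, (str, float)):
            # Reject entries that int() would silently accept,
            # e.g. int("3") == 3 and int(4.0) == 4.
            raise ValueError(f"Cannot convert '{shape}' to a shape: invalid entry '{e}'.")
        try:
            # Resolves NumPy scalars, TF constant/eager tensors, etc.
            e = int(e)
        except (TypeError, ValueError):
            raise ValueError(f"Cannot convert '{shape}' to a shape: invalid entry '{e}'.")
        if e < 0:
            raise ValueError(f"Cannot convert '{shape}' to a shape: negative entry '{e}'.")
        standardized.append(e)
    # Any iterable (list, generator, `torch.Size`, ...) becomes a plain tuple.
    return tuple(standardized)
```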

@gemini-code-assist
Contributor

Summary of Changes

Hello @hertschuh, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request refines the standardize_shape utility function to provide more robust and consistent handling of tensor dimensions across different backend frameworks. The primary goal is to ensure that all shape dimensions are represented as standard Python integers within a tuple, regardless of their initial type (e.g., NumPy scalars, TensorFlow tensors, or torch.Size objects). This change improves type safety and reduces potential compatibility issues by enforcing a uniform shape representation.

Highlights

  • Shape Normalization: The standardize_shape function now consistently casts int-like objects (e.g., NumPy scalars, TensorFlow constant/eager tensors) to native Python integers, preventing potential issues from non-integer types in shape dimensions.
  • Iterable Shape Handling: The function now generalizes the conversion of any iterable shape (including torch.Size) into a plain Python tuple, simplifying backend-specific logic.
  • Improved Validation: Enhanced validation explicitly rejects string or float dimensions and ensures all dimensions are non-negative integers or None (see the usage examples after this list).
  • Refactored Unit Tests: Unit tests have been updated, with new tests added for NumPy and TensorFlow scalar types, and redundant backend-specific tests removed for better coverage and maintainability.
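
Concretely, the summarized behavior implies usage along these lines. This is a hedged illustration; the import path below is an assumption and the module location inside Keras may differ:

```python
import numpy as np

# Assumed import path for illustration only.
from keras.src.backend.common.variables import standardize_shape

print(standardize_shape((np.int64(3), 4)))  # (3, 4) -- NumPy scalar resolved to a plain int
print(standardize_shape([2, None, 5]))      # (2, None, 5) -- any iterable becomes a tuple
# standardize_shape((3, "4"))   # raises ValueError: string dimensions are rejected
# standardize_shape((3, 4.0))   # raises ValueError: float dimensions are rejected
```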

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request improves standardize_shape by making it more robust in handling different int-like objects and by generalizing it to convert any iterable to a plain tuple, removing backend-specific code. The accompanying test changes are also great, with better organization, removal of duplicates, and addition of new test cases for the new functionality. I've found a couple of minor issues in the test file related to naming, but overall this is a solid improvement.

@hertschuh hertschuh force-pushed the shape_int_dims branch 2 times, most recently from 9664432 to 32aa942 on November 24, 2025 at 22:59
@codecov-commenter

codecov-commenter commented Nov 24, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 82.57%. Comparing base (9bcdbc7) to head (9888260).

Additional details and impacted files
@@            Coverage Diff             @@
##           master   #21867      +/-   ##
==========================================
- Coverage   82.57%   82.57%   -0.01%     
==========================================
  Files         577      577              
  Lines       59568    59572       +4     
  Branches     9345     9344       -1     
==========================================
+ Hits        49189    49191       +2     
- Misses       7974     7975       +1     
- Partials     2405     2406       +1     
Flag Coverage Δ
keras 82.39% <100.00%> (-0.01%) ⬇️
keras-jax 62.86% <100.00%> (+<0.01%) ⬆️
keras-numpy 57.52% <100.00%> (+<0.01%) ⬆️
keras-openvino 34.33% <100.00%> (+<0.01%) ⬆️
keras-tensorflow 64.39% <100.00%> (+<0.01%) ⬆️
keras-torch 63.57% <100.00%> (+<0.01%) ⬆️

Flags with carried forward coverage won't be shown.


@google-ml-butler google-ml-butler bot added the kokoro:force-run and ready to pull labels on Nov 25, 2025
@hertschuh hertschuh merged commit a40ddf6 into keras-team:master Nov 25, 2025
12 of 13 checks passed
@hertschuh hertschuh deleted the shape_int_dims branch November 25, 2025 01:45