Validate user input against project-schema.json #10433
Conversation
Reviewer's Guide

This PR extends schema-based validation to the `poetry init` command.

Sequence diagram for poetry init with schema validation

```mermaid
sequenceDiagram
    actor User
    participant CLI as Poetry CLI
    participant InitCmd as InitCommand
    participant Factory
    User->>CLI: Run 'poetry init'
    CLI->>InitCmd: Start initialization
    InitCmd->>User: Prompt for project name
    User->>InitCmd: Provide project name
    InitCmd->>InitCmd: Collect other inputs
    InitCmd->>InitCmd: Build pyproject data
    InitCmd->>InitCmd: Call _validate(pyproject_data)
    InitCmd->>Factory: validate(pyproject_data)
    Factory-->>InitCmd: Return validation results
    alt Validation errors
        InitCmd->>User: Show validation error and abort
    else Validation passes
        InitCmd->>InitCmd: Save pyproject.toml
        InitCmd->>User: Complete initialization
    end
```
Class diagram for InitCommand validation changes

```mermaid
classDiagram
    class InitCommand {
        +_init_pyproject(...)
        +_get_pool()
        +_validate(pyproject_data: dict) dict
    }
    class Factory {
        +validate(pyproject_data: dict) dict
    }
    InitCommand ..> Factory : uses
```
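For readers skimming the diagrams, a minimal sketch of the delegation they describe is below. It is not the PR's exact code: it assumes `Factory.validate` accepts the assembled pyproject data and returns a dict with `errors`/`warnings` lists, as the diagrams suggest.

```python
# Minimal sketch only (not the PR's exact code): assumes Factory.validate
# accepts the assembled pyproject data and returns a dict of the form
# {"errors": [...], "warnings": [...]}, as the diagrams above suggest.
from poetry.factory import Factory


def validate_pyproject(pyproject_data: dict) -> dict:
    """Run the same schema check that `poetry add` already relies on."""
    return Factory.validate(pyproject_data)


results = validate_pyproject({"project": {"name": "new+project", "version": "0.1.0"}})
if results.get("errors"):
    print("Validation failed:", results["errors"])
```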
File-Level Changes
Possibly linked issues
…eature/invalid-project-name
Hey @rbogart1990 - I've reviewed your changes and they look great!
Prompt for AI Agents
Please address the comments from this code review:
## Individual Comments
### Comment 1
<location> `src/poetry/console/commands/init.py:270` </location>
<code_context>
+ # validate fields before creating pyproject.toml file. If any validations fail, throw error.
+ validation_results = self._validate(pyproject.data)
+ if validation_results.get("errors"):
+     self.line_error(f"<error>Validation failed: {validation_results}</error>")
+     return 1
+
</code_context>
<issue_to_address>
Consider formatting validation errors for readability
Displaying only relevant error messages or a summary instead of the full dictionary will make the output clearer for users.
</issue_to_address>
<suggested_fix>
<<<<<<< SEARCH
if validation_results.get("errors"):
    self.line_error(f"<error>Validation failed: {validation_results}</error>")
    return 1
=======
if validation_results.get("errors"):
    errors = validation_results["errors"]
    if isinstance(errors, dict):
        error_list = [f"- {field}: {msg}" for field, msg in errors.items()]
    elif isinstance(errors, list):
        error_list = [f"- {msg}" for msg in errors]
    else:
        error_list = [str(errors)]
    formatted_errors = "\n".join(error_list)
    self.line_error(f"<error>Validation failed with the following errors:\n{formatted_errors}</error>")
    return 1
>>>>>>> REPLACE
</suggested_fix>
### Comment 2
<location> `tests/console/commands/test_init.py:1213` </location>
<code_context>
+ (".", "just dot"),
+ ],
+)
+def test_invalid_project_name(invalid_project_name, reason):
+ pyproject_data = build_pyproject_data(invalid_project_name)
+ result = InitCommand._validate(pyproject_data)
+
+ assert "errors" in result, f"Expected error for: {reason}"
+ assert any("project.name must match pattern" in err for err in result["errors"])
</code_context>
<issue_to_address>
Missing test for interactive mode with invalid project name.
Please add an integration test for interactive mode that simulates invalid project name input and verifies the correct error handling and process exit.
</issue_to_address>
```python
if validation_results.get("errors"):
    self.line_error(f"<error>Validation failed: {validation_results}</error>")
    return 1
```
suggestion: Consider formatting validation errors for readability
Displaying only relevant error messages or a summary instead of the full dictionary will make the output clearer for users.
Suggested change:

```python
if validation_results.get("errors"):
    errors = validation_results["errors"]
    if isinstance(errors, dict):
        error_list = [f"- {field}: {msg}" for field, msg in errors.items()]
    elif isinstance(errors, list):
        error_list = [f"- {msg}" for msg in errors]
    else:
        error_list = [str(errors)]
    formatted_errors = "\n".join(error_list)
    self.line_error(f"<error>Validation failed with the following errors:\n{formatted_errors}</error>")
    return 1
```
tests/console/commands/test_init.py
Outdated
```python
def test_invalid_project_name(invalid_project_name, reason):
    pyproject_data = build_pyproject_data(invalid_project_name)
    result = InitCommand._validate(pyproject_data)

    assert "errors" in result, f"Expected error for: {reason}"
    assert any("project.name must match pattern" in err for err in result["errors"])
```
suggestion (testing): Missing test for interactive mode with invalid project name.
Please add an integration test for interactive mode that simulates invalid project name input and verifies the correct error handling and process exit.
I struggled to create an integration test for this. I wasn't able to get it to work.
Here’s one way to get an end-to-end / integration-style test working without having to wrestle with fully interactive prompts:

- Use the `--no-interaction` flag so you can drive the command entirely by CLI options (no stdin juggling).
- Pass your invalid name via the `--name` option.
- Assert that Poetry aborts with a non-zero exit code and that the validation message appears on stdout/stderr.

For example, drop something like this into `tests/console/commands/test_init.py`:
```python
from pathlib import Path

import pytest
from click.testing import CliRunner

from poetry.console.application import Application


@pytest.mark.parametrize(
    "invalid_name, reason",
    [
        ("new+project", "plus sign"),
        ("new/project", "slash"),
        # … your other cases …
    ],
)
def test_init_non_interactive_invalid_name(tmp_path: Path, invalid_name: str, reason: str):
    # arrange: run inside an empty temp dir
    runner = CliRunner()
    app = Application()

    # act: invoke `poetry init --no-interaction --name <invalid>`
    result = runner.invoke(
        app,
        ["init", "--no-interaction", "--name", invalid_name],
        obj={},
        catch_exceptions=False,
    )

    # assert: we expect a failure exit code
    assert result.exit_code != 0, f"Should have failed for {reason}"
    # and the error message refers to the schema-validation pattern
    assert "project.name must match pattern" in result.output
    # (you can tighten this to exactly match your error wording)
```

Why this works:

- `--no-interaction` makes Poetry skip all the “press Enter to accept defaults” prompts.
- You drive the only input you care about (`--name`) on the CLI.
- You can then assert on `exit_code` and on `result.output` (which includes both stdout & stderr).
If you really want to simulate a fully interactive session (e.g. to catch the prompt, type a bad name, then press Enter several times), you can still do that with `CliRunner.invoke(..., input="bad+name\n\n\n")`, but in my experience the `--no-interaction` approach is far simpler and still 100% exercises the same validation logic.
Give that a try, and let me know if you hit any snags!
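One caveat worth flagging: Poetry's console is built on Cleo rather than Click, so `CliRunner` may not accept Poetry's `Application` directly. A rough equivalent using Cleo's `ApplicationTester` is sketched below; the tester methods come from Cleo's public testing helpers, while the temp-dir handling and the exact error wording are assumptions rather than a verified fixture setup.

```python
from pathlib import Path

import pytest
from cleo.testers.application_tester import ApplicationTester

from poetry.console.application import Application


@pytest.mark.parametrize("invalid_name", ["new+project", "new/project"])
def test_init_non_interactive_invalid_name(
    tmp_path: Path, monkeypatch: pytest.MonkeyPatch, invalid_name: str
) -> None:
    # Run inside an empty temp dir so no pyproject.toml is created in the repo.
    monkeypatch.chdir(tmp_path)

    tester = ApplicationTester(Application())

    # Drive the command entirely by options; no interactive prompts.
    status = tester.execute(f"init --no-interaction --name {invalid_name}")

    assert status != 0
    # Exact wording depends on how the PR formats the error message.
    assert "Validation failed" in tester.io.fetch_error() + tester.io.fetch_output()
```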
Hey @rbogart1990 - I've reviewed your changes and they look great!
Prompt for AI Agents
Please address the comments from this code review:
## Individual Comments
### Comment 1
<location> `src/poetry/console/commands/init.py:133` </location>
<code_context>
- name = project_path.name.lower()
+ name = self.option("name") or project_path.name.lower()
- if is_interactive:
- question = self.create_question(
- f"Package name [<comment>{name}</comment>]: ", default=name
</code_context>
<issue_to_address>
Interactive prompt now always runs even when --name is provided
If this behavior is unintentional, move the prompt back inside the `if not name` block to avoid prompting when --name is provided.
</issue_to_address>
### Comment 2
<location> `src/poetry/console/commands/init.py:270` </location>
<code_context>
+ # validate fields before creating pyproject.toml file. If any validations fail, throw error.
+ validation_results = self._validate(pyproject.data)
+ if validation_results.get("errors"):
+     self.line_error(f"<error>Validation failed: {validation_results}</error>")
+     return 1
+
</code_context>
<issue_to_address>
Printing the full `validation_results` dict may expose internal details
Extract and display only the relevant error messages to prevent leaking internal or sensitive information.
</issue_to_address>
<suggested_fix>
<<<<<<< SEARCH
if validation_results.get("errors"):
    self.line_error(f"<error>Validation failed: {validation_results}</error>")
    return 1
=======
if validation_results.get("errors"):
    error_messages = validation_results.get("errors")
    if isinstance(error_messages, dict):
        error_messages = list(error_messages.values())
    if isinstance(error_messages, list):
        for error in error_messages:
            self.line_error(f"<error>Validation error: {error}</error>")
    else:
        self.line_error("<error>Validation failed due to unknown error format.</error>")
    return 1
>>>>>>> REPLACE
</suggested_fix>
### Comment 3
<location> `tests/console/commands/test_init.py:1213` </location>
<code_context>
+ (".", "just dot"),
+ ],
+)
+def test_invalid_project_name(invalid_project_name, reason):
+ pyproject_data = build_pyproject_data(invalid_project_name)
+ result = InitCommand._validate(pyproject_data)
+
+ assert "errors" in result, f"Expected error for: {reason}"
+ assert any("project.name must match pattern" in err for err in result["errors"])
</code_context>
<issue_to_address>
Consider adding tests for borderline valid/invalid names and unicode.
Adding tests for names at the validity boundary, including maximum length, unicode, and mixed case, will help ensure comprehensive validation coverage.
</issue_to_address>
```python
if validation_results.get("errors"):
    self.line_error(f"<error>Validation failed: {validation_results}</error>")
    return 1
```
🚨 suggestion (security): Printing the full validation_results dict may expose internal details
Extract and display only the relevant error messages to prevent leaking internal or sensitive information.
Suggested change:

```python
if validation_results.get("errors"):
    error_messages = validation_results.get("errors")
    if isinstance(error_messages, dict):
        error_messages = list(error_messages.values())
    if isinstance(error_messages, list):
        for error in error_messages:
            self.line_error(f"<error>Validation error: {error}</error>")
    else:
        self.line_error("<error>Validation failed due to unknown error format.</error>")
    return 1
```
tests/console/commands/test_init.py
Outdated
```python
def test_invalid_project_name(invalid_project_name, reason):
    pyproject_data = build_pyproject_data(invalid_project_name)
    result = InitCommand._validate(pyproject_data)

    assert "errors" in result, f"Expected error for: {reason}"
    assert any("project.name must match pattern" in err for err in result["errors"])
```
suggestion (testing): Consider adding tests for borderline valid/invalid names and unicode.
Adding tests for names at the validity boundary, including maximum length, unicode, and mixed case, will help ensure comprehensive validation coverage.
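As a concrete starting point, a parametrized sketch of such boundary cases is below. The expected outcomes assume the PyPA project-name pattern that project-schema.json enforces (ASCII letters and digits plus `.`, `_`, `-`, starting and ending with an alphanumeric character); `build_pyproject_data` and `InitCommand._validate` are the helpers from the test quoted above.

```python
import pytest


@pytest.mark.parametrize(
    ("name", "should_be_valid"),
    [
        ("myproject", True),       # plain lower case
        ("MyProject", True),       # mixed case is accepted by the pattern
        ("my.project-1_x", True),  # separators in the middle
        ("a", True),               # single character
        ("-myproject", False),     # leading separator
        ("myproject-", False),     # trailing separator
        ("café", False),           # non-ASCII characters
    ],
)
def test_project_name_boundaries(name: str, should_be_valid: bool) -> None:
    # build_pyproject_data and InitCommand._validate come from the existing
    # test module shown earlier in this thread.
    result = InitCommand._validate(build_pyproject_data(name))

    assert bool(result.get("errors")) == (not should_be_valid)
```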
…eature/invalid-project-name
This reverts commit 5623e28.
This is no longer needed. It's been replaced with this PR, which has the same code changes but with a cleaned-up commit history.
This pull request has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
This PR introduces validation for user input values during the `poetry init` command. It uses the rules defined in the project-schema.json schema. These rules are already enforced when running the `poetry add` command, so this change extends the same validation rules to the `poetry init` command. Previously, user inputs were not validated against these schema rules, allowing the creation of Poetry projects with invalid or disallowed names (e.g. "new+project").
This change ensures that inputs conform to the schema before proceeding, preventing invalid project configurations.
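For a quick illustration of the name rule at play (the authoritative regex lives in project-schema.json; the pattern below is the standard PyPA project-name pattern and is assumed to match it):

```python
import re

# Assumed equivalent of the project-name rule in project-schema.json:
# the standard PyPA pattern (letters/digits plus ".", "_", "-",
# starting and ending with an alphanumeric character).
NAME_PATTERN = re.compile(r"^([A-Za-z0-9]|[A-Za-z0-9][A-Za-z0-9._-]*[A-Za-z0-9])$")

print(bool(NAME_PATTERN.match("new-project")))  # True  -> accepted
print(bool(NAME_PATTERN.match("new+project")))  # False -> rejected by `poetry init` after this PR
```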
Pull Request Check List
Resolves: #10170
Summary by Sourcery
Add schema-based validation to the `poetry init` command to ensure project metadata, especially the project name, conforms to the defined project-schema rules and abort initialization on validation failures.

New Features:

- Validate user input during `poetry init` using the existing project-schema JSON rules

Bug Fixes:
Enhancements:
- `InitCommand._validate` method for reusable schema validation

Tests: