
Conversation

@ch4r10t33r
Contributor

What was wrong?

To save time and effort for both authors and reviewers, it’s better to use AI for standard code quality checks. This frees up reviewers to focus on identifying logical flaws in the code.

How was it fixed?

  • Integrated copilot-instructions.md and enforced these checks in the GitHub workflow.
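As an illustration of what "enforced in the GitHub workflow" could look like (a hypothetical sketch; the job name and lint command are assumptions, not taken from the actual PR):

```yaml
# Hypothetical workflow sketch; the real workflow in this PR may differ.
name: code-quality
on: [pull_request]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run linter
        run: make lint   # assumed entry point for the project's linter
```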

To-Do

  • Requesting everyone to review the copilot-instructions.md file to confirm you’re aligned with its contents. @KolbyML @syjn99, please add any additional content needed to ensure pull requests meet the expected standards.

@ch4r10t33r ch4r10t33r requested a review from KolbyML as a code owner September 17, 2025 17:16
@ch4r10t33r ch4r10t33r requested a review from syjn99 September 17, 2025 17:16
Contributor

@KolbyML KolbyML left a comment


Just marking this while we have internal discussions

@unnawut
Contributor

unnawut commented Sep 23, 2025

My main concern with the PR is that it wraps everything together in one AI lump. There is stuff that:

  1. can be done by a linter
  2. maybe AI review can be helpful
  3. only needed if we want to support AI pair-programming (and there's the question of which platform if we're going to support it)

Most of the PR content, I think, is only needed for AI pair-programming, but I don't think we should focus on that at all for now.

We should separate these 3 categories and discuss them one by one; if we want to implement something, do it gradually in multiple small PRs. Maybe discuss and finalize in this order: linter (least subjective, most deterministic) -> AI review -> AI pair-programming.

E.g. if something can be done by a linter, it should be done by a linter; the AI can be instructed to run the linter and apply its fixes, rather than duplicating the rules in the AI instructions, which also doesn't guarantee correctness.
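Concretely, pointing the AI at the linter rather than restating its rules could be a short instruction like this (hypothetical wording; the actual contents of copilot-instructions.md are not shown in this thread):

```markdown
<!-- Hypothetical excerpt, not taken from the PR. -->
## Code quality
- Do not restate lint rules in this file. Run the project's linter
  (e.g. `make lint`, an assumed entry point) and apply its fixes
  before suggesting any style changes.
```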

@jihoonsong
Contributor

Although I'm not sure how useful this would be, I think having a trial period doesn't hurt. There seems to have been more discussion than necessary: we're just speculating based on guesses and past experience, while AI has kept improving fast. I don't think we'll reach a good conclusion unless we actually try it out. Let's give it a shot and decide later whether to keep it. During the trial period, we can adapt the instruction set to find a good one. For example, some rules such as Code Organization Patterns might not be suitable for evolving software; at the same time, it may be better to feed in more information, I'm not sure. Please feel free to participate in the prompt engineering if you're interested in this feature.

Among those three categories, I do think we want to improve 1. We've seen that a sizable portion of PR comments have consistently been about 1. Let's be lazy and be more efficient. This is being taken care of by this PR. It won't be perfect from the get-go, but we can have higher expectations since it's a static check.

@KolbyML
Contributor

KolbyML commented Oct 10, 2025

Closing this for the time being; we can potentially bring this up again in the future.

@KolbyML KolbyML closed this Oct 10, 2025
