feat: Github copilot integration + changes to the workflow to enforce code quality + security checks #788
Conversation
KolbyML
left a comment
Just marking this while we have internal discussions.
My main concern with this PR is that it wraps everything together in one AI lump, when the content really falls into three different categories. Most of the PR content, I think, is only needed for AI pair-programming, but I don't think we should focus on that at all for now. We should separate these three categories, discuss them one by one, and, if we decide to implement something, do it gradually in multiple small PRs. Maybe discuss and finalize in this order: linter (least subjective, most deterministic) -> AI review -> AI pair-programming. E.g. if something can be done by a linter, it should be done by a linter; the AI can be instructed to run the linter and make the fixes, rather than duplicating the rules in the AI instructions, which also doesn't guarantee correctness.
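The linter-first approach described above could be enforced as a required CI check that runs before any AI review step. A minimal sketch of such a GitHub Actions workflow follows; the job name, action versions, and the `cargo clippy` invocation are assumptions for illustration, not taken from this PR, so the lint command would need to be swapped for whatever linter the repository actually uses.

```yaml
# Hypothetical sketch: enforce deterministic lint rules as a required
# pull-request check, so the AI review only handles what a linter can't.
name: lint

on: [pull_request]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Assumed Rust project; replace with the repo's real linter command.
      - name: Run linter
        run: cargo clippy --all-targets --all-features -- -D warnings
```

Making this job a required status check in branch protection settings would guarantee the deterministic rules pass before reviewers (human or AI) spend time on the PR.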
Although I'm not sure how useful this would be, I think having a trial period doesn't hurt. It seems there has been more discussion than necessary. We're just speculating based on our guesses and past experience, while AI has kept improving fast. I don't think we'll reach a good conclusion unless we actually try it out. Just give it a shot and decide later whether to keep it. During the trial period, we can adapt the instruction set to find a good one. For example, some rules such as

Among those three categories, I do think we want to improve 1. We've seen that some portion of PR comments were consistently about 1. Let's be lazy and be more efficient. It's being taken care of by this PR. It won't be perfect from the get-go, but we can have higher expectations since it's a static check.
Closing this for the time being; we can potentially bring this issue up again in the future.
What was wrong?
To save time and effort for both authors and reviewers, it's better to delegate standard code quality checks to automated tooling and AI. This frees up reviewers to focus on identifying logical flaws in the code.
How was it fixed?
To-Do