How to integrate ChatGPT to review pull requests on GitHub using GitHub Actions

I decided to add ChatGPT as a pull request reviewer to my open source project, so that it can immediately point out bugs and small inaccuracies in the code. In this article, I will share how to do this without buying foreign phone numbers, cards, or VPNs. For this, we will use the ProxyAPI service and write a small yml file for GitHub Actions.

Getting started

The first thing to do is add a file .github/workflows/cr.yml to the project.

The content of the file itself is as follows:

name: Code Review

permissions:
  contents: read
  pull-requests: write

on:
  pull_request:
    types: [opened, reopened, synchronize]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: anc95/ChatGPT-CodeReview@main
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          LANGUAGE: Russian
          OPENAI_API_ENDPOINT: https://api.proxyapi.ru/openai/v1
          MODEL: gpt-3.5-turbo
          PROMPT: "You are an experienced Kotlin/Java developer and your job is to review pull requests. Please review the following code for any misunderstandings or violations. Don't spend time commenting on what is already working perfectly. I'm looking for constructive criticism and suggestions for improving the code, only useful and thorough notes."

I will not dwell on every line; I will only cover the important ones.

We will use the ready-made action anc95/ChatGPT-CodeReview@main (repository: https://github.com/anc95/ChatGPT-CodeReview).

Now let’s run through the settings of this action:

  • GITHUB_TOKEN is required by most actions (see the GitHub documentation about GITHUB_TOKEN)

  • OPENAI_API_KEY – This is the OpenAI key. To send requests to OpenAI from Russia, you can create a key in the ProxyAPI service: after registering, go to your personal account and create a key.

Next, you need to create a new secret named OPENAI_API_KEY (you can use your own name) in your repository. You can also check that the key works with a small script like the sketch below.
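
This is a minimal sketch for checking the key locally, assuming ProxyAPI exposes the standard OpenAI chat completions API at the endpoint used later in this article; run it with Node 18+ and the key in the OPENAI_API_KEY environment variable (it is not part of the action itself).

// Minimal check of the ProxyAPI key (illustrative sketch).
async function checkKey(): Promise<void> {
  const response = await fetch('https://api.proxyapi.ru/openai/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: 'gpt-3.5-turbo',
      messages: [{ role: 'user', content: 'Reply with one word: ok' }],
    }),
  });

  if (!response.ok) {
    throw new Error(`Request failed: ${response.status} ${await response.text()}`);
  }

  const data = await response.json();
  console.log(data.choices[0].message.content);
}

checkKey();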

  • LANGUAGE is responsible for the language in which ChatGPT will write comments on the pull request. Here is the action's code where the prompt for ChatGPT is put together:

private generatePrompt = (patch: string) => {
    const answerLanguage = process.env.LANGUAGE
      ? `Answer me in ${process.env.LANGUAGE},`
      : '';

    const prompt =
      process.env.PROMPT ||
        'Below is a code patch, please help me do a brief code review on it. Any bug risks and/or improvement suggestions are welcome:';

    return `${prompt}, ${answerLanguage}:
    ${patch}
    `;
  };

That is, the text Answer me in Russian (taken from the LANGUAGE variable) will be appended after the prompt from the PROMPT variable (more on it below).

  • OPENAI_API_ENDPOINT is responsible for where the request is sent. Since we use a proxy service, the value must be changed to https://api.proxyapi.ru/openai/v1

  • MODEL specifies the ChatGPT model that will respond. The default is gpt-3.5-turbo, but it can be changed to any model supported by the service. You can read about the supported ChatGPT models here

  • PROMPT is responsible for the request itself. It is simple here: it is a normal question to the model, just like in a regular chat, so we ask the model to review the code. It is important that each file in the diff is sent as a separate request; a simplified sketch of this idea is shown below.
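
Below is a simplified sketch of that idea, not the action's actual source code: every changed file's patch is reviewed with its own request to the model. The ChangedFile shape, the made-up patches, and the console output are illustrative assumptions; the real action gets the file list from the GitHub API and posts the answers as review comments on the pull request.

// Sketch: review each changed file's patch with a separate request (illustrative only).
type ChangedFile = { filename: string; patch?: string };

const ENDPOINT = process.env.OPENAI_API_ENDPOINT ?? 'https://api.proxyapi.ru/openai/v1';
const MODEL = process.env.MODEL ?? 'gpt-3.5-turbo';

// Same idea as the generatePrompt shown above: PROMPT plus the optional "Answer me in ..." suffix.
function generatePrompt(patch: string): string {
  const answerLanguage = process.env.LANGUAGE ? `Answer me in ${process.env.LANGUAGE},` : '';
  const prompt =
    process.env.PROMPT ||
    'Below is a code patch, please help me do a brief code review on it:';
  return `${prompt}, ${answerLanguage}:\n${patch}`;
}

async function reviewPatch(patch: string): Promise<string> {
  const response = await fetch(`${ENDPOINT}/chat/completions`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: MODEL,
      messages: [{ role: 'user', content: generatePrompt(patch) }],
    }),
  });
  const data = await response.json();
  return data.choices[0].message.content;
}

async function reviewPullRequest(files: ChangedFile[]): Promise<void> {
  for (const file of files) {
    if (!file.patch) continue; // binary or very large files come without a patch
    const answer = await reviewPatch(file.patch);
    console.log(`--- ${file.filename} ---\n${answer}`);
  }
}

// Usage with two made-up patches:
reviewPullRequest([
  { filename: 'src/Main.kt', patch: '@@ -1,3 +1,4 @@\n fun main() { println("hi") }' },
  { filename: 'README.md', patch: '@@ -10,2 +10,3 @@\n Some docs change' },
]).catch((err) => console.error(err));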

Conclusion

Now, after creating a pull request, the workflow will start.

If you did everything correctly, then after the workflow finishes, ChatGPT will leave comments on each changed file in your pull request.
