Configuration

The storage provider can be chosen using the GoFast CLI or by modifying the docker-compose.yml file.

via CLI

Possible options are Cloudflare R2, AWS S3, Google Cloud Storage, and Local (folder).

via Docker Compose

Possible options are r2, s3, gcs, and local.

Files Configuration

Depending on the provider, you need to set the following environment variables in the docker-compose.yml file:

  • for r2, set R2_ENDPOINT, R2_ACCESS_KEY, and R2_SECRET_KEY.
  • for s3, set S3_REGION, S3_ACCESS_KEY, and S3_SECRET_KEY.
  • for gcs, set GOOGLE_APPLICATION_CREDENTIALS.
  • for local, set FILE_DIR.
services:
  server:
    environment:
      FILE_PROVIDER: r2
      BUCKET_NAME: ${BUCKET_NAME}
      R2_ENDPOINT: ${R2_ENDPOINT}
      R2_ACCESS_KEY: ${R2_ACCESS_KEY}
      R2_SECRET_KEY: ${R2_SECRET_KEY}
      # FILE_PROVIDER: s3
      # S3_REGION: ${S3_REGION}
      # S3_ACCESS_KEY: ${S3_ACCESS_KEY}
      # S3_SECRET_KEY: ${S3_SECRET_KEY}
      # FILE_PROVIDER: gcs
      # GOOGLE_APPLICATION_CREDENTIALS: ${GOOGLE_APPLICATION_CREDENTIALS}
      # FILE_PROVIDER: local
      # FILE_DIR: ${FILE_DIR}

Implementation Details

Regardless of the protocol configuration, file operations (uploads and downloads) are always handled over HTTP. This is much simpler to implement than gRPC, which would require splitting files into chunks.

File-related HTTP routes live in a separate file: /http/route_file.go.

The local provider stores files in a local folder, while the other providers store files in their respective cloud storage services.

Adding a New Provider

To add a new provider, follow these steps in the /file/provider.go file:

  1. Add the new provider to the Provider constant.
const (
	S3    Provider = "s3"
	R2    Provider = "r2"
	GCS   Provider = "gcs"
	Local Provider = "local"
	Azure Provider = "azure"
)
  2. Create a new provider struct.
type azureProvider struct {
    // Add any required fields here
}
  3. Return the new provider in the NewProvider function:
case Azure:
    return &azureProvider{
        // Initialize the fields here
    }
  4. Implement the required methods:
func (p *azureProvider) uploadFileToProvider(ctx context.Context, file *File) (*File, error) {
    // Implement the logic to upload a file
}

func (p *azureProvider) downloadFileFromProvider(ctx context.Context, fileId string) ([]byte, error) {
    // Implement the logic to download a file
}

func (p *azureProvider) removeFileFromProvider(ctx context.Context, fileId string) error {
    // Implement the logic to remove a file
}
  5. Add any new secrets in the env.go file.

  6. Fill in the docker-compose.yml file with the new provider configuration.

services:
  server:
    environment:
      FILE_PROVIDER: azure
      AZURE_ACCOUNT_NAME: ${AZURE_ACCOUNT_NAME}
      AZURE_ACCOUNT_KEY: ${AZURE_ACCOUNT_KEY}
      AZURE_CONTAINER_NAME: ${AZURE_CONTAINER_NAME}

Getting Secrets

Cloudflare R2

  1. Go to Cloudflare R2 and create an account.
  2. Create a new bucket.
  3. On the R2 dashboard, click on Manage R2 API Tokens and create a new token.
  4. Set the R2_ENDPOINT, R2_ACCESS_KEY, and R2_SECRET_KEY in the docker-compose.yml file.

AWS S3

  1. Go to the AWS Console and create an account.
  2. Go to the S3 dashboard.
  3. Create a new bucket.
  4. Go to the IAM dashboard.
  5. Create a new user with the AmazonS3FullAccess policy.
  6. Set the S3_REGION, S3_ACCESS_KEY, and S3_SECRET_KEY in the docker-compose.yml file.

Google Cloud Storage

  1. Go to the Google Cloud Console and create an account.
  2. Create a new project.
  3. Go to the Storage dashboard.
  4. Create a new bucket.
  5. Go to the IAM & Admin dashboard.
  6. Create a new service account with the Storage Admin role.
  7. Download the JSON key and set the GOOGLE_APPLICATION_CREDENTIALS in the docker-compose.yml file.

Need help?

Visit our Discord server to ask questions, make suggestions, and give feedback :).

https://discord.gg/EdSZbQbRyJ