How to Configure S3 Access for AWS App Runner: A Complete IAM and VPC Setup Guide

Introduction

When deploying applications on AWS App Runner, you may need to access S3 buckets to store or retrieve data. By default, however, App Runner services have no permission to interact with S3. This guide walks you through granting your App Runner service secure access to S3 buckets.

We’ll achieve this by:

  1. Creating an IAM role with the necessary S3 permissions
  2. Configuring your App Runner service to use this role
  3. Setting up networking components (if using a VPC) to ensure connectivity

By following these steps, you’ll enable your App Runner service to securely read from and write to S3 buckets, allowing for seamless integration of S3 storage in your application.
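As a preview of step 1, the trust policy on the IAM role looks like the sketch below. App Runner uses the service principal tasks.apprunner.amazonaws.com to assume an instance role on behalf of your running service (the role itself would additionally need a permissions policy granting actions such as s3:GetObject and s3:PutObject on your bucket's ARN):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "tasks.apprunner.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

You then select this role as the instance role when creating or updating the App Runner service, and your application code picks up the credentials automatically through the standard AWS SDK credential chain.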

How to Trim Data in a Column with SQL

To trim newline characters from a text field in a SQL database, you can use the trim() function along with the replace() function. Here’s how you can do it:

UPDATE your_table
SET your_column = trim(replace(replace(your_column, char(10), ''), char(13), ''))

This SQL statement does the following:

  1. replace(your_column, char(10), '') removes line feeds (LF, \n)
  2. replace(..., char(13), '') removes carriage returns (CR, \r)
  3. trim(...) removes any leading or trailing whitespace

This approach handles both Unix-style (LF) and Windows-style (CR+LF) line endings. Note that without a WHERE clause the UPDATE rewrites every row in the table; add one if you only need to clean a subset.
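To see the statement in action, here’s a minimal runnable demo using Python’s built-in sqlite3 module (the table and column names are made up for the example; SQLite supports trim(), replace(), and char() with the semantics described above):

```python
import sqlite3

# In-memory database with a throwaway table for the demo.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (body TEXT)")

# A value polluted with CR+LF, LF, and surrounding whitespace.
conn.execute("INSERT INTO notes VALUES (?)", ("  hello\r\nworld\n  ",))

# The cleanup statement from the article.
conn.execute(
    "UPDATE notes SET body = "
    "trim(replace(replace(body, char(10), ''), char(13), ''))"
)

print(conn.execute("SELECT body FROM notes").fetchone()[0])  # -> helloworld
```

The same UPDATE runs unchanged on most databases; on SQL Server versions before 2017, which lack TRIM, substitute LTRIM(RTRIM(...)).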

How to Restore Files from Glacier on Amazon S3 Storage using Bash Script

Amazon S3 is a popular cloud storage service that provides scalable object storage for data backup, archive, and disaster recovery. One of the storage classes available on S3 is Glacier, a low-cost option for long-term archiving and backup. However, objects stored in Glacier aren’t immediately readable: you must first request a restore, and retrieval can be slow and, depending on the tier, expensive. In this tutorial, we’ll show you how to restore files from Glacier on Amazon S3 using a Bash script.
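A minimal sketch of such a script is below, using the AWS CLI’s s3api commands. The bucket name, prefix, and example key are placeholders; the script requests a Standard-tier restore for seven days, which you would adjust to your needs:

```shell
#!/usr/bin/env bash
# Sketch: request a Glacier restore for every object under a prefix,
# then check the restore status of one object.
set -euo pipefail

BUCKET="my-archive-bucket"   # placeholder bucket name
PREFIX="backups/"            # placeholder prefix

# List keys under the prefix and issue a restore request for each.
aws s3api list-objects-v2 --bucket "$BUCKET" --prefix "$PREFIX" \
  --query 'Contents[].Key' --output text | tr '\t' '\n' |
while IFS= read -r key; do
  aws s3api restore-object \
    --bucket "$BUCKET" --key "$key" \
    --restore-request '{"Days": 7, "GlacierJobParameters": {"Tier": "Standard"}}' \
    || true   # objects not in Glacier (or already restoring) error out; skip them
done

# Poll a single object; once the output contains ongoing-request="false",
# the object is temporarily readable again.
aws s3api head-object --bucket "$BUCKET" --key "${PREFIX}example.tar.gz" \
  --query Restore
```

Standard-tier restores typically complete in a few hours; the Expedited and Bulk tiers trade cost against retrieval time.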

How to scrape page source with Go and chromedp

The goal is clear, so let’s consider the requirements. First, we need a tool that actually renders web pages, since most modern sites build their content with JavaScript. Second, we need an API to communicate with the headless browser. Finally, saving the result can be tricky, because browsers are designed to expose the rendered page rather than the original source code.

Headless browser

So we are looking for a headless browser. We’ll use Chrome’s headless-shell because it’s easy to use and based on Chromium. Its most significant advantage is the Docker image, which we can run just as easily on a local machine as anywhere in the cloud.
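As a sketch, the chromedp/headless-shell image can be started like this (the container name is arbitrary; port 9222 is the Chrome DevTools Protocol endpoint the image exposes):

```shell
# Start headless-shell in the background; the DevTools Protocol
# endpoint becomes available on localhost:9222.
docker run -d --name headless -p 9222:9222 chromedp/headless-shell:latest

# Quick sanity check: the /json/version endpoint reports the browser build.
curl -s http://localhost:9222/json/version
```

A Go program using chromedp can then either launch its own browser process or attach to this remote endpoint.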