# AWS S3 integration

## Generating a limited S3 read user and keys

{% hint style="info" %}
You can skip this step if you already have an AWS Access Key ID and Secret Access Key for the S3 buckets you wish to integrate.
{% endhint %}

Log into the AWS console and navigate to the **IAM Dashboard** → **Users**.  Then, click **Add User**.

![](https://3806122466-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fk3DfNQKoMy1JXdaDLvpG%2Fuploads%2FoNlSygJbgpI3KoeO7aen%2FScreen%20Shot%202022-03-21%20at%203.29.44%20PM.png?alt=media\&token=4c8629b9-2e6a-4740-8725-609e15ab6806)

From the Add User screen, pick any username, like *biodock-s3-access*, and check **Programmatic access**.

On the next screen, select **Attach existing policies directly** and search for S3.  From the results, choose **AmazonS3ReadOnlyAccess**.

![](https://3806122466-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fk3DfNQKoMy1JXdaDLvpG%2Fuploads%2FnvBJCX0meQhNrX9S0bzP%2FScreen%20Shot%202022-03-21%20at%203.35.40%20PM.png?alt=media\&token=db3968a6-d88c-491c-b239-43f7c2cfe905)

*Note:  If you want to restrict access to a single bucket, you will need to create a custom policy and attach it to the user.  An example minimum policy is shown at the bottom of this page.*

Once you have created the user by clicking **Next** through the remaining screens, make sure to copy the user's Access Key ID and Secret Access Key for the next step.
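If you prefer to script this step, the console actions above can also be performed with the AWS SDK. The sketch below (Python with boto3) mirrors them; it assumes admin AWS credentials are already configured locally, and the user name is just an example:

```python
# Sketch only: requires admin AWS credentials (e.g. set up via `aws configure`).
READ_ONLY_POLICY_ARN = "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"

def create_read_only_user(user_name="biodock-s3-access"):
    import boto3  # imported lazily so the sketch reads without boto3 installed
    iam = boto3.client("iam")
    iam.create_user(UserName=user_name)              # IAM Dashboard -> Add User
    iam.attach_user_policy(                          # Attach existing policies directly
        UserName=user_name,
        PolicyArn=READ_ONLY_POLICY_ARN,
    )
    key = iam.create_access_key(UserName=user_name)["AccessKey"]
    # Save these two values for the Biodock connection step below.
    return key["AccessKeyId"], key["SecretAccessKey"]
```

As in the console flow, the secret is only returned once at key creation, so store it immediately.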

## Connecting your S3 Bucket with Biodock

Navigate to the **Files** page on the left sidebar and select the **Amazon S3** option.

![](https://3806122466-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fk3DfNQKoMy1JXdaDLvpG%2Fuploads%2FnmVFY5Y8qzSp1cjd4VsE%2FScreen%20Shot%202022-03-21%20at%203.07.24%20PM.png?alt=media\&token=7e5370a0-cf11-44eb-87f4-55946593f1e4)

Provide your AWS Access Key ID and Secret Access Key. Your keys are encrypted with AES-256 before being stored, and you can disconnect at any time using the **Disconnect** button on the top right, which purges your keys from Biodock.

You can **search** only by prefix in the S3 file explorer (this restriction comes from AWS).  Type the prefix of the files or folders you want to import into Biodock, click **Import to Biodock**, and then choose the location in **My Files** where you would like the files to be imported.
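The reason for this restriction is that S3's ListObjectsV2 API only supports a `Prefix` parameter as a server-side filter: matching always starts at the beginning of the object key, so substring search is not possible. A minimal sketch of that matching behavior (key names are hypothetical):

```python
def prefix_filter(keys, prefix):
    # What S3 does server-side: a key matches only if it *starts with* the prefix.
    return [k for k in keys if k.startswith(prefix)]

keys = ["plates/exp1/a01.tif", "plates/exp2/b01.tif", "notes/readme.txt"]
print(prefix_filter(keys, "plates/exp1/"))  # matches only the exp1 files
print(prefix_filter(keys, "exp1"))          # empty: substring, not a prefix
```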

![](https://3806122466-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fk3DfNQKoMy1JXdaDLvpG%2Fuploads%2FSfHNVeqhSUvLBKDF1GIc%2FScreen%20Shot%202022-03-21%20at%204.11.03%20PM.png?alt=media\&token=1932b576-7b77-4615-b97e-e9fd79348fe0)

![](https://3806122466-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fk3DfNQKoMy1JXdaDLvpG%2Fuploads%2FkPrXb5orVvFUUeYjyH0G%2FScreen%20Shot%202022-03-22%20at%202.38.37%20PM.png?alt=media\&token=54b936fa-82b4-4816-b91f-4309e94c90b8)

You can then navigate to **My Files** and start using your imported files in Biodock. See [Running AI Analysis on Images](https://docs.biodock.ai/files/broken-reference).

![](https://3806122466-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fk3DfNQKoMy1JXdaDLvpG%2Fuploads%2FW6VOd6e0vqgedJSBaiXW%2FScreen%20Shot%202022-03-22%20at%202.39.23%20PM.png?alt=media\&token=c49b2608-626f-40c1-8216-d870a955dbee)

### Minimum working policy

Replace `test-examples-biodock` with the name of your bucket in both `Resource` entries.  (IAM policy documents are plain JSON and do not allow comments.)

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::test-examples-biodock",
                "arn:aws:s3:::test-examples-biodock/*"
            ]
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "*"
        }
    ]
}
```
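If you manage several buckets, you may find it easier to generate this policy than to hand-edit it. A small helper sketch (the function name is ours, not part of any AWS SDK) that builds the same document for a given bucket:

```python
import json

def minimum_biodock_policy(bucket: str) -> str:
    """Build the minimum read-only policy above for a single bucket."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "VisualEditor0",
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    f"arn:aws:s3:::{bucket}",    # the bucket itself (ListBucket)
                    f"arn:aws:s3:::{bucket}/*",  # objects inside it (GetObject)
                ],
            },
            {
                "Sid": "VisualEditor1",
                "Effect": "Allow",
                "Action": "s3:ListAllMyBuckets",
                "Resource": "*",
            },
        ],
    }, indent=4)
```

Note that the bucket ARN and the `/*` object ARN are both required: `s3:ListBucket` applies to the bucket, while `s3:GetObject` applies to the objects inside it.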
