The following example shows how to use the new CloudFront Standard Logging v2 to store logs in an S3 bucket with CloudFormation.
This example can be used as-is to create an S3 bucket for your logs and to organize the CloudFront logs by hostname for each CloudFront Distribution ID. The templates use SSM Parameter Store, and all templates must be run in the us-east-1 (N. Virginia) region, since all CloudFront management and configuration, including SSL certificates and Standard Logging v2 features, is done in us-east-1.
Using these templates as-is for an organization with, for example, 3 CloudFront distributions using the hostnames example.com, app.example.com, and store.example.com results in 1 new S3 bucket with a root folder per hostname, each containing folders in the format YYYY/mm/dd/HH/. All log files use the same file naming and space-separated fields as the legacy CloudFront logging format. In addition, the links provided at the bottom should give you enough information to further extend the fields logged as well as change the format in which the logs are stored.
Bucket Name Example: cf-logs-123456789abc
Bucket Object Example: /example.com/2025/09/23/08/XXXXXXXXXXXX.2025-09-23-08.abcdef01.gz
What is CloudFront Standard Logging v2?
Standard Logging v2 is a set of options that greatly extends the original standard logging options for CloudFront. You can now deliver logs to S3 in 4 different formats with additional columns of information, and you can also deliver log data to Amazon CloudWatch Logs and Amazon Data Firehose. When delivering logs to S3, new options allow you to format the S3 path where the files are stored, in addition to the folder prefix, and to include variables in this path such as the CloudFront Distribution ID, the year in 4 digit format, and the month, day, and hour in 2 digit format. Up to this point you needed to write a Lambda function to move the files into folders organized by date; this eliminates that expense, which is super useful. In addition, you can now select additional fields to put into the logs as well as change the output format, choosing between w3c (the same as the previous legacy version), JSON, Plain, and Parquet. Note that if you pick Parquet there is an additional cost for AWS to convert the data to the columnar format. You can also set the field delimiter character to either a space (the previous default for the w3c format) or a tab.
To set up CloudFront Standard Logging v2 in the AWS Management Console (web portal), see the CloudFront Standard Logging documentation. To set up CloudFront Standard Logging v2 with the AWS CLI, see Configure standard logging (v2).
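For reference, the AWS CLI flow uses the same three CloudWatch Logs delivery resources that the CloudFormation templates below create. Here is a minimal sketch with placeholder names, IDs, and ARNs (see the linked documentation for the full set of options):
# 1. Delivery source that points at the CloudFront distribution (placeholder account and distribution IDs)
aws logs put-delivery-source --name example-CloudFrontLogSource --log-type ACCESS_LOGS --resource-arn arn:aws:cloudfront::111111111111:distribution/XXXXXXXXXXXX --region us-east-1
# 2. Delivery destination that names the S3 bucket (and optional prefix) and the output format
aws logs put-delivery-destination --name example-CloudFrontLogDestination --output-format w3c --delivery-destination-configuration destinationResourceArn=arn:aws:s3:::cf-logs-123456789abc/example.com --region us-east-1
# 3. Delivery that links the source to the destination (use the destination ARN returned by step 2)
aws logs create-delivery --delivery-source-name example-CloudFrontLogSource --delivery-destination-arn <destination-arn-from-step-2> --region us-east-1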
The remainder of this post focuses on how to set up CloudFront Standard Logging v2 with CloudFormation.
CloudFront Standard Logging v2 to S3 bucket CloudFormation Example
The following example uses 2 templates. The first template creates a new S3 bucket for storing the logs from CloudFront. It stores its bucket name in SSM Parameter Store for use by the 2nd CloudFormation template, which can be used for any number of CloudFront distributions. The example assumes all logs are stored in the same bucket, organized by hostname. You can make variations of these templates, or use Ansible, to pass the S3 bucket as a command line parameter to the second template if you want to use different buckets for specific logs. You could also combine the two templates into one and make an S3 bucket for every distribution, but I highly recommend maintaining two separate templates for manageability over time. Very important: if you ever want to change the Standard Logging template, depending on the change you may need to delete the stack and then recreate it. This 2-template approach is perfectly suited for that scenario.
Template 1 of 2: Create S3 Bucket For CloudFront Standard Logging v2
Template 1: cloudfront-s3-bucket-for-logging-v2.yaml
---
AWSTemplateFormatVersion: "2010-09-09"
Description: S3 bucket for all CloudFront logs in v2 format
Resources:
  CloudFrontLogsBucket:
    Type: "AWS::S3::Bucket"
    DeletionPolicy: Retain
    Properties:
      BucketName: !Join
        - '-'
        - - cf-logs
          - !Select
            - 4
            - !Split
              - '-'
              - !Select
                - 2
                - !Split
                  - '/'
                  - !Ref AWS::StackId
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
      Tags:
        - Key: Application
          Value: CloudFront
  CloudFrontLogsBucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref CloudFrontLogsBucket
      PolicyDocument:
        Statement:
          - Sid: CloudFrontWriteAccess
            Action:
              - s3:PutObject
              - s3:PutObjectAcl
            Effect: Allow
            # Allow CloudFront distributions to write logs to this bucket
            Principal:
              Service: cloudfront.amazonaws.com
            Resource: !Join
              - ''
              - - 'arn:aws:s3:::'
                - !Ref CloudFrontLogsBucket
                - '/*'
  S3CloudFrontLogsParameter:
    Type: AWS::SSM::Parameter
    Properties:
      Name: /cloudfront/logs/bucket
      Type: String
      Value: !Ref CloudFrontLogsBucket
      Description: "The S3 bucket to store CloudFront logs in v2 format"
Outputs:
  CloudFrontLogsBucketName:
    Description: Name of the CloudFront logs bucket
    Value: !Ref CloudFrontLogsBucket
The above CloudFormation template does not require any parameters, though you must run it in the us-east-1 region. The template saves the name of the bucket it creates in the SSM Parameter Store path `/cloudfront/logs/bucket`. The bucket name starts with the value “cf-logs-” followed by the last segment of the CloudFormation stack ID’s UUID, currently 12 hexadecimal characters. Example: cf-logs-123456789abc
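To make the BucketName intrinsic function chain easier to follow, here is how it evaluates for a hypothetical stack ID:
!Ref AWS::StackId -> arn:aws:cloudformation:us-east-1:111111111111:stack/cloudfront-logging-s3-bucket/0aa1bb22-cc33-4d44-8e55-123456789abc
!Split '/' then !Select 2 -> 0aa1bb22-cc33-4d44-8e55-123456789abc (the stack UUID)
!Split '-' then !Select 4 -> 123456789abc (the last UUID segment, 12 hexadecimal characters)
!Join with '-' and cf-logs -> cf-logs-123456789abc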
Example command to run this template locally:
aws cloudformation deploy --template-file ./cloudfront-s3-bucket-for-logging-v2.yaml --region us-east-1 --stack-name cloudfront-logging-s3-bucket --no-execute-changeset --no-fail-on-empty-changeset
I strongly recommend the --no-execute-changeset and --no-fail-on-empty-changeset parameters, as this creates a change set that you can then “preview” by logging in to the AWS Management Console. From the AWS Management Console you can then “execute” the changes if everything looks as intended.
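If the preview looks good, you can also execute the change set from the command line instead of the console. A sketch assuming the stack name used above (aws cloudformation deploy auto-generates the change set name, which you can look up first):
aws cloudformation list-change-sets --stack-name cloudfront-logging-s3-bucket --region us-east-1
aws cloudformation execute-change-set --stack-name cloudfront-logging-s3-bucket --change-set-name <name-from-list-change-sets> --region us-east-1
Once executed, you can confirm the bucket name that was written to Parameter Store with: aws ssm get-parameter --name /cloudfront/logs/bucket --region us-east-1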
Template 2 of 2: Create Logging source, destination and association to CloudFront Distribution
Template 2: cloudfront-logging-v2.yaml
---
AWSTemplateFormatVersion: "2010-09-09"
Description: "CloudFront Logging"
Parameters:
  DistroId:
    Type: String
    Description: "The CloudFront distribution ID"
  Hostname:
    Type: String
    Description: "The CloudFront distribution host name"
    AllowedPattern: "^(?=.{1,255}$)[a-zA-Z0-9-]{1,63}(\\.[a-zA-Z0-9-]{1,63})*\\.?$"
  NamePrefix:
    Type: String
Description: "The CloudFront distribution ID for the API"
  CloudFrontLogsBucketName:
    Type: AWS::SSM::Parameter::Value<String>
    Default: /cloudfront/logs/bucket
    Description: "The S3 bucket to store CloudFront logs in v2 format"
Resources:
  # Delivery source
  CloudFrontLogDeliverySource:
    Type: AWS::Logs::DeliverySource
    Properties:
      Name: !Sub ${NamePrefix}-CloudFrontLogSource
      LogType: ACCESS_LOGS
      ResourceArn: !Sub arn:aws:cloudfront::${AWS::AccountId}:distribution/${DistroId}
  # Delivery destination
  CloudFrontLogDeliveryDestination:
    Type: AWS::Logs::DeliveryDestination
    Properties:
      DestinationResourceArn: !Sub arn:aws:s3:::${CloudFrontLogsBucketName}/${Hostname}
      Name: !Sub ${NamePrefix}-CloudFrontLogDestination
      OutputFormat: w3c
  # Link source to the destination
  CloudFrontLogDelivery:
    Type: AWS::Logs::Delivery
    DependsOn: CloudFrontLogDeliverySource
    Properties:
      # Must match the Name of the DeliverySource above
      DeliverySourceName: !Sub ${NamePrefix}-CloudFrontLogSource
      DeliveryDestinationArn: !GetAtt CloudFrontLogDeliveryDestination.Arn
      RecordFields:
        - "date"
        - "time"
        - "x-edge-location"
        - "sc-bytes"
        - "c-ip"
        - "cs-method"
        - "cs(Host)"
        - "cs-uri-stem"
        - "sc-status"
        - "cs(Referer)"
        - "cs(User-Agent)"
        - "cs-uri-query"
        - "cs(Cookie)"
        - "x-edge-result-type"
        - "x-edge-request-id"
        - "x-host-header"
        - "cs-protocol"
        - "cs-bytes"
        - "time-taken"
        - "x-forwarded-for"
        - "ssl-protocol"
        - "ssl-cipher"
        - "x-edge-response-result-type"
        - "cs-protocol-version"
        - "fle-status"
        - "fle-encrypted-fields"
        - "c-port"
        - "time-to-first-byte"
        - "x-edge-detailed-result-type"
        - "sc-content-type"
        - "sc-content-len"
        - "sc-range-start"
        - "sc-range-end"
      FieldDelimiter: " "
      S3SuffixPath: '/{yyyy}/{MM}/{dd}/{HH}'
The above CloudFormation template requires parameters to specify the CloudFront Distribution ID, the Hostname (the CNAME value used for the distribution), and a prefix value used to name the 3 resources created. In addition, it sets the same fields found in legacy CloudFront Standard Logging with the same space field delimiter. In this example we add a suffix path that creates the year, month, day, and hour folders. Given a hostname such as example.com and CloudFront Distribution ID XXXXXXXXXXXX, the S3 path for a specific log file will look like the following example:
/example.com/2025/09/23/08/XXXXXXXXXXXX.2025-09-23-08.abcdef01.gz
Example command to run this template locally:
aws cloudformation deploy --template-file ./cloudfront-logging-v2.yaml --region us-east-1 --stack-name cloudfront-logging-example-com --no-execute-changeset --no-fail-on-empty-changeset --parameter-overrides DistroId=XXXXXXXXXXXX Hostname=example.com NamePrefix=example
Note that this 2nd template can be used repeatedly to enable CloudFront Standard Logging v2 on other CloudFront distributions, storing the logs in the same bucket organized by hostname.
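For example, enabling logging for a hypothetical second distribution serving app.example.com could look like the following (the distribution ID and prefix are placeholders):
aws cloudformation deploy --template-file ./cloudfront-logging-v2.yaml --region us-east-1 --stack-name cloudfront-logging-app-example-com --no-execute-changeset --no-fail-on-empty-changeset --parameter-overrides DistroId=YYYYYYYYYYYY Hostname=app.example.com NamePrefix=app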
CloudFront Standard Logging v2 thoughts
CloudFront Standard Logging v2 options have been a long time coming! The ability to configure additional fields in the logs is a game changer, while the ability to remove fields that are not necessary helps keep log file sizes to a minimum for storage and privacy compliance. The added bonus of 2 new destinations, 3 new formats, and dates in the folder paths is a gift handed down from the heavens!
Complete list of S3 path suffix variables:
- {DistributionId}: Distribution ID as viewed in the AWS Management Console
- {distributionid}: Distribution ID in lowercase
- {yyyy}: Year in 4 digit format
- {MM}: Month in 2 digit, zero prefixed format
- {dd}: Day in 2 digit, zero prefixed format
- {HH}: Hour in 2 digit, zero prefixed format
- {accountid}: CloudFront Distribution’s AWS Account ID
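For example, to organize by distribution ID instead of by hostname, a variation of template 2 could drop the ${Hostname} prefix from the DestinationResourceArn and use a suffix path like the following (a sketch, not part of the templates above):
      S3SuffixPath: '/{DistributionId}/{yyyy}/{MM}/{dd}/{HH}'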
A complete list of the CloudFront log fields can be found on the CloudFront Standard Logging reference page (currently missing the new fields). New fields are listed below:
- timestamp
- DistributionId
- timestamp(ms)
- origin-fbl
- origin-lbl
- asn
- c-country
- cache-behavior-path-pattern
- distributionid
- distribution-tenant-id
- connection-id
Full list in a text box for easy copy/paste use:
date
time
x-edge-location
sc-bytes
c-ip
cs-method
cs(Host)
cs-uri-stem
sc-status
cs(Referer)
cs(User-Agent)
cs-uri-query
cs(Cookie)
x-edge-result-type
x-edge-request-id
x-host-header
cs-protocol
cs-bytes
time-taken
x-forwarded-for
ssl-protocol
ssl-cipher
x-edge-response-result-type
cs-protocol-version
fle-status
fle-encrypted-fields
c-port
time-to-first-byte
x-edge-detailed-result-type
sc-content-type
sc-content-len
sc-range-start
sc-range-end
cache-behavior-path-pattern
DistributionId
timestamp
timestamp(ms)
origin-fbl
origin-lbl
asn
c-country
connection-id
distribution-tenant-id
distributionid
Note that some information logged can be or is considered personally identifiable information (PII) and should only be logged for CloudFront distributions where you are allowed to store such information. With the ability to select which columns are stored in your logs, you can now exclude fields such as c-ip for compliance purposes. The c-ip field is critical for identifying unique views for web analytics, so removing the field may have adverse consequences. Remember to consult with whoever in your organization manages legal and analytics decisions before making changes.
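As a sketch of what a privacy-trimmed configuration could look like, the RecordFields list in template 2 might simply omit the IP-bearing fields (note that x-forwarded-for also carries client IP addresses); the fields kept below are an arbitrary illustration:
      RecordFields:
        - "date"
        - "time"
        - "x-edge-location"
        - "cs-method"
        - "cs(Host)"
        - "cs-uri-stem"
        - "sc-status"
        - "time-taken"
        # c-ip and x-forwarded-for intentionally omitted for compliance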
There appears to be a limit of one destination type for each distribution, meaning you can only have 1 legacy Standard Logging configuration and 1 Standard Logging v2 configuration (you can’t add a 2nd Standard Logging v2 destination, for example). Keep this in mind when architecting.
CloudFront Standard Logging v2 lacks most CloudFront Viewer header fields
You cannot log some of the useful CloudFront header fields that are available, including most of the CloudFront-Viewer-* headers. Currently only the “asn” and “c-country” fields align with the CloudFront-Viewer-ASN and CloudFront-Viewer-Country headers. Headers including CloudFront-Viewer-Country-Region (e.g. “OH” for Ohio), CloudFront-Viewer-Latitude, CloudFront-Viewer-Longitude, CloudFront-Viewer-Metro-Code (e.g. “535” for the Columbus, Ohio Nielsen Designated Market Area DMA code), CloudFront-Viewer-Postal-Code, and CloudFront-Viewer-Time-Zone are not available. Learn more about CloudFront Viewer headers on the CloudFront request headers page. Hopefully these additional fields will be available in a future Standard Logging v2.1.
CloudFront Standard Logging v2 CloudFormation Example Conclusion
As a software architect, these new CloudFront logging features are very welcome and solve challenges that have plagued organizations for many years. Organizing the log files into folders by date used to require a script or scheduled Lambda function. I personally added the new fields asn, c-country, and cache-behavior-path-pattern to my own logs as they provide insightful information. Specifically, cache-behavior-path-pattern will allow me to confirm the caching options I develop are working as intended. I currently do not have a use case for the Parquet format, but now I have that option in my pocket when working with clients. The option to direct logs to Amazon Data Firehose may be the most useful, as it removes another moving part when getting CloudFront log information to Redshift and OpenSearch without the use of Logstash or CloudFront real-time logs (additional resources and expense).
There is potential to open up Standard Logging to support JavaScript for creating custom log fields. Imagine if a CloudFront Function for “logging” could add custom fields, allowing developers like myself to add our own transformation of the data for compliance and reporting. I have a few use cases if such an option ever becomes available. For an example of CloudFront Functions, check out my Open Sesame CloudFront function that blocks access to a distribution unless the user has the password, perfect for hiding a beta or test site in a public environment.
As a big fan of CloudFront I am glad to continually see options grow that give us more capability.