jupyter-naas
GitHub Repository: jupyter-naas/awesome-notebooks
Path: blob/master/AWS/AWS_Read_dataframe_from_S3.ipynb
Kernel: Python 3


AWS - Read dataframe from S3


Tags: #aws #cloud #storage #S3bucket #operations #snippet #dataframe

Last update: 2023-11-20 (Created: 2022-04-28)

Description: This notebook demonstrates how to read a dataframe from an Amazon Web Services (AWS) Simple Storage Service (S3) bucket.

Input

Import libraries

```python
import naas

try:
    import awswrangler as wr
except ModuleNotFoundError:
    # Install awswrangler on first run, then retry the import
    !pip install awswrangler --user
    import awswrangler as wr
from os import environ
```

Setup variables

Mandatory

  • aws_access_key_id: This variable is used to store the AWS access key ID.

  • aws_secret_access_key: This variable is used to store the AWS secret access key.

  • bucket_path: The S3 path (bucket and optional prefix) from which you want to read the dataframe.

```python
# Mandatory
aws_access_key_id = naas.secret.get("AWS_ACCESS_KEY_ID") or "YOUR_AWS_ACCESS_KEY_ID"
aws_secret_access_key = naas.secret.get("AWS_SECRET_ACCESS_KEY") or "YOUR_AWS_SECRET_ACCESS_KEY"
bucket_path = "s3://naas-data-lake/example/"
```
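If you prefer to keep the bucket name and key prefix as separate variables, a small helper can compose the `s3://` path. This helper (`build_s3_path`) is not part of the notebook — it is a minimal illustrative sketch:

```python
def build_s3_path(bucket: str, prefix: str = "") -> str:
    """Compose an s3:// URI from a bucket name and an optional key prefix."""
    prefix = prefix.strip("/")
    return f"s3://{bucket}/{prefix}/" if prefix else f"s3://{bucket}/"

print(build_s3_path("naas-data-lake", "example"))  # s3://naas-data-lake/example/
```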

Model

Set environ

```python
environ["AWS_ACCESS_KEY_ID"] = aws_access_key_id
environ["AWS_SECRET_ACCESS_KEY"] = aws_secret_access_key
```

Get dataframe

```python
df = wr.s3.read_parquet(bucket_path, dataset=True)
print("Rows:", len(df))
```
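`wr.s3.read_parquet` returns a pandas DataFrame, so the usual pandas checks apply before using the data downstream. A quick sketch using a local stand-in DataFrame (`sample_df` is illustrative; in the notebook, `df` comes from S3):

```python
import pandas as pd

# Local stand-in for the DataFrame returned from S3 (illustrative only)
sample_df = pd.DataFrame({"id": [1, 2, 3], "value": ["a", "b", "c"]})

# Basic sanity checks on shape and schema
print("Rows:", len(sample_df))              # Rows: 3
print("Columns:", list(sample_df.columns))  # Columns: ['id', 'value']
print(sample_df.head())
```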

Output

Display result

```python
df
```