I have an AWS RDS (Postgres) instance set up, with a table containing a column called "aws_s3_key_path". I need a trigger function that checks that whatever S3 key path gets added to this column actually exists in the S3 bucket. How can I achieve this in a trigger function? Or is there any other way of implementing this "check"?
Attempt: I looked into the aws_s3 extension and found the function aws_s3.table_import_from_s3().
However, it seems to only support CSV, text, or zip files. I need to check whether a GeoPackage file exists (it is a geospatial data format whose attributes resemble a table, but with an extra geometry column).
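Since the aws_s3 extension only does imports, one option (not the only one) is to move the existence check out of SQL: either validate the key in your application before the INSERT/UPDATE, or have the trigger invoke a small Lambda function (the aws_lambda extension for RDS Postgres exposes aws_lambda.invoke for this) that performs an S3 HeadObject call and reports back. Below is a minimal Python/boto3 sketch of that check; the event shape, bucket name, and function names are hypothetical, not part of your setup.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def s3_key_exists(bucket: str, key: str) -> bool:
    """Return True if the exact object key exists in the bucket."""
    try:
        s3.head_object(Bucket=bucket, Key=key)
        return True
    except ClientError as e:
        # HeadObject reports a missing key as a plain 404; anything else
        # (e.g. 403 when the role lacks s3:ListBucket) should surface.
        if e.response["Error"]["Code"] == "404":
            return False
        raise

# Hypothetical Lambda handler: the trigger (or your application) would pass
# the bucket and the value of aws_s3_key_path and get back {"exists": true/false}.
def lambda_handler(event, context):
    return {"exists": s3_key_exists(event["bucket"], event["key"])}
```

Note that HeadObject works for any file type (GeoPackage included), because it only looks at the key, never the contents. Bear in mind that doing a network call inside a trigger will slow down and can fail your writes, which is why many people prefer to do this check in the application layer instead.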
Keep in mind that S3 has no real folders; an object key is just a string. If you create an object with the key

invoices/january/i01.txt

the invoices and january folders will magically appear. Then, if the object is deleted, the folders will magically disappear (because they never existed in the first place). Can I ask... WHY do you need to check if the path exists? Do you need to know if there are files in that path, or just that the path exists (which doesn't really apply to S3)?
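To make that concrete, here is a small boto3 sketch (the bucket and prefix names are made up) showing that a "folder" exists exactly as long as at least one object key starts with that prefix:

```python
import boto3

s3 = boto3.client("s3")

def prefix_has_objects(bucket: str, prefix: str) -> bool:
    """True while at least one object key starts with the prefix.

    Delete the last such object and this immediately returns False:
    the "folder" was never a separate thing, just a shared key prefix.
    """
    resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix, MaxKeys=1)
    return resp.get("KeyCount", 0) > 0

# Example with a hypothetical bucket:
# prefix_has_objects("my-invoices-bucket", "invoices/january/")
```

So if what you actually need is "does this exact key exist", check the key itself (HeadObject); if it is "is there anything under this path", a prefix listing like the above is the closest S3 equivalent.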