36

Trying to run a simple AWS CLI backup script. It loops through lines in an include file, backs those paths up to S3, and dumps output to a log file. When I run the script directly, it runs without any errors. When I run it through cron, I get an "Unable to locate credentials" error in my output log.

The shell script:

AWS_CONFIG_FILE="~/.aws/config"

while read p; do
 /usr/local/bin/aws s3 cp "$p" s3://PATH/TO/BUCKET --recursive >> /PATH/TO/LOG 2>&1
done </PATH/TO/INCLUDE/include.txt

I only added the AWS_CONFIG_FILE line after I started seeing the error, thinking it might fix things (even though I'm pretty sure that's where AWS looks by default).

The shell script is running as root. I can see the AWS config file at the specified location, and it all looks good to me (like I said, it runs fine outside of cron).

3
  • 2
    Try an absolute path to ~/.aws/config.
    – ceejayoz
    Commented Jul 23, 2014 at 16:00
  • Definitely tried that first (was using /root/.aws/config), but jumped back to ~/ after seeing it in some other threads. Same error either way. Commented Jul 24, 2014 at 18:42
  • 3
    Not a direct answer, but a comment about using the API keys: it is better practice (and much easier) to assign roles to your instances and create policies around those roles; then you are not required to specify the keys at all, or to have them lying around in plaintext on the instance. Unfortunately this can only be specified at instance creation time. As an aside, for copying logfiles (and backups etc.) have a look at the s3cmd tools, which provide functionality similar to rsync.
    – nico
    Commented Aug 24, 2015 at 11:57

16 Answers

24

If it works when you run it directly but not from cron, there is probably something different in the environment. You can save your environment interactively by doing:

set | sort > env.interactive

And do the same thing in your script

set | sort > /tmp/env.cron

Then run diff /tmp/env.cron env.interactive and see what differs. Things like PATH are the most likely culprits.
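If you'd rather not edit the script first, a throwaway crontab entry can capture cron's environment the same way. A minimal sketch (the every-minute schedule is arbitrary, and the entry should be removed afterwards):

* * * * * set | sort > /tmp/env.cron

After a minute you can run the same diff against your interactive capture.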

2
  • 4
    Thanks! A step toward being able to troubleshoot the problem on my own is basically invaluable. There were definitely several differences in the PATH variable, and I kind of think in this case it was the difference in HOME that was throwing things off. As for my specific issue, I ended up just running this from the user's cron file, instead of /etc/crontab, which solved everything on my end. Thanks again! Commented Jul 26, 2014 at 16:55
  • 1
    Right. Adding the correct PATH variable to the script (echo $PATH will tell you what it should be) usually solves it.
    – Fr0zenFyr
    Commented Nov 23, 2016 at 17:50
46

When you run a job from crontab, your $HOME environment variable is /

The Amazon client looks for either

~/.aws/config

or

~/.aws/credentials

If $HOME = /, then the client won't find those files

To make it work, update your script so that it exports an actual home directory for $HOME

export HOME=/root

and then put a config or credentials file in

/root/.aws/
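
Applied to the script in the question, the fix would look something like this (a sketch that keeps the question's placeholder paths):

#!/bin/bash
# cron may start the job with HOME=/ ; point the CLI at root's home
export HOME=/root

while read p; do
    /usr/local/bin/aws s3 cp "$p" s3://PATH/TO/BUCKET --recursive >> /PATH/TO/LOG 2>&1
done </PATH/TO/INCLUDE/include.txt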
2
  • 1
    This helped, along with the following fix from stackoverflow.com/a/26480929/354709, which involved adding the absolute path for the aws command - as $PATH wasn't set correctly for the root user.
    – Dan Smart
    Commented Jun 24, 2016 at 16:17
  • 4
    This should be the accepted answer.
    – Madbreaks
    Commented Mar 7, 2017 at 6:52
6

I was able to solve this issue through the following:

export AWS_CONFIG_FILE="/root/.aws/config"
export AWS_ACCESS_KEY_ID=XXXX
export AWS_SECRET_ACCESS_KEY=YYYY
2
  • 3
    But the whole point of doing aws configure is so that you don't have to put credentials in e.g. scripts. See answer posted by @chicks to solve this properly.
    – Madbreaks
    Commented Mar 7, 2017 at 6:53
  • 5
    Don't store AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY values in scripts. The first line should have already provided those values.
    – AWippler
    Commented Feb 3, 2018 at 14:18
3

Put these lines before the command to be executed in crontab -e:

SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
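
A complete crontab might then look something like this (a sketch; the schedule and script path are illustrative):

SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

# nightly S3 backup at 02:00
0 2 * * * /PATH/TO/backup.sh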
2
  • I tried the first solution with the diff, but found nothing. The trick for me was the PATH variable. Commented Oct 28, 2017 at 0:19
  • Adding the PATH variable was a simple solution to this problem for me.
    – the4thv
    Commented Aug 26, 2020 at 9:29
1

The aws cli binary is installed at /usr/local/bin/aws.

The problem I had was that the cron job could not access /usr/local/bin/aws while running; it could only access /usr/bin/.

What I did was create a symlink for aws in /usr/bin with the command below.

root@gateway:~# ln -s /usr/local/bin/aws /usr/bin/aws

I also added some changes in my script; here is a sample function:

starter () {
    echo "
    ==================================================

    Starting Instance

    ==================================================
    "

    # start the instance, give it time to come up, then attach the Elastic IP
    /usr/bin/aws ec2 start-instances --instance-ids "$instance" --region us-east-1

    sleep 30

    echo "Assigning IP Address"

    /usr/bin/aws ec2 associate-address --instance-id "$instance" --region us-east-1 --public-ip XX.XX.XX.XX
}

And the cron entry:

30 5 * * * sh /usr/local/cron/magentocron.sh

This method worked for me.

2
  • Mansur, your answer formatting is completely broken.
    – Aldekein
    Commented Sep 27, 2016 at 11:59
  • 1
    Using the full path /usr/bin/aws is key to the solution. Commented Mar 12, 2018 at 1:28
1

This line in the default .bashrc file for the user will prevent non-interactive shells from getting the full user environment (including the PATH variable):

# If not running interactively, don't do anything
[ -z "$PS1" ] && return

Comment the line out to allow $HOME/.bashrc to be executed from a non-interactive context.

I also had to add an explicit source command to my shell script to get the environment set up correctly:

#!/bin/bash
source $HOME/.bashrc

See this answer for additional info.

1

The $PATH environment variable holds the locations of binaries, and cron's $PATH might not include the location of awscli.

What you can do is find the path of the awscli binary:

# which aws
/usr/local/bin/aws

and add that path to crontab's $PATH by putting the line below at the beginning of your script (after the shebang):

PATH=$PATH:/usr/local/bin/

This worked for me!

1
  • Your answer worked for me. I'd been scratching my head for an hour. Thanks, buddy
    – Hussain7
    Commented Jun 25, 2019 at 11:50
0

I know it's not the perfect solution, but this worked for me:

export HOME=/home/user
export AWS_CONFIG_FILE="/home/user/.aws/config"
export AWS_ACCESS_KEY_ID=XXX
export AWS_SECRET_ACCESS_KEY=XXX
0

Just to add something: I was having this issue with a newer bash version while using the awscli tool installed via pip; I found that nothing would work with this tool under the newer bash versions.

I was able to resolve it by installing aws-apitools-ec2, which can be installed with:

yum install -y aws-apitools-ec2 

See its guide for further reference:

http://docs.aws.amazon.com/AWSEC2/latest/CommandLineReference/ec2-clt.pdf

1
  • On Ubuntu 16.04 I couldn't find the package. Commented Oct 28, 2017 at 0:18
0

I had the same issue, but after removing the stderr redirect from my cron entry (2>&1), I saw aws: command not found in the log.

This is because the AWS cli was installed in the user's home folder and I had added a line to my user's .bash_profile to add the AWS cli path to the $PATH. Oddly, this is in fact the way that the AWS cli install documentation tells you to install it. But the user's .bash_profile doesn't get used when the user's crontab is executed (at least not in my environment anyway).

So all I did to fix this was ensure my crontab script also had the aws cli in its path. So below the shebang of my script, I now have PATH=~/.local/bin:$PATH.
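
In script form, that looks something like this (a sketch; the sync command and paths are illustrative):

#!/bin/bash
# prepend the pip --user install location so the aws binary is found
PATH=~/.local/bin:$PATH

aws s3 sync /PATH/TO/DATA s3://PATH/TO/BUCKET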

0

For me this did the trick:

#!/bin/bash

# export these so the aws child process actually sees them
export HOME=/home/ubuntu
export AWS_CONFIG_FILE="/home/ubuntu/.aws/config"

aws ec2 describe-instances # or whatever command you need to use

The default user on today's Ubuntu EC2 instances is ubuntu, and /home/ubuntu is that user's home folder. That's where the aws cli configuration lives as well.

0

Not the best, but I had to provide the config directly in my shell/bash script before the AWS client commands, like so:

#!/bin/bash

export AWS_ACCESS_KEY_ID=<ZZZ>
export AWS_SECRET_ACCESS_KEY=<AAA>
export AWS_DEFAULT_REGION=<BBB>
aws s3 cp ....
0

Assuming you are using Ubuntu and trying to add a cron job for the root user (the steps are combined in the sketch after the list):

  1. Add this to the top of your shell script: export HOME=/root
  2. Ensure that when you use any aws command in your script, you use the full path, i.e. /usr/local/bin/aws s3 cp <from> <to>.
  3. sudo mkdir /root/.aws
  4. sudo cp ~/.aws/credentials /root/.aws/
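
A minimal sketch of the resulting script (the source and bucket paths are hypothetical placeholders):

#!/bin/bash
# step 1: give cron a real home so the CLI finds /root/.aws/credentials
export HOME=/root

# step 2: call the CLI by its full path
/usr/local/bin/aws s3 cp /PATH/TO/SOURCE s3://PATH/TO/BUCKET --recursive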
0

If you have attached an IAM task role to your ECS Fargate task, then this solution will work:

  1. Add the following line to entrypoint.sh:

     declare -p | grep -Ev 'BASHOPTS|BASH_VERSINFO|EUID|PPID|SHELLOPTS|UID' > /container.env

  2. Add the lines below to your crontab or cron file:

     SHELL=/bin/bash
     BASH_ENV=/container.env

It worked for me.
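
A sketch of how the two pieces fit together, assuming a Debian-style image where cron can run in the foreground (the schedule, aws command, and log path are illustrative):

#!/bin/bash
# entrypoint.sh: snapshot the environment (including the
# AWS_CONTAINER_CREDENTIALS_RELATIVE_URI variable that the task role provides)
# so cron jobs can pick it up via BASH_ENV
declare -p | grep -Ev 'BASHOPTS|BASH_VERSINFO|EUID|PPID|SHELLOPTS|UID' > /container.env

cron -f

And the cron file:

SHELL=/bin/bash
BASH_ENV=/container.env

*/5 * * * * /usr/local/bin/aws s3 ls >> /var/log/cron.log 2>&1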

0

A solution for me was to set both the config and credentials file locations via environment variables:

export AWS_CONFIG_FILE="/root/.aws/config"
export AWS_SHARED_CREDENTIALS_FILE="/root/.aws/credentials"
0

Create the file /root/.aws/credentials. It needs to be here because when the aws cli runs from root's cron, the home path is always /root.

[default]
aws_access_key_id = XXXXXXXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXX

Create a script at /usr/local/backup-script-s3.sh. Export HOME as /root, because the aws cli needs it:

export HOME=/root

aws s3 cp --recursive /XXXXXXXX s3://sXXXXXXXXXX

Create a cron job to execute the script for the local and cloud backups:

0 0 * * * sh /usr/local/backup-script-s3.sh #cloud job
