ChatOps Journey with Ansible, Hubot, AWS and Windows - Part 2
This is Part 2 of the series on setting up a Chatbot for deploying artifacts to AWS EC2 Windows instances. In this post, I'll set up the Chatbot using Hubot.
I created another directory, bot, as the root directory of the Hubot code. Hubot stub code can be generated using the Yeoman generator generator-hubot.
$ yarn global add generator-hubot
$ cd bot
$ yo hubot
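The generator produces a stub project whose layout looks roughly like this (the exact files may vary with the generator version):

bot/
├── bin/hubot
├── external-scripts.json
├── package.json
├── Procfile
└── scripts/example.coffee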
Now I have the basic code of the Chatbot. The generated code contains unnecessary dependencies, so I removed them. Those dependencies need to be removed from both package.json and external-scripts.json; likewise, any extra dependencies you add must be added to both files. I also added the hubot-hipchat adapter to communicate with HipChat.
Below are the dependencies in package.json.
"dependencies": {
"coffee-script": "^1.12.7",
"hubot": "^2.19.0",
"hubot-help": "^0.2.2",
"hubot-hipchat": "^2.12.0-6",
"hubot-scripts": "^2.17.2"
}
Below is the external-scripts.json file.
[
  "hubot-help"
]
Hubot script
Now I can add the script to handle messages. I added ops.js in the scripts folder, which is the place to add custom handlers. Hubot supports both JavaScript and CoffeeScript.
The first thing to consider is what kinds of messages can be handled by the bot. Hubot supports matching fixed messages and using regular expressions. I decided to turn Hubot into a command-line interface, which is very intuitive for developers. I use yargs to parse incoming messages, then invoke the command ansible-playbook to perform the deployment.
In the code below, robot.respond is the function to handle incoming messages. Here I specified the regular expression /deploy (.*)/i to match all messages starting with deploy. I built the yargs parser with one positional argument for the build number and an optional debug flag for enabling debug mode. When parsing succeeds, I use spawn to invoke the command ansible-playbook. If debug mode is enabled, the data from stdout of the spawned process is sent to the user. When the ansible-playbook process exits, I send different messages based on the exit code; (successful) and (boom) are HipChat emoticons. If parsing fails, the output of yargs is sent to the user directly, so the user can see the command-line parsing errors. I also use res.envelope.user.name to get the user's name, which is used to tag the EC2 instances.
const yargs = require('yargs');
const spawn = require('child_process').spawn;

module.exports = (robot) => {
  robot.respond(/deploy (.*)/i, (res) => {
    let parser = yargs.command('deploy <build_num>', 'deploy a version', (yargs) => {
      yargs
        .positional('build_num', {
          describe: 'build number',
        })
        .option('debug', {
          alias: 'd',
          describe: 'enable debug output',
          type: 'boolean',
          default: false,
        });
    }, (argv) => {
      // Sanitize the chat user's name so it can be used to tag the EC2 instances.
      const user = res.envelope.user.name.replace(/\W+/g, '_');
      const buildNum = argv.build_num;
      res.send(`(waiting) Deploying #${buildNum} (tea)`);
      // Run ansible-playbook in a child process ("child" avoids shadowing the global process object).
      const child = spawn(`ansible-playbook -i hosts app.yml --extra-vars "build_num=${buildNum} owner=${user}"`, [], {
        cwd: '/etc/ansible',
        shell: true,
      });
      // In debug mode, relay the playbook's stdout to the user.
      child.stdout.on('data', (data) => {
        if (argv.debug) {
          res.send(`${data}`);
        }
      });
      // Errors are always relayed.
      child.stderr.on('data', (data) => {
        res.send(`${data}`);
      });
      // Report success or failure based on the exit code.
      child.on('close', (code) => {
        if (code !== 0) {
          res.send('(boom) Deployment failed');
        } else {
          res.send(`(successful) Deployment completed. You can access it using http://${buildNum}.mycompany.com`);
        }
      });
    })
      .help();
    // Feed the matched text back into yargs; any parser output (help text or errors) goes to the user.
    parser.parse(`deploy ${res.match[1]}`, (error, argv, output) => {
      if (output) {
        res.reply(output);
      }
    });
  });
};
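With this handler in place, a user can talk to the bot by addressing it by name. Assuming the bot is registered as deploybot (a placeholder name, not the actual one), messages like the following would work:

deploybot deploy 100
deploybot deploy 100 --debug
deploybot deploy --help

Since robot.respond only matches messages addressed to the bot, a bare deploy 100 in the room is ignored.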
To connect to HipChat, several environment variables are required by the adapter.
- HUBOT_HIPCHAT_JID - The bot's Jabber ID.
- HUBOT_HIPCHAT_PASSWORD - The password of the bot's HipChat account.
See more HipChat adapter configuration options here.
The bot can be started using bin/hubot --adapter hipchat.
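For a quick local test, the variables can be exported in the shell before starting the bot; the JID and password below are placeholders, not real credentials.

$ export HUBOT_HIPCHAT_JID="12345_67890@chat.hipchat.com"
$ export HUBOT_HIPCHAT_PASSWORD="not-the-real-password"
$ bin/hubot --adapter hipchat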
Build Docker image
Now I can build the Docker image that includes both Ansible and the Hubot code. I installed NodeJS and copied the bot code to /opt/bot. I also defined the command to run for the container.
RUN groupadd --gid 1000 node \
&& useradd --uid 1000 --gid node --shell /bin/bash --create-home node
# gpg keys listed at https://github.com/nodejs/node#release-team
RUN set -ex \
&& for key in \
94AE36675C464D64BAFA68DD7434390BDBE9B9C5 \
FD3A5288F042B6850C66B31F09FE44734EB7990E \
71DCFD284A79C3B38668286BC97EC7A07EDE3FC1 \
DD8F2338BAE7501E3DD5AC78C273792F7D83545D \
C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8 \
B9AE9905FFD7803F25714661B63B535A4C206CA9 \
56730D5401028683275BD23C23EFEFE93C4CFFFE \
77984A986EBC2AA786BC0F66B01FBB92821C587A \
; do \
gpg --keyserver pgp.mit.edu --recv-keys "$key" || \
gpg --keyserver keyserver.pgp.com --recv-keys "$key" || \
gpg --keyserver ha.pool.sks-keyservers.net --recv-keys "$key" ; \
done
ENV NODE_VERSION 8.9.2
RUN apt-get update && \
apt-get install -y curl
RUN ARCH= && dpkgArch="$(dpkg --print-architecture)" \
&& case "${dpkgArch##*-}" in \
amd64) ARCH='x64';; \
ppc64el) ARCH='ppc64le';; \
*) echo "unsupported architecture"; exit 1 ;; \
esac \
&& curl -SLO "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION-linux-$ARCH.tar.xz" \
&& curl -SLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" \
&& gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc \
&& grep " node-v$NODE_VERSION-linux-$ARCH.tar.xz\$" SHASUMS256.txt | sha256sum -c - \
&& tar -xJf "node-v$NODE_VERSION-linux-$ARCH.tar.xz" -C /usr/local --strip-components=1 --no-same-owner \
&& rm "node-v$NODE_VERSION-linux-$ARCH.tar.xz" SHASUMS256.txt.asc SHASUMS256.txt \
&& ln -s /usr/local/bin/node /usr/local/bin/nodejs
ADD bot /opt/bot
RUN cd /opt/bot && npm i
WORKDIR /opt/bot
CMD [ "/opt/bot/bin/hubot", "--adapter", "hipchat" ]
Once a Docker container is started, it connects to HipChat and runs as a bot. Now I can try to interact with it by sending messages like <Bot name> deploy 100 and wait for it to do the job.
Deploy to AWS ECS
The deployment to AWS ECS is an easy task. I used the Docker image repository provided by Amazon ECR and pushed the image to it. The next step is to create the task definition for the Chatbot. Remember to configure the environment variables through the task definition. Finally, I created the cluster and ran the task using that definition.
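As a rough sketch, a task definition along these lines passes the HipChat credentials to the container; the family, image URI, memory value, and credentials are placeholders rather than the actual configuration.

{
  "family": "chatops-bot",
  "containerDefinitions": [
    {
      "name": "chatops-bot",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/chatops-bot:latest",
      "memory": 512,
      "essential": true,
      "environment": [
        { "name": "HUBOT_HIPCHAT_JID", "value": "12345_67890@chat.hipchat.com" },
        { "name": "HUBOT_HIPCHAT_PASSWORD", "value": "not-the-real-password" }
      ]
    }
  ]
}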
That's all for Part 2. In Part 3, I'll discuss using AWS Lambda to make sure instances are not left running for too long, which saves money.