
Top 50 Ansible Interview Questions

30 min read

Ansible is an open-source engine that automates configuration management, application deployment, and other DevOps tasks. It is a simple, agentless IT automation technology that can improve your current processes, migrate applications for better optimization, and provide a single language for DevOps practices across your organization.

If you’re looking for Ansible interview questions, you are at the right place. With a market share of around 4.4%, there are a lot of opportunities from many reputed companies in the world.

Below are the top 50 interview questions for candidates who want to prepare for an Ansible interview:

Ansible – ( Beginner Level )
1. What is Ansible and why is it damn popular?

If you are attending an interview for the position of DevOps Engineer, you really need to have in-depth knowledge of DevOps tools, software & processes targeted at automating IT. There are various popular automation tools, both open source and commercial, targeted at enterprise IT. One of the most popular modern automation platforms is Ansible.

It is basically an IT automation tool. It can configure systems, deploy software, and orchestrate more advanced IT tasks such as continuous deployments or zero downtime rolling updates. The major reasons why Ansible is so popular are simplicity and ease-of-use. Not only this, it has a strong focus on security and reliability, featuring a minimum of moving parts, usage of OpenSSH for transport (with other transports and pull modes as alternatives), and a language that is designed around auditability by humans–even those not familiar with the program.

Let us agree that implementing DevOps tools, software & processes can help revolutionize your organization, but adopting a DevOps framework doesn’t require updating your entire IT stack to newer agile implementations first. Quite simply, your organization can adopt DevOps through automation, even if you are running only on bare metal, migrating to the cloud, or already going full force into containers. Ansible caters to this need fantastically and is damn popular. Listed below are the top 5 reasons for its popularity:

1. Ansible uses a simple syntax (YAML) and is easy for anyone (developers, sysadmins, managers) to understand. APIs are simple and sensible.

2. Ansible does three things in one, and does them very well. Ansible’s ‘batteries included’ approach means you have everything you need in one complete package.

3. Ansible is fast to learn, fast to set up—especially considering you don’t need to install extra agents or daemons on all your servers!

4. Ansible is efficient. No extra software on your servers means more resources for your applications.

5. Extensibility – since Ansible modules work via JSON, you can easily extend Ansible with modules written in a programming language you already know.

2. How is Ansible different from Puppet?

Multiple IT automation tools like Puppet, Chef, CFEngine, etc. appeared in the mid-to-late 2000s. They came with their own documentation, which was still not up to the mark for sysadmins to learn and adopt inside the datacenter. One reason why many developers and sysadmins stuck to shell scripting and command-line configuration was that it’s simple, easy to use, and backed by years of experience with bash and command-line tools. “Why learn yet another IT automation tool and syntax?” was one of the concerns raised when many such tools appeared around the same time.

Ansible was primarily built by developers and sysadmins who love the command line and want a tool that helps them manage their servers exactly as they did in the past, but in a repeatable and centrally managed way. One of Ansible’s greatest strengths is its ability to run regular shell commands verbatim, so you can take existing scripts and commands and work on converting them into idempotent playbooks as time allows.

If Ansible tops the chart of popularity, Puppet is the 2nd most popular automation platform, available both as open source and as a commercial product. Below is a list of the major differences between Ansible and Puppet which you should be aware of:

Ansible: Developed to simplify complex orchestration and configuration management tasks.
Puppet: Can be difficult for new users, who must learn the Puppet DSL or Ruby; advanced tasks usually require input from the CLI.

Ansible: The platform is written in Python and allows users to describe automation in YAML.
Puppet: Written in Ruby and uses its own Puppet DSL.

Ansible: Automated workflow for Continuous Delivery.
Puppet: Visualization and reporting.

Ansible: Doesn’t require agents on every system, and modules can reside on any server.
Puppet: Uses an agent/master architecture. Agents manage nodes and request relevant info from masters that control configuration info. The agent polls status reports and queries regarding its associated server machine from the master Puppet server, which then communicates its response and required commands using the XML-RPC protocol over HTTPS.

Ansible: The Self-Support offering starts at $5,000 per year, and the Premium version goes for $14,000 per year, for 100 nodes each.
Puppet: Puppet Enterprise is free for up to 10 nodes. Standard pricing starts at $120 per node.

Ansible: The GUI (Ansible Tower) is still a work in progress.
Puppet: Mature, well-established GUI.

Ansible: CLI accepts commands in almost any language.
Puppet: Must learn the Puppet DSL.
3. What capabilities does Ansible offer?

This interview question assesses a candidate’s experience with Ansible, both theoretical and practical. A simple way to answer it could be:

Ansible works by pushing changes out to all your servers (by default), and requires no extra software to be installed on your servers (thus no extra memory footprint, and no extra daemon to manage), unlike most other configuration management tools.

Consider any configuration management (CM) tool. One of its abilities is to ensure the same configuration is maintained no matter whether you run it once or a thousand times. Shell scripts often have unintended consequences if you execute them more than once or twice, but Ansible can deploy the same configuration to a server over and over again without making any changes after the first deployment.

Ansible products offer the following capabilities:

  • Streamlined provisioning

Ansible is quite popular in streamlining the entire process. Provisioning with Ansible is simple and allows you to seamlessly transition into configuration management, orchestration and application deployment using the same simple, human readable, automation language.

  • Configuration management

If you’re looking for the simplest CM solution available in the market today, Ansible is the de facto choice. It requires nothing more than a password or SSH key in order to start managing systems, and it can start managing them without installing any agent software, avoiding the problem of “managing the management” common in many automation systems. There’s no more wondering why configuration management daemons are down, when to upgrade management agents, or when to patch security vulnerabilities in those agents.

Ansible is the simplest solution for configuration management available. It’s designed to be minimal in nature, consistent, secure and highly reliable, with an extremely low learning curve for administrators, developers and IT managers.

With very simple data descriptions of your infrastructure (both human-readable and machine-parsable), Ansible ensures that everyone on your team will be able to understand the meaning of each configuration task. New team members will be able to quickly dive in and make an impact. Existing team members can get work done faster, freeing up cycles to attend to more critical and strategic work instead of configuration management.

  • App deployment

App deployment becomes a matter of minutes compared to hours with the traditional approach to systems management. When you define and manage your application deployment with Ansible, teams are able to effectively manage the entire application lifecycle from development to production.

  • Automated workflow for Continuous Delivery

Ansible provides not only a multi-tier but also a multi-step orchestration platform. The push-based architecture of Ansible allows very fine-grained control over operations. It is able to orchestrate configuration of servers in batches, all while working with load balancers, monitoring systems, and cloud or web services. Slicing thousands of servers into manageable groups and updating them 100 at a time is incredibly simple, and can be done in half a page of automation content.

And this is all possible today using Ansible Playbooks. They keep your applications properly deployed (and managed) throughout their entire lifecycle.

  • Security and Compliance policy integration into automated processes

Ansible has the capability to simply define your systems for security. Ansible’s easily understood playbook syntax allows you to secure any part of your system, whether it’s setting firewall rules, locking down users and groups, or applying custom security policies.

  • Simplified orchestration

This part needs a special mention of Ansible Tower. Ansible Tower’s self-service surveys help you delegate complex orchestration to whoever in your organization needs it. With Ansible and Ansible Tower, orchestrating the most complex tasks becomes merely the click of a button, even for non-technical people in your organization.

4. Can you list the pros and cons of Ansible?

This is a very important question which identifies the candidate’s understanding of the limitations of Ansible and the tools being used. Undoubtedly, every automation tool available in the market has limitations. Ansible too has certain pros and cons.

Below is the list of pros of Ansible, which is self-explanatory:

  • Easy installation and initial setup
  • Syntax and workflow is fairly easy to learn for new users.
  • Easy remote execution, and low barrier to entry.
  • Highly secure using SSH.
  • Suitable for environments designed to scale rapidly.
  • Shares facts between multiple servers, so they can query each other.
  • Powerful orchestration engine. Strong focus on areas where others lack, such as zero-downtime rolling updates to multi-tier applications across the cloud.
  • Sequential execution order.
  • Supports both push and pull models.
  • Lack of a master eliminates failure points and performance issues. Agent-less deployment and communication is faster than the master-agent model.

Cons:

  • Underdeveloped GUI with limited features.
  • Requires root SSH access and Python interpreter installed on machines, although agents are not required
  • Increased focus on orchestration over configuration management.
  • SSH communication slows down in scaled environments.
  • The syntax across scripting components such as playbooks and templates can vary.
5. How does Ansible manage machines?

Ansible manages machines in an agent-less manner. There is never a question of how to upgrade remote daemons or the problem of not being able to manage systems because daemons are uninstalled. Because OpenSSH is one of the most peer-reviewed open source components, security exposure is greatly reduced. Ansible is decentralized–it relies on your existing OS credentials to control access to remote machines. If needed, Ansible can easily connect with Kerberos, LDAP, and other centralized authentication management systems.

Ansible by default manages machines over the SSH protocol. Once Ansible is installed, it will not add a database, and there will be no daemons to start or keep running. You only need to install it on one machine (which could easily be a laptop) and it can manage an entire fleet of remote machines from that central point. When Ansible manages remote machines, it does not leave software installed or running on them, so there’s no real question about how to upgrade Ansible when moving to a new version.

Ansible uses an inventory file (basically, a list of servers) to communicate with your servers. Like a hosts file (at /etc/hosts) that matches IP addresses to domain names, an Ansible inventory file matches servers (IP addresses or domain names) to groups. Inventory files can do a lot more, but for now, we’ll just create a simple file with one server. One can easily create a file at /etc/ansible/hosts (the default location for Ansible inventory file), and add one server to it as shown below:

$ sudo mkdir /etc/ansible

$ sudo touch /etc/ansible/hosts

The entry under this file look like as shown below:

[test]

www.test.com

…where test is the group of servers you’re managing and www.test.com is the domain name (or IP address) of a server in that group. If you’re not using port 22 for SSH on this server, you will need to add it to the address, like www.test.com:2222, since Ansible defaults to port 22 and won’t get this value from your ssh config file.

Now that you’ve installed Ansible and created an inventory file, it’s time to run a command to see if everything works! Enter the following in the terminal (we’ll do something safe so it doesn’t make any changes on the server):

$ ansible test -m ping -u [username]

…where [username] is the user you use to log into the server. If everything worked, you should see a message that shows www.test.com | success >>, then the result of your ping. If it didn’t work, run the command again with -vvvv on the end to see verbose output. Chances are you don’t have SSH keys configured properly—if you login with ssh username@www.test.com and that works, the above Ansible command should work, too.

6. What is the minimal requirement for Ansible to work?

Currently Ansible can be run from any machine with Python 2 (versions 2.6 or 2.7) or Python 3 (versions 3.5 and higher) installed (Windows isn’t supported for the control machine). This includes Red Hat, Debian, CentOS, OS X, any of the BSDs, and so on.

7. What Windows platforms does Ansible support?

The supported operating system versions are:

  • Windows Server 2008
  • Windows Server 2008 R2
  • Windows Server 2012
  • Windows Server 2012 R2
  • Windows Server 2016
  • Windows 7
  • Windows 8.1
  • Windows 10

8. Is it possible to manage Windows Nano Server using Ansible?

Windows Nano Server is not currently supported by Ansible, since it does not have access to the full .NET Framework that is used by the majority of the modules and internal components.

9. What is the minimal requirement on Managed Nodes under Ansible?

On the managed nodes, you need a way to communicate, which is normally ssh. By default this uses sftp. If that’s not available, you can switch to scp in ansible.cfg. You also need Python 2 (version 2.6 or later) or Python 3 (version 3.5 or later).

10. Does Ansible work with SELinux?

If you have SELinux enabled on remote nodes, you will also want to install libselinux-python on them before using any copy/file/template related functions in Ansible. You can use the yum module or dnf module in Ansible to install this package on remote systems that do not have it.

11.  What are Inventory parsing and data sources under Ansible?

Under Ansible, the inventory is the most basic building block of Ansible architecture. In Ansible, nothing happens without an inventory. Even ad hoc actions performed on localhost require an inventory, even if that inventory consists just of the localhost.

When executing ansible or ansible-playbook, an inventory must be referenced. Inventories are either files or directories that exist on the same system that runs ansible or ansible-playbook. The location of the inventory can be referenced at runtime with the --inventory-file (-i) argument, or by defining the path in an Ansible config file. Inventories can be static or dynamic, or even a combination of both, and Ansible is not limited to a single inventory.

The standard practice is to split inventories across logical boundaries, such as staging and production, allowing an engineer to run a set of plays against their staging environment for validation, and then follow with the same exact plays run against the production inventory set. Variable data, such as specific details on how to connect to a particular host in your inventory, can be included along with an inventory in a variety of ways as well, and we’ll explore the options available to you.

Ansible can manage only the servers it explicitly knows about. Provide Ansible with information about servers by specifying them in an inventory file.

Create a server alias called testserver

Create a file called hosts in the playbooks directory. This file will serve as the inventory file

Example: playbooks/hosts

testserver ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user=vagrant ansible_ssh_private_key_file=.vagrant/machines/default/virtualbox/private_key

Ansible uses your local SSH client, which means that it will understand any aliases that are in your SSH config file.

Ansible automatically adds one host to the inventory by default: localhost. It understands that localhost refers to the local machine, so it will interact with it directly rather than connecting by SSH

Connect to the server named testserver described in the inventory file named hosts and invoke the ping module:

$ ansible testserver -i hosts -m ping

12. What do you mean by static inventory under Ansible?

The static inventory is the most basic of all the inventory options. Typically, a static inventory will consist of a single file in the ini format. Here is an example of a static inventory file describing a single host, mastery.example.name: 

mastery.example.name 

That is all there is to it. Simply list the names of the systems in your inventory. Of course, this does not take full advantage of all that an inventory has to offer. If every name were listed like this, all plays would have to reference specific host names, or the special all group. This can be quite tedious when developing a playbook that operates across different sets of your infrastructure.

At the very least, hosts should be arranged into groups. A design pattern that works well is to arrange your systems into groups based on expected functionality. At first, this may seem difficult if you have an environment where single systems can play many different roles, but that is perfectly fine. Systems in an inventory can exist in more than one group, and groups can even consist of other groups! Additionally, when listing groups and hosts, it’s possible to list hosts without a group. These would have to be listed first, before any other group is defined.
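
For illustration, a small grouped static inventory following these conventions might look like this (the host and group names are hypothetical and purely illustrative, with an ungrouped host listed first):

backup.example.name

[web]
mastery.example.name
second.example.name

[dns]
backup.example.name

[site1:children]
web
dns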

13. Does Ansible support Solaris?

By default, Solaris 10 and earlier run a non-POSIX shell which does not correctly expand the default tmp directory Ansible uses ( ~/.ansible/tmp). If you see module failures on Solaris machines, this is likely the problem. There are several workarounds:

You can set remote_tmp to a path that will expand correctly with the shell you are using (see the plugin documentation for C shell, fish shell, and Powershell). For example, in the ansible config file you can set:

remote_tmp=$HOME/.ansible/tmp

In Ansible 2.5 and later, you can also set it per-host in inventory like this:

solaris1 ansible_remote_tmp=$HOME/.ansible/tmp

You can set ansible_shell_executable to the path to a POSIX compatible shell. For instance, many Solaris hosts have a POSIX shell located at /usr/xpg4/bin/sh so you can set this in inventory like so:

solaris1 ansible_shell_executable=/usr/xpg4/bin/sh

(bash, ksh, and zsh should also be POSIX compatible if you have any of those installed).

14. Which protocols does Ansible use to communicate with Linux & Windows?

For Linux, the protocol used is SSH.

For Windows, the protocol used is WinRM.

15. Explain dynamic inventory and its use cases in Ansible.

Often a user of a configuration management system will want to keep inventory in a different software system. Frequent examples include pulling inventory from a cloud provider, LDAP, Cobbler, or a piece of expensive enterprisey CMDB software.

Ansible easily supports all of these options via an external inventory system. The inventory directory contains some of these already – including options for EC2/Eucalyptus, Rackspace Cloud, and OpenStack.

Create an AWS infrastructure using the Ansible EC2 dynamic inventory

Suppose you have a requirement to launch an instance and install some packages on top of it in one go: what would your approach be?

To setup dynamic inventory management, you need two files:

https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.py
https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.ini

The ec2.py file is a Python script which is responsible for fetching details of the EC2 instances, whereas the ec2.ini file is the configuration file used by ec2.py.

Ansible uses AWS Python library boto to communicate with AWS using APIs. To allow this communication, export the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY variables.

You can use the inventory in two ways:

  • Pass it directly to an ansible-playbook command using the -i option and copy the ec2.ini file to current directory where all the Ansible commands are running
  • Copy the ec2.py file to /etc/ansible/hosts, make it executable using chmod +x, and copy the ec2.ini file to /etc/ansible/ec2.ini

An example of using the EC2 dynamic inventory to simply ping all machines:

$ ansible -i ec2.py all -m ping

16. What are Ansible Modules?

Ansible modules are components installed with Ansible that do all the heavy lifting. They can be classified as core and extra modules. The main difference between the two is that core modules come with Ansible and are built and maintained by Ansible, Inc. and Red Hat employees. Extra modules can be easily installed using your distribution’s package manager or directly from GitHub.

Below is a table of some core modules:

Module – Function
copy   – Copies files or folders from the local machine to the configured server
user   – Creates, deletes or alters user accounts on the configured server
npm    – Manages Node.js packages
ping   – Checks SSH connection to servers defined in the inventory
setup  – Collects various information about servers
cron   – Manages crontab

The majority of modules expect one or more arguments that tune the way a module works; for example, the copy module has src and dest arguments that tell the module the source and destination of the file or directory to be copied.

The command below will copy a file named “my_app.zip” from the current directory to the /var/www/html directory on the configured servers:

# ansible all -m copy -a "src=my_app.zip dest=/var/www/html"

17. What are Ansible tasks? Explain in detail with an example.

Ansible tasks are atomic actions defined by a name and an accompanying module.
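
As a minimal sketch, a task that installs a package with the yum module looks like this (the package name mysql is taken from the description below):

- name: install mysql
  yum:
    name: mysql
    state: installed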

The anatomy of this task is quite simple: its name is “install mysql”, the module in use is “yum”, and it has two arguments; the name argument refers to the package, which needs to be in the state “installed”.

This brings us to one important Ansible feature: Ansible does not expect commands or functions that do something – Ansible tasks describe the desired state of the configured server. If a package named “mysql” is installed, Ansible will not install it again. This means that it is perfectly safe to run tasks several times as they will not alter the system if its configuration is in the state described in those tasks.

A single task can only use one module. If, for example, I wanted to install MySQL and start the mysqld service, I would need two tasks to achieve that.
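
For instance, a minimal sketch of those two tasks (assuming the yum and service modules, with mysql as the package and mysqld as the service name):

- name: install mysql
  yum:
    name: mysql
    state: installed

- name: start mysqld service
  service:
    name: mysqld
    state: started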

18. Can you explain what playbooks are in Ansible? Explain with some examples.

Tasks by themselves have no real use, so we combine them into playbooks. Playbooks, therefore, are collections of tasks that describe the desired state of the configured server and configure it. Playbooks are written in YAML because it is extremely human- and machine-readable.

An example playbook may look like this:

- name: Common tasks
  hosts: webservers
  become: true
  tasks:
    - name: task 1
      ...
  handlers:
    - name: handler 1
  • Reading from the top, line starting with “name” is playbook name. 
  • Next line tells Ansible on which hosts to apply the tasks from this playbook. Hosts are defined in the inventory file. 
  • Third line contains the word “become” which tells Ansible that it should run the tasks with elevated privileges; e.g. as super user. 
  • In the last few lines comes a list of tasks, after which handlers are defined. 

Note : Tasks will be executed one by one in the order they are written in. It is  important to note that in the situation where Ansible executes a playbook on several servers, tasks are running in parallel on all servers.

19. How do handlers work in task execution in Ansible? Can you explain with some examples?

During the configuration process, there is sometimes a need to execute a task conditionally. Handlers are one of the conditional forms supported by Ansible. A handler is similar to a task, but it runs only if it was notified by a task.

A task will fire the notification if Ansible recognizes that the task has changed the state of the system. An example situation where handlers are useful is when a task modifies a configuration file of some service, MySQL for example. In order for the changes to take effect, the service needs to be restarted.

- name: change mysql max_connections
  copy: src=edited_my.cnf dest=/etc/my.cnf
  notify:
    - restart_mysql

The notify keyword acts as a trigger for the handler named “restart_mysql”.
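
The corresponding handler might be defined like this (a sketch assuming the service module and a service named mysqld):

handlers:
  - name: restart_mysql
    service: name=mysqld state=restarted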

20. Do you know how Host key checking is enabled in Ansible?

If a host is reinstalled and has a different key in ‘known_hosts’, this will result in an error message until corrected. If a host is not initially in ‘known_hosts’ this will result in prompting for confirmation of the key, which results in an interactive experience if using Ansible, from say, cron. You might not want this.

If you understand the implications and wish to disable this behavior, you can do so by editing /etc/ansible/ansible.cfg or ~/.ansible.cfg:

[defaults]
host_key_checking = False

Alternatively this can be set by the ANSIBLE_HOST_KEY_CHECKING environment variable:

$ export ANSIBLE_HOST_KEY_CHECKING=False

21. Have you heard about Ansible-Doc? 

Yes. Ansible-Doc displays information on modules installed in Ansible libraries. It displays a terse listing of plugins and their short descriptions, provides a printout of their DOCUMENTATION strings, and it can create a short “snippet” which can be pasted into a playbook.
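
For example, to list all available modules, show the full documentation for the copy module, or print a playbook snippet for it:

$ ansible-doc -l
$ ansible-doc copy
$ ansible-doc -s copy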

Ansible – ( Intermediate Level )

22. Can I deploy a virtual machine on a standalone ESXi server ?

Yes. vmware_guest can deploy a virtual machine with required settings on a standalone ESXi server.

23. What’s an ad-hoc command under Ansible Terminology?

An ad-hoc command is something that you might type in to do something really quick, but don’t want to save for later.

This is a good place to start to understand the basics of what Ansible can do prior to learning the playbooks language – ad-hoc commands can also be used to do quick things that you might not necessarily want to write a full playbook for.
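
For example, checking uptime on every host in the inventory with the shell module is a typical ad-hoc command:

$ ansible all -m shell -a "uptime"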

24. Explain patterns with examples in Ansible.

Patterns in Ansible are how we decide which hosts to manage. This can mean what hosts to communicate with, but in terms of Working With Playbooks it actually means what hosts to apply a particular configuration or IT process to.

Below is a simple example of pattern usage:

# ansible <pattern_goes_here> -m <module_name> -a <arguments>
# ansible webservers -m service -a "name=httpd state=restarted"

A pattern usually refers to a set of groups (sets of hosts) – in the above case, machines in the “webservers” group .

The following patterns are equivalent and target all hosts in the inventory:

all
*

25. What is Ansible Vault?

Ansible Vault feature can encrypt any structured data file used by Ansible. This can include group_vars/ or host_vars/ inventory variables, variables loaded by include_vars or vars_files, or variable files passed on the ansible-playbook command line with -e @file.yml or -e @file.json. Role variables and defaults are also included!

Because Ansible tasks, handlers, and other objects are data, these can also be encrypted with vault. If you’d like to not expose what variables you are using, you can keep an individual task file entirely encrypted.

The password used with vault currently must be the same for all files you wish to use together at the same time.

How do you update the encrypted data using Ansible Vault?

To update the AWS keys added to the encrypted file, you can later use Ansible-vault’s edit subcommand as follows:

$ ansible-vault edit aws_creds.yml
Vault password:

The edit command does the following operations:

  • Prompts for a password
  • Decrypts a file on the fly using the AES symmetric cypher
  • Opens the editor interface, which allows you to change the content of the file
  • Encrypts the file again after being saved

Another way to update the content of the file is to decrypt it first:

$ ansible-vault decrypt aws_creds.yml
Vault password:
Decryption successful

Once updated, the file can then be encrypted again.
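
For example (the exact password prompts vary by Ansible version):

$ ansible-vault encrypt aws_creds.yml
Encryption successful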

26. Explain the concept of blocks under Ansible.

Blocks allow for logical grouping of tasks and in play error handling. Most of what you can apply to a single task can be applied at the block level, which also makes it much easier to set data or directives common to the tasks. This does not mean the directive affects the block itself, but is inherited by the tasks enclosed by a block. i.e. a when will be applied to the tasks, not the block itself.

Block example

tasks:
  - name: Install Apache
    block:
      - yum:
          name: "{{ item }}"
          state: installed
        with_items:
          - httpd
          - memcached
      - template:
          src: templates/src.j2
          dest: /etc/foo.conf
      - service:
          name: bar
          state: started
          enabled: True
    when: ansible_distribution == 'CentOS'
    become: true
    become_user: root

27. Do you have any idea how to turn off facts in Ansible?

If you know you don’t need any fact data about your hosts, and know everything about your systems centrally, you can turn off fact gathering. This has advantages in scaling Ansible in push mode with very large numbers of systems, mainly, or if you are using Ansible on experimental platforms. 

In any play, just do this:

- hosts: whatever
  gather_facts: no

28. What are groups of groups, and group variables under Ansible?

It is also possible to make groups of groups using the :children suffix in INI or the children: entry in YAML. You can apply variables using :vars or vars::

[atlanta]
host1
host2

[raleigh]
host2
host3

[southeast:children]
atlanta
raleigh

[southeast:vars]
some_server=foo.southeast.example.com
halon_system_timeout=30
self_destruct_countdown=60
escape_pods=2

[usa:children]
southeast
northeast
southwest
northwest

29. How do you enable privilege escalation using become in Ansible?

Ansible allows you to ‘become’ another user, different from the user that logged into the machine (remote user). This is done using existing privilege escalation tools such as sudo, su, pfexec, doas, pbrun, dzdo, ksu, runas, machinectl and others.

For example, to manage a system service (which requires root privileges) when connected as a non-root user (this takes advantage of the fact that the default value of become_user is root):

- name: Ensure the httpd service is running
  service:
    name: httpd
    state: started
  become: yes

30. How are variables merged in Ansible?

By default, variables are merged/flattened to the specific host before a play is run. This keeps Ansible focused on the host and task, so groups don’t really survive outside of inventory and host matching. By default, Ansible overwrites variables, including the ones defined for a group and/or host (see the hash_behaviour setting to change this). The order/precedence is (from lowest to highest):

  • all group (because it is the ‘parent’ of all other groups)
  • parent group
  • child group
  • host

When groups of the same parent/child level are merged, it is done alphabetically, and the last group loaded overwrites the previous groups. For example, an a_group will be merged with b_group and b_group vars that match will overwrite the ones in a_group.

31. What are cache plugins in Ansible? Any idea how they are enabled?

Cache plugins implement a backend caching mechanism that allows Ansible to store gathered facts or inventory source data without the performance hit of retrieving them from the source.

The default cache plugin is the memory plugin, which only caches the data for the current execution of Ansible. Other plugins with persistent storage are available to allow caching the data across runs.

Enabling Cache Plugins

Only one cache plugin can be active at a time. You can enable a cache plugin in the Ansible configuration, either via environment variable:

export ANSIBLE_CACHE_PLUGIN=jsonfile

or in the ansible.cfg file:

[defaults]
fact_caching=redis

You will also need to configure other settings specific to each plugin. Consult the individual plugin documentation or the Ansible configuration for more details.

32. List a few non-SSH connection types and how to specify them under Ansible.

Ansible executes playbooks over SSH but it is not limited to this connection type. With the host specific parameter ansible_connection=<connector>, the connection type can be changed. The following non-SSH based connectors are available:

local

This connector can be used to deploy the playbook to the control machine itself.

docker

This connector deploys the playbook directly into Docker containers using the local Docker client. 

33. Why is fact caching useful under Ansible?

With fact caching enabled, it is possible for machines in one group to reference variables about machines in another group, despite the fact that they have not been communicated with in the current execution of /usr/bin/ansible-playbook.

To benefit from cached facts, you will want to change the gathering setting to smart or explicit or set gather_facts to False in most plays.

Currently, Ansible ships with two persistent cache plugins: redis and jsonfile.

To configure fact caching using redis, enable it in ansible.cfg as follows:

[defaults]
gathering = smart
fact_caching = redis
fact_caching_timeout = 86400
# seconds

34. What are registered variables under Ansible?

Registered variables are valid on the host for the remainder of the playbook run, which is the same as the lifetime of “facts” in Ansible. Effectively, registered variables are just like facts.

When using register with a loop, the data structure placed in the variable during the loop will contain a results attribute, that is a list of all responses from the module.

- hosts: web_servers

  tasks:

    - shell: /usr/bin/foo
      register: foo_result
      ignore_errors: True

    - shell: /usr/bin/bar
      when: foo_result.rc == 5

35. How do network modules work under Ansible?

Unlike most Ansible modules, network modules do not run on the managed nodes. From a user’s point of view, network modules work like any other modules. They work with ad-hoc commands, playbooks, and roles. Behind the scenes, however, network modules use a different methodology than the other (Linux/Unix and Windows) modules use. Ansible is written and executed in Python. Because the majority of network devices can not run Python, the Ansible network modules are executed on the Ansible control node, where ansible or ansible-playbook runs.

Network modules also use the control node as a destination for backup files, for those modules that offer a backup option. With Linux/Unix modules, where a configuration file already exists on the managed node(s), the backup file gets written by default in the same directory as the new, changed file. Network modules do not update configuration files on the managed nodes, because network configuration is not written in files. Network modules write backup files on the control node, usually in the backup directory under the playbook root directory.

Example: set the hostname on a Cisco switch using network modules.

If the network device is running the Cisco IOS operating system, use the ios_config module, which manages Cisco IOS configuration sections.

Below is a playbook for setting the hostname of a Cisco switch:

---
- hosts: localhost
  gather_facts: no
  connection: local
  tasks:
    - name: set a hostname
      ios_config:
        lines: hostname sw2
        provider:
          host: 10.0.0.15
          username: admin
          password: adc123
          authorize: true
          auth_pass: abcjfe767

Run the playbook 

$ ansible-playbook playbook.yml -v

Verify that the Cisco switch config is saved correctly:

$ ssh admin@10.0.0.15
Password:
sw2>

36. How does Ansible manage multiple communication protocols?

Because network modules execute on the control node instead of on the managed nodes, they can support multiple communication protocols. The communication protocol (XML over SSH, CLI over SSH, API over HTTPS) selected for each network module depends on the platform and the purpose of the module. Some network modules support only one protocol; some offer a choice. The most common protocol is CLI over SSH. 

You set the communication protocol with the ansible_connection variable:

Value of ansible_connection | Protocol            | Requires           | Persistent?
network_cli                 | CLI over SSH        | network_os setting | yes
netconf                     | XML over SSH        | network_os setting | yes
httpapi                     | API over HTTP/HTTPS | network_os setting | yes
local                       | depends on provider | provider setting   | no
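
For example, an inventory entry for an IOS device might set the connection and platform variables explicitly (host details are hypothetical):

[switches]
sw1 ansible_host=10.0.0.15 ansible_connection=network_cli ansible_network_os=ios ansible_user=admin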

37. Why do we get the error SSL: CERTIFICATE_VERIFY_FAILED on Windows under Ansible?

When the Ansible controller is running on Python 2.7.9+ or an older version of Python that has backported SSLContext (like Python 2.7.5 on RHEL 7), the controller will attempt to validate the certificate WinRM is using for an HTTPS connection. If the certificate cannot be validated (such as in the case of a self signed cert), it will fail the verification process.

To ignore certificate validation, add ansible_winrm_server_cert_validation: ignore to inventory for the Windows host.
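
In an INI inventory this might look like the following (host name is hypothetical):

[windows]
winhost.example.com ansible_winrm_server_cert_validation=ignore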

Ansible – ( Expert Level )

38. How do I handle different machines needing different user accounts or ports to log in with under Ansible?

Setting inventory variables in the inventory file is the easiest way.

For instance, suppose these hosts have different usernames and ports:

[webservers]

asdf.example.com  ansible_port=5000   ansible_user=alice

jkl.example.com   ansible_port=5001   ansible_user=bob

You can also dictate the connection type to be used, if you want:

[testcluster]

localhost           ansible_connection=local

/path/to/chroot1    ansible_connection=chroot

foo.example.com     ansible_connection=paramiko

You may also wish to keep these in group variables instead, or file them in a group_vars/<groupname> file. 
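
For example, a group_vars/webservers.yml file might hold settings shared by the whole webservers group (values are illustrative):

# group_vars/webservers.yml
ansible_port: 5000
ansible_user: alice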

39. How do I get Ansible to reuse connections, enable Kerberized SSH, or have Ansible pay attention to my local SSH config file?

Switch your default connection type in the configuration file to ‘ssh’, or use ‘-c ssh’ to use Native OpenSSH for connections instead of the python paramiko library. In Ansible 1.2.1 and later, ‘ssh’ will be used by default if OpenSSH is new enough to support ControlPersist as an option. 

Paramiko is great for starting out, but the OpenSSH type offers many advanced options. You will want to run Ansible from a machine new enough to support ControlPersist, if you are using this connection type. You can still manage older clients. If you are using RHEL 6, CentOS 6, SLES 10 or SLES 11 the version of OpenSSH is still a bit old, so consider managing from a Fedora or openSUSE client even though you are managing older nodes, or just use paramiko.

We keep paramiko as the default because, if you are first installing Ansible on an EL box, it offers a better experience for new users.

40. How do I configure a jump host to access servers that I have no direct access to under Ansible?

You can set a ProxyCommand in the ansible_ssh_common_args inventory variable. Any arguments specified in this variable are added to the sftp/scp/ssh command line when connecting to the relevant host(s). Consider the following inventory group:

[gatewayed]

foo ansible_host=192.0.2.1

bar ansible_host=192.0.2.2

You can create group_vars/gatewayed.yml with the following contents:

ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q user@gateway.example.com"'

Ansible will append these arguments to the command line when trying to connect to any hosts in the group gatewayed. (These arguments are used in addition to any ssh_args from ansible.cfg, so you do not need to repeat global ControlPersist settings in ansible_ssh_common_args.)

Note that ssh -W is available only with OpenSSH 5.4 or later. With older versions, it’s necessary to execute nc %h:%p or some equivalent command on the bastion host.

41. What is AWX and how is it different from Ansible Tower?

The AWX Project — AWX for short — is an open source community project, sponsored by Red Hat, that enables users to better control their Ansible project use in IT environments. AWX is the upstream project from which the Red Hat Ansible Tower offering is ultimately derived. 

Ansible Tower is a commercial web-based solution that makes Ansible even easier to use for IT teams of all kinds. It’s designed to be the hub for all of your automation tasks. Ansible Tower helps you scale IT automation, manage complex deployments and speed productivity. Centralize and control your IT infrastructure with a visual dashboard, role-based access control, job scheduling, integrated notifications and graphical inventory management. And the Ansible Tower REST API and CLI make it easy to embed Ansible Tower into existing tools and processes.

Tower allows you to control who can access what, even allowing sharing of SSH credentials without someone being able to transfer those credentials. Inventory can be graphically managed or synced with a wide variety of cloud sources. It logs all of your jobs, integrates well with LDAP, and has an amazing browsable REST API. Command line tools are available for easy integration with Jenkins as well. Provisioning callbacks provide great support for autoscaling topologies.

42. How do I see a list of all of the ansible_ variables?

Ansible by default gathers “facts” about the machines under management, and these facts can be accessed in Playbooks and in templates. To see a list of all of the facts that are available about a machine, you can run the “setup” module as an ad-hoc action:

# ansible -m setup hostname

This will print out a dictionary of all of the facts that are available for that particular host. 
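
If you only need a subset of facts, the setup module also accepts a filter parameter, for example:

# ansible -m setup hostname -a "filter=ansible_eth*"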

43. How do I loop over a list of hosts in a group, inside of a template under Ansible?

A pretty common pattern is to iterate over a list of hosts inside of a host group, perhaps to populate a template configuration file with a list of servers. To do this, you can just access the groups dictionary in your template, like this:

{% for host in groups['db_servers'] %}
   {{ host }}
{% endfor %}

If you need to access facts about these hosts, for instance, the IP address of each hostname, you need to make sure that the facts have been populated. For example, make sure you have a play that talks to db_servers:

- hosts: db_servers
  tasks:
    - debug: msg="doesn't matter what you do, just that they were talked to previously."

Then you can use the facts inside your template, like this:

{% for host in groups['db_servers'] %}
   {{ hostvars[host]['ansible_eth0']['ipv4']['address'] }}
{% endfor %}

44. How do I keep secret data in my playbook?

If you would like to keep secret data in your Ansible content and still share it publicly or keep things in source control, see Using Vault in playbooks.

In Ansible 1.8 and later, if you have a task that you don’t want to show the results or command given to it when using -v (verbose) mode, the following task or playbook attribute can be useful:

- name: secret task
  shell: /usr/bin/do_something --value={{ secret_value }}
  no_log: True

This can be used to keep verbose output but hide sensitive information from others who would otherwise like to be able to see the output.

The no_log attribute can also apply to an entire play:

- hosts: all
  no_log: True

Though this will make the play somewhat difficult to debug. It’s recommended that this be applied to single tasks only, once a playbook is completed. Note that the use of the no_log attribute does not prevent data from being shown when debugging Ansible itself via the ANSIBLE_DEBUG environment variable.

45. How do you use DSC under Ansible?

DSC resources are distributed as PowerShell modules, which means that DSC works similarly to Ansible, just implemented in a different manner. The win_dsc module has been available since the release of Ansible 2.4, and it can leverage existing DSC resources whenever it interacts with a Windows host.

To use this module, you will need PowerShell 5.1 or later. Once you make sure that you have the correct version of PowerShell installed on your Windows nodes, using DSC is as easy as executing a task using the win_dsc module.
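
A win_dsc task invoking the built-in File DSC resource might look like this sketch (the path is hypothetical):

- name: Ensure a directory exists via DSC
  win_dsc:
    resource_name: File
    DestinationPath: C:\Temp\example
    Type: Directory
    Ensure: Present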

46. What modules does Ansible provide for orchestrating Docker containers?

Ansible offers the following modules for orchestrating Docker containers:

docker_service – Use your existing Docker Compose files to orchestrate containers on a single Docker daemon or on Swarm. Supports Compose versions 1 and 2.

docker_container – Manages the container lifecycle by providing the ability to create, update, stop, start and destroy a container.

docker_image – Provides full control over images, including: build, pull, push, tag and remove.

docker_image_facts – Inspects one or more images in the Docker host’s image cache, providing the information as facts for making decisions or assertions in a playbook.

docker_login – Authenticates with Docker Hub or any Docker registry and updates the Docker Engine config file, which in turn provides password-free pushing and pulling of images to and from the registry.

docker (dynamic inventory) – Dynamically builds an inventory of all the available containers from a set of one or more Docker hosts.

Ansible 2.1.0 includes major updates to the Docker modules, marking the start of a project to create a complete and integrated set of tools for orchestrating containers.
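
As a quick illustration, a docker_container task might look like the following sketch (container name, image and port mapping are illustrative):

- name: Start an nginx container
  docker_container:
    name: web
    image: nginx:latest
    state: started
    ports:
      - "8080:80"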

47. Explain how to access a variable name programmatically.

An example may come up where we need to get the ipv4 address of an arbitrary interface, where the interface to be used may be supplied via a role parameter or other input. 

Variable names can be built by adding strings together, like so:

{{ hostvars[inventory_hostname]['ansible_' + which_interface]['ipv4']['address'] }}

Going through hostvars is necessary because it’s a dictionary of the entire namespace of variables. 

inventory_hostname is a magic variable that indicates the current host you are looping over in the host loop.

48. How do you generate crypted passwords for the user module in Ansible?

The mkpasswd utility that is available on most Linux systems is a great option:

mkpasswd --method=sha-512

In OpenBSD, a similar option is available in the base system called encrypt(1):

encrypt

If the above utilities are not installed on your system, you can still easily generate these passwords using Python with the passlib hashing library.

# pip install passlib

Once the library is installed, SHA-512 password hashes can be generated as follows:

# python -c "from passlib.hash import sha512_crypt; import getpass; print(sha512_crypt.using(rounds=5000).hash(getpass.getpass()))"


49. What is the minimal requirement for using Docker modules under Ansible?

Using the Docker modules requires having docker-py installed on the host running Ansible. You will need version 1.7.0 or higher:

$ pip install 'docker-py>=1.7.0'

The docker_service module also requires docker-compose:

$ pip install 'docker-compose>=1.7.0'

50. How do Ansible modules connect to the Docker API?

You can connect to a local or remote API using parameters passed to each task or by setting environment variables. The order of precedence is command line parameters and then environment variables. If neither a command line option nor an environment variable is found, a default value will be used. The default values are provided in each module’s Parameters documentation.

A few of the common connection parameters are listed below. Control how modules connect to the Docker API by passing the following parameters:

docker_host – The URL or Unix socket path used to connect to the Docker API. Defaults to unix://var/run/docker.sock. To connect to a remote host, provide the TCP connection string, for example tcp://192.0.2.23:2376. If TLS is used to encrypt the connection to the API, the module will automatically replace ‘tcp’ in the connection URL with ‘https’.

api_version – The version of the Docker API running on the Docker host. Defaults to the latest version of the API supported by docker-py.

timeout – The maximum amount of time in seconds to wait on a response from the API. Defaults to 60 seconds.

tls – Secure the connection to the API by using TLS without verifying the authenticity of the Docker host server. Defaults to False.

tls_verify – Secure the connection to the API by using TLS and verifying the authenticity of the Docker host server. Default is False.

Have Queries? Join https://launchpass.com/collabnix

Ajeet Singh Raina is a former Docker Captain, Community Leader and Arm Ambassador. He is the founder of the Collabnix blogging site and has authored more than 570 blogs on Docker, Kubernetes and Cloud-Native technology. He runs a community Slack of 8,900+ members and a Discord server of close to 2,200 members. You can follow him on Twitter (@ajeetsraina).