Learn Puppet with Me – Day 3

Estimated Reading Time: 12 minutes

Puppet is an open source framework and toolset for managing the configuration of computer systems. Puppet can be used to manage configuration on UNIX (including OS X) and Linux platforms, and more recently Microsoft Windows platforms as well. Puppet is often used to manage a host throughout its lifecycle: from initial build and installation, to upgrades, maintenance, and finally to end-of-life, when you move services elsewhere. Puppet is designed to continuously interact with your hosts, unlike provisioning tools which build your hosts and leave them unmanaged.

Puppet has a simple operating model that is easy to understand and implement. The model is made up of three components:

• Deployment

• Configuration Language and Resource Abstraction Layer

• Transactional Layer

Puppet is usually deployed in a simple client-server model (Figure 1-2). The server is called a “Puppet master”, the Puppet client software is called an agent and the host itself is defined as a node.

The Puppet master runs as a daemon on a host and contains the configuration required for your environment. The Puppet agents connect to the Puppet master via an encrypted and authenticated connection using standard SSL, and retrieve or “pull” any configuration to be applied.

Importantly, if the Puppet agent has no configuration available or already has the required configuration then Puppet will do nothing. This means that Puppet will only make changes to your environment if they are required. The whole process is called a configuration run.

Each agent can run Puppet as a daemon, be triggered via a mechanism such as cron, or have the connection manually triggered. The usual practice is to run Puppet as a daemon and have it periodically check with the master to confirm that its configuration is up-to-date or to retrieve any new configuration. However, many people find that triggering Puppet via a mechanism such as cron, or manually, better suits their needs. By default, the Puppet agent will check the master for new or changed configuration once every 30 minutes. You can configure this period to suit your environment.
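For example, the check interval can be adjusted in the agent's puppet.conf; the snippet below is an illustrative sketch (the runinterval setting takes a value in seconds):

```ini
# /etc/puppet/puppet.conf on the agent -- illustrative values
[agent]
# check with the master every 30 minutes (1800 seconds, the default)
runinterval = 1800
```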

Configuration Language and Resource Abstraction Layer

Puppet uses a declarative language to define your configuration items, which Puppet calls “resources.” This declarative nature creates an important distinction between Puppet and many other configuration tools. A declarative language makes statements about the state of your configuration – for example, it declares that a package should be installed or a service should be started.

Most configuration tools, such as a shell or Perl script, are imperative or procedural. They describe HOW things should be done rather than the desired end state – for example, most custom scripts used to manage configuration would be considered imperative. This means Puppet users just declare what the state of their hosts should be: what packages should be installed, what services should be running, etc. With Puppet, the system administrator doesn’t care HOW this state is achieved – that’s Puppet’s problem. Instead, we abstract our host’s configuration into resources.

Configuration Language

What does this declarative language mean in real terms? Let’s look at a simple example. We have an environment with Red Hat Enterprise Linux, Ubuntu, and Solaris hosts and we want to install the vim application on all our hosts. To do this manually, we’d need to write a script that does the following:

• Connects to the required hosts (including handling passwords or keys)

• Checks to see if vim is installed

• If not, uses the appropriate command for each platform to install vim, for example on Red Hat the yum command and on Ubuntu the apt-get command

• Potentially reports the results of this action to ensure completion and success

Puppet approaches this process quite differently. In Puppet, we define a configuration resource for the vim package. Each resource is made up of a type (what sort of resource is being managed: packages, services, or cron jobs), a title (the name of the resource), and a series of attributes (values that specify the state of the resource – for example, whether a service is started or stopped).

Example:

A Puppet Resource

package { "vim":
  ensure => present,
}

Resource Abstraction Layer

With our resource created, Puppet takes care of the details of how to manage that resource when our agents connect. Puppet handles the “how” by knowing how different platforms and operating systems manage certain types of resources. Each type has a number of “providers.” A provider contains the “how” of managing packages using a particular package management tool. For the package type, for example, there are more than 20 providers covering a variety of tools including yum, aptitude, pkgadd, ports, and emerge.

When an agent connects, Puppet uses a tool called “Facter” to return information about that agent, including what operating system it is running. Puppet then chooses the appropriate package provider for that operating system and uses that provider to check if the vim package is installed. For example, on Red Hat it would execute yum, on Ubuntu it would execute aptitude, and on Solaris it would use the pkg command. If the package is not installed, then Puppet will install it. If the package is already installed, Puppet does nothing. Puppet will then report its success or failure in applying the resource back to the Puppet master.
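You can also bypass this automatic selection and pick a provider explicitly with the provider attribute. A sketch (forcing yum only makes sense on a Red Hat-style host):

```puppet
package { "vim":
  ensure   => present,
  provider => yum,   # skip Facter-based provider selection and use yum directly
}
```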

INTRODUCING FACTER AND FACTS

Facter is a system inventory tool which returns “facts” about each agent, such as its hostname, IP address, operating system and version, and other configuration items. These facts are gathered when the agent runs. The facts are then sent to the Puppet master, and automatically created as variables available to Puppet. You can see the facts available on your clients by running the facter binary from the command line. Each fact is returned as a key => value pair. For example:

operatingsystem => Ubuntu

ipaddress => 10.0.0.10
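These facts can then be used directly in manifests as variables. A minimal sketch using the two facts shown above:

```puppet
if $operatingsystem == "Ubuntu" {
  # $ipaddress is the fact Facter returned for this agent
  notice("This Ubuntu host has address ${ipaddress}")
}
```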

Transactional Layer

Puppet’s transactional layer is its engine. A Puppet transaction encompasses the process of configuring each host including:

• Interpret and compile your configuration

• Communicate the compiled configuration to the agent

• Apply the configuration on the agent

• Report the results of that application to the master

The first step Puppet takes is to analyze your configuration and calculate how to apply it to your agent. To do this, Puppet creates a graph showing all resources, their relationships to each other and to each agent. This allows Puppet to work out in what order, based on relationships you create, to apply each resource to your host. This model is one of Puppet’s most powerful features. Puppet then takes the resources and compiles them into a “catalog” for each agent. The catalog is sent to the host and applied by the Puppet agent. The results of this application are then sent back to the master in the form of a report.

The transactional layer allows configurations to be created and applied repeatedly on the host. Puppet calls this idempotent, meaning multiple applications of the same operation will yield the same results. Puppet configuration can be safely run multiple times with the same outcome on your host, ensuring your configuration stays consistent.

3. Understanding Puppet Components

If you look at the /etc/puppet directory, you will find the various components of Puppet underneath it.

[root@puppet-server puppet]# ls -la

total 28

drwxr-xr-x. 4 puppet puppet 4096 Aug 23 11:41 .

drwxr-xr-x. 79 root root 4096 Aug 25 04:01 ..

-rwxr-xr-x. 1 puppet puppet 2552 Aug 21 17:49 auth.conf

-rwxr-xr-x. 1 puppet puppet 381 Jun 20 18:24 fileserver.conf

drwxr-xr-x. 4 puppet puppet 4096 Aug 25 06:50 manifests

drwxr-xr-x. 11 puppet puppet 4096 Aug 25 06:49 modules

-rwxr-xr-x. 1 puppet puppet 1059 Aug 23 11:41 puppet.conf

Let’s talk about what a manifest is.

“Manifest” is Puppet’s term for files containing configuration information. Manifest files have a suffix of .pp. The manifests directory is often already created when the Puppet packages are installed. If it hasn’t already been created, then create it now:

# mkdir /etc/puppet/manifests

Under manifests, there are important files such as nodes.pp, site.pp, template.pp, and a few classes and definitions. We are going to cover those here too.

Puppet manifests are made up of a number of major components:

• Resources – Individual configuration items

• Files – Physical files you can serve out to your agents

• Templates – Template files that you can use to populate files

• Nodes – Specifies the configuration of each agent

• Classes – Collections of resources

• Definitions – Composite collections of resources

These components are wrapped in a configuration language that includes variables, conditionals, arrays, and other features. Later in this chapter we’ll introduce you to the basics of the Puppet language and its elements. In the next chapter, we’ll extend your knowledge of the language by taking you through an implementation of a multi-agent site managed with Puppet.
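As a taste of those language features, the sketch below combines a variable, an array, and a conditional (the package names are only illustrative):

```puppet
$admin_tools = [ "vim", "sudo" ]   # an array variable

package { $admin_tools:            # declares one package resource per element
  ensure => present,
}

if $operatingsystem == "Ubuntu" {
  notice("Managing an Ubuntu node")
}
```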

The site.pp file

The site.pp file tells Puppet where and what configuration to load for our clients. We’re going to store this file in a directory called manifests under the /etc/puppet directory.

Please Note: Puppet will not start without the site.pp file being present.

Our first step in creating our first agent configuration is defining and extending the site.pp file. See an example of this file in Listing 1-3.

Listing 1-3. The site.pp File

import 'nodes.pp'

$puppetserver = 'puppet.example.com'

The import directive tells Puppet to load a file called nodes.pp. This directive is used to include any Puppet configuration we want to load.

When Puppet starts, it will now load the nodes.pp file and process the contents. In this case, this file will contain the node definitions we create for each agent we connect. You can also import multiple files like so:

import 'nodes/*'
import 'classes/*'

The import statement will load all files with a suffix of .pp in the directories nodes and classes.

The $puppetserver statement sets a variable. In Puppet, configuration statements starting with a dollar sign are variables used to specify values that you can use in Puppet configuration.

In Listing 1-3, we’ve created a variable that contains the fully qualified domain name of our Puppet master, enclosed in single quotes.
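We can reference this variable elsewhere in our manifests. For example, a file resource could use it in its source URL (a sketch; the motd module here is hypothetical):

```puppet
file { "/etc/motd":
  source => "puppet://$puppetserver/modules/motd/etc/motd",  # hypothetical motd module
}
```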

Agent Configuration

Let’s add our first agent definition to the nodes.pp file we’ve just asked Puppet to import. In Puppet manifests, agents are defined using node statements.

# touch /etc/puppet/manifests/nodes.pp

You can see the node definition we’re going to add in Listing 1-4.

Listing 1-4. Our Node Configuration

node 'node1.example.com' {

include sudo

}

Next, we specify an include directive in our node definition. The include directive specifies a collection of configuration that we want to apply to our host. There are two types of collections we can include in a node:

• Classes – a basic collection of resources

• Modules – an advanced, portable collection of resources that can include classes, definitions, and other supporting configuration. You can include multiple collections by using multiple include directives or by separating each collection with commas.

include sudo

include sshd

include vim, syslog-ng

Creating our first module

The next step is to create the sudo module. A module is a collection of manifests, resources, files, templates, classes, and definitions. A single module would contain everything required to configure a particular application. For example, it could contain all the resources (specified in manifest files), files and associated configuration to configure Apache or the sudo command on a host.

Each module needs a specific directory structure and a file called init.pp. This structure allows Puppet to automatically load modules. To perform this automatic loading, Puppet checks a series of directories called the module path. The module path is configured with the modulepath configuration option in the [main] section of the puppet.conf file. By default, Puppet looks for modules in the /etc/puppet/modules and /var/lib/puppet/modules directories, but you can add additional locations if required:

[main]

modulepath = /etc/puppet/modules:/var/lib/puppet/modules:/opt/modules

The automatic loading of modules means, unlike our nodes.pp file, modules don’t need to be loaded into Puppet using the import directive.

Module Structure

Let’s start by creating a module directory and file structure in Listing 1-5. We’re going to create this structure under the directory /etc/puppet/modules. We will name the module sudo. Module (and class) names must be normal words containing only letters, numbers, underscores, and dashes.

Listing 1-5. Module Structure

# mkdir -p /etc/puppet/modules/sudo/{files,templates,manifests}

# touch /etc/puppet/modules/sudo/manifests/init.pp

The manifests directory will hold our init.pp file and any other configuration. The init.pp file is the core of your module and every module must have one. The files directory will hold any files we wish to serve as part of our module. The templates directory will contain any templates that our module might use.

The init.pp file

Now let’s look inside our sudo module, starting with the init.pp file, which we can see in Listing 1-6.

Listing 1-6. The sudo module’s init.pp file

class sudo {
  package { sudo:
    ensure => present,
  }
  if $operatingsystem == "Ubuntu" {
    package { "sudo-ldap":
      ensure  => present,
      require => Package["sudo"],
    }
  }
  file { "/etc/sudoers":
    owner   => "root",
    group   => "root",
    mode    => 0440,
    source  => "puppet://$puppetserver/modules/sudo/etc/sudoers",
    require => Package["sudo"],
  }
}

Our sudo module’s init.pp file contains a single class, also called sudo. There are three resources in the class: two packages and a file resource. The first package resource ensures that the sudo package is installed, ensure => present. The second package resource uses Puppet’s if/else syntax to set a condition on the installation of the sudo-ldap package.

The next portion of our source value tells Puppet where to look for the file. This is the equivalent of the path to a network file share. The first portion of this share is modules, which tells us that the file is stored in a module. Next we specify the name of the module the file is contained in, in this case sudo.

Finally, we specify the path inside that module to find the file.

All files in modules are stored under the files directory; this is considered the “root” of the module’s file “share.” In our case, we would create the directory etc under the files directory and create the sudoers file in this directory.

puppet$ mkdir -p /etc/puppet/modules/sudo/files/etc

puppet$ cp /etc/sudoers /etc/puppet/modules/sudo/files/etc/sudoers

Applying Our First Configuration

We’ve created our first Puppet module! Let’s step through what will happen when we connect an agent that includes this module.

1. It will install the sudo package.

2. If it’s an Ubuntu host, then it will also install the sudo-ldap package.

3. Lastly, it will download the sudoers file and install it into /etc/sudoers.

Now let’s see this in action and include our new module on the agent we’ve created, node1.example.com.

Remember we created a node statement for our host in Listing 1.4:

node 'node1.example.com' {

include sudo

}

When the agent connects it will now include the sudo module. To do this we run the Puppet agent again, as shown in Listing 1-7.

Listing 1-7. Applying Our First Configuration

puppet# puppet agent --server=puppet.example.com --no-daemonize --verbose --onetime

notice: Starting Puppet client version 2.6.1

info: Caching catalog for node1.example.com

info: Applying configuration version '1272631279'

notice: //sudo/Package[sudo]/ensure: created

notice: //sudo/File[/etc/sudoers]/checksum: checksum changed

'{md5}9f95a522f5265b7e7945ff65369acdd2' to '{md5}d657d8d55ecdf88a2d11da73ac5662a4'

info: Filebucket[/var/lib/puppet/clientbucket]: Adding

/etc/sudoers(d657d8d55ecdf88a2d11da73ac5662a4)

info: //sudo/File[/etc/sudoers]: Filebucketed /etc/sudoers to puppet with sum

d657d8d55ecdf88a2d11da73ac5662a4

notice: //sudo/File[/etc/sudoers]/content: content changed

'{md5}d657d8d55ecdf88a2d11da73ac5662a4'

In our next section, we are going to talk more on Puppet Modules.

How to install Nagios on Linux?

Estimated Reading Time: 8 minutes

Last week I thought of setting up Nagios on my Linux box. I installed a fresh copy of RHEL on my VirtualBox and everything went fine. I thought of putting this complete setup on my blog, and here it is: “A Complete Monitoring Tool for your Linux Box”.


Here is my Machine Configuration:

[root@irc ~]# cat /etc/redhat-release

Red Hat Enterprise Linux Server release 5.3 (Tikanga)

[root@irc ~]#

[root@irc ~]# uname -arn

Linux irc.chatserver.com 2.6.18-128.el5 #1 SMP Wed Dec 17 11:41:38 EST 2008 x86_64 x86_64 x86_64 GNU/Linux

[root@irc ~]#

1) Create Account Information

Become the root user.

su -l

Create a new nagios user account and give it a password.

/usr/sbin/useradd -m nagios

passwd nagios

Create a new nagcmd group for allowing external commands to be submitted through the web interface. Add both the nagios user and the apache user to the group.

/usr/sbin/groupadd nagcmd

/usr/sbin/usermod -a -G nagcmd nagios

/usr/sbin/usermod -a -G nagcmd apache

2) Download Nagios and the Plugins

Create a directory for storing the downloads.

mkdir ~/downloads

cd ~/downloads

Download the source code tarballs of both Nagios and the Nagios plugins (visit http://www.nagios.org/download/ for links to the latest versions). These directions were tested with Nagios 3.2.0 and Nagios Plugins 1.4.11.

wget http://prdownloads.sourceforge.net/sourceforge/nagios/nagios-3.2.0.tar.gz

wget http://prdownloads.sourceforge.net/sourceforge/nagiosplug/nagios-plugins-1.4.11.tar.gz

3) Compile and Install Nagios

Extract the Nagios source code tarball.

cd ~/downloads

tar xzf nagios-3.2.0.tar.gz

cd nagios-3.2.0

Run the Nagios configure script, passing the name of the group you created earlier like so:

./configure --with-command-group=nagcmd

Compile the Nagios source code.

make all

Install binaries, init script, sample config files and set permissions on the external command directory.

make install

make install-init

make install-config

make install-commandmode

Don’t start Nagios yet – there’s still more that needs to be done…

4) Customize Configuration

Sample configuration files have now been installed in the /usr/local/nagios/etc directory. These sample files should work fine for getting started with Nagios. You’ll need to make just one change before you proceed…

Edit the /usr/local/nagios/etc/objects/contacts.cfg config file with your favorite editor and change the email address associated with the nagiosadmin contact definition to the address you’d like to use for receiving alerts.

vi /usr/local/nagios/etc/objects/contacts.cfg

5) Configure the Web Interface

Install the Nagios web config file in the Apache conf.d directory.

make install-webconf

Create a nagiosadmin account for logging into the Nagios web interface. Remember the password you assign to this account – you’ll need it later.

htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin

Restart Apache to make the new settings take effect.

service httpd restart

Note: Consider implementing the enhanced CGI security measures described here to ensure that your web authentication credentials are not compromised.

6) Compile and Install the Nagios Plugins

Extract the Nagios plugins source code tarball.

cd ~/downloads

tar xzf nagios-plugins-1.4.11.tar.gz

cd nagios-plugins-1.4.11

Compile and install the plugins.

./configure --with-nagios-user=nagios --with-nagios-group=nagios

make

make install

7) Start Nagios

Add Nagios to the list of system services and have it automatically start when the system boots.

chkconfig --add nagios

chkconfig nagios on

Verify the sample Nagios configuration files.

/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg

If there are no errors, start Nagios.

service nagios start

8) Modify SELinux Settings

RHEL and Fedora ship with SELinux (Security Enhanced Linux) installed and in Enforcing mode by default. This can result in “Internal Server Error” messages when you attempt to access the Nagios CGIs.

See if SELinux is in Enforcing mode.

getenforce

Put SELinux into Permissive mode.

setenforce 0

To make this change permanent, you’ll have to modify the settings in /etc/selinux/config and reboot.

Instead of disabling SELinux or setting it to permissive mode, you can use the following command to run the CGIs under SELinux enforcing/targeted mode:

chcon -R -t httpd_sys_content_t /usr/local/nagios/sbin/

chcon -R -t httpd_sys_content_t /usr/local/nagios/share/

For information on running the Nagios CGIs under Enforcing mode with a targeted policy, visit the Nagios Support Portal or Nagios Community Wiki.

9) Login to the Web Interface

You should now be able to access the Nagios web interface at the URL below. You’ll be prompted for the username (nagiosadmin) and password you specified earlier.

http://localhost/nagios/

Click on the “Service Detail” navbar link to see details of what’s being monitored on your local machine. It will take a few minutes for Nagios to check all the services associated with your machine, as the checks are spread out over time.

10) Other Modifications

Make sure your machine’s firewall rules are configured to allow access to the web server if you want to access the Nagios interface remotely.

Configuring email notifications is out of the scope of this documentation. While Nagios is currently configured to send you email notifications, your system may not yet have a mail program properly installed or configured. Refer to your system documentation, search the web, or look to the Nagios Support Portal or Nagios Community Wiki for specific instructions on configuring your system to send email messages to external addresses. More information on notifications can be found here.

11) You’re Done

Congratulations! You successfully installed Nagios. Your journey into monitoring is just beginning.

Example:

Say your Nagios server is 10.14.236.140 and you need to monitor a Linux machine with IP 10.14.236.70. You would proceed like this:

[root@irc objects]# pwd

/usr/local/nagios/etc/objects

[root@irc objects]#

[root@irc objects]# ls

commands.cfg localhost.cfg printer.cfg switch.cfg timeperiods.cfg

contacts.cfg localhost.cfg.orig remotehost.cfg templates.cfg windows.cfg

[root@irc objects]#
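If you place the remote host's definitions in a separate file such as the remotehost.cfg listed above, remember to register it in the main configuration with a cfg_file directive; a sketch for /usr/local/nagios/etc/nagios.cfg:

```
# /usr/local/nagios/etc/nagios.cfg
cfg_file=/usr/local/nagios/etc/objects/remotehost.cfg
```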

The file should look like this:

# HOST DEFINITION

#

###############################################################################

###############################################################################

# Define a host for the local machine

define host{

use linux-server ; Name of host template to use

; This host definition will inherit all variables that are defined

; in (or inherited by) the linux-server host template definition.

host_name localhost

alias localhost

address 127.0.0.1

}

define host{

use linux-server ; Name of host template to use

; This host definition will inherit all variables that are defined

; in (or inherited by) the linux-server host template definition.

host_name ideath.logic.com

alias ideath

address 10.14.236.70

}

###############################################################################

###############################################################################

#

# HOST GROUP DEFINITION

#

###############################################################################

###############################################################################

# Define an optional hostgroup for Linux machines

define hostgroup{

hostgroup_name linux-server ; The name of the hostgroup

alias Linux Servers ; Long name of the group

members localhost ; Comma separated list of hosts that belong to this group

}

###############################################################################

###############################################################################

#

# SERVICE DEFINITIONS

#

###############################################################################

###############################################################################

# Define a service to “ping” the local machine

define service{

use local-service ; Name of service template to use

host_name localhost

service_description PING

check_command check_ping!100.0,20%!500.0,60%

}

define service{

use local-service ; Name of service template to use

host_name ideath.logic.com

service_description PING

check_command check_ping!100.0,20%!500.0,60%

}

# Define a service to check the disk space of the root partition

# on the local machine. Warning if < 20% free, critical if
# < 10% free space on partition.

define service{

use local-service ; Name of service template to use

host_name localhost

service_description Root Partition

check_command check_local_disk!20%!10%!/

}

define service{

use local-service ; Name of service template to use

host_name ideath.logic.com

service_description Root Partition

check_command check_local_disk!20%!10%!/

}

# Define a service to check the number of currently logged in

# users on the local machine. Warning if > 20 users, critical

# if > 50 users.

define service{

use local-service ; Name of service template to use

host_name localhost

service_description Current Users

check_command check_local_users!20!50

}

define service{

use local-service ; Name of service template to use

host_name ideath.logic.com

service_description Current Users

check_command check_local_users!20!50

}

# Define a service to check the number of currently running procs

# on the local machine. Warning if > 250 processes, critical if

# > 400 processes.

define service{

use local-service ; Name of service template to use

host_name localhost

service_description Total Processes

check_command check_local_procs!250!400!RSZDT

}

define service{

use local-service ; Name of service template to use

host_name ideath.logic.com

service_description Total Processes

check_command check_local_procs!250!400!RSZDT

}

# Define a service to check the load on the local machine.

define service{

use local-service ; Name of service template to use

host_name localhost

service_description Current Load

check_command check_local_load!5.0,4.0,3.0!10.0,6.0,4.0

}

define service{

use local-service ; Name of service template to use

host_name ideath.logic.com

service_description Current Load

check_command check_local_load!5.0,4.0,3.0!10.0,6.0,4.0

}

# Define a service to check the swap usage the local machine.

# Critical if less than 10% of swap is free, warning if less than 20% is free

define service{

use local-service ; Name of service template to use

host_name localhost

service_description Swap Usage

check_command check_local_swap!20!10

}

define service{

use local-service ; Name of service template to use

host_name ideath.logic.com

service_description Swap Usage

check_command check_local_swap!20!10

}

# Define a service to check SSH on the local machine.

# Disable notifications for this service by default, as not all users may have SSH enabled.

define service{

use local-service ; Name of service template to use

host_name localhost

service_description SSH

check_command check_ssh

notifications_enabled 0

}

define service{

use local-service ; Name of service template to use

host_name ideath.logic.com

service_description SSH

check_command check_ssh

check_period 24x7

notifications_enabled 0

is_volatile 0

max_check_attempts 4

normal_check_interval 5

retry_check_interval 1

contact_groups admins

notification_options w,c,u,r

notification_interval 960

notification_period 24x7

}

# Define a service to check HTTP on the local machine.

# Disable notifications for this service by default, as not all users may have HTTP enabled.

define service{

use local-service ; Name of service template to use

host_name localhost

service_description HTTP

check_command check_http

notifications_enabled 0

}

define service{

use local-service ; Name of service template to use

host_name ideath.logic.com

service_description HTTP

check_command check_http

notifications_enabled 0

is_volatile 0

max_check_attempts 4

normal_check_interval 5

retry_check_interval 1

contact_groups admins

notification_options w,c,u,r

notification_interval 960

notification_period 24x7

}

ideath.logic.com is the hostname of 10.14.236.70.

Make an entry in /etc/hosts if the hostname cannot be resolved to the IP (or else check the DNS).
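For example, the /etc/hosts entry on the Nagios server would look like this (using the addresses and names from this walkthrough):

```
10.14.236.70    ideath.logic.com    ideath
```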

How to configure Directory Indexing in Apache?

Estimated Reading Time: 1 minute

While attending the RHCE examination, I faced a lot of questions related to Apache. While installation and configuration of Apache was the first topic, I found this topic very useful and would like to share it with everyone who is going to attend the RHCE certification.


A quick way to configure it:

Edit the /etc/httpd/conf/httpd.conf file and add a Directory block for the directory you want indexed:

<Directory "/var/www/html/pdfs">

Options Indexes FollowSymLinks

AllowOverride None

Order allow,deny

Allow from all

</Directory>

Restart Apache, then try browsing http://localhost/pdfs

Installing Open-Xchange on Ubuntu 12.04

Estimated Reading Time: 6 minutes

I thought of starting my day today with Open-Xchange. I had VMware Workstation installed on a Windows 7 Enterprise machine. I installed a minimal Ubuntu 12.04 as a VM and was ready to install. Here it goes:


1. Pre-Requisites:

Ubuntu 12.04 installed

A working apt-get utility (Internet connectivity)

An FQDN entry for the host under /etc/hosts

iRedMail (mail server) software downloaded from http://iredmail.org/download.html
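For example, an /etc/hosts entry giving the machine an FQDN might look like this (the address and name below are only placeholders):

```
192.168.1.10    mail.example.com    mail
```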

2. Edit the /etc/apt/sources.list and add the following entry:

deb http://download.opensuse.org/repositories/server:/OX:/ox6/xUbuntu_12.04/ /

3. Import the key:

$ sudo wget http://software.open-xchange.com/oxbuildkey.pub -O - | sudo apt-key add -

Ensure it shows “OK”.

4. Update the machine:

sudo apt-get update

5. Let’s install the iRedMail server

$ apt-get install bzip2

$ cd /root

$ mkdir install

$ cd /root/install

$ wget https://bitbucket.org/zhb/iredmail/downloads/iRedMail-0.8.4.tar.bz2

$ tar xjf iRedMail-0.8.4.tar.bz2

$ cd /root/install/iRedMail-0.8.4/

$ bash iRedMail.sh

It will finish as shown below:

********************************************************************

* Start iRedMail Configurations

********************************************************************

< INFO > Create self-signed SSL certification files.

< INFO > Create required system accounts: vmail, iredapd, iredadmin.

< INFO > Configure Apache web server and PHP.

< INFO > Configure MySQL database server.

mysqladmin: connect to server at ‘localhost’ failed

error: ‘Access denied for user ‘root’@’localhost’ (using password: NO)’

< INFO > Configure Postfix (Message Transfer Agent).

< INFO > Configure Policyd (postfix policy server, code name cluebringer).

< INFO > Configure Dovecot (pop3/imap/managesieve server, version 2).

< INFO > Configure ClamAV (anti-virus toolkit).

< INFO > Configure Amavisd-new (interface between MTA and content checkers).

drop_priv: No such username:

< INFO > Configure SpamAssassin (content-based spam filter).

< INFO > Configure iRedAPD (postfix policy daemon).

< INFO > Configure iRedAdmin (official web-based admin panel).

< INFO > Configure Fail2ban (authentication failure monitor).

< INFO > Configure Awstats (logfile analyzer for mail and web server).

< INFO > Configure Roundcube webmail.

< INFO > Configure phpMyAdmin (web-based MySQL management tool).

*************************************************************************

* iRedMail-0.8.4 installation and configuration complete.

*************************************************************************

< Question > Would you like to *REMOVE* sendmail now? [Y|n]Y

< INFO > Removing package(s): sendmail

Reading package lists… Done

Building dependency tree

Reading state information… Done

Package sendmail is not installed, so not removed

0 upgraded, 0 newly installed, 0 to remove and 107 not upgraded.

< Question > Would you like to use firewall rules provided by iRedMail now?

< Question > File: /etc/default/iptables, with SSHD port: 22. [Y|n]n

< INFO > Skip firewall rules.

< INFO > Deliver administration emails to postmaster@ubuntu.mail.com.

< INFO > Updating ClamAV database (freshclam), please wait …

ClamAV update process started at Fri Jun 21 08:18:21 2013

WARNING: DNS record is older than 3 hours.

WARNING: Invalid DNS reply. Falling back to HTTP mode.

Downloading main.cvd [100%]

main.cvd updated (version: 54, sigs: 1044387, f-level: 60, builder: sven)

Reading CVD header (daily.cvd): OK (IMS)

daily.cvd is up to date (version: 17389, sigs: 1361238, f-level: 63, builder: guitar)

Reading CVD header (bytecode.cvd): OK (IMS)

bytecode.cvd is up to date (version: 214, sigs: 41, f-level: 63, builder: neo)

Database updated (2405666 signatures) from db.local.clamav.net (IP: 203.178.137.175)

********************************************************************

* URLs of installed web applications:

*

* – Webmail: httpS://ubuntu.localdomain/mail/

* – Admin Panel (iRedAdmin): httpS://ubuntu.localdomain/iredadmin/

* + Username: postmaster@ubuntu.mail.com, Password: ajeetraina@ubuntu.mail.com

*

********************************************************************

* Congratulations, mail server setup completed successfully. Please

* read below file for more information:

*

* – /root/install/iRedMail-0.8.4/iRedMail.tips

*

* And it’s sent to your mail account postmaster@ubuntu.mail.com.

*

* Please reboot your system to enable mail services.

6. Now install the Open-Xchange server with the command below:

aptitude install \
open-xchange open-xchange-authentication-database \
open-xchange-admin-client open-xchange-admin-lib \
open-xchange-admin-plugin-hosting open-xchange-admin-plugin-hosting-client \
open-xchange-admin-plugin-hosting-lib open-xchange-configjump-generic \
open-xchange-admin-doc open-xchange-contactcollector \
open-xchange-conversion open-xchange-conversion-engine \
open-xchange-conversion-servlet open-xchange-crypto \
open-xchange-data-conversion-ical4j open-xchange-dataretention \
open-xchange-genconf open-xchange-genconf-mysql \
open-xchange-imap open-xchange-mailfilter \
open-xchange-management open-xchange-monitoring \
open-xchange-passwordchange-database open-xchange-passwordchange-servlet \
open-xchange-pop3 open-xchange-publish open-xchange-publish-basic \
open-xchange-publish-infostore-online open-xchange-publish-json \
open-xchange-publish-microformats open-xchange-push-udp \
open-xchange-resource-managerequest open-xchange-server \
open-xchange-settings-extensions open-xchange-smtp \
open-xchange-spamhandler-default open-xchange-sql open-xchange-subscribe \
open-xchange-xerces-sun open-xchange-subscribe-json \
open-xchange-subscribe-microformats open-xchange-subscribe-crawler \
open-xchange-templating open-xchange-threadpool open-xchange-unifiedinbox \
open-xchange-admin-plugin-hosting-doc open-xchange-charset \
open-xchange-group-managerequest open-xchange-i18n open-xchange-jcharset \
open-xchange-sessiond open-xchange-calendar-printing \
open-xchange-user-json open-xchange-gui-wizard-plugin \
open-xchange-report-client \
open-xchange-configjump-generic-gui \
open-xchange-gui open-xchange-gui-wizard-plugin-gui \
open-xchange-online-help-de \
open-xchange-online-help-en open-xchange-online-help-fr open-xchange-gui-lang-community-ru-ru

7. Restart MySQL:

$ /etc/init.d/mysql restart

8. Add the Open-Xchange tools to your PATH:

$ echo 'PATH=$PATH:/opt/open-xchange/sbin/' >> ~/.bashrc && . ~/.bashrc

9. Grant MySQL privileges to the openexchange user:

$ echo "GRANT ALL PRIVILEGES ON *.* TO 'openexchange'@'localhost' IDENTIFIED BY 'open_password';" > /tmp/openXchange_pri.sql

10. Apply the grants (you will be prompted for the MySQL root password):

$ mysql -u root -p < /tmp/openXchange_pri.sql

11. Initialize the config database:

$ /opt/open-xchange/sbin/initconfigdb --configdb-pass=open_password

12. Run the installer and register the server:

$ /opt/open-xchange/sbin/oxinstaller --no-license --servername=oxserver \
--configdb-pass=open_password --master-pass=open_master_password --ajp-bind-port=localhost --servermemory 1024

$ /opt/open-xchange/sbin/registerserver -n oxserver -A oxadminmaster -P mysql123

13. Create the filestore directory and hand it to the open-xchange user:

$ mkdir /var/opt/filestore
$ chown open-xchange:open-xchange /var/opt/filestore

14. Register the filestore:

$ /opt/open-xchange/sbin/registerfilestore -A oxadminmaster -P mysql123 \
-t file:/var/opt/filestore -s 1000000

15. Register the groupware database:

$ /opt/open-xchange/sbin/registerdatabase -A oxadminmaster -P mysql123 \
-n oxdatabase -p mysql123 -m true

16. Enable the required Apache modules:

$ a2enmod proxy proxy_ajp proxy_balancer expires deflate headers rewrite mime setenvif

17. Force-reload Apache:

$ /etc/init.d/apache2 force-reload

18. Create the AJP proxy configuration:

$ mcedit /etc/apache2/conf.d/proxy_ajp.conf

19. Configure the default Apache virtual host:

$ vim /etc/apache2/sites-available/default

<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/
    <Directory /var/www/>
        AllowOverride None
        Order allow,deny
        allow from all
        RedirectMatch ^/$ /ox6/
        Options +FollowSymLinks +SymLinksIfOwnerMatch
    </Directory>

    # deflate
    AddOutputFilterByType DEFLATE text/html text/plain text/javascript application/javascript text/css text/xml application/xml text/x-js application/x-javascript

    # pre-compressed files
    AddType text/javascript .jsz
    AddType text/css .cssz
    AddType text/xml .xmlz
    AddType text/plain .po
    AddEncoding gzip .jsz .cssz .xmlz
    SetEnvIf Request_URI "\.(jsz|cssz|xmlz)$" no-gzip

    ExpiresActive On

    <Location /ox6>
        # Expires (via ExpiresByType to override global settings)
        ExpiresByType image/gif "access plus 6 months"
        ExpiresByType image/png "access plus 6 months"
        ExpiresByType image/jpg "access plus 6 months"
        ExpiresByType image/jpeg "access plus 6 months"
        ExpiresByType text/css "access plus 6 months"
        ExpiresByType text/html "access plus 6 months"
        ExpiresByType text/xml "access plus 6 months"
        ExpiresByType text/javascript "access plus 6 months"
        ExpiresByType text/x-js "access plus 6 months"
        ExpiresByType application/x-javascript "access plus 6 months"
        ExpiresDefault "access plus 6 months"
        Header append Cache-Control "private"
        Header unset Last-Modified
        Header unset Vary
        # Strip version
        RewriteEngine On
        RewriteRule v=\w+/(.+) $1 [L]
        # Turn off ETag
        Header unset ETag
        FileETag None
    </Location>

    <Location /ox6/ox.html>
        ExpiresByType text/html "now"
        ExpiresDefault "now"
        Header unset Last-Modified
        Header set Cache-Control "no-store, no-cache, must-revalidate, post-check=0, pre-check=0"
        # Turn off ETag
        Header unset ETag
        FileETag None
    </Location>

    <Location /ox6/index.html>
        ExpiresByType text/html "now"
        ExpiresDefault "now"
        Header unset Last-Modified
        Header set Cache-Control "no-store, no-cache, must-revalidate, post-check=0, pre-check=0"
        # Turn off ETag
        Header unset ETag
        FileETag None
    </Location>
</VirtualHost>

20. Restart Apache:

$ sudo /etc/init.d/apache2 restart

21. Start the Open-Xchange groupware service:

$ sudo /etc/init.d/open-xchange-groupware start

22. Create the default context and admin user:

$ /opt/open-xchange/sbin/createcontext -A oxadminmaster -P open_master_password -c 1 \
-u oxadmin -d "Context Admin" -g Admin -s User -p admin_password -L defaultcontext \
-e oxadmin@company.com -q 1024 --access-combination-name=all

svn --password "" --username anonymous co https://svn.open-xchange.com/ox-quickinstall/

http://paste.ubuntu.com/5789872/

http://paste.ubuntu.com/5789888/

Next,

$ /opt/open-xchange/sbin/createcontext -A oxadminmaster -P mysql123 -c 1 \
-u oxadmin -d "Context admin" -g Admin -s User -p mysql123 -L defaultcontext \
-e oxadmin@clevercircuits.com -q 1024 --access-combination-name=all

$ /opt/open-xchange/sbin/createcontext -A oxadminmaster -P mysql123 -c 1 \
-u oxadmin -d "Context admin" -g admin -s User -p mysql123 -L defaultcontext \
-e oxadmin@clevercircuits.com -q 1024 --access-combination-name=groupware_standard

$ /opt/open-xchange/sbin/createuser -c 1 -A oxadmin -P mysql123 -u testuser \
-d "Test User" -g test -s User -p secret -e testuser@clevercircuits.com \
--imaplogin testuser --imapserver 127.0.0.1 --smtpserver 127.0.0.1

Learn Puppet with Me – Day 2

Estimated Reading Time: 2 minutes

Today we are going to learn about Puppet Modules.

What are Puppet Modules? Puppet Labs defines them as follows: "Modules are self-contained bundles of code and data. You can write your own modules or you can download pre-built modules from the Puppet Forge." Nearly all Puppet manifests belong in modules; the sole exception is the main site.pp manifest, which contains site-wide and node-specific code.


Modules are how Puppet finds the classes and types it can use — it automatically loads any class or defined type stored in its modules.

Module Layout

On disk, a module is simply a directory tree with a specific, predictable structure:

  • <MODULE NAME>
    • manifests
    • files
    • templates
    • lib
    • facts.d
    • tests
    • spec

We will start with basic module and slowly move towards the complex module structure.

Let’s begin:

#mkdir modules/memcached
#mkdir modules/memcached/manifests
#mkdir modules/memcached/files
#vi nodes.pp

node 'puppetagent1.cse.com' {
  include memcached
}
#define memcached class in the file init.pp
#vi modules/memcached/manifests/init.pp

class memcached {
  package { 'memcached':
    ensure => installed,
  }

  file { '/etc/memcached.conf':
    source  => 'puppet:///modules/memcached/memcached.conf',
    owner   => 'root',
    group   => 'root',
    mode    => '0644',
    require => Package['memcached'],
  }

  service { 'memcached':
    ensure  => running,
    enable  => true,
    require => [Package['memcached'], File['/etc/memcached.conf']],
  }
}

That's all. Place a memcached.conf under modules/memcached/files/, then run puppet agent -t on the Puppet client machine to get memcached ready.

Learn Puppet With Me – Day 1

Estimated Reading Time: 2 minutes

Today is the day 1 of Learn Puppet with Me. I am starting this thread for those who want to learn Puppet smoothly.

Puppet is an Automation IT tool and I have already talked about its capabilities in my last post related to Puppet.


Let’s demystify the puppet fundamentals through this easy step.

Day 1: How to create a file with content “Hello, World” on puppet agent?

Say I have a puppet master and agent ready. All I want is to create a file on the puppet agent, either by running a command on the agent or by fetching the configuration directly from the puppet master. Here we go:

Run the below commands on Puppetmaster Machine:

1. Create a directory called puppet:

#mkdir puppet

2. Change to puppet directory:

#cd puppet

3. Under it, create a subfolder called manifests:

#mkdir manifests

4. Create a file called site.pp under manifests:

#vi manifests/site.pp

import 'nodes.pp'

5. Create a file called nodes.pp under manifest and add the following entries:

#vi manifests/nodes.pp

node 'puppetagent1.cse.com' {
  file { '/tmp/hello':
    content => "hello, world\n",
  }
}

6. That’s all. Now test your manifests with the puppet apply command.

#puppet apply manifests/site.pp

OR

7. Run the following command on puppet agent:

# puppet agent -t

Verify if puppet created the file with the contents on the puppet agent machine.

It's a very simple way to create a file on a puppet client through the puppet master.

In the next episode, we will talk about Puppet style and parameters.

How to setup RAID 1 on Ubuntu Linux?

Estimated Reading Time: 4 minutes

RAID 1 creates a mirror on the second drive. You will need to create RAID-aware partitions on your drives before you can create the RAID, and you will need to install mdadm on Ubuntu.


You may have to create the RAID device node first, using the block major and minor numbers. Increment the minor number ("1" below) by one each time you create an additional RAID device.

# mknod /dev/md1 b 9 1

This creates the device node for /dev/md1 if you have already used /dev/md0.

Create RAID 1

# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb7 /dev/sdb8

--create
This will create a RAID array. The device that you will use for this RAID array is /dev/md1.

--level=1
The level option determines what RAID level you will use for the RAID.

--raid-devices=2 /dev/sdb7 /dev/sdb8
Note: for illustration or practice this shows two partitions on the same drive. This is NOT what you want to do; partitions must be on separate drives. However, this will provide you with a practice scenario. You must list the number of devices in the RAID array and you must list the devices that you have partitioned with fdisk. The example shows two RAID partitions.

mdadm: array /dev/md1 started.

Verify the Creation of the RAID

# cat /proc/mdstat

Personalities : [raid0] [raid1]

md2 : active raid1 sdb8[1] sdb7[0]

497856 blocks [2/2] [UU]

[======>…………..] resync = 34.4% (172672/497856) finish=0.2min speed=21584K/sec

md0 : active raid0 sdb6[1] sdb5[0]

995712 blocks 64k chunks

unused devices:

# tail /var/log/messages

You can also verify that RAID is being built in /var/log/messages.

May 19 09:21:45 ub1 kernel: [ 5320.433192] md: raid1 personality registered for level 1

May 19 09:21:45 ub1 kernel: [ 5320.433620] md2: WARNING: sdb7 appears to be on the same physical disk as sdb8.

May 19 09:21:45 ub1 kernel: [ 5320.433628] True protection against single-disk failure might be compromised.

May 19 09:21:45 ub1 kernel: [ 5320.433772] raid1: raid set md2 active with 2 out of 2 mirrors

May 19 09:21:45 ub1 kernel: [ 5320.433913] md: resync of RAID array md2

May 19 09:21:45 ub1 kernel: [ 5320.433926] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.

May 19 09:21:45 ub1 kernel: [ 5320.433934] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.

May 19 09:21:45 ub1 kernel: [ 5320.433954] md: using 128k window, over a total of 497856 blocks.

Create the ext3 File System
You have to place a file system on your RAID device. The journaling file system ext3 is placed on the device in this example.

# mke2fs -j /dev/md1

mke2fs 1.40.8 (13-Mar-2008)

Filesystem label=

OS type: Linux

Block size=1024 (log=0)

Fragment size=1024 (log=0)

124928 inodes, 497856 blocks

24892 blocks (5.00%) reserved for the super user

First data block=1

Maximum filesystem blocks=67633152

61 block groups

8192 blocks per group, 8192 fragments per group

2048 inodes per group

Superblock backups stored on blocks:

8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409

Writing inode tables: done

Creating journal (8192 blocks): done

Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 35 mounts or

180 days, whichever comes first. Use tune2fs -c or -i to override.

Mount the RAID on the /raid Partition

In order to use the RAID array you will need to mount it on the file system. For testing purposes you can create a mount point and test. To make a permanent mount point you will need to edit /etc/fstab.

# mkdir /raid

# mount /dev/md1 /raid

# df
The df command will verify that it has mounted.

Filesystem 1K-blocks Used Available Use% Mounted on

/dev/sda2 5809368 2699256 2817328 49% /

varrun 1037732 104 1037628 1% /var/run

varlock 1037732 0 1037732 0% /var/lock

udev 1037732 80 1037652 1% /dev

devshm 1037732 12 1037720 1% /dev/shm

/dev/sda1 474440 49252 400691 11% /boot

/dev/sda4 474367664 1738024 448722912 1% /home

/dev/md1 482090 10544 446654 3% /raid

You should be able to create files on the new partition. If this works then you may edit the /etc/fstab and add a line that looks like this:

/dev/md1 /raid ext3 defaults 0 2

Be sure to test and be prepared to enter single user mode to fix any problems with the new RAID device.

Create a Failed RAID Disk

In order to test your RAID 1 you can fail a disk, remove it and reinstall it. This is an important feature to practice.

# mdadm /dev/md1 -f /dev/sdb8
This will deliberately mark /dev/sdb8 as faulty.

mdadm: set /dev/sdb8 faulty in /dev/md1

root@ub1:/etc/network# cat /proc/mdstat

Personalities : [raid0] [raid1]

md2 : active raid1 sdb8[2](F) sdb7[0]

497856 blocks [2/1] [U_]

md0 : active raid0 sdb6[1] sdb5[0]

995712 blocks 64k chunks

unused devices:

Hot Remove the Failed Disk
You can remove the faulty disk from the RAID array.

# mdadm /dev/md1 -r /dev/sdb8

mdadm: hot removed /dev/sdb8

Verify the Process

You should be able to see the process as it is working.

# cat /proc/mdstat

Personalities : [raid0] [raid1]

md2 : active raid1 sdb7[0]

497856 blocks [2/1] [U_]

md0 : active raid0 sdb6[1] sdb5[0]

995712 blocks 64k chunks

unused devices:

Add a Replacement Drive HOT

This will allow you to add a device into the array to replace the bad one.
# mdadm /dev/md1 -a /dev/sdb8

mdadm: re-added /dev/sdb8

Verify the Process.

# cat /proc/mdstat

Personalities : [raid0] [raid1]

md2 : active raid1 sdb8[2] sdb7[0]

497856 blocks [2/1] [U_]

[=====>……………] recovery = 26.8% (134464/497856) finish=0.2min speed=26892K/sec

md0 : active raid0 sdb6[1] sdb5[0]

995712 blocks 64k chunks

unused devices:

How to setup RAID 0 on Ubuntu Linux?

Estimated Reading Time: 3 minutes

RAID 0 will create striping to increase read/write speeds, as the data can be read and written on separate disks at the same time. This level of RAID is what you want to use if you need to increase the speed of disk access. You will need to create RAID-aware partitions on your drives before you can create the RAID, and you will need to install mdadm on Ubuntu.

These commands must be done as root or you must add the sudo command in front of each command.

# mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb5 /dev/sdb6

--create
This will create a RAID array. The device that you will use for the first RAID array is /dev/md0.

--level=0
The level option determines what RAID level you will use for the RAID.

--raid-devices=2 /dev/sdb5 /dev/sdb6
Note: for illustration or practice this shows two partitions on the same drive. This is NOT what you want to do; partitions must be on separate drives. However, this will provide you with a practice scenario. You must list the number of devices in the RAID array and you must list the devices that you have partitioned with fdisk. The example shows two RAID partitions.
mdadm: array /dev/md0 started.

Check the progress of the RAID build.

# cat /proc/mdstat

Personalities : [raid0]

md0 : active raid0 sdb6[1] sdb5[0]

995712 blocks 64k chunks
unused devices:
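A quick sanity check on the numbers above: RAID 0 exposes the combined capacity of its members, and each member partition here is 497856 blocks, which is exactly the 995712 blocks mdstat reports for md0:

```shell
# Two 497856-block members striped together; RAID 0 capacity is their sum
echo $(( 497856 + 497856 ))
```

A RAID 1 mirror of the same two partitions would instead give you a single member's worth (497856 blocks), as in the RAID 1 walkthrough above.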

# tail /var/log/messages
You can also verify that RAID is being built in /var/log/messages.

May 19 09:08:51 ub1 kernel: [ 4548.276806] raid0: looking at sdb5

May 19 09:08:51 ub1 kernel: [ 4548.276809] raid0: comparing sdb5(497856) with sdb6(497856)

May 19 09:08:51 ub1 kernel: [ 4548.276813] raid0: EQUAL

May 19 09:08:51 ub1 kernel: [ 4548.276815] raid0: FINAL 1 zones

May 19 09:08:51 ub1 kernel: [ 4548.276822] raid0: done.

May 19 09:08:51 ub1 kernel: [ 4548.276826] raid0 : md_size is 995712 blocks.

May 19 09:08:51 ub1 kernel: [ 4548.276829] raid0 : conf->hash_spacing is 995712 blocks.

May 19 09:08:51 ub1 kernel: [ 4548.276831] raid0 : nb_zone is 1.

May 19 09:08:51 ub1 kernel: [ 4548.276834] raid0 : Allocating 4 bytes for hash.

Create the ext3 File System
You have to place a file system on your RAID device. The journaling file system ext3 is placed on the device in this example.

# mke2fs -j /dev/md0

mke2fs 1.40.8 (13-Mar-2008)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

62464 inodes, 248928 blocks

12446 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=255852544

8 block groups

32768 blocks per group, 32768 fragments per group

7808 inodes per group

Superblock backups stored on blocks:

32768, 98304, 163840, 229376

Writing inode tables: done

Creating journal (4096 blocks): done

Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 39 mounts or

180 days, whichever comes first. Use tune2fs -c or -i to override.

Create a Place to Mount the RAID on the File System

In order to use the RAID array you will need to mount it on the file system. For testing purposes you can create a mount point and test. To make a permanent mount point you will need to edit /etc/fstab.

# mkdir /raid

Mount the RAID Array

# mount /dev/md0 /raid

You should be able to create files on the new partition. If this works then you may edit the /etc/fstab and add a line that looks like this:

/dev/md0 /raid ext3 defaults 0 2

Be sure to test and be prepared to enter single user mode to fix any problems with the new RAID device.

Hope you find this article helpful.

How do I bind NIC interrupts to selected CPU?

Estimated Reading Time: 2 minutes

I read this interesting mailing thread a few weeks back, and I won't be late in sharing it with open source enthusiasts like you. Here goes the story:


I have a server with 8 CPU cores and I am trying to bind the NIC eth0 interrupt(s) to CPU4 and CPU5. As of now, my eth0 interrupts are spread across all 8 cores.
#grep eth0 /proc/interrupts | awk '{print $NF}' | sort

eth0-0
eth0-1
eth0-2
eth0-3
eth0-4
eth0-5
eth0-6
eth0-7

How to move ahead?

Solution: Follow these steps to get it done.

As I am using a Broadcom card (bnx2), I am going to run this command and reboot my machine.

Open the terminal:

echo "options bnx2 disable_msi=1" > /etc/modprobe.d/bnx2.conf

then reboot; afterwards you'll see only one IRQ for eth0.

Next, run this command:

echo cpumask > /proc/irq/IRQ-OF-ETH0-0/smp_affinity

I believe the mask for CPU4 is 10 and for CPU5 is 20 (both hexadecimal).
(Don't forget to disable irqbalance.)
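The smp_affinity value is a hexadecimal bitmask with one bit per CPU, so those masks can be double-checked with a bit of shell arithmetic (a quick sketch, using the CPU numbers from the question):

```shell
# Bit N of the smp_affinity mask selects CPU N
printf '%x\n' $(( 1 << 4 ))               # CPU4 alone  -> 10
printf '%x\n' $(( 1 << 5 ))               # CPU5 alone  -> 20
printf '%x\n' $(( (1 << 4) | (1 << 5) ))  # CPU4 or CPU5 -> 30
```

Writing 30 into smp_affinity would allow the IRQ to be serviced on either CPU4 or CPU5.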

You can only bind the IRQs for one NIC to one core at a time. Alternatively, you could do something fancy/silly with isolcpus: isolate all CPUs except 4/5 so that all IRQs are serviced on CPU4/5, then use cpusets/taskset/tuna to move all processes off CPU4/5. Note that you'll have to use taskset/cpuset/tuna for every task to ensure it is not using CPU4/5.

Hope it helps !!!

Puppet Module for JBOSS

Estimated Reading Time: 5 minutes

Recently one of my colleagues called me up with a problem statement: he was finding it difficult to configure JBoss through Puppet. I tried to help him using a VMware Workstation box on my Dell Inspiron.


I tried to google but couldn't find a working example. I tried my hand at it on my own and YES…I did it finally.

I am sharing the overall idea of how to deploy and configure JBoss through Puppet.

Let’s say you have the following steps which you manually perform for installing JBOSS on your Linux machine:

1. $ su -c "yum install java-1.6.0-openjdk-devel"

2. $ java -version

3. $ wget http://download.jboss.org/jbossas/7.1/jboss-as-7.1.1.Final/jboss-as-7.1.1.Final.zip

4. $ unzip jboss-as-7.1.1.Final.zip -d /usr/share

5. $ adduser jboss

6. $ chown -fR jboss.jboss /usr/share/jboss-as-7.1.1.Final/

7. $ su jboss

8. $ cd /usr/share/jboss-as-7.1.1.Final/bin/

9. $ ./add-user.sh

You should see the following message on the console after executing the command:

What type of user do you wish to add?

a) Management User (mgmt-users.properties)

b) Application User (application-users.properties)

(a): a

We select “a”, next you should see the following message:

Enter the details of the new user to add.

Realm (ManagementRealm) :

Username : jboss

Password :

Re-enter Password :

* hit enter for Realm to use default, then provide a username and password

We select the default value for the Realm (ManagementRealm), by hitting enter, and select “jboss” as our username. By default, we supply “jb0ss” as our password, of course, you can provide any password you prefer here.

Start the JBoss AS 7 server:

Once the appropriate JBoss users are created, we are now ready to start our new JBoss AS 7 server. With JBoss AS 7, a new standalone and domain model has been introduced. In this tutorial, we focus on starting up a standalone server. The domain server will be part of a future tutorial.

Start up a JBoss 7 standalone instance:

A standalone instance of JBoss 7 can be started by executing:

$ ./standalone.sh -Djboss.bind.address=0.0.0.0 -Djboss.bind.address.management=0.0.0.0

We can automate those steps for clients through Puppet. Let's start writing the Puppet init.pp from scratch. I will build up init.pp step by step to cover each of its components.

Lines 1 to 4

Lines 1 to 4 do nothing but download JBoss to the /usr/share directory. What we are going to do is put the downloaded jboss-as-7.1.1.Final under the /var/lib/puppet/files directory on the puppet master and push it to the puppet client at the /usr/share/jboss-as directory.

Here is the init.pp:
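The original listing was embedded as an image that has not survived; a minimal sketch consistent with the fuller listing later in this post would be:

```puppet
# Sketch (reconstructed): fetch the JBoss tree from the master's file
# server and place it on the client. Paths follow the later listing.
class jboss-custom {
  file { '/usr/share/jboss-as/jboss-7.1.1.Final':
    owner  => 'root',
    group  => 'root',
    mode   => '0440',
    source => 'puppet://puppet-server.test.com/files/jboss-as-7.1.1.Final',
  }
}
```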

The above init.pp defines a class jboss-custom, which takes jboss-as-7.1.1.Final from /var/lib/puppet/files/ on the puppet master and pushes it to the puppet client.

Que: How does it know which directory to pull the files from?

Answer: Under /etc/puppet/fileserver.conf, we define those paths and permissions.
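The screenshot with the actual stanza is missing; a typical fileserver.conf entry matching the paths used in this post would be (the mount name "files" is an assumption):

```ini
# /etc/puppet/fileserver.conf (sketch)
[files]
  path /var/lib/puppet/files
  allow *
```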

Shall we start?

Ensure that you have put jboss-as-7.1.1.Final under the /var/lib/puppet/files directory with the right ownership:

#chown -R puppet:puppet /var/lib/puppet

The permission is very important and shouldn’t be skipped.

Now run the agent command from the puppet client to check that it runs without any issue.

Wow !!! Our first run went well and the server pushed the file to the puppet client successfully.

Lines 5 to 9

The easiest way of performing the remaining steps is to write a shell script which will run on the remote machine.

Go to /var/lib/puppet/files and create a script called jbossdeploy.sh:

#!/bin/bash

groupadd jbossas

useradd -g jbossas -p deQcvEr1PRPSM jbossas

chown -fR jbossas:jbossas /usr/share/jboss-as-7.1.1.Final/

cd /usr/share/jboss-as-7.1.1.Final/bin

/usr/bin/expect << EOD
spawn ./add-user.sh
expect "(a):"
send "a\r"
expect "Realm (ManagementRealm):"
send "\r"
expect "Username:"
send "jbossas\r"
expect "Password:"
send "jbossas\r"
expect "Re-enter Password:"
send "jbossas\r"
expect eof
EOD

./standalone.sh -Djboss.bind.address=0.0.0.0 -Djboss.bind.address.management=0.0.0.0 &

The above script creates the jbossas user and group, then runs the add-user.sh command under the /usr/share/jboss-as-7.1.1.Final/bin directory. I have used the expect utility (ensure it is already installed) to answer the interactive prompts.

Let's modify init.pp to accommodate this script execution as shown:

class jboss-custom {

  file { '/usr/share/jboss-as/jboss-7.1.1.Final':
    owner  => 'root',
    group  => 'root',
    mode   => '0440',
    source => 'puppet://puppet-server.test.com/files/jboss-as-7.1.1.Final',
  }

  file { '/usr/share/jboss-as/jbossdeploy.sh':
    mode   => '0755', # the script must be executable for the exec below
    source => 'puppet://puppet-server.test.com/files/jbossdeploy.sh',
  }

  exec { '/usr/share/jboss-as/jbossdeploy.sh':
    require => File['/usr/share/jboss-as/jbossdeploy.sh'],
  }
}

If you run the agent now, it all goes fine and starts the JBoss application server.

Let's test it.

puppet agent --test --verbose --server puppet-server.test.com

info: Caching catalog for puppet-client.test.com

info: Applying configuration version ‘1345944985’

notice: /File[/usr/share/jboss-as/jbossdeploy.sh]/content:

--- /usr/share/jboss-as/jbossdeploy.sh 2012-08-29 17:37:01.365003616 -0400

+++ /tmp/puppet-file20120829-18751-81r0p8-0 2012-08-29 17:44:40.993732919 -0400

@@ -1,12 +1,10 @@

#!/bin/bash

groupadd jbossas

useradd -g jbossas -p deQcvEr1PRPSM jbossas

chown -fR jbossas:jbossas /usr/share/jboss-as-7.1.1.Final/

-cd /usr/share/jboss-as/jboss-as-7.1.1.Final/bin

+cd /usr/share/jboss-as/

#!/usr/bin/expect

-/usr/bin/expect << EOD

-spawn sh add-user.sh

+spawn ./add-user.sh

expect “(a):”

send “a”

expect “Realm (ManagementRealm):”

@@ -17,7 +15,6 @@

send “jbossas”

expect “Re-enter Password:”

send “jbossas”

-EOD

cd /usr/share/jboss-as-7.1.1.Final/bin

./standalone.sh -Djboss.bind.address=0.0.0.0 -Djboss.bind.address.management=0.0.0.0&

info: FileBucket adding {md5}afc9bd6b8229da628396b90f2759f41f

info: /File[/usr/share/jboss-as/jbossdeploy.sh]: Filebucketed /usr/share/jboss-as/jbossdeploy.sh to puppet with sum afc9bd6b8229da628396b90f2759f41f

notice: /File[/usr/share/jboss-as/jbossdeploy.sh]/content: content changed '{md5}afc9bd6b8229da628396b90f2759f41f' to '{md5}140ab2a8605d1164793c2175aa972675'

notice: /Stage[main]/Jboss-custom/Exec[/usr/share/jboss-as/jbossdeploy.sh]/returns: executed successfully

notice: /File[/usr/share/jboss-as/jboss-7.1.1.Final]/ensure: created

notice: Finished catalog run in 0.79 seconds

[root@puppet-client ~]#