Fog – An Open Source Cloning Solution

If you are a system admin still sticking to Clonezilla, you should probably try FOG – a very fast and easy-to-deploy cloning solution. Compared to Clonezilla, FOG's imaging process usually takes only a few minutes. The current release adds support for Linux and multi-boot resizable imaging too. FOG can image Windows XP, Vista, Windows 7 and Windows 8 PCs using PXE, Partclone, and a web GUI to tie it all together.


A friend of mine runs a cyber cafe and was keen on deploying the same configuration on all 50 of his Windows machines. I visited his cyber cafe on the inauguration day, where I came up with the idea of a cloning and rescue suite, and decided to help him implement FOG – really a great tool.

  1. Installing Ubuntu 12.04.1

As a pre-requisite, install Ubuntu 12.04.1 on the physical machine.

Ensure Ubuntu Desktop packages are selected to be installed.

  2. Installing the FOG Cloning Solution

  1. Open Firefox.
  2. Go to http://www.fogproject.org and download FOG.
  3. Open a terminal: Applications->Accessories->Terminal.
  4. cd Desktop (remember, Linux is case sensitive).
  5. tar -xvzf fog*
  6. cd fog*
  7. cd bin
  8. sudo ./installfog.sh
  9. Select option 2 and press Enter.
  10. N, then Enter.
  11. Accept the default IP and press Enter.
  12. You don't need to set up a router IP, but I will in case I ever use the server for DHCP.
  13. Set up a DNS IP; just accept the default.
  14. No, do not change the default network interface. (You may not get this prompt if you have one NIC.)
  15. I will not be using FOG for DHCP. (That would require changing my current DHCP server.)
  16. Note your IP settings and continue.
  17. Press Enter to acknowledge.
  18. (I like to notify the FOG group; they have made a great product and deserve my feedback. The choice is yours here.)
  19. Run gksu gedit /var/www/fog/commons/config.php and put the MySQL password you typed during the install in “MYSQL_PASSWORD”, “<passwordhere>”; save and close. It has been noted that you should also change the MySQL password in /opt/fog/service/etc/config.php while you are at it.
  20. Browse to http://localhost/fog/management
  21. Click Install!
  22. Click to log in. You can now reach this web page from anywhere on the network where your server is installed by using its IP address, e.g. http://192.168.0.100/fog/management. I would recommend putting an A record called FOGSERVER in your DNS; this will make things easier to remember.

Default credentials: fog/password
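
For reference, the config.php edit in step 19 looks roughly like the sketch below; FOG stores its settings as PHP defines, and <passwordhere> stands for whatever password you typed during the install:

// in /var/www/fog/commons/config.php (and /opt/fog/service/etc/config.php)
define( "MYSQL_PASSWORD", "<passwordhere>" );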

Creating an Image in FOG

  The following instructions walk administrators through the process of configuring a computer for image creation in FOG.

Create Image in FOG

Log into the FOG management console

  1. The FOG management console URL is http://<server-ip>/fog/management

  2. Click the Image Management icon
  3. Click the New Image button in the left section of the screen
  4. Enter the following information:
  5. Image Name: use a clear, concise name
  6. Consider keeping the name short and model or OS specific
  7. Image Description: enter a clear, concise description
  8. Storage Group: Default
  9. Image File: will be filled in automatically. You may edit it if you want
  10. Image Type:
  11. Windows XP = Single Partition (NTFS Only, Resizable)
  12. Windows Vista/7 = Multiple Partition Image – Single Disk (Not Resizable)
  13. Click Add

Inventory Machine

  1. Boot the host machine to the FOG PXE boot menu
  2. Select Perform Full Host Registration and Inventory
  3. Enter the computer Host Name and press Enter
  4. Leave the IP Address field blank and press Enter
  5. Type ? and press Enter to get the list of Image IDs
  6. Enter the Image ID number
  7. Type ? and press Enter to get the list of Operating System IDs
  8. Enter the Operating System ID number
  9. Choose Y or N to add to Active Directory
  10. NOTE: for XP choose Y, for Vista/7 choose N
  11. Leave the Primary User field blank and press Enter
  12. Leave both Asset # fields blank unless you utilize them
  13. Select N, as you do not want to image this machine yet, and press Enter
  14. Enter “fog” and press Enter
  15. Load the machine with all the software and drivers you need
  16. For Windows XP you need an image for each model
  17. For Windows Vista & Windows 7 you can use the same image for any model, but hard drive size can cause issues

**DO NOT Activate Windows in Vista or 7**

Optional 

It is a good idea to change the default boot order to enable the network boot to be your first boot item. It is not required, however. All brands have a slightly different way of doing this. Check your manufacturer’s manual for assistance figuring out how to boot into the BIOS to make this change.

  1. Ensure all Windows Updates are Current
  2. Make sure all service packs are current
  3. Make sure current with .NET framework
  4. Install the FOG Client
  5. Enter the IP address of your FOG server
  6. Leave all other options checked
  7. For Windows XP, continue to the Windows XP section
  8. For Windows Vista & Windows 7, continue to the Windows Vista & Windows 7 section

 

 Windows XP

  1. Install the Windows XP Service Pack 2 Support Tools
  2. http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=49ae8576-9bb9-4126-9761-ba8011fabf38
  3. Download Windows XP Service Pack 3 Deployment Tools
  4. http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=673a1019-8e3e-4be0-ac31-70dd21b5afa7
  5. Open the deploy.cab file
  6. Select all files within
  7. Right Click and select Extract
  8. Select the following destination
  9. C:\Windows\System32\Sysprep
  10. NOTE: If the Sysprep folder doesn’t exist you need to create it
  11. Double-click the setupmgr.exe file in the C:\Windows\System32\Sysprep folder
  12. Click Next
  13. Select Modify Existing
  14. Click Browse Button
  15. Go to the C:\Windows\System32\Sysprep folder if not already there
  16. Click the drop down menu in the “Files of type” field
  17. Select Sysprep Inf Files (*.inf)
  18. Select the sysprep.inf file
  19. Click Open
  20. Click Next
  21. Make sure “Sysprep setup” is selected
  22. Click Next
  23. Select “Windows XP Professional”
  24. Click Next
  25. Select “Yes, fully automate the installation” if you use the same product key for all machines. If not, select “No, do not fully automate the installation”
  26. Click Next
  27. Change the following settings:
    i. Name and Organization
    ii. Time Zone
    iii. Product Key
    iv. Computer name: set to Automatically generate computer names (FOG will rename it for you)
    v. Administrator Password
  28. Click File
  29. Click Save
  30. The save location should be C:\Windows\System32\Sysprep
  31. Click OK
  32. Close the Setup Manager program
  33. Run Disk Cleanup
  34. Select all items
  35. Click the More Options tab
  36. Under the “System Restore and Shadow Copies” section, click the Clean Up button
  37. Click OK
  38. Click the Delete Files button
  39. Run the Disk Defragmenter tool
  40. Defragment and ensure there is no more than 5%-10% fragmentation
  41. 1%-2% is ideal
  42. When ready to Sysprep the machine, ensure you have installed all the programs you want on it and removed everything you do not.

NOTE: You cannot turn the machine back on after SYSPREP’ing until after you have taken the image.

  43. When ready to sysprep the machine, proceed to the next step.
  44. Navigate to C:\Windows\System32\Sysprep
  45. Double-click the Sysprep.exe file
  46. When the Sysprep program appears, choose the following settings:
    i. Options section: check “Use Mini-Setup”
    ii. Shutdown Mode: “Shut down”
  47. Click the Reseal button
  48. The machine will sysprep and then shut down
  49. Continue to the “Uploading the Image” section

**Leave the machine OFF until instructed otherwise**

Windows Vista & Windows 7

Windows Vista & Windows 7 do not have the Support Tools that XP does. These instructions cover this difference, along with the different SYSPREP steps.

Retrieving NETDOM.exe Instructions

You will need to install the Windows XP Support Tools (see the Windows XP instructions) on a Windows XP machine. You only need to do this once.

Once installed, navigate to C:\Program Files\Support Tools

Locate the NETDOM.exe file and copy it

On a flash drive or network share you manage, create a folder called “Support Tools”

Paste the NETDOM.exe file into the new Support Tools folder

  1. Copy the “Support Tools” folder (see “Retrieving NETDOM.exe Instructions”)
  2. Navigate to C:\Program Files
  3. Paste the Support Tools folder into the Program Files folder
  4. The path will be C:\Program Files\Support Tools
  5. NETDOM.exe will be inside the Support Tools folder
  6. Run the Disk Cleanup utility
  7. Check all items
  8. Click the More Options tab
  9. In the System Restore and Shadow Copies section, click Clean Up
  10. Click the Delete button
  11. Click OK
  12. Click the Delete Files button
  13. Run the Disk Defragmenter utility
  14. You can choose to simply analyze the disk first. Your disk must be less than 15% fragmented; if it is higher, defragment it.
  15. I recommend defragging it anyway to get fragmentation as low as possible

**Ensure you are prepared to SYSPREP this machine as you will have to repeat the following steps if you have to boot the machine after SYSPREP’ing**

Optional

Past versions of FOG required Vista and 7 images to have the following three commands run immediately before sysprepping:

Run CMD.exe as Administrator

bcdedit /set {bootmgr} device boot

bcdedit /set {default} device boot

bcdedit /set {default} osdevice boot

**It is not required to run these commands**

  1. Copy the unattend.xml file to the following location: c:\windows\system32\sysprep
  2. Run CMD.exe as Administrator
  3. Run the following command: cd c:\windows\system32\sysprep
  4. Now run the following command to sysprep the drive: sysprep /generalize /oobe /shutdown /unattend:c:\windows\system32\sysprep\unattend.xml
  5. SYSPREP will take a few moments to run
  6. The machine will shut down when done
  7. Continue to the “Uploading the Image” section

**Leave the machine OFF until instructed otherwise**
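
These instructions assume you already have an unattend.xml. If you don't, here is a minimal sketch for a 32-bit Windows 7 image; the values are illustrative only, and a real answer file should be generated with the Windows System Image Manager:

<?xml version="1.0" encoding="utf-8"?>
<unattend xmlns="urn:schemas-microsoft-com:unattend">
  <settings pass="oobeSystem">
    <component name="Microsoft-Windows-Shell-Setup" processorArchitecture="x86"
               publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
      <OOBE>
        <HideEULAPage>true</HideEULAPage>
        <NetworkLocation>Work</NetworkLocation>
        <ProtectYourPC>1</ProtectYourPC>
      </OOBE>
      <TimeZone>India Standard Time</TimeZone>
    </component>
  </settings>
</unattend>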

 

Uploading the Image

Choosing shutdown after task completion ensures you don't begin unpacking your sysprepped machine by booting into Windows should the image upload fail.

  1. Log into the FOG Management Console
  2. Click the Task Management icon
  3. Click List All Hosts
  4. Locate the host you Inventoried in the Inventory Machine section of these instructions
  5. Click the Upload arrow for the indicated host
  6. Add a check to the following option
  7. Shutdown after task completion?
  8. Click Upload Image
  9. Ensure the host machine PXE boots and does not boot from the hard drive

**NOTE: If the machine begins to boot into Windows for any amount of time, you will need to SYSPREP again if your image has not been uploaded successfully.**

Test Image

  1. When the image has been uploaded, test it on a different machine
  2. Ensure you use the same make and model if it is a Windows XP image
  3. Vista and Windows 7 images are not model or brand specific, but you may run into driver issues. You will need to address those issues either prior to upload or after.

  4. Test as many scenarios as possible prior to putting your image into production


IT Automation through Ansible

Ansible is an IT automation tool. It can configure systems, deploy software, and orchestrate more advanced IT tasks such as continuous deployments or zero downtime rolling updates.

One of the main reasons I chose Ansible over other IT automation tools like Puppet and Chef is that it manages machines in an agentless manner. As OpenSSH is one of the most peer-reviewed open source components, the security exposure of using the tool is greatly reduced. Ansible is decentralized – it relies on your existing OS credentials to control access to remote machines; if needed, it can easily connect with Kerberos, LDAP, and other centralized authentication management systems.

Today we are going to get started with Ansible. This guide is built for beginners and novices who want to get their hands dirty with Ansible. Let's begin.

Setting up Ansible Server and Client:

We have three machines. I assume that Ansible is already running on machine1. In this post we are going to install packages and make a few configuration changes on the DB machine (machine2). I will discuss the Web part in the next post.

Here are the machine details:

Machine1 – 192.168.1.5 (Ansible)

Machine2 – 192.168.1.6 (DB)

Machine3 – 192.168.1.7 (Web)
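
For reference, a minimal inventory for this setup might look like the sketch below; the group names are my own assumption, and the hosts could equally be listed ungrouped in /etc/ansible/hosts:

# /etc/ansible/hosts -- a sketch
[db]
192.168.1.6

[web]
192.168.1.7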

Create two folders – db and web – under /etc/ansible/roles, as sketched below.

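With the screenshot gone, here is the directory layout that the step above creates (the web role stays empty until the next post):

/etc/ansible/roles/
├── db
│   ├── handlers
│   ├── tasks
│   └── templates
└── web
    ├── handlers
    ├── tasks
    └── templates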

The handlers, tasks and templates subdirectories are the recommended directory structure in which Ansible looks for the respective YAML files.

Don’t worry about the contents as of now. We just need empty directory and file structure.

Let's start by creating a first file called playbook.yml:

root@ansible-host:/etc/ansible# cat playbook.yml
---
# Common role playbook
- hosts: all
  tasks: []

- hosts: 192.168.1.6
  sudo: yes
  roles:
    - db

As shown above, playbook.yml sits under /etc/ansible and contains the list of ansible machines where the deployment has to be done.

Under each host, we mention the role names so that the specific role is applied when the deployment is initiated on that host.

Let’s talk about DB contents first:

Folder: handlers

root@ansible-host:/etc/ansible/roles/db# cat handlers/main.yml
---
- name: start mysql
  service: name=mysql state=started

- name: restart mysql
  service: name=mysql state=restarted

Folder: tasks

root@ansible-host:/etc/ansible/roles/db/tasks# cat install.yml
---
- name: Install mysql
  apt: name={{ item }} state=latest
  with_items:
    - mysql-server
    - python-mysqldb
    - php5-mysql
    - libapache2-mod-auth-mysql
  notify: start mysql

root@ansible-host:/etc/ansible/roles/db/tasks# cat mysql_secure_installation.yml
---
- name: create mysql root pass
  command: /usr/bin/openssl rand -base64 16
  register: mysql_root_passwd

- name: update mysql root passwd
  mysql_user: name=root host={{ item }} password={{ mysql_root_passwd.stdout }}
  with_items:
    - "{{ ansible_hostname }}"
    - 127.0.0.1
    - ::1
    - localhost

- name: copy user my.cnf file with root passwd credentials
  template: src=dotmy.cnf.j2 dest=/root/.my.cnf owner=root group=root mode=0600

- name: delete anonymous mysql user
  mysql_user: name="" state=absent

- name: remove mysql test database
  mysql_db: name=test state=absent

- name: create database blog
  mysql_db: name=blog state=present

- name: create database user with name 'blog' and password 'blog' with all DB privileges and with GRANT option
  mysql_user: name=blog password=blog priv=*.*:ALL,GRANT state=present

root@ansible-host:/etc/ansible/roles/db/tasks# cat main.yml
---
- include: install.yml
- include: mysql_secure_installation.yml

Folder: templates

root@ansible-host:/etc/ansible/roles/db/templates# cat dotmy.cnf.j2
[client]
user=root
password={{ mysql_root_passwd.stdout }}

That's all the DB role needs to function well.

We are now ready to execute the playbook against the remote 192.168.1.6 machine through Ansible.

root@ansible-host:/etc/ansible# ansible-playbook playbook.yml

PLAY [all] ********************************************************************

GATHERING FACTS ***************************************************************

ok: [192.168.1.6]

PLAY [192.168.1.6] **********************************************************

GATHERING FACTS ***************************************************************
ok: [192.168.1.6]

TASK: [db | Install mysql] ****************************************************
changed: [192.168.1.6] => (item=mysql-server,python-mysqldb,php5-mysql,libapache2-mod-auth-mysql)

TASK: [db | create mysql root pass] *******************************************
changed: [192.168.1.6]

TASK: [db | update mysql root passwd] *****************************************
changed: [192.168.1.6] => (item=ansible-client)
changed: [192.168.1.6] => (item=127.0.0.1)
changed: [192.168.1.6] => (item=::1)
changed: [192.168.1.6] => (item=localhost)

TASK: [db | copy user my.cnf file with root passwd credentials] ***************
changed: [192.168.1.6]

TASK: [db | delete anonymous mysql user] **************************************
ok: [192.168.1.6]

TASK: [db | remove mysql test database] ***************************************
ok: [192.168.1.6]

TASK: [db | create database blog] *********************************************
changed: [192.168.1.6]

TASK: [db | create database user with name ‘blog’ and password ‘blog’ with all DB privileges and with GRANT options] ***
changed: [192.168.1.6]

NOTIFIED: [db | start mysql] **************************************************
ok: [192.168.1.6]

PLAY [192.168.1.6] **********************************************************

GATHERING FACTS ***************************************************************
ok: [192.168.1.6]

TASK: [web | Install web] *****************************************************
changed: [192.168.1.6] => (item=python-pip,python-mysqldb)

TASK: [web | install tornado and torndb] **************************************

192.168.1.6 : ok=13 changed=7 unreachable=0 failed=0

Done. Now you can easily SSH to the remote DB machine and check that MySQL was successfully installed.

Isn't that magic? You didn't need any agent running on the remote machine; OpenSSH alone enables the IT automation.

Catch you in my next post.


It was an Openstack Day…OSI 2014

Yesterday I attended the Open Source India 2014 event, held at the NIMHANS Convention Center, Bengaluru. If you are new to OSI: Open Source India is around 11 years old and, believe me, it's Asia's largest convention on open source technology. Founded by the EFY Group, its main motto is to bridge the gap between industry and the open source community. The event is a step toward showcasing successful open source implementations in the form of keynotes, discussions, and workshops.


I reached the venue at around 9:30 AM, already well aware of the event schedule. The two-day event featured tracks on topics including Web App Development, Mobile App Development, IT Infrastructure Day, Cloud Day, Kernel Day, Database Day, FOSS For Everyone, IT Implementation Success Stories and an OpenStack Mini Conference.

HP was the platinum partner, while Microsoft, MongoDB, Wipro, Oracle and Zimbra were among the other vendors waiting at their booths to welcome you with their open source offerings. I spent the first 20 minutes visiting each booth for a glimpse before entering Hall-1 for the morning keynote.


Rajeev Pandey, an HP Distinguished Technologist, opened the keynote on “A Deployment Architecture for OpenStack in the Enterprise”. He talked about HP Helion Cloud and its offerings, with the disclaimer that, this being an open source event, he would focus on HP's open source offerings.


“FOSS Adoption in four classes of institutions in India” was the next topic of discussion. The speaker shared a very interesting survey on open source adoption by research organizations, higher education, government and IT SMEs. The survey is public and published at http://www.au-kbc.org/survey

The session titled “Free & Open Source Enterprise Linux” by Kamal Dodeja, Global Sales Consulting Manager, Oracle India, was well presented and very informative. The speaker talked about Oracle's contributions to XFS, MySQL, VirtualBox, OpenJDK, Xen, Java, .NET, DTrace, Eclipse, Metro, InnoDB and GlassFish.


I raised a question about RHEL 7 recently replacing MySQL with MariaDB, and the speaker's answer was convincing: if you are a MySQL user looking for support, come to us; if you want to play around with the code, explore MariaDB. It was interesting to learn that even though the MySQL core is still open source, OpenStack's recent Icehouse release recommends MariaDB rather than MySQL. This was a very interactive session, and it was good to see how Oracle has preserved its open source offerings after the Sun acquisition.


Then came the tea break, which doubled as lunch. I skipped a session on Wikipedia to ready myself for the post-lunch events. The HP Helion sales team knows how to sell their product and had tech challenges in place. Soon I left for the next event, the OpenStack Mini Conf.

I bagged an HP Helion jacket by answering a query related to Rackspace. It was a great feeling altogether and a good interactive session. It was followed by “State of the Dolphins & the Penguins” by Sanjay Manwani, Director, MySQL India, and Ramesh Srinivasan, Senior Director, Oracle Linux and Virtualization. This session covered the complete history of Oracle's open source involvement to date, with a timeline of its FOSS contributions.

“OpenStack Development and Contribution Workflow” by Swapnil Kulkarni was the next topic, one I was eagerly waiting for. The speaker walked through a step-by-step OpenStack setup and how to contribute to OpenStack through Git. Though it was a complete demonstration, the small font of the Linux commands bored parts of the audience. Still, I appreciate his knowledge and subject matter expertise. I answered a couple of questions related to GitHub during this session.

I skipped the tea in between, as I was eagerly waiting for the next big session, “OpenStack Nova Deep Dive and Nova Instance Management Lifecycle”. This was truly a great session. Anil Bidari, a trainer at Cloudenabled, presented “behind the scenes while you fork an instance” very well. He referred to https://www.youtube.com/watch?v=Y0GBxFYeM1s as one of the OpenStack demos he has published for everyone. I am eagerly waiting for his recorded presentation. Simply an awesome presentation!

“Ironic”, something very new to me, was the next topic, presented by the HP Helion group. Ironic is a bare-metal provisioning tool which has recently been incubated into the Icehouse edition. The presentation was very descriptive, and the Ironic architecture was laid out well. I had a couple of questions, as it seemed very similar to Puppet Razor, and they were answered convincingly.


It was 6:00 PM, and the event was soon to wrap up for the day, but I still had enough energy to listen to more speakers. Last but not least was something I was keen to attend: “Docker as hypervisor driver for OpenStack Nova Compute”.

Docker is an open platform for developers and system administrators to build, ship and run distributed applications. It is something that is coming up fast and could be a threat to the virtualization world. In simple words: with a Fedora VM you carry the user-space libraries/binaries plus the kernel components, while with a Fedora Docker container you just need the user-space libraries (and not the kernel components). It is a concept very closely related to Linux namespaces.

Overall, the event was very informative. I met a couple of college students and cloud experts, and interacted with the vendors.


Installing Skype on CentOS 6.5

Getting Skype working on CentOS 6.5 has been a daunting job for a lot of system administrators. I often found system administrators posting this query in Facebook and other Linux groups, which led me to try out Skype on an available CentOS 6.5 machine. This guide should work for RHEL and the latest Fedora versions too.

Important Note published by CentOS Team:

“…Starting with Aug 4th 2014, no version of Skype older than 4.3 works due to the changes that were implemented in the authentication mechanism. Any attempt to use older versions lead to an error message similar to “Cannot contact server” ( the exact message varies depending on the version that was used )…”

1. Login to the CentOS 6.5 machine.


2. Update the EPEL repository:


3. Ensure that the latest epel-release package is installed:


4. Download Skype 4.3 from the Skype website and unpack it under the /usr/src folder:


5. Rename the unpacked directory to skype:


6. Create the below file:


7. Modify the permission of the file as instructed:


8. Provide the appropriate links:


9. Ensure that these 32-bit packages are installed. (Please note that Skype installation on CentOS and Ubuntu usually fails due to 32-bit package compatibility issues.)


10. Skype is now ready to run. You can run skype directly from the command line, or open it through the GUI if you are not connecting through PuTTY:

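Since the command screenshots no longer render, here is a hedged sketch of what the steps above run; the tarball name, download URL and 32-bit package list are assumptions, so adjust them to whatever you actually downloaded:

# 2-3. Update and verify the EPEL repository
yum install epel-release
rpm -q epel-release

# 4-5. Download Skype 4.3, unpack it under /usr/src and rename the directory
cd /usr/src
wget http://download.skype.com/linux/skype-4.3.0.37.tar.bz2
tar xvf skype-4.3.0.37.tar.bz2
mv skype-4.3.0.37 skype

# 9. 32-bit libraries Skype needs on an x86_64 box
yum install glibc.i686 alsa-lib.i686 libXv.i686 libXScrnSaver.i686 qt.i686 qt-x11.i686

# 10. Launch Skype
/usr/src/skype/skype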

Still finding difficulty getting Skype working? Post your query at http://collabnix.com/forum


Automating Oracle Weblogic Server installation through shell script

Automation always saves you considerable time. Especially when you have to follow the same steps for hundreds of machines, automated scripts and tools have always been a great weapon for system administrators. Today I spent considerable time setting up WebLogic Server 10.3.6 on my CentOS 7.0 machine through a shell script. This unattended script uses autoexpect rather than silent.xml or WLST as suggested by Oracle. Let me share the steps I followed to deploy the WebLogic Server:

1. Ensure you have the following software in place, downloaded from the Oracle website. We can't use wget for this, as downloading these pieces of software requires an Oracle login.

Links for Software Downloads:

a. Download jdk-7u67-linux-i586.gz from http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html
b. Download wls1036_dev.zip from http://www.oracle.com/technetwork/middleware/weblogic/downloads/wls-main-097127.html

Once you have downloaded the above software, create a directory called /softwaretmp/ and place the downloads in it:

#mkdir /softwaretmp/

#cd /softwaretmp/


2. Create an empty file called prepare.sh and paste the following shell script (shown below):

#!/bin/bash
echo "Checking if WebLogic Server is already running. If it is, stopping it and reinstalling from scratch"
pkill -9 java
pkill -9 Weblogic
rm -fr /u01/oracle/wlsdomains/base_domain
rm -fr /u01/oracle/fmw/wlserver_10.3
rm -fr /u01/jdk/jdk7
pkill -9 java
echo "Initializing the Installation"
groupadd orainstall
useradd -g orainstall oracle
mkdir -p /u01/jdk
cd /u01/jdk
tar -zxvf /softwaretmp/jdk-7u67-linux-i586.gz
ln -s jdk1.7.0_67 jdk7
mkdir -p /u01/oracle/fmw/wlserver_10.3
cp -rf /softwaretmp/wls1036_dev.zip /u01/oracle/fmw/wlserver_10.3/
echo "
MW_HOME=/u01/oracle/fmw/wlserver_10.3; export MW_HOME
JAVA_HOME=/u01/jdk/jdk7; export JAVA_HOME
PATH=\$JAVA_HOME/bin:\$PATH; export PATH
" >> ~/.bash_profile
cd ~
. ./.bash_profile
cd /u01/oracle/fmw/wlserver_10.3
unzip wls1036_dev.zip
./configure.sh
. $MW_HOME/wlserver/server/bin/setWLSEnv.sh
mkdir -p /u01/oracle/wlsdomains
cp -rf /softwaretmp/script.exp /u01/oracle/fmw/wlserver_10.3/wlserver/common/bin/
cd /u01/oracle/fmw/wlserver_10.3/wlserver/common/bin
./script.exp
echo "Expect script ran successfully"
cd /u01/oracle/wlsdomains/base_domain/
mkdir -p servers/AdminServer/security
mkdir -p servers/managedserver_1/security
cd servers/AdminServer/security
echo "username=weblogic
password=Oracle9ias
" >> boot.properties
cd /u01/oracle/wlsdomains/base_domain/servers/managedserver_1/security
echo "username=weblogic
password=Oracle9ias
" >> boot.properties
echo "WebLogic Server 10.3.6 configuration is all done!!"
echo "Starting the WebLogic Server"
cd /softwaretmp/
./startweblogic.sh
./startManagedServer.sh
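
Assuming all three scripts and both downloads sit under /softwaretmp, and that script.exp has already been recorded (step 4 below), a run looks like this sketch:

#chmod +x /softwaretmp/prepare.sh /softwaretmp/startweblogic.sh /softwaretmp/startManagedServer.sh

#/softwaretmp/prepare.sh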

3. You need to add two more scripts to the /softwaretmp directory:

Create a empty file called startweblogic.sh and paste the below content:

#!/usr/bin/expect -f
cd /u01/oracle/wlsdomains/base_domain/bin/
spawn ./startWebLogic.sh
expect “Enter username to boot WebLogic server: ”
send “weblogic\r”
expect “$ ”
expect “Enter password to boot WebLogic server: ”
send “Oracle9ias\r”
expect “$ ”
send “exit\r”

Also, create another empty file called startManagedServer.sh and paste the below lines:

#!/usr/bin/expect -f
cd /u01/oracle/wlsdomains/base_domain/bin/
spawn ./startManagedWebLogic.sh managedserver_1
expect "Enter username to boot WebLogic server: "
send "weblogic\r"
expect "Enter password to boot WebLogic server: "
send "Oracle9ias\r"
expect "$ "
send "exit\r"

Save the file.

4. Now comes the important step: creating the script.exp file. You can create script.exp through the following steps:

a. Ensure that the autoexpect software is installed:

#yum install autoexpect

b. Start the autoexpect tool:

#autoexpect -s

autoexpect will report that recording has started and that the session will be saved to script.exp.

c. Now run the following commands under /u01/oracle/fmw/wlserver_10.3/wlserver/common/bin:

#cd /u01/oracle/fmw/wlserver_10.3/wlserver/common/bin

#./config.sh

Follow the general steps, selecting the right options as per your infrastructure.

Once completed, ensure you run the following command:

#exit

autoexpect then stops recording.

Once you have completed the above steps, a script.exp file is created, which has to be copied to the /softwaretmp directory.

Still finding difficulty? Post your questions at http://collabnix.com/forum


How to integrate Redmine with Git?

Redmine, built on Ruby on Rails, is an impressive free and open source web-based project management tool. I have been using Trac for quite some time and find Redmine very similar.

A colleague of mine was having difficulty integrating Redmine with Git. I decided to help him from scratch, and it went flawlessly.

I had VMware Workstation running on my Inspiron. I installed CentOS 6.3, but these steps should work for CentOS 6.2 too. I followed the steps below:

Installing Pre-requisite Packages

# yum install openssl-devel zlib-devel gcc gcc-c++ automake autoconf readline-devel curl-devel expat-devel gettext-devel patch mysql mysql-server mysql-devel httpd httpd-devel apr-devel apr-util-devel libtool apr

Install Ruby on Rails

Download Ruby source code from http://ftp.ruby-lang.org/pub/ruby/1.9/ruby-1.9.3-p194.tar.gz

#wget http://ftp.ruby-lang.org/pub/ruby/1.9/ruby-1.9.3-p194.tar.gz

# tar xzvf ruby-1.9.3-p194.tar.gz

# cd ruby-1.9.3-p194

# ./configure --enable-shared

# make

# make install

# ruby -v

ruby 1.9.3p194 (2012-04-20 revision 35410) [x86_64-linux]

Install Ruby Gems

# wget http://production.cf.rubygems.org/rubygems/rubygems-1.8.24.tgz

# tar xvzf rubygems-1.8.24.tgz

# cd rubygems-1.8.24

# ruby setup.rb

# gem -v

1.8.24

# gem install rubygems-update

# update_rubygems

Install Rake

# gem install rake

Install Rails

# gem install rails

Install Passenger

#gem install passenger

Install Redmine

Download the redmine software from http://rubyforge.org/frs/download.php/76259/redmine-2.0.3.tar.gz

# cd /usr/local/share

# wget http://rubyforge.org/frs/download.php/76259/redmine-2.0.3.tar.gz

# tar xzvf redmine-2.0.3.tar.gz

# cd redmine-2.0.3

# ln -s /usr/local/share/redmine-2.0.3/public /home/code/public_html/redmine

Configure MySQL

Create the Redmine MySQL database:

# mysql -u root -p

mysql> create database redmine character set utf8;

mysql> create user 'redmine'@'localhost' identified by 'Pass123';

mysql> grant all privileges on redmine.* to 'redmine'@'localhost';

Configure database.yml:

# cd /usr/local/share/redmine-2.0.3

# vi config/database.yml

production:
  adapter: mysql
  database: redmine
  host: localhost
  username: redmine
  password: Pass123

Generate a session store secret:

# gem install -v=0.4.2 i18n

# gem install -v=0.8.3 rake

# gem install -v=1.0.1 rack

# rake generate_session_store

While running the last command (shown above), you might encounter error messages related to rmagick.

We can skip ImageMagick completely and execute the following command:

# bundle install --without development test rmagick

Setup permission:

# chown -R apache:apache /usr/local/share/redmine-2.0.3

# find /usr/local/share/redmine-2.0.3 -type d -exec chmod 755 {} \;

# find /usr/local/share/redmine-2.0.3 -type f -exec chmod 644 {} \;

Configuring Virtual Host

Your apache configuration for virtualhost should look like this:

<VirtualHost *:80>
    ServerName codebinder.com
    ServerAlias www.codebinder.com
    RailsBaseURI /
    RailsEnv production
    DocumentRoot /home/code/public_html/redmine/public
    <Directory /home/code/public_html/redmine/public>
        Options -MultiViews
    </Directory>
</VirtualHost>

Open http://codebinder.com and you will be able to see the redmine page successfully.

Installing Git

# yum install git git-core

Redmine User Guide

Open http://codebinder.com again to reach the login page.

For the first time, the credentials are admin/admin.

Once logged in, you will see the Redmine default page. Click on Administration.

Click on Settings option.

Choose Repositories.

If you have Subversion, Darcs, Mercurial, CVS, or Git installed, you will find its path enabled.

Since Git is installed on the Linux machine, the path /usr/bin/git will be displayed.

How to Create a New Project?

Click on New Project.

Let's create a new project.

Click on Create and Continue once you have entered all the needed entries.

You will be able to see the foo project information by clicking on the Overview tab.

We are going to import a Git repository for this project.

Under Settings > Repositories, click on New repository.

Create a repository called “foo”, as sketched below:
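
The exact commands were in a screenshot; a minimal sketch of creating that repository, using the path from the text below:

# mkdir -p /home/code/gitrepos/foo

# cd /home/code/gitrepos/foo

# git init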

So we have a Git repository created at /home/code/gitrepos/foo/.git, which we then register on the project's repository settings page in Redmine.

Hence, you can see that the Git repository has been created and integrated successfully with Redmine.

I hope I have laid out all the steps clearly.


Running Hadoop on Ubuntu 14.04 ( Multi-Node Cluster)

This is an introductory post on Hadoop for beginners who want step-by-step instructions for deploying Hadoop on the latest Ubuntu 14.04 box. Hadoop allows for the distributed processing of large data sets across clusters of computers, using the MapReduce programming model. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage.

HDFS is the distributed file system that ships with Hadoop. MapReduce tasks use HDFS to read and write data. An HDFS deployment includes a single Name Node and multiple Data Nodes. In this section, we will set up a Name Node and multiple Data Nodes.

Hadoop Architecture Design:

Machine IP     Type of Node   Hostname
192.168.1.5    Master Node    master.hadoopnode.com
192.168.1.6    Data Node 1    datanode1.hadoopnode.com
192.168.1.4    Data Node 2    datanode2.hadoopnode.com

Let’s talk about YARN..

In simple language, YARN is Hadoop's next-generation MapReduce, also called MapReduce v2. In short, it is a cluster management technology. YARN combines a central resource manager, which reconciles the way applications use Hadoop system resources, with node manager agents that monitor the processing operations of individual cluster nodes.

The fundamental idea of YARN is to split the two major functionalities of the Job Tracker, resource management and job scheduling/monitoring, into separate daemons. YARN splits the two major responsibilities of the Job Tracker/Task Tracker into separate entities:

  • a global Resource Manager
  • a per-application Application Master
  • a per-node slave Node Manager
  • a per-application Container running on a Node Manager

Putting it all together, these components make up the YARN layer of the cluster.


What are the pre-requisites?

1. Install three Ubuntu 14.04.1 VMs on VirtualBox. While installing, ensure that the OpenSSH server package is selected, which configures the SSH service automatically.


Ensure that the Bridged Adapter option is configured in VirtualBox. This ensures that all the nodes can communicate with each other.

Setting up Master Node

1. Login to master.hadoopnode.com as a normal user through PuTTY. The master node has the IP address 192.168.1.5. Ensure that the full FQDN is provided for this host.


I logged in as user1, which was created by default at installation time. We are soon going to create a dedicated user and group for Hadoop.


2. Open /etc/hosts file through vi editor and add the following entries:


You need to add the hostname and IP address of each node so that they can identify and ping each other by both hostname and IP address; see the sketch below.
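
Based on the architecture table above, the /etc/hosts entries would look like this sketch (the short aliases are my own assumption):

192.168.1.5    master.hadoopnode.com    master
192.168.1.6    datanode1.hadoopnode.com datanode1
192.168.1.4    datanode2.hadoopnode.com datanode2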

Setting up User and Group for Hadoop

3. Let's create a user for Hadoop. First create a group called hadoop, then add a new user called hduser to the newly created hadoop group, as sketched after the next step.


4. Ensure that the newly created hduser is added to the sudo group (shown below):

The above step is important and shouldn't be skipped.
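
With the screenshots gone, here is a hedged sketch of steps 3 and 4 using Ubuntu's adduser tooling:

sudo addgroup hadoop
sudo adduser --ingroup hadoop hduser
sudo adduser hduser sudo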

Enabling Password-less SSH

5. Make sure that hduser can SSH to its own account without a password.

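The key generation itself was in a screenshot; a standard sketch of what step 5 runs as hduser:

ssh-keygen -t rsa -P ""
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys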

  6. For the first time, try SSHing to localhost by running ssh hduser@localhost. It will ask for a password and add the host to the list of known hosts. Run the exit command and try to SSH again; this time it shouldn't ask for a password.

Now hduser can SSH to its own account without any password.

 Disabling IPv6

7. [OPTIONAL] It is recommended to disable IPv6, since Hadoop binds to 0.0.0.0 in various configurations. Follow the steps below to disable IPv6 on the master node.
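
The exact entries were in a screenshot; the commonly used sysctl settings are sketched below (append them to /etc/sysctl.conf):

net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1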


8. Reboot the machine so that the kernel parameters are applied correctly.


Remember that you might skip the IPv6 step if you just have a testing environment.

9. Re-SSH to the master node through PuTTY.

Configuring JAVA

10. Download the JDK from http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html


I downloaded jdk-7u71-linux-i586.tar.gz as per my machine architecture. If you are running an x86_64 machine, you will need to download the x64 tarball from the same link.

11. Create a directory called java under /usr/local through the mkdir utility.


12. Upload the Oracle JDK tarball into the java directory of the Ubuntu machine through WinSCP or whatever tool is available.

13. Unpack the compressed JDK software:


Once unpacked, you will see the listing of the extracted files.


14. Copy the unpacked Oracle JDK binaries into the /usr/local/java directory:


Verify that all the binaries are copied.


15. Set up the JAVA_HOME environment variable. Open /etc/profile through nano or the vi editor and add the lines sketched below at the end of the file.

16. Save the file.

17. Run the following command to point the system to the correct Oracle JDK location.


18. To make the JDK available for use, run the following command.


19. It is very important to run the command below to reload your system-wide PATH from /etc/profile.


20. You can also verify whether JAVA_HOME is working.

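Since the screenshots for steps 13 to 20 are gone, here is a hedged sketch of the whole sequence, assuming jdk-7u71-linux-i586.tar.gz unpacks to jdk1.7.0_71:

# steps 13-14: unpack the tarball inside /usr/local/java
cd /usr/local/java
sudo tar -zxvf jdk-7u71-linux-i586.tar.gz

# step 15: lines to append to /etc/profile
JAVA_HOME=/usr/local/java/jdk1.7.0_71
PATH=$PATH:$JAVA_HOME/bin
export JAVA_HOME PATH

# steps 17-18: register and select the JDK
sudo update-alternatives --install /usr/bin/java java /usr/local/java/jdk1.7.0_71/bin/java 1
sudo update-alternatives --set java /usr/local/java/jdk1.7.0_71/bin/java

# step 19: reload the system-wide PATH
source /etc/profile

# step 20: verify
echo $JAVA_HOME
java -version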

NOTE: We need to configure Java the same way on all the nodes.

21. Before configuring Hadoop, we need to make data node 1 and data node 2 ready. Let's configure them too.

Setting up Data Node 1

1. Login to one of the data nodes, say datanode1.hadoopnode.com, as a normal user through PuTTY. This machine has the IP address 192.168.1.6.


I logged in as user2, which was created by default at installation time. We are soon going to create the user and group for Hadoop.

2. Open the /etc/hosts file through the vi editor and add the same entries as on the master node:


NOTE: Follow the above step on Data Node 2 too.

3. Just as we created hduser and the hadoop group on the master node, follow the same steps on data node 1 and data node 2.


Ensure you don't miss the step allowing sudo access for hduser.


4. This is an important part of the data node configuration. We are going to configure passwordless SSH so that the master node can SSH to all data nodes without a password.

Note: Run the below step on Master Node only.

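The command was in a screenshot; a hedged sketch of step 4, run on the master node as hduser:

ssh-copy-id hduser@datanode1.hadoopnode.com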

Try logging in to the slave node from the master node; it should not ask for a password:


5. Follow steps 10 to 20 above to configure JAVA_HOME on this node too.

Setting up Data Node 2:

1. Login to data node 2.


2. Configure /etc/hosts just as we configured it for data node 1.


3. Configure the user and group for hadoop.


4. Again, this is an important step which IS TO BE RUN ON THE MASTER ONLY.


The above command enables passwordless SSH from the master to data node 2.

5. Follow steps 10 to 20 above to configure JAVA_HOME on this node too. Once you have configured both data nodes, verify the Java setup on each.


Configuring Hadoop: 

NOTE: The commands below are to be run on all the nodes, master and data nodes alike.

22. Download the Hadoop binaries from http://mirror.olnevhost.net/pub/apache/hadoop/core/hadoop-2.3.0/hadoop-2.3.0.tar.gz using the wget utility.


23. Unpack the hadoop binaries:


Running the above command extracts the binaries in place. You need to copy them to the /usr/local directory.

24. Unpack the hadoop tar directly into the /usr/local/hadoop-2.3.0 folder:


25. Create a symbolic link /usr/local/hadoop pointing to the hadoop directory:


26. Give ownership to hduser and the hadoop group so they can execute the hadoop binaries:

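Consolidated, steps 22 to 26 look like this sketch:

wget http://mirror.olnevhost.net/pub/apache/hadoop/core/hadoop-2.3.0/hadoop-2.3.0.tar.gz
sudo tar -zxvf hadoop-2.3.0.tar.gz -C /usr/local
sudo ln -s /usr/local/hadoop-2.3.0 /usr/local/hadoop
sudo chown -R hduser:hadoop /usr/local/hadoop-2.3.0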

27. Switch to hduser (included in the sketch below).

28. Open the .bashrc file in hduser's home directory and add the following entries at the end of the file:
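
The entries were in a screenshot; a commonly used set for Hadoop 2.x is sketched below, with step 27's switch included (paths assume the symlink created above):

su - hduser

# append to ~/.bashrc
export JAVA_HOME=/usr/local/java/jdk1.7.0_71
export HADOOP_INSTALL=/usr/local/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL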


29. Save the file. Run the bash command to make the environment variables effective.


30. Edit hadoop-env.sh to let Hadoop know where JAVA_HOME resides.
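
A sketch of the entry:

# in $HADOOP_INSTALL/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/local/java/jdk1.7.0_71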


31. Verify the Hadoop installation with the hadoop version command:


This shows that Hadoop is properly configured.

32. As the above Hadoop configuration is to be run on all the nodes, ensure that /usr/local/hadoop is the path where Hadoop resides on each of them. Follow the same steps on all the data nodes too; on data node 1, for example, you should see the same results.


Now we have master node and data node ready with the basic Hadoop installation.

PLEASE NOTE: In newer versions of Hadoop there is one slight additional step for the JAVA_HOME environment variable to work. Open the file hadoop-config.sh under /usr/local/hadoop/libexec and add the same JAVA_HOME entry there too.

Configuring Master Node:

33. Let us first configure the master node's files.

34. Ensure that you are logged in as hduser while running the commands below.


Create the required configuration files on the master node, as follows.

35. Open the hdfs-site.xml file and add its entries:

36. Open the file $HADOOP_INSTALL/etc/hadoop/core-site.xml and let Hadoop know where the master node (name node) resides.


Put the entries only inside the configuration tags, not outside them; both files are sketched below.
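
The exact property values were in screenshots; a minimal sketch of the two files, assuming the default HDFS port 9000 and a replication factor matching the two data nodes:

<!-- $HADOOP_INSTALL/etc/hadoop/core-site.xml -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master.hadoopnode.com:9000</value>
  </property>
</configuration>

<!-- $HADOOP_INSTALL/etc/hadoop/hdfs-site.xml -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>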

37. Format the HDFS filesystem on the master node:
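
The command itself (a sketch, run as hduser):

hdfs namenode -format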


It takes a few seconds and then the final result gets displayed.


38. Edit the file $HADOOP_INSTALL/etc/hadoop/slaves on the master node with all the data node entries:
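
Per the architecture table, the slaves file simply lists the data nodes:

datanode1.hadoopnode.com
datanode2.hadoopnode.com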


Configuring Data Nodes

39. Login to one of the data nodes (say datanode1) and create the same configuration files:


40. Once you make the entry, the file should look like the one on the master node.


41. Let the data node know where the master node (namenode) resides by editing the core-site.xml file.


Instead of the IP address above, you can use the hostname of the master node if you have correct entries under /etc/hosts.

42. Now open a master node session and run the commands below:

The start scripts SSH to the data nodes and start the respective services on them.


Ensure that the required services (name node, data node and YARN) are running, using the jps command:
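
A sketch of steps 42 and 43, using the stock Hadoop 2.x start scripts:

start-dfs.sh
start-yarn.sh
jps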

43. One can verify the running services on the master node with jps.


44. Ensure that the required services are running at the data node end too.


Also, verify on data node 2.


45. You can access the Hadoop Name Node details at http://<masternode>:50070.

Under this link, you can access the data nodes, logs for each namenode, a snapshot of events, and overall DFS details too.


The two data nodes are represented as Live Nodes.

Click on the Datanodes section at the top of the web UI to see the data node status.


You can view the secondary namenode through its web UI, by default http://<masternode>:50090.


You can see the status of data node 1 and data node 2 through their respective data node web UIs, by default http://datanode1.hadoopnode.com:50075 and http://datanode2.hadoopnode.com:50075.

Before I wrap up..

We will now look at basic HDFS shell commands, which form the basis for running MapReduce jobs. The Hadoop file system shell, usually referred to as the fs shell, is used to perform various file operations: copying files, changing permissions, viewing file contents, changing ownership, creating directories, and much more. A consolidated sketch of all the commands described below follows this list.


Listing the size of DFS:


Creating a directory in HDFS (very similar to the Unix command):


Listing the HDFS file system:


Copy a file from your local system to HDFS file system:


First create an empty file called alpha in some local folder and add some contents to it through an editor. Then use the -copyFromLocal option to copy it from the local file system to HDFS.

Copy a file from HDFS to local system:


To go the other way, first delete the alpha file from your local machine, then run the -copyToLocal option to copy it back from HDFS to your local machine.

Displaying the length of the file contained in a directory:


Displaying the stat information for an HDFS path:


You can always refer to HDFS man pages for detailed options for command line utilities for operation on HDFS.
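
The per-command screenshots are gone, so here is a hedged sketch of the operations described above; the alpha file comes from the text, /mydir is a placeholder of my own, and the flags are the standard Hadoop 2.x fs shell options:

hadoop fs -help                                 # list all fs shell options
hadoop fs -du -s /                              # size of the DFS
hadoop fs -mkdir /mydir                         # create a directory in HDFS
hadoop fs -ls /                                 # list the HDFS file system
hadoop fs -copyFromLocal alpha /mydir/alpha     # local file system -> HDFS
hadoop fs -copyToLocal /mydir/alpha alpha       # HDFS -> local file system
hadoop fs -du /mydir                            # length of files in a directory
hadoop fs -stat /mydir                          # stat information for an HDFS path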

In our next episode, we will cover the various ecosystem of Hadoop.
