Sunday, February 1, 2015

Autolab on VMware

I had been looking to set up a VMware lab for the longest time. I purchased a rack-mounted blade server, but it was too loud for the office, so I decided to buy a beefed-up laptop instead: an i7 with 16 GB of RAM. So far I have used it to set up an instance of Debian to study for Linux+, a small Rocks cluster:

https://wiki.rocksclusters.org/wiki/index.php/Main_Page

And a CentOS machine, all under VMware. But it was the VMware lab itself I wanted. I stumbled upon AutoLab, from LabGuides:

http://www.labguides.com/autolab

Simply put, AutoLab lets you set up a nifty little VMware network consisting of a router, a NAS, a Domain Controller, a vCenter Server, and three ESXi hosts.

Closest JPEG I could find, so it's missing an ESXi host.


The first step, of course, is to download the software and unzip it. This creates a somewhat confusing directory tree: there are four different versions of AutoLab, one for VMware Player, one for VMware Workstation, and two for ESXi hosts. I may try an ESXi host later, but for now this is the Workstation version. The download also makes provisions for upgrading from an earlier version of ESXi, and provides a number of directories I didn't use. I will be using 5.5 and am not interested in any of the other versions at this point. Later I may try an upgrade, since I need to be able to do that if I want to get certified.

The NAS needs to be built first. Once built, the directory structure looks like this:
 total 229720
drwxrwxrwx   2 nobody  wheel   512B Sep 20  2012 VIM_50/
drwxrwxrwx   2 nobody  wheel   512B Sep 20  2012 VIM_41/
drwxrwxrwx   2 nobody  wheel   512B Sep 20  2012 View51/
drwxrwxrwx   2 nobody  wheel   512B Sep 20  2012 View50/
drwxrwxrwx   2 nobody  wheel   512B Sep 20  2012 VeeamBR/
drwxrwxrwx   2 nobody  wheel   512B Sep 20  2012 Veeam1/
drwxrwxrwx   2 nobody  wheel   512B Sep 20  2012 ESXi50/
drwxrwxrwx   2 nobody  wheel   512B Sep 20  2012 ESXi41/
drwxrwxrwx   2 nobody  wheel   512B Sep 20  2012 ESX41/
drwxrwxrwx   2 nobody  wheel   512B Sep 22  2012 VIM_51/
drwxr-xr-x   6 root    wheel   512B Oct  7  2012 ../
drwxrwxrwx   2 nobody  wheel   512B Nov 11  2012 vCD_15/
drwxrwxrwx   2 nobody  wheel   512B Nov 11  2012 vCD_51/
drwxrwxrwx   2 nobody  wheel   512B Jan  2  2013 ESXi51/
drwxrwxrwx  17 nobody  wheel   1.0K Oct  4  2013 Automate/
drwxrwxrwx   2 nobody  wheel   512B Oct 12  2013 View52/
drwxrwxrwx   2 nobody  wheel   512B Jan 15  2014 View53/
-rw-rw-rw-   1 nobody  wheel   2.8K Sep  2 00:51 ChangeLog.txt
drwxrwxrwx   2 nobody  wheel   512B Sep  2 11:44 View60/
drwxrwxrwx   3 nobody  wheel   512B Jan 31 08:53 VMTools/
-rw-rw-rw-   1 nobody  wheel   176M Jan 31 09:04 SQLManagementStudio_x64_ENU.exe
drwxrwxrwx   4 nobody  wheel   2.5K Jan 31 09:18 ESXi55/
-rw-rw-rw-   1 nobody  wheel    48M Jan 31 10:05 VMware-vSphere-CLI-5.1.0-780721.exe
drwxrwxrwx  22 root    wheel   512B Jan 31 10:06 ./
drwxrwxrwx  15 nobody  wheel   512B Jan 31 10:49 VIM_55/

Sunday, January 25, 2015

Software Depot

http://modules.sourceforge.net

Prior to working here, if I installed software built from source and its binaries and libraries landed in a directory outside the users' PATH, I had to change the PATH of every user who wanted to use the software. Upon arriving at this job, I was exposed to 'Modules' for the first time; see the link above for a detailed description. Basically, Modules allows a user to change his or her PATH on the fly. Let's say a user needs gcc 5.0.0, but the version of gcc in their PATH is 4.1.2. The way I have set up Modules, the user executes:

module load gcc-5.0.0/gcc-5.0.0

Now if the user types:

which gcc

They get:

/software/depot/gcc-rhel6/bin/gcc

And their new PATH is:

/software/depot/gcc-rhel6/bin

And their LD_LIBRARY_PATH is:

/software/depot/gcc-rhel6/lib

/software/depot is an NFS mount from a NetApp filer. I have built lots of software for RHEL5 and RHEL6 machines, both Intel and AMD, and by using '--prefix' installed it into this NFS mount. Depending on the machine a person is logged into, they see different modules. I have even installed a small number of packages that our CRAY users access: I simply mount the appropriate subdirectory on the CRAY and create module files to point to it.
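For the gcc example above, the modulefile behind 'module load gcc-5.0.0/gcc-5.0.0' might look something like this. This is a sketch, not my actual file; the prefix follows the /software/depot layout described above, and everything else is illustrative:

```tcl
#%Module1.0
## Hypothetical modulefile: gcc-5.0.0/gcc-5.0.0
## The prefix follows the /software/depot layout described in the post.
proc ModulesHelp { } {
    puts stderr "Adds gcc 5.0.0 from the software depot to your environment."
}
module-whatis "gcc 5.0.0 (software depot build)"

set prefix /software/depot/gcc-rhel6

prepend-path PATH            $prefix/bin
prepend-path LD_LIBRARY_PATH $prefix/lib
prepend-path MANPATH         $prefix/share/man
```

Because the paths are prepended, the depot gcc shadows the system's 4.1.2 for that shell only, and 'module unload' puts everything back.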

Tuesday, December 9, 2014

Using Shared Libraries

After viewing a webinar on shared libraries, I thought I would post the examples. Many times we are asked to build software and we don't know what goes where; hopefully this will help clear things up. I will use two small C programs to demonstrate the idea. First, the program that will be used to build the library, displayuid.c:

/* This will be the .so file: displayuid.c */
#include <stdio.h>
#include <unistd.h>

void display_uid () {
  int real = getuid();
  int euid = geteuid();
  printf("The REAL UID =: %d\n", real);
  printf("The EFFECTIVE UID =: %d\n", euid);
}
Next, the main program itself, standard.c:
/* This is the main program: standard.c */
#include <stdio.h>

void display_uid(void);   /* provided by libdisplayuid.so */

int main () {
  printf("This is from the main program\n");
  display_uid();
  return 0;
}
First compile the file that will become the library:
$ gcc -c -fPIC displayuid.c
The code is compiled with -fPIC to make it position-independent (relocatable). The resulting output is a '.o' file:
displayuid.o
Next create the actual shared library:
$ gcc -shared -o libdisplayuid.so displayuid.o
libdisplayuid.so is the name of the new shared library; displayuid.o is the object file created earlier.
Now, as root, place the shared library in a location available system-wide:
# mkdir /usr/local/lib/tup
# cp libdisplayuid.so /usr/local/lib/tup
# chmod -R 755 /usr/local/lib/tup
Continuing as root, make the library known system-wide:
# echo "/usr/local/lib/tup" > /etc/ld.so.conf.d/tup.conf
# ldconfig
(rebuilds the cache)
# ldconfig -p |grep libdisplay
(checks that it is there) 
Now compile the test program:
$ gcc -L/usr/local/lib/tup standard.c \
-o standard -ldisplayuid
Finally run the program and confirm the linkage:
$ ./standard
$ ldd standard

Sunday, January 20, 2013

Reattaching ZFS filesystem

I had let my external storage sit unattached for quite a while. I had blown away the system that had ZFS support on it, so I had to start over. The filesystem was still there; I just didn't have anything to mount it with. I had to build a Rocks cluster:

http://www.rocksclusters.org/wordpress/

for another project. Luckily, Rocks is based on CentOS 6.3. After getting the frontend of the cluster up, I installed the ZFS software. These are the steps I took to get the filesystem back:

 [root@cluster ~]# zpool  import
  pool: NAS
    id: 8081959002011209582
 state: ONLINE
status: The pool was last accessed by another system.
action: The pool can be imported using its name or numeric identifier and
    the '-f' flag.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

    NAS         ONLINE
      raidz2-0  ONLINE
        sdb     ONLINE
        sdc     ONLINE
        sdd     ONLINE
        sde     ONLINE


[root@cluster ~]# zpool import NAS
cannot import 'NAS': pool may be in use from other system, it was last accessed by localhost.localdomain (hostid: 0x7f0100) on Sun Jan  1 18:50:23

use '-f' to import anyway

[root@cluster ~]# zpool import -f  NAS

[root@cluster ~]# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
NAS   77.8G  2.60T  77.8G  /NAS
[root@cluster ~]# ls -larth /NAS
total 13K
-rw-r--r--  1 root root     0 Nov 16  2011 junk
drwxr-xr-x  7 root root     9 Nov 16  2011 home
drwxr-xr-x 13  501 jboss   13 Nov 19  2011 tools
drwxr-xr-x  5 root root     6 Dec  1  2011 .
drwxr-xr-x  3 root root     3 Dec  1  2011 old-pc
drwxr-xr-x 29 root root  4.0K Jan 20 17:07 ..
[root@cluster ~]#

Built Frontend for Cluster

I have a 6-terabyte enclosure: four 1.5 TB drives. I'm running raidz2, so the usable storage is less than 3 TB, but I am more concerned about the data than the space. I had let the RAID sit for a long time, but having started back up on my Linux From Scratch project, I thought I needed to back it up. I bought an additional 1.5 TB external drive and backed my work up, but disks are disks. I had been backing up to a 64 GB flash drive and it died, so it was time to break out the real disks. In the meantime I had a need to bring up a Rocks cluster and start experimenting again, so after a couple of false starts I was able to build the frontend for a cluster and reattach my 'RAID'.



login as: gramos
Using keyboard-interactive authentication.
Password:
Rocks 6.1 (Emerald Boa)
Profile built 20:08 20-Jan-2013

Kickstarted 15:38 20-Jan-2013
[gramos@cluster ~]$
[gramos@cluster ~]$
[gramos@cluster ~]$ df -h /NAS
Filesystem            Size  Used Avail Use% Mounted on
NAS                   2.7T   78G  2.6T   3% /NAS
[gramos@cluster ~]$

Thursday, December 27, 2012

Recovering a password on a ZFS '/' filesystem

Somehow the password file got clobbered on a workstation. It was pretty easy to recover a password file on a UFS filesystem, but it is another thing entirely if the root partition is ZFS. These are the steps I took to recover the /etc/shadow file:

Recovering a root password on a zfs filesystem.

  1. Boot the machine into single user:
ok> boot cdrom -s
  2. Find out what pools are available to import. In this case we are looking for rpool:
# zpool import
  3. Since rpool is available, we need to import it:
# zpool import rpool
The system will report messages similar to this:

cannot mount '/export': failed to create mountpoint

cannot mount '/export/home': failed to create mountpoint

cannot mount '/rpool': failed to create mountpoint

Although the ZFS file systems in the pool cannot be mounted, they exist.

  4. 'zfs list' shows the datasets in the pool:
# zfs list
NAME USED AVAIL REFER MOUNTPOINT

rpool 12.5G 54.4G 97K /rpool

rpool/ROOT 6.97G 54.4G 21K legacy

rpool/ROOT/s10s_u10wos_17b 6.97G 54.4G 6.97G /

rpool/dump 1.00G 54.4G 1.00G -

rpool/export 2.53G 54.4G 23.5K /export

rpool/export/home 2.53G 54.4G 2.53G /export/home

rpool/swap 2G 56.4G 16K -


  5. The dataset we are interested in is rpool/ROOT/s10s_u10wos_17b:


# zfs get mountpoint rpool/ROOT/s10s_u10wos_17b

NAME PROPERTY VALUE SOURCE

rpool/ROOT/s10s_u10wos_17b mounted no -

  6. Change the mountpoint of rpool/ROOT/s10s_u10wos_17b:
# zfs set mountpoint=/mnt rpool/ROOT/s10s_u10wos_17b

  7. Mount rpool/ROOT/s10s_u10wos_17b:

# zfs mount rpool/ROOT/s10s_u10wos_17b

  8. Change the password for root:
# cd /mnt/etc

What I did at this point was clear the second field (the password hash) for root in both /mnt/etc/passwd and /mnt/etc/shadow, so root could log in with no password.

  9. Unmount the filesystem (or just reboot):

# cd /

# zfs umount rpool/ROOT/s10s_u10wos_17b

  10. Reset the mountpoint back to /:

# zfs set mountpoint=/ rpool/ROOT/s10s_u10wos_17b


  11. Reboot the system, and you can log in as root again:

# reboot
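Step 8 can also be done with sed instead of an editor. Here is a sketch run against a scratch copy of a shadow file; the hash is made up, and the '-i' flag is GNU sed (on Solaris, redirect to a temp file and move it back instead):

```shell
# Sketch of step 8: blank root's password hash (the second ':'-separated
# field) in a shadow file. Run against a scratch copy; the hash is made up.
shadow=$(mktemp)
printf 'root:$1$AbCdEfGh$MadeUpHash123:15000:0:99999:7:::\n' > "$shadow"

# Empty the second field for the root entry only.
sed -i 's/^root:[^:]*:/root::/' "$shadow"

cat "$shadow"   # root::15000:0:99999:7:::
```

With the hash field empty, root can log in with no password after the reboot; set a real password immediately.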


Recovering a mirrored disk drive

I reloaded a Solaris machine with the latest version of Solaris 10. Prior to reloading it, I had two drives: one with '/' and '/usr' on it, while the other had two other partitions. The reload broke the first mirror, but I wanted to recover the mirrors on the second drive. These are the steps I took.


bash-3.2# metastat
d31: Mirror
Submirror 0: d41
State: Needs maintenance
Submirror 1: d51
State: Needs maintenance
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 60516672 blocks (28 GB)

d41: Submirror of d31
State: Needs maintenance
Invoke: metasync d31
Size: 60516672 blocks (28 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t2d0s1 0 No Okay Yes


d51: Submirror of d31
State: Needs maintenance
Invoke: metasync d31
Size: 60516672 blocks (28 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t3d0s1 0 No Okay Yes


d5: Mirror
Submirror 0: d15
State: Needs maintenance
Submirror 1: d25
State: Needs maintenance
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 30721344 blocks (14 GB)

d15: Submirror of d5
State: Needs maintenance
Invoke: metareplace d5 c1t0d0s5
Size: 30721344 blocks (14 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t0d0s5 0 No Maintenance Yes


d25: Submirror of d5
State: Needs maintenance
Invoke: metasync d5
Size: 30721344 blocks (14 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t1d0s5 0 No Okay Yes


d4: Mirror
Submirror 0: d14
State: Needs maintenance
Submirror 1: d24
State: Needs maintenance
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 31840704 blocks (15 GB)

d14: Submirror of d4
State: Needs maintenance
Invoke: metareplace d4 c1t0d0s4
Size: 31840704 blocks (15 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t0d0s4 0 No Maintenance Yes


d24: Submirror of d4
State: Needs maintenance
Invoke: metasync d4
Size: 31840704 blocks (15 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t1d0s4 0 No Okay Yes


d1: Mirror
Submirror 0: d11
State: Needs maintenance
Submirror 1: d21
State: Needs maintenance
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 30721344 blocks (14 GB)

d11: Submirror of d1
State: Needs maintenance
Invoke: metareplace d1 c1t0d0s1
Size: 30721344 blocks (14 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t0d0s1 0 No Maintenance Yes


d21: Submirror of d1
State: Needs maintenance
Invoke: metasync d1
Size: 30721344 blocks (14 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t1d0s1 0 No Okay Yes


d0: Mirror
Submirror 0: d10
State: Needs maintenance
Submirror 1: d20
State: Needs maintenance
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 16393536 blocks (7.8 GB)

d10: Submirror of d0
State: Needs maintenance
Invoke: metasync d0
Size: 16393536 blocks (7.8 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t0d0s0 0 No Okay Yes


d20: Submirror of d0
State: Needs maintenance
Invoke: metasync d0
Size: 16393536 blocks (7.8 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t1d0s0 0 No Okay Yes


d30: Mirror
Submirror 0: d40
State: Needs maintenance
Submirror 1: d50
State: Needs maintenance
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 49160256 blocks (23 GB)

d40: Submirror of d30
State: Needs maintenance
Invoke: metasync d30
Size: 49160256 blocks (23 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t2d0s0 0 No Okay Yes


d50: Submirror of d30
State: Needs maintenance
Invoke: metasync d30
Size: 49160256 blocks (23 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t3d0s0 0 No Okay Yes


Device Relocation Information:
Device Reloc Device ID
c1t3d0 Yes id1,sd@n500000e016c7bfc0
c1t2d0 Yes id1,sd@n500000e016bed170
c1t1d0 Yes id1,sd@n5000c50005ed9edf
c1t0d0 Yes id1,sd@n500000e016beef40
bash-3.2#
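metastat spells out the recovery commands in its 'Invoke:' lines. Collecting them from the output above gives the following, to be run as root on the Solaris box (this is just the output above gathered in one place, not a script I ran verbatim):

```shell
# Resync the mirrors whose submirrors are merely out of date
# (from the 'Invoke: metasync ...' lines above):
metasync d31
metasync d30
metasync d5
metasync d4
metasync d1
metasync d0

# Replace the slices metastat marked 'Maintenance'
# (from the 'Invoke: metareplace ...' lines above):
metareplace d5 c1t0d0s5
metareplace d4 c1t0d0s4
metareplace d1 c1t0d0s1
```

Re-run metastat afterward and wait for the resyncs to finish before trusting the mirrors again.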