
Thursday, December 27, 2012

Recovering a password on a ZFS '/' filesystem

Somehow the password file got clobbered on a workstation. It was pretty easy to recover a password file on a UFS filesystem, but it is another matter when the root partition is ZFS. These are the steps I took to recover the /etc/shadow file:

Recovering a root password on a ZFS filesystem.

  1. Boot the machine into single user mode:
ok> boot cdrom -s
  2. Find out what pools are available to import. In this case we are looking for rpool:
# zpool import
  3. Since rpool is available, import it:
# zpool import rpool
The system will report messages similar to this:

cannot mount '/export': failed to create mountpoint

cannot mount '/export/home': failed to create mountpoint

cannot mount '/rpool': failed to create mountpoint

Although the ZFS file systems in the pool cannot be mounted, they exist.

  4. Run zfs list to see the file systems in the pool:
# zfs list
NAME                         USED  AVAIL  REFER  MOUNTPOINT
rpool                       12.5G  54.4G    97K  /rpool
rpool/ROOT                  6.97G  54.4G    21K  legacy
rpool/ROOT/s10s_u10wos_17b  6.97G  54.4G  6.97G  /
rpool/dump                  1.00G  54.4G  1.00G  -
rpool/export                2.53G  54.4G  23.5K  /export
rpool/export/home           2.53G  54.4G  2.53G  /export/home
rpool/swap                     2G  56.4G    16K  -


  5. The file system we are interested in is rpool/ROOT/s10s_u10wos_17b. Check its mountpoint:


# zfs get mountpoint rpool/ROOT/s10s_u10wos_17b

NAME                        PROPERTY    VALUE  SOURCE
rpool/ROOT/s10s_u10wos_17b  mountpoint  /      local

  6. Change the mountpoint of rpool/ROOT/s10s_u10wos_17b:
# zfs set mountpoint=/mnt rpool/ROOT/s10s_u10wos_17b

  7. Mount rpool/ROOT/s10s_u10wos_17b:

# zfs mount rpool/ROOT/s10s_u10wos_17b

  8. Change the password for root:
# cd /mnt/etc

From here, clear the second (encrypted password) field of root's entry in both the passwd and shadow files, so that root has no password at the next boot.
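Clearing the encrypted password field can also be done non-interactively. Here is a sketch with sed, run against a stand-in shadow file so the transformation is visible (the real file in this procedure is /mnt/etc/shadow, and FAKEHASH stands for the old encrypted password):

```shell
# Stand-in for /mnt/etc/shadow; FAKEHASH represents the old encrypted password.
printf 'root:FAKEHASH:15000::::::\n' > shadow
# Empty the second (encrypted password) field of root's entry so root
# has no password at the next boot.
sed 's/^root:[^:]*:/root::/' shadow > shadow.new && mv shadow.new shadow
cat shadow
```

Back up the real shadow file before editing it; a mangled shadow file can leave the system unbootable.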

  9. Unmount the filesystem, or just reboot.

# cd /

# zfs umount rpool/ROOT/s10s_u10wos_17b

  10. Reset the mountpoint back to /:

# zfs set mountpoint=/ rpool/ROOT/s10s_u10wos_17b


  11. Reboot the system; you can then log in as root again.

# reboot


Recovering a mirrored disk drive

I reloaded a Solaris machine with the latest version of Solaris 10. Prior to reloading it, I had two drives: one with '/' and '/usr' on it, and the other with two other partitions. The reload broke the mirrors on the first drive, but I wanted to recover the mirrors on the second drive. These are the steps I took.


bash-3.2# metastat
d31: Mirror
Submirror 0: d41
State: Needs maintenance
Submirror 1: d51
State: Needs maintenance
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 60516672 blocks (28 GB)

d41: Submirror of d31
State: Needs maintenance
Invoke: metasync d31
Size: 60516672 blocks (28 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t2d0s1 0 No Okay Yes


d51: Submirror of d31
State: Needs maintenance
Invoke: metasync d31
Size: 60516672 blocks (28 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t3d0s1 0 No Okay Yes


d5: Mirror
Submirror 0: d15
State: Needs maintenance
Submirror 1: d25
State: Needs maintenance
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 30721344 blocks (14 GB)

d15: Submirror of d5
State: Needs maintenance
Invoke: metareplace d5 c1t0d0s5
Size: 30721344 blocks (14 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t0d0s5 0 No Maintenance Yes


d25: Submirror of d5
State: Needs maintenance
Invoke: metasync d5
Size: 30721344 blocks (14 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t1d0s5 0 No Okay Yes


d4: Mirror
Submirror 0: d14
State: Needs maintenance
Submirror 1: d24
State: Needs maintenance
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 31840704 blocks (15 GB)

d14: Submirror of d4
State: Needs maintenance
Invoke: metareplace d4 c1t0d0s4
Size: 31840704 blocks (15 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t0d0s4 0 No Maintenance Yes


d24: Submirror of d4
State: Needs maintenance
Invoke: metasync d4
Size: 31840704 blocks (15 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t1d0s4 0 No Okay Yes


d1: Mirror
Submirror 0: d11
State: Needs maintenance
Submirror 1: d21
State: Needs maintenance
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 30721344 blocks (14 GB)

d11: Submirror of d1
State: Needs maintenance
Invoke: metareplace d1 c1t0d0s1
Size: 30721344 blocks (14 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t0d0s1 0 No Maintenance Yes


d21: Submirror of d1
State: Needs maintenance
Invoke: metasync d1
Size: 30721344 blocks (14 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t1d0s1 0 No Okay Yes


d0: Mirror
Submirror 0: d10
State: Needs maintenance
Submirror 1: d20
State: Needs maintenance
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 16393536 blocks (7.8 GB)

d10: Submirror of d0
State: Needs maintenance
Invoke: metasync d0
Size: 16393536 blocks (7.8 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t0d0s0 0 No Okay Yes


d20: Submirror of d0
State: Needs maintenance
Invoke: metasync d0
Size: 16393536 blocks (7.8 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t1d0s0 0 No Okay Yes


d30: Mirror
Submirror 0: d40
State: Needs maintenance
Submirror 1: d50
State: Needs maintenance
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 49160256 blocks (23 GB)

d40: Submirror of d30
State: Needs maintenance
Invoke: metasync d30
Size: 49160256 blocks (23 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t2d0s0 0 No Okay Yes


d50: Submirror of d30
State: Needs maintenance
Invoke: metasync d30
Size: 49160256 blocks (23 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t3d0s0 0 No Okay Yes


Device Relocation Information:
Device Reloc Device ID
c1t3d0 Yes id1,sd@n500000e016c7bfc0
c1t2d0 Yes id1,sd@n500000e016bed170
c1t1d0 Yes id1,sd@n5000c50005ed9edf
c1t0d0 Yes id1,sd@n500000e016beef40
bash-3.2#
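metastat names the recovery action for each broken submirror on its "Invoke:" lines, so one way to collect the commands to run is to pull them straight out of a saved copy of that output. A sketch (metastat.out here stands in for the listing above; the extracted commands are then run by hand, metareplace before metasync where both appear):

```shell
# A few lines saved from the metastat run above, as a sample input.
cat > metastat.out <<'EOF'
d41: Submirror of d31
State: Needs maintenance
Invoke: metasync d31
d15: Submirror of d5
State: Needs maintenance
Invoke: metareplace d5 c1t0d0s5
EOF
# Pull out the unique recovery commands metastat suggests.
awk '/Invoke:/ { sub(/.*Invoke: */, ""); print }' metastat.out | sort -u
```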

Thursday, October 11, 2012

Changing Passwords on Multiple Machines



In the past, when it came to password changes for all users, we had to log into each machine and manually change the password for each user. Not only was this time consuming, it was subject to human error such as mistyped passwords, which in turn could cause an account to be locked out. I have used NIS before, but that was deemed a security problem.

I found a small expect script that, when run and supplied a username and password, would in turn run the passwd program to set a new password for that user. That was the first step, but it needed to be surrounded by a lot more code in order to change passwords for all of the users on all of the systems.

The first issue was to be able to go through the entire password file and generate a new encrypted password for each user. It was decided to come up with a master passwd file that contained all of the users for any machine in the system. That is, some users had accounts on all of the machines, while other users might only have an account on one or two. There is one system that contains the entire list of users; let's call it MasterServer. The idea was to take the password file from MasterServer and strip out its first column, which contains all of the usernames. Next, generate the plain-text password to be associated with each user; in other words, the new password the user will type in, not the encrypted password. The plain-text passwords were generated from the following website:
To use it, set the password length to 14 characters, set the password format to "base64", set the number of passwords to 88, and click generate. The generated output is then paired with the column-one entries from the passwd file.
generate-file is a script that merges the usernames with the plain-text passwords in the following format:
username:ascii-password
It must not generate new passwords for users that are locked, for daemons that don't use passwords, or for the user 'root'. In the case of the root accounts, each machine must have its own unique root password. The output of the generate-file script is password-file, which is fed into the password-change-program.
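A sketch of what generate-file does, using awk. The input file names (passwd, shadow, wordlist) and the locked-account markers are my assumptions, and the sample entries below are stand-ins for the real MasterServer files:

```shell
# Hypothetical sketch of generate-file. Inputs (names assumed):
#   passwd   - master passwd file from MasterServer
#   shadow   - matching shadow file, used to spot locked/no-password accounts
#   wordlist - one generated plain-text password per line
# Output: password-file, lines of username:plain-text-password.
cat > passwd <<'EOF'
root:x:0:0::/root:/bin/sh
daemon:x:1:1::/:
alice:x:100:10::/export/home/alice:/bin/sh
EOF
cat > shadow <<'EOF'
root:FAKEHASH:15000::::::
daemon:NP:6445::::::
alice:FAKEHASH:15000::::::
EOF
printf 'pw-for-first-user\n' > wordlist
awk -F: 'NR==FNR { pw[$1] = $2; next }        # first pass reads shadow
         $1 == "root"               { next }  # root is handled per machine
         pw[$1] ~ /^(NP|\*LK\*|\*)/ { next }  # locked or no-password accounts
         { if ((getline p < "wordlist") > 0) print $1 ":" p }' \
    shadow passwd > password-file
cat password-file
```

Here only alice survives the filters, so password-file ends up with a single username:password line.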

The heart of the password-change-program is a small expect script; if expect is not installed, it must be installed along with its dependencies. For a Solaris 10 installation, these packages were installed. If a package's name is indented, it is a requirement of the package above it:
expect-5.45-sol10-sparc-local

coreutils-8.11-sol10-sparc-local

tcl-8.5.10-sol10-sparc-local

        tk-8.5.10-sol10-sparc-local

                render-0.8-sol10-sparc-local

                xrender-0.8.3-sol10-sparc-local

                zlib-1.2.5-sol10-sparc-local

        gcc-3.4.6-sol10-sparc-local (and libgcc)

        libiconv-1.14-sol10-sparc-local

        libintl-3.4.0-sol10-sparc-local

gmp-4.2.1-sol10-sparc-local


All of these packages were downloaded from www.sunfreeware.com. That site is no longer around; it has been replaced by unixpackages.com, which charges for downloads. You can always download the sources and compile them yourself.
The password-change-program takes password-file as its input and uses the system's passwd program to generate the encrypted passwords for all users. It is important to note that while the program is running, anyone attempting to log into that machine will have mixed results: the encrypted field in the /etc/shadow file is being manipulated, and depending on where the password-change-program is in the process, a given user's account may or may not have been updated yet. A user whose new password has already been set, but who has not yet been told the new password, will be denied access if he or she tries to log in with the old one. Fortunately, the program is run on a management system that is primarily used by the system administrators to manage other machines, and users were notified that the password change was in progress. Once the password-change-program completes, one has a master list of encrypted passwords and new expiration dates.
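The expect core can be sketched as follows. This is my reconstruction, not the original script: the helper name chpass.exp is mine, the prompt strings are what Solaris 10's passwd prints and may need adjusting elsewhere, and actually running it requires root. It is written to disk here so the contents are visible:

```shell
# Hypothetical sketch of the expect helper inside password-change-program.
cat > chpass.exp <<'EOF'
#!/usr/local/bin/expect -f
# usage: chpass.exp username new-password   (must be run as root)
set user [lindex $argv 0]
set pass [lindex $argv 1]
spawn passwd $user
expect "New Password:"          { send "$pass\r" }
expect "Re-enter new Password:" { send "$pass\r" }
expect eof
EOF
chmod +x chpass.exp
```

The surrounding program then just loops over password-file, calling the helper once per username:password pair.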
The final piece of the puzzle is a script called pullshadow. It must be pointed out that this entire procedure must be executed by a user that has access to all of the machines whose passwords are being updated. Not only does that user need access, but also the ability to log into all of the remote machines without a password. This is accomplished using ssh with public-key authentication. To create the passwordless access, if it is not already set up, perform the following steps:
Assume the key is being created for user everywhere-user, who needs remote access to remote-machine.
ssh-keygen -t rsa
That will generate a key pair for everywhere-user under .ssh in the user's home directory.
Now append the public key to that user's authorized_keys file on the remote machine:

cat ~everywhere-user/.ssh/id_rsa.pub | \
ssh everywhere-user@remote-machine  "cat >> \
~everywhere-user/.ssh/authorized_keys"

The user will be prompted for a password this one time; after the initial setup, a password will no longer be required. The pullshadow script takes advantage of this setup to retrieve the shadow file from each remote machine and update it. Which machines are updated is defined by a file named hosts, located in the current working directory. This hosts file is not to be confused with /etc/hosts and has no bearing on that file; it contains only the names of the remote hosts. pullshadow copies each remote /etc/shadow file to the local machine, updates it, and then copies it back. All users that had a password and were not locked out are updated, and each root account on each remote machine gets its own unique password.
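pullshadow's outer loop can be sketched like this. It is a dry run that echoes the transfers instead of performing them, the host names are hypothetical, and the merge step (applying the new encrypted entries to each fetched shadow file) is left as a comment:

```shell
# Sample hosts file (hypothetical machine names, one per line).
printf 'wsta\nwstb\n' > hosts
# Dry-run sketch of pullshadow: echo each transfer instead of running it.
while read -r host; do
    echo "scp everywhere-user@$host:/etc/shadow shadow.$host"
    # ...merge the new encrypted entries into shadow.$host here...
    echo "scp shadow.$host everywhere-user@$host:/etc/shadow"
done < hosts > pullshadow.plan
cat pullshadow.plan
```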


Tuesday, October 9, 2012

Recovering a file from a snapshot

We recently put together a little test bed of six workstations and three servers. All of this equipment is pretty old, but it serves our purpose for testing things. I created a ZFS file system called home. Within home, I added a single file called junk which contained a single sentence. I then created a snapshot of home, deleted the file, and finally restored it from the snapshot:



cd /home

vi junk

zfs snapshot home@tuesday3

cd /home

rm junk

cd /home

cd .zfs

cd snapshot

cd tuesday3

cp junk /home
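The cd steps above can be collapsed, since every snapshot is exposed read-only under the hidden .zfs/snapshot directory at the filesystem's root, and the restore is then a single cp. A small sketch; the helper name is mine, and it is demonstrated here against an ordinary directory tree standing in for the mounted filesystem:

```shell
# Restore one file from a ZFS snapshot via the hidden .zfs directory.
restore_from_snap() {
    fsroot=$1 snap=$2 file=$3
    cp "$fsroot/.zfs/snapshot/$snap/$file" "$fsroot/$file"
}
# Stand-in directory tree playing the role of /home and its snapshot:
mkdir -p demo/.zfs/snapshot/tuesday3
echo 'a single sentence' > demo/.zfs/snapshot/tuesday3/junk
restore_from_snap demo tuesday3 junk
cat demo/junk
```

On a real system this amounts to: cp /home/.zfs/snapshot/tuesday3/junk /home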