Support #14309

Closed

rmlh-1: linux softraid (md) mirror raid level=1 /dev/md9

Added by Ernad Husremović over 16 years ago. Updated about 16 years ago.

Status:
Closed
Priority:
Normal
Assignee:
Category:
-
Start date:
20.05.2008
Due date:
% Done:

90%

Estimated time:

Description

Create a RAID1 array (mirror + hot spare) and create the LVM volume group rmlhvg2 on it.

Solution: a local-premount script
  • /etc/initramfs-tools/scripts/local-premount/lvm_rmlhvg2

Related tickets: 3 (0 open, 3 closed)

Related to ubuntu - Feature #14239: rmlh-1: linux softraid (md) raid level=1 (Closed, Ernad Husremović, 12.05.2008)

Related to ubuntu - Support #14310: rmlh-1: how to change root (Closed, Ernad Husremović, 20.05.2008)

Related to voip - Support #14437: test of the rmlh server for offices, temporary replacement for ifold (Closed, Ernad Husremović, 03.06.2008)
Actions #1

Updated by Ernad Husremović over 16 years ago

Created Linux raid autodetect partitions on /dev/sdb, sdc and sdd:

root@rmlh-1:~# fdisk /dev/sdb

The number of cylinders for this disk is set to 48641.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): p

Disk /dev/sdb: 400.0 GB, 400088457216 bytes
255 heads, 63 sectors/track, 48641 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xda7ec7c5

   Device Boot      Start         End      Blocks   Id  System

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-48641, default 1): 
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-48641, default 48641): 
Using default value 48641

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
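
The transcript only shows /dev/sdb; the same layout is needed on sdc and sdd. Instead of repeating the fdisk dialog, the partition table can be cloned with sfdisk (a minimal sketch, assuming all three disks are identical):

# dump sdb's partition table and replay it onto the other two disks
sfdisk -d /dev/sdb | sfdisk /dev/sdc
sfdisk -d /dev/sdb | sfdisk /dev/sdd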

We now have the needed partitions (block devices):

root@rmlh-1:~# sfdisk -s /dev/sdb1 /dev/sdc1 /dev/sdd1

390708801
390708801
390708801

root@rmlh-1:~# mdadm --create /dev/md9 --raid-devices=2 --level=1 --spare-devices=1 /dev/sdb1 /dev/sdc1 /dev/sdd1

mdadm: array /dev/md9 started.

The device has been created, and its initialization is in progress:

root@rmlh-1:~# cat /proc/mdstat

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md9 : active raid1 sdd1[2](S) sdc1[1] sdb1[0]
      390708736 blocks [2/2] [UU]
      [>....................]  resync =  0.0% (320448/390708736) finish=101.5min speed=64089K/sec

unused devices: <none>
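
The (S) suffix marks sdd1 as the hot spare. For a fuller health check the array can be queried directly (a minimal sketch):

# show array state, member roles and the spare count
mdadm --detail /dev/md9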

Actions #2

Updated by Ernad Husremović over 16 years ago

Now let's create the LVM volume.

First we mark the device as an LVM physical volume:

root@rmlh-1:~# pvcreate /dev/md9
  Physical volume "/dev/md9" successfully created

then we create a volume group on that device:

root@rmlh-1:~# vgcreate rmlhvg2 /dev/md9

  Volume group "rmlhvg2" successfully created

and finally the volume itself:

root@rmlh-1:~# lvcreate -n root_1 -L 20G rmlhvg2
  Logical volume "root_1" created
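
To double-check the stack just created, LVM's reporting commands can be used (a minimal sketch):

# one summary line each for the physical volume, volume group and logical volume
pvs
vgs rmlhvg2
lvs rmlhvg2
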
Actions #3

Updated by Ernad Husremović over 16 years ago

We create an ext3 filesystem and mount it on root_1:

root@rmlh-1:~# mkfs.ext3 /dev/mapper/rmlhvg2-root_1

root@rmlh-1:~# mkdir /mnt/root_1
root@rmlh-1:~# mount /dev/mapper/rmlhvg2-root_1 /mnt/root_1
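
A quick confirmation that the filesystem is mounted with the expected size (a sketch):

# should report roughly 20G on /dev/mapper/rmlhvg2-root_1
df -h /mnt/root_1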

Actions #4

Updated by Ernad Husremović over 16 years ago

After a restart md9 did not come up; is the reason the missing definition in mdadm.conf?

root@rmlh-1:~# mdadm --detail --scan >> /etc/mdadm/mdadm.conf

root@rmlh-1:~# cat /etc/mdadm/mdadm.conf | grep ARRAY

ARRAY /dev/md9 level=raid1 num-devices=2 spares=1 UUID=eceac079:7895bfad:0ec0c405:8829961e
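
A note in hindsight (see #13 below): the boot-time assembly reads the copy of mdadm.conf embedded in the initramfs image, so after editing the file the image has to be regenerated, along the lines of:

# rebuild the initramfs so the boot-time mdadm sees the new ARRAY line
update-initramfs -k all -u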

Actions #5

Updated by Ernad Husremović over 16 years ago

According to this howtoforge article, the following has to be put into GRUB:

/boot/grub/menu.lst

fallback=1

Actions #6

Updated by Ernad Husremović over 16 years ago

  • Assignee changed from Ernad Husremović to Jasmin Beganović
  • % Done changed from 0 to 20

However, it still often happens that the rmlhvg2 volume on /dev/md9 does not come up during boot.

Actions #7

Updated by Ernad Husremović over 16 years ago

Grub fallback option

I’m not the world’s biggest fan of the GRUB boot loader. It’s nice that you don’t have to run a program after modifying the configuration file grub.conf, but I had problems in 2002 getting it to work with a LVM setup on Suse Linux. I ended up back with LILO for LVM.

Currently I use GRUB on some production servers, and just learned a new option that is very handy for when you build a new kernel and aren’t sure if it will boot. Administrators with remote power reset or console servers have it easy, but this gets you part-way there for free in software.

GRUB supports the option “fallback” to choose other kernels if the default kernel fails to boot.

The beginning of my grub.conf looks like this now:

default=6
timeout=15
fallback 5

You can add multiple fallback kernel choices.

Another useful option to investigate is “savedefault”.

Actions #8

Updated by Ernad Husremović over 16 years ago

  • % Done changed from 20 to 60

This seems to work:

/boot/grub/menu.lst

default 2
timeout 10
fallback=0

## hiddenmenu
# Hides the menu by default (press ESC to see the menu)
# hiddenmenu

# 0
title           Ubuntu 8.04, kernel 2.6.24-16-generic
root            (hd0,0)
kernel          /vmlinuz-2.6.24-16-generic root=/dev/mapper/rmlhvg1-root ro
initrd          /initrd.img-2.6.24-16-generic
quiet

# 1
title           Ubuntu 8.04, kernel 2.6.24-16-generic (recovery mode)
root            (hd0,0)
kernel          /vmlinuz-2.6.24-16-generic root=/dev/mapper/rmlhvg2-root_2 ro single
initrd          /initrd.img-2.6.24-16-generic

# 2
title           Ubuntu 8.04, kernel 2.6.18-053.10hernad3-openvz
root            (hd0,0)
kernel          /vmlinuz-2.6.18-053.10hernad3-openvz root=/dev/mapper/rmlhvg2-root_1 ro
initrd          /initrd.img-2.6.18-053.10hernad3-openvz
quiet

title           Ubuntu 8.04, kernel 2.6.18-053.10hernad3-openvz (recovery mode)
root            (hd0,0)
kernel          /vmlinuz-2.6.18-053.10hernad3-openvz root=/dev/mapper/rmlhvg2-root_1 ro single
initrd          /initrd.img-2.6.18-053.10hernad3-openvz

title           Ubuntu 8.04, memtest86+
root            (hd0,0)
kernel          /memtest86+.bin
quiet

It seems this timeout helps everything boot the way it should.

Actions #10

Updated by Ernad Husremović over 16 years ago

Nope... I spent the whole day stuck on this, and it would have been strange if I had solved something quickly :(

Actions #11

Updated by Ernad Husremović over 16 years ago

I'll start from the solution:

I made a script that runs during the init process, so that the rmlhvg2 volume is reliably brought up:

root@rmlh-1:/etc/initramfs-tools/scripts# cat local-premount/lvm_rmlhvg2

#!/bin/sh

# init-premount script for lvm2.

PREREQS="" 
prereqs()
{
    echo $PREREQS
}

mountroot_fail()
{
    if [ !  -e /dev/mapper/rmlhvg2-root_1 ]; then
        cat <<EOF
  rmlhvg2  jos nije uppppppppppppppppppp ????
EOF
        sleep 5
        exit 1
    fi
}

case $1 in
# get pre-requisites
prereqs)
    prereqs
    exit 0
    ;;
mountfail)
    mountroot_fail
    exit 0
    ;;
esac

. /scripts/functions

cat <<EOF
  ------------------------ rmlhvg2 local-premont  ----------------------
EOF

lvm vgchange -ay

cat <<EOF
  =======================================================================
EOF

sleep 5

add_mountroot_fail_hook

exit 0

Note: the sleeps above have no effect because, I assume, this script is executed as a background ("&") process.
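
One detail worth noting: initramfs-tools only runs boot scripts that carry the execute bit, so the script has to be made executable (a sketch, using the path above):

chmod +x /etc/initramfs-tools/scripts/local-premount/lvm_rmlhvg2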

Actions #12

Updated by Ernad Husremović over 16 years ago

After that, an update of the initramfs:

root@rmlh-1:/etc/initramfs-tools/scripts# update-initramfs -k all -u

update-initramfs: Generating /boot/initrd.img-2.6.24-16-generic
update-initramfs: Generating /boot/initrd.img-2.6.18-053.10hernad3-openvz

Actions #13

Updated by Ernad Husremović over 16 years ago

What really threw me off is that I spent the whole day working with the wrong /etc/mdadm/mdadm.conf.

This conf is what the init process needs (more precisely, the conf is placed into the initramfs image), because the array is assembled based on it.

How this worked at all with the earlier image, I have no idea.
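
To check which mdadm.conf a given image actually contains, the cpio archive can be listed and the embedded copy extracted (a minimal sketch, assuming a gzip-compressed image):

# list the image contents and look for mdadm.conf
zcat /boot/initrd.img-2.6.24-16-generic | cpio -it | grep mdadm

# print the embedded copy to compare against /etc/mdadm/mdadm.conf
zcat /boot/initrd.img-2.6.24-16-generic | cpio -i --to-stdout '*mdadm.conf'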

Actions #14

Updated by Ernad Husremović over 16 years ago

  • % Done changed from 60 to 100
Actions #15

Updated by Ernad Husremović over 16 years ago

Above I can see that

root@rmlh-1:/boot# mdadm --detail --scan

ARRAY /dev/md9 level=raid1 num-devices=2 spares=1 UUID=eceac079:7895bfad:0ec0c405:8829961e

/etc/mdadm/mdadm.conf

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

# This file was auto-generated on Sun, 11 May 2008 15:46:34 +0200
# by mkconf $Id$

ARRAY /dev/md9 level=raid1 num-devices=2 spares=1 UUID=eceac079:7895bfad:0ec0c405:8829961e

Actions #16

Updated by Ernad Husremović over 16 years ago

  • Subject changed from rmlh-1: linux softraid (md) raid level=1 /dev/md9 to rmlh-1: linux softraid (md) mirror raid level=1 /dev/md9

There was a power/UPS hiccup, after which the server dropped into busybox mode because a resync had started.

In busybox I ran the following commands:

(busybox) cat /proc/mdstat

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md9 : active raid1 sdb1[0] sdd1[2](S) sdc1[1]
      390708736 blocks [2/2] [UU]
      [====>................]  resync = 23.7% (92617408/390708736) finish=149.9min speed=33125K/sec

root@rmlh-1:~# ls /dev/mapper

rmlhvg2 was not active,

so in busybox I also ran

(busybox) lvm vgchange -ay

which activated rmlhvg2.

Finally

(busybox) exit

and the system continued booting normally.

What I noticed is that my local-premount script ran after I exited busybox.

It looks like this script needs to be moved earlier, so that in a similar situation the boot does not drop into busybox.

Actions #17

Updated by Ernad Husremović over 16 years ago

  • % Done changed from 100 to 90

Obviously this is what needs to be done:

Before we deliver the server, we should run the following simulation:

  1. move the lvm_rmlhvg2 script to /etc/initramfs-tools/scripts/init* (I assume init-bottom is the best place)
  2. regenerate the initramfs images
    • update-initramfs -k all -u
  3. shut the server down forcefully
    • i.e. simulate a power failure, which should leave the array out of sync
  4. boot

The goal is to avoid busybox.

Actions #18

Updated by Jasmin Beganović over 16 years ago

Script moved:

root@rmlh-1:/etc/initramfs-tools/scripts/local-premount# mv lvm_rmlhvg2  /etc/initramfs-tools/scripts/init-bottom/

Verified:

root@rmlh-1:/etc/initramfs-tools/scripts/local-premount# cd /etc/initramfs-tools/scripts/init-bottom/
root@rmlh-1:/etc/initramfs-tools/scripts/init-bottom# ls
lvm_rmlhvg2

Update of the initramfs images:

root@rmlh-1:/etc/initramfs-tools/scripts/init-bottom# update-initramfs -k all -u
update-initramfs: Generating /boot/initrd.img-2.6.24-16-generic
update-initramfs: Generating /boot/initrd.img-2.6.18-053.10hernad3-openvz
root@rmlh-1:/etc/initramfs-tools/scripts/init-bottom#

Now I'm going to simulate a power failure.

Actions #19

Updated by Jasmin Beganović over 16 years ago

Simulated a power failure by pulling the cable; on powering back up the server started booting, as expected.

Actions #20

Updated by Jasmin Beganović over 16 years ago

Caught the init-bottom/lvm_rmlhvg2 script doing its job.

The server came up normally.

Actions #21

Updated by Jasmin Beganović over 16 years ago

After a few forced power cycles the RAID went into resync, which progressively sped up as the system came up:

root@rmlh-1:~# cat /proc/mdstat 
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md9 : active raid1 sdb1[0] sdd1[2](S) sdc1[1]
      390708736 blocks [2/2] [UU]
      [>....................]  resync =  0.1% (649600/390708736) finish=3466.0min speed=1874K/sec

unused devices: <none>
root@rmlh-1:~# cat /proc/mdstat 
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md9 : active raid1 sdb1[0] sdd1[2](S) sdc1[1]
      390708736 blocks [2/2] [UU]
      [>....................]  resync =  0.1% (649728/390708736) finish=3692.6min speed=1760K/sec

unused devices: <none>
root@rmlh-1:~# cat /proc/mdstat 
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md9 : active raid1 sdb1[0] sdd1[2](S) sdc1[1]
      390708736 blocks [2/2] [UU]
      [>....................]  resync =  0.1% (747072/390708736) finish=1552.5min speed=4184K/sec

unused devices: <none>
root@rmlh-1:~# cat /proc/mdstat 
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md9 : active raid1 sdb1[0] sdd1[2](S) sdc1[1]
      390708736 blocks [2/2] [UU]
      [>....................]  resync =  0.2% (914240/390708736) finish=721.3min speed=9003K/sec

unused devices: <none>
root@rmlh-1:~# cat /proc/mdstat 
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md9 : active raid1 sdb1[0] sdd1[2](S) sdc1[1]
      390708736 blocks [2/2] [UU]
      [>....................]  resync =  0.3% (1422784/390708736) finish=276.9min speed=23422K/sec

unused devices: <none>
root@rmlh-1:~# cat /proc/mdstat 
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md9 : active raid1 sdb1[0] sdd1[2](S) sdc1[1]
      390708736 blocks [2/2] [UU]
      [>....................]  resync =  0.4% (1607296/390708736) finish=230.2min speed=28158K/sec

unused devices: <none>
root@rmlh-1:~# cat /proc/mdstat 
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md9 : active raid1 sdb1[0] sdd1[2](S) sdc1[1]
      390708736 blocks [2/2] [UU]
      [>....................]  resync =  0.4% (1672512/390708736) finish=229.6min speed=28225K/sec

unused devices: <none>
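
The progressive speed-up is expected: md throttles a resync while there is other I/O on the disks, between two tunable limits. A sketch of how to inspect them, and raise the floor if a resync needs to finish faster:

# resync speed limits in KB/s per device (defaults: 1000 min, 200000 max)
sysctl dev.raid.speed_limit_min
sysctl dev.raid.speed_limit_max

# temporarily raise the minimum so the resync is not starved by other I/O
sysctl -w dev.raid.speed_limit_min=50000
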
Actions #22

Updated by Jasmin Beganović over 16 years ago

The resync finished successfully:

root@rmlh-1:~# cat /proc/mdstat 
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md9 : active raid1 sdb1[0] sdd1[2](S) sdc1[1]
      390708736 blocks [2/2] [UU]
      [===================>.]  resync = 99.9% (390699648/390708736) finish=0.0min speed=22126K/sec

unused devices: <none>
root@rmlh-1:~# cat /proc/mdstat 
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md9 : active raid1 sdb1[0] sdd1[2](S) sdc1[1]
      390708736 blocks [2/2] [UU]

unused devices: <none>

From the log:

Jun 11 21:54:30 rmlh-1 kernel: md: md9: sync done.
Jun 11 21:54:30 rmlh-1 kernel: RAID1 conf printout:
Jun 11 21:54:30 rmlh-1 kernel:  --- wd:2 rd:2
Jun 11 21:54:30 rmlh-1 kernel:  disk 0, wo:0, o:1, dev:sdb1
Jun 11 21:54:30 rmlh-1 kernel:  disk 1, wo:0, o:1, dev:sdc1

Actions #23

Updated by Jasmin Beganović over 16 years ago

  • Status changed from Assigned to Closed

That's it.

Actions #24

Updated by Jasmin Beganović about 16 years ago

This has been solved in Intrepid Ibex:

Boot degraded raid setting

Traditionally, booting an Ubuntu installation with the root filesystem on a degraded RAID drops the system into a busybox prompt in the initramfs. This is the safest choice, as it prevents any further possible harm to data and lets the administrator pick what to do, but it was causing issues with servers hosted in remote locations. A system administrator can now statically configure their machines to continue booting even if a disk in the array is bad, by issuing the following command:

echo "BOOT_DEGRADED=true" | sudo tee -a /etc/initramfs-tools/conf.d/mdadm

Additionally, this can be specified on the kernel boot line with the

bootdegraded=[true|false]

parameter.
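
Applied to a menu.lst entry like the ones above, the kernel line would look like this (a sketch, not a line tested on this server):

kernel          /vmlinuz-2.6.24-16-generic root=/dev/mapper/rmlhvg1-root ro bootdegraded=true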
