RHEV/ovirt – can't switch SPM role – async_tasks are stuck

On the host with the SPM role

$ vdsClient -s 0 getAllTasksStatuses
{'status': {'message': 'OK', 'code': 0}, 'allTasksStatus': {'feb3aaa5-ec1c-42a6-8f17-f7c94891b43f': {'message': '1 jobs completed successfully', 'code': 0, 'taskID': '631fd441-0955-49da-9376-1cba24764aa7', 'taskResult': 'success', 'taskState': 'finished'}, 'b4fe0c6d-d458-4ed2-a9e2-2c0d41914b8f': {'message': '1 jobs completed successfully', 'code': 0, 'taskID': '67e1a2e8-3747-43fa-b0dd-fc469a6f6a02', 'taskResult': 'success',
'taskState': 'finished'}}}

On the RHEV/ovirt manager

$ for i in b4fe0c6d-d458-4ed2-a9e2-2c0d41914b8f feb3aaa5-ec1c-42a6-8f17-f7c94891b43f; do psql --dbname=engine --command="DELETE FROM async_tasks WHERE vdsm_task_id='${i}'"; done
$ for j in b4fe0c6d-d458-4ed2-a9e2-2c0d41914b8f feb3aaa5-ec1c-42a6-8f17-f7c94891b43f; do vdsClient -s 0 clearTask ${j}; done
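Deleting and clearing like this is only safe for tasks vdsm reports as finished. A small guard sketch (the function name and the parsing are my own, matching the quoted-dict output format of getAllTasksStatuses shown above):

```shell
# Sketch: clear a vdsm task only when it is reported as finished
# (function name and output parsing are mine, based on the
# getAllTasksStatuses format shown above).
clear_if_finished() {
  local id="$1" entry
  entry=$(vdsClient -s 0 getAllTasksStatuses | grep -o "'$id': {[^}]*}")
  if printf '%s' "$entry" | grep -q "'taskState': 'finished'"; then
    vdsClient -s 0 clearTask "$id"
  else
    echo "task $id not finished, not clearing" >&2
    return 1
  fi
}
```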

ROSE Xeon CW (2015) & power2max Rotor 3D+

Frame: ROSE Xeon CW 2015

Bottom bracket: Rotor Pressfit 4630 (PF46-68-30)

Crank: Rotor 3D+ with p2m spider

Spacers per specs: 1x A + 1x E on the drive side (DS), 1x A on the non-drive side (NDS)

Spacers installed: 2x A on the DS

With the spacers Rotor specifies, the spider rubs against the frame. According to ROSE and a sports lab, it is no problem to move the 2.5 mm spacer from the NDS to the DS.

RHEV/ovirt – find stuck / zombie tasks

Random notes

$ vdsClient -s 0 getAllTasksStatuses
$ vdsClient stopTask <taskid>
$ vdsClient clearTask <taskid>
$ su - postgres
$ psql -d engine -U postgres
> select * from job order by start_time desc;
> select DeleteJob('702e9f6a-e2a3-4113-bd7d-3757ba6bc4ef');


/usr/share/ovirt-engine/dbscripts/engine-psql.sh -c "select * from job;"
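The per-job DeleteJob call above can be batched; a sketch that purges old jobs via the engine-psql.sh helper (the one-day interval and the function name are my own choices):

```shell
# Sketch: purge engine jobs older than one day in one go (the interval
# and the wrapper function are mine; DeleteJob and the job table are the
# same ones used above).
purge_old_jobs() {
  local psql_sh="${1:-/usr/share/ovirt-engine/dbscripts/engine-psql.sh}"
  "$psql_sh" -c \
    "select DeleteJob(job_id) from job where start_time < now() - interval '1 day';"
}
```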

entropy inside a virtual machine

Sometimes my Ceph (test!) deployments inside a VM failed.

The problem is that the kernel/CPU cannot provide enough entropy (random numbers) for the ceph-create-keys command, so it gets stuck/hangs. It is not a Ceph problem! The same can happen with SSL commands.

But first things first – we need to check the available entropy on a system:

cat /proc/sys/kernel/random/entropy_avail

The read-only file entropy_avail gives the available entropy.
Normally this will be 4096 (bits), a full entropy pool (see man 4 random).

Values below 100-200 mean you have a problem!
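A quick way to keep an eye on it (the helper function and the 200 threshold are my own):

```shell
# Sketch: warn when the entropy pool runs low (the 200 threshold is my
# choice; the file path is the standard Linux procfs location).
entropy_ok() {
  local f="${1:-/proc/sys/kernel/random/entropy_avail}"
  [ "$(cat "$f")" -ge 200 ]
}
entropy_ok && echo "entropy ok" || echo "entropy low"
```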

For a virtual machine we can create a new device – virtio-rng. Here is an XML example for libvirt.

<rng model='virtio'>
  <backend model='random'>/dev/random</backend>
</rng>

That is OK for ONE virtual machine on the hypervisor. Usually we find more than one virtual machine, therefore we also need to install the rng-tools package in the virtual machines.

$ pkgmgr install rng-tools
systemctl enable rngd
systemctl start rngd
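To verify that the guest actually picked up the virtio-rng device, a quick check (the sysfs path is the standard Linux hwrng interface; the helper function is mine):

```shell
# Sketch: report which hardware RNG the guest kernel is using
# (the sysfs path is the standard Linux hwrng interface).
rng_in_use() {
  local cur="${1:-/sys/devices/virtual/misc/hw_random/rng_current}"
  if [ -r "$cur" ] && grep -q virtio "$cur"; then
    echo "virtio-rng active"
  else
    echo "no virtio-rng found"
  fi
}
rng_in_use
```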

That's it! That solved a lot of my problems 😉

Openstack Horizon – leap year bug

Switching the language in the dashboard ends with an error.

day is out of range for month

e.g. https://bugs.launchpad.net/horizon/+bug/1551099

[Mon Feb 29 09:20:05 2016] [error] Internal Server Error: /settings/
[Mon Feb 29 09:20:05 2016] [error] Traceback (most recent call last):
[Mon Feb 29 09:20:05 2016] [error]   File "/usr/lib64/python2.6/site-packages/django/core/handlers/base.py", line 112, in get_response
[Mon Feb 29 09:20:05 2016] [error]     response = wrapped_callback(request, *callback_args, **callback_kwargs)
[Mon Feb 29 09:20:05 2016] [error]   File "/usr/lib64/python2.6/site-packages/horizon/decorators.py", line 36, in dec
[Mon Feb 29 09:20:05 2016] [error]     return view_func(request, *args, **kwargs)
[Mon Feb 29 09:20:05 2016] [error]   File "/usr/lib64/python2.6/site-packages/horizon/decorators.py", line 52, in dec
[Mon Feb 29 09:20:05 2016] [error]     return view_func(request, *args, **kwargs)
[Mon Feb 29 09:20:05 2016] [error]   File "/usr/lib64/python2.6/site-packages/horizon/decorators.py", line 36, in dec
[Mon Feb 29 09:20:05 2016] [error]     return view_func(request, *args, **kwargs)
[Mon Feb 29 09:20:05 2016] [error]   File "/usr/lib64/python2.6/site-packages/django/views/generic/base.py", line 69, in view
[Mon Feb 29 09:20:05 2016] [error]     return self.dispatch(request, *args, **kwargs)
[Mon Feb 29 09:20:05 2016] [error]   File "/usr/lib64/python2.6/site-packages/django/views/generic/base.py", line 87, in dispatch
[Mon Feb 29 09:20:05 2016] [error]     return handler(request, *args, **kwargs)
[Mon Feb 29 09:20:05 2016] [error]   File "/usr/lib64/python2.6/site-packages/django/views/generic/edit.py", line 171, in post
[Mon Feb 29 09:20:05 2016] [error]     return self.form_valid(form)
[Mon Feb 29 09:20:05 2016] [error]   File "/srv/www/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/settings/user/views.py", line 38, in form_valid
[Mon Feb 29 09:20:05 2016] [error]     return form.handle(self.request, form.cleaned_data)
[Mon Feb 29 09:20:05 2016] [error]   File "/srv/www/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/settings/user/forms.py", line 89, in handle
[Mon Feb 29 09:20:05 2016] [error]     expires=_one_year())
[Mon Feb 29 09:20:05 2016] [error]   File "/srv/www/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/settings/user/forms.py", line 32, in _one_year
[Mon Feb 29 09:20:05 2016] [error]     now.minute, now.second, now.microsecond, now.tzinfo)
[Mon Feb 29 09:20:05 2016] [error] ValueError: day is out of range for month
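The failure is easy to reproduce outside Horizon. GNU date normalizes the invalid date (Feb 29 plus one year) to March 1, while Python's datetime refuses it with exactly the ValueError from the traceback:

```shell
# Reproduce the pitfall with GNU date: adding one year to Feb 29 yields
# an invalid date, which date normalizes to March 1 - Python's datetime
# raises "day is out of range for month" instead, which is what Horizon hit.
date -u -d "2016-02-29 +1 year" +%Y-%m-%d
```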

SUSE Openstack Cloud – sleshammer – pre/post scripts – pxe trigger

Enable root login for the sleshammer image

(it is used by SUSE Cloud as a hardware discovery image)

The sleshammer image mounts "/updates" over NFS from the admin node and executes control.sh. This script checks whether there are any pre/post hooks and executes them if present.

root@admin:/updates # cat /updates/discovered-pre/set-root-passwd.hook
echo "root" | passwd --stdin root

sleep 10

Make sure that the hook is set executable (chmod +x)!

SUSE Openstack Cloud only supports pre and post scripts. Here discovered is the state; discovery or hardware-installed should also work.
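For example, another post-hook sketch (the hook filename and log path are my own; HOOKDIR defaults to a /tmp path so it can be tried anywhere, on the admin node it would be /updates/discovered-post):

```shell
# Sketch of a discovered-post hook (filename and log path are mine).
# On the admin node, HOOKDIR would be /updates/discovered-post.
HOOKDIR=${HOOKDIR:-/tmp/updates/discovered-post}
mkdir -p "$HOOKDIR"
cat > "$HOOKDIR/log-discovery.hook" <<'EOF'
#!/bin/bash
echo "discovery finished on $(hostname)" >> /tmp/discovery.log
EOF
chmod +x "$HOOKDIR/log-discovery.hook"
```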

BTW: you can also create a custom control.sh script (and hooks) for a single node!

mkdir /updates/d52-54-00-9e-a6-90.cloud.default.net/
cp /updates/control.sh /updates/d52-54-00-9e-a6-90.cloud.default.net/

Some random notes – discovery/install

default pxelinux configuration
(see http://admin-node:8091/discovery/pxelinux.cfg/)

DEFAULT discovery
LABEL discovery
  KERNEL vmlinuz0
  append initrd=initrd0.img crowbar.install.key=machine-install:34e4b23a970dbb05df9c91e0c1cf4b512ecaa7b839c942b95d86db1962178ead69774a9dc8630b13da171bcca0ea204c07575997822b3ec1de984da97fca5b84 crowbar.hostname=d52-54-00-8b-c2-17.cloud.default.net crowbar.state=discovery

allocated node

The sleshammer image waits for this entry (.*_install) on the admin node once you allocate a node.

DEFAULT suse-11.3_install
LABEL suse-11.3_install
  KERNEL ../suse-11.3/install/boot/x86_64/loader/linux
  append initrd=../suse-11.3/install/boot/x86_64/loader/initrd   crowbar.install.key=machine-install:34e4b23a970dbb05df9c91e0c1cf4b512ecaa7b839c942b95d86db1962178ead69774a9dc8630b13da171bcca0ea204c07575997822b3ec1de984da97fca5b84 install= autoyast= ifcfg=dhcp4 netwait=60

openvswitch and OpenFlow


Layer 1

ovs-ofctl del-flow BRIDGE
ovs-ofctl add-flow BRIDGE priority=500,in_port=1,actions=output:2
ovs-ofctl add-flow BRIDGE priority=500,in_port=2,actions=output:1
ovs-ofctl dump-flows BRIDGE

Layer 2

ovs-ofctl del-flow BRIDGE
ovs-ofctl add-flow BRIDGE dl_src=00:00:00:00:00:01,dl_dst=00:00:00:00:00:02,actions=output:2
ovs-ofctl add-flow BRIDGE dl_src=00:00:00:00:00:02,dl_dst=00:00:00:00:00:01,actions=output:1
ovs-ofctl add-flow BRIDGE dl_type=0x806,nw_proto=1,actions=flood
ovs-ofctl dump-flows BRIDGE 

Layer 3

ovs-ofctl del-flow BRIDGE
ovs-ofctl add-flow BRIDGE priority=500,dl_type=0x800,nw_src=,nw_dst=,actions=normal
ovs-ofctl add-flow BRIDGE priority=800,ip,nw_src=,actions=mod_nw_tos=184,normal
ovs-ofctl add-flow BRIDGE arp,nw_dst=,actions=output:1
ovs-ofctl add-flow BRIDGE arp,nw_dst=,actions=output:2
ovs-ofctl add-flow BRIDGE arp,nw_dst=,actions=output:3
ovs-ofctl dump-flows BRIDGE 

Layer 4

ovs-ofctl del-flow BRIDGE 
ovs-ofctl add-flow BRIDGE arp,actions=normal
ovs-ofctl add-flow BRIDGE priority=500,dl_type=0x800,nw_proto=6,tp_dst=80,actions=output:3
ovs-ofctl add-flow BRIDGE priority=800,ip,nw_src=,actions=normal
ovs-ofctl dump-flows BRIDGE 



Priority rules

When no priority is set, the default is 32768. Allowed values range from 0 to 65535. A higher priority matches first.


dl_type and nw_proto

dl_type and nw_proto are filters to match a specific network packet. Generally dl_type matches on L2 (the ethertype) and nw_proto on L3 (the IP protocol type). For example:

dl_type=0x800 – for ipv4 packets

dl_type=0x86dd – for ipv6 packets

dl_type=0x806 and nw_proto=1 – match only arp requests (ARP opcode, see layer 2)

dl_type=0x800 or ip (as keyword, see layer 3) has the same meaning

ip and nw_proto=17 – udp packets

ip and nw_proto=6 – tcp packets

Possible values for actions include (excerpt):

  • normal – default mode, OVS acts like a normal L2 switch
  • drop – drops the packet
  • output – define the output port for a packet/rule
  • resubmit – useful for multiple tables, resend a packet to a port or table
  • flood – forward a packet on all ports except the one on which it was received
  • strip_vlan – remove a VLAN tag from a packet
  • set_tunnel – set a tunnel id (GRE & VXLAN)
  • mod_vlan_vid – add a VLAN tag to a packet
  • learn – complex foo 😉
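These actions can be combined in one flow. For instance, a small wrapper (my own) that mirrors all traffic arriving on one port to another port while keeping normal L2 switching:

```shell
# Sketch: mirror traffic from one port to another while still forwarding
# normally (wrapper function and priority value are my own choices).
mirror_port() {
  local br=$1 src=$2 dst=$3
  ovs-ofctl add-flow "$br" "priority=600,in_port=$src,actions=output:$dst,normal"
}
```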

ovs-ofctl man page

Example from an OpenStack node (w/ GRE, see table 22) – OVS flows from the br-tun device

[root@node1 ~]# ovs-ofctl dump-flows br-tun
NXST_FLOW reply (xid=0x4):
cookie=0x0, duration=1221.218s, table=0, n_packets=0, n_bytes=0, idle_age=1221, priority=0 actions=drop
cookie=0x0, duration=1221.323s, table=0, n_packets=747, n_bytes=54800, idle_age=0, priority=1,in_port=1 actions=resubmit(,2)
cookie=0x0, duration=1220.226s, table=0, n_packets=0, n_bytes=0, idle_age=1220, priority=1,in_port=2 actions=resubmit(,3)
cookie=0x0, duration=1221.126s, table=2, n_packets=0, n_bytes=0, idle_age=1221, priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,20)
cookie=0x0, duration=1221.051s, table=2, n_packets=747, n_bytes=54800, idle_age=0, priority=0,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,22)
cookie=0x0, duration=1220.974s, table=3, n_packets=0, n_bytes=0, idle_age=1220, priority=0 actions=drop
cookie=0x0, duration=1218.706s, table=3, n_packets=0, n_bytes=0, idle_age=1218, priority=1,tun_id=0x3f7 actions=mod_vlan_vid:1,resubmit(,10)
cookie=0x0, duration=1217.462s, table=3, n_packets=0, n_bytes=0, idle_age=1217, priority=1,tun_id=0x442 actions=mod_vlan_vid:2,resubmit(,10)
cookie=0x0, duration=1220.898s, table=4, n_packets=0, n_bytes=0, idle_age=1220, priority=0 actions=drop
cookie=0x0, duration=1220.821s, table=10, n_packets=0, n_bytes=0, idle_age=1220, priority=1 actions=learn(table=20,hard_timeout=300,priority=1,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1
cookie=0x0, duration=1220.742s, table=20, n_packets=0, n_bytes=0, idle_age=1220, priority=0 actions=resubmit(,22)
cookie=0x0, duration=1220.666s, table=22, n_packets=137, n_bytes=21860, idle_age=13, priority=0 actions=drop
cookie=0x0, duration=1220.093s, table=22, n_packets=610, n_bytes=32940, idle_age=0, hard_age=1217, dl_vlan=2 actions=strip_vlan,set_tunnel:0x442,output:2
cookie=0x0, duration=1219.970s, table=22, n_packets=0, n_bytes=0, idle_age=1219, hard_age=1218, dl_vlan=1 actions=strip_vlan,set_tunnel:0x3f7,output:2

Gentoo – initramfs with busybox, lvm and some more…


mkdir -p /usr/src/initramfs/{bin,lib/modules,dev,etc,mnt/root,proc,root,sbin,sys}
cp -a /dev/{null,console,tty,sda*} /usr/src/initramfs/dev/


USE="static make-symlinks -pam -savedconfig" emerge --root=/usr/src/initramfs/ -av busybox

LVM already provides a static binary 🙂

cp /sbin/lvm.static /usr/src/initramfs/lvm
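The initramfs still needs an /init script that activates LVM and hands over to the real root. A minimal sketch (the volume group "vg0" and the logical volume "root" are assumptions – adjust to your layout; /lvm is the static binary copied above):

```shell
# Minimal /init sketch for the busybox+lvm initramfs built above
# (VG "vg0" and LV "root" are assumptions - adjust to your layout).
mkdir -p /usr/src/initramfs
cat > /usr/src/initramfs/init <<'EOF'
#!/bin/busybox sh
mount -t proc none /proc
mount -t sysfs none /sys
mdev -s                       # let busybox populate /dev
/lvm vgscan --mknodes         # find volume groups, create device nodes
/lvm vgchange -ay vg0         # activate the root VG (name assumed)
mount -o ro /dev/vg0/root /mnt/root
umount /proc /sys
exec switch_root /mnt/root /sbin/init
EOF
chmod +x /usr/src/initramfs/init
```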