oVirt Error: Upload to a Local Storage Domain Is Supported Only Through SSH

At the end of last week, I spied an exciting tweet about oVirt:

libgfapi-ready

Not long after I started using oVirt and Gluster together, the projects started talking about a way to improve Gluster performance by enabling virtualization hosts to access Gluster volumes directly, using Gluster's libgfapi, rather than through a FUSE-mounted location on the virtualization host. There was just a little bit of fit and finish work to be done, and then we'd all be basking in the glow of ~30% better Gluster storage performance.

That was nearly four years ago. There ended up being kind of a lot of different little things that needed fixing to make this feature work in oVirt. You can follow many of the twists and turns in Bugzilla.

All along, I was eagerly awaiting the feature, both as a cool new oVirt+Gluster development and as a welcome option for speeding up my own lab. Disk has always been the weakest part of my hardware setup. My servers each have a single pair of 1TB drives in mirrored RAID, shared between Gluster and the OS, and my VMs' virtual drives had been stored in triplicate in replica 3 Gluster volumes. More recently, with the arrival of Gluster arbiter bricks, I've been able to get the split-brain protection of replica 3 volumes with only two copies of the data, which sped things up a bit but did nothing to dampen my appetite for libgfapi.

Since I need my oVirt setup to get things done, I usually don't test RC versions of new oVirt components there, but I couldn't wait any longer and took the plunge. I installed the RC2 updates on each of my virt hosts, and on my engine I installed a slightly newer version of the code, from the experimental repo, which contained a few last bits that hadn't made RC2. Then, on my engine, I ran:

# engine-config -s LibgfApiSupported=true
# systemctl restart ovirt-engine

Any VMs that were already running before the upgrade continued running without libgfapi, and if I migrated them to another host, they'd turn up on that host still using the old access method. When I restarted my VMs, they came back using libgfapi. I could tell which was which by grepping through the qemu processes on a particular VM host.

# ps ax | grep qemu | grep 'file=gluster\|file=/rhev'

 -drive file=/rhev/data-center/00000001-0001-0001-0001-00000000025e/616be2b6-71db-4f54-befd-be6a444775d7/images/3f7877e7-e532-44a0-8735-c7b2ca06de3b/48ee34fc-ae12-494c-892f-4229fe1fef9d

 -drive file=gluster://10.0.20.1/data/616be2b6-71db-4f54-befd-be6a444775d7/images/6597f45a-51cd-4da5-b078-a2652baf78e4/cc3a575e-27b8-4176-b922-9466273153be

The qemu command lines are super long, so I cut them down to include just the portion specifying the virtual drives. In the first example, the drive is being accessed through a FUSE mount; in the second, there's a direct connection to the Gluster volume.

So, how was performance?

I tried a few different tests, starting with running dd on one of my VMs:

# dd bs=1M count=1024 if=/dev/zero of=test conv=fdatasync && rm test

I ran this a bunch of times on a VM in each storage configuration, and the libgfapi configuration came out about 44% faster on average.

For a more "real world" test, I figured I'd measure the time it takes to complete a common task of mine: configuring a test Kubernetes cluster from three Fedora Atomic Host VMs using the upstream ansible scripts. I recorded and averaged the time it took to complete this task across multiple runs on VMs running in each storage configuration, and found that libgfapi was 11% faster.

zram madness

Not too bad, but like I said earlier, my oVirt setup can use all the storage speed help it can get. My servers don't have a lot of disk, but they do have quite a bit of RAM, 256GB apiece, so I've long wondered how I could use that RAM to wring more speed out of my setup. For a few months I've been experimenting with Gluster volumes backed by RAM disks, using zram devices.

This actually works pretty well, and I was seeing speeds similar to what I get running on the SSD in my laptop. Of course, RAM disks mean losing everything on the disk in the event of a reboot (expected or otherwise), but using replica 3 Gluster volumes, I could reboot one host at a time without losing anything. Upon bringing back the rebooted host, I'd run a little script to recreate the zram device and the mount points, and then follow the Gluster instructions for replacing a failed brick.

# cat fast.sh
ZRAMSIZE=$((1024 * 1024 * 1024 * 50))
modprobe zram
echo ${ZRAMSIZE} > /sys/class/block/zram0/disksize
mkfs -t xfs /dev/zram0
mkdir -p /gluster-bricks/fast
mount /dev/zram0 /gluster-bricks/fast
mkdir /gluster-bricks/fast/brick

However, if all of my machines went down at once, due to a power failure in the lab or something like that, replication wouldn't help me. I wondered whether I could still get a significant boost out of a mixture of zram- and regular disk-backed bricks, with each of my servers hosting one zram-backed brick, one regular disk-backed brick, and one regular disk-backed arbiter brick, all combined into one distributed-replicated Gluster volume.
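
To make that layout concrete, here's roughly what creating such a volume looks like from the Gluster CLI. The host names, brick paths, and number of replica sets below are placeholders rather than my actual servers, so treat this as a sketch of the shape, not a recipe. In each group of three bricks, the first two hold data (one zram-backed, one disk-backed) and the third is the arbiter:

# gluster volume create fast replica 3 arbiter 1 \
    host1:/gluster-bricks/fast/brick host2:/gluster-bricks/slow/brick1 host3:/gluster-bricks/arbiter/brick1 \
    host2:/gluster-bricks/fast/brick host3:/gluster-bricks/slow/brick2 host1:/gluster-bricks/arbiter/brick2 \
    host3:/gluster-bricks/fast/brick host1:/gluster-bricks/slow/brick3 host2:/gluster-bricks/arbiter/brick3
# gluster volume start fast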

brick-house

I ran my same ansible-kubernetes setup tests with the VM drives hosted from my "fast" Gluster domain, and the tests ran 32% faster than with my regular disk-backed (and now libgfapi-enabled) "data" storage domain. Pretty nice, and in this sort of setup, a power loss would mean that each of the four replica groups would be missing one brick, with a remaining data brick and an arbiter brick still around to maintain the data and allow me to repair things.

I want to experiment a bit further with automated tiering in Gluster, where I'd attach a RAM-disk-backed volume like this to the volume for my main data domain, and frequently accessed files would automatically migrate to the faster storage. As it is now, my fast domain has to be relatively small, so I have to budget my use of it.

I've written an updated version of this howto for oVirt 3.3 at the Red Hat Community blog.

The latest version of the open source virtualization platform, oVirt, has arrived, which means it's time for the third edition of my "running oVirt on a single machine" blog post. I'm delighted to report that this ought to be the shortest (and least-updated, I hope) post of the three so far.

When I wrote my first "Up and Running" post last year, getting oVirt running on a single machine was more of a hack than a supported configuration. Wrangling large groups of virtualization hosts is oVirt's reason for existence. oVirt is designed to run with its manager component, its virtualization hosts, and its shared storage all running on separate pieces of hardware. That's how you'd want it set up for production, but a project that requires a bunch of hardware just for kicking the tires is going to find its tires un-kicked.

Fortunately, this changed in August's oVirt 3.1 release, which shipped with an All-in-One installer plugin, but, as a glance at the volume of strikethrough text and UPDATE notices in my post for that release will show, there were more than a few bumps in the 3.1 road.

In oVirt 3.2, the process has gotten much smoother, and should be as simple as setting up the oVirt repo, installing the right package, and running the install script. Also, there's now a LiveCD image available that you can burn onto a USB stick, boot a suitable system from, and give oVirt a try without installing anything. The downsides of the LiveCD are its size (2.1GB) and the fact that it doesn't persist. But that second bit is one of its virtues, as well. The All-in-One setup I describe below is one that you can keep around for a while, if that's what you're after.

Without further ado, here's how to get up and running with oVirt on a single machine:

HARDWARE REQUIREMENTS: You need a machine with x86-64 processors with hardware virtualization extensions. This bit is non-negotiable–the KVM hypervisor won't work without them. Your machine should have at least 4GB of RAM. Virtualization is a RAM-hungry affair, so the more memory, the better. Keep in mind that any VMs you run will need RAM of their own.

It's possible to run oVirt in a virtual machine–I've taken to testing oVirt on oVirt itself most of the time–but your virtualization host has to be set up for nested KVM for this to work. I've written a bit about running oVirt in a VM here.

SOFTWARE REQUIREMENTS: oVirt is developed on Fedora, and any given oVirt release tends to track the most recent Fedora release. For oVirt 3.2, this means Fedora 18. I run oVirt on minimal Fedora configurations, installed from the DVD or the netboot images. With oVirt 3.1, a lot of people ran into trouble installing oVirt on the default LiveCD Fedora media, largely due to conflicts with NetworkManager. When I tested 3.2, the installer script disabled NM on its own, but I had to manually enable sshd (sudo service sshd start && sudo chkconfig sshd on).

A lot of oVirt community members run the project on CentOS or Scientific Linux using packages built by Andrey Gordeev, and official packages for these "el6" distributions are in the works from the oVirt project proper, and should be available shortly for oVirt 3.2. I've run oVirt on CentOS in the past, but right now I'm using Fedora 18 for all of my oVirt machines, in order to get access to new features like the nested KVM I mentioned earlier.

NETWORK REQUIREMENTS: Your test machine must have a host name that resolves properly on your network, whether you set that up in a local DNS server or in the /etc/hosts file of whatever machine you expect to access your test machine from. If you take the hosts-file-editing route, the installer script will complain about the hostname–you can safely forge ahead.

CONFIGURE THE REPO: Somewhat confusingly, oVirt 3.1 is already in the Fedora 18 repositories, but due to some packaging problems I'm not fully up to speed on, that version of oVirt is missing its web admin console. In any case, we're installing the latest, 3.2 version of oVirt, and for that we must configure our Fedora 18 system to use the oVirt project's yum repository.

sudo yum localinstall http://ovirt.org/releases/ovirt-release-fedora.noarch.rpm

SILENCING SELINUX (OPTIONAL): I typically run my systems with SELinux in enforcing mode, but it's a common source of oVirt issues. Right now, there's definitely one (now fixed), and maybe two, SELinux-related bugs affecting oVirt 3.2. So…

sudo setenforce 0

To make this setting persist across reboots, edit the 'SELINUX=' line in /etc/selinux/config to equal 'permissive'.
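
If you'd rather not open an editor for that, a one-liner along these lines should do the trick (it assumes the stock /etc/selinux/config layout):

sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config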

INSTALL THE ALL-IN-ONE PLUGIN: The package below will pull in everything we need to run oVirt Engine (the management server) as well as turn this management server into a virtualization host.

sudo yum install ovirt-engine-setup-plugin-allinone

RUN THE SETUP SCRIPT: Run the script below and answer all the questions. In almost every case, you can stick to the default answers. Since we're doing an All-in-One install, I've tacked the relevant argument onto the command below. You can run "engine-setup -h" to check out all available arguments.

One of the questions the installer will ask deals with whether and which system firewall to configure. Fedora 18 now defaults to Firewalld rather than the more familiar iptables. In the handful of tests I've done with the 3.2 release code, I've had both success and failure configuring Firewalld through the installer. On one machine, throwing SELinux into permissive mode allowed the Firewalld config process to complete, and on another, that workaround didn't work.

If you choose the iptables route, make sure to disable Firewalld and enable iptables before you run the install script (sudo service firewalld stop && sudo chkconfig firewalld off && sudo service iptables start && sudo chkconfig iptables on).

sudo engine-setup --config-allinone=yes

TO THE ADMIN CONSOLE: When the engine-setup script completes, visit the web admin console at the URL for your engine machine. It will be running at port 80 (unless you've chosen a different setting in the setup script). Choose "Administrator Portal" and log in with the credentials you entered in the engine-setup script.

From the admin portal, click the "Storage" tab and highlight the iso domain you created during the setup script. In the pane that appears below, choose the "Data Center" tab, click "Attach," check the box next to your local data center, and hit "OK." Once the iso domain is finished attaching, click "Activate" to activate it.

Now you have an oVirt management server that's configured to double as a virtualization host. You have a local data domain (for storing your VMs' virtual disk images) and an NFS iso domain (for storing iso images from which to install OSes on your VMs).

To get iso images into your iso domain, you can copy an image onto your ovirt-engine machine, and from the command line, run "engine-iso-uploader upload -i iso NAME_OF_YOUR_ISO.iso" to load the image. Otherwise (and this is how I do it), you can mount the iso NFS share from wherever you like. Your images don't go in the root of the NFS share, but in a nested set of folders that oVirt creates automatically that looks like: "/nfsmountpoint/BIG_OLE_UUID/images/11111111-1111-1111-1111-111111111111/NAME_OF_YOUR_ISO.iso". You can just drop them in there, and after a few seconds, they should register in your iso domain.
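
If you go the mount-and-drop route, the sequence looks something like this. The export path, mount point, and UUID directory below are placeholders; substitute the iso export path you gave engine-setup and the folder oVirt actually created inside it:

sudo mount -t nfs your-engine.example.com:/var/lib/exports/iso /mnt/iso
sudo cp NAME_OF_YOUR_ISO.iso /mnt/iso/BIG_OLE_UUID/images/11111111-1111-1111-1111-111111111111/
sudo chown 36:36 /mnt/iso/BIG_OLE_UUID/images/11111111-1111-1111-1111-111111111111/NAME_OF_YOUR_ISO.iso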

Once you're up and running, you can begin installing VMs. I made the "creating VMs" screencast below for oVirt 3.1, but the process hasn't changed significantly for 3.2:

[youtube:http://www.youtube.com/watch?v=C4gayV6dYK4&HTML5=1%5D

I'm a big fan of virtualization — the ability to take a server and slice it up into a bunch of virtual machines makes trying out and writing about software much, much easier than it'd be in a one-instance-per-server world.

Things get tricky, however, when the software you want to try out is itself intended for hosting virtual machines. These days, all of the virtualization work I do centers around the KVM hypervisor, which relies on hardware extensions to do its thing.

Over the past year or so, I've dabbled in nested virtualization with KVM, in which the KVM hypervisor passes its hardware-assisted prowess on to guest instances to enable those guests to host VMs of their own. When I first dabbled in this, ten or so months ago, my nested virtualization only sort of worked — my VMs proved unstable, and I shelved further investigation for a while.

Recently, though, nested KVM has been working pretty well for me, both on my notebook and on some of the much larger machines in our lab. In fact, with the help of a new feature slated for oVirt 3.2, I've taken to testing whole oVirt installs, complete with live migration between hosts, all within a single-host oVirt machine. Pretty sweet, since oVirt forms both my primary testing platform and one of the primary projects I look to test.

All my tests with nested KVM have been with Intel hardware, because that's what I have in my labs, but it's my understanding that nested KVM works with AMD processors as well, and that the feature is actually more mature on that gear.

To join in on the nested fun, you must first check to see if nested KVM support is enabled on your machine by running:

cat /sys/module/kvm_intel/parameters/nested

If the answer is "N," you can enable it by running:

echo "options kvm-intel nested=1" > /etc/modprobe.d/kvm-intel.conf

After adding that kvm-intel.conf file, reboot your machine, after which "cat /sys/module/kvm_intel/parameters/nested" should return "Y."

I've used nested KVM with virt-manager, the libvirt front end that ships with most Linux distributions, including my own distro of choice, Fedora. With virt-manager, I configure the VM I want to use as a hypervisor-within-a-hypervisor by clicking on the "Processor" item in the VM details view, and clicking the "Copy host configuration" button to ensure that my guest instance boots with the same set of CPU features offered by my host processor. For good measure, I expand the "CPU Features" list and ensure that the feature "vmx" is set to "require."
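
If you want to double-check that the guest really got the flag, a couple of quick commands will confirm it (the VM name here is just an example):

virsh dumpxml my-nested-host | grep -A3 '<cpu'
# and from inside the guest itself, vmx should show up among the CPU flags:
grep -c vmx /proc/cpuinfo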

virt-manager-nested

Not too taxing, but it turns out that with oVirt, enabling nested virtualization in guests is even easier, thanks to VDSM hooks. VDSM hooks are scripts executed on the host when key events occur. The version of VDSM that will accompany oVirt 3.2 includes a nestedvt hook that does exactly what I described above — it runs a check for nested KVM support, and if that support is found, it adds the "require vmx" element to your VM's definition.

I've tested this both with the oVirt 3.2 alpha and with the current, oVirt 3.1 version. In the latter case, I simply installed the vdsm-hook-nestedvt package from oVirt's nightly repository, and it worked fine with the current stable version of vdsm.

ovirtonovirt

I mentioned above that I've been able to test oVirt on oVirt in this way, and performance hasn't been remarkably bad, but I wanted to get a better handle on the performance hit of nesting. I settled, unscientifically, on running mock builds of the ovirt-engine source package, a real-life job that involves both CPU and I/O work.

I ran the build operation four times on a VM running under oVirt, and four times on a VM running under an oVirt instance which was itself running under oVirt. I outfitted both the nested and the non-nested VM with 4GB of RAM and two virtual cores. I was using the same physical machine for both VMs, but I ran the tests one at a time, rather than in parallel.
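
For reference, the benchmark itself was nothing fancier than timing a mock rebuild along these lines (the chroot config and SRPM name here are illustrative, not necessarily the exact ones I used):

time mock -r fedora-17-x86_64 --rebuild ovirt-engine-3.1.0-1.fc17.src.rpm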

The four builds on the "real" VM averaged out to 14 minutes, 15 seconds, and the build quartet on the nested VM averaged 28 minutes, 18 seconds. So, I recorded a definite performance hit with nested virtualization, but not a large enough hit to dissuade me from further nested KVM exploration.

Speaking of further exploration, I'm looking very much forward to attending the next oVirt Workshop later this month, which will take place at NetApp's Sunnyvale campus from January 22-24.

If you're in the Bay Area and you'd like to learn more about oVirt, I'd love to see you there. The event is free of charge (much like oVirt itself) and all the agenda and registration details are available on the oVirt project site at http://www.ovirt.org/NetApp_Workshop_January_2013. Registration closes on January 15th, so get on it!

I've been installing oVirt 3.1 on some shiny new lab equipment, and I came across a pair of interesting snags with engine-iso-uploader, a tool you can use to upload iso images to your oVirt installation.

I installed the tool on an F17 client machine and festooned the command with the many arguments required to send an iso image off through the network to the iso domain of my oVirt rig. The command failed with the message, "ERROR:root:mount.nfs: Connection timed out."

I had an idea what might be wrong. The iso domain I set up is hosted by Gluster, and exposed via Gluster's built-in NFS server, which only supports NFSv3. Fedora 17 is set by default to require NFSv4, and when I changed /etc/nfsmount.conf to make Nfsvers=3, I got around that NFS error — only to hit another, weirder error: "ERROR: A user named vdsm with a UID and GID of 36 must be defined on the system to mount the ISO storage domain on iso1 as Read/Write."

Vdsm is the daemon that runs oVirt virtualization hosts, so vdsm needs to be able to read and write to the storage domains. I was surprised, though, that the client machine I was using to upload an iso had to have its own vdsm user to do the job. Anyhow, I created the vdsm user with the 36.36 IDs, and the command worked.
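
For anyone hitting the same error, creating that user on the client looks roughly like this; the group name "kvm" is my assumption, since the error message only insists on the numeric IDs:

sudo groupadd -g 36 kvm
sudo useradd -u 36 -g 36 -M -s /sbin/nologin vdsm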

Engine-iso-uploader does its business with NFS by default, but there's another option to upload via ssh, which, I imagine, would avoid the need for that vdsm user. I gave it a quick try, hit a new error, ERROR: Error message is "unable to test the available space on /iso1", and shelved further messing around w/ the tool, for now.

My favored method for getting iso images into my iso domain remains mounting the NFS share and dropping them in there. What I'd really like to see is a way to do this straight from the oVirt web admin console.

One of the cooler new features in oVirt 3.1 is the platform's support for creating and managing Gluster volumes. oVirt's web admin console now includes a graphical tool for configuring these volumes, and vdsm, the service responsible for controlling oVirt's virtualization nodes, has a new sibling, vdsm-gluster, for handling the back-end work.

Gluster and oVirt make a good team — the scale-out, open source storage project provides a nice way of weaving the local storage on individual compute nodes into shared storage resources.

To demonstrate the basics of using oVirt's new Gluster functionality, I'm going to take the all-in-one engine/node oVirt rig that I stepped through recently and convert it from an all-on-one node with local storage to a multi-node-ready configuration with shared storage provided by Gluster volumes that tap the local storage available on each of the nodes. (Thanks to Robert Middleswarth, whose blog posts on oVirt and Gluster I relied on while learning about the combo.)

The all-in-one installer leaves you with a single machine that hosts both the oVirt management server, aka ovirt-engine, and a virtualization node. For storage, the all-in-one setup uses a local directory for the data domain, and an NFS share on the single machine to host an iso domain, where OS install images are stored.

We'll start the all-in-one to multi-node conversion by putting our local virtualization host, local_host, into maintenance mode by clicking the Hosts tab in the web admin console, clicking the local_host entry, and choosing "Maintenance" from the Hosts navigation bar.

Once local_host is in maintenance mode, we click edit, change to the Default data center and host cluster from the drop-down menus in the dialog box, and then hit OK to save the change.

This assumes that you stuck with NFS as the default storage type while running through the engine-setup script. If not, head over to the Data Centers tab and edit the Default data center to set "NFS" as its type. Next, head to the Clusters tab, edit your Default cluster, fill in the check box next to "Enable Gluster Service," and hit OK to save your changes. Then, go back to the Hosts tab, highlight your host, and click Activate to bring it back from maintenance mode.

Now head to a terminal window on your engine machine. Fedora 17, the OS I'm using for this walkthrough, includes version 3.2 of Gluster. The oVirt/Gluster integration requires Gluster 3.3, so we need to configure a separate repository to get the newer packages:

# cd /etc/yum.repos.d/
# wget http://repos.fedorapeople.org/repos/kkeithle/glusterfs/fedora-glusterfs.repo

Next, install the vdsm-gluster package, restart the vdsm service, and start up the gluster service:

# yum install vdsm-gluster
# service vdsmd restart
# service glusterd start

The all-in-one installer configures an NFS share to host oVirt's iso domain. We're going to be exposing our Gluster volume via NFS, and since the kernel NFS server and Gluster's NFS server don't play nicely together, we have to disable the former.

# systemctl stop nfs-server.service && systemctl disable nfs-server.service

Through much trial and error, I found that it was also necessary to restart the wdmd service:

# systemctl restart wdmd.service

In the move from v3.0 to v3.1, oVirt dropped its NFSv3-only limitation, but that requirement remains for Gluster, so we have to edit /etc/nfsmount.conf and ensure that Defaultvers=3, Nfsvers=3, and Defaultproto=tcp.
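
One way to make those edits without opening an editor (this assumes the keys are already present, commented out or not, in the stock Fedora nfsmount.conf):

# sed -i -e 's/^[# ]*Defaultvers=.*/Defaultvers=3/' -e 's/^[# ]*Nfsvers=.*/Nfsvers=3/' -e 's/^[# ]*Defaultproto=.*/Defaultproto=tcp/' /etc/nfsmount.conf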

Next, edit /etc/sysconfig/iptables to add the firewall rules that Gluster requires. You can paste the rules in just before the reject lines in your config.

# glusterfs
-A INPUT -p tcp -m multiport --dport 24007:24047 -j ACCEPT
-A INPUT -p tcp --dport 111 -j ACCEPT
-A INPUT -p udp --dport 111 -j ACCEPT
-A INPUT -p tcp -m multiport --dport 38465:38467 -j ACCEPT

Then restart iptables:

# service iptables restart

Next, decide where you want to store your gluster volumes — I store mine under /data — and create this directory if need be:

# mkdir /data

Now, head back to the oVirt web admin console, visit the Volumes tab, and click Create Volume. Give your new volume a name, and choose a volume type from the drop-down menu. For our first volume, let's choose Distribute, and then click the Add Bricks button. Add a single brick to the new volume by typing the path you want into the Brick Directory field, clicking Add, and then OK to save the changes.

Make sure that the box next to NFS is checked under Access Protocols, and then click OK. You should see your new volume listed — highlight it and click Start to start it up. Follow the same steps to create a second volume, which we'll use for a new ISO domain.
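
Under the hood, the web console is driving the Gluster CLI through vdsm-gluster, so if you ever want to check or repeat this work from a terminal, the rough equivalent looks like this (the volume name and brick path are examples; adjust them for your own setup):

# gluster volume create data demo1.localdomain:/data/data-brick
# gluster volume set data nfs.disable off
# gluster volume start data
# gluster volume info data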

For now, the Gluster volume manager neglects to set brick directory permissions correctly, so after adding bricks on a machine, you have to return to the terminal and run chown -R 36.36 /data (assuming /data is where you are storing your volume bricks) to enable oVirt to write to the volumes.

Once you've set your permissions, return to the Storage tab of the web admin console to add data and iso domains backed by the volumes we've created. Click New Domain, choose Default data center from the data center drop-down, and Data / NFS from the storage type drop-down. Fill the export path field with your engine's host name and the volume name of the Gluster volume you created for the data domain. For instance: "demo1.localdomain:/data"

Wait for the data domain to become active, and repeat the above process for the iso domain. For more information on setting up storage domains in oVirt 3.1, see the quick start guide.

Once the iso domain comes up, BAM, you're Glusterized. Now, compared to the default all-in-one install, things aren't too different yet — you have one machine with everything packed into it. The difference is that your oVirt rig is ready to take on new nodes, which will be able to access the NFS-exposed data and iso domains, as well as contribute some of their own local storage into the pool.

To check this out, you'll need a second test machine with Fedora 17 installed (though you can recreate all of this on CentOS or another Enterprise Linux starting with the packages here). Take your F17 host (I start with a minimal install), install the oVirt release package, download the same fedora-glusterfs.repo we used above, and make sure your new host is accessible on the network from your engine machine, and vice versa. Also, the bug preventing F17 machines running a 3.5 or higher kernel from attaching to NFS domains isn't fixed yet, so make sure you're running a 3.3 or 3.4 version of the kernel.

Head over to the Hosts tab on your web admin console, click New, supply the requested information, and click OK. Your engine will reach out to your new F17 machine and whip it into a new virtualization host. (For more info on adding hosts, again, see the quick start guide.)

Your new host will require most of the same Glusterizing setup steps that you applied to your engine server: make sure that vdsm-gluster is installed, edit /etc/nfsmount.conf, add the gluster-specific iptables rules and restart iptables, and create and chown 36.36 your data directory.

The new host should see your Gluster-backed storage domains, and you should be able to run VMs on both hosts and migrate them back and forth. To take the next step and press the local storage on your new node into service, the steps are pretty similar to those we used to create our first Gluster volumes.

First, though, we have to run the command "gluster peer probe NEW_HOST_HOSTNAME" from the engine server to get the engine and its new buddy hooked up Glusterwise (this is another of the wrinkles I hope to see ironed out before long, taken care of automatically in the background).

We can create a new Gluster volume, data1, of the type Replicate. This volume type requires at least two bricks, and we'll create one in the /data directory of our engine, and one in the /data directory of our node. This works just the same as with the first Gluster volume we set up, but make sure that when adding bricks, you select the correct server in the drop-down menu:

Just as before, we have to return to the command line to chown -R 36.36 /data on both of our machines to set the permissions correctly, and start the volumes we've created.

On my test setup, I created a second data domain, named data1, stored on the replicated Gluster volume, with the storage path set to localhost:/data1, on the rationale that VM images stored on the data1 domain would stay in sync across the pair of hosts, enabling either of my hosts to tap local storage for running a particular VM image. But I'm a newcomer to Gluster, so consult the documentation for more clueful Gluster guidance.

Yesterday I removed Fedora 17 from the server I use for oVirt testing, mainly because I've been experiencing random reboots on the server, and I haven't been able to figure out why. I'm pretty sure I wasn't having these issues on Fedora 16, but I can't go back to that release because the official packages for oVirt are built only for F17. There are, however, oVirt packages built for Enterprise Linux (aka RHEL and its children), and I know that some in the oVirt community have been running these packages with success.

So, I figured I'd install CentOS 6 on my machine and either escape my random reboots or, if the reboots continued, learn that there's probably something wrong with my hardware. Plus, I'd escape a second problem I've been experiencing with Fedora 17, the one in which a recent rebase to the Linux 3.5 kernel (F17 shipped originally with a 3.3 kernel) seems to have broken oVirt's ability to access NFS shares, thereby breaking oVirt.

Installing oVirt 3.1 on CentOS 6 went very smoothly — the steps involved were pretty much the same as those for Fedora. Before I knew it, I was back up and running with a CentOS-based oVirt 3.1 rig just like my F17 one, complete with my F17 test server template and my F17 VMs for my in-progress gluster/ovirt integration writeup, all repatriated from my oVirt export domain.

Still… all is not well.

My Fedora 17 VMs aren't running normally on my new CentOS 6 host, and what I'm seeing reminds me of a problem I encountered several weeks ago when I first upgraded from oVirt 3.0 to the oVirt 3.1 beta. The solution came in the form of a bugfix from the qemu project upstream — that's a real benefit of running a leading-edge distro like Fedora — when bugs are fixed upstream, you don't have to wait forever for the fixes to float along to you.

Likewise, the closer you are to upstream, the faster you get access to new features. Not long after updating my qemu to address the F16/F17 VM booting issue, I took to running qemu packages even closer to upstream, from the Fedora Virtualization Preview repository. The oVirt 3.1 management engine supports live snapshots, but requires at least qemu 1.1, which is slated for Fedora 18.

Of course, the downside of tracking the leading edge is that with frequent changes come frequent opportunities for breakage. The changes that don't directly address your pain points are pure downside, like the NFS-disabling kernel rebase I mentioned earlier. Too fast versus too slow.

So what now?

I wasn't experiencing these random reboots on my other F17 system — my Thinkpad X220, which I've pressed into service as a second oVirt node. I have this F17-based node hooked up to my el6 oVirt engine, and if I set my Fedora VMs to launch only on this node, they run just fine. This machine has only 8GB of RAM, though, and that limits how many VMs I can run on it. Also, since my F17 and el6 nodes are running different versions of qemu, live migration between them doesn't work.

  • I could shift my in-progress ovirt/gluster testing to el6, VMs of which run just fine with the older qemu, but I'd prefer to keep testing with Fedora, and the newest code.
  • I could, instead of hitting the brakes and running el6 on my test server, hit the gas and throw F18 on there. Maybe that'd solve my random reboot issue, though I'm not certain whether my disabled-NFS travails would follow me forward.
  • I could figure out how to rebuild the new qemu packages on el6. I've started down this path already, but rpmbuild is voicing some complaints that seem related to systemd, which F17 uses and el6 does not.
  • I could find out that my random reboot problems weren't the fault of F17 after all, which would send me poring over my hardware and possibly returning to F17.

For now, I'm going to play some more with updating my qemu on el6, while squeezing my F17 VMs into my smaller F17-based node to get this ovirt/gluster howto finished.

Then perhaps I'll take a long walk on the beach and meditate on the merits of too slow versus too fast in Linux distros, and ponder whether the Giants will sweep the Astros tonight.

Update: I was able to rebuild the Fedora qemu 1.1 packages for CentOS. I commented out some systemd-dependent stuff from the spec file. I had to rebuild a couple of other packages as well, which I found in Fedora's build system. Now, my Fedora 17 VMs run well on my CentOS 6 oVirt host (which hasn't randomly rebooted yet), and I can migrate VMs between it and my F17-based node.
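
For the curious, the rebuild flow went roughly like this; the SRPM name is from memory, so treat it as a placeholder:

rpm -ivh qemu-1.1.0-9.fc17.src.rpm            # unpacks the sources and spec into ~/rpmbuild
vi ~/rpmbuild/SPECS/qemu.spec                 # comment out the systemd-dependent bits
rpmbuild -ba ~/rpmbuild/SPECS/qemu.spec       # build binary packages for el6
yum localinstall ~/rpmbuild/RPMS/x86_64/qemu-*.rpm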

And the Giants won, as well.

Update: I've written an updated version of this guide for oVirt 3.2.

Last February or so, I wrote a post about getting up and running with oVirt, the open source virtualization management project, on a single test machine. Various things have changed since then, such as a shiny new oVirt 3.1 release, so I'm going to update the process in this post.

What you need:

A test machine, ideally an x86_64 system with multiple cores, hardware virtualization extensions and plenty of RAM (like 4GB or more). The default OS for oVirt 3.1 is Fedora 17, and that's what I'll be writing about here. Your test machine must have a host name that resolves properly on your network, whether you set that up in a local DNS server or in the /etc/hosts file of whatever machine you expect to access your test machine from.

UPDATE: For my Fedora oVirt installs, I've been using a minimal install of Fedora, which is an option if you start from the DVD or network install images. I interact with my minimal installs via ssh. If you're using a minimal install with ssh, my instructions work just fine. However, if you start from the default Fedora LiveCD media, you'll need to take a couple of extra steps. You must disable NetworkManager (sudo systemctl stop NetworkManager.service && sudo systemctl disable NetworkManager.service), you must enable sshd (sudo systemctl start sshd && sudo systemctl enable sshd), and then reboot for good measure before proceeding with the rest of the steps.

(BUG NOTE: With the latest Fedora 17 kernel, I'm hitting https://bugzilla.redhat.com/show_bug.cgi?id=845660, preventing nfs domains from attaching, so for now, you'll want to run a previous Fedora kernel. (BUG NOTE NOTE: This bug, at long last, is just about squashed. Stay tuned.))

The package vdsm-4.10.0-10 squashed the above bug dead. Make sure you're up to date with it to avoid issues with post-3.5 Fedora kernels.

(A NEW BUG NOTE: There's a new, 3.2 version of ovirt-engine-sdk in the Fedora 17 update repo. The oVirt 3.1 packages that depend on the sdk don't call specifically for version 3.1, but they appear not to work with 3.2. For now, you must downgrade to the 3.1 version of the sdk in order for the all-in-one installer and other features to work properly: "yum downgrade ovirt-engine-sdk" I've filed a bug here: https://bugzilla.redhat.com/show_bug.cgi?id=869457 — you can cc yourself on the bug for progress updates.)

All-in-One Install:

oVirt 3.1 now includes an installer plugin for setting up the sort of single-machine installation I wrote about previously. It's good for testing out oVirt, and if you want to expand from your single-machine install to cover additional nodes and storage, you can do that. Read on for the steps involved, and/or watch this handy screencast I made of the process:

[youtube:http://www.youtube.com/watch?v=Aq3ctFhBIhk%5D

1. Install the ovirt-release package on your Fedora 17 machine: "yum install http://www.ovirt.org/releases/ovirt-release-fedora.noarch.rpm"

2. Install the ovirt-engine all-in-one package: "yum install ovirt-engine-setup-plugin-allinone"

2a. As pointed out by oVirt community member Adrián, in the comments below, you can ensure that the install script allows enough time for the host to add itself by editing "/usr/share/ovirt-engine/scripts/plugins/all_in_one_100.py" to make the waitForHostUp timeout larger, like so:

def waitForHostUp():
    utils.retry(isHostUp, tries=40, timeout=300, sleep=5)

3. Run engine-setup: "engine-setup" and answer all the questions.

I've found that the all-in-one installer sometimes times out during the install process. If the script times out during the final "AIO: Adding Local host (This may take several minutes)" step, you can proceed to the web admin console to complete the process. If it times out at an earlier point, like waiting for the jboss-as server to start, you should run "engine-cleanup" and then re-run "engine-setup".

4. When the engine-setup script completes, visit the web admin console at the URL for your engine machine. It will be running at port 80 (unless you've chosen a different setting in the setup script). Choose "Administrator Portal" and log in with the credentials you entered in the engine-setup script.

From the admin portal, take a look at the "Storage" and "Hosts" tabs. If the all-in-one process completed, you should see a host named "local_host" with a status of "Up" under Hosts, and you should see a storage domain named "local_host-Local" under "Storage."

If your local_host is still installing, you'll need to wait for it to finish before proceeding. You should be able to view its progress from the events console at the bottom of the console interface. Once the host is finished installing, click on your "local_host" and hit the "Maintenance" link to put it into maintenance mode. Once your host is in maintenance mode, you'll be able to click on the "Configure Local Storage" link, where you enter the same local storage path you entered into the engine-setup script, and then hit "OK."

5. Once the configure local storage process is complete (whether this was taken care of during engine-setup, or if you had to do it manually in step 4), click on the storage tab and highlight the iso domain you created during the setup script. In the pane that appears below, choose the "Data Center" tab, click "Attach," check the box next to your local data center, and hit "OK." Once the iso domain is finished attaching, click "Activate" to, uh, activate it.

6. Now you have an oVirt management server that's configured to double as a virtualization host. You have a local data domain (for storing your VMs' virtual disk images) and an NFS iso domain (for storing iso images from which to install OSes on your VMs).

To get iso images into your iso domain, you can copy an image onto your ovirt-engine machine, and from the command line, run "engine-iso-uploader upload -i iso NAME_OF_YOUR_ISO.iso" to load the image. Otherwise (and this is how I do it), you can mount the iso NFS share from wherever you like. Your images don't go in the root of the NFS share, but in a nested set of folders that oVirt creates automatically that looks like: "/nfsmountpoint/BIG_OLE_UUID/images/11111111-1111-1111-1111-111111111111/NAME_OF_YOUR_ISO.iso". You can just drop them in there, and after a few seconds, they should register in your iso domain.

Once you're up and running, you can begin installing VMs. For your viewing pleasure, here's another screencast, about creating VMs on oVirt:

[youtube:http://www.youtube.com/watch?v=C4gayV6dYK4%5D

Beyond All-in-One (or skipping it altogether):

Installing: A "regular" multi-machine install of oVirt works in pretty much the same way, except that in step 2, you just install "yum install ovirt-engine" and during the "engine-setup" process, you won't be asked about installing VDSM or a local data domain on your engine. I typically skip creating an iso domain on my engine, as I use a separate NAS device for my iso domain needs.

The local data center, cluster and storage domain created as part of the all-in-one installation option are only accessible to the virtualization host installed locally on the engine. Shifting to a multi-machine setup involves moving that local host to the Default data center and cluster, which starts with putting the host into maintenance mode, clicking edit, and switching the Data Center and Cluster values to "Default" (or to another, non-local set of data center and cluster values).

Hosts: Once the setup script is finished, you can head over to the web admin console to add hosts and storage domains. oVirt hosts can be either regular Fedora 17 boxes or machines installed with oVirt Node. In either case, you add one of these machines as an oVirt host by clicking "New" under the "Hosts" tab in the web admin console, providing a name, IP address (or host name) and root password for your host, and clicking OK. A dialog will complain about configuring power management, but it's not strictly required.

When adding an oVirt Node-based system as a host, you can also provide the ovirt-engine address and admin password in the admin interface of the node, which will add the node to your ovirt-engine server, pending approval through the web admin console.

Storage: A multi-machine setup requires a shared storage domain, such as one backed by NFS or iSCSI. Setting up an NFS storage domain involves clicking "New Domain" on the "Storage" tab, giving the new data domain a name and configuring its export path. Setting up an iSCSI domain is similar, but involves entering the IP address of your iSCSI target, discovering available LUNs, and selecting one to use.

When Things Go Wrong:

A few things to do/check when things go wrong.

1. Put selinux into permissive mode: "setenforce 0" I run my systems with selinux enabled, but there are sometimes selinux-related bugs. Putting your test system into permissive mode will get you past the errors.

2. Check the logs:

  • ovirt-engine install log lives at /var/log/ovirt-engine/engine-setup*.log
  • jboss app server logs live at /var/log/ovirt-engine/boot.log and /var/log/ovirt-engine/server.log
  • ovirt-engine logs live at /var/log/ovirt-engine/engine.log — you can tail -f /var/log/ovirt-engine/engine.log to watch what the engine is doing
  • vdsm logs live (on each virt host) at /var/log/vdsm/vdsm.log — you can watch these to see what's going on with individual virt hosts

3. Visit us at #ovirt on OFTC. My handle there is jbrooks. If you don't get an answer there, send a message to users@ovirt.org.

Faking It:

I mentioned right at the top that if you want to test oVirt virtualization, you need a machine with hardware virtualization extensions. The oVirt management engine can live happily within a VM, but for hosting VMs, you need those extensions.

While most physical machines these days come with those extensions, virtual machines don't have them. There's such a thing as nested KVM virtualization, but it's tricky to set up and pretty unstable when you can set it up.

There is a way to test out oVirt without hardware virtualization extensions, but the catch is that you can't actually run any VMs on one of these "fake" installs. Why bother? Well, there's a lot to test and see in oVirt that falls short of running VMs–I made my whole installing-oVirt howto video on a VM running inside of my real oVirt rig, for instance. You can get a feel for installing hosts, configuring storage, and managing Gluster volumes (a topic I haven't covered here, but will, soon, in another post; till then see here for more info on oVirt/Gluster). For more on oVirt w/ Gluster, see here.

For the all-in-one setup instructions above, right after step 2:

  • install the "fake qemu" package (yum install vdsm-hook-faqemu)
  • edit /etc/vdsm/vdsm.conf, changing the line # fake_kvm_support = false to fake_kvm_support = true (see the one-liner after this list for a quick way to make that edit)
  • replace the contents of /usr/share/vdsm-bootstrap/vds_bootstrap.py (it'll be there post step 2) with the file at http://gerrit.ovirt.org/cat/5611%2C3%2Cvds_bootstrap/vds_bootstrap.py%5E0
  • continue to step 3
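
Here's one way to flip that vdsm.conf setting from the command line, assuming the commented-out line is present in the stock file:

sed -i 's/^# *fake_kvm_support = false/fake_kvm_support = true/' /etc/vdsm/vdsm.conf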

That vds_bootstrap.py step shouldn't be required, and I'm going to file a bug about it as soon as I finish this post. For more information on this topic, see: http://wiki.ovirt.org/wiki/Vdsm_Developers#Fake_KVM_Support.

If you're trying to configure a separate fake host, for now, you'll need to do it on a regular vdsm (not oVirt Node) host, though this should soon change. So, for your regular host, before trying to add the host through the oVirt web admin console:

  • run "yum install vdsm-hook-faqemu vdsm"
  • edit /etc/vdsm/vdsm.conf, changing the line # fake_kvm_support = false to fake_kvm_support = true

Either way, you'll need that modded /usr/share/vdsm-bootstrap/vds_bootstrap.py file on your engine, and you only have to modify this file once, until/unless a future package update restores the faqemu-ignorant file.

There's work underway over at the oVirt Project to produce some screencasts of the open source virtualization management platform in action. Since you can find oVirt in action each day in my home office, I set out to chip in and create an oVirt screencast, using tools available on my Fedora 17 desktop.

Here's the five-minute screencast, which focuses on creating VMs on oVirt, with a bit of live migration thrown in:

The first step was getting my oVirt test rig into shape. I'm running oVirt 3.1 on a pair of machines: a quad-core Xeon with 16GB of RAM and a couple of SATA disks, and my Thinkpad X220, with its dual-core processor and 8GB of RAM. I've taken to running much of my desktop-type tasks on a virtual machine running under oVirt, thereby liberating my Thinkpad to serve as a second node, for live migration and other multi-node-needin' tests. Both machines run the 64-bit flavor of Fedora 17.

For storage, I've taken to using a pair of Gluster volumes, with bricks that reside on both of my oVirt nodes, which consume the storage via NFS. I also use a little desktop NAS device, an Iomega StorCenter ix2-200, for hosting install images and iSCSI disks.

For the screencasting, I started out with the desktop recording feature that's built into GNOME Shell. It's really easy to use: hit Ctrl-Shift-Alt-R to start recording, and the same combo to stop. After a couple of test recordings, however, I found that when I loaded the WebM-formatted video files that the GNOME feature produces into a video editor (I tried with PiTiVi and with OpenShot) only the first second of the video would load.

Rather than delve any deeper into that mystery, I swapped screencasting tools, opting for gtk-recordMyDesktop (yum install gtk-recordmydesktop), which produces screencasts, in OGV format, that my editing tools were happy to import properly.

I started out editing with PiTiVi–I didn't intend to do too much editing, but I did want to speed up the video during parts of the recording that didn't directly involve oVirt, such as the installation process for the TurnKeyLinux WordPress appliance I used in the video. I was aiming for no more than five minutes with this, and I hate it when screencasts include a bunch of semi-dead space. I found, however, that PiTiVi doesn't offer this feature, so I switched over to OpenShot, which is available for Fedora in the RPM Fusion repositories.

I played back my recording in the OpenShot preview window, and when I came to a spot where I wanted to speed things up, I made a cut, played on to the end of the to-be-sped section, and made a second cut, before right-clicking on the clip, choosing how much to accelerate it, and then dragging the following bit of video back to fill the gap.

However, I found that my cuts were getting out of sync–I'd zoom in to frame-by-frame resolution, make my cut exactly where I wanted it, and then when I watched it back, the cut wasn't where I'd made it. I don't know if it was an issue with the cut, or a problem with the preview function, but again, I didn't want to delve too deeply here, so I asked the Great Oracle of Google what the best video format was for use with OpenShot. MPEG4, it answered, in the ragged voice of some forum post or something.

Fine. Back to the command line to install another tool: Transmageddon Video Converter. I know that you can do anything with ffmpeg on the command line, but I find the GUI-osity of Transmageddon, which I've used at some point in the past, easier than searching around for the right ffmpeg arguments. So, bam, from OGV to MP4, and, indeed, OpenShot appeared to approve of the format swap. My cuts worked as expected.
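
If you'd rather stick with ffmpeg, a conversion along these lines should produce an MP4 that OpenShot is happy with (the file names are placeholders, and I drop the audio track here since the narration gets recorded separately later anyway):

ffmpeg -i screencast.ogv -c:v libx264 -crf 18 -an screencast.mp4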

I ended the video with a screen shot from the oVirt web site, stretched over a handful of seconds, and I exported the video, sans audio, for the narration step in the process. I played the video back in GNOME MPlayer (for some reason, my usual video player, Totem, kept crashing on me) and used Audacity (an absolutely killer piece of open source software, with support for Linux, Windows and OS X) to record my audio. I used the microphone on my webcam–not exactly high-end stuff–which picked up some abrasive background noise.

Fortunately, Audacity comes with a pretty sweet noise removal feature–you highlight a chunk of audio with no other sound but the background noise, and tell Audacity to excise that noise from the whole recording. I thought it worked pretty well, considering.

With my audio exported (I chose FLAC), I brought it into OpenShot, did a bit of dragging around to sync things right, extended the video chunks at the beginning and the end of the piece to make way for my opening and closing remarks, and exported the thing, opting for what OpenShot identified as a "Web" profile. I uploaded the finished screencast to YouTube, and there it is.

We're about one week away from the release of oVirt 3.1, and I'm getting geared up by sifting through the current Release Notes Draft, in search of what's working, what still needs work, and why one might get excited about installing or updating to the new version.

Web Admin

In version 3.1, oVirt's web admin console has picked up a few handy refinements, starting with new "guide me" buttons and dialogs sprinkled through the interface. For example, when you create a new VM through the web console, oVirt doesn't automatically add a virtual disk or network adapter to your VM. You add these elements through a secondary settings pane, which can be easy to overlook, particularly when you're getting started with oVirt. In 3.1, there's now a "guide me" window that suggests adding the nic and disk, with buttons to press to direct you to the right places. These "guide me" elements work similarly elsewhere in the web admin console, for example, directing users to the next right actions after creating a new cluster or adding a new host.

Storage

Several of the enhancements in oVirt 3.1 involve the project's handling of storage. This version adds support for NFSv4 (oVirt 3.0 only supported NFSv3), and the option of connecting external iSCSI or FibreChannel LUNs directly to your VMs (as opposed to connecting only to disks in your data or iso domains).

oVirt 3.1 also introduces a slick new admin console for creating and managing Gluster volumes, and support for hot-pluggable disks (as well as hot-pluggable nics). With the Gluster and hotplug features, I've had mixed success during my tests so far–there appear to be wrinkles left to iron out among the component stacks that power these features.

Installer

One of the 3.1 features that most caught my eye is proof-of-concept support for setting up a whole oVirt 3.1 install on a single server. The feature, which is packaged up as "ovirt-engine-setup-plugin-allinone", adds the option to configure your oVirt engine machine as a virtualization host during the engine-setup process. In my tests, I've had mixed success with this option during the engine-setup process–sometimes, the local host configuration part of the setup fails out on me.

Even when the engine-setup step hasn't worked for me, I've had no trouble adding my ovirt-engine machine as a host by clicking the "Hosts" tab in the web admin console, choosing the menu option "New," and filling out information in the dialog box that appears. All the Ethernet bridge fiddling required in 3.0 (see my previous howto) is now handled automatically, and it's easy to tap the local storage on your engine/host machine through the "Configure Local Storage" menu item under "Hosts."

Another new installer enhancement offers users the choice of tapping a remote postgres database server for storing oVirt configuration data, in addition to the locally hosted postgres default.

oVirt 3.1 now installs with an HTTP/HTTPS proxy that makes oVirt engine (the project's management server) accessible on ports 80/443, versus the 8080/8443 arrangement that was the default in 3.0. This indeed works, though I found that oVirt's proxy prevented me from running FreeIPA on the same server that hosts the engine. Not the end of the world, but engine+identity provider on the same machine seemed like a good combo to me.

Along similar lines, oVirt 3.1 adds support for Red Hat Directory Server and IBM Tivoli Directory Server as identity providers, neither of which I've tested so far. I'm interested to see if the 389 directory server (the upstream for RHDS) will be supported as well.

Having reached a good break point in my Gluster/OpenStack/Fedora tests, I thought I'd preupgrade the F16 VM I've been using for ovirt-engine to F17, en route to the oVirt 3.1 beta.

That didn't go so well. During the post-preupgrade part (uh, the upgrade), the installer balked at upgrading the jboss-as package that shipped with oVirt 3.0. Afterward, the VM wouldn't boot correctly.

Fortunately, I was prepared for failure, detaching my iso domain in advance, and shuttling the templates and VMs I wanted to keep to the export domain, which I also detached.


Source: https://jebpages.com/tag/ovirt/
