Loop Block experts on List?
gwirth79 at gmail.com
Sun Dec 19 12:49:33 PST 2010
On 12/18/2010 04:19 PM, Tony Su wrote:
> Yes, cloop devices are typically used to mount optical devices and from what
> I understand many USB devices (the alternative is udev today).
cloop stands for compressed loopback. It refers to a file; it does NOT
mount an optical device. Although it is typically associated with
optical media such as CD-ROMs and DVDs, it is not limited to those
cases and can also be used on hard drives, USB sticks, embedded flash,
or any medium that can hold a file.
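To illustrate, a loopback backing store is just an ordinary file, so it can
sit on any medium. A minimal sketch (the file, device, and mount-point names
are hypothetical; attaching and mounting require root, so those steps are
shown as comments):

```shell
# Create a sparse 16 MiB image file to serve as the backing store
# (no root needed for this part):
truncate -s 16M backing.img

# Binding it to a loop device and mounting it require root:
#   losetup /dev/loop0 backing.img   # the file now appears as a block device
#   mount /dev/loop0 /mnt
#   umount /mnt && losetup -d /dev/loop0
```

The medium underneath is irrelevant: the loop driver only ever sees the file.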
udev has nothing to do with block devices, other than that it may create
the device file (something like /dev/sda), depending on the rules it
implements.
> Loop devices
> are pretty interesting, I also understand it can be a major component for
> network deployment, eg part of some PXE and possibly used for network
> diskless terminals.
PXE has no concept of block devices. Actually, it has no concept of
devices at all. Its purpose is to (eventually) put a kernel, and perhaps
an initrd, into RAM and then execute the kernel. What happens after that
is up to the kernel.
Are you perchance referring to nbd (network block device)?
> In general, my questions all apply to "block device" loop devices, not "file
> system" loop devices. "block devices" is the more common deployment which
> maps at the block level as opposed to the "file system" type which accesses
> the loop storage through the OS virtual file system.
I don't understand you here. How can there be a loopback device without
a file system? If you are accessing a raw device, then there is no
loopback, only the device driver.
Or are you referring to the contents of the loopback device (a file)?
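To make the block-level point concrete: the loop driver maps sector N of the
device straight to byte offset N x 512 in the backing file, with no
filesystem involved. You can see the same mapping with dd on a plain file
(file name and offsets here are just examples):

```shell
# Create a 1 MiB file and treat it as a raw 512-byte-sector device:
truncate -s 1M raw.img

# Write the string "hello" at sector 4, i.e. byte offset 4 * 512 = 2048:
printf 'hello' | dd of=raw.img bs=512 seek=4 conv=notrunc status=none

# Read sector 4 back and show its first five bytes:
dd if=raw.img bs=512 skip=4 count=1 status=none | head -c 5   # prints "hello"
```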
> 1. Deploying a file system on top of another file system often causes
> partition alignment issues, VMware in particular extensively writes about
> this issue when transporting vmware disks from one system to another. Vmware
> disks which are run on the system they were originally created are
> automatically aligned when created. I can't find anything written anywhere
> whether loop devices are subject to this same issue although to me it's
If the underlying hard drive handles things in 512-byte sectors, then
you don't have to worry about it. If you have one of the new huge disks
with Advanced Format (4 KB physical sectors), then you need to make sure
the real partitions are properly aligned. You should also align the
partitions inside the virtual drive (which is actually a file).
> 2. Are RDM (Raw Device Mapping) loop files interoperable cross platform? So,
> as an example can an unformatted RDM file which simply contains blocks be
> created on one OS, then be deployed as a loop device on another device? Is
> there any difference if the file is simply disk blocks or if it has been
> partitioned and formatted?
It doesn't matter. A file is just a string of bytes; it's up to the OS
or application how it wants to map them. Ordering is imposed at a higher
level.
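As a sketch of that point: the same bytes mean the same thing on any OS, and
any structure inside them can be identified purely from the contents (file
name below is hypothetical):

```shell
# An empty image is "just data" -- no structure at all:
truncate -s 4M blank.img
file blank.img

# After formatting, the structure is in the bytes themselves, readable
# on any platform that understands ext4:
mkfs.ext4 -q -F blank.img
file blank.img    # now reports an ext4 filesystem image
```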
> 3. After long searching, I think that RDM typically maps sector to sector,
> and this is because "since forever" the basic OS I/O disk operation common
> to practically every PC OS is the 512 byte sector. For that reason,
> typically all disk formatting larger than something like 16mb is either
> based on 512 byte sectors or some multiple thereof. The common sense result
> obviously is that loop device geometry should likely be based on 512 byte
> sectors to maximize the likelihood of RDM mapping consistency, but today
> disk capacities are getting so large that we will be seeing 4kbyte (4x 512
> byte) sectors (Western Digital's Advanced Format drives are already
> formatted this way) to improve large disk performance. I can see this could
> have a really horrendous impact on something like a Loop device,
> particularly if files are defragmented in a way that decreases the
> occurrence of 4 sequential sectors. Is there a known way to approach this
> aside from regular special defragmentation(eg must re-sequence sectors, not
> just compact)? I suppose the alternative to a defragmentation utility is the
> tried and true "move the file to another partition and back again."
That's a matter for the file system. Use a file system that has native
4KB (or multiple thereof) block sizes.
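For example, with ext4 you can pick the block size at mkfs time and verify it
without mounting anything; mke2fs happily operates on a regular file (the
file name here is hypothetical):

```shell
# Create the backing file and format it with native 4 KiB blocks:
truncate -s 64M fs.img
mkfs.ext4 -q -F -b 4096 fs.img

# tune2fs reads the superblock straight from the file, no root needed:
tune2fs -l fs.img | grep 'Block size'
```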
> 4.Is loop device disk geometry important? A loop device fs is a
> quasi-virtual device where the disk geometry obviously isn't constrained by
> the physical disk geometry, but is there a "best practice" for loop device
> disk geometry? I see a multitude of examples of "single cylinder"
> configurations for devices up to about 80mb, but no documentation or
> guidance I can find for anything larger. Unfortunately, I don't think I can
> rely on VMware VMFS as guidance because VMFS is a kind of intermediate
> virtualization layer between the loop disk fs and the physical fs which
> could change relevant parameters. Am not as familiar with other
> virtualization disk technologies (like VirtualBox, Xen, etc) which tend to
> sit directly on the physical fs, maybe someone who is expert on this can
> offer some expert opinion?
There's no such thing as loop device disk geometry. You are confusing
the loop device with virtual disks, which are specially formatted files
designed to work with a virtual machine. The beginning portion of the
file has data that represents the equivalent of a partition table on a
regular disk. Regular disks no longer use CHS (Cylinder Head Sector)
geometry but LBA (Logical Block Addressing) instead.
> 5. Is anyone deploying Loop Root Device files, and if so, did you have to
> compile loop device support into the initrd or was it there by default?
I've done extensive work with loopback root devices over the last year
in embedded systems, where the root file system is some compressed image
like squashfs and an overlay. You do not need an initrd if you compile
the capability into the kernel.
If you do want to use a loopback device for the root file system, you
need to take some extra steps in init because of the sequence of mounting
and then pivoting root. Basically, assuming an initrd, you will need to:
Load kernel into RAM
Load initrd into RAM
Run init in initrd
load modules for device holding filesystem
create device files for the device holding the filesystem (abstract; might be NFS, for example)
load modules for filesystems access (if not in kernel)
mount filesystem holding the loopback file
load loopback module (if not in kernel)
create loopback device files
mount loopback filesystem
mount other stuff as needed
pivot root to loopback filesystem
finish init process
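The steps above can be sketched as a skeleton /init script. Every device,
module, and path name below is a hypothetical placeholder; the real names
depend on your hardware and image. Since actually running it requires root
inside an initrd, the sketch is written to a file and syntax-checked instead:

```shell
# Write the skeleton out so it can at least be syntax-checked here; in
# real use this would be /init (or /sbin/init) inside the initrd.
cat > init-sketch.sh <<'EOF'
#!/bin/sh
mount -t proc proc /proc
mount -t sysfs sysfs /sys
modprobe sd_mod                      # driver for the device holding the file
mount /dev/sda1 /media               # filesystem that holds the loopback file
modprobe loop                        # loop driver, if not built into the kernel
losetup /dev/loop0 /media/root.img   # bind the image file to a loop device
mkdir -p /newroot
mount -o ro /dev/loop0 /newroot      # mount the loopback filesystem
cd /newroot
pivot_root . mnt                     # 'mnt' must exist under the new root
exec chroot . /sbin/init             # hand off to the real init
EOF
sh -n init-sketch.sh && echo 'syntax OK'
```

With an initramfs rather than a classic initrd, switch_root would typically
replace the pivot_root/chroot pair, but the mount ordering is the same.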
More information about the KPLUG-List