Dr. Mark Humphrys

School of Computing. Dublin City University.



Files




Contiguous file allocation

Files all in one unbroken sequence on the physical disk.

Problems similar to those of contiguous memory allocation.
What happens if the file grows? It has to be rewritten into a larger slot, which takes ages for a large file.
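A toy sketch of that growth problem (a simplified model, not any real file system): a file stored as one run of blocks cannot grow past a neighbour, so every one of its blocks must be copied to a new, larger free slot.

```python
# Toy model (not a real file system): contiguous allocation on a disk
# of numbered blocks. Growing a file may force a full copy to a new slot.

def find_free_run(disk, length):
    """Return the start of the first run of `length` free (None) blocks, or -1."""
    run = 0
    for i, b in enumerate(disk):
        run = run + 1 if b is None else 0
        if run == length:
            return i - length + 1
    return -1

def grow_file(disk, name, old_start, old_len, new_len):
    """Grow a contiguous file; if it cannot extend in place, copy it elsewhere."""
    # Try to extend in place first.
    tail = disk[old_start + old_len : old_start + new_len]
    if len(tail) == new_len - old_len and all(b is None for b in tail):
        for i in range(old_start, old_start + new_len):
            disk[i] = name
        return old_start
    # Otherwise free the old slot and rewrite the whole file in a larger slot.
    for i in range(old_start, old_start + old_len):
        disk[i] = None
    start = find_free_run(disk, new_len)
    for i in range(start, start + new_len):
        disk[i] = name        # every block is re-written: slow for big files
    return start

disk = ["A", "A", "B", None, None, None, None, None]
# "A" cannot grow in place (block 2 belongs to "B"), so it is copied after "B".
new_start = grow_file(disk, "A", 0, 2, 4)
print(new_start, disk)
```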




Non-contiguous allocation

As with paging in memory, the disk is divided into blocks.

The file is held in a collection of blocks scattered over the disk.
If the file needs more blocks, it can take them from anywhere on the disk that blocks are free.




Index of where the blocks of the file are

Like pages in memory, blocks can "flow" like liquid into slots around the disk. They don't all need to be in the same place.
We need some index of where the pieces of the file are.
Various indexing systems exist, using linked lists or tables:
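As one illustration, here is a sketch of a FAT-style table (a simplified model, not any particular OS's on-disk format): the table has one entry per disk block, and each entry names the next block of the same file, with a sentinel marking end-of-file. The block numbers below are made up.

```python
# Sketch of a FAT-style index: a table with one entry per disk block.
# table[b] gives the next block of the same file, or EOF at the last block.
EOF = -1

# Hypothetical layout: the file starts at block 4, continues at 7, ends at 2.
table = {4: 7, 7: 2, 2: EOF}

def file_blocks(table, start):
    """Follow the chain from the file's first block to end-of-file."""
    blocks = []
    b = start
    while b != EOF:
        blocks.append(b)
        b = table[b]
    return blocks

print(file_blocks(table, 4))   # the file's blocks, in order: [4, 7, 2]
```

Note that reading block n of the file means following the chain n steps, which is why real systems also keep table-of-blocks indexes (as in Unix inodes) for direct lookup.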

  


Demonstration of fragmentation (files split into multiple parts).




Shell script to see blocks allocated to files


  
#!/bin/sh
# compare actual file size with blocks used
for i in *
do
  ls -ld "$i"
  du -h "$i"
done


Results look like this (on a system with 1 K blocks):


   
-rwxr-xr-x 1 me mygroup 857 Jun 27  2000 synch
1.0K    synch

-rwxr--r-- 1 me mygroup 1202 Oct 25  2013 profile
2.0K    profile

-rwxr-xr-x 1 me mygroup 1636 Oct 28  2009 yo.java
2.0K    yo.java

-rwxr--r-- 1 me mygroup 2089 Oct  8 00:03 flashsince
3.0K    flashsince

-rwxr-xr-x 1 me mygroup 9308 Oct 19  2010 yo.safe
10K     yo.safe


  

An extreme experiment to demonstrate wasted space ("slack space") in file systems.
This person makes 100,000 files of 5 bytes each.
This is only 500 K of actual data.
But it needs about 400 M of disk space to store.
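The arithmetic behind that result (assuming each file occupies at least one whole 4 K block, which is what the 400 M figure implies):

```python
# Slack-space arithmetic: each file occupies a whole number of blocks,
# so a 5-byte file still consumes one full block on disk.
block_size = 4096          # assume 4 K blocks
n_files = 100_000
file_size = 5              # bytes of real data per file

data = n_files * file_size      # actual data: 500,000 bytes (~500 K)
used = n_files * block_size     # disk consumed: ~400 M
print(data, used)
```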





Contiguous file allocation is good where possible

Unlike in memory, where contiguous allocation is dead, in files it has made a comeback.

The reason is that disk space is plentiful and cheap, but the speed of reading/writing the disk has not increased as much - indeed it has got worse relative to CPU speed.

To write a contiguous file to disk you don't have to jump the disk head around the disk. You just keep writing after the last location you wrote - the head is already in the correct position.

So modern systems try to use contiguous allocation for small files, and only use non-contiguous allocation for large files. To achieve this, they may allocate more space than is actually needed. Some wasted disk space is worth it if it makes disk I/O faster.
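A toy cost model of why contiguous layout is faster (an illustration with made-up block numbers, not real disk timings): reading a block that is not adjacent to the previous one costs a head seek, and a contiguous file needs none.

```python
# Toy cost model: reading a block that is not adjacent to the previous
# one requires moving the disk head (a seek).
def seeks_needed(blocks):
    """Count head movements when reading the blocks in file order."""
    return sum(1 for a, b in zip(blocks, blocks[1:]) if b != a + 1)

contiguous = [10, 11, 12, 13, 14]   # one unbroken run: no seeks
scattered  = [10, 3, 57, 4, 90]     # fragmented file: a seek per block
print(seeks_needed(contiguous), seeks_needed(scattered))   # 0 vs 4
```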

  


Demonstration of fragmentation (files split into multiple parts) and defragmentation (reducing such splitting and making files contiguous).




Cache blocks in RAM for speed

Also to speed things up: the OS caches recently-accessed blocks in memory in case they are needed again, avoiding another disk I/O.
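A minimal sketch of such a block cache, using a least-recently-used (LRU) eviction policy for illustration (real OS caches are far more elaborate, and the policy varies by system):

```python
from collections import OrderedDict

# Minimal LRU block cache sketch: recently-read blocks stay in RAM,
# so a repeat read avoids disk I/O. (Real OS caches are more elaborate.)
class BlockCache:
    def __init__(self, capacity, read_from_disk):
        self.capacity = capacity
        self.read_from_disk = read_from_disk   # fallback for cache misses
        self.cache = OrderedDict()             # block number -> data
        self.disk_reads = 0

    def read(self, block_no):
        if block_no in self.cache:
            self.cache.move_to_end(block_no)   # hit: mark most recently used
            return self.cache[block_no]
        data = self.read_from_disk(block_no)   # miss: go to disk
        self.disk_reads += 1
        self.cache[block_no] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict least recently used
        return data

cache = BlockCache(2, read_from_disk=lambda b: f"block {b}")
for b in [1, 2, 1, 1, 3, 2]:
    cache.read(b)
print(cache.disk_reads)   # 4 disk reads: two of the six reads were cache hits
```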




RAM drive

A RAM drive takes this idea to the extreme: an entire file system held in RAM. I/O is extremely fast, but the contents are lost when the power goes off.



