

Running out of disk space seems to be an all too common problem lately, especially when dealing with large databases. One situation that came up recently was a client who needed to import a large Postgres dump file into a new database. Unfortunately, they were very low on disk space and the file needed to be modified. Without going into all the reasons, we needed the databases to use template1 as the template database, and not template0. This was a very large, multi-gigabyte file, and the amount of space left on the disk was measured in megabytes. It would have taken too long to copy the file somewhere else to edit it, so I did a low-level edit using the Unix utility dd.

To demonstrate the problem and the solution, we'll need a disk partition that has little-to-no free space available. In Linux, it's easy enough to create such a thing by using a RAM disk. Most Linux distributions already have these ready to go. We'll check it out with:
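A quick listing of the device nodes is enough (the names and how many of them exist can vary by distribution):

$ ls -l /dev/ram*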

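Something along these lines creates the mount point and puts an ext2 filesystem on the first RAM disk (the exact mke2fs invocation may differ, but the defaults are fine for this):

$ mkdir /home/greg/ramtest
$ sudo mke2fs /dev/ram1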
...
819 blocks (5.00%) reserved for the super user
Maximum filesystem blocks=16777216
2 block groups
8192 blocks per group, 8192 fragments per group
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 29 mounts or
180 days, whichever comes first.

$ sudo mount /dev/ram1 /home/greg/ramtest
$ sudo chown greg:greg /home/greg/ramtest

Filesystem            Size  Used Avail Use% Mounted on
/dev/ram1              16M  140K   15M   1% /home/greg/ramtest

First we created a new directory to serve as the mount point, then we used the mke2fs utility to create a new file system (ext2) on the RAM disk at /dev/ram1. It's a fairly verbose program by default, but there is nothing in the output that's really important for this example. Then we mounted our new filesystem to the directory we just created. Finally, we reset the permissions on the directory such that an ordinary user (e.g. greg) could write to it.

At this point, we've got a directory/filesystem that is just under 16 MB large (we could have made it much closer to 16 MB by specifying -m 0 to mke2fs, but the actual size doesn't matter). To simulate what happened, let's create a database dump and then bloat it until it takes up all available space:
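Something like the following does it; the database and dump file names are placeholders, and oflag=seek_bytes is a GNU dd extension used here so that seek= is counted in bytes rather than in 1024-byte blocks:

$ pg_dump mydb > /home/greg/ramtest/dump.sql
$ ls -l /home/greg/ramtest/dump.sql    # the dump in this example was 3685 bytes
$ dd if=/dev/zero of=/home/greg/ramtest/dump.sql \
    bs=1024 count=999999 seek=3685 oflag=seek_bytes conv=notrunc
$ df -h /home/greg/ramtest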
Filesystem            Size  Used Avail Use% Mounted on
/dev/ram1              16M   15M     0 100% /home/greg/ramtest

First we created the dump, then we found the size of it, and told dd via the 'seek' argument to start adding data to it at the 3685 byte mark (in other words, we appended to the file). We used the special file /dev/zero as the 'if' (input file), and our existing dump as the 'of' (output file). Finally, we told it to chunk the inserts into 1024 bytes at a time, and to attempt to add 999,999 of those chunks. Since this comes to roughly a gigabyte, far more than the space available, we ran out of disk space quickly, as we intended.
