[ Ze-Bo @ 16.06.2005. 10:08 ] @
Is there a logical explanation for why FBSD 5.4's disk performance is significantly worse than CentOS's?

Testing with the postmark benchmark, I got drastic differences in performance on identical hardware.

For Linux:
read (24.27 megabytes per second)
written (24.29 megabytes per second)

while for FBSD:
read (10.41 megabytes per second)
written (10.42 megabytes per second)


I also see on the mailing lists that quite a few people are having performance problems on 5.X.

Also, although I haven't tested it, 4.10 seems to have better performance than 5.4.

[This message was edited by random on 17.06.2005. at 04:38 GMT+1]
[ tweeester @ 16.06.2005. 10:14 ] @
It seems to me that FBSD works synchronously (it writes changes to disk immediately) while Linux buffers writes. That could be the reason, unless I'm way off.
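If that's the suspicion, it's easy to measure directly. A minimal C sketch (my illustration, not Ze-Bo's actual test; file names and sizes are arbitrary) comparing ordinary writes, which the kernel is free to buffer, against O_SYNC writes, which must reach the disk before write() returns:

    /* Compare kernel-buffered writes with O_SYNC writes. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    static double bench(int extra_flags, const char *path)
    {
        char buf[4096];
        memset(buf, 'x', sizeof(buf));

        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC | extra_flags, 0644);
        if (fd < 0) { perror("open"); return -1.0; }

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < 1024; i++)          /* 4 MB total */
            if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf))
                perror("write");
        clock_gettime(CLOCK_MONOTONIC, &t1);
        close(fd);

        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }

    int main(void)
    {
        printf("buffered: %.3f s\n", bench(0, "bench-buffered.dat"));
        printf("O_SYNC:   %.3f s\n", bench(O_SYNC, "bench-sync.dat"));
        return 0;
    }

If the two numbers are close on one OS and far apart on the other, the default write policy really is the difference.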
[ neetzach @ 16.06.2005. 12:24 ] @
Did you enable soft updates on FreeBSD?
[ random @ 17.06.2005. 03:37 ] @
Seriously, did you enable soft updates on the fs you tested? Or, if not, did you then mount the FS synchronously under Linux?

You also didn't say which file system was tested...
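For completeness (the device and mount-point names below are placeholders, not from the original posts): soft updates are toggled with tunefs on an unmounted filesystem, e.g. tunefs -n enable /dev/ad0s1e, and under Linux an ext3 filesystem can be remounted synchronously for a fairer comparison with mount -o remount,sync /dev/hda1 /mnt/test.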
[ neetzach @ 17.06.2005. 05:36 ] @
In any case, FreeBSD's UFS is synchronous, as already said, although (it seems to me, but I'm not 100% sure) it can be mounted asynchronously. That said, soft updates also allow somewhat asynchronous operation. Linux's fs (I mean ext2/3) is asynchronous, and although it performs better it is far "riskier" to use, in that it is much more prone to major data loss... Everything has its price.
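(For the asynchronous-mount part: FreeBSD's mount does accept -o async on UFS, which is exactly the fast-but-risky mode described above; and on any of these setups an application that cares about durability can still force its own data out with fsync().)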
[ Sundance @ 17.06.2005. 05:38 ] @
Quote:
tweeester: It seems to me that FBSD works synchronously (it writes changes to disk immediately) while Linux buffers writes. That could be the reason, unless I'm way off.


I don't think they're that dumb :) I/O buffering is done by libc and by every sane user-mode library that encapsulates the I/O system calls (read(), write(), etc.). It's simply much more efficient than invoking a system call directly every time.
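To make the libc point concrete, here is a small C sketch (mine, not from the book; file names are arbitrary). The first loop goes through stdio's user-space buffer, so a million fputc() calls collapse into a relative handful of write() system calls; the second loop traps into the kernel once per byte:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Buffered: libc coalesces these into BUFSIZ-sized write()s. */
        FILE *f = fopen("stdio.dat", "w");
        for (int i = 0; i < 1000000; i++)
            fputc('a', f);
        fclose(f);

        /* Unbuffered: one system call per byte -- vastly slower. */
        int fd = open("raw.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        for (int i = 0; i < 1000000; i++)
            write(fd, "a", 1);
        close(fd);
        return 0;
    }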

From the book The Design and Implementation of the FreeBSD Operating System (5.2-):

I think everything below refers to UFS1/2... (ffs)

Reading and Writing to a File

Having opened a file, a process can do reads or writes on it. The procedural path through the kernel is shown in Figure 8.32 (on page 368). If a read is requested, it is channeled through the ffs_read() routine. Ffs_read() is responsible for converting the read into one or more reads of logical file blocks. A logical block request is then handed off to ufs_bmap(). Ufs_bmap() is responsible for converting a logical block number to a physical block number by interpreting the direct and indirect block pointers in an inode. Ffs_read() requests the block I/O system to return a buffer filled with the contents of the disk block. If two or more logically sequential blocks are read from a file, the process is assumed to be reading the file sequentially. Here, ufs_bmap() returns two values: first, the disk address of the requested block and then the number of contiguous blocks that follow that block on disk. The requested block and the number of contiguous blocks that follow it are passed to the cluster() routine. If the file is being accessed sequentially, the cluster() routine will do a single large I/O on the entire range of sequential blocks. If the file is not being accessed sequentially (as determined by a seek to a different part of the file preceding the read), only the requested block or a subset of the cluster will be read. If the file has had a long series of sequential reads, or if the number of contiguous blocks is small, the system will issue one or more requests for read-ahead blocks in anticipation that the process will soon want those blocks. The details of block clustering are described at the end of this section.
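As a toy illustration of the mapping step (my sketch, not kernel code: NDIRECT and PTRS_PER_BLK are made-up constants, and the real ufs_bmap() additionally returns the count of contiguous blocks that the clustering code uses):

    #include <stdio.h>

    #define NDIRECT      12    /* direct block pointers in the inode */
    #define PTRS_PER_BLK 2048  /* pointers held by one indirect block */

    struct toy_inode {
        long direct[NDIRECT];        /* physical addresses of first blocks */
        long indirect[PTRS_PER_BLK]; /* single indirect block, inlined here */
    };

    /* Translate a logical block number to a physical one, or -1. */
    long toy_bmap(const struct toy_inode *ip, long lbn)
    {
        if (lbn < NDIRECT)
            return ip->direct[lbn];
        lbn -= NDIRECT;
        if (lbn < PTRS_PER_BLK)
            return ip->indirect[lbn];
        return -1; /* double/triple indirect levels omitted */
    }

    int main(void)
    {
        struct toy_inode ino = {0};
        ino.direct[3] = 40007;  /* pretend logical block 3 lives at 40007 */
        printf("logical 3 -> physical %ld\n", toy_bmap(&ino, 3));
        return 0;
    }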

Each time that a process does a write system call, the system checks to see whether the size of the file has increased. A process may overwrite data in the middle of an existing file, in which case space would usually have been allocated already (unless the file contains a hole in that location). If the file needs to be extended, the request is rounded up to the next fragment size, and only that much space is allocated (see "Allocation Mechanisms" later in this section for the details of space allocation). The write system call is channeled through the ffs_write() routine. Ffs_write() is responsible for converting the write into one or more writes of logical file blocks. A logical block request is then handed off to ffs_balloc(). Ffs_balloc() is responsible for interpreting the direct and indirect block pointers in an inode to find the location for the associated physical block pointer. If a disk block does not already exist, the ffs_alloc() routine is called to request a new block of the appropriate size. After calling chkdq() to ensure that the user has not exceeded his quota, the block is allocated, and the address of the new block is stored in the inode or indirect block. The address of the new or already-existing block is returned. Ffs_write() allocates a buffer to hold the contents of the block. The user's data are copied into the returned buffer, and the buffer is marked as dirty. If the buffer has been filled completely, it is passed to the cluster() routine. When a maximally sized cluster has been accumulated, a noncontiguous block is allocated, or a seek is done to another part of the file, the accumulated blocks are grouped together into a single I/O operation that is queued to be written to the disk. If the buffer has not been filled completely, it is not considered immediately for writing. Instead, the buffer is held in the expectation that the process will soon want to add more data to it. It is not released until it is needed for some other block, that is, until it has reached the head of the free list or until a user process does an fsync system call. When a file acquires its first dirty block, it is placed on a 30-second timer queue. If it still has dirty blocks when the timer expires, all its dirty buffers are written. If it subsequently is written again, it will be returned to the 30-second timer queue.
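And a heavily simplified model (again my sketch, not kernel code) of the delayed-write behaviour in the last few sentences: a write merely marks the buffer dirty and stamps it, and a periodic sweep pushes it to disk once the 30-second window has passed:

    #include <stdbool.h>
    #include <stdio.h>
    #include <time.h>

    #define FLUSH_AFTER 30 /* seconds, as in the excerpt */

    struct buffer {
        bool   dirty;
        time_t dirtied_at;
        char   data[4096];
    };

    /* A write just dirties the buffer and starts its clock. */
    void buf_write(struct buffer *b)
    {
        if (!b->dirty) {
            b->dirty = true;
            b->dirtied_at = time(NULL);
        }
    }

    /* Called periodically: write out buffers whose timer expired. */
    void flush_if_due(struct buffer *b)
    {
        if (b->dirty && time(NULL) - b->dirtied_at >= FLUSH_AFTER) {
            /* the real kernel queues the block for disk I/O here */
            printf("flushing buffer dirtied %ds ago\n", FLUSH_AFTER);
            b->dirty = false;
        }
    }

    int main(void)
    {
        struct buffer b = {0};
        buf_write(&b);
        flush_if_due(&b); /* too early: nothing is written yet */
        return 0;
    }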
[ Ze-Bo @ 17.06.2005. 07:02 ] @
Quote:
random: Seriously, did you enable soft updates on the fs you tested? Or, if not, did you then mount the FS synchronously under Linux?

You also didn't say which file system was tested...


On FBSD it's UFS2+S.
On Linux it's ext3.

Both installations are stock.

.... I'll have to download FBSD 4.11 to settle this dilemma....

It's logical to assume that this will show up in DB performance as well....


Quote:
neetzach: In any case, FreeBSD's UFS is synchronous, as already said, although (it seems to me, but I'm not 100% sure) it can be mounted asynchronously. That said, soft updates also allow somewhat asynchronous operation. Linux's fs (I mean ext2/3) is asynchronous, and although it performs better it is far "riskier" to use, in that it is much more prone to major data loss... Everything has its price.


But the price isn't that big; being 2.5 times slower is an awful lot.
I think ext3 is synchronous by default as well.
[ diff @ 17.06.2005. 07:57 ] @
Synchronous or asynchronous, that should only affect write speed, and from what I can see the difference in reads is considerable too.
Post the results with 4.11, I'm really curious.
[ Ze-Bo @ 21.06.2005. 13:01 ] @
As things stand, FBSD 4.10 is a shade slower than 5.4, though one small note:
FBSD 5.4 and CentOS are amd64 distributions, while 4.10 is i386...

Conclusion: it looks like the ext3 FS is faster than UFS2.

I'm currently looking for comparative benchmarks of those two FSes...


PS: ext3 is in ordered mode by default.
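For anyone re-running the comparison: ext3 has three journaling modes, data=journal, data=ordered (the default), and data=writeback, selectable at mount time, e.g. mount -o data=writeback /dev/hda1 /mnt/test (device and mount point here are placeholders). Writeback is the loosest of the three, so the default ordered mode is not even ext3's fastest configuration.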