
dosfs.c bugfix — FAT filesystem for embedded systems

August 24th, 2008

I’ve not been maintaining my web pages, especially after a CMSMS vulnerability resulted in my friend’s computer being hacked by A) some IRC people who installed bot software and B) some people who tried to use my friend’s computer to hack into an FBI site.  *grumble*

The worst part is that I followed the upgrade instructions as documented in trying to fix the hole, but there was no mention of removing the deprecated file with the vulnerability.  Shame on you, CMSMS.  That’s when attack B happened.  *grumble some more*

Anyhow, I just wanted to post this to save some people the headaches that I suffered about a year ago when we fixed some bugs in dosfs.  Although I submitted the bugfix to the author about a year ago, and again a month ago, he seems to have stopped maintaining the code.  The fix is mostly based on initializing variables and (more importantly) a fix to the seek() code by John Canny, my advisor.  So, here’s the patch.



  1. | #1

    Thank you for posting the patch, it really saved some debugging work.
    I noticed that DFS_Seek() still wasn’t working due to an error in the endcluster calculation. Here is the correct formula:
    endcluster = div(offset, fileinfo->volinfo->secperclus * SECTOR_SIZE).quot;

  2. admin
    | #2

    Thanks… though I’m a bit confused. I thought the patch I provided fixed the endcluster calculation — is it still wrong?

  3. | #3

    Yes, in your patch we can read:
    endcluster = div(fileinfo->pointer + offset, fileinfo->volinfo->secperclus * SECTOR_SIZE).quot;

    “fileinfo->pointer + offset” is wrong, as we want to know the cluster number at a given offset – this isn’t related to the current file position.
    Probably you used DFS_Seek() right after opening a file, where fileinfo->pointer is 0; in that case it happens to work. But if DFS_Seek() is called multiple times to jump between different file positions, it will fail once we cross a cluster boundary.
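
    For illustration, here is a minimal sketch of the idea (the helper name is made up; SECTOR_SIZE and secperclus are the stock dosfs definitions):

    #include <stdint.h>

    #define SECTOR_SIZE 512   /* as in dosfs.h */

    /* Hypothetical helper, not part of dosfs: which zero-based cluster of a
       file contains the given byte offset?  The answer depends only on the
       target offset, never on the current position (fileinfo->pointer). */
    static uint32_t cluster_index_for_offset(uint32_t offset, uint32_t secperclus)
    {
        return offset / (secperclus * SECTOR_SIZE);
    }

    Feeding it fileinfo->pointer + offset, as in the earlier patch, only happens to give the same answer when the file has just been opened and pointer is still 0.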

  4. marvin
    | #4

    Hi,

    I tried the above patches. It improved my dosfs-based application, but I still have an error where files get corrupted. Did you guys come across any other bugs with dosfs?

    Thanks

  5. marvin
    | #5

    I posted a question asking if anyone had any further problems with this driver. For some reason my post was removed. This driver presently wipes out / corrupts my files that are greater than 65536 bytes with FAT32. Has anyone had a similar problem, and does anyone know of a fix?

    Thank you!

  6. admin
    | #6

    What kind of problem(s) are you still experiencing, and what function calls are you using? I only do seeks and writes, and it works for me.

    Reza

  7. marvin
    | #7

    Thanks Reza for responding.

    I’m using DOSFS on a TI MSP430F2619, with the IAR compiler.
    A segment of the code is shown below. I know this code is not efficient in terms of writing 4 bytes at a time to storage, but I’m just trying to exercise this driver for correctness right now. I’m using a 512MB SD card. I believe my sector read/write routines are correct; I’ve tested them pretty well across many MB worth of sectors.

    This code works until I set mycounter above 16384. The file gets corrupted (unreadable by Windows XP) for mycounter >= 16385.

    code snippet …

    unsigned long mycounter = 16384;

    pstart = 0;
    if (DFS_GetVolInfo(0, sector, pstart, &vi)) {
        printf("Error getting volume information\n");
        for (;;);
    }

    //------------------------------------------------------------
    // File-6 write test
    if (DFS_OpenFile(&vi, "MYDIR1/WRTEST_6.BIN", DFS_WRITE, sector, &fi)) {
        for (;;);
    }
    if (fi.filelen != fi.pointer) {
        for (;;);
    }

    for (mycounter = 0; mycounter < dMmaxCount; mycounter++) {
        *(unsigned long *)(&sector2) = mycounter;
        DFS_WriteFile(&fi, sector, sector2, &cache, 4);
    }

    for (;;);

  8. marvin
    | #8

    ok, my submission was deleted again … strange.

  9. admin
    | #9

    The comments are moderated; you have to wait for me to approve them.

    First, why are you setting mycounter to 16384 at the beginning when you reset it to 0 in the write loop?

    Here is what it looks like is going on. If you’re writing 4 bytes at a time, then your file pointer will be at 65536 bytes after that many writes, which is equal to 2**16. FILEINFO.pointer is 32 bits, so you’re not hitting that limit, but something is going on with that value. Try writing in 3-byte or 5-byte increments (something like the sketch below) — I’m wondering if something magic happens with that value that might not happen if you skip it. Also, step through the code and see if any of the variables change in a strange way at that point.
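
    For example, a rough sketch reusing the names from your snippet above (mycounter, dMmaxCount, sector, sector2, cache and fi are all assumed to be exactly as you declared them; only the length argument changes):

    for (mycounter = 0; mycounter < dMmaxCount; mycounter++) {
        *(unsigned long *)(&sector2) = mycounter;
        /* 5-byte records, so the file pointer never lands exactly on 65536 */
        DFS_WriteFile(&fi, sector, sector2, &cache, 5);
    }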

  10. marvin
    | #10

    Thanks admin,

    I tried your suggestion, but similar problem.

    Just for my information, for your application, what is the biggest file (in bytes) you have created with dosfs, and what is the largest number of files you have created on a volume?

    I’m still looking at this but kind of stuck right now.

  11. marvin
    | #11

    One additional thing I’m noticing is that when I create and write smaller files that are readable by an editor on Windows, the creation time stamp does not show the expected “01:01:00am, Jan 1, 2006” that it should according to the dosfs source code.

    Does this hint at a possible problem, maybe related to what I’m dealing with?

    Thanks

  12. marvin
    | #12

    Me again … the creation time is shown as blank under the “Date Created” column when I view the file information in Windows Explorer.

  13. admin
    | #13

    @marvin
    For speed, I pre-create a 1G file and only use dosfs to tell me where that file starts. Start out with a formatted flash, then use dd (unix/cygwin) to create a blank file. I find the starting sector, and assume that the file is linear. After that, I stop using dosfs and just write sectors myself and keep track of where I am (roughly like the sketch below). Much, much faster, which is what I need. So I’ve not run into that problem. Have you tried emailing the author about it?

    Also, there are disk/hex editor programs where you can look at the raw sectors and see if they contain the information they need to (i.e. date info), or to see if something gets corrupted.
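
    Roughly, the idea looks like this (a sketch only: "BIGFILE.BIN" and write_sector() are made-up example names, and the VOLINFO/FILEINFO field names are from the stock dosfs headers as I remember them):

    #include "dosfs.h"   /* VOLINFO, FILEINFO, SECTOR_SIZE, DFS_* */

    /* Existing low-level sector writer, whatever the application already has. */
    extern int write_sector(uint8_t *buf, uint32_t sector);

    /* Return the absolute sector where the pre-created file begins, or 0 on
       error.  After this, dosfs is no longer used for writing. */
    uint32_t find_start_sector(VOLINFO *vi, uint8_t *scratch)
    {
        FILEINFO fi;

        if (DFS_OpenFile(vi, (uint8_t *)"BIGFILE.BIN", DFS_READ, scratch, &fi))
            return 0;

        /* Standard FAT math: data clusters are numbered from 2, so the file’s
           first sector follows from its first cluster number. */
        return vi->dataarea + (fi.firstcluster - 2) * vi->secperclus;
    }

    From that sector onward I just call write_sector() with a running sector counter, relying on the file being contiguous.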

  14. marvin
    | #14

    I figured out my problem … oh boy. My low-level write_sector() routine was destroying the data in the 512-byte buffer passed to it. As it wrote each byte out to the SD card’s SPI interface, it was overwriting each byte in the original buffer with 0xff. This was destroying the information in the “scratch” buffer that dosfs uses to manage the filesystem. I figured this out almost by luck … mostly, but I was looking at the raw image, and some things about the data made me take a closer look at the sector write routine.

    What a trip this has been figuring out …

    Thank you very much for making an effort to help me with this. I very much appreciated it.

    One thing I’m still not seeing, though, is the creation time when viewing the file under Windows. I hope this does not imply some additional problem … do you have any thoughts on that?

    Regarding the way you write data to the 1GB file, assuming it is linear … is that a safe assumption? I’m new to FAT32 details, so I’m just wondering.

  15. admin
    | #15

    I’m just using the mmc code from TI to write sectors. Haven’t had a problem with it. For the creation time, I would compare the creation time of a file created with Windows and one created by dosfs in a hex editor to see what’s going on.

    I use FAT16, not 32, so I’m not sure if it works the same way, but I’ve been told by multiple people that if you create a large file on an empty filesystem, it will be contiguous. Though I figure I should test it out to make sure.

  16. marvin
    | #16

    I am using the TI mmc code as well for the msp430. That is where the buffer overwrite was occurring.

    From reading the FAT32 specs and from looking at cluster numbers as I debugged my problem, I think you are right about the contiguous cluster numbers. I noticed that from some printouts I did when I had only one file on the filesystem.

  17. marvin
    | #17

    @admin
    Regarding the discussion of the proper patch,

    >> Thanks… though I’m a bit confused. I thought the patch I provided fixed the endcluster calculation — is it still wrong?

    Who is right regarding the endcluster calculation?

  18. admin
    | #18

    I’ve not looked at the submitted patch, but just look at the two, figure out which one is right, and let me know :)

    I made all sorts of mods to the dosfs code for performance reasons, which helped a lot. I had various buffers for caching different things. I removed it all in favor of just not using the dosfs code for writing after I found the starting sector, and all was good after that. The next test is to see if I can get lower power consumption by writing bigger blocks of data.

  19. Hamsaa
    | #19

    I never ever post, but this time I will. Thanks a lot for the great blog.

  20. DucDat
    | #20

    I’ve added some functions so that it can work with long file names, and it works well, but there are some SD cards (4 GB) where I cannot read the volume information with DFS_GetVolInfo(). Could someone help me out?

  21. | #21

    Alas, sorry, I’ve not touched that code in ages. You need to use the HCSD (or is it SDHC) protocol, which is different.
