In my absence…

Anyone wondering why I’m not posting much lately can be enlightened with the following two reasons:

  1. It’s mid-semester and all my classes have issued assignments
  2. Not much has been happening, so there hasn’t been much to write about

Number 1 is fairly self-explanatory – with much work comes little time for pleasure (of the electronics kind). I’ve now finished my CPU cache simulator for my CPU Architectures unit and am ready to begin the analysis report, and I’ve completed the next maths assignment, due next week. That leaves my System Design and Methodologies report, a subject so dull (and, unfortunately, mandatory) that I, and many, many others, haven’t even been attending it. Basically, it boils down to slides lecturing on exactly what a system is, and how user, physical, software and other components can contribute to its failure. Dull, boring stuff which can thankfully be learned just as well by reading the online notes as by listening to a lecturer narrate them.

Number 2 is somewhat dependent on number 1, as I can’t achieve much if I’m always working. I’m still wrapping my head around the AVR32 UC3B microcontrollers and fixing bugs in the mainline code. So far I think I’ve got a handle on how to restructure the main code to support multiple architectures, although I’m not sure yet how to handle the Doxygen documentation. My current thought is to set targets for the documentation generation based on the architecture. That will allow for slight differences in the documentation depending on the selected architecture: while the APIs should remain the same (thankfully, my existing design seems adequate to support different architectures with only minor tweaks), each architecture will have some extra bits which won’t apply to the others.
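To make that a little more concrete, here’s a rough sketch of the sort of thing I have in mind (the ARCH_* and DOC_* names below are placeholders rather than a final design):

    /* A rough sketch only - the macro and section names are
       illustrative, not a final LUFA API. Each architecture gets
       its own ARCH_* define, selected from the compiler's
       predefined macros. */
    #if defined(__AVR__)
        #define ARCH_AVR8
    #elif defined(__AVR32__)
        #define ARCH_UC3B
    #else
        #error "Unsupported architecture."
    #endif

    /** \brief Initialises the USB controller.
     *
     *  Documentation common to all architectures lives here, so the
     *  API reads identically everywhere.
     *
     *  \cond DOC_UC3B
     *  UC3B-only notes go inside Doxygen \cond sections, switched on
     *  per-architecture via ENABLED_SECTIONS in the Doxyfile.
     *  \endcond
     */
    void USB_Init(void);

The build targets would then run Doxygen once per architecture with the matching section enabled, producing one documentation set per architecture from a single set of sources.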

What do you think – should I work towards unified documentation of some kind, or focus on a documentation-per-architecture model?

Yesterday I got a great email from a LUFA user named Mike:

Hi Dean,

I tried to add the info below to the thread at http://groups.google.com/group/myusb-support-list/browse_thread/thread/a09f2ab82b0e9347/350315e8df595de3?lnk=gst&q=zip#350315e8df595de3 but there is no “Reply” option for some reason, just “Reply to Author”.

Anyway, I thought the following might be useful for you to know:

After having the same problem as the one mentioned in the thread, I spent a bit of time on this with Netmon and other tools, and this is what I think is happening:

The zip file as delivered from the server has:
Content-Encoding: gzip
Content-Type: application/zip

So the LUFA zip file (of type application/zip) is being encoded with gzip before it is sent. So it is in effect “double zipped”.

Firefox seems to cope with this OK: it strips the GZIP container when the file is received and saves the zip file inside it, as expected. You can then open this zip file to get the LUFA files.

On the other hand, Internet Explorer (even IE8) doesn’t strip the GZIP container. From reading the blog at http://blogs.msdn.com/wndp/archive/2006/08/21/Content-Encoding-not-equal-Content-Type.aspx it seems that IE uses a “hack” to compensate for servers that incorrectly set “Content-Encoding: gzip” on all zipped files, even when they are not gzipped a second time before being sent. This seems to be a holdover from the dark ages of the web, and changing IE’s behaviour now would introduce compatibility issues (e.g. I believe the VRML viewer expects IE to hand off HTTP GZIP-compressed content to it without messing with it).

So MS makes a special case for files with *both* Content-Encoding set to gzip and Content-Type set to zip, and ignores one of them.

So, end result: if you use IE, you end up with a double-zipped container. To get at the files inside you need to rename the downloaded file to LUFA090401.gz (instead of .zip), then open it with WinZip. WinZip will ask for the name of the file stored inside; type in LUFA090401.zip, and you should then be able to open the zip file just fine.
(Alternatively, you can open the originally IE-downloaded file directly in WinRAR without changing the extension to .gz, and then follow the same steps.)

I believe (but can’t test it of course) that if the server changed the Content-Type to “application/octet-stream” then all would be OK in IE as it would remove the “double-zipping” of the file.

Hope this is useful,

Mike

Which is very interesting indeed – it solves the odd “corrupt zip” problem some of you have been complaining about. Following Mike’s analysis, I’ve turned off the site compression for all files other than text documents, which has rectified the problem. Everyone please send him some karma for his efforts.
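If you downloaded a release with IE before the fix and want to check whether your copy is one of the double-zipped ones, the first two bytes of the file give it away: a gzip stream starts with the bytes 0x1F 0x8B, while a plain zip starts with the ASCII characters “PK”. Here’s a quick throwaway check in standard C (nothing LUFA-specific about it):

    /* Report whether a file starts with the gzip magic bytes
       (0x1F 0x8B) or the zip magic bytes ('P' 'K'). */
    #include <stdio.h>

    int main(int argc, char* argv[])
    {
        if (argc != 2)
        {
            fprintf(stderr, "Usage: %s <file>\n", argv[0]);
            return 1;
        }

        FILE* ZIPFile = fopen(argv[1], "rb");
        if (!ZIPFile)
        {
            perror("fopen");
            return 1;
        }

        int Byte1 = fgetc(ZIPFile);
        int Byte2 = fgetc(ZIPFile);
        fclose(ZIPFile);

        if ((Byte1 == 0x1F) && (Byte2 == 0x8B))
            printf("Gzip container - rename to .gz and extract the inner zip.\n");
        else if ((Byte1 == 'P') && (Byte2 == 'K'))
            printf("Plain zip - open it directly.\n");
        else
            printf("Neither gzip nor zip magic bytes found.\n");

        return 0;
    }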
Until next time, stay tuned all!
 
