"...file size difference might not matter much in terms of video memory, but processing time should be better."
Not true. Video cards (whether in a console or a PC) mostly accept just raw pixels coming in, so the game engine decodes whatever compressed format you're using into raw uncompressed pixels before sending them to the card. Once the texture is uploaded, the processing time and video memory load are the same whether or not the file on disk was compressed.
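To illustrate, here's a minimal sketch of that decode-then-upload step, assuming the stb_image single-header decoder and an already-active OpenGL context (the function name upload_texture is just for illustration):

```c
#include <GL/gl.h>
#include "stb_image.h"   /* decodes JPG, PNG, TGA, etc. to raw pixels */

GLuint upload_texture(const char *path)
{
    int w, h, channels;
    /* Decode the file to raw 8-bit RGBA in system memory. Whether the
       source was JPG or PNG, the result is the same uncompressed buffer. */
    unsigned char *pixels = stbi_load(path, &w, &h, &channels, 4);
    if (!pixels)
        return 0;

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    /* The card receives raw pixels; the on-disk format is long gone. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    stbi_image_free(pixels);  /* the VRAM copy is full size regardless */
    return tex;
}
```

Note that the card ends up holding w*h*4 bytes either way; the file compression only saved disk space and load-time bandwidth.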
One common exception is the DirectX Texture format (DXT, usually stored in DDS files). As I understand it, many cards can keep these compressed in video memory and decompress them in hardware as they render, so the files take up less room both in storage (on the hard disk or CD) and in memory (on the card).
Some info here about DDS, worth slogging through the technical jargon…
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/directx9_c/directx/graphics/programmingguide/gettingstarted/direct3dtextures/compressed/compressedtextureresources.asp
There are quite a few DXT flavors available (DXT1 through DXT5); they're all compressed, and the main difference is how they handle alpha, with DXT1 giving you one-bit alpha at most while DXT3 and DXT5 store a full alpha channel.
Some nice examples of DXT texture issues on UDN…
http://udn.epicgames.com/Content/TextureComparison
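Here's a rough sketch of uploading DXT data while keeping it compressed, assuming the EXT_texture_compression_s3tc extension is available on the card (and the function resolved on your platform), and that dxt_data/dxt_size came from a DDS file you parsed yourself; the names upload_dxt and has_alpha are illustrative:

```c
#include <GL/gl.h>
#include <GL/glext.h>

GLuint upload_dxt(const unsigned char *dxt_data, int dxt_size,
                  int w, int h, int has_alpha)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    /* DXT1 for opaque textures (4 bits/texel), DXT5 when you need a
       smooth alpha channel (8 bits/texel) -- both far smaller than
       32-bit raw pixels, and the card keeps them compressed. */
    GLenum fmt = has_alpha ? GL_COMPRESSED_RGBA_S3TC_DXT5_EXT
                           : GL_COMPRESSED_RGB_S3TC_DXT1_EXT;

    /* No decode step here: the compressed blocks go straight to VRAM. */
    glCompressedTexImage2D(GL_TEXTURE_2D, 0, fmt, w, h, 0,
                           dxt_size, dxt_data);
    return tex;
}
```

Compare this with the raw-pixel loader above: a 512x512 opaque texture is 1 MB uncompressed but only 128 KB as DXT1, in VRAM as well as on disk.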
I’ve heard there are other card-supported texture formats too, but they’re mostly confined to one chip manufacturer or another.
So for me, JPG is usually a waste of time: it's lossy, it doesn't support alpha, and it doesn't stay compressed in memory. But if the hardware doesn't support DXT or somesuch, or you're developing for a web-based engine where download size matters, then JPG might still be worth it.
There are also paletted textures, those with 256 or fewer colors. Often you can get away with a 16-color palette for detail textures, effects, and the like. Grayscale textures are sometimes best as 8-bit paletted files, for example lightmaps or bump maps.
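Hardware palette support varies a lot between cards, but the grayscale case is easy to show: a sketch of uploading an 8-bit lightmap as a single-channel texture, assuming `gray` holds w*h bytes (upload_lightmap is an illustrative name):

```c
#include <GL/gl.h>

GLuint upload_lightmap(const unsigned char *gray, int w, int h)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    /* Tightly packed rows: 8-bit data is rarely 4-byte aligned. */
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    /* One byte per texel instead of four -- the same memory saving
       that makes paletted and grayscale formats attractive. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, w, h, 0,
                 GL_LUMINANCE, GL_UNSIGNED_BYTE, gray);
    return tex;
}
```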
Hope that helps.