Friday, October 10, 2008

Converting System.Drawing.Bitmap to XNA Texture2D

For those of you who visit this blog for art or writing topics, this is your fair warning: this post will hold no interest for you.

For those independent XNA game developers who find this post via Google, I hope I can help with a problem I've seen talked about in a few places on the 'net.

In the game I am currently working on, I have a need to load a bunch of bitmaps into memory, and then I need to turn those bitmaps into texture objects. I'm using a blend of GDI+ and XNA in my application, you see. Previously, when I was using MDX in conjunction with GDI+, this was no problem because there was a direct conversion available. Since XNA targets both the 360 and Windows (and GDI+ doesn't exist on the 360), no such conversion is available.

On various forums I've seen solutions batted about relating to doing a per-pixel copy of the images from one format to the other (often using Bitmap.GetPixel, which is horribly slow -- you're much better off using Bitmap.LockBits, but even that is not nearly ideal).

My solution simply uses memory streams, since a Bitmap can be saved to a stream and a Texture2D can be loaded from one. This approach might seem like a waste of memory, but it's the most processor-efficient way to do this. My game is able to process several dozen 28x28 images in under two seconds using this approach. The trick is to do the conversions in little bits, as you need the images, rather than doing them all up front (which would take forever, and really give the garbage collector fits). I leave that part up to you. Here's the C# code for the actual conversion, which is quite simple:
Bitmap b = new Bitmap( nameOfFile );
Texture2D tx = null;
using ( MemoryStream s = new MemoryStream() )
{
    b.Save( s, System.Drawing.Imaging.ImageFormat.Png );
    s.Seek( 0, SeekOrigin.Begin ); // must do this, or an error is thrown on the next line
    tx = Texture2D.FromFile( GraphicsDevice, s );
}

That's all there is to it!

(Added point of interest: It seems that XNA is unable to load GIF files -- presumably a licensing thing, knowing GIF -- but of course regular .NET is able to load those just fine. Using this sort of code provides a way for you to load GIFs or any other format that .NET supports but that XNA does not into XNA Texture2D objects. This is handy for me, because at present my project has... uh... just over 8,500 GIF files in it.)
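Wrapped up as a reusable helper, the whole trick fits in a few lines. This is just a sketch, assuming the XNA 2.0/3.0-era Texture2D.FromFile(device, stream) overload and a valid GraphicsDevice; the class and method names are my own, not from any library:

```csharp
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using Microsoft.Xna.Framework.Graphics;

// Sketch of a reusable converter -- assumes a valid GraphicsDevice and
// the Texture2D.FromFile(device, stream) overload from XNA 2.0/3.0.
public static class TextureLoader
{
    public static Texture2D FromBitmap( GraphicsDevice device, Bitmap b )
    {
        using ( MemoryStream s = new MemoryStream() )
        {
            // PNG round-trips any format .NET can decode (GIF included)
            // and preserves alpha, unlike BMP or JPEG.
            b.Save( s, ImageFormat.Png );
            s.Seek( 0, SeekOrigin.Begin );
            return Texture2D.FromFile( device, s );
        }
    }
}
```

Per the advice above, call this lazily as each texture is first needed, rather than converting thousands of files up front.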

Thursday, October 9, 2008

Smooth Scaling Tiled Sprites In XNA

For those of you who visit this blog for art or writing topics, this is your fair warning: this post will hold no interest for you.

For those independent XNA game developers who find this post via Google, I hope I can help with a problem I've seen a lot of frustration on (and experienced frustration with myself). For background with the problem, see these posts (#1 and #2).

To summarize what you will find at those links, basically there is a "problem" when scaling images in XNA or DirectX wherein if you use the Sprite/SpriteBatch objects to draw a series of tiles, you'll get cruddy little lines, grids, or seams between many of your tiles. But the problem ONLY happens when zooming in (i.e., scaling 2D textures to a resolution higher than their native resolution), and it's fairly inconsistent. Sometimes half a pixel or so, sometimes up to a pixel, but never more, and it doesn't always even make a grid between every tile.

The main theories on this were that it was some sort of floating-point rounding error, or that it had to do with odd-sized textures (perhaps ones that are not powers of 2 -- mine, for instance, are 28x28), or that it was caused by not setting the u and v addressing modes of the SamplerState to Clamp. Personally, my money was on some sort of "off by one" issue relating to zero-indexed widths and heights. None of these are correct.

As one enterprising programmer on the above links figured out, the real culprit is interpolation. By default, when scaling textures, bilinear interpolation is used to make the result look nicer. If you are familiar with how that algorithm works, it blends each target pixel from the 2x2 grid of source pixels around it. That works great in the middle of a tile, but at the edges there is nothing to sample -- each sprite tile is rendered independently (for the most part), which is why the black line creeps in. That line isn't a gap at all, it turns out, but rather an artifact of the interpolation.
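To see the artifact in miniature, here's a toy single-channel bilinear sampler (my own illustration, not from the original post). When the 2x2 neighborhood straddles the tile edge, the missing neighbors read as 0, and the blend pulls the edge pixel halfway to black:

```csharp
using System;

// Toy illustration of why bilinear filtering darkens tile edges: each
// tile is sampled independently, so texels past the edge read as empty
// (0), and the blend pulls edge pixels toward black.
public static class BilinearEdgeDemo
{
    // Sample a single-channel "tile" at (x, y) with bilinear filtering.
    public static float Sample( float[,] tile, float x, float y )
    {
        int x0 = (int)Math.Floor( x ), y0 = (int)Math.Floor( y );
        float fx = x - x0, fy = y - y0;
        float p00 = Texel( tile, x0,     y0     );
        float p10 = Texel( tile, x0 + 1, y0     );
        float p01 = Texel( tile, x0,     y0 + 1 );
        float p11 = Texel( tile, x0 + 1, y0 + 1 );
        return p00 * (1 - fx) * (1 - fy) + p10 * fx * (1 - fy)
             + p01 * (1 - fx) * fy       + p11 * fx * fy;
    }

    // Coordinates outside the tile return 0 -- the "nothing there" case.
    static float Texel( float[,] tile, int x, int y )
    {
        if ( x < 0 || y < 0 || x >= tile.GetLength( 0 ) || y >= tile.GetLength( 1 ) )
            return 0f;
        return tile[x, y];
    }

    public static void Main()
    {
        float[,] tile = { { 1f, 1f }, { 1f, 1f } }; // a solid white 2x2 tile
        Console.WriteLine( Sample( tile, 0.5f, 0.5f ) ); // interior: 1 (full white)
        Console.WriteLine( Sample( tile, 1.5f, 0.5f ) ); // straddling the edge: 0.5 (darkened seam)
    }
}
```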

The quickest solution to this is to use point sampling (nearest-neighbor), which is basically no interpolation at all. In XNA, the C# code would be this:
GraphicsDevice.SamplerStates[0].MagFilter = Microsoft.Xna.Framework.Graphics.TextureFilter.Point;

Problem solved, right? Well, yeah, but now we have a new problem -- without interpolation, your zoom is going to look awful. Programmers on the message boards had a bevy of potential solutions to this, some involving custom shaders, some involving replacing the SpriteBatch class, others involving manual edits to every image used in their game.

I have a vastly simpler solution (both in terms of programming effort/time, and in terms of processor time). Here's my rationale: this is an interpolation problem based on the fact that each tile is rendered separately, right? So the problem is not that we're scaling these tiles up, but rather that we're scaling them up one-by-one. If only there was a way to combine them all before rendering the current frame, and then scale them up together!

But wait, I hear you say -- something like that doesn't sound processor-friendly, right? That would basically double the amount of rendering we need to do, wouldn't it? Actually, no -- and ten points for you if you remembered that we're already doing exactly that. It's called the back buffer!

Since we're already rendering these sprites to the back buffer, then flipping them to the screen all at once, we've already got this pretty much handled. All we need to do is tweak the size of the back buffer before rendering, and it will automatically scale up to the view area -- perfect interpolation, great quality, no lines. The C# code looks like this:
float zoom = 0.8f;
this.GraphicsDeviceService.ResetDevice( (int)Math.Round( this.ClientWidth * zoom ),
    (int)Math.Round( this.ClientHeight * zoom ) );
I'm assuming here that you're using WinForms-hosted XNA code like in this example. If not, you'll have to fiddle with how to get this working in your environment. The basics are to set up a PresentationParameters variable with your desired width/height and then do a GraphicsDevice.Reset() passing in that variable.
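As a sketch of that manual route (not the author's exact code -- viewWidth, viewHeight, and zoom are placeholders, and the property names are from the XNA 2.0/3.0-era API, so check them against your version):

```csharp
// Shrink the back buffer by hand so the present operation scales the
// whole frame up to the view area. viewWidth/viewHeight/zoom are
// placeholders for your own values.
PresentationParameters pp = GraphicsDevice.PresentationParameters;
pp.BackBufferWidth  = (int)Math.Round( viewWidth  * zoom );
pp.BackBufferHeight = (int)Math.Round( viewHeight * zoom );
GraphicsDevice.Reset( pp );
```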

A few last points of interest:

- You'll notice that the zoom is inverted here. A zoom value of 0.8 is actually equivalent to zooming in by a factor of 1.25 (that is, 1/0.8). The reason for the inversion is that we are shrinking our back buffer relative to the surface it will be rendered to.

- "this.ClientWidth" assumes that you are calling this method from the Form, Panel, or whatever control hosts your render target.

- As you may have already noticed, this method isn't compatible with your traditional "camera" approach, where you move a viewport relative to the world coordinates. To implement scrolling in your window (which is presumably the point here), you'll want to apply a global offset to the X and Y coordinates that are passed to SpriteBatch.Draw. NO NEED to do some massive global update of all your objects' coordinates as your window moves -- that's crazy. Leave your game world coordinates alone, and just offset them as they are rendered in SpriteBatch.Draw. That way everything gets rendered efficiently, no massive updates are needed, and no significant processor overhead is incurred.
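The offset-at-draw-time idea looks something like this sketch (cameraX, cameraY, and the Sprite fields are placeholders of my own, not from the original post):

```csharp
// World coordinates are never modified; the camera offset is applied
// only when computing the screen position handed to SpriteBatch.Draw.
foreach ( Sprite sprite in sprites )
{
    Vector2 screenPos = new Vector2( sprite.WorldX - cameraX,
                                     sprite.WorldY - cameraY );
    spriteBatch.Draw( sprite.Texture, screenPos, Color.White );
}
```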

Happy coding!