Someone posted a sad comment the other day mourning the loss of Flash content on this blog. Sorry, I follow what I’m into. Sometimes that’s Flash, sometimes (very briefly) it was Silverlight. I think I even got into Python here for a while. The iPhone stuff may be a diversion, or it may be my future path. Time will tell.
Anyway, for an app I'm working on, I needed to take a screenshot of OpenGL ES rendered content. I assumed there was some built-in function for that, but much searching led me to the conclusion that it's a roll-your-own kind of thing. So, after a couple of days, I was finally able to piece together at least three or four different semi-working solutions from various forums and mailing lists, combined with some hacking about, to come up with a solution that actually works.
The first and last steps are easy.
First step, you read the GL data into a raw byte array with glReadPixels. Simple enough.
Last step, you save a UIImage to the Photo Album with UIImageWriteToSavedPhotosAlbum.
The tough part is getting that byte array into a UIImage. My first attempt was to use [UIImage imageWithData:data]. But the problem is that that method expects data in the file format of one of the image types UIImage supports, whereas glReadPixels gives you raw pixel data.
Digging around some more, I found [UIImage imageWithCGImage:imageRef]. You can get a CGImageRef with CGImageCreate.
CGImageCreate requires a CGDataProviderRef. And you can create one of those with CGDataProviderCreateWithData, using the results from glReadPixels! Finally, a path from one end to the other.
glReadPixels -> CGDataProviderCreateWithData -> CGImageCreate -> [UIImage imageWithCGImage:] -> UIImageWriteToSavedPhotosAlbum
Yay!
But wait. One more snag. OpenGL uses standard Cartesian coordinates. In other words, +Y is up, -Y is down. So the byte array you get with glReadPixels (and thus your final image) will be upside down. A bit of byte-shuffling fixed that up. Here are the final methods, meant to be used within a UIView backed by a CAEAGLLayer (just like the EAGLView class in the OpenGL ES template).
[c]- (UIImage *)glToUIImage {
    NSInteger myDataLength = 320 * 480 * 4;

    // allocate array and read pixels into it.
    GLubyte *buffer = (GLubyte *)malloc(myDataLength);
    glReadPixels(0, 0, 320, 480, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    // gl renders "upside down" so swap top to bottom into new array.
    // there's gotta be a better way, but this works.
    GLubyte *buffer2 = (GLubyte *)malloc(myDataLength);
    for (int y = 0; y < 480; y++)
    {
        for (int x = 0; x < 320 * 4; x++)
        {
            buffer2[(479 - y) * 320 * 4 + x] = buffer[y * 4 * 320 + x];
        }
    }

    // make data provider with data.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);

    // prep the ingredients
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * 320;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

    // make the cgimage
    CGImageRef imageRef = CGImageCreate(320, 480, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);

    // then make the uiimage from that
    UIImage *myImage = [UIImage imageWithCGImage:imageRef];
    return myImage;
}

- (void)captureToPhotoAlbum {
    UIImage *image = [self glToUIImage];
    UIImageWriteToSavedPhotosAlbum(image, self, nil, nil);
}[/c]
I'm pretty proud of myself for figuring this all out. And it works great. But for the love of God, if there's an easier way, please let me know. It really, really, really seems like there should be. And if you pros see any horrendous memory leaks or anything of the sort in there, let me know. For instance, I feel like I should be freeing those malloc'd buffers when I'm done, but if I do that, the thing crashes. I don't know that much about malloc. Does it get freed when it goes out of scope anyway?
A couple of posts at the Apple DevForums talk about the same thing:
https://devforums.apple.com/message/11625#11625
and
https://devforums.apple.com/message/23781#23781
Hi, just wanted to say I’m enjoying your iPhone posts, I’ve been thinking of doing some development on that platform and it’s nice to read about someone approaching it in a similar way to how I work, from a similar perspective. So thanks.
-TP
wow this is perfect! I was looking for something like this not long ago, and almost all I could find was using UIGraphicsGetImageFromCurrentImageContext(), but that doesn't work for OpenGL ES views. I could only find snippets of information here and there regarding saving OpenGL output, but you've got it all in one place and working perfectly! thanks!
You are right about your concern with freeing memory though. I'm no pro, but I think there may be a few leaks. Malloc'd memory doesn't get freed automatically (unless you hand it to an NSData which takes ownership of it). In your case, you need to free buffer (preferably immediately after filling buffer2, cos it's not needed anymore). Also free buffer2 after the image has finished saving (the best place is the callback for when the CGImage's data is released; see code below). You also need to release everything you created using CGxxxCreatexxx() functions after you've used them. So that's the CGDataProviderRef, CGColorSpaceRef, and CGImageRef. Finally, I personally prefer not to use the convenience method [UIImage imageWithCGImage:], but to manually alloc and initWithCGImage:. With the former, the UIImage is autoreleased (which is good in case you forget to release it, but it may hang around for a while). By manually allocing, you take on the responsibility of releasing it yourself, but you can release it immediately after it is no longer needed (in the UIImageWriteToSavedPhotosAlbum finished callback). That is just a personal preference (and generally recommended, it seems, especially on iPhone).
hope that made sense!
(P.S. I posted a little memory management tut at http://www.memo.tv/memory_management_with_objective_c_cocoa_iphone though its more about cocoa, whereas all the CGxxx stuff is not, but the concepts about retaining and releasing are the same).
Here is the full code btw with the memory stuff and callbacks. Thanks again for putting it all together and posting, saved me a lot of time!
// callback for CGDataProviderCreateWithData
void releaseData(void *info, const void *data, size_t dataSize) {
    NSLog(@"releaseData\n");
    free((void *)data); // free the malloc'd buffer
}

// callback for UIImageWriteToSavedPhotosAlbum
- (void)image:(UIImage *)image didFinishSavingWithError:(NSError *)error contextInfo:(void *)contextInfo {
    NSLog(@"Save finished\n");
    [image release]; // release image
}

- (void)saveCurrentScreenToPhotoAlbum {
    CGRect rect = [[UIScreen mainScreen] bounds];
    int width = rect.size.width;
    int height = rect.size.height;

    NSInteger myDataLength = width * height * 4;
    GLubyte *buffer = (GLubyte *)malloc(myDataLength);
    GLubyte *buffer2 = (GLubyte *)malloc(myDataLength);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width * 4; x++) {
            buffer2[(height - 1 - y) * width * 4 + x] = buffer[y * 4 * width + x];
        }
    }
    free(buffer); // YOU CAN FREE THIS NOW

    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, releaseData);

    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * width;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

    CGImageRef imageRef = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
    CGColorSpaceRelease(colorSpaceRef); // YOU CAN RELEASE THIS NOW
    CGDataProviderRelease(provider); // YOU CAN RELEASE THIS NOW

    UIImage *image = [[UIImage alloc] initWithCGImage:imageRef]; // manual alloc/init instead of autorelease
    CGImageRelease(imageRef); // YOU CAN RELEASE THIS NOW

    UIImageWriteToSavedPhotosAlbum(image, self, @selector(image:didFinishSavingWithError:contextInfo:), nil); // add callback for finish saving
}
Makes sense that you can't free buffer2 till after the save. I assumed that the data provider and other methods were copying the data, but perhaps they are merely retaining a pointer to it. That's why the crash. Thanks!
Also, I wasn't aware that you had to release the CG stuff like that. Good to know.
Keith,
I'm a bit curious about that too. For the latest app I'm working on, I needed to mix standard C in with the Obj-C for multi-dimensional arrays like that. In my case I wasn't re-using it several times, though, so it gets destroyed (afaik, anyway) when the view goes away. I'm using a decently large array (50×50 of custom structs) and I haven't seemed to run into any serious memory leaks with it.
And yeah, I feel your pain. Sometimes it feels like things that should be straight-forward are very much an uphill battle – I think we’ve gotten spoiled from working with higher-level languages for so long!
How about this:
// cv here is the view you want to capture
UIGraphicsBeginImageContext(cv.bounds.size);
[cv.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(viewImage, self, nil, nil);
Pressing the Home and Sleep/Wake buttons simultaneously, very briefly, will save a screen shot to the Photo Album.
Thanks so much for this example. I'm having an issue with glReadPixels, actually, and maybe someone here can help. I've ripped heavily from Apple's GLPaint example and have a painting app where I'm adding undo/redo support. So, just like Apple's example, I build line segments as the user drags, and once the user ends the touch I want to snap a screenshot of that canvas (I save that off as a buffer to recall if the user does an undo later; I keep a stack 10 deep for undo). This works well using your code, with one problem: the very last line segment stamped down isn't saved in the screenshot. I'm guessing it's some sort of buffer issue or a glFlush-type issue? I've tried multiple variations to try and get it to work with no luck. Has anyone seen anything like this? I can post a small example if more info is needed.
Thanks!
Daniel
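(A hedged aside, not from the original thread: one thing worth trying for the missing last segment is forcing GL to finish rendering before reading the pixels back. glFinish() blocks until every previously submitted command has completed, which would rule out the unflushed-buffer theory.)
// sketch: make sure the last line segment has actually been rendered
glFinish(); // blocks until all prior GL commands complete
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);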
My version requires half the memory by swapping the image vertically in place, and treats each pixel holistically as a 4-byte int.
void releaseScreenshotData(void *info, const void *data, size_t size) {
    free((void *)data);
}

- (UIImage *)screenshotImage {
    NSInteger myDataLength = backingWidth * backingHeight * 4;

    // allocate array and read pixels into it.
    GLuint *buffer = (GLuint *)malloc(myDataLength);
    glReadPixels(0, 0, backingWidth, backingHeight, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    // gl renders "upside down" so swap top to bottom in place.
    for (int y = 0; y < backingHeight / 2; y++) {
        for (int x = 0; x < backingWidth; x++) {
            // swap top and bottom pixels
            GLuint top = buffer[y * backingWidth + x];
            GLuint bottom = buffer[(backingHeight - 1 - y) * backingWidth + x];
            buffer[(backingHeight - 1 - y) * backingWidth + x] = top;
            buffer[y * backingWidth + x] = bottom;
        }
    }

    // make data provider with data.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, myDataLength, releaseScreenshotData);

    // prep the ingredients
    const int bitsPerComponent = 8;
    const int bitsPerPixel = 4 * bitsPerComponent;
    const int bytesPerRow = 4 * backingWidth;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

    // make the cgimage (use the backing dimensions, not hard-coded 320x480)
    CGImageRef imageRef = CGImageCreate(backingWidth, backingHeight, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
    CGColorSpaceRelease(colorSpaceRef);
    CGDataProviderRelease(provider);

    // then make the UIImage from that
    UIImage *myImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return myImage;
}
Great work! I have successfully used your code to generate screenshots. One doubt: the saved PNGs do not have an alpha channel. If I try to display a saved PNG over another, I cannot see the image below at all (blend modes do not have any effect). If I do a Cmd-I on the saved PNGs, the Info window shows that there is no alpha channel. Comments?
Has anyone ever worked out the alpha issue? If you're trying to save an image that contains alpha values, you only get shades of grey/black where your image should be transparent.
Try it!
CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast;
Hi,
Thanks for the code.
I just want to extract the pixels from a UIView without going through renderInContext:, which is really too slow. How can I use your code without building a CAEAGLLayer? (I tried to use your code just as it is, but I get a black image with noise at the top…)
Thanks
On line 6 you wrote: glReadPixels(0, 0, 320, 480, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
Where is the data being extracted from? I am in a similar situation: I have an RGB float buffer and I need to convert it into a CGImage.
Can you please tell me how I can accomplish such a thing?
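(A sketch of one possible route, not from the original thread: convert the float buffer to 8-bit RGBA first, then feed it through the same CGDataProviderCreateWithData / CGImageCreate path shown in the post. The name floatBuffer and the assumption that components lie in the 0..1 range are hypothetical.)
// hypothetical: floatBuffer holds width * height * 3 floats (RGB) in the 0..1 range
GLubyte *bytes = (GLubyte *)malloc(width * height * 4);
for (int i = 0; i < width * height; i++) {
    bytes[i * 4 + 0] = (GLubyte)(floatBuffer[i * 3 + 0] * 255.0f); // R
    bytes[i * 4 + 1] = (GLubyte)(floatBuffer[i * 3 + 1] * 255.0f); // G
    bytes[i * 4 + 2] = (GLubyte)(floatBuffer[i * 3 + 2] * 255.0f); // B
    bytes[i * 4 + 3] = 255;                                        // opaque alpha
}
// now hand `bytes` to CGDataProviderCreateWithData exactly as in the post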
Very good example. I’ve been looking for this solution for the last couple days.
Thank you very much.
hi,
not exactly fair, but it still might interest you:
UIApplication has some private methods to make screenshots.
- (struct CGImage *)_createDefaultImageSnapshot;
- (void)_writeApplicationDefaultPNGSnapshot;
Using the first method, saving a screenshot to an arbitrary location is as simple as
- (void)snap
{
    // save-path
    NSArray *dirs = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *path = [dirs objectAtIndex:0];
    // make snapshot
    CGImageRef snpsht = [self _createDefaultImageSnapshot];
    // save image
    NSData *pngdata = UIImagePNGRepresentation([UIImage imageWithCGImage:snpsht]);
    [pngdata writeToFile:[path stringByAppendingPathComponent:@"screenshot.png"] atomically:YES];
    [self _writeApplicationDefaultPNGSnapshot];
}
but again, keep in mind that these are private methods.
Apologies, that last line of code
[self _writeApplicationDefaultPNGSnapshot];
wasn’t supposed to be there…
Thanks for the blog, it helped me a lot. But how do I store a landscape view into an image? It does work for vertical, that is 320 wide and 480 tall, but what about 480-by-320 landscape pictures?
Thanks for sharing. Short and sweet solution. You’ve opened my eyes to a few new tricks.
Hi,
Is there a way we can get the full size image from the openGL?
Thanks!
Hi KP,
It’s really very helpful for me in my project. It saved my effort. Thanks a lot to post such a nice Tutorial.
Pannag Sanket, the code given in this tutorial by KP already captures the full-size image (320×480) for a portrait iPhone screen.
Thanks for figuring this out.
Other commenters are right: you have several pretty serious memory leaks in there. Every method or function whose name includes “new”, “alloc”, “Create” or “copy” requires a balancing release/free. So you need this after creating the UIImage, but before returning it:
CGImageRelease(imageRef);
CGColorSpaceRelease(colorSpaceRef);
CGDataProviderRelease(provider);
free(buffer);
free(buffer2);
My previous post was incorrect on one point: the CGImage continues to rely on buffer2, so you can’t free it immediately.
(It also continues to rely on other things you provide, but they’re reference counted, so they don’t actually get trashed until the CGImage is done with them. But malloc/free has no reference counting.)
A simple solution is to create a callback function that frees the data:
void freeImageData(void *info, const void *data, size_t size) {
    free((void *)data);
}
Then tell the CGDataProvider to call your callback when it’s done with the data:
CGDataProviderCreateWithData(NULL, buffer2, myDataLength, freeImageData);
This seems to work.
Going from screen -> buffer is cool, but I'm wondering how to write back from buffer -> screen? Anyone know?
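(Not answered in the original thread; one common route, sketched under the assumption that width and height are power-of-two sizes or that your GL version allows non-power-of-two textures: upload the buffer as a texture and draw a textured quad.)
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
// ...then draw a quad textured with `tex` that covers the view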
The tutorial above is really helpful. In my app, I needed to take a screenshot of my EAGLView and use it in the app as a UIImage.
It works well with the code, but when I get the image it's a little bit scaled. If I save the image to the Photos album and compare, there is scaling happening to the image.
I use the screenshot as a UIImage at the moment I take it, so there is a jumpy feeling.
Has anyone got any hint to solve this scaling issue?
You should add a flattr button to your blogs! I really would like to donate for your great help!
Great post. There’s all kinds of weird (and sometimes wrong) information on this topic out there – it’s great to have more reliable info here, along with comments that help even more.
Anybody having black-line issues at the top: make sure your EAGL view is the same size as the pixel area you're grabbing with this function.
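(One way to be sure, sketched here rather than taken from the comment: ask GL for the renderbuffer's actual backing dimensions, the same way the EAGLView template does, and pass those to glReadPixels. This assumes the color renderbuffer is currently bound.)
GLint backingWidth, backingHeight;
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);
glReadPixels(0, 0, backingWidth, backingHeight, GL_RGBA, GL_UNSIGNED_BYTE, buffer);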
Thanks a lot! This really helped.
I tried swapping the top and bottom lines directly:
GLubyte *swapBuffer = new GLubyte[width * 4];
for (int y = 0; y < height / 2; y++)
{
    memcpy(swapBuffer, &buffer[(height - 1 - y) * width * 4], width * 4);
    memcpy(&buffer[(height - 1 - y) * width * 4], &buffer[y * width * 4], width * 4);
    memcpy(&buffer[y * width * 4], swapBuffer, width * 4);
}
delete [] swapBuffer;
Hello Taiky,
Here is your code, tweaked a bit (plain malloc/free instead of C++ new/delete):
// the data in the buffer is mirrored in the y direction
int halfHeight = height / 2;
int bytesPerRow = width * 4;
int lastIndex = height - 1;
GLubyte *swapBuffer = (GLubyte *)malloc(bytesPerRow);
for (int i = 0; i < halfHeight; i++)
{
    // swap row i with its mirror row
    memcpy(swapBuffer, &buffer[(lastIndex - i) * bytesPerRow], bytesPerRow);
    memcpy(&buffer[(lastIndex - i) * bytesPerRow], &buffer[i * bytesPerRow], bytesPerRow);
    memcpy(&buffer[i * bytesPerRow], swapBuffer, bytesPerRow);
}
free(swapBuffer);
It turns out that if you encode the UIImage to PNG, all EXIF data is stripped and the orientation is not preserved. That's why your solution works in all cases.
Thanks!
Thank you for this solution! Used it in my app.
However, one issue: my buffer clears to black once I start drawing to it after the save. It doesn't seem to retain what I just saved. Doesn't happen on my iPod (4.0), but happens on a 3GS (4.1).
Anyone know how to prevent this?
Thanks for sharing.
Regarding the alpha channel issue: setting bitmapInfo to kCGImageAlphaLast did the trick for me.
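(For clarity, that means changing this one line in the code above; kCGImageAlphaLast tells Quartz the last byte of each RGBA pixel is non-premultiplied alpha:)
CGBitmapInfo bitmapInfo = kCGImageAlphaLast;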
I have gone through all the comments but did not find a relevant answer.
Can someone please help me out with undo/redo functionality in GLPaint?
I tried it this way: I saved all the points in an array, and on Undo I redraw all the points with the Play: method, but it draws very slowly.
Can somebody please help me out?
thanks for your tutorial.
// then make the uiimage from that
UIImage *myImage = [UIImage imageWithCGImage:imageRef];
====>

This is a very easy way to do the swap (available in iOS 4.0 and later 🙂):

// then make the uiimage from that
UIImage *myImage = [UIImage imageWithCGImage:imageRef scale:1.0f orientation:UIImageOrientationDownMirrored];

From the UIImage docs:

imageWithCGImage:scale:orientation:
Creates and returns an image object with the specified scale and orientation factors.

+ (UIImage *)imageWithCGImage:(CGImageRef)imageRef scale:(CGFloat)scale orientation:(UIImageOrientation)orientation

Parameters:
imageRef: The Quartz image object.
scale: The scale factor to use when interpreting the image data. Specifying a scale factor of 1.0 results in an image whose size matches the pixel-based dimensions of the image. Applying a different scale factor changes the size of the image as reported by the size property.
orientation: The orientation of the image data. You can use this parameter to specify any rotation factors applied to the image.

Return Value: A new image object for the specified Quartz image, or nil if the method could not initialize the image from the specified image reference.

Discussion: This method does not cache the image object. You can use the methods of the Core Graphics framework to create a Quartz image reference.

Availability: Available in iOS 4.0 and later. Declared in UIImage.h.
Thanks for sharing. The rotation is easier using…
UIImage *myImage = [UIImage imageWithCGImage:imageRef scale:1.0f orientation:UIImageOrientationDownMirrored];
Thanks! You gave me a hint that I freed my buffer too early.
I'm working with OpenGL ES using frame buffer objects. I figured out that you can save time using a fragment shader and an (additional) hidden frame buffer to do the mirroring and other resize transformations. Instead of a fragment shader, you could also apply the inverse transformation to the texture vertices to mirror, rotate, and resize it.
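(A sketch of that texture-coordinate trick, with hypothetical vertex data since the comment doesn't include code: flipping the V coordinates of a fullscreen quad draws the texture vertically mirrored for free.)
// V runs 1 -> 0 instead of 0 -> 1, so the texture comes out vertically mirrored
static const GLfloat flippedTexCoords[] = {
    0.0f, 1.0f,
    1.0f, 1.0f,
    0.0f, 0.0f,
    1.0f, 0.0f,
};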
Instead of using nested for loops, use the following to swap the pixels. It will execute in parallel, so as the core count grows in iPads and other iDevices, the swap will get faster and faster. It is probably faster even now than a nested loop.
dispatch_apply(height / 2,
               dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0),
               ^(size_t line) {
    // height/2 means you only iterate over half the image,
    // but you swap twice per iteration 🙂 (the in-place,
    // memory-saving trick mentioned in an earlier comment)
    // row address = line * image width * 4 bytes (RGBA)
    size_t bytesPerRow = (size_t)width * 4;
    GLubyte *top = buffer + line * bytesPerRow;
    GLubyte *bottom = buffer + ((size_t)height - 1 - line) * bytesPerRow;
    for (size_t i = 0; i < bytesPerRow; i++) {
        GLubyte tmp = top[i];
        top[i] = bottom[i];
        bottom[i] = tmp;
    }
});
#sen
About the scaling issue fix: try using width & height multiplied by 2, i.e. change (320 × 480) to (640 × 960).
Hope this helps. I spent over 4 hours on this.
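(A more general form of the same fix, sketched as an assumption since the thread only mentions doubling: multiply by the screen's scale factor, so the same code works on Retina and non-Retina devices.)
CGFloat scale = [[UIScreen mainScreen] scale]; // 2.0 on Retina, 1.0 otherwise
int width = (int)(self.bounds.size.width * scale);
int height = (int)(self.bounds.size.height * scale);
NSInteger myDataLength = width * height * 4;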
It’s worth mentioning that you can do this sort of thing with Core Image on iOS now, and as a bonus it does these sorts of transforms on the GPU, if available.
After you get the pixel data in your buffer, you can create a new CIImage from the raw bytes with [CIImage imageWithBitmapData:bytesPerRow:size:format:colorSpace:] (imageWithData: expects encoded image-file data, not raw pixels) and then apply a standard CGAffineTransform to it.
// need to do this or else you won't have a context to render from GPU->CPU with.
CIContext *cicontext = [CIContext contextWithOptions:nil];
CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
CIImage *img = [CIImage imageWithBitmapData:[NSData dataWithBytes:buffer length:320 * 480 * 4]
                                bytesPerRow:320 * 4
                                       size:CGSizeMake(320, 480)
                                     format:kCIFormatRGBA8
                                 colorSpace:cs];
CGColorSpaceRelease(cs);
CGAffineTransform vflip = CGAffineTransformMake(1, 0, 0, -1, 0, [img extent].size.height);
img = [img imageByApplyingTransform:vflip];
Untested code, but it’s something like that. Then it’s just a matter of converting from CIImage to UIImage. Not too hard.
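(For completeness, a sketch of that last conversion, reusing the cicontext from above: render the CIImage to a CGImage, then wrap it.)
CGImageRef cgimg = [cicontext createCGImage:img fromRect:[img extent]];
UIImage *uiimg = [UIImage imageWithCGImage:cgimg];
CGImageRelease(cgimg); // createCGImage: follows the Create rule, so release it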
Hi Keith,
I am using this awesome code in my project. It worked great, but with the recent release of iOS 6 it doesn't work on the device. It does work in the iOS 6 Simulator. Any idea what could be wrong? I am using an iPhone 4S + iOS 6.
I too see that the code no longer works after updating to iOS 6 on my iPhone 4. The image generated now just looks completely black.
Anyone have any ideas?
I found this thread:
http://stackoverflow.com/questions/12092865/glreadpixel-stopped-working-with-ios6-beta
Setting “preserveBackbuffer” to “YES” did the trick for me upon an initial change/test.
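(That flag name comes from cocos2d-style EAGLView wrappers; in a plain CAEAGLLayer setup, the likely equivalent, sketched here as an assumption, is enabling retained backing in the layer's drawable properties:)
CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithBool:YES], kEAGLDrawablePropertyRetainedBacking,
    kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat,
    nil];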
Has anyone figured it out? It's not working on iPhone 5 with iOS 6; it gives a blank black image.