Why does glReadPixels() fail in this code on iOS 6.0?

The following is the code I use to read an image from an OpenGL ES scene:

    -(UIImage *)getImage{
        GLint width;
        GLint height;
        glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &width);
        glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &height);
        NSLog(@"%d %d", width, height);

        NSInteger myDataLength = width * height * 4;

        // allocate array and read pixels into it.
        GLubyte *buffer = (GLubyte *) malloc(myDataLength);
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

        // gl renders "upside down" so swap top to bottom into new array.
        // there's gotta be a better way, but this works.
        GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
        for(int y = 0; y < height; y++) {
            for(int x = 0; x < width * 4; x++) {
                buffer2[((height - 1) - y) * width * 4 + x] = buffer[y * 4 * width + x];
            }
        }

        // make data provider with data.
        CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);

        // prep the ingredients
        int bitsPerComponent = 8;
        int bitsPerPixel = 32;
        int bytesPerRow = 4 * width;
        CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
        CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
        CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

        // make the cgimage
        CGImageRef imageRef = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow,
                                            colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);

        // then make the uiimage from that
        UIImage *myImage = [UIImage imageWithCGImage:imageRef];

        CGImageRelease(imageRef);
        CGDataProviderRelease(provider);
        CGColorSpaceRelease(colorSpaceRef);
        free(buffer);
        free(buffer2);

        return myImage;
    }

This works on iOS 5.x and lower, but on iOS 6.0 it now returns a black image. Why does glReadPixels() fail on iOS 6.0?

    CAEAGLLayer *eaglLayer = (CAEAGLLayer *) self.layer;
    eaglLayer.drawableProperties = @{
        kEAGLDrawablePropertyRetainedBacking: [NSNumber numberWithBool:YES],
        kEAGLDrawablePropertyColorFormat: kEAGLColorFormatRGBA8
    };

Set

    kEAGLDrawablePropertyRetainedBacking = YES

(I don't know why this advice works …)
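For context: on iOS 6 the contents of a CAEAGLLayer's color renderbuffer are no longer guaranteed to survive -presentRenderbuffer: unless retained backing is enabled, so a later glReadPixels reads an undefined (typically black) buffer. If you would rather not pay the extra memory cost of a retained backing, another option is to read the pixels before presenting the frame. Below is a minimal sketch of that approach, assuming a typical render loop and the getImage method from the question; framebuffer, colorRenderbuffer, captureRequested and capturedImage are illustrative names, not part of the original code.

    - (void)drawFrame {
        [EAGLContext setCurrentContext:self.context];
        glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

        // ... issue the OpenGL ES draw calls for this frame ...

        // Read back while the renderbuffer still holds this frame's pixels.
        // After -presentRenderbuffer: the backing may be discarded on iOS 6
        // unless kEAGLDrawablePropertyRetainedBacking is YES.
        if (captureRequested) {
            capturedImage = [self getImage];   // getImage as defined in the question
            captureRequested = NO;
        }

        glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
        [self.context presentRenderbuffer:GL_RENDERBUFFER];
    }

Reading before present avoids the memory overhead of a retained backing, but it means the capture has to be requested from within the render loop rather than at an arbitrary moment.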

Try this method to get your screenshot image. The output image is Mailimage.

    - (UIImage*)screenshot
    {
        // Create a graphics context with the target size
        // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
        // On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
        CGSize imageSize = [[UIScreen mainScreen] bounds].size;
        if (NULL != UIGraphicsBeginImageContextWithOptions)
            UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
        else
            UIGraphicsBeginImageContext(imageSize);

        CGContextRef context = UIGraphicsGetCurrentContext();

        // Iterate over every window from back to front
        for (UIWindow *window in [[UIApplication sharedApplication] windows])
        {
            if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen])
            {
                // -renderInContext: renders in the coordinate space of the layer,
                // so we must first apply the layer's geometry to the graphics context
                CGContextSaveGState(context);
                // Center the context around the window's anchor point
                CGContextTranslateCTM(context, [window center].x, [window center].y);
                // Apply the window's transform about the anchor point
                CGContextConcatCTM(context, [window transform]);
                // Offset by the portion of the bounds left of and above the anchor point
                CGContextTranslateCTM(context,
                                      -[window bounds].size.width * [[window layer] anchorPoint].x,
                                      -[window bounds].size.height * [[window layer] anchorPoint].y);

                // Render the layer hierarchy to the current context
                [[window layer] renderInContext:context];

                // Restore the context
                CGContextRestoreGState(context);
            }
        }

        // Retrieve the screenshot image
        Mailimage = UIGraphicsGetImageFromCurrentImageContext();

        UIGraphicsEndImageContext();

        return Mailimage;
    }
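Since the method stores its result in Mailimage (presumably an instance variable), a hypothetical call site for e-mailing the capture could look like the sketch below. MFMailComposeViewController and UIImagePNGRepresentation are standard MessageUI/UIKit APIs, but the method name and delegate wiring here are only illustrative.

    #import <MessageUI/MessageUI.h>

    - (void)sendScreenshotByMail {
        // Capture the current screen and encode it as PNG data.
        UIImage *capture = [self screenshot];
        NSData *pngData = UIImagePNGRepresentation(capture);

        // Attach the capture to a mail composer (the presenting class is assumed
        // to adopt MFMailComposeViewControllerDelegate).
        MFMailComposeViewController *mailVC = [[MFMailComposeViewController alloc] init];
        mailVC.mailComposeDelegate = self;
        [mailVC addAttachmentData:pngData mimeType:@"image/png" fileName:@"screenshot.png"];
        [self presentViewController:mailVC animated:YES completion:nil];
    }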