Saturday, April 14, 2018

Core ML: UIImage from RGBA byte array not fully shown


In combination with Core ML, I am trying to show an RGBA byte array in a UIImage using the following code:

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(bytes, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast);
CFRelease(colorSpace);

CGImageRef cgImage = CGBitmapContextCreateImage(context);
CGContextRelease(context);

UIImage *image = [UIImage imageWithCGImage:cgImage scale:0 orientation:UIImageOrientationUp];
CGImageRelease(cgImage);

dispatch_async(dispatch_get_main_queue(), ^{
    [[self predictionView] setImage:image];
});

I create the image data like this:

uint32_t offset = h * width * 4 + w * 4;
struct Color rgba = colors[highestClass];
bytes[offset + 0] = (rgba.r);
bytes[offset + 1] = (rgba.g);
bytes[offset + 2] = (rgba.b);
bytes[offset + 3] = (255 / 2); // semi transparent
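
For completeness, here is roughly how that per-pixel fill sits inside the full loop. The buffer allocation and the class lookup are simplified; `classAtPixel()` below is just a stand-in for my actual Core ML output lookup:

#include <stdint.h>
#include <stdlib.h>

// `Color` and `colors` are the same idea as above; `classAtPixel()` is a
// placeholder for looking up the winning class in the Core ML output.
struct Color { uint8_t r, g, b; };

static uint8_t *fillRGBABuffer(size_t width, size_t height,
                               const struct Color *colors,
                               size_t (*classAtPixel)(size_t w, size_t h)) {
    // RGBA layout: 4 bytes per pixel, so width * height * 4 bytes in total
    // (1,000,000 bytes for a 500 x 500 image).
    uint8_t *bytes = malloc(width * height * 4);
    for (size_t h = 0; h < height; h++) {
        for (size_t w = 0; w < width; w++) {
            size_t offset = h * width * 4 + w * 4;
            struct Color rgba = colors[classAtPixel(w, h)];
            bytes[offset + 0] = rgba.r;
            bytes[offset + 1] = rgba.g;
            bytes[offset + 2] = rgba.b;
            bytes[offset + 3] = 255 / 2; // semi transparent
        }
    }
    return bytes; // caller frees after creating the UIImage
}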

The image size is 500 px by 500 px. However, the full image is not shown; it looks as if the image is zoomed in by 50%.

I searched for this issue and found others with the same problem. That's why I edited my storyboard and tried different values for the Content Mode; currently I use Aspect Fit. However, the result remains the same.

I also tried to draw a horizontal line in the center of the image to show how much the image is zoomed in. It confirms that the image is zoomed in 50%.

I wrote the same code in Swift, which works fine. See the Swift code and result here:

let offset = h * width * 4 + w * 4
let rgba = colors[highestClass]
bytes[offset + 0] = (rgba.r)
bytes[offset + 1] = (rgba.g)
bytes[offset + 2] = (rgba.b)
bytes[offset + 3] = (255/2) // semi transparent

let image = UIImage.fromByteArray(bytes, width: width, height: height,
               scale: 0, orientation: .up,
               bytesPerRow: width * 4,
               colorSpace: CGColorSpaceCreateDeviceRGB(),
               alphaInfo: .premultipliedLast)

https://github.com/hollance/CoreMLHelpers/blob/master/CoreMLHelpers/UIImage%2BCVPixelBuffer.swift

Swift result screenshot

And below is the wrong result in Objective-C. You can see that it is very pixelated compared to the Swift one. The phone is an iPhone 6s.

What am I missing or doing wrong?

iPhone 6s screenshot

Xcode screenshot

2 Answers

Answer 1

I am trying to show an RGB byte array

Then kCGImageAlphaPremultipliedLast is incorrect. Try switching to kCGImageAlphaNone.

Answer 2

I found my problem. It turned out to have nothing to do with the image code itself. There was a bug: the width and height values of 500 do not fit in a uint8_t. That's why the image was drawn smaller. Very stupid. Changing them to the right values worked.
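
To illustrate (simplified, not my exact code): a value of 500 stored in a uint8_t silently wraps to 244, which matches the roughly 50% zoom I was seeing (244 / 500 ≈ 0.49).

#include <stdint.h>
#include <stdio.h>

int main(void) {
    // Buggy: 500 does not fit in 8 bits, so it wraps to 500 - 256 = 244.
    uint8_t badWidth = 500;
    uint8_t badHeight = 500;

    // Fixed: use a type large enough for the real dimensions.
    size_t width = 500;
    size_t height = 500;

    printf("buggy: %d x %d\n", badWidth, badHeight); // prints 244 x 244
    printf("fixed: %zu x %zu\n", width, height);     // prints 500 x 500
    return 0;
}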
