How to find UIImage Bottleneck
I have an app that uses UIImage objects. Up to this point, I've been using
image objects initialized using something like this:
UIImage *image = [UIImage imageNamed:imageName];
using an image in my app bundle. I've been adding functionality to allow
users to use imagery from the camera or their library using
UIImagePickerController. These images, obviously, can't be in my app
bundle, so I initialize the UIImage object a different way:
UIImage *image = [UIImage imageWithContentsOfFile:pathToFile];
This is done after first resizing the image so that it is similar to the other
files in my app bundle in both pixel dimensions and total bytes, with both
using JPEG format (interestingly, PNG was much slower, even at the same file
size). In other words, the file at pathToFile is of a similar size to an image
in the bundle: the pixel dimensions match, and the compression quality was
chosen so the byte counts were similar.
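The resize-and-save step described above can be sketched roughly like this. The function name, the target size, and the 0.8 JPEG quality are illustrative assumptions, not values from the original post:

```objc
#import <UIKit/UIKit.h>

// Sketch: scale a picked image down to targetSize and write it out
// as JPEG so it matches the bundled images in dimensions and bytes.
// targetSize and the 0.8 quality factor are placeholder assumptions.
UIImage *ResizeAndSave(UIImage *original, CGSize targetSize, NSString *pathToFile)
{
    UIGraphicsBeginImageContextWithOptions(targetSize, YES, 1.0);
    [original drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
    UIImage *resized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    NSData *jpegData = UIImageJPEGRepresentation(resized, 0.8f);
    [jpegData writeToFile:pathToFile atomically:YES];
    return resized;
}
```
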
The app goes through a loop making small pieces from the original image,
among other things that are not relevant to this post. My issue is that
going through the loop using an image created the second way takes much
longer than using an image created the first way.
I realize the first method caches the image, but I don't think that's
relevant unless I'm misunderstanding how the caching works. If caching is the
relevant factor, how can I add it to the second method?
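If caching turns out to matter, one way to approximate what +imageNamed: does internally is to keep file-based images in an NSCache keyed by path. This is a minimal sketch under that assumption; the function name and key scheme are hypothetical, not part of UIKit:

```objc
#import <UIKit/UIKit.h>

// Sketch: cache images loaded from files, keyed by path, so repeated
// loads of the same file reuse the in-memory UIImage the way
// +imageNamed: reuses bundle images. NSCache evicts automatically
// under memory pressure.
static NSCache *imageCache;

UIImage *CachedImageWithContentsOfFile(NSString *pathToFile)
{
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        imageCache = [[NSCache alloc] init];
    });

    UIImage *image = [imageCache objectForKey:pathToFile];
    if (image == nil) {
        image = [UIImage imageWithContentsOfFile:pathToFile];
        if (image != nil) {
            [imageCache setObject:image forKey:pathToFile];
        }
    }
    return image;
}
```

Note that this only avoids re-reading and re-decoding the file on *subsequent* loads; it would not speed up repeated drawing of an image that is already in memory.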
The relevant portion of code that is causing the bottleneck is this:
[image drawInRect:self.imageSquare];
Here, self is a subclass of UIImageView. Its imageSquare property is simply a
CGRect defining where the image gets drawn. This portion is identical for
both methods, so why is the second method so much slower with a similarly
sized UIImage?
Is there something I could be doing differently to optimize this process?
EDIT: I no longer think the issue has to do with the method of
initializing the UIImage object. I just changed access to the bundle file
to use imageWithContentsOfFile and didn't notice a difference in
performance. So there must be something about the structure of the data
that is different. For the sake of discussion, here are some statistics.
Using a file generated in Photoshop:
- Total file size: 1,111,073 bytes
- Image dimensions: 1600 x 1067 pixels
- Time to generate 532 pieces: 4.805 seconds

Using a file generated by the UIImagePickerController and then resized and
saved in iOS:
- Total file size: 419,333 bytes
- Image dimensions: 1200 x 896 pixels
- Time to generate 474 pieces: 28.190 seconds
That's a huge difference, especially considering the second file is smaller,
has fewer pixels, and the loop runs fewer iterations (474 vs. 532).
Could Photoshop be saving with a compression type that is more efficient to
decode than the one iOS uses?
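One possibility worth ruling out is that the JPEG is being re-decoded on every drawInRect: call rather than once. A common workaround is to force the decode up front by drawing the image into a bitmap context in a format Quartz can blit directly, then drawing from the resulting copy inside the loop. This is a hedged sketch of that technique, not the original poster's code:

```objc
#import <UIKit/UIKit.h>

// Sketch: force a UIImage's JPEG data to be decoded once by rendering
// it into a 32-bit BGRA bitmap context, so later drawInRect: calls
// reuse the decompressed bitmap instead of re-decoding the file.
UIImage *DecodedImage(UIImage *image)
{
    CGImageRef cgImage = image.CGImage;
    size_t width  = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // Premultiplied BGRA is a format iOS renders without conversion.
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8,
        width * 4, colorSpace,
        kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little);
    CGColorSpaceRelease(colorSpace);
    if (ctx == NULL) {
        return image;  // fall back to the original on failure
    }

    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);
    CGImageRef decoded = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);

    UIImage *result = [UIImage imageWithCGImage:decoded
                                          scale:image.scale
                                    orientation:image.imageOrientation];
    CGImageRelease(decoded);
    return result;
}
```

Calling this once before the piece-generating loop and drawing from the returned image would show whether repeated JPEG decoding (or a mismatched pixel format in the picker-sourced file) is the bottleneck.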