MPSCNNBinaryKernel(3) MetalPerformanceShaders.framework MPSCNNBinaryKernel(3)
NAME
MPSCNNBinaryKernel
SYNOPSIS
#import <MPSCNNKernel.h>
Inherits MPSKernel.
Instance Methods
(nonnull instancetype) - initWithDevice:
(nullable instancetype) - initWithCoder:device:
(void) - encodeToCommandBuffer:primaryImage:secondaryImage:destinationImage:
(MPSImage *__nonnull) - encodeToCommandBuffer:primaryImage:secondaryImage:
Properties
MPSOffset primaryOffset
MPSOffset secondaryOffset
MTLRegion clipRect
NSUInteger destinationFeatureChannelOffset
MPSImageEdgeMode primaryEdgeMode
MPSImageEdgeMode secondaryEdgeMode
NSUInteger kernelWidth
NSUInteger kernelHeight
NSUInteger primaryStrideInPixelsX
NSUInteger primaryStrideInPixelsY
NSUInteger secondaryStrideInPixelsX
NSUInteger secondaryStrideInPixelsY
BOOL isBackwards
id< MPSNNPadding > padding
id< MPSImageAllocator > destinationImageAllocator
Additional Inherited Members
Detailed Description
This depends on Metal.framework. Describes a convolutional neural
network kernel. An MPSCNNBinaryKernel consumes two MPSImages, primary
and secondary, and produces one MPSImage.
Method Documentation
- (MPSImage * __nonnull) encodeToCommandBuffer: (nonnull id< MTLCommandBuffer >) commandBuffer
                                  primaryImage: (MPSImage *__nonnull) primaryImage
                                secondaryImage: (MPSImage *__nonnull) secondaryImage
Encode an MPSCNNKernel into a command buffer. Create a texture to hold
the result and return it. In the first iteration of this method,
encodeToCommandBuffer:sourceImage:destinationImage:, some work was left
to the developer: correctly setting the offset property and sizing the
result buffer. With the introduction of the padding policy (see the
padding property), the filter can do this work itself. If you would
like some input into what sort of MPSImage is returned (e.g. temporary
vs. regular), what size it is, or where it is allocated, you may set
the destinationImageAllocator to allocate the image yourself.
This method uses the MPSNNPadding padding property to figure out how to
size the result image and to set the offset property. See discussion in
MPSNeuralNetworkTypes.h.
Parameters:
commandBuffer The command buffer
primaryImage An MPSImage to use as the primary source image for
the filter.
secondaryImage An MPSImage to use as the secondary source image
for the filter.
Returns:
A MPSImage or MPSTemporaryImage allocated per the
destinationImageAllocator containing the output of the graph. The
returned image will be automatically released when the command
buffer completes. If you want to keep it around for longer, retain
the image. (ARC will do this for you if you use it later.)
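A minimal usage sketch of this style of encode (assumptions: "kernel"
is a concrete MPSCNNBinaryKernel subclass created elsewhere,
"commandQueue" is an MTLCommandQueue, and "primary" and "secondary"
are valid MPSImage inputs):

    #import <Metal/Metal.h>
    #import <MetalPerformanceShaders/MetalPerformanceShaders.h>

    id<MTLCommandBuffer> commandBuffer = [commandQueue commandBuffer];

    // The padding policy sizes the result and the
    // destinationImageAllocator allocates it.
    MPSImage *result = [kernel encodeToCommandBuffer: commandBuffer
                                        primaryImage: primary
                                      secondaryImage: secondary];

    [commandBuffer commit];
    // result is released when the command buffer completes unless it
    // is retained; ARC retains it for you if you keep using it.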
- (void) encodeToCommandBuffer: (nonnull id< MTLCommandBuffer >) commandBuffer
                  primaryImage: (MPSImage *__nonnull) primaryImage
                secondaryImage: (MPSImage *__nonnull) secondaryImage
              destinationImage: (MPSImage *__nonnull) destinationImage
Encode an MPSCNNKernel into a command buffer. The operation proceeds
out-of-place. This is the older style of encode, which reads the
offset, does not change it, and ignores the padding method.
Parameters:
commandBuffer A valid MTLCommandBuffer to receive the encoded
filter
primaryImage A valid MPSImage object containing the primary source
image.
secondaryImage A valid MPSImage object containing the secondary
source image.
destinationImage A valid MPSImage to be overwritten by the result
image. destinationImage may not alias primaryImage or
secondaryImage.
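A sketch of this explicit-destination style, under the same
assumptions as the previous example; the caller sizes and allocates
the destination because the padding policy is ignored:

    // Size the destination to match the primary source; a real
    // filter may need a different size depending on its kernel
    // size and stride.
    MPSImageDescriptor *desc = [MPSImageDescriptor
        imageDescriptorWithChannelFormat: MPSImageFeatureChannelFormatFloat16
                                   width: primary.width
                                  height: primary.height
                         featureChannels: primary.featureChannels];
    MPSImage *destination =
        [[MPSImage alloc] initWithDevice: commandBuffer.device
                         imageDescriptor: desc];

    [kernel encodeToCommandBuffer: commandBuffer
                     primaryImage: primary
                   secondaryImage: secondary
                 destinationImage: destination];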
- (nullable instancetype) initWithCoder: (NSCoder *__nonnull) aDecoder
                                 device: (nonnull id< MTLDevice >) device
NSSecureCoding compatibility: while the standard
NSSecureCoding/NSCoding method -initWithCoder: should work, the file
cannot know which device your data is allocated on, so we have to
guess and may guess incorrectly. To avoid that problem, use
initWithCoder:device: instead.
Parameters:
aDecoder The NSCoder subclass with your serialized MPSKernel
device The MTLDevice on which to make the MPSKernel
Returns:
A new MPSKernel object, or nil on failure.
Reimplemented from MPSKernel.
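A sketch of unarchiving onto an explicit device ("archiveData" is an
assumed NSData produced earlier with NSKeyedArchiver; in practice the
archive would contain a concrete MPSCNNBinaryKernel subclass):

    id<MTLDevice> device = MTLCreateSystemDefaultDevice();
    NSKeyedUnarchiver *coder =
        [[NSKeyedUnarchiver alloc] initForReadingWithData: archiveData];
    // Decode onto the chosen device instead of letting -initWithCoder:
    // guess which MTLDevice to use.
    MPSCNNBinaryKernel *kernel =
        [[MPSCNNBinaryKernel alloc] initWithCoder: coder device: device];
    [coder finishDecoding];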
- (nonnull instancetype) initWithDevice: (nonnull id< MTLDevice >) device
Standard init with default properties per filter type
Parameters:
device The device that the filter will be used on. May not be NULL.
Returns:
A pointer to the newly initialized object. This will fail,
returning nil if the device is not supported. Devices must be
MTLFeatureSet_iOS_GPUFamily2_v1 or later.
Reimplemented from MPSKernel.
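A sketch of creating a kernel for a specific device. MPSCNNBinaryKernel
is normally used through a concrete subclass, so "MyBinaryKernel" below
is a hypothetical stand-in:

    id<MTLDevice> device = MTLCreateSystemDefaultDevice();
    if (!MPSSupportsMTLDevice(device)) {
        // The device does not meet the minimum feature set;
        // -initWithDevice: would return nil here.
    }
    MyBinaryKernel *kernel = [[MyBinaryKernel alloc] initWithDevice: device];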
Property Documentation
- clipRect [read], [write], [nonatomic], [assign]
An optional clip rectangle to use when writing data. Only the pixels
within the rectangle will be overwritten: the MTLRegion indicates
which part of the destination to overwrite. If the clipRect does not
lie completely within the destination image, the intersection of the
clip rectangle and the destination bounds is used. Default:
MPSRectNoClip (MPSKernel::MPSRectNoClip), indicating the entire image.
clipRect.origin.z is the index of starting destination image in batch
processing mode. clipRect.size.depth is the number of images to process
in batch processing mode.
See Also: MPSKernel clipRect
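For example, to overwrite only a sub-rectangle of the destination
while processing part of a batch (a sketch; the sizes are arbitrary
and "kernel" is assumed as in the earlier encode example):

    // Write a 64x64 region starting at (16, 16), processing 4 images
    // of the batch starting at destination image index 2.
    kernel.clipRect = MTLRegionMake3D(16, 16, 2, 64, 64, 4);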
- destinationFeatureChannelOffset [read], [write], [nonatomic], [assign]
The number of channels in the destination MPSImage to skip before
writing output. This is the starting offset into the destination image
in the feature channel dimension at which destination data is written.
This allows an application to pass a subset of all the channels in an
MPSImage as the output of the MPSKernel. E.g. suppose an MPSImage has
24 channels and an MPSKernel outputs 8 channels. If we want channels
8 to 15 of this MPSImage to be used as output, we can set
destinationFeatureChannelOffset = 8. Note that this offset applies
independently to each image when the MPSImage is a container for
multiple images and the MPSCNNKernel is processing multiple images
(clipRect.size.depth > 1). The default value is 0 and any value
specified must be a multiple of 4. If the MPSKernel outputs N
channels, the destination image MUST have at least
destinationFeatureChannelOffset + N channels. Using a destination
image with an insufficient number of feature channels results in an
error. E.g. if the MPSCNNConvolution outputs 32 channels and the
destination has 64 channels, then it is an error to set
destinationFeatureChannelOffset > 32.
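For example (a sketch, assuming "kernel" outputs 8 feature channels
and "destination" has at least 16):

    // Write the kernel's 8 output channels into feature channels
    // 8..15 of the destination image.
    kernel.destinationFeatureChannelOffset = 8;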
- (id<MPSImageAllocator>) destinationImageAllocator [read], [write],
[nonatomic], [retain]
The method used to allocate the result image for
-encodeToCommandBuffer:primaryImage:secondaryImage:. Default:
defaultAllocator (MPSTemporaryImage).
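For example, to have -encodeToCommandBuffer:primaryImage:secondaryImage:
return a regular MPSImage rather than the default MPSTemporaryImage (a
sketch, assuming "kernel" as above):

    kernel.destinationImageAllocator = [MPSImage defaultAllocator];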
- isBackwards [read], [nonatomic], [assign]
YES if the filter operates backwards. This influences how
strideInPixelsX/Y should be interpreted.
- kernelHeight [read], [nonatomic], [assign]
The height of the MPSCNNKernel filter window. This is the vertical
diameter of the region read by the filter for each result pixel. If
the MPSCNNKernel does not have a filter window, then 1 will be
returned.
- kernelWidth [read], [nonatomic], [assign]
The width of the MPSCNNKernel filter window. This is the horizontal
diameter of the region read by the filter for each result pixel. If
the MPSCNNKernel does not have a filter window, then 1 will be
returned.
- padding [read], [write], [nonatomic], [retain]
The padding method used by the filter. This influences how
strideInPixelsX/Y should be interpreted. Default:
MPSNNPaddingMethodAlignCentered |
MPSNNPaddingMethodAddRemainderToTopLeft | MPSNNPaddingMethodSizeSame.
Some object types (e.g. MPSCNNFullyConnected) may override this
default with something appropriate to their operation.
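For example, to request a "valid-only" result size instead of the
default "same" sizing (a sketch using the MPSNNDefaultPadding
convenience class):

    kernel.padding = [MPSNNDefaultPadding paddingWithMethod:
                          MPSNNPaddingMethodAlignCentered |
                          MPSNNPaddingMethodAddRemainderToTopLeft |
                          MPSNNPaddingMethodSizeValidOnly];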
- primaryEdgeMode [read], [write], [nonatomic], [assign]
The MPSImageEdgeMode to use when texture reads stray off the edge of
the primary source image. Most MPSKernel objects can read off the
edge of the source image. This can happen because of a negative
offset property, because offset + clipRect.size is larger than the
source image, or because the filter looks at neighboring pixels, such
as a convolution filter. Default: MPSImageEdgeModeZero.
See Also: MPSKernelEdgeMode
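For example, to clamp out-of-bounds reads of the primary source to
the nearest edge pixel instead of returning zero (a sketch):

    kernel.primaryEdgeMode = MPSImageEdgeModeClamp;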
- primaryOffset [read], [write], [nonatomic], [assign]
The position of the destination clip rectangle origin relative to the
primary source buffer. The offset is defined to be the position of
clipRect.origin in source coordinates. Default: {0,0,0}, indicating
that the top left corners of the clipRect and primary source image
align. offset.z is the index of starting source image in batch
processing mode.
See Also: MPSOffset
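For example, to begin reading the primary source 4 pixels in from its
top left corner, starting at source image index 1 of a batch (a
sketch):

    kernel.primaryOffset = (MPSOffset){ .x = 4, .y = 4, .z = 1 };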
- primaryStrideInPixelsX [read], [nonatomic], [assign]
The downsampling (or upsampling, if a backwards filter) factor in the
horizontal dimension for the primary source image. If the filter does
not do up or downsampling, 1 is returned.
- primaryStrideInPixelsY [read], [nonatomic], [assign]
The downsampling (or upsampling, if a backwards filter) factor in the
vertical dimension for the primary source image. If the filter does
not do up or downsampling, 1 is returned.
- secondaryEdgeMode [read], [write], [nonatomic], [assign]
The MPSImageEdgeMode to use when texture reads stray off the edge of
the secondary source image. Most MPSKernel objects can read off the
edge of the source image. This can happen because of a negative
offset property, because offset + clipRect.size is larger than the
source image, or because the filter looks at neighboring pixels, such
as a convolution filter. Default: MPSImageEdgeModeZero.
See Also: MPSKernelEdgeMode
- secondaryOffset [read], [write], [nonatomic], [assign]
The position of the destination clip rectangle origin relative to the
secondary source buffer. The offset is defined to be the position of
clipRect.origin in source coordinates. Default: {0,0,0}, indicating
that the top left corners of the clipRect and secondary source image
align. offset.z is the index of starting source image in batch
processing mode.
See Also: MPSOffset
- secondaryStrideInPixelsX [read], [nonatomic], [assign]
The downsampling (or upsampling, if a backwards filter) factor in the
horizontal dimension for the secondary source image. If the filter
does not do up or downsampling, 1 is returned.
- secondaryStrideInPixelsY [read], [nonatomic], [assign]
The downsampling (or upsampling, if a backwards filter) factor in the
vertical dimension for the secondary source image. If the filter does
not do up or downsampling, 1 is returned.
Author
Generated automatically by Doxygen for
MetalPerformanceShaders.framework from the source code.
Version MetalPerformanceShaders Thu Jul 13 2017 MPSCNNBinaryKernel(3)