fovi.utils.fastaugs.transforms
- class fovi.utils.fastaugs.transforms.Compose(transforms)[source]
Bases: object
Composes several transforms together.
- Parameters:
transforms (list of Transform objects) – list of transforms to compose.
Example
>>> transforms.Compose([
>>>     transforms.CenterCrop(10),
>>>     transforms.ToTensor(),
>>> ])
- class fovi.utils.fastaugs.transforms.RandomApply(transforms, p, seed=None, device=None)[source]
Bases: object
Randomly apply transforms with probability p.
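A minimal sketch of how a RandomApply-style wrapper behaves (hypothetical code under that assumption, not the library's source): with probability p every transform runs in order, otherwise the input passes through unchanged.

```python
import random

class RandomApplySketch:
    """Apply all transforms with probability p, else return input unchanged."""

    def __init__(self, transforms, p, seed=None):
        self.transforms = transforms
        self.p = p
        self.rng = random.Random(seed)

    def __call__(self, x):
        if self.rng.random() < self.p:
            for t in self.transforms:
                x = t(x)
        return x
```

For example, `RandomApplySketch([lambda v: v * 2], p=1.0)(3)` always applies the doubling, while `p=0.0` never does.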
- class fovi.utils.fastaugs.transforms.ToNumpy[source]
Bases: object
Converts the given Image to a numpy array.
- class fovi.utils.fastaugs.transforms.ToChannelsFirst[source]
Bases: object
Converts batch from BxHxWxC to BxCxHxW.
- class fovi.utils.fastaugs.transforms.ToChannelsLast[source]
Bases: object
Converts batch from BxCxHxW to BxHxWxC.
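The two channel-order conversions above amount to a `permute` of the batch axes; a sketch of the assumed implementation:

```python
import torch

def to_channels_first(batch: torch.Tensor) -> torch.Tensor:
    # BxHxWxC -> BxCxHxW
    return batch.permute(0, 3, 1, 2).contiguous()

def to_channels_last(batch: torch.Tensor) -> torch.Tensor:
    # BxCxHxW -> BxHxWxC
    return batch.permute(0, 2, 3, 1).contiguous()
```

The two operations are inverses, so round-tripping a batch restores its original layout.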
- class fovi.utils.fastaugs.transforms.ToDevice(device)[source]
Bases: object
Moves tensor to device.
- class fovi.utils.fastaugs.transforms.ToFloat(value)[source]
Bases: object
Converts tensor to float using .float()
- class fovi.utils.fastaugs.transforms.ToFloatDiv(value, dtype=torch.float32)[source]
Bases: object
Converts tensor to float using division.
- class fovi.utils.fastaugs.transforms.MultiSample(transforms, num_copies, return_input=False, clone_input=True)[source]
Bases: object
Applies the transforms multiple times, returning multiple copies of the input.
- Parameters:
transforms – List of transforms
num_copies – number of copies to produce / how many times to run the transforms
return_input – whether to return the input as an output (default = False)
clone_input – whether to clone the input before each application of transforms (default = True)
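The parameters above can be sketched as a small function (a hypothetical reimplementation, not the library's code): the transforms run `num_copies` times, the input is optionally deep-copied before each pass, and the untransformed input can be prepended to the outputs.

```python
import copy

def multi_sample(x, transforms, num_copies, return_input=False, clone_input=True):
    """Run the transform pipeline num_copies times and collect the results."""
    outputs = [x] if return_input else []
    for _ in range(num_copies):
        view = copy.deepcopy(x) if clone_input else x
        for t in transforms:
            view = t(view)
        outputs.append(view)
    return outputs
```

This is the usual multi-view pattern for contrastive / self-supervised pipelines, where each copy receives an independently sampled augmentation.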
- class fovi.utils.fastaugs.transforms.NormalizeGPU(mean, std, inplace=True, device=default_device)[source]
Bases: object
x = (x - mean) / std
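The x = (x - mean) / std step, sketched for a BxCxHxW batch with per-channel statistics (assumed broadcasting behaviour; the actual class may differ in detail):

```python
import torch

def normalize(x: torch.Tensor, mean, std) -> torch.Tensor:
    # reshape per-channel stats to (1, C, 1, 1) so they broadcast over B, H, W
    mean = torch.as_tensor(mean, dtype=x.dtype).view(1, -1, 1, 1)
    std = torch.as_tensor(std, dtype=x.dtype).view(1, -1, 1, 1)
    return (x - mean) / std
```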
- class fovi.utils.fastaugs.transforms.CircularMask(output_size, blur_span=24.0, tol=.0005, device='cpu')[source]
Bases: object
Apply a circular mask to each image.
Works for a single TensorImage or TensorBatch, on cpu or gpu.
- property blur_radius
- class fovi.utils.fastaugs.transforms.ToGrayscaleTorchGPU(num_output_channels=1)[source]
Bases: object
Convert image to Grayscale
- class fovi.utils.fastaugs.transforms.ToGrayscaleGPU(num_output_channels=1)[source]
Bases: object
Convert image to Grayscale
- class fovi.utils.fastaugs.transforms.ColorJitter(p=1.0, hue=0.0, saturation=0.0, value=0.0, contrast=0.0, seed=None, device=None)[source]
Bases: object
Randomly apply (at batch level) jitter to the hue, saturation, value (aka brightness), and contrast of an RGB image or batch.
Parameters are stored for easy replay.
- Parameters:
p (float) – probability that jitter should be applied. If input is a batch, the p operates at the batch level (i.e., either all images are jittered, or none, with probability p). Also, if jitter is applied, each property is jittered by a random value in the range specified (see below for how to set range for each property).
hue (float or tuple of float (min, max)) – range over which to jitter hue. Should have -.5 < hue < .5. hue_factor is chosen uniformly from [-hue, hue] or the given [min, max]. hue=0 for no change, hue=(-.5, .5) for maximum color randomization.
saturation (float or tuple of float (min, max)) – range over which to jitter saturation. saturation_factor is chosen uniformly from [max(0, 1 - saturation), 1 + saturation] or the given [min, max]. Should be non-negative.
value (float or tuple of float (min, max)) – range over which to jitter value / brightness. value_factor is chosen uniformly from [max(0, 1 - value), 1 + value] or the given [min, max]. Should be non-negative.
contrast (float or tuple of float (min, max)) – range over which to jitter contrast in RGB. contrast_factor is chosen uniformly from [max(0, 1 - contrast), 1 + contrast] or the given [min, max]. Should be non-negative.
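The range rules described for these parameters can be sketched as a small helper (hypothetical code illustrating the stated convention): a scalar s becomes the range [max(0, 1 - s), 1 + s] for the multiplicative properties, hue uses the symmetric range [-hue, hue], and an explicit (min, max) pair is used as given.

```python
import torch

def factor_range(param, center_one=True):
    """Resolve a scalar or (min, max) jitter parameter to an explicit range."""
    if isinstance(param, (tuple, list)):
        return tuple(param)
    if center_one:
        # multiplicative factors (saturation, value, contrast) center on 1
        return (max(0.0, 1.0 - param), 1.0 + param)
    # additive factors (hue) center on 0
    return (-param, param)

def sample_factor(param, center_one=True):
    lo, hi = factor_range(param, center_one)
    return lo + (hi - lo) * torch.rand(()).item()
```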
- class fovi.utils.fastaugs.transforms.RandomGaussianBlur(p=.5, kernel_size=6, sigma_range=(.1, 2.), num_sigmas=10, seed=None, device=None)[source]
Bases: object
Gaussian blur augmentation from SimCLR (https://arxiv.org/abs/2002.05709).
- class fovi.utils.fastaugs.transforms.RandomHorizontalFlip(p=.5, seed=None, device=None)[source]
Bases: object
Flip the input horizontally around the y-axis with probability p.
Works for a single TensorImage or TensorBatch, on cpu or gpu, with flipping determined and applied separately for each individual image.
- Parameters:
p (float) – probability of applying the transform. Default: 0.5.
- Targets:
TensorImage, TensorBatch
Not applied to ArrayImage or ArrayBatch because we rely on PyTorch grid_sample, which only works for tensors.
For ArrayImage or ArrayBatch use albumentations HorizontalFlip.
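A minimal sketch of the per-image behaviour described above (the class itself relies on grid_sample; this hypothetical version uses torch.flip for clarity): each image in the batch draws its own Bernoulli(p) coin, and only the selected images are flipped along the width axis.

```python
import torch

def random_hflip(batch: torch.Tensor, p: float = 0.5) -> torch.Tensor:
    """Per-image horizontal flip for a BxCxHxW batch."""
    flip = torch.rand(batch.shape[0]) < p   # one coin per image
    out = batch.clone()
    out[flip] = out[flip].flip(-1)          # flip selected images along W
    return out
```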
- class fovi.utils.fastaugs.transforms.RandomGrayscale(p=.5, num_output_channels=3, seed=None, device=None)[source]
Bases: object
Randomly convert image to Grayscale with probability p.
- class fovi.utils.fastaugs.transforms.RandomBrightness(p=1.0, scale_range=(.6, 1.4), max_value=1.0, seed=None, device=None)[source]
Bases: object
Randomly adjust brightness with probability p, with scale_factor drawn uniformly from scale_range=[min, max] separately for each image in a batch, clamping at maximum brightness max_value.
new_img = img * scale_factor
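The new_img = img * scale_factor step, sketched for a batch (hypothetical code matching the description above): one scale per image, drawn uniformly from scale_range, applied per image with probability p, then clamped at max_value.

```python
import torch

def random_brightness(batch, p=1.0, scale_range=(0.6, 1.4), max_value=1.0):
    b = batch.shape[0]
    lo, hi = scale_range
    scale = lo + (hi - lo) * torch.rand(b)       # per-image scale factor
    apply = (torch.rand(b) < p).float()
    scale = apply * scale + (1 - apply)           # unselected images keep scale 1
    return (batch * scale.view(b, 1, 1, 1)).clamp(max=max_value)
```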
- class fovi.utils.fastaugs.transforms.RandomContrast(p=1.0, scale_range=(.6, 1.4), max_value=1.0, seed=None, device=None)[source]
Bases: object
Randomly adjust contrast with probability p, with scale_factor drawn uniformly from scale_range=[min, max] separately for each image in a batch, clamping at maximum brightness max_value.
new_img = (1 - scale_factor) * img.mean() + scale_factor * img
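A sketch of the standard contrast blend, assuming the usual convex form new_img = (1 - s) * mean + s * img: s = 1 leaves the image unchanged, s = 0 collapses it to its mean, and s > 1 exaggerates deviations from the mean.

```python
import torch

def adjust_contrast(img: torch.Tensor, s: float, max_value: float = 1.0) -> torch.Tensor:
    """Blend an image toward (s < 1) or away from (s > 1) its mean."""
    return ((1 - s) * img.mean() + s * img).clamp(0.0, max_value)
```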
- class fovi.utils.fastaugs.transforms.RandomSolarization(p: float = 0.5, threshold: float = 0.5, seed: int = None, device=None)[source]
Bases: object
Solarize the image randomly with a given probability by inverting all pixel values above a threshold. If img is a Tensor, it is expected to be in [bs, 1 or 3, H, W] format, where bs is the batch size.
- Parameters:
p (float) – probability of the image being solarized. Default: 0.5.
threshold (float) – all pixels equal to or above this value are inverted.
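The inversion step itself is a one-liner; a sketch for images in [0, 1] (assumed value range):

```python
import torch

def solarize(img: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    """Invert pixels at or above threshold; leave the rest untouched."""
    return torch.where(img >= threshold, 1.0 - img, img)
```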
- class fovi.utils.fastaugs.transforms.RandomRotate(p=.5, max_deg=45, x=.5, y=.5, angles=None, pad_mode='zeros', seed=None, device=None)[source]
Bases: object
Randomly rotate image around a point.
Works for a single TensorImage or TensorBatch, on cpu or gpu, with rotation determined and applied separately for each individual image.
- Parameters:
- Targets:
TensorImage, TensorBatch
Not applied to ArrayImage or ArrayBatch because we rely on PyTorch grid_sample, which only works for tensors.
For ArrayImage or ArrayBatch use albumentations Rotate.
- class fovi.utils.fastaugs.transforms.RandomZoom(p=.5, zoom=(.5, 1.0), x=.5, y=.5, pad_mode='zeros', seed=None, device=None)[source]
Bases: object
Randomly zoom image around a point.
Works for a single TensorImage or TensorBatch, on cpu or gpu, with zoom determined and applied separately for each individual image.
- Parameters:
p (float) – probability of applying the transform. Default: 0.5.
- Targets:
TensorImage, TensorBatch
Not applied to ArrayImage or ArrayBatch because we rely on PyTorch grid_sample, which only works for tensors.
For ArrayImage or ArrayBatch use albumentations.
- __init__(p=.5, zoom=(.5, 1.0), x=.5, y=.5, pad_mode='zeros', seed=None, device=None)[source]
p is the probability that each image is zoomed. x, y determine the focal point for the zoom operation, in [0, 1].
Each can be an int or float, in which case the range for x is (x, x) and the range for y is (y, y), or it can be a list/tuple specifying the range x=(left, right), y=(top, bottom).
Default is x=.5, y=.5, which focuses the zoom at the center of the image. To randomly zoom on different points set x=(0, 1), y=(0, 1).
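The x / y argument handling described above can be sketched as follows (hypothetical helpers illustrating the stated convention): a scalar c becomes the degenerate range (c, c), a two-element pair is used as given, and the focal point is drawn uniformly within the resolved ranges.

```python
import random

def as_range(v):
    """Scalar -> degenerate range (v, v); (min, max) pair used as given."""
    if isinstance(v, (int, float)):
        return (float(v), float(v))
    lo, hi = v
    return (float(lo), float(hi))

def sample_focal_point(x=0.5, y=0.5, rng=random):
    x_lo, x_hi = as_range(x)
    y_lo, y_hi = as_range(y)
    return rng.uniform(x_lo, x_hi), rng.uniform(y_lo, y_hi)
```

With the defaults the focal point is always the image center; with x=(0, 1), y=(0, 1) it is drawn anywhere in the image.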
- class fovi.utils.fastaugs.transforms.RandomPatchShuffle(sizes, p: float = 0.5, seed: int = None, img_size=224, device=None)[source]
Bases: object
Randomly shuffle an image, divided into NxN square patches (assumes square images).
Operates over a TensorImage (CxHxW) or TensorImageBatch (BxCxHxW).
- Parameters:
sizes – patch size, or list of patch sizes, as a proportion of the image size.
p (float) – probability of applying the transform. Default: 0.5.
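A sketch of NxN patch shuffling for a square CxHxW image (an assumed approach, not the library's code): cut the image into a grid of patches, permute them, and reassemble.

```python
import torch

def patch_shuffle(img: torch.Tensor, n: int) -> torch.Tensor:
    """Shuffle an n x n grid of patches of a square CxHxW image."""
    c, h, w = img.shape
    ph, pw = h // n, w // n
    # (C, H, W) -> (C, n, ph, n, pw) -> (n*n, C, ph, pw)
    patches = img.reshape(c, n, ph, n, pw).permute(1, 3, 0, 2, 4).reshape(n * n, c, ph, pw)
    patches = patches[torch.randperm(n * n)]
    # invert the reshape to rebuild the image
    return patches.reshape(n, n, c, ph, pw).permute(2, 0, 3, 1, 4).reshape(c, h, w)
```

Every pixel value is preserved; only the positions of whole patches change.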
- class fovi.utils.fastaugs.transforms.RandomColorJitterYIQ(p=.80, hue=0., saturation=0.0, value=0.0, brightness=0.0, contrast=0.0, seed=None, device=None)[source]
Bases: object
Randomly change the hue, saturation, and value of an image using a YIQ conversion for computational efficiency. Brightness and contrast jitter can be applied on the resulting RGB.
It is expected that you will do either YIQ value jitter or RGB brightness jitter, but not both.
The transformation matrix is composed so that all of these operations happen in one step: mat = mat3(brightness) @ mat3(contrast) @ Yiq2Rgb @ hue_mat(h) @ sat_mat(s) @ val_mat(v) @ Rgb2Yiq
- Parameters:
p (float) – probability that jitter should be applied.
hue (float or tuple of float (min, max)) – range over which to jitter hue. Should have 0 <= hue <= 180 or -180 <= min <= max <= 180. hue_factor is chosen uniformly from [-hue, hue] or the given [min, max]. hue=0 for no change, hue=(-180, 180) for maximum color randomization.
saturation (float or tuple of float (min, max)) – range over which to jitter saturation. saturation_factor is chosen uniformly from [max(0, 1 - saturation), 1 + saturation] or the given [min, max]. Should be non-negative.
value (float or tuple of float (min, max)) – range over which to jitter value (brightness) in YIQ. value_factor is chosen uniformly from [max(0, 1 - value), 1 + value] or the given [min, max]. Should be non-negative.
brightness (float or tuple of float (min, max)) – range over which to jitter brightness in RGB. brightness_factor is chosen uniformly from [max(0, 1 - brightness), 1 + brightness] or the given [min, max]. Should be non-negative.
contrast (float or tuple of float (min, max)) – range over which to jitter contrast in RGB. contrast_factor is chosen uniformly from [max(0, 1 - contrast), 1 + contrast] or the given [min, max]. Should be non-negative.
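The single-matrix composition can be sketched as follows (hypothetical code: the matrix values are the standard FCC RGB-to-YIQ coefficients, and only the hue / saturation / value factors are shown). Hue is a rotation of the I/Q plane, saturation scales I and Q, value scales Y; composing everything into one 3x3 matrix means each pixel needs only a single matrix multiply.

```python
import math
import torch

# standard FCC RGB -> YIQ coefficients (assumed; the library may use others)
RGB2YIQ = torch.tensor([[0.299, 0.587, 0.114],
                        [0.5959, -0.2746, -0.3213],
                        [0.2115, -0.5227, 0.3112]])
YIQ2RGB = torch.linalg.inv(RGB2YIQ)

def jitter_matrix(hue_deg=0.0, saturation=1.0, value=1.0):
    """Compose hue rotation and saturation/value scaling into one RGB matrix."""
    h = math.radians(hue_deg)
    hue_mat = torch.tensor([[1.0, 0.0, 0.0],
                            [0.0, math.cos(h), -math.sin(h)],
                            [0.0, math.sin(h), math.cos(h)]])
    sat_val = torch.diag(torch.tensor([value, saturation, saturation]))
    return YIQ2RGB @ hue_mat @ sat_val @ RGB2YIQ
```

With no jitter the composition collapses to (approximately) the identity, and saturation=0 leaves gray pixels unchanged, since they have zero chroma (I = Q = 0).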
- __init__(p=.80, hue=0., saturation=0.0, value=0.0, brightness=0.0, contrast=0.0, seed=None, device=None)[source]
- class fovi.utils.fastaugs.transforms.RandomColorJitter(p=1.0, hue=0.0, saturation=0.0, value=0.0, contrast=0.0, seed=None, device=None)[source]
Bases: object
Randomly apply (per image) jitter to the hue, saturation, value (aka brightness), and contrast of an RGB image or batch.
Parameters are stored for easy replay.
- Parameters:
p (float) – probability that jitter should be applied. If input is a batch, the p operates at the batch level (i.e., either all images are jittered, or none, with probability p). Also, if jitter is applied, each property is jittered by a random value in the range specified (see below for how to set range for each property).
hue (float or tuple of float (min, max)) – range over which to jitter hue. Should have -.5 < hue < .5. hue_factor is chosen uniformly from [-hue, hue] or the given [min, max]. hue=0 for no change, hue=(-.5, .5) for maximum color randomization.
saturation (float or tuple of float (min, max)) – range over which to jitter saturation. saturation_factor is chosen uniformly from [max(0, 1 - saturation), 1 + saturation] or the given [min, max]. Should be non-negative.
value (float or tuple of float (min, max)) – range over which to jitter value / brightness. value_factor is chosen uniformly from [max(0, 1 - value), 1 + value] or the given [min, max]. Should be non-negative.
contrast (float or tuple of float (min, max)) – range over which to jitter contrast in RGB. contrast_factor is chosen uniformly from [max(0, 1 - contrast), 1 + contrast] or the given [min, max]. Should be non-negative.
- class fovi.utils.fastaugs.transforms.RandomRotateObject(p=.5, max_deg=45, ctr_x=.5, ctr_y=.5, scale=(1.0, 2.0), dest_x=(.25, .75), dest_y=(.25, .75), pad_mode='border', seed=None, device=None)[source]
Bases: object
Randomly rotate, rescale, and re-position an object.
Works for a single TensorImage or TensorBatch, on cpu or gpu, with parameters determined and applied separately for each individual image.
- Parameters:
p (float) – probability of applying the transform. Default: 0.5.
- Targets:
TensorImage, TensorBatch
Not applied to ArrayImage or ArrayBatch because we rely on PyTorch grid_sample, which only works for tensors.
For ArrayImage or ArrayBatch use albumentations.