fovi.utils.lora

class fovi.utils.lora.LoRAParam(weight_shape, r: int = 8, alpha: float = 8.0, init: str = 'zeros', device='cuda')[source]

Bases: Module

Parametrization that adds low-rank updates to a weight matrix.

Implements LoRA (Low-Rank Adaptation) by decomposing weight updates as:

W_eff = W + (alpha/r) * (B @ A)

For 2D weights (out, in), applies directly. For Conv weights (out, in, kH, kW), flattens to (out, in*kH*kW), applies BA, then reshapes back.
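The update rule above can be sketched framework-agnostically. The helper below is hypothetical (not part of the library's API); it uses NumPy stand-ins for the base weight and the low-rank factors, whose names mirror the `B` and `A` attributes documented below:

```python
import numpy as np

def lora_effective_weight(W, B, A, alpha):
    # Hypothetical sketch of LoRAParam's math, not the library API.
    r = B.shape[1]
    scaling = alpha / r
    if W.ndim == 2:                      # Linear weight: (out, in)
        return W + scaling * (B @ A)
    # Conv2d weight: (out, in, kH, kW) -> flatten, apply BA, reshape back
    out_dim = W.shape[0]
    W_flat = W.reshape(out_dim, -1)      # (out, in*kH*kW)
    return (W_flat + scaling * (B @ A)).reshape(W.shape)

# Linear example: rank-2 update on a (4, 3) weight
W = np.zeros((4, 3))
B = np.ones((4, 2))
A = np.ones((2, 3))
W_eff = lora_effective_weight(W, B, A, alpha=8.0)
# scaling = 8.0 / 2 = 4.0 and every entry of B @ A is 2, so W_eff is all 8.0
```

Note that for a conv weight, `A` must have `in * kH * kW` columns so that `B @ A` matches the flattened view.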

r

Rank of the low-rank decomposition.

Type:

int

alpha

Scaling factor for the adaptation.

Type:

float

scaling

Computed as alpha / r.

Type:

float

is_conv

Whether this is for a convolutional layer.

Type:

bool

B

Low-rank factor B of shape (out_dim, r).

Type:

nn.Parameter

A

Low-rank factor A of shape (r, in_dim).

Type:

nn.Parameter

__init__(weight_shape, r: int = 8, alpha: float = 8.0, init: str = 'zeros', device='cuda')[source]

Initialize LoRA parametrization.

Parameters:
  • weight_shape (tuple) – Shape of the weight to parametrize. Either (out, in) for Linear or (out, in, kH, kW) for Conv2d.

  • r (int, optional) – Rank of the low-rank matrices. Defaults to 8.

  • alpha (float, optional) – Scaling factor. Defaults to 8.0.

  • init (str, optional) – Initialization strategy. One of:

      – “zeros”: Initialize both A and B to zeros.

      – “kaimingA_zeroB”: Kaiming init for A, zeros for B.

      – “gaussian”: Small Gaussian init for both.

    Defaults to “zeros”.

  • device (str, optional) – Device for the parameters. Defaults to ‘cuda’.
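The three init strategies differ in whether training starts exactly from the base weight: with “zeros” and “kaimingA_zeroB” the product B @ A is zero at initialization, so W_eff == W at step 0, while “gaussian” perturbs it slightly. The NumPy sketch below illustrates this; it is an assumption-laden stand-in (in particular, the exact Kaiming bound and the Gaussian scale used by the library are guesses):

```python
import numpy as np

def init_lora_factors(out_dim, in_dim, r, init="zeros", rng=None):
    # Hypothetical sketch of the documented init strategies.
    rng = rng if rng is not None else np.random.default_rng(0)
    if init == "zeros":
        B = np.zeros((out_dim, r))
        A = np.zeros((r, in_dim))
    elif init == "kaimingA_zeroB":
        # Kaiming-style uniform init for A (bound is an assumption), zeros for B
        bound = np.sqrt(6.0 / in_dim)
        B = np.zeros((out_dim, r))
        A = rng.uniform(-bound, bound, size=(r, in_dim))
    elif init == "gaussian":
        # Small Gaussian init for both factors (scale is an assumption)
        B = rng.normal(0.0, 0.01, size=(out_dim, r))
        A = rng.normal(0.0, 0.01, size=(r, in_dim))
    else:
        raise ValueError(f"unknown init: {init!r}")
    return B, A

B, A = init_lora_factors(4, 3, r=2, init="kaimingA_zeroB")
# B is zeros, so B @ A == 0 and the effective weight starts at the base weight.
```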

forward(W_base: Tensor) → Tensor[source]

Compute the effective weight for the parametrized module.

Parameters:
  • W_base (Tensor) – The frozen base weight.

Returns:
  W_base + (alpha/r) * (B @ A). For convolutional weights, the update is applied to the flattened (out, in*kH*kW) view and the result is reshaped back to the shape of W_base.

fovi.utils.lora.apply_lora(module: Module, param_name: str = 'weight', r: int = 8, alpha: float = 8.0, init: str = 'kaimingA_zeroB', device='cuda')[source]

Adds a LoRA parametrization to module.<param_name>. Freezes the base weight by default. Returns the parametrization object for convenience.

fovi.utils.lora.remove_lora(module: Module, param_name: str = 'weight', merge: bool = True)[source]

Removes the LoRA parametrization from module.<param_name>. If merge=True, the LoRA update is folded into the base weight, so the module keeps the effective weight W + (alpha/r) * (B @ A).
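The merge semantics can be illustrated with plain NumPy: folding the update into the base weight before removal leaves the weight the module computes with unchanged. This is a sketch of the behavior, not the library's implementation:

```python
import numpy as np

rng = np.random.default_rng(42)
W_base = rng.normal(size=(4, 3))
B = rng.normal(size=(4, 2))
A = rng.normal(size=(2, 3))
alpha, r = 8.0, 2
scaling = alpha / r

# Weight seen while the LoRA parametrization is attached
W_eff = W_base + scaling * (B @ A)

# merge=True: fold the update into the base weight before removing
# the parametrization, preserving the effective weight
W_merged = W_base + scaling * (B @ A)
assert np.allclose(W_merged, W_eff)
```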