fovi.sensing.manifold

class fovi.sensing.manifold.CorticalSensorManifold(cmf_a, fov, k=10)[source]

Bases: object

3-D cortical sensor manifold based on Rovamo and Virsu (1984); see also Motter (2009). Relevant coordinate systems:

  • \((x, y)\) -> visual cartesian coordinates

  • \((r, \theta)\) -> visual polar coordinates

  • \((\rho, z, \phi)\) -> cortical cylindrical coordinates

  • \((x_c, y_c, z)\) -> cortical cartesian coordinates

We use the magnification function: \(M(r)=\frac{k}{r+a}\), where:

  • \(k\): scaling factor giving a good match to cortical mm; irrelevant for foveated sampling.

  • \(a\): critical parameter controlling magnification; smaller values give stronger magnification/foveation.

Due to our choice of magnification function, this is essentially a 3D extension of the complex logarithmic map (Schwartz, 1980). Both preserve local isotropy; unlike the Schwartz (1980) model, however, our 3D version also preserves global/meridional isotropy, since there is no warping due to flattening.
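The magnification function itself is simple enough to sketch directly. The helper below is illustrative (not the library's `m` method) and just evaluates \(M(r)=k/(r+a)\):

```python
import numpy as np

def cmf(r, a, k=10.0):
    """Cortical magnification M(r) = k / (r + a), in mm/deg.

    r: eccentricity in degrees; a: foveation parameter in degrees
    (smaller a -> stronger magnification); k: scale factor in mm.
    """
    return k / (r + a)

# Magnification is maximal at the fovea and falls off with eccentricity:
r = np.linspace(0.0, 20.0, 5)
print(cmf(r, a=0.5))                        # monotonically decreasing
print(cmf(0.0, a=0.25) > cmf(0.0, a=0.5))   # smaller a -> stronger foveal magnification
```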

__init__(cmf_a, fov, k=10)[source]
Parameters:
  • cmf_a (float) – the \(a\) parameter of the cortical magnification function (CMF), in degrees

  • fov (float) – visual field size in degrees

  • k (float) – scaling factor giving a good match to cortical mm; irrelevant for foveated sampling.

m(r)[source]

cortical magnification as a function of eccentricity r

Parameters:

r (float) – visual radius

Returns:

cortical magnification in mm/deg, evaluated at r

Return type:

float

rho_3d(r)[source]

compute cortical radius in 3d cylindrical coordinates: \(m(r)\sin(r)\)

Counterintuitively from the equation, the units are mm rather than mm/rad:

  • this is because there is a leftover radian term from the derivation

  • \(\rho=m(r)\sin(r)\,d\theta/d\phi\). While \(d\theta\) and \(d\phi\) cancel out, \(d\theta\) is in radians while \(d\phi\) is unitless.

  • \(d\phi\) is unitless because it is part of an infinitesimal distance \(\rho\,d\phi\) along the cortical surface that is in units of mm. \(\rho\) keeps the mm units and \(d\phi\) is unitless.

  • We use \(m(r)\) in mm/deg: thus \(m(r)\sin(r)\) -> mm*rad/deg, so we convert to mm by multiplying by \(180/\pi\) deg/rad.
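The unit bookkeeping above can be sketched in a few lines. This is an illustrative stand-in for `rho_3d`, assuming the documented formula with the \(180/\pi\) deg/rad conversion:

```python
import numpy as np

def rho_3d(r, a, k=10.0):
    """Cortical cylindrical radius rho(r) = M(r) * sin(r), in mm.

    M(r) = k/(r+a) is in mm/deg; M(r)*sin(r) comes out in mm*rad/deg
    because of the leftover radian term, so multiply by 180/pi
    (deg/rad) to land in mm.
    """
    m = k / (r + a)                                   # mm/deg
    return m * np.sin(np.deg2rad(r)) * 180.0 / np.pi  # mm

print(rho_3d(0.0, a=0.5))   # 0.0 at the fovea
```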

Parameters:

r (float) – visual radius

Returns:

cortical radius in mm

Return type:

float

phi_3d(theta)[source]

cortical phi in 3d cylindrical coordinates. \(\phi=\theta\)

Parameters:

theta (float) – Visual polar angle (in radians).

Returns:

Cortical cylindrical coordinate \(\phi\) (in radians).

Return type:

float

dm_dr(r)[source]

derivative of cortical magnification function with respect to radius (\(dm/dr\))

Parameters:

r (float) – Visual radius in degrees

Returns:

\(dm/dr\) in mm/deg^2

Return type:

float
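Since \(M(r)=k/(r+a)\), the derivative has the closed form \(dM/dr=-k/(r+a)^2\). A sketch (not the library code), checked against a finite difference:

```python
import numpy as np

def dm_dr(r, a, k=10.0):
    """Analytic derivative of M(r) = k/(r+a): dM/dr = -k/(r+a)^2, in mm/deg^2."""
    return -k / (r + a) ** 2

# Sanity-check against a central finite difference:
r, a, h = 3.0, 0.5, 1e-6
m = lambda x: 10.0 / (x + a)
fd = (m(r + h) - m(r - h)) / (2 * h)
print(abs(dm_dr(r, a) - fd) < 1e-5)   # True
```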

drho_dr(r)[source]

derivative of cortical cylindrical radius with respect to visual field radius (\(d\rho/dr\))

Parameters:

r (float) – Visual radius in degrees

Returns:

\(d\rho/dr\) in mm/deg

Return type:

float

z_integrand(r)[source]

integrand for cortical z in 3d cylindrical coordinates: \((m(r)^2 - (d\rho/dr)^2)^{0.5}\)

Parameters:

r (float) – Visual radius in degrees

Returns:

\((m(r)^2 - (d\rho/dr)^2)^{0.5}\) in mm/deg

Return type:

float

z_3d(r)[source]

cortical z in 3d cylindrical coordinates, computed by numerical integration using interpolation over a fine mesh of precomputed values

Parameters:

r (float) – Visual radius in degrees

Returns:

cortical z in mm

Return type:

float
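The precompute-then-interpolate scheme can be sketched as follows. This is an assumption-laden illustration, not the library's implementation: \(d\rho/dr\) is taken numerically via `np.gradient` (the library may use the analytic form), and the cumulative integral uses the trapezoid rule:

```python
import numpy as np

def precompute_z(a, fov, k=10.0, n=10_000):
    """Sketch of z_3d: integrate dz/dr = sqrt(M(r)^2 - (d rho/dr)^2)
    once on a fine mesh, then answer queries by interpolation."""
    rs = np.linspace(0.0, fov / 2, n)
    m = k / (rs + a)                                    # mm/deg
    rho = m * np.sin(np.deg2rad(rs)) * 180.0 / np.pi    # mm
    drho_dr = np.gradient(rho, rs)                      # mm/deg
    integrand = np.sqrt(np.clip(m**2 - drho_dr**2, 0.0, None))
    # Cumulative trapezoid rule, with z(0) = 0:
    z = np.concatenate(
        [[0.0], np.cumsum(np.diff(rs) * 0.5 * (integrand[1:] + integrand[:-1]))]
    )
    return lambda r: np.interp(r, rs, z)                # z_3d(r) in mm

z_3d = precompute_z(a=0.5, fov=40.0)
print(z_3d(0.0))              # 0.0: the fovea sits at the base
print(z_3d(10.0) > z_3d(5.0)) # True: z grows with eccentricity
```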

map_3d(x, y)[source]

map cartesian \((x,y)\) coordinates to 3D cortical cylindrical coordinates \((\rho,z,\phi)\)

Parameters:
  • x (float) – visual cartesian x coordinate

  • y (float) – visual cartesian y coordinate

Returns:

3D cortical cylindrical coordinates \((\rho, z, \phi)\)

Return type:

rho_z_phi (tuple[float, float, float])

map_to_xyz(rho_z_phi)[source]

map 3d cortical cylindrical coordinates to cortical cartesian coordinates for plotting convenience

Parameters:

rho_z_phi (tuple) – 3D cortical cylindrical coordinates

Returns:

A tuple containing:
  • x_c (float): cortical cartesian x coordinate

  • y_c (float): cortical cartesian y coordinate

  • z (float): cortical cartesian z coordinate

Return type:

result (tuple)
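This is the standard cylindrical-to-cartesian change of coordinates, with \(z\) passing through unchanged. A minimal stand-in:

```python
import numpy as np

def map_to_xyz(rho_z_phi):
    """(rho, z, phi) -> (x_c, y_c, z): x_c = rho cos(phi), y_c = rho sin(phi)."""
    rho, z, phi = rho_z_phi
    return rho * np.cos(phi), rho * np.sin(phi), z

print(map_to_xyz((1.0, 2.0, 0.0)))   # (1.0, 0.0, 2.0)
```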

normalize_coords(coords)[source]

normalize coordinates to lie between -1 and 1

Parameters:

coords (np.ndarray) – un-normalized coordinates (n, d)

Returns:

normalized coordinates (n, d)

Return type:

np.ndarray
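One plausible implementation rescales each column's min/max to \([-1, 1]\); the library's exact convention (per-dimension vs. shared scale, handling of constant columns) may differ:

```python
import numpy as np

def normalize_coords(coords):
    """Rescale each column of an (n, d) array to [-1, 1] independently."""
    lo = coords.min(axis=0)
    hi = coords.max(axis=0)
    return 2.0 * (coords - lo) / (hi - lo) - 1.0

pts = np.array([[0.0, 10.0], [5.0, 20.0], [10.0, 30.0]])
print(normalize_coords(pts))   # each column now spans [-1, 1]
```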

cort_cartesian_to_cort_cylindrical(x_y_z)[source]

Map cortical cartesian coordinates to cortical cylindrical coordinates for reverse mapping

Parameters:

x_y_z (tuple) – cortical cartesian coordinates \((x_c, y_c, z)\)

Returns:

A tuple containing:
  • rho (float): \(\rho\) in cortical cylindrical coordinates

  • z (float): z in cortical cylindrical coordinates

  • phi (float): \(\phi\) in cortical cylindrical coordinates

Return type:

result (tuple)
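The reverse map is the usual cartesian-to-cylindrical conversion: \(\rho=\sqrt{x_c^2+y_c^2}\), \(\phi=\operatorname{atan2}(y_c, x_c)\), \(z\) unchanged. A sketch with a round-trip check against the forward map:

```python
import numpy as np

def cort_cartesian_to_cort_cylindrical(x_y_z):
    """(x_c, y_c, z) -> (rho, z, phi): rho = hypot(x, y), phi = atan2(y, x)."""
    x, y, z = x_y_z
    return np.hypot(x, y), z, np.arctan2(y, x)

# Round trip with the forward map x = rho cos(phi), y = rho sin(phi):
rho, z, phi = 3.0, 1.5, 0.7
x, y = rho * np.cos(phi), rho * np.sin(phi)
print(cort_cartesian_to_cort_cylindrical((x, y, z)))   # ~ (3.0, 1.5, 0.7)
```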

vis_cartesian_to_cort_cartesian(x_y)[source]

Map visual cartesian coordinates to cortical cartesian coordinates

Parameters:

x_y (np.ndarray) – visual cartesian coordinates \((x,y)\)

Returns:

cortical cartesian coordinates \((x_c, y_c, z)\)

Return type:

np.ndarray

r_from_z(z)[source]

Use a numerical root-finding method to determine eccentricity from the cortical z coordinate

Parameters:

z (float) – cortical z coordinate in mm

Returns:

visual eccentricity (radius) in degrees

Return type:

float
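Because \(z(r)\) is monotonically increasing, any bracketing root-finder works. A sketch using plain bisection, with a hypothetical monotone `z_of_r` standing in for the manifold's `z_3d` (the library may use a different solver, e.g. from `scipy.optimize`):

```python
import numpy as np

def r_from_z(z_target, z_of_r, r_max, tol=1e-8):
    """Recover eccentricity r from cortical z by bisection on [0, r_max],
    relying on z(r) being monotonically increasing."""
    lo, hi = 0.0, r_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if z_of_r(mid) < z_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example with a monotone stand-in for z_3d:
z_of_r = lambda r: np.log1p(r)                   # strictly increasing
r = r_from_z(z_of_r(7.0), z_of_r, r_max=20.0)
print(round(r, 6))   # 7.0
```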

cylindrical_to_visual_polar(rho_z_phi)[source]

reverse map from cortical cylindrical coordinates to visual polar coordinates

Parameters:

rho_z_phi (tuple) – tuple \((\rho, z, \phi)\) of cortical cylindrical coordinates

Returns:

A tuple containing:
  • r (float): visual eccentricity (radius) in degrees

  • phi (float): visual polar angle in radians

Return type:

result (tuple)

init_visual_mesh(rs=None)[source]

initialize a grid of visual points to specify the bounds of the 3d mesh before sampling points on it

Parameters:

rs (np.ndarray, optional) – array of r values to test.

Returns:

A tuple containing:
  • grid_pts_3d_xyz (np.ndarray): (n, 3) array of cortical cartesian coordinates

  • grid_pts_polar (np.ndarray): (n, 2) array of visual polar coordinates

Return type:

result (tuple)

init_cortical_mesh(grid_pts_3d_xyz, num_coords=10000)[source]

initialize a 3d mesh of approximately evenly spaced cortical points

NOTE:

  • this is not evenly-spaced enough to be useful, but we keep it around for reference

  • instead, we define the mesh by sampling visual radii according to the CMF, and performing locally isotropic sampling of angles.

  • this is equivalent to uniform sampling on the cortical sensor manifold

  • the sensor manifold is thus used primarily for receptive field sampling once the mesh has been defined

Parameters:
  • grid_pts_3d_xyz (np.ndarray) – (n,3) array of cortical cartesian coordinates used as a starting point to determine bounds

  • num_coords (int) – number of coordinates to generate on the surface

Returns:

(n,3) array of cortical cartesian coordinates that are approximately evenly spaced

Return type:

np.ndarray
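One way to realize "sampling visual radii according to the CMF" is inverse-transform sampling with density proportional to \(M(r)=k/(r+a)\) on \([0, R]\), \(R=\mathrm{fov}/2\); the library's exact scheme may differ. The CDF is \(F(r)=\ln((r+a)/a)/\ln((R+a)/a)\), which inverts to \(r=a\,((R+a)/a)^u-a\) for \(u\sim U(0,1)\):

```python
import numpy as np

def sample_radii_cmf(n, a, fov, seed=0):
    """Sample eccentricities with density proportional to M(r) = k/(r+a)
    on [0, fov/2], via inverse-transform sampling (illustrative sketch;
    note k cancels out of the normalized density)."""
    rng = np.random.default_rng(seed)
    R = fov / 2
    u = rng.random(n)
    return a * ((R + a) / a) ** u - a

rs = sample_radii_cmf(100_000, a=0.5, fov=40.0)
# Samples concentrate near the fovea: the median is far below R/2 = 10.
print(np.median(rs) < 10.0)   # True
```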

reverse_map(mesh_points_xyz)[source]

reverse map from cortical cartesian coordinates to visual cartesian and polar coordinates

Parameters:

mesh_points_xyz (np.ndarray) – (n,3) cortical cartesian mesh pts

Returns:

A tuple containing:
  • cartesian_visual (np.ndarray): (n, 2) visual cartesian mesh pts

  • polar_visual (np.ndarray): (n, 2) visual polar mesh pts

Return type:

result (tuple)

fovi.sensing.manifold.vis_cartesian_to_cortical_cartesian_coords(cartesian_coords, cmf_a, fov, as_tensor=False, device='cpu', k=10)[source]
  • Map visual cartesian coordinates to 3d cortical cartesian coordinates using the 3D cortical model.

  • We use this when coordinates are sampled elsewhere, and we want to map them into the 3d model for receptive field sampling.

  • This is the main function we use to implement the cortical sensor manifold, in combination with isotropically magnified visual field sampling

Note: the CMF is \(M(r)=\frac{k}{r+a}\)

Parameters:
  • cartesian_coords (np.ndarray) – (n,2) visual cartesian coordinates in (x,y) format

  • cmf_a (float) – \(a\) value in the CMF, in degrees

  • fov (float) – field-of-view in degrees

  • as_tensor (bool, optional) – whether to return as a tensor. Defaults to False.

  • device (str or torch.device, optional) – if as_tensor=True, which device to use

  • k (float, optional) – scaling value for CMF

Returns:

(n, 3) array of cortical cartesian points \((x_c, y_c, z)\)

Return type:

np.ndarray or torch.Tensor

fovi.sensing.manifold.vis_cartesian_to_cortical_cylindrical(cartesian_coords, cmf_a, fov, as_tensor=False, device='cpu', k=10)[source]

Map visual cartesian coordinates to cortical cylindrical coordinates using the 3D cortical model.

  • This is rarely used, since the cortical cartesian coordinates are typically more useful

  • We use this when coordinates are sampled elsewhere, and we want to map them into the 3D model for receptive field sampling.

Note: the CMF is \(M(r)=\frac{k}{r+a}\)

Parameters:
  • cartesian_coords (np.ndarray) – (n,2) visual cartesian coordinates in (x,y) format

  • cmf_a (float) – \(a\) value in the CMF, in degrees

  • fov (float) – field-of-view in degrees

  • as_tensor (bool, optional) – whether to return as a tensor. Defaults to False.

  • device (str or torch.device, optional) – if as_tensor=True, which device to use

  • k (float, optional) – scaling value for CMF

Returns:

(n, 3) array of cortical cylindrical points \((\rho, z, \phi)\)

Return type:

np.ndarray or torch.Tensor

fovi.sensing.manifold.cortical_cylindrical_to_cortical_cartesian(rho_z_phi, cmf_a, fov, k=10)[source]

map cortical cylindrical coordinates to cortical cartesian coordinates

Parameters:
  • rho_z_phi (np.ndarray) – (n, 3) array of cortical cylindrical points \((\rho, z, \phi)\)

  • cmf_a (float) – \(a\) value in CMF

  • fov (float) – field-of-view in degrees

  • k (float, optional) – scaling value for CMF

Returns:

(n,3) array of cortical cartesian coordinates \((x_c, y_c, z)\)

Return type:

np.ndarray