fovi.sensing.manifold
- class fovi.sensing.manifold.CorticalSensorManifold(cmf_a, fov, k=10)[source]
Bases: object
3-D cortical sensor manifold based on Rovamo and Virsu (1984) (see also Motter (2009)). Relevant coordinate systems:
\((x, y)\) -> visual cartesian coordinates
\((r, \theta)\) -> visual polar coordinates
\((\rho, z, \phi)\) -> cortical cylindrical coordinates
\((x_c, y_c, z)\) -> cortical cartesian coordinates
We use the magnification function: \(M(r)=\frac{k}{r+a}\), where:
\(k\): scaling factor that gives a good match to cortical mm; irrelevant for foveated sampling.
\(a\): critical parameter controlling magnification; smaller values mean stronger magnification/foveation.
Due to our choice of magnification function, this is essentially a 3D extension of the complex logarithmic map (Schwartz, 1980). Both preserve local isotropy, but unlike the Schwartz (1980) model, our 3D version also preserves global/meridional isotropy, since there is no warping due to flattening.
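The magnification function can be sketched in a few lines of numpy (parameter names `cmf_a` and `k` follow the class signature above; the body is exactly the documented formula):

```python
import numpy as np

def cmf(r, cmf_a, k=10.0):
    """Cortical magnification M(r) = k / (r + a), in mm/deg.

    r: eccentricity in degrees (scalar or array).
    cmf_a: the critical parameter a, in degrees; smaller -> stronger foveation.
    k: scaling factor matching cortical mm; irrelevant for foveated sampling.
    """
    return k / (r + cmf_a)
```

With `cmf_a=2`, magnification at the fovea is `k/2` mm/deg and falls off hyperbolically with eccentricity.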
- rho_3d(r)[source]
compute cortical radius in 3d cylindrical coordinates: \(M(r)\sin(r)\)
Counterintuitively, the units are mm rather than mm/rad: a leftover radian term remains from the derivation
\(\rho = M(r)\sin(r)\,d\theta/d\phi\). While \(d\theta\) and \(d\phi\) cancel numerically, \(d\theta\) is in radians while \(d\phi\) is unitless.
\(d\phi\) is unitless because it appears in the infinitesimal distance \(\rho\,d\phi\) along the cortical surface, which is in mm: \(\rho\) carries the mm units, leaving \(d\phi\) unitless.
We use \(M(r)\) in mm/deg: thus \(M(r)\sin(r)\) is in mm·rad/deg, and we convert to mm by multiplying by \(180/\pi\) deg/rad.
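A standalone sketch of the unit bookkeeping above (the class method takes only `r`, reading `cmf_a` and `k` from the instance; here they are explicit parameters):

```python
import numpy as np

def cmf(r, cmf_a, k=10.0):
    return k / (r + cmf_a)  # M(r), in mm/deg

def rho_3d(r, cmf_a, k=10.0):
    """Cortical cylindrical radius rho(r) = M(r) * sin(r), converted to mm.

    r is in degrees; sin() needs radians, and M(r)*sin(r) is in mm*rad/deg,
    so we multiply by 180/pi (deg/rad) to land in mm.
    """
    return cmf(r, cmf_a, k) * np.sin(np.deg2rad(r)) * 180.0 / np.pi
```

For small eccentricities `rho_3d(r)` is approximately `M(r) * r`, since `sin(x) ≈ x`.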
- drho_dr(r)[source]
derivative of cortical cylindrical radius with respect to visual field radius (\(d\rho/dr\))
- z_integrand(r)[source]
integrand for cortical z in 3d cylindrical coordinates: \((M(r)^2 - (d\rho/dr)^2)^{0.5}\); this follows from the cortical arc-length relation \(ds^2 = d\rho^2 + dz^2\) with meridional arc length \(ds = M(r)\,dr\)
- z_3d(r)[source]
cortical z in 3d cylindrical coordinates, computed by numerical integration via interpolation over a fine mesh of precomputed values
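One way to realize this numerically with numpy alone (a sketch, not the library's implementation: we take a numerical derivative with `np.gradient` and a trapezoidal sum in place of the precomputed interpolation table):

```python
import numpy as np

def cmf(r, cmf_a, k=10.0):
    return k / (r + cmf_a)  # M(r), mm/deg

def rho_3d(r, cmf_a, k=10.0):
    # cortical cylindrical radius in mm
    return cmf(r, cmf_a, k) * np.sin(np.deg2rad(r)) * 180.0 / np.pi

def z_3d(r_max, cmf_a, k=10.0, n=4001):
    """z(r) = integral_0^r sqrt(M(s)^2 - (drho/ds)^2) ds via trapezoids.

    With rho in mm and s in degrees, M(s) and drho/ds are both mm/deg,
    so integrating over degrees yields z in mm.
    """
    if r_max <= 0.0:
        return 0.0
    s = np.linspace(0.0, r_max, n)
    drho_ds = np.gradient(rho_3d(s, cmf_a, k), s)  # numerical d(rho)/dr
    # clamp tiny negative values caused by floating-point error before sqrt
    integrand = np.sqrt(np.maximum(cmf(s, cmf_a, k) ** 2 - drho_ds ** 2, 0.0))
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1])) * (s[1] - s[0]))
```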
- map_3d(x, y)[source]
map cartesian \((x,y)\) coordinates to 3D cortical cylindrical coordinates \((\rho,z,\phi)\)
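Putting the pieces together, the mapping can be sketched as below. We assume the cortical azimuth \(\phi\) equals the visual polar angle \(\theta\) in radians, consistent with the cancellation of \(d\theta/d\phi\) in the rho_3d derivation; `z_3d` is the trapezoidal sketch, not the library's interpolation:

```python
import numpy as np

def cmf(r, cmf_a, k=10.0):
    return k / (r + cmf_a)

def rho_3d(r, cmf_a, k=10.0):
    return cmf(r, cmf_a, k) * np.sin(np.deg2rad(r)) * 180.0 / np.pi

def z_3d(r_max, cmf_a, k=10.0, n=4001):
    if r_max <= 0.0:
        return 0.0
    s = np.linspace(0.0, r_max, n)
    drho_ds = np.gradient(rho_3d(s, cmf_a, k), s)
    integrand = np.sqrt(np.maximum(cmf(s, cmf_a, k) ** 2 - drho_ds ** 2, 0.0))
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1])) * (s[1] - s[0]))

def map_3d(x, y, cmf_a, k=10.0):
    """Map visual cartesian (x, y), in degrees, to cortical (rho, z, phi)."""
    r = np.hypot(x, y)         # eccentricity, degrees
    theta = np.arctan2(y, x)   # visual polar angle, radians
    return rho_3d(r, cmf_a, k), z_3d(r, cmf_a, k), theta  # phi = theta (assumed)
```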
- map_to_xyz(rho_z_phi)[source]
map 3d cortical cylindrical coordinates to cortical cartesian coordinates for plotting convenience
- normalize_coords(coords)[source]
normalize coordinates to lie between -1 and 1
- Parameters:
coords (np.ndarray) – un-normalized coordinates (n, d)
- Returns:
normalized coordinates (n, d)
- Return type:
np.ndarray
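A minimal sketch of such a normalization, assuming per-dimension min-max scaling (the library's exact convention, e.g. a shared bound across dimensions, is not stated in the docs):

```python
import numpy as np

def normalize_coords(coords):
    """Scale each column of an (n, d) array into [-1, 1] via min-max scaling."""
    lo = coords.min(axis=0)
    hi = coords.max(axis=0)
    return 2.0 * (coords - lo) / (hi - lo) - 1.0
```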
- cort_cartesian_to_cort_cylindrical(x_y_z)[source]
Map cortical cartesian coordinates to cortical cylindrical coordinates for reverse mapping
- vis_cartesian_to_cort_cartesian(x_y)[source]
Map visual cartesian coordinates to cortical cartesian coordinates
- Parameters:
x_y (np.ndarray) – visual cartesian coordinates \((x,y)\)
- Returns:
cortical cartesian coordinates \((x_c, y_c, z)\)
- Return type:
np.ndarray
- r_from_z(z)[source]
Use a numerical root-finding method to determine eccentricity from the cortical z coordinate
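Since \(z(r)\) is monotonically increasing, a simple bisection suffices as the root finder. An illustrative sketch, reusing the trapezoidal `z_3d` from above (the library may use a different method and bracket):

```python
import numpy as np

def cmf(r, cmf_a, k=10.0):
    return k / (r + cmf_a)

def rho_3d(r, cmf_a, k=10.0):
    return cmf(r, cmf_a, k) * np.sin(np.deg2rad(r)) * 180.0 / np.pi

def z_3d(r_max, cmf_a, k=10.0, n=4001):
    if r_max <= 0.0:
        return 0.0
    s = np.linspace(0.0, r_max, n)
    drho_ds = np.gradient(rho_3d(s, cmf_a, k), s)
    integrand = np.sqrt(np.maximum(cmf(s, cmf_a, k) ** 2 - drho_ds ** 2, 0.0))
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1])) * (s[1] - s[0]))

def r_from_z(z_target, cmf_a, fov, k=10.0, tol=1e-6):
    """Invert z_3d by bisection on [0, fov/2]; z_3d is monotone in r."""
    lo, hi = 0.0, fov / 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if z_3d(mid, cmf_a, k) < z_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```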
- cylindrical_to_visual_polar(rho_z_phi)[source]
reverse map from cortical cylindrical coordinates to visual polar coordinates
- init_visual_mesh(rs=None)[source]
initialize a grid of visual points to specify the bounds of the 3d mesh before sampling points on it
- Parameters:
rs (np.ndarray, optional) – array of r values to test.
- Returns:
- A tuple containing:
grid_pts_3d_xyz (np.ndarray): (n, 3) array of cortical cartesian coordinates
grid_pts_polar (np.ndarray): (n, 2) array of visual polar coordinates
- Return type:
result (tuple)
- init_cortical_mesh(grid_pts_3d_xyz, num_coords=10000)[source]
initialize a 3d mesh of approximately evenly spaced cortical points
NOTE:
this is not evenly spaced enough to be useful, but we keep it around for reference.
Instead, we define the mesh by sampling visual radii according to the CMF and performing locally isotropic sampling of angles; this is equivalent to uniform sampling on the cortical sensor manifold.
The sensor manifold is thus used primarily for receptive field sampling once the mesh has been defined.
- Parameters:
grid_pts_3d_xyz (np.ndarray) – (n,3) array of cortical cartesian coordinates used as a starting point to determine bounds
num_coords (int) – number of coordinates to generate on the surface
- Returns:
(n,3) array of cortical cartesian coordinates that are approximately evenly spaced
- Return type:
np.ndarray
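The preferred construction described in the note above can be sketched as follows: pick rings equally spaced in cortical \(z\) (i.e. radii sampled according to the CMF) and place a number of points on each ring proportional to its cortical circumference \(2\pi\rho\), so the angular spacing is locally isotropic. The specific ring and count scheme here is our assumption, not the library's implementation:

```python
import numpy as np

def cmf(r, cmf_a, k=10.0):
    return k / (r + cmf_a)

def rho_3d(r, cmf_a, k=10.0):
    return cmf(r, cmf_a, k) * np.sin(np.deg2rad(r)) * 180.0 / np.pi

def sample_cortical_mesh(cmf_a, fov, n_rings=32, k=10.0):
    # tabulate z(r) on a fine radial grid by cumulative trapezoidal integration
    r_fine = np.linspace(0.0, fov / 2.0, 4001)
    rho_fine = rho_3d(r_fine, cmf_a, k)
    dz = np.sqrt(np.maximum(
        cmf(r_fine, cmf_a, k) ** 2 - np.gradient(rho_fine, r_fine) ** 2, 0.0))
    z_fine = np.concatenate(
        [[0.0], np.cumsum(0.5 * (dz[1:] + dz[:-1]) * np.diff(r_fine))])
    # rings equally spaced in cortical z -> radii spaced according to the CMF
    z_rings = np.linspace(0.0, z_fine[-1], n_rings + 1)[1:]
    r_rings = np.interp(z_rings, z_fine, r_fine)
    spacing = z_fine[-1] / n_rings  # target point spacing ~ ring spacing
    pts = []
    for r_ring, z_ring in zip(r_rings, z_rings):
        rho = rho_3d(r_ring, cmf_a, k)
        n_ang = max(4, int(round(2.0 * np.pi * rho / spacing)))
        phis = np.linspace(0.0, 2.0 * np.pi, n_ang, endpoint=False)
        pts.append(np.stack(
            [rho * np.cos(phis), rho * np.sin(phis), np.full(n_ang, z_ring)],
            axis=1))
    return np.concatenate(pts, axis=0)  # (n, 3) cortical cartesian points
```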
- reverse_map(mesh_points_xyz)[source]
reverse map from cortical cartesian coordinates to visual cartesian and polar coordinates
- Parameters:
mesh_points_xyz (np.ndarray) – (n,3) cortical cartesian mesh pts
- Returns:
- A tuple containing:
cartesian_visual (np.ndarray): (n, 2) visual cartesian mesh pts
polar_visual (np.ndarray): (n, 2) visual polar mesh pts
- Return type:
result (tuple)
- fovi.sensing.manifold.vis_cartesian_to_cortical_cartesian_coords(cartesian_coords, cmf_a, fov, as_tensor=False, device='cpu', k=10)[source]
Map visual cartesian coordinates to 3d cortical cartesian coordinates using the 3D cortical model.
We use this when coordinates are sampled elsewhere, and we want to map them into the 3d model for receptive field sampling.
This is the main function we use to implement the cortical sensor manifold, in combination with isotropically magnified visual field sampling.
Note, the CMF is \(M(r)=\frac{k}{r+a}\)
- Parameters:
cartesian_coords (np.ndarray) – (n,2) visual cartesian coordinates in (x,y) format
cmf_a (float) – \(a\) value in the CMF, in degrees
fov (float) – field-of-view in degrees
as_tensor (bool, optional) – whether to return as a tensor. Defaults to False.
device (str or torch.device, optional) – if as_tensor=True, which device to use
k (float, optional) – scaling value for CMF
- Returns:
(n, 3) array of cortical cartesian points \((x_c, y_c, z)\)
- Return type:
np.ndarray or torch.Tensor
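For reference, the whole chain can be sketched end to end in numpy. This is illustrative only: the real function also supports `as_tensor`/`device` via torch, which we omit, and \(\phi=\theta\) is our assumption from the derivation:

```python
import numpy as np

def vis_cart_to_cort_cart_sketch(cartesian_coords, cmf_a, fov, k=10.0):
    """Sketch of (x, y) -> (rho, z, phi) -> (x_c, y_c, z), all in numpy."""
    xy = np.asarray(cartesian_coords, dtype=float)
    r = np.hypot(xy[:, 0], xy[:, 1])        # eccentricity, degrees
    theta = np.arctan2(xy[:, 1], xy[:, 0])  # visual polar angle, radians

    # forward model tabulated on a fine radial grid (see the class methods)
    r_fine = np.linspace(0.0, fov / 2.0, 4001)
    M = k / (r_fine + cmf_a)
    rho_fine = M * np.sin(np.deg2rad(r_fine)) * 180.0 / np.pi
    dz = np.sqrt(np.maximum(M ** 2 - np.gradient(rho_fine, r_fine) ** 2, 0.0))
    z_fine = np.concatenate(
        [[0.0], np.cumsum(0.5 * (dz[1:] + dz[:-1]) * np.diff(r_fine))])

    rho = np.interp(r, r_fine, rho_fine)
    z = np.interp(r, r_fine, z_fine)
    phi = theta  # assumed: cortical azimuth equals visual polar angle
    return np.stack([rho * np.cos(phi), rho * np.sin(phi), z], axis=1)
```

For example, points at equal eccentricity but different polar angles land at the same cortical radius and depth, differing only in azimuth.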
- fovi.sensing.manifold.vis_cartesian_to_cortical_cylindrical(cartesian_coords, cmf_a, fov, as_tensor=False, device='cpu', k=10)[source]
Map visual cartesian coordinates to cortical cylindrical coordinates using the 3D cortical model.
This is not usually used, since the cortical cartesian coordinates are typically more useful.
We use this when coordinates are sampled elsewhere, and we want to map them into the 3D model for receptive field sampling.
Note, the CMF is \(M(r)=\frac{k}{r+a}\)
- Parameters:
cartesian_coords (np.ndarray) – (n,2) visual cartesian coordinates in (x,y) format
cmf_a (float) – \(a\) value in the CMF, in degrees
fov (float) – field-of-view in degrees
as_tensor (bool, optional) – whether to return as a tensor. Defaults to False.
device (str or torch.device, optional) – if as_tensor=True, which device to use
k (float, optional) – scaling value for CMF
- Returns:
(n, 3) array of cortical cylindrical points \((\rho, z, \phi)\)
- Return type:
np.ndarray or torch.Tensor