geowatch.tasks.fusion.methods.network_modules module

This module should be reorganized into architectures, as it consists of smaller modular network components.

geowatch.tasks.fusion.methods.network_modules.drop_path(x, drop_prob: float = 0.0, training: bool = False)[source]

Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).

This is the same as the DropConnect impl I created for EfficientNet, etc networks, however, the original name is misleading as ‘Drop Connect’ is a different form of dropout in a separate paper… See discussion: https://github.com/tensorflow/tpu/issues/494#issuecomment-532968956 … I’ve opted for changing the layer and argument names to ‘drop path’ rather than mix DropConnect as a layer name and use ‘survival rate’ as the argument.

Adapted from timm: from timm.models.layers import drop_path
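
A minimal usage sketch (hedged: the no-op behavior outside of training and the rescaling of surviving samples follow the timm implementation this function was taken from):

>>> from geowatch.tasks.fusion.methods.network_modules import drop_path
>>> import torch
>>> x = torch.rand(4, 8, 16, 16)
>>> # outside of training, drop_path is a no-op
>>> assert torch.equal(drop_path(x, drop_prob=0.5, training=False), x)
>>> # during training, whole samples are zeroed with probability drop_prob
>>> # and the survivors are rescaled; the output shape is unchanged
>>> out = drop_path(x, drop_prob=0.5, training=True)
>>> assert out.shape == x.shape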

class geowatch.tasks.fusion.methods.network_modules.RobustModuleDict(modules: Mapping[str, Module] | None = None)[source]

Bases: ModuleDict

A regular torch.nn.ModuleDict doesn't allow the empty string (or keys containing '.') as a key. This class works around that by remapping such keys internally.

Example

>>> from geowatch.tasks.fusion.methods.network_modules import *  # NOQA
>>> import string
>>> torch_dict = RobustModuleDict()
>>> # All printable characters should be usable as keys
>>> # If they are not, hack it.
>>> failed = []
>>> for c in list(string.printable) + ['']:
>>>     try:
>>>         torch_dict[c] = torch.nn.Linear(1, 1)
>>>     except KeyError:
>>>         failed.append(c)
>>> assert len(failed) == 0
repl_dot = '#D#'
repl_empty = '__EMPTY'
pop(key: str) → Module[source]

Remove key from the ModuleDict and return its module.

Parameters:

key (string) – key to pop from the ModuleDict
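
A small usage sketch for pop (hedged: it assumes pop accepts the same original, un-remapped keys that item assignment does):

>>> from geowatch.tasks.fusion.methods.network_modules import *  # NOQA
>>> torch_dict = RobustModuleDict()
>>> torch_dict['a.b'] = torch.nn.Linear(1, 1)
>>> layer = torch_dict.pop('a.b')
>>> assert isinstance(layer, torch.nn.Linear)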

class geowatch.tasks.fusion.methods.network_modules.RobustParameterDict(parameters: Any = None)[source]

Bases: ParameterDict

A regular torch.nn.ParameterDict doesn't allow the empty string (or keys containing '.') as a key. This class works around that by remapping such keys internally.

Example

>>> from geowatch.tasks.fusion.methods.network_modules import *  # NOQA
>>> import string
>>> torch_dict = RobustParameterDict()
>>> # All printable characters should be usable as keys
>>> # If they are not, hack it.
>>> failed = []
>>> for c in list(string.printable) + ['']:
>>>     try:
>>>         torch_dict[c] = torch.nn.Parameter(torch.ones((1, 1)))
>>>     except KeyError:
>>>         failed.append(c)
>>> assert len(failed) == 0
>>> for v in torch_dict.values():
>>>     assert list(v.shape) == [1, 1]
repl_dot = '#D#'
repl_empty = '__EMPTY'
pop(key: str) → Module[source]

Remove key from the ParameterDict and return its parameter.
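A small usage sketch (hedged: the empty-string key follows the class Example above, and pop accepting the original, un-remapped key is an assumption):

>>> from geowatch.tasks.fusion.methods.network_modules import *  # NOQA
>>> torch_dict = RobustParameterDict()
>>> torch_dict[''] = torch.nn.Parameter(torch.zeros(3))
>>> param = torch_dict.pop('')
>>> assert tuple(param.shape) == (3,)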
class geowatch.tasks.fusion.methods.network_modules.OurDepthwiseSeparableConv(in_chs, out_chs, kernel_size=3, stride=1, dilation=1, padding=0, residual=False, pw_kernel_size=1, norm='group', noli='swish', drop_path_rate=0.0)[source]

Bases: Module

DepthwiseSeparable block. Used for DS convs in MobileNet-V1 and in place of IR blocks that have no expansion (factor of 1.0). This is an alternative to having an IR block with an optional first pw conv.

From timm

feature_info(location)[source]
forward(x)[source]
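A minimal shape-check sketch (hedged: the channel counts are illustrative assumptions chosen to be divisible for the default 'group' normalization; only the batch and channel dimensions are asserted):

>>> from geowatch.tasks.fusion.methods.network_modules import *  # NOQA
>>> self = OurDepthwiseSeparableConv(in_chs=16, out_chs=32, kernel_size=3, padding=1)
>>> inputs = torch.rand(2, 16, 8, 8)
>>> out = self(inputs)
>>> assert out.shape[0:2] == (2, 32)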
class geowatch.tasks.fusion.methods.network_modules.DWCNNTokenizer(in_chn, out_chn, norm='auto')[source]

Bases: Sequential

self = DWCNNTokenizer(13, 2)
inputs = torch.rand(2, 13, 16, 16)
self(inputs)

class geowatch.tasks.fusion.methods.network_modules.LinearConvTokenizer(in_channels, out_channels)[source]

Bases: Sequential

Example

>>> from geowatch.tasks.fusion.methods.network_modules import *  # NOQA
>>> LinearConvTokenizer(1, 512)
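
A hedged forward-pass sketch mirroring the Benchmark in ConvTokenizer below (the exact token shape depends on the internal downsampling, so only the batch dimension is checked):

>>> from geowatch.tasks.fusion.methods.network_modules import *  # NOQA
>>> self = LinearConvTokenizer(13, 512)
>>> inputs = torch.rand(1, 13, 128, 128)
>>> tokens = self(inputs)
>>> assert tokens.shape[0] == 1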
class geowatch.tasks.fusion.methods.network_modules.ConvTokenizer(in_chn, out_chn, norm=None)[source]

Bases: Module

Example

from geowatch.tasks.fusion.methods.network_modules import *  # NOQA
self = ConvTokenizer(13, 64)
print('self = {!r}'.format(self))
inputs = torch.rand(2, 13, 128, 128)
tokens = self(inputs)
print('inputs.shape = {!r}'.format(inputs.shape))
print('tokens.shape = {!r}'.format(tokens.shape))

Benchmark:

in_channels = 13
tokenizer1 = ConvTokenizer(in_channels, 512)
tokenizer2 = RearrangeTokenizer(in_channels, 8, 8)
tokenizer3 = DWCNNTokenizer(in_channels, 512)
tokenizer4 = LinearConvTokenizer(in_channels, 512)
print(util_netharn.number_of_parameters(tokenizer1))
print(util_netharn.number_of_parameters(tokenizer2))
print(util_netharn.number_of_parameters(tokenizer3))
print(util_netharn.number_of_parameters(tokenizer4))

print(util_netharn.number_of_parameters(tokenizer4[0]))
print(util_netharn.number_of_parameters(tokenizer4[1]))
print(util_netharn.number_of_parameters(tokenizer4[2]))
print(util_netharn.number_of_parameters(tokenizer4[3]))

inputs = torch.rand(1, in_channels, 128, 128)

import timerit
ti = timerit.Timerit(100, bestof=1, verbose=2)

tokenizer1(inputs).shape
tokenizer2(inputs).shape

for timer in ti.reset('tokenizer1'):
    with timer:
        tokenizer1(inputs)

for timer in ti.reset('tokenizer2'):
    with timer:
        tokenizer2(inputs)

for timer in ti.reset('tokenizer3'):
    with timer:
        tokenizer3(inputs)

for timer in ti.reset('tokenizer4'):
    with timer:
        tokenizer4(inputs)

input_shape = (1, in_channels, 64, 64)

print(tokenizer2(torch.rand(*input_shape)).shape)

downsampler1 = torch.nn.Sequential(*[
    util_netharn.ConvNormNd(
        dim=2, in_channels=in_channels, out_channels=in_channels,
        groups=in_channels, norm=None, noli=None, kernel_size=3,
        stride=2, padding=1,
    ),
    util_netharn.ConvNormNd(
        dim=2, in_channels=in_channels, out_channels=in_channels,
        groups=in_channels, norm=None, noli=None, kernel_size=3,
        stride=2, padding=1,
    ),
    util_netharn.ConvNormNd(
        dim=2, in_channels=in_channels, out_channels=in_channels,
        groups=in_channels, norm=None, noli=None, kernel_size=3,
        stride=2, padding=1,
    ),
])

downsampler2 = torch.nn.Sequential(*[
    util_netharn.ConvNormNd(
        dim=2, in_channels=in_channels, out_channels=in_channels,
        groups=in_channels, norm=None, noli=None, kernel_size=7,
        stride=5, padding=3,
    ),
])

print(ub.urepr(downsampler1.output_shape_for(input_shape).hidden.shallow(30), nl=1))
print(ub.urepr(downsampler2.output_shape_for(input_shape).hidden.shallow(30), nl=1))

forward(inputs)[source]
class geowatch.tasks.fusion.methods.network_modules.RearrangeTokenizer(in_channels, agree, window_size)[source]

Bases: Module

A mapping to a common number of channels, followed by a rearrange.

Not quite a pure rearrange, but it is kept this way for backwards compatibility.

forward(x)[source]
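A minimal forward-pass sketch (hedged: based on the Benchmark in ConvTokenizer, where RearrangeTokenizer(in_channels, 8, 8) is compared against the 512-channel tokenizers, the token dimension is assumed to work out to agree * window_size ** 2; only a shape-agnostic check is asserted here):

>>> from geowatch.tasks.fusion.methods.network_modules import *  # NOQA
>>> self = RearrangeTokenizer(13, 8, 8)
>>> inputs = torch.rand(2, 13, 64, 64)
>>> tokens = self(inputs)
>>> assert tokens.shape[0] == 2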
geowatch.tasks.fusion.methods.network_modules.torch_safe_stack(tensors, dim=0, *, out=None, item_shape=None, dtype=None, device=None)[source]

Behaves like torch.stack, but does not error when tensors is empty.

When tensors are not empty this is exactly torch.stack().

When tensors are empty, it constructs an empty output tensor based on the explicit expected item shape if available; otherwise it assumes items would have had a shape of [0]. Likewise, dtype and device should be specified when tensors are empty; otherwise the torch.empty() defaults are used.

Parameters:
  • tensors (List[Tensor]) – sequence of tensors to concatenate. Passed to torch.stack().

  • dim (int) – dimension to insert. Has to be between 0 and the number of dimensions of concatenated tensors (inclusive). Passed to torch.stack().

  • out (Tensor) – passed to torch.stack().

  • item_shape (Tuple[int, …]) – what the shape of an item should be. Used to construct a default output when tensors are empty.

  • dtype (torch.dtype) – the expected output datatype when tensors is empty.

  • device (torch.device | str | int | None) – the expected output device when tensors is empty.

Example

>>> from geowatch.tasks.fusion.methods.network_modules import *  # NOQA
>>> import ubelt as ub
>>> grid = list(ub.named_product({
>>>     # 'num': [0, 1, 2, 3],
>>>     'num': [0, 7],
>>>     'item_shape': ['auto', None],
>>>     'shape': [[], [0], [2], [2, 3], [2, 0, 3]],
>>>     'dim': [0, 1],
>>> }))
>>> results = []
>>> for item in grid:
>>>     print(f'item={item}')
>>>     dim = item['dim']
>>>     shape = item['shape']
>>>     item['shape'] = tuple(item['shape'])
>>>     if item['item_shape'] == 'auto':
>>>         item['item_shape'] = item['shape']
>>>     num = item['num']
>>>     tensors = [torch.empty(shape)] * num
>>>     if dim >= len(shape):
>>>         continue
>>>     out = torch_safe_stack(tensors, dim=dim,
>>>         item_shape=item['item_shape'])
>>>     row = {
>>>         **item,
>>>         'out.shape': out.shape,
>>>     }
>>>     print(f'row={row}')
>>>     results.append(row)
>>> import pandas as pd
>>> import rich
>>> df = pd.DataFrame(results)
>>> for _, subdf in df.groupby('shape'):
>>>     subdf = subdf.sort_values(['shape', 'dim', 'item_shape', 'num'])
>>>     print('')
>>>     rich.print(subdf.to_string())
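
For quick reference, a minimal hedged sketch of the two cases described above (the empty-list output shape assumes the new stack dimension is inserted with size zero in front of item_shape):

>>> from geowatch.tasks.fusion.methods.network_modules import *  # NOQA
>>> # non-empty input: identical to torch.stack
>>> out = torch_safe_stack([torch.ones(2, 3), torch.ones(2, 3)])
>>> assert out.shape == (2, 2, 3)
>>> # empty input: the shape comes from item_shape (assumed behavior)
>>> out = torch_safe_stack([], item_shape=(2, 3))
>>> assert out.shape == (0, 2, 3)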