geowatch.tasks.fusion.architectures.sits module

    import sys
    sys.path.append('/home/joncrall/code/SITS-Former/code')
    from model import classification_model as clf

    import liberator
    lib = liberator.Liberator()
    lib.add_dynamic(clf.BERTClassification)
    lib.expand(['model'])
    print(lib.current_sourcecode())

class geowatch.tasks.fusion.architectures.sits.PositionalEncoding(d_model, max_len=366)[source]

Bases: Module

forward(time)[source]
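
This class builds a fixed table of sin/cos encodings with one row per day of year (max_len=366) and looks rows up by the time indices passed to forward(). Below is a minimal, self-contained sketch of that idea; the class name, the even d_model assumption, and the tensor shapes are illustrative and not the exact geowatch implementation.

    import math
    import torch
    from torch import nn


    class PositionalEncodingSketch(nn.Module):
        """Fixed sin/cos table with one row per day of year (assumes even d_model)."""

        def __init__(self, d_model, max_len=366):
            super().__init__()
            position = torch.arange(max_len).unsqueeze(1).float()
            div_term = torch.exp(torch.arange(0, d_model, 2).float()
                                 * (-math.log(10000.0) / d_model))
            pe = torch.zeros(max_len, d_model)
            pe[:, 0::2] = torch.sin(position * div_term)
            pe[:, 1::2] = torch.cos(position * div_term)
            self.register_buffer('pe', pe)

        def forward(self, time):
            # time: integer day-of-year indices, e.g. shape (batch, seq_len);
            # returns the matching rows, shape (batch, seq_len, d_model).
            return self.pe[time]


    enc = PositionalEncodingSketch(d_model=128)
    doy = torch.randint(0, 366, (2, 20))   # day-of-year indices for 2 sequences
    print(enc(doy).shape)                  # torch.Size([2, 20, 128])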
class geowatch.tasks.fusion.architectures.sits.BERTEmbedding(num_features, dropout=0.1)[source]

Bases: Module

BERT Embedding, which consists of the following features:
  1. InputEmbedding : project the input to embedding size through a lightweight 3D-CNN

  2. PositionalEncoding : adding positional information using sin/cos functions

The sum of both features is the output of BERTEmbedding.

Parameters:
  • num_features – number of input features

  • dropout – dropout rate

forward(input_sequence, doy_sequence)[source]
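
A minimal sketch of the embedding sum described above: an input projection plus a day-of-year sin/cos table, with dropout applied to the sum. The nn.Linear stands in for the lightweight 3D-CNN InputEmbedding, and the d_model argument and tensor shapes are assumptions for illustration only.

    import math
    import torch
    from torch import nn


    class BERTEmbeddingSketch(nn.Module):
        def __init__(self, num_features, d_model=128, dropout=0.1, max_len=366):
            super().__init__()
            # Stand-in for the lightweight 3D-CNN InputEmbedding.
            self.input_embedding = nn.Linear(num_features, d_model)
            # Fixed sin/cos table, one row per day of year (assumes even d_model).
            position = torch.arange(max_len).unsqueeze(1).float()
            div_term = torch.exp(torch.arange(0, d_model, 2).float()
                                 * (-math.log(10000.0) / d_model))
            pe = torch.zeros(max_len, d_model)
            pe[:, 0::2] = torch.sin(position * div_term)
            pe[:, 1::2] = torch.cos(position * div_term)
            self.register_buffer('pe', pe)
            self.dropout = nn.Dropout(p=dropout)

        def forward(self, input_sequence, doy_sequence):
            # input_sequence: (batch, seq_len, num_features)
            # doy_sequence:   (batch, seq_len) integer day-of-year indices
            x = self.input_embedding(input_sequence) + self.pe[doy_sequence]
            return self.dropout(x)


    embed = BERTEmbeddingSketch(num_features=64, d_model=128)
    tokens = embed(torch.randn(2, 20, 64), torch.randint(0, 366, (2, 20)))  # (2, 20, 128)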
class geowatch.tasks.fusion.architectures.sits.BERT(num_features, hidden, n_layers, attn_heads, dropout=0.1)[source]

Bases: Module

Parameters:
  • num_features – number of input features

  • hidden – hidden size of the SITS-Former model

  • n_layers – number of Transformer blocks (layers)

  • attn_heads – number of attention heads

  • dropout – dropout rate

forward(x, doy, mask)[source]
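
A hedged sketch of how such an encoder can be assembled: embed the sequence, then run n_layers of multi-head self-attention with a key-padding mask derived from mask. torch.nn.TransformerEncoder is used as a stand-in for the SITS-Former transformer blocks; the stand-in embedding, feedforward width, and mask convention (1 = valid observation) are assumptions, not the documented internals.

    import torch
    from torch import nn


    class BERTSketch(nn.Module):
        def __init__(self, num_features, hidden, n_layers, attn_heads, dropout=0.1):
            super().__init__()
            # Stand-in embedding: linear projection plus a learned day-of-year
            # table (the real module sums a 3D-CNN embedding and sin/cos encoding).
            self.input_proj = nn.Linear(num_features, hidden)
            self.doy_embed = nn.Embedding(367, hidden)
            layer = nn.TransformerEncoderLayer(
                d_model=hidden, nhead=attn_heads, dim_feedforward=hidden * 4,
                dropout=dropout, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

        def forward(self, x, doy, mask):
            # x:    (batch, seq_len, num_features) per-timestep features
            # doy:  (batch, seq_len) integer day-of-year of each observation
            # mask: (batch, seq_len) 1 where the timestep is a real observation
            h = self.input_proj(x) + self.doy_embed(doy)
            # TransformerEncoder expects True where positions should be *ignored*.
            return self.encoder(h, src_key_padding_mask=~mask.bool())


    model = BERTSketch(num_features=64, hidden=256, n_layers=3, attn_heads=8)
    x = torch.randn(2, 20, 64)
    doy = torch.randint(1, 367, (2, 20))
    mask = torch.ones(2, 20, dtype=torch.long)
    feats = model(x, doy, mask)   # -> torch.Size([2, 20, 256])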
class geowatch.tasks.fusion.architectures.sits.MulticlassClassification(hidden, num_classes)[source]

Bases: Module

forward(x, mask)[source]
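
No further description is given for this head, so the sketch below is only one plausible reading: masked mean-pooling over the time dimension followed by a linear projection to num_classes logits. The pooling strategy is an assumption, not the documented behavior.

    import torch
    from torch import nn


    class MulticlassClassificationSketch(nn.Module):
        def __init__(self, hidden, num_classes):
            super().__init__()
            self.linear = nn.Linear(hidden, num_classes)

        def forward(self, x, mask):
            # x:    (batch, seq_len, hidden) encoder outputs
            # mask: (batch, seq_len) 1 for valid timesteps, 0 for padding
            mask = mask.unsqueeze(-1).float()
            pooled = (x * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1.0)
            return self.linear(pooled)   # (batch, num_classes) class logits


    head = MulticlassClassificationSketch(hidden=128, num_classes=10)
    logits = head(torch.randn(2, 20, 128), torch.ones(2, 20))   # -> (2, 10)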
class geowatch.tasks.fusion.architectures.sits.BERTClassification(bert: BERT, num_classes)[source]

Bases: Module

Downstream task: Satellite Time Series Classification

Parameters:
  • bert – the BERT encoder model defined above

  • num_classes – number of classes to be classified

forward(x, doy, mask)[source]
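
A hedged end-to-end usage sketch built only from the documented constructor signatures. The hyperparameter values are illustrative, and the forward call is left as a comment because the expected shape of x (the patch time series consumed by the 3D-CNN input embedding) is not documented here.

    from geowatch.tasks.fusion.architectures.sits import BERT, BERTClassification

    # Illustrative hyperparameters; not values prescribed by the module.
    encoder = BERT(num_features=64, hidden=256, n_layers=3, attn_heads=8, dropout=0.1)
    model = BERTClassification(encoder, num_classes=10)

    # logits = model(x, doy, mask), where x is the satellite patch time series,
    # doy gives the day-of-year of each observation, and mask flags valid
    # timesteps (exact shapes depend on the 3D-CNN input embedding).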