ml4chem.atomistic.models package

Submodules

ml4chem.atomistic.models.autoencoders module

class ml4chem.atomistic.models.autoencoders.Annealer(warm_up=50, step=50, n_cycles=5)[source]

Bases: object

Annealing class

Based on https://arxiv.org/abs/1903.10145.

Parameters
  • warm_up (int, optional) – Number of epochs during which the reconstruction term is allowed to dominate the VAE loss, by default 50

  • step (int, optional) – Number of steps used to increase the annealing value from 0 to 1, by default 50

  • n_cycles (int, optional) – Number of annealing cycles to repeat, by default 5

update(epoch)[source]

Update annealing value

Parameters

epoch (int) – Epoch on the training process.

Returns

Float number with annealing magnitude.

Return type

annealing
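
A minimal usage sketch (the epoch loop and printed comments are illustrative; only the constructor arguments and update() come from this class):

>>> from ml4chem.atomistic.models.autoencoders import Annealer
>>> annealer = Annealer(warm_up=50, step=50, n_cycles=5)
>>> for epoch in range(1, 201):
...     # stays small during warm-up so reconstruction dominates, then
...     # ramps toward 1 and repeats over n_cycles
...     annealing = annealer.update(epoch)
...     # `annealing` is then passed to VAELoss (see the loss module below)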

class ml4chem.atomistic.models.autoencoders.AutoEncoder(hiddenlayers=None, activation='relu', one_for_all=False, **kwargs)[source]

Bases: ml4chem.atomistic.models.base.DeepLearningModel, torch.nn.modules.module.Module

Fully connected atomic autoencoder

Autoencoders are models in which the input is usually reconstructed (input equals output). They are able to learn data codings in an unsupervised manner. They are composed of an encoder that takes an input and concentrates (encodes) the information into a lower (or larger) dimensional space, a.k.a. the latent space. Subsequently, a decoder takes the latent space and tries to reconstruct the input. It has been reported that when the output is not equal to the input, the model learns how to ‘translate’ input into output, e.g. image coloring.

This module uses autoencoders for pipelines in chemistry.

Parameters
  • hiddenlayers (dict) – Dictionary with encoder, and decoder layers in the Auto Encoder.

  • activation (str) – The activation function.

  • one_for_all (bool) – Use one autoencoder model for all atoms instead of a model per atom type as in the Behler-Parrinello scheme. Default is False.

Notes

When defining the hiddenlayers keyword argument, input and output dimensions are determined automatically. For example, suppose you have an input data point with 10 dimensions and you want to autoencode it with targets having 14 dimensions, a latent space with 4 dimensions, and just one hidden layer with 5 nodes between input layer/latent layer and latent layer/output layer. Your hiddenlayers dictionary would look like this:

>>> hiddenlayers = {'encoder': (5, 4), 'decoder': (4, 5)}

That would generate an autoencoder with topology (10, 5, 4 | 4, 5, 14).
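
A construction sketch for the topology above (data_handler is a hypothetical Data object; hiddenlayers, activation, and prepare_model() follow the signatures in this class):

>>> from ml4chem.atomistic.models.autoencoders import AutoEncoder
>>> hiddenlayers = {'encoder': (5, 4), 'decoder': (4, 5)}
>>> autoencoder = AutoEncoder(hiddenlayers=hiddenlayers, activation='relu')
>>> # input and output dimensions complete the (10, 5, 4 | 4, 5, 14) topology
>>> autoencoder.prepare_model(10, 14, data=data_handler, purpose='training')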

NAME = 'AutoEncoder'
decode(z, symbol=None)[source]

Decode latent vector, z

Parameters
  • z (array) – Latent vector.

  • symbol (str, optional) – Chemical symbol. Default is None.

Returns

Tensor with reconstruction.

Return type

reconstruction

encode(x, symbol=None)[source]

Encode input

Parameters
  • x (array) – Input array.

  • symbol (str, optional) – Chemical symbol. Default is None.

Returns

Latent vector.

Return type

z

forward(X)[source]

Forward propagation

This method takes an input and applies encoder and decoder layers.

Parameters

X (list) – List of inputs either raw or in the feature space.

Returns

outputs – Decoded latent vector.

Return type

tensor

get_latent_space(X, svm=False, purpose=None)[source]

Get latent space for training ML4Chem models

This method takes an input and uses the encoder to return the latent space in the structure needed for training ML4Chem models or for visualization.

Parameters
  • X (list) – List of inputs either raw or in the feature space.

  • svm (bool) – Whether or not these latent vectors are going to be used for kernel methods.

  • purpose (str) – The purpose for this latent space. This is just useful for the case where the latent space will be preprocessed (purpose=’preprocessing’).

Returns

latent_space – Latent space with structure: {‘hash’: [(‘H’, [latent_vector]), …]}

Return type

dict

Notes

The latent space saved by this function is a dictionary that can operate with other parts of this package. Note that if you need the latent space for an unseen structure, you will have to forward propagate it and extract the latent space from the result.
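
A sketch of how the returned dictionary could be consumed (feature_space is a hypothetical hashed feature space; the nesting follows the structure documented above):

>>> latent_space = autoencoder.get_latent_space(feature_space)
>>> for hash_, atoms in latent_space.items():
...     for symbol, latent_vector in atoms:
...         print(hash_, symbol, latent_vector)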

classmethod name()[source]

Returns name of class

prepare_model(input_dimension, output_dimension, data=None, purpose='training')[source]

Prepare the model

Parameters
  • input_dimension (int) – Input’s dimension.

  • output_dimension (int) – Output’s dimension.

  • data (object) – Data object created from the handler.

  • purpose (str) – Purpose of this model: ‘training’, ‘inference’.

class ml4chem.atomistic.models.autoencoders.VAE(hiddenlayers=None, activation='relu', one_for_all=False, **kwargs)[source]

Bases: ml4chem.atomistic.models.autoencoders.AutoEncoder

Variational Autoencoder (VAE)

This module uses variational autoencoders for pipelines in chemistry.

Parameters
  • hiddenlayers (dict) – Dictionary with encoder, and decoder layers in the Auto Encoder.

  • activation (str) – The activation function.

  • variant (str) –

    The following variants are supported:

    • "multivariate": the decoder outputs a distribution with mean and variance, and we minimize the negative log likelihood plus the KL-divergence. Useful for continuous variables. Feature range [-inf, inf].

    • "bernoulli": the decoder outputs a layer with a sigmoid activation function, and we minimize cross-entropy plus KL-divergence. Features must be in the range [0, 1].

    • "dcgan": the decoder outputs a single layer with tanh, and the loss equals the KL-divergence plus MSELoss. Useful for feature ranges [-1, 1].

  • one_for_all (bool) – Use one autoencoder model for all atoms instead of a model per atom type as in the Behler-Parrinello scheme. Default is False.

Notes

When defining the hiddenlayers keyword argument, input and output dimensions are determined automatically. For example, suppose you have an input data point with 10 dimensions and you want to autoencode it with targets having 14 dimensions, a latent space with 4 dimensions, and just one hidden layer with 5 nodes between input layer/latent layer and latent layer/output layer. Your hiddenlayers dictionary would look like this:

>>> hiddenlayers = {'encoder': (5, 4), 'decoder': (4, 5)}

That would generate an autoencoder with topology (10, 5, 4 | 4, 5, 14).
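
A construction sketch (note that variant is documented above but passed through **kwargs in the signature shown; prepare_model() is inherited from AutoEncoder, and data_handler is a hypothetical Data object):

>>> from ml4chem.atomistic.models.autoencoders import VAE
>>> vae = VAE(hiddenlayers={'encoder': (5, 4), 'decoder': (4, 5)},
...           activation='relu', variant='multivariate')
>>> vae.prepare_model(10, 14, data=data_handler, purpose='training')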

NAME = 'VAE'
decode(z, symbol=None)[source]

Decode latent vector, z

Parameters
  • z (array) – Latent vector.

  • symbol (str, optional) – Chemical symbol. Default is None.

Returns

Tensor with reconstruction.

Return type

reconstruction

Notes

See page 11 “Kingma, D. P. & Welling, M. Auto-Encoding Variational Bayes. (2013)”.

encode(x, symbol=None)[source]

Encode input

Parameters
  • x (array) – Input array.

  • symbol (str, optional) – Chemical symbol. Default is None.

Returns

Mean and variance.

Return type

mu, logvar

forward(X)[source]

Forward propagation

This method takes an input and applies encoder and decoder layers.

Parameters

X (list) – List of inputs either raw or in the feature space.

Returns

Decoded latent vector.

Return type

mu and logvar for two multivariate Gaussians

get_latent_space(X, svm=False, purpose=None)[source]

Get latent space for training ML4Chem models

This method takes an input and uses the encoder to return the latent space in the structure needed for training ML4Chem models or for visualization.

Parameters
  • X (list) – List of inputs either raw or in the feature space.

  • svm (bool) – Whether or not these latent vectors are going to be used for kernel methods.

  • purpose (str) – The purpose for this latent space. This is just useful for the case where the latent space will be preprocessed (purpose=’preprocessing’).

Returns

latent_space – Latent space with structure: {‘hash’: [(‘H’, [latent_vector]), …]}

Return type

dict

Notes

The latent space saved by this function is a dictionary that can operate with other parts of this package. Note that if you need the latent space for an unseen structure, you will have to forward propagate it and extract the latent space from the result.

classmethod name()[source]

Returns name of class

reparameterize(mu, logvar, purpose=None)[source]

Reparameterization trick

This trick samples the posterior (a latent vector) from a multivariate Gaussian probability distribution while still allowing gradients to be backpropagated through the model.

Parameters
  • mu (tensor) – Mean values of distribution.

  • logvar (tensor) – Logarithm of variance of distribution.

Returns

A sample from the distribution.

Return type

Sample vector
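
The standard formulation of this trick, written as a sketch (ML4Chem's exact implementation may differ, e.g. in how sampling is handled at inference time):

>>> import torch
>>> def reparameterize_sketch(mu, logvar):
...     std = torch.exp(0.5 * logvar)   # logvar = log(sigma^2) -> sigma
...     eps = torch.randn_like(std)     # noise drawn outside the computation graph
...     return mu + eps * std           # differentiable with respect to mu and logvar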

class ml4chem.atomistic.models.autoencoders.train(inputs, targets, model=None, data=None, optimizer=(None, None), regularization=None, epochs=100, convergence=None, lossfxn=None, device='cpu', batch_size=None, lr_scheduler=None, **kwargs)[source]

Bases: object

Train the model

Parameters
  • inputs (dict) – Dictionary with hashed feature space.

  • targets (list) – The expected values that the model has to learn aka y.

  • model (object) – The NeuralNetwork class.

  • data (object) – Data object created from the handler.

  • optimizer (tuple) –

    The optimizer is a tuple with the structure:
    >>> ('adam', {'lr': float, 'weight_decay': float})
    

  • epochs (int) – Number of full training cycles.

  • regularization (float) – This is the L2 regularization. It is not the same as weight decay.

  • convergence (dict) – Instead of using epochs, users can set a convergence criterion.

  • lossfxn (obj) – A loss function object.

  • device (str) – Calculation can be run on the cpu or cuda (gpu).

  • batch_size (int) – Number of data points per batch to use for training. Default is None.

  • lr_scheduler (tuple) –

    Tuple with structure: scheduler’s name and a dictionary with keyword arguments.

    >>> lr_scheduler = ('ReduceLROnPlateau',
                        {'mode': 'min', 'patience': 10})
    

  • anneal (bool) – Cyclical annealing based on https://arxiv.org/abs/1903.10145.

  • penalize_latent (bool) – Set to True if latent vectors are going to be penalized. Default is False.
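
A training sketch under assumed inputs (feature_space, targets, and data_handler are hypothetical objects produced elsewhere in ML4Chem; the optimizer and scheduler tuples follow the structures documented above):

>>> from ml4chem.atomistic.models.autoencoders import train
>>> trainer = train(feature_space, targets,
...                 model=autoencoder,
...                 data=data_handler,
...                 optimizer=('adam', {'lr': 1e-3, 'weight_decay': 0.0}),
...                 epochs=200,
...                 device='cpu',
...                 lr_scheduler=('ReduceLROnPlateau',
...                               {'mode': 'min', 'patience': 10}))
>>> trainer.trainer()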

classmethod closure(chunks, targets, model, lossfxn, device, inputs_chunk_vals=None, annealing=None, penalize_latent=False)[source]

Closure

This method clears previous gradients, iterates over chunks, accumulates the gradients, updates the model parameters, and returns the loss.

static get_inputs_chunks(chunks)[source]

Get inputs in chunks for EncoderMapLoss

Returns

A list with inputs_chunk_vals.

Return type

inputs_chunk_vals

classmethod train_batches(index, chunk, targets, model, lossfxn, device, inputs_chunk_vals, annealing, penalize_latent)[source]

A function that allows training per batches

Parameters
  • index (int) – Index of batch.

  • chunk (tensor or list) – Tensor with input data points in batch with index.

  • targets (tensor or list) – The targets.

  • model (obj) – Pytorch model to perform forward() and get gradients.

  • lossfxn (obj) – A loss function object.

  • device (str) – Are we running cuda or cpu?

  • inputs_chunk_vals (tensor or list) – Inputs needed by EncoderMapLoss

Returns

loss – The loss function of the batch.

Return type

tensor

trainer()[source]

Run the training class

ml4chem.atomistic.models.gaussian_process module

class ml4chem.atomistic.models.gaussian_process.GaussianProcess(sigma=1.0, kernel='rbf', scheduler='distributed', lamda=1e-05, trainingimages=None, checkpoints=None, cholesky=True, weights_independent=True, forcetraining=False, nnpartition=None, sum_rule=True, batch_size=None, weights=None)[source]

Bases: ml4chem.atomistic.models.kernelridge.KernelRidge

Gaussian Process Regression

This method is based on the KernelRidge regression class of ML4Chem.

Parameters
  • sigma (float, list, or dict) –

    Length scale of the Gaussian in the case of the RBF, exponential, and laplacian kernels. Default is 1.0 (float), which computes isotropic kernels. Pass a list if you would like to compute anisotropic kernels, or a dictionary if you want sigmas for each model.

    Example:

    >>> sigma={'energy': {'H': value, 'O': value},
               'forces': {'H': {0: value, 1: value, 2: value},
                      'O': {0: value, 1: value, 2: value}}}
    

    value can be a float or a list.

  • kernel (str) – Choose the kernel. Available kernels are: ‘linear’, ‘rbf’, ‘laplacian’, and ‘exponential’. Default is ‘rbf’.

  • lamda (float, or dictionary) –

    Strength of the regularization. If you pass a dictionary then force and energy will have different regularization:

    >>> lamda = {'energy': value, 'forces': value}
    

    Dictionaries are only used when performing Cholesky factorization.

  • trainingimages (str) – Path to Trajectory file containing the images in the training set. This is useful for predicting new structures.

  • cholesky (bool) – Whether or not we are using Cholesky decomposition to determine the weights. This method returns a unique set of regression coefficients.

  • weights_independent (bool) – Whether or not the weights are going to be split for energy and forces.

  • forcetraining (bool) – Turn force training true.

  • nnpartition (str) – Use per-atom energy partition from a neural network calculator. You have to set the path to .amp file. Useful for energy training with Cholesky factorization. Default is set to None.

  • scheduler (str) – The scheduler to be used with the dask backend.

  • sum_rule (bool) – Whether or not we sum fingerprintprime elements over a given axis. This applies np.sum(fingerprint_list, axis=0).

  • batch_size (int) – Number of elements per batch in order to split computations. Useful when number of local chemical environments is too large.

  • weights (dict) – Dictionary of weights.

Notes

This regressor applies the atomic decomposition Ansatz (ADA). For more information check the Notes on the KernelRidge class.
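
A construction sketch with isotropic and per-model settings (the element symbols and numerical values are illustrative; the argument structures follow the dictionaries documented above):

>>> from ml4chem.atomistic.models.gaussian_process import GaussianProcess
>>> # isotropic kernel with a single length scale
>>> gp = GaussianProcess(sigma=1.0, kernel='rbf', lamda=1e-05)
>>> # per-model sigmas and regularization
>>> gp = GaussianProcess(sigma={'energy': {'H': 1.0, 'O': 1.5}},
...                      lamda={'energy': 1e-05, 'forces': 1e-04})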

NAME = 'GaussianProcess'
get_potential_energy(features, reference_space, purpose)[source]

Get potential energy with Kernel Ridge

Parameters
  • features (dict) – Dictionary with hash and features.

  • reference_space (array) – Array with reference feature space.

  • purpose (str) – Purpose of this function: ‘training’, ‘inference’.

Returns

Energy of a molecule and its respective variance.

Return type

energy, variance

get_variance(features, ks, reference_space, purpose)[source]

Compute predictive variance

Parameters
  • features (dict) – Dictionary with data point to be predicted.

  • ks (array) – Variance between data point and reference space.

  • reference_space (list) – Reference space used to compute kernel.

  • purpose (str) – Purpose of this function: ‘training’, ‘inference’.

Returns

Predictive variance.

Return type

variance

ml4chem.atomistic.models.kernelridge module

class ml4chem.atomistic.models.kernelridge.KernelRidge(sigma=1.0, kernel='rbf', scheduler='distributed', lamda=1e-05, trainingimages=None, checkpoints=None, cholesky=True, weights_independent=True, forcetraining=False, nnpartition=None, sum_rule=True, batch_size=None, weights=None, **kwargs)[source]

Bases: object

Kernel Ridge Regression

Parameters
  • sigma (float, list, or dict) –

    Length scale of the Gaussian in the case of the RBF, exponential, and laplacian kernels. Default is 1.0 (float), which computes isotropic kernels. Pass a list if you would like to compute anisotropic kernels, or a dictionary if you want sigmas for each model.

    Example:

    >>> sigma={'energy': {'H': value, 'O': value},
               'forces': {'H': {0: value, 1: value, 2: value},
                      'O': {0: value, 1: value, 2: value}}}
    

    value can be a float or a list.

  • kernel (str) – Choose the kernel. Available kernels are: ‘linear’, ‘rbf’, ‘laplacian’, and ‘exponential’. Default is ‘rbf’.

  • lamda (float, or dictionary) –

    Strength of the regularization. If you pass a dictionary then force and energy will have different regularization:

    >>> lamda = {'energy': value, 'forces': value}
    

    Dictionaries are only used when performing Cholesky factorization.

  • trainingimages (str) – Path to Trajectory file containing the images in the training set. This is useful for predicting new structures.

  • cholesky (bool) – Whether or not we are using Cholesky decomposition to determine the weights. This method returns a unique set of regression coefficients.

  • weights_independent (bool) – Whether or not the weights are going to be split for energy and forces.

  • forcetraining (bool) – Turn force training true.

  • nnpartition (str) – Use per-atom energy partition from a neural network calculator. You have to set the path to .amp file. Useful for energy training with Cholesky factorization. Default is set to None.

  • scheduler (str) – The scheduler to be used with the dask backend.

  • sum_rule (bool) – Whether or not we sum fingerprintprime elements over a given axis. This applies np.sum(fingerprint_list, axis=0).

  • batch_size (int) – Number of elements per batch in order to split computations. Useful when number of local chemical environments is too large.

  • weights (dict) – Dictionary of weights.

Notes

In the case of training total energies, we need to apply either an atomic decomposition Ansatz (ADA) during training or an energy partition scheme to the training set. ADA can be achieved based on Ref. 1. For an explanation of what they do, see the Master thesis by Sonja Mathias.

http://wissrech.ins.uni-bonn.de/teaching/master/masterthesis_mathias_revised.pdf

ADA is the default way of training total energies in this KernelRidge class.

An energy partition scheme for total energies can be obtained from an artificial neural network or from methods such as the interacting quantum atoms theory (IQA). I implemented the nnpartition mode, for which users can provide the path to a NN calculator and we take the per-atom energies from the function .calculate_atomic_energy(). The strategy would be to train the NN with a very tight convergence criterion (1e-6 RMSE). Then, calling .calculate_atomic_energy() would give you the atomic energies for that set.

Forces are a different story because we do know the derivative of the energy with respect to atom positions (a per-atom quantity), so we rely on the algorithm shown by Rupp in Ref. 2.
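
Schematically, the atomic decomposition Ansatz and the regularized linear system solved for the weights take the usual kernel-ridge form (a generic sketch following Refs. 1-2, not a literal transcription of this class):

E(M) \approx \sum_{i \in M} \varepsilon(\mathbf{x}_i),
\qquad
K_{mn} = \sum_{i \in m} \sum_{j \in n} k(\mathbf{x}_i, \mathbf{x}_j),
\qquad
(\mathbf{K} + \lambda \mathbf{I})\,\boldsymbol{\alpha} = \mathbf{y}

where the x_i are atomic feature vectors, lambda corresponds to the lamda keyword, and the linear system is solved by Cholesky factorization when cholesky=True.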

References

  1. Bartók, A. P. & Csányi, G. Gaussian approximation potentials: A brief tutorial introduction. Int. J. Quantum Chem. 115, 1051–1057 (2015).

  2. Rupp, M. Machine learning for quantum mechanics in a nutshell. Int. J. Quantum Chem. 115, 1058–1073 (2015).

NAME = 'KernelRidge'
get_kernel_matrix(feature_space, reference_features, purpose)[source]

Get kernel matrix delayed computations

Parameters
  • feature_space (dict, list) – Dictionary with hash and features, or a list.

  • reference_features (array) – Array with reference feature space.

  • purpose (str) – Purpose of this kernel matrix. Accepted arguments are ‘training’, and ‘inference’.

Returns

List with kernel matrix values.

Return type

kernel_matrix

Notes

This class method expects feature_space to be an OrderedDict and reference_features an array; however, when computing variances, feature_space may also be a list.

get_lt
get_potential_energy(features, reference_space, purpose)[source]

Get potential energy with Kernel Ridge

Parameters
  • features (dict) – Dictionary with hash and features.

  • reference_space (array) – Array with reference feature space.

  • purpose (str) – Purpose of this function: ‘training’, ‘inference’.

Returns

Energy of a molecule.

Return type

energy

get_sigma(sigma, forcetraining=False)[source]

Function to build sigma

Parameters
  • sigma (float, list or dict) – This is the user’s raw input for sigma.

  • forcetraining (bool) – Whether or not force training is set to true.

Returns

_sigma – Universal sigma dictionary for KernelRidge.

Return type

dict

classmethod name()[source]

Returns name of class

prepare_model(feature_space, reference_features, data=None, purpose='training')[source]

Prepare the Kernel Ridge Regression model

Parameters
  • feature_space (dict) – A dictionary with hash, fingerprint structure.

  • reference_features (dict) – A dictionary with raveled tuples of symbol, atomic fingerprint.

  • data (object) – Data object created from the handler.

  • purpose (str) – Purpose of this model: ‘training’, ‘inference’.

Notes

This method builds the atomic kernel matrices and the LT vectors needed to apply the atomic decomposition Ansatz.

train(inputs, targets, data=None)[source]

Train the model

Parameters
  • inputs (dict) – Dictionary with hashed feature space.

  • targets (list) – The expected values that the model has to learn aka y.

  • data (object) – Data object created from the handler.

ml4chem.atomistic.models.kernelridge.decode(symbol)[source]

Decode from binary to string

Parameters

symbol (binary) – A string in binary form, e.g. b’hola’.

Returns

Symbol as a string.

Return type

str

ml4chem.atomistic.models.loss module

ml4chem.atomistic.models.loss.AtomicMSELoss(outputs, targets, atoms_per_image, uncertainty=None)[source]

Default loss function

If the user does not provide a loss function, this mean-squared error loss function is used.

Parameters
  • outputs (tensor) – Outputs of the model.

  • targets (tensor) – Expected value of outputs.

  • atoms_per_image (tensor) – A tensor with the number of atoms per image.

  • uncertainty (tensor, optional) – A tensor of uncertainties that are used to penalize during the loss function evaluation.

Returns

loss – The value of the loss function.

Return type

tensor
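
A plausible sketch of an atom-normalized MSE of this kind (an assumption about the form, not a copy of ML4Chem’s implementation; the uncertainty weighting in particular is illustrative):

>>> import torch
>>> def atomic_mse_sketch(outputs, targets, atoms_per_image, uncertainty=None):
...     # normalize total energies by the number of atoms so that images of
...     # different sizes contribute comparably to the loss
...     diff = outputs / atoms_per_image - targets / atoms_per_image
...     if uncertainty is not None:
...         diff = diff / uncertainty   # down-weight uncertain data points
...     return 0.5 * torch.sum(diff ** 2)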

ml4chem.atomistic.models.loss.EncoderMapLoss(inputs, outputs, targets, latent, periodicity=inf, k_c=1.0, k_auto=1.0, k_sketch=1.0, sigma_h=4.5, a_h=12.0, b_h=6.0, sigma_l=1.0, a_l=2.0, b_l=6.0)[source]

Encodermap loss function

Parameters
  • inputs (tensor) – Inputs of the model.

  • outputs (tensor) – Outputs of the model.

  • targets (tensor) – Expected value of outputs.

  • latent (tensor) – The latent space tensor.

  • periodicity (float) – Defines the distance between periodic walls for the inputs. For example 2pi for angular values in radians. All periodic data processed by EncoderMap must be wrapped to one periodic window. E.g. data with 2pi periodicity may contain values from -pi to pi or from 0 to 2pi. Default is float(“inf”) – non-periodic inputs.

  • k_auto (float) – Contribution of distance loss function to total loss.

  • k_sketch (float) – Contribution of sketch map loss function to total loss.

Returns

loss – The value of the loss function.

Return type

tensor

Notes

This loss function combines a distance measure between outputs and targets, a sketch-map loss, and a regularization term. See Eq. (5) of the paper referenced below.

When passed to the AutoEncoder() class, the model basically becomes an atom-centered model with the encodermap variant.

There is something to note about regularization for this loss function. The authors of EncoderMap penalize both the weights, using L2 regularization, and the magnitude of the activations in the latent-space layer. The L2 regularization is included using weight_decay in the optimizer of choice; the activation penalization is computed below.

References

This is the implementation of the encodermap loss function as proposed by:

  1. Lemke, T., & Peter, C. (2019). EncoderMap: Dimensionality Reduction and Generation of Molecule Conformations. Journal of Chemical Theory and Computation, 15(2), 1209–1215.

ml4chem.atomistic.models.loss.MSELoss(outputs, targets)[source]

Mean-squared error loss function

Parameters
  • outputs (tensor) – Outputs of the model.

  • targets (tensor) – Expected value of outputs.

Returns

loss – The value of the loss function.

Return type

tensor

ml4chem.atomistic.models.loss.SumSquaredDiff(outputs, targets)[source]

Sum of squared differences loss function

Parameters
  • outputs (tensor) – Outputs of the model.

  • targets (tensor) – Expected value of outputs.

Returns

loss – The value of the loss function.

Return type

tensor

Notes

In the literature it is mentioned that for real-valued autoencoders the reconstruction loss function is the sum of squared differences.

ml4chem.atomistic.models.loss.VAELoss(outputs=None, targets=None, mus_latent=None, logvars_latent=None, mus_decoder=None, logvars_decoder=None, annealing=None, variant=None, latent=None, input_dimension=None)[source]

Variational Autoencoder loss function

Parameters
  • outputs (tensor) – Outputs of the model.

  • targets (tensor) – Expected value of outputs.

  • mus_latent (tensor) – Mean values of distribution.

  • logvars_latent (tensor) – Logarithm of the variance.

  • variant (str) –

    The following variants are supported:

    • "multivariate": the decoder outputs a distribution with mean and variance, and we minimize the negative log likelihood plus the KL-divergence. Useful for continuous variables. Feature range [-inf, inf].

    • "bernoulli": the decoder outputs a layer with a sigmoid activation function, and we minimize cross-entropy plus KL-divergence. Features must be in the range [0, 1].

    • "dcgan": the decoder outputs a single layer with tanh, and the loss equals the KL-divergence plus MSELoss. Useful for feature ranges [-1, 1].

  • annealing (float) – Annealing factor that scales the contribution of the KL-divergence term to the total loss.

  • latent (tensor, optional) – The latent space tensor.

  • input_dimension (int, optional) – Input’s dimension.

Returns

loss – The value of the loss function.

Return type

tensor
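
The KL-divergence term common to all variants has the closed form for diagonal Gaussians (Kingma & Welling, Appendix B); a sketch, with the annealing factor applied as documented above (the reconstruction term depends on the variant and is omitted here):

>>> import torch
>>> def kld_term(mus_latent, logvars_latent, annealing=1.0):
...     kld = -0.5 * torch.sum(1 + logvars_latent
...                            - mus_latent.pow(2)
...                            - logvars_latent.exp())
...     return annealing * kld   # total loss = reconstruction term + annealed KLD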

ml4chem.atomistic.models.loss.get_distance(i, j, periodicity)[source]

Get distance between two tensors

Parameters
  • i (tensor) – A tensor.

  • j (tensor) – A tensor.

  • periodicity (float) – Defines the distance between periodic walls for the inputs.

Returns

Tensor with distances.

Return type

tensor

Notes

Cases where periodicity is present are not yet supported.

ml4chem.atomistic.models.loss.get_pairwise_distances(positions, squared=False)[source]

Get pairwise distances of a matrix

Parameters
  • positions (tensor) – Tensor with positions.

  • squared (bool, optional) – Whether or not the squared of pairwise distances are computed, by default False.

Returns

Pairwise distances.

Return type

distances

ml4chem.atomistic.models.loss.sigmoid(r, sigma, a, b)[source]

Sigmoid function

Parameters
  • r (array) – Pairwise distances.

  • sigma (float) – Location of the inflection point.

  • a (float) – Rate at which the sigmoid approaches 0 or 1.

  • b (float) – Rate at which the sigmoid approaches 0 or 1.

Returns

sigmoid – Value of the sigmoid function.

Return type

float
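
For reference, the sketch-map style switching function used by EncoderMap has the following closed form; a sketch, assuming this is the form implemented here (parameter names match the signature above):

>>> def sigmoid_sketch(r, sigma, a, b):
...     # equals 0 at r = 0, 0.5 at r = sigma, and approaches 1 for large r
...     return 1.0 - (1.0 + (2.0 ** (a / b) - 1.0) * (r / sigma) ** a) ** (-b / a)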

ml4chem.atomistic.models.merger module

class ml4chem.atomistic.models.merger.ModelMerger(models)[source]

Bases: torch.nn.modules.module.Module

Model Merger

A class that can merge models and train them simultaneously. Models are executed sequentially. It is assumed that outputs of model1 are the inputs of model2. This behavior can be modified by adding extra_funcs to call external functions.

Parameters

models (list) –

A list of models.
>>> models = [list of models]

NAME = 'Merged'
autoencoders = ['AutoEncoder', 'VAE']
closure(index, model, independent_loss, name=None)[source]

Closure

This method clears previous gradients, iterates over batches, accumulates the gradients, reduces the gradients, updates the model parameters, and finally returns the loss and outputs_.

Parameters
  • index (int) – Index of model.

  • model (obj) – Model object.

  • independent_loss (bool) – Whether or not models’ weights are optimized independently.

  • name (str, optional) – Model class’s name, by default None.

Returns

A tuple with loss function magnitudes and tensor with outputs.

Return type

loss, outputs

forward(X, models)[source]

Forward propagation

Parameters
  • X (list) – List of models’ inputs.

  • models (list) – List of model objects.

Returns

A list with the forward propagation evaluation.

Return type

outputs

classmethod name()[source]

Returns name of class

train(inputs, targets, data=None, optimizer=(None, None), epochs=100, regularization=None, convergence=None, lossfxn=None, device='cpu', batch_size=None, lr_scheduler=None, independent_loss=True, loss_weights=None)[source]

Train the models

Parameters
  • inputs (dict) – Dictionary with hashed feature space.

  • targets (list) – The expected values that the model has to learn aka y.

  • model (object) – The NeuralNetwork class.

  • data (object) – Data object created from the handler.

  • optimizer (tuple) –

    The optimizer is a tuple with the structure:
    >>> ('adam', {'lr': float, 'weight_decay': float})
    

  • epochs (int) – Number of full training cycles.

  • regularization (float) – This is the L2 regularization. It is not the same as weight decay.

  • convergence (dict) –

    Instead of using epochs, users can set a convergence criterion.
    >>> convergence = {"rmse": [0.04, 0.02]}
    

  • lossfxn (obj) – A loss function object.

  • device (str) – Calculation can be run on the cpu or cuda (gpu).

  • batch_size (int) – Number of data points per batch to use for training. Default is None.

  • lr_scheduler (tuple) –

    Tuple with structure: scheduler’s name and a dictionary with keyword arguments.

    >>> lr_scheduler = ('ReduceLROnPlateau',
                        {'mode': 'min', 'patience': 10})
    

  • independent_loss (bool) – Whether or not models’ weights are optimized independently.

  • loss_weights (list) – How much the loss of model(i) contributes to the total loss.
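
A merging sketch under assumed objects (autoencoder, potentials, the feature spaces, targets, and data_handler are hypothetical; the list-per-model layout of the arguments is an assumption based on the sequential execution described above):

>>> from ml4chem.atomistic.models.merger import ModelMerger
>>> from ml4chem.atomistic.models.loss import AtomicMSELoss, EncoderMapLoss
>>> merger = ModelMerger([autoencoder, potentials])
>>> merger.train(inputs=[ae_inputs, None],   # model 2 consumes model 1 outputs
...              targets=[ae_targets, energy_targets],
...              data=[data_handler, data_handler],
...              optimizer=('adam', {'lr': 1e-3}),
...              lossfxn=[EncoderMapLoss, AtomicMSELoss],
...              independent_loss=True,
...              loss_weights=[1.0, 1.0])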

train_batches(chunk_index, chunk, targets, models, lossfxn, atoms_per_image, device)[source]

ml4chem.atomistic.models.neuralnetwork module

class ml4chem.atomistic.models.neuralnetwork.NeuralNetwork(hiddenlayers=(3, 3), activation='relu', **kwargs)[source]

Bases: ml4chem.atomistic.models.base.DeepLearningModel, torch.nn.modules.module.Module

Atom-centered Neural Network Regression with Pytorch

This model is based on Ref. 1 by Behler and Parrinello.

Parameters
  • hiddenlayers (tuple) – Structure of hidden layers in the neural network.

  • activation (str) – Activation functions. Supported “tanh”, “relu”, or “celu”.

References

  1. Behler, J. & Parrinello, M. Generalized Neural-Network Representation of High-Dimensional Potential-Energy Surfaces. Phys. Rev. Lett. 98, 146401 (2007).

  2. Khorshidi, A. & Peterson, A. A. Amp : A modular approach to machine learning in atomistic simulations. Comput. Phys. Commun. 207, 310–324 (2016).
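
A construction sketch (the input dimension and data_handler are hypothetical; hiddenlayers, activation, and prepare_model() follow the signatures in this class):

>>> from ml4chem.atomistic.models.neuralnetwork import NeuralNetwork
>>> nn = NeuralNetwork(hiddenlayers=(10, 10), activation='relu')
>>> # input_dimension matches the length of the feature vectors
>>> nn.prepare_model(input_dimension=8, data=data_handler, purpose='training')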

NAME = 'PytorchPotentials'
forward(X)[source]

Forward propagation

This is forward propagation and it returns the atomic energy.

Parameters

X (list) – List of inputs in the feature space.

Returns

outputs – A list of tensors with energies per image.

Return type

tensor

get_activations(images, model=None, numpy=True)[source]

Get activations of each hidden-layer

This function extracts the activations of each hidden layer of the neural network.

Parameters
  • images (dict) – Images with structure hash, features.

  • model (object) – A ML4Chem model object.

  • numpy (bool) – Whether we want numpy arrays or tensors.

Returns

activations – A DataFrame with activations for each layer.

Return type

DataFrame

classmethod name()[source]

Returns name of class

prepare_model(input_dimension, data=None, purpose='training')[source]

Prepare the model

Parameters
  • input_dimension (int) – Input’s dimension.

  • data (object) – Data object created from the handler.

  • purpose (str) – Purpose of this model: ‘training’, ‘inference’.

class ml4chem.atomistic.models.neuralnetwork.train(inputs, targets, model=None, data=None, optimizer=(None, None), regularization=None, epochs=100, convergence=None, lossfxn=None, device='cpu', batch_size=None, lr_scheduler=None, uncertainty=None, checkpoint=None, test=None)[source]

Bases: ml4chem.atomistic.models.base.DeepLearningTrainer

Train the model

Parameters
  • inputs (dict) – Dictionary with hashed feature space.

  • targets (list) – The expected values that the model has to learn aka y.

  • model (object) – The NeuralNetwork class.

  • data (object) – Data object created from the handler.

  • optimizer (tuple) –

    The optimizer is a tuple with the structure:
    >>> ('adam', {'lr': float, 'weight_decay': float})
    

  • epochs (int) – Number of full training cycles.

  • regularization (float) – This is the L2 regularization. It is not the same as weight decay.

  • convergence (dict) – Instead of using epochs, users can set a convergence criterion. Supported keys are “training” and “test”.

  • lossfxn (obj) – A loss function object.

  • device (str) – Calculation can be run on the cpu or cuda (gpu).

  • batch_size (int) – Number of data points per batch to use for training. Default is None.

  • lr_scheduler (tuple) –

    Tuple with structure: scheduler’s name and a dictionary with keyword arguments.

    >>> lr_scheduler = ('ReduceLROnPlateau',
                        {'mode': 'min', 'patience': 10})
    

  • uncertainty (list) – A list of uncertainties that are used to penalize during the loss function evaluation.

  • checkpoint (dict) –

    Set checkpoints. Dictionary with following structure:

    >>> checkpoint = {"label": label, "checkpoint": 100, "path": ""}
    

    label refers to the name used to save the checkpoint, checkpoint is an integer or -1 for saving all epochs, and path is where the checkpoint is stored. Default is None, in which case no checkpoint is saved.

  • test (dict) –

    A dictionary used to compute the error over a validation/test set during training procedures.

    >>> test = {"features": test_space, "targets": test_targets, "data": data_test}
    

    The keys,values of the dictionary are:

    • ”data”: a Data object.

    • ”targets”: test set targets.

    • ”features”: a feature space obtained using features.calculate().
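
A training sketch that puts the pieces above together (feature_space, energy_targets, and data_handler are hypothetical objects from the features/data handlers; the optimizer, checkpoint, and scheduler structures follow those documented above):

>>> from ml4chem.atomistic.models.neuralnetwork import train
>>> trainer = train(feature_space, energy_targets,
...                 model=nn,
...                 data=data_handler,
...                 optimizer=('adam', {'lr': 1e-3, 'weight_decay': 1e-5}),
...                 epochs=500,
...                 device='cpu',
...                 checkpoint={'label': 'training', 'checkpoint': 100, 'path': ''},
...                 lr_scheduler=('ReduceLROnPlateau',
...                               {'mode': 'min', 'patience': 10}))
>>> trainer.trainer()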

classmethod closure(chunks, targets, uncertainty, model, lossfxn, atoms_per_image, device)[source]

Closure

This class method clears previous gradients, iterates over batches, accumulates the gradients, reduces the gradients, updates the model parameters, and finally returns the loss and outputs_.

Parameters
  • Cls (object) – Class object.

  • chunks (tensor or list) – Tensor with input data points in batch with index.

  • targets (tensor or list) – The targets.

  • uncertainty (list) – A list of uncertainties that are used to penalize during the loss function evaluation.

  • model (obj) – Pytorch model to perform forward() and get gradients.

  • lossfxn (obj) – A loss function object.

  • atoms_per_image (list) – Atoms per image because we are doing atom-centered methods.

  • device (str) – Are we running cuda or cpu?

classmethod train_batches(index, chunk, targets, uncertainty, model, lossfxn, atoms_per_image, device)[source]

A function that allows training per batches

Parameters
  • index (int) – Index of batch.

  • chunk (tensor or list) – Tensor with input data points in batch with index.

  • targets (tensor or list) – The targets.

  • model (obj) – Pytorch model to perform forward() and get gradients.

  • uncertainty (list) – A list of uncertainties that are used to penalize during the loss function evaluation.

  • lossfxn (obj) – A loss function object.

  • atoms_per_image (list) – Atoms per image because we are doing atom-centered methods.

  • device (str) – Are we running cuda or cpu?

Returns

loss – The loss function of the batch.

Return type

tensor

trainer()[source]

Run the training class

ml4chem.atomistic.models.rt module

ml4chem.atomistic.models.se3net module

class ml4chem.atomistic.models.se3net.AvgSpacial[source]

Bases: torch.nn.modules.module.Module

forward(inp)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class ml4chem.atomistic.models.se3net.SE3Net(num_classes, size, activation='relu')[source]

Bases: torch.nn.modules.module.Module

Rotational equivariant neural network

Parameters
  • num_classes (int) –

  • size (int) –

  • activation (str) –

forward(inputs, difference_mat)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class ml4chem.atomistic.models.se3net.torch_default_dtype(dtype)[source]

Bases: object

Module contents