ml4chem.atomistic package

Subpackages

Submodules

ml4chem.atomistic.potentials module

class ml4chem.atomistic.potentials.Potentials(features=None, model=None, path=None, label='ml4chem', atoms=None, ml4chem_path=None, preprocessor=None, batch_size=None)[source]

Bases: Calculator, object

Atomistic Machine Learning Potentials

This class is highly inspired by the Atomistic Machine-Learning package (Amp).

Parameters:
  • features (object) – Atomic feature vectors (local chemical environments) from any of the features module.

  • model (object) – Machine learning algorithm to build a model.

  • path (str) – Path to save files.

  • label (str) – Name of files. Default ml4chem.

  • preprocessor (str) – Path to load sklearn preprocessor object. Useful when doing inference.

  • batch_size (int) – Number of data points per batch to use for training. Default is None.
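When batch_size is set, the data set is split into consecutive mini-batches before training; the batching itself is handled internally by ml4chem, but the idea can be sketched with a hypothetical helper (chunked is not part of the ml4chem API):

```python
def chunked(images, batch_size):
    """Split a list of images into consecutive batches of at most batch_size."""
    if batch_size is None:
        # batch_size=None (the default) means one batch with the whole set.
        return [images]
    return [images[i:i + batch_size] for i in range(0, len(images), batch_size)]

# Example: 5 data points with batch_size=2 give batches of sizes 2, 2, 1.
batches = chunked(list(range(5)), 2)
```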

autoencoders = ['AutoEncoder', 'VAE']
calculate(atoms, properties, system_changes)[source]

Calculate the requested properties for the given atoms.

Parameters:
  • atoms (object, list) – List of images in ASE format.

  • properties (list) – List of properties to be computed (e.g. energy, forces).

  • system_changes (list) – List of what has changed since the last calculation, following the ASE Calculator convention.

implemented_properties: List[str] = ['energy', 'forces']

Properties calculator can handle (energy, forces, …)

classmethod load(model=None, params=None, preprocessor=None, **kwargs)[source]

Load ML4Chem models

Parameters:
  • model (str) – The path to load the model from the .ml4c file for inference.

  • params (str) – The path to load the .params file with users’ inputs.

  • preprocessor (str) – The path to load the file with the sklearn preprocessor object.

module_names = {'GaussianProcess': 'gaussian_process', 'KernelRidge': 'kernelridge', 'PytorchIonicPotentials': 'ionic', 'PytorchPotentials': 'neuralnetwork', 'RetentionTimes': 'rt', 'VAE': 'autoencoders'}
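The module_names mapping lets load() resolve which submodule defines a given model class. A hedged sketch of that lookup (the full import path in the comment is illustrative, not confirmed from the source):

```python
module_names = {
    'GaussianProcess': 'gaussian_process',
    'KernelRidge': 'kernelridge',
    'PytorchIonicPotentials': 'ionic',
    'PytorchPotentials': 'neuralnetwork',
    'RetentionTimes': 'rt',
    'VAE': 'autoencoders',
}

def resolve_module(model_class_name):
    """Map a model class name to the submodule that defines it."""
    submodule = module_names[model_class_name]
    # In ml4chem the actual resolution would then import something like
    # "ml4chem.atomistic.models.<submodule>" via importlib; the exact
    # package path is an assumption here.
    return submodule
```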
static save(model=None, features=None, path=None, label='ml4chem')[source]

Save a model

Parameters:
  • model (obj) – The model to be saved.

  • features (obj) – Features object.

  • path (str) – The path where to save the model.

  • label (str) – Name of files. Default ml4chem.

svm_models = ['KernelRidge', 'GaussianProcess']
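The autoencoders and svm_models class attributes let the trainer branch on model family, since kernel models and neural models take different training paths. A minimal membership check with a hypothetical helper (model_family is not part of the ml4chem API):

```python
svm_models = ['KernelRidge', 'GaussianProcess']
autoencoders = ['AutoEncoder', 'VAE']

def model_family(model_name):
    """Classify a model name by the training path it would take (illustrative)."""
    if model_name in svm_models:
        return 'kernel'        # kernel-regression path
    if model_name in autoencoders:
        return 'autoencoder'   # unsupervised reconstruction path
    return 'neural'            # default: gradient-based training
```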
train(training_set, epochs=100, lr=0.001, convergence=None, device='cpu', optimizer=(None, None), lossfxn=None, regularization=0.0, batch_size=None, **kwargs)[source]

Method to train models

Parameters:
  • training_set (object, list) – List containing the training set.

  • epochs (int) – Number of full training cycles.

  • lr (float) – Learning rate.

  • convergence (dict) – Instead of using epochs, users can set a convergence criterion.

  • device (str) – Calculation can be run on the cpu or cuda (gpu).

  • optimizer (tuple) –

    The optimizer is a tuple with the structure:

    >>> ('adam', {'lr': float, 'weight_decay': float})
    

  • lossfxn (object) – A loss function object.

  • regularization (float) – The L2 regularization strength, added as a penalty term to the loss. Note that this is not the same as weight decay: for adaptive optimizers such as Adam, an L2 penalty on the loss and decoupled weight decay produce different updates.

  • batch_size (int) – Number of data points per batch to use for training. Default is None.
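The (name, kwargs) optimizer tuple above can be dispatched to a constructor by name. A self-contained sketch using plain dictionaries as stand-ins for the real PyTorch optimizer classes (the default of 'adam' when (None, None) is passed is an assumption, not confirmed from the source):

```python
# Stand-ins for torch.optim classes, so the dispatch logic runs on its own.
def adam(params, lr=0.001, weight_decay=0.0):
    return {'name': 'adam', 'lr': lr, 'weight_decay': weight_decay}

def sgd(params, lr=0.001):
    return {'name': 'sgd', 'lr': lr}

OPTIMIZERS = {'adam': adam, 'sgd': sgd}

def get_optimizer(optimizer, params):
    """Build an optimizer from a ('name', kwargs) tuple; (None, None) falls back to a default."""
    name, kwargs = optimizer
    if name is None:
        name, kwargs = 'adam', {}  # assumed default choice
    return OPTIMIZERS[name](params, **(kwargs or {}))

opt = get_optimizer(('adam', {'lr': 0.001, 'weight_decay': 0.0}), params=[])
```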

Module contents

class ml4chem.atomistic.Potentials

Alias of ml4chem.atomistic.potentials.Potentials, re-exported at the package level. See the full class documentation above.