Engine
full name: tenpy.algorithms.tdvp.Engine
parent module:
tenpy.algorithms.tdvp
type: class
Inheritance Diagram
Methods
- estimate_RAM – Gives an approximate prediction for the required memory usage.
- evolve – Evolve by N_steps*dt.
- get_resume_data – Return necessary data to resume a run() interrupted at a checkpoint.
- prepare_evolve – Prepare an evolution step.
- resume_run – Resume a run that was interrupted.
- run – (Real-)time evolution with TDVP.
- run_evolution – Perform a (real-)time evolution of psi by N_steps * dt.
- run_one_site – Run the TDVP algorithm with the one-site update.
- run_two_sites – Run the TDVP algorithm with the two-site update.
- sweep_left_right – Performs the left->right sweep of the second-order TDVP scheme with one-site update.
- sweep_left_right_two – Performs the left->right sweep of the second-order TDVP scheme with two-site update.
- sweep_right_left – Performs the right->left sweep of the second-order TDVP scheme with one-site update.
- sweep_right_left_two – Performs the right->left sweep of the second-order TDVP scheme with two-site update.
- switch_engine – Initialize algorithm from another algorithm instance of a different class.
- theta_svd_left_right – Performs the SVD from left to right.
- theta_svd_right_left – Performs the SVD from right to left.
- update_s_h0 – Update with the zero-site Hamiltonian (update of the singular values).
- update_theta_h1 – Update with the one-site Hamiltonian.
- update_theta_h2 – Update with the two-site Hamiltonian.
Class Attributes and Properties
- time_dependent_H – Whether the algorithm supports a time-dependent H.
- class tenpy.algorithms.tdvp.Engine(psi, model, options, **kwargs)[source]
Bases: OldTDVPEngine
Deprecated old name of the OldTDVPEngine.
- estimate_RAM(saving_factor=None, mini=50000)[source]
Gives an approximate prediction for the required memory usage. This calculation is based on the requested bond dimension, the local Hilbert space dimension, the number of sites, and the boundary conditions.
- Parameters:
saving_factor (float) – Represents the fraction of RAM saved due to conservation laws. By default it is None, and the factor is extracted from the model automatically. However, this is only possible in a few cases; in most cases it has to be estimated, since it depends on the model parameters. If one has a better estimate, one can pass the value directly. The value can be extracted by building the initial state psi (usually by performing DMRG) and then calling print(psi.get_B(0).sparse_stats()). TeNPy will print the fraction of nonzero entries in the first line (for example, 6 of 16 entries (=0.375) nonzero). This fraction corresponds to the saving_factor; in our example, it is 0.375.
mini (float) – The minimum amount of RAM in kB to be returned. By default 50 MB.
- Returns:
usage – Required RAM in kB. It always returns at least mini (= 50 MB by default).
- Return type:
int
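As an illustration of the sparse_stats recipe above, the saving_factor is simply the printed fraction of nonzero entries; a minimal helper (hypothetical, not part of TeNPy) to compute it:

```python
def saving_factor_from_stats(nonzero_entries, total_entries):
    """Fraction of nonzero entries, usable as a saving_factor.

    Hypothetical helper mirroring the fraction TeNPy prints in
    sparse_stats(), e.g. "6 of 16 entries (=0.375) nonzero".
    """
    return nonzero_entries / total_entries

# The example from the description above: 6 of 16 entries nonzero.
factor = saving_factor_from_stats(6, 16)  # 0.375
```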
- evolve(N_steps, dt)[source]
Evolve by N_steps*dt.
Subclasses may override this with a more efficient way of performing N_steps update steps.
- Parameters:
N_steps (int) – The number of time steps to perform.
dt (float) – The size of each time step.
- Returns:
trunc_err – Sum of truncation errors introduced during evolution.
- Return type:
TruncationError
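The default behavior can be pictured as a loop that accumulates the truncation error over N_steps update steps; the classes below are simplified stand-ins for illustration, not TeNPy's actual implementation:

```python
class TruncationError:
    """Simplified stand-in for TeNPy's truncation-error accumulator."""
    def __init__(self, eps=0.0):
        self.eps = eps

    def __iadd__(self, other):
        self.eps += other.eps
        return self

def evolve(engine, N_steps, dt):
    """Sketch of the default evolve(): repeat an update step N_steps times,
    summing up the truncation error introduced by each step."""
    trunc_err = TruncationError()
    for _ in range(N_steps):
        trunc_err += engine.update_step(dt)  # each step may truncate
    return trunc_err
```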
- get_resume_data(sequential_simulations=False)[source]
Return necessary data to resume a run() interrupted at a checkpoint.
At a checkpoint, you can save psi, model and options along with the data returned by this function. When the simulation aborts, you can resume it using this saved data with:
eng = AlgorithmClass(psi, model, options, resume_data=resume_data)
eng.resume_run()
An algorithm which doesn't support this should override resume_run to raise an error.
- Parameters:
sequential_simulations (bool) – If True, return only the data for re-initializing a sequential simulation run, where we "adiabatically" follow the evolution of a ground state (for variational algorithms), or do a series of quenches (for time evolution algorithms); see run_seq_simulations().
- Returns:
resume_data – Dictionary with the necessary data (apart from copies of psi, model, options) that allows to continue the simulation from where we are now. It might contain an explicit copy of psi.
- Return type:
dict
- prepare_evolve(dt)[source]
Prepare an evolution step.
This method is used to prepare repeated calls of evolve() given the model. For example, it may generate approximations of U = exp(-i H dt). To avoid overhead, it may cache the result depending on parameters/options; but it should always regenerate it if force_prepare_evolve is set.
- Parameters:
dt (float) – The time step to be used.
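The cache-unless-forced contract described here can be sketched generically; the class below is a simplified stand-in for illustration, not TeNPy's code:

```python
class EvolutionSketch:
    """Minimal sketch of the prepare_evolve caching contract."""
    def __init__(self):
        self._cached_dt = None
        self._U = None
        self.force_prepare_evolve = False

    def prepare_evolve(self, dt):
        # Reuse the cached propagator unless dt changed or a rebuild is forced.
        if (self._U is not None and self._cached_dt == dt
                and not self.force_prepare_evolve):
            return self._U
        self._U = ("U(dt)", dt)  # placeholder for an approximation of exp(-i H dt)
        self._cached_dt = dt
        self.force_prepare_evolve = False
        return self._U
```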
- resume_run()[source]
Resume a run that was interrupted.
In case we saved an intermediate result at a checkpoint, this function allows to resume the run() of the algorithm (after re-initialization with the resume_data). Since most algorithms just have a while loop with break conditions, the default behavior implemented here is to just call run().
- run_evolution(N_steps, dt)[source]
Perform a (real-)time evolution of psi by N_steps * dt.
This is the inner part of run() without the logging. For parameters see TimeEvolutionAlgorithm.
- run_one_site(N_steps=None)[source]
Run the TDVP algorithm with the one-site update.
Warning
Be aware that the bond dimension will not increase!
- Parameters:
N_steps (int) – Number of steps.
- run_two_sites(N_steps=None)[source]
Run the TDVP algorithm with the two-site update.
The bond dimension will increase. Truncation happens at every step of the sweep, according to the parameters set in trunc_params.
- Parameters:
N_steps (int) – Number of steps.
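A typical choice of trunc_params controlling this truncation might look as follows (chi_max and svd_min are standard TeNPy truncation options; the values here are illustrative, not defaults):

```python
# Illustrative truncation parameters (values are examples, not defaults):
trunc_params = {
    'chi_max': 100,     # keep at most 100 singular values per bond
    'svd_min': 1.e-10,  # discard singular values below this threshold
}
```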
- sweep_left_right()[source]
Performs the left->right sweep of the second-order TDVP scheme with one-site update.
Evolve by 0.5*dt.
- sweep_left_right_two()[source]
Performs the left->right sweep of the second-order TDVP scheme with two-site update.
Evolve by 0.5*dt.
- sweep_right_left()[source]
Performs the right->left sweep of the second-order TDVP scheme with one-site update.
Evolve by 0.5*dt.
- sweep_right_left_two()[source]
Performs the right->left sweep of the second-order TDVP scheme with two-site update.
Evolve by 0.5*dt.
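Taken together, one full second-order time step pairs the two half-step sweeps; a schematic driver (hypothetical callables, for illustration only):

```python
def second_order_step(sweep_left_right, sweep_right_left, dt):
    """One full second-order TDVP time step of size dt.

    Each sweep callable is assumed to evolve the state by 0.5*dt,
    as in the sweep_* methods documented above (hypothetical interface).
    """
    sweep_left_right()   # evolves by 0.5*dt
    sweep_right_left()   # evolves by 0.5*dt -> total evolution: dt
```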
- classmethod switch_engine(other_engine, *, options=None, **kwargs)[source]
Initialize algorithm from another algorithm instance of a different class.
You can initialize one engine from another of a not-too-different subclass. Internally, this function calls get_resume_data() to extract data from the other_engine and then initializes the new class.
Note that it transfers the data without making copies in most cases; even the options! Thus, when you call run() on one of the two algorithm instances, it will modify the state, environment, etc. in the other. We recommend making the switch as
engine = OtherSubClass.switch_engine(engine)
directly replacing the reference.
- Parameters:
cls (class) – Subclass of Algorithm to be initialized.
other_engine (Algorithm) – The engine from which data should be transferred. Another, but not too different, algorithm subclass; e.g. you can switch from the TwoSiteDMRGEngine to the OneSiteDMRGEngine.
options (None | dict-like) – If not None, these options are used for the new initialization. If None, take the options from the other_engine.
**kwargs – Further keyword arguments for class initialization. If not defined, resume_data is collected with get_resume_data().
- theta_svd_left_right(theta)[source]
Performs the SVD from left to right.
- Parameters:
theta (tenpy.linalg.np_conserved.Array) – The theta tensor on which the SVD is applied.
- theta_svd_right_left(theta)[source]
Performs the SVD from right to left.
- Parameters:
theta (tenpy.linalg.np_conserved.Array) – The theta tensor on which the SVD is applied.
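As an illustration of what such a left-to-right split does, here is the analogous operation on a plain dense two-site tensor, using numpy instead of TeNPy's charge-conserving Array (the leg order [vL, p0, p1, vR] and shapes are assumptions for this sketch):

```python
import numpy as np

def split_theta_left_right(theta):
    """Split a two-site tensor theta[vL, p0, p1, vR] by SVD, keeping the
    left part as an isometry A and carrying s @ Vh to the right.
    Dense-numpy illustration of a left->right SVD step; not TeNPy code."""
    vL, p0, p1, vR = theta.shape
    mat = theta.reshape(vL * p0, p1 * vR)
    U, s, Vh = np.linalg.svd(mat, full_matrices=False)
    A = U.reshape(vL, p0, -1)                    # left-canonical tensor
    SV = (s[:, None] * Vh).reshape(-1, p1, vR)   # remainder pushed to the right
    return A, SV

rng = np.random.default_rng(0)
theta = rng.random((2, 3, 3, 2))
A, SV = split_theta_left_right(theta)
# Contracting A with SV recovers theta up to numerical precision.
recon = np.tensordot(A, SV, axes=(2, 0))
```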
- time_dependent_H = False
Whether the algorithm supports a time-dependent H.
- update_s_h0(s, H, dt)[source]
Update with the zero-site Hamiltonian (update of the singular values).
- Parameters:
s (tenpy.linalg.np_conserved.Array) – The singular-value matrix which is updated.
H (H0_mixed) – Zero-site Hamiltonian that we need to apply to the singular-value matrix.
dt (complex) – Time step of the evolution.
- update_theta_h2(Lp, Rp, theta, W0, W1, dt)[source]
Update with the two-site Hamiltonian.
- Parameters:
Lp (tenpy.linalg.np_conserved.Array) – Tensor representing the left environment.
Rp (tenpy.linalg.np_conserved.Array) – Tensor representing the right environment.
theta (tenpy.linalg.np_conserved.Array) – The theta tensor which needs to be updated.
W0 (tenpy.linalg.np_conserved.Array) – MPO tensor which is applied to the 'p0' leg of theta.
W1 (tenpy.linalg.np_conserved.Array) – MPO tensor which is applied to the 'p1' leg of theta.