So far, running LLMs has required substantial computing resources, mainly GPUs. When run locally on an average Mac, a simple prompt to a typical LLM takes ...
pyrama my_pdb_file.pdb -o output_folder --no-show

Note: The script can read multiple PDB files and now processes multiple chains automatically. Outliers will be annotated with ...