HTS Bioinf - Update of external annotation data
Scope
This document describes the procedures for generating external annotation data sets for Anno.
Data sources
The generation of the annotation data sets is coordinated by the `datasets.json` files of the Anno and Anno-targets repositories. The repository of competence for each data source is shown in the table below, together with the agreed update frequency.
| Data source | Repository | Update frequency |
| --- | --- | --- |
| ClinVar (and PubMed db) | Anno | monthly |
| HGMD (and PubMed db) | Anno-targets | quarterly |
| wgsDB | Anno-targets | quarterly |
| VEP | Anno | yearly |
| inDB | Anno-targets | yearly |
| gnomAD, SeqRepo, UTA, RefSeq | Anno | irregularly |
| gnomAD-MT, gnomAD-SV, SweGen-SV, AnnotSV | Anno-targets | irregularly |
For data sources with irregular updates, we check every quarter for new releases and update whenever a suitable and significant release is available.
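To illustrate what an update touches, the sketch below shows what a single entry in `datasets.json` might look like. The field names and values are illustrative assumptions only, not the actual schema; always follow the structure of the existing file in the repository you are editing.

```json
{
  "clinvar": {
    "description": "ClinVar variant annotation (illustrative entry; field names are assumptions)",
    "version": "20240301",
    "destination": "variantDBs/clinvar",
    "generate": "<command invoked by make generate-package>"
  }
}
```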
Update Procedure
Credentials
All credentials required by Anno to manage external annotation data sets are expected to be stored in a credentials file, provided to Anno's `Makefile` via the environment variable `DB_CREDS`.
DigitalOcean (DO)
To download and upload data to DigitalOcean's OUSAMG project, a DO access key and its corresponding secret are required. Directions for generating these credentials are available here. Store your key and secret as environment variables in the `DB_CREDS` file (which we assume will be set to `$HOME/.db_creds`) as follows:
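A minimal sketch of the relevant entries, assuming the file is read as shell-style variable assignments; the variable names below are assumptions, so check the `Makefile` for the names it actually expects:

```bash
# $HOME/.db_creds -- DigitalOcean section (variable names are assumptions)
export SPACES_KEY="<your DO access key>"
export SPACES_SECRET="<your DO access secret>"
```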
HGMD
HGMD credentials are required to download HGMD data. Store your HGMD user name and password as environment variables in the `DB_CREDS` file as follows:
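For example (again, the variable names are assumptions; check the `Makefile` for the names it reads):

```bash
# $HOME/.db_creds -- HGMD section (variable names are assumptions)
export HGMD_USER="<your HGMD user name>"
export HGMD_PASSWORD="<your HGMD password>"
```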
NCBI
An ENTREZ API token is necessary to download bulk NCBI data. Follow the instructions here and here to obtain a token and add it to the `DB_CREDS` file as follows:
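For example (the variable name is an assumption; check the `Makefile`):

```bash
# $HOME/.db_creds -- NCBI section (variable name is an assumption)
export ENTREZ_API_KEY="<your NCBI/ENTREZ API token>"
```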
Automatic data generation
In the following, we will assume a credentials file `.db_creds` exists in `$HOME`.
Tip
Some of the steps below may be resource-demanding and time-consuming. Consider generating the data from `/storage/<your_username>` on Hetzner.
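For example (the sub-directory name is arbitrary, and `$USER` stands in for your username):

```bash
# Work from local storage on Hetzner, as suggested in the tip above
mkdir -p /storage/$USER/anno-data-update
cd /storage/$USER/anno-data-update
```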
- Clone the relevant repository (i.e., `ella-anno` or `anno-targets`; refer to the table above).
- Update `datasets.json` with the version you wish to generate. If required (which is rare), modify the `generate` commands accordingly.
- `make build-annobuilder`
- `make generate[-amg]-package DB_CREDS=$HOME/.db_creds PKG_NAME=<package_name>` (include `-amg` for `anno-targets` data sources; check the `Makefile` if in doubt, `make help` may help).
- `make upload[-amg]-package DB_CREDS=$HOME/.db_creds PKG_NAME=<package_name>` (include `-amg` for `anno-targets` data sources; check the `Makefile` if in doubt, `make help` may help). For HGMD updates you will need to supply the location of the reference FASTA file as `FASTA=/path/to/fasta`; a `make` command to download it from DO is sketched after this list.
- Commit and push the changes to `datasets.json` in an aptly named branch (refer to a pre-existing issue in the respective repository if applicable) and file an MR. Use the merge request template data_mr_template, which proposes basic sanity checks for the newly generated data.
- Once the MR is approved, merge your branch into `dev`.
- After merging, follow the Release and deploy procedure for the anno system.
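To tie the steps together, here is a sketch of a full update cycle. The `build-annobuilder`, `generate[-amg]-package` and `upload[-amg]-package` targets come from the steps above; the repository URL placeholders, the package names and the FASTA download target are illustrative assumptions, so confirm them with `make help` before running.

```bash
# Example: ClinVar update in the ella-anno repository (package name is an example)
git clone <ella-anno repository URL> && cd ella-anno
# bump the ClinVar version in datasets.json, then:
make build-annobuilder
make generate-package DB_CREDS=$HOME/.db_creds PKG_NAME=clinvar
make upload-package DB_CREDS=$HOME/.db_creds PKG_NAME=clinvar

# Example: HGMD update in anno-targets; the reference FASTA must be supplied.
# The download target and its package name are assumptions -- check the Makefile.
make download-anno-package DB_CREDS=$HOME/.db_creds PKG_NAME=fasta
make generate-amg-package DB_CREDS=$HOME/.db_creds PKG_NAME=hgmd FASTA=/path/to/fasta
make upload-amg-package DB_CREDS=$HOME/.db_creds PKG_NAME=hgmd
```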
Update literature reference database
In ELLA, we aim to keep data for all PubMed references present in either HGMD or ClinVar. These PubMed ids are generated as line-separated text files in the HGMD or ClinVar data directories.
- Clone the `anno-targets` repository
- `make download-amg-package PKG_NAME=hgmd DB_CREDS=$HOME/.db_creds`
- `make download-anno-package PKG_NAME=clinvar DB_CREDS=$HOME/.db_creds`
- `cat anno-data/variantDBs/*/*_pubmed_ids.txt | sort -n | uniq > pubmed_ids.txt`
The next steps are to download reference details for all these PubMed ids:
1. Preparation. Because some of the operations below use `git submodule` under the hood, it is recommended to set up your `ssh` access in advance, e.g. as sketched after this list.
2. Clone the ELLA repository.
3. Copy `pubmed_ids.txt` into the ELLA directory.
4. Access `ella-cli` via docker container (`docker compose run -e LOGPATH=/tmp -u $(id -u):$(id -g) --no-deps -it --entrypoint /bin/bash --build apiv1`) and run `ella-cli references fetch pubmed_ids.txt` (this will take some time).
5. Import the file created in the previous step (`references_YYMMDD.txt`) to TSD; see the wiki for `tacl`.
6. Deposit the references in the ELLA database (see the sketch after this list).
7. Delete the imported file used to deposit the references.
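A minimal sketch of steps 1-4 and 6-7, assuming an `ssh-agent`-based setup and that references are deposited with an `ella-cli deposit` subcommand; the repository URL placeholder, the key path and the deposit command are assumptions, so check `ella-cli --help` inside the container for the exact invocation.

```bash
# Step 1: make your ssh key available for the git submodule operations
# (ssh-agent is one common approach; the key path is an example)
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519

# Steps 2-3: clone ELLA and copy in the PubMed id list
git clone <ella repository URL> && cd ella
cp /path/to/pubmed_ids.txt .

# Step 4: enter the apiv1 container and fetch reference details
docker compose run -e LOGPATH=/tmp -u $(id -u):$(id -g) --no-deps -it --entrypoint /bin/bash --build apiv1
ella-cli references fetch pubmed_ids.txt   # run inside the container; takes a while

# Steps 6-7: deposit the references, then remove the imported file
# (the deposit subcommand below is an assumption -- verify with `ella-cli --help`)
ella-cli deposit references references_YYMMDD.txt
rm references_YYMMDD.txt
```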