Guidance on how to prepare a Readme describing data de-identification steps performed on datasets ahead of submission
The purpose of this document is to provide guidance to data contributors on how to prepare a Readme describing the data de-identification steps performed on datasets ahead of submission to BDC. Clear documentation of the de-identification methods is required at submission so that BDC and future data users can understand the data elements and the de-identification applied to the datasets being shared.
Researchers are expected to describe the de-identification methods used on the data in sufficient detail for an external party to understand and replicate them. Documentation of the approach for all 18 De-identifying Elements is required.
Section 3 is an example checklist used by BDC to check data de-identification: it demonstrates that the data have been reviewed and notes whether particular PHI/PII was found in the dataset and how it was resolved.
Section 4 is an example Readme, adapted from the NHLBI .
De-identification QC by BDC includes the data elements in the table below.
As studies begin to prepare to upload data to BDC, it is required that studies describe the de-identification approach applied to the study datasets. The study team is responsible for the de-identification of all data uploaded to BDC in accordance with , as outlined in the and .
Below is an example de-identification readme file describing the de-identification of different data elements.
The study should perform date shifting instead of using a reference date (e.g., days from randomization or consent) when de-identifying dates. Because the goal is standardization across studies to the greatest extent possible, date shifting is preferred to reference dates to avoid confusion in the interpretation of Day 0 vs. Day 1 (and therefore all subsequent days) across studies. Additionally, not all studies include the data collected in clinical trials (e.g., a randomization date); using date shifting instead of a reference date allows broad applicability and linkage to different types of studies in the future, including observational and non-randomized studies.
Dates should be shifted by a consistent length of time for each subject: a random integer from 0 to 364 days is subtracted from each true date, thus preserving the intervals between dates. For example, if a subject had three sequential appointments on April 2, April 15, and April 26, the shifted dates (November 16, November 29, and December 10) remain in the same order with the same intervals between appointments. For dates where only a month and year are available, the day of the month should be imputed to the 15th for date-shifting purposes only. After a date using the 15th of the month is created, the same date-shifting method outlined above should be employed; the dummy day of month should then be returned to missing status, and only the shifted month and year should be uploaded as the actual date. If only a year is available, no date shifting should occur, and day and month should be marked as missing.
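The per-subject date-shifting scheme above can be sketched as follows; the function names and the fixed example offset are illustrative, not part of any BDC tooling:

```python
import random
from datetime import date, timedelta

def draw_offset(rng: random.Random) -> timedelta:
    """Draw ONE random offset (0-364 days) per subject; reuse it for all of
    that subject's dates so the intervals between dates are preserved."""
    return timedelta(days=rng.randint(0, 364))

def shift_full_date(d: date, offset: timedelta) -> date:
    """DD-MON-YEAR: subtract the subject's fixed offset from the true date."""
    return d - offset

def shift_month_year(year: int, month: int, offset: timedelta):
    """**-MON-YEAR: impute day 15, shift, then return the dummy day to
    missing, keeping only the shifted month and year."""
    shifted = date(year, month, 15) - offset
    return shifted.year, shifted.month

# One subject, one offset: intervals between visits are unchanged.
offset = timedelta(days=47)  # in practice: draw_offset(random.Random())
visits = [date(2020, 4, 2), date(2020, 4, 15), date(2020, 4, 26)]
shifted = [shift_full_date(v, offset) for v in visits]
assert (shifted[1] - shifted[0]).days == 13 and (shifted[2] - shifted[1]).days == 11
```

A year-only date is left unshifted, and a partial date with no year is marked entirely missing, per the guidance above.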
Follow the above guidance as illustrated in the examples below (`*` indicates missing):

| Date format | Handling |
| --- | --- |
| `DD-MON-YEAR` | Date shift as described above |
| `**-MON-YEAR` | Impute DD as 15, date shift, return dummy DD to missing |
| `**-***-YEAR` | Do not date shift, retain YEAR as is |
| `DD-***-YEAR` | Remove DD, do not date shift, retain YEAR as is |
| `**-MON-****` | Mark entire date as missing |
| `DD-***-****` | Mark entire date as missing |
| `DD-MON-****` | Mark entire date as missing |

Study personnel should review data to ensure no identifying data is included.

For individuals aged 90 or above, ages should be aggregated into a single age grouping (“≥90”).

De-identification QC by BDC covers the PHI/PII elements below, along with how each was resolved:

| PHI/PII element | Resolution |
| --- | --- |
| Device identifiers and serial numbers | Data not included or data are masked using a method that renders it unreadable or unlinkable to original values |
| Email addresses | Data not included or data are masked using a method that renders it unreadable |
| Web Universal Resource Locators (URLs) | Data not included or data are masked using a method that renders it unreadable |
| Social security numbers | Data not included or data are masked using a method that renders it unreadable |
| Internet Protocol (IP) addresses | Data not included or data are masked using a method that renders it unreadable |
| Medical record numbers | Data not included or data are masked using a method that renders it unreadable or unlinkable to original values |
| Biometric identifiers, including finger and voice prints | Data not included or data are masked using a method that renders it unreadable |
| Health plan beneficiary numbers | Data not included or data are masked using a method that renders it unreadable or unlinkable to original values |
| Full-face photographs and any comparable images | Data not included or data are masked using a method that renders it unreadable |
| Account numbers | Data not included or data are masked using a method that renders it unreadable or unlinkable to original values |
| Any other unique identifying number, characteristic, or code | Data not included or data are masked using a method that renders it unreadable or unlinkable to original values |
| Certificate/license numbers | Data not included or data are masked using a method that renders it unreadable or unlinkable to original values |
| Names of patients, relatives, employers or household members | Data not included or data are masked using a method that renders it unreadable |
| All geographic subdivisions smaller than a state | Data not included or data are masked using a method that renders it unreadable |
| All elements of dates (except year) for dates that are directly related to an individual | Ages reported in 10-year bins (40-49, 50-59, 60-69, 70-79, 80-89, ≥90); dates of events, such as treatment, visit, birth, and death, given as year only |
| Telephone numbers | Data not included or data are masked using a method that renders it unreadable |
| Vehicle identifiers and serial numbers, including license plate numbers | Data not included or data are masked using a method that renders it unreadable |
| Fax numbers | Data not included or data are masked using a method that renders it unreadable |
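The 10-year age bins with a single top-coded group at 90 and above, as used in the checklist above, could be implemented along these lines (a sketch; the exact bin labels are the study team's choice):

```python
def age_bin(age: int) -> str:
    """Map an exact age to a 10-year bin; ages 90 and above collapse into a
    single group so no age over 89 is reported directly."""
    if age >= 90:
        return "≥90"
    lo = (age // 10) * 10
    return f"{lo}-{lo + 9}"  # e.g., 47 -> "40-49"

assert age_bin(47) == "40-49"
assert age_bin(93) == "≥90"
```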
If you have already prepared your data (see Step 3), you may complete Step 1 (Intent to Submit) and then simultaneously work on dbGaP Study Registration (see Step 2) and begin the Data Submission process (see Step 4).
If you have not yet prepared your data, you may complete Step 1 (Intent to Submit) and then simultaneously work on dbGaP Study Registration (see Step 2) and Data Preparation (see Step 3).
This step has two data submitter action items, and the first is different for NHLBI intramural investigators than for extramural investigators.
➡️ Data Submitter Action Item 1: for NHLBI Intramural Investigators Email NHLBIDIRBDCSubmission@mail.nih.gov for submission information.
➡️ Data Submitter Action Item 1: for Extramural Investigators Use the following email template, complete it with information specific to your study, and send it to bdcatalystdatasharing@nih.gov.
Email template:
To: bdcatalystdatasharing@nih.gov Subject: BioData Catalyst Data Submission [Grant Number / Award Number]
Study Name
Institution Name
PI Name
Grant Number/Award Number/ZIA Number
Expected date for data upload/submission
Does this submission include genomics data?
✅ Result
After sending the email, you will receive an automated response with the following documents to use in Step 2.
Institutional Certification Form
Data Submission Information Sheet
Guidance document for registration of data in dbGaP
➡️ Data Submitter Action Item 2: Complete the Institutional Certification and Data Submission Information Sheet (see results from Step 1, Action 1), and email them to nhlbigeneticdata@nhlbi.nih.gov.
✅ Results
You will receive a response from the Genomic Program Administrator (GPA) confirming receipt of your email.
Extramural submitters will also receive a response from the BioData Catalyst Data Management Core (DMC) (nhlbi.dmc.concierge@rti.org) to provide further assistance or answer data submission-related questions.
Step 1 Related Links
All research data shared with BDC must be registered through dbGaP, though the controlled and non-controlled access processes may differ. The DMC will contact you and provide specific guidance in such cases. Study registration has two parts but only one action for data submitters.
🔵 The GPA completes the first part of dbGaP study registration and, as a result, generates your study accession number. The GPA does this by entering information from your Institutional Certification and Data Submission Information Sheet into the dbGaP Submission System. If needed, the GPA may contact you for additional information or clarification, or to request a data sharing plan and data use agreement.
✅ Results
The GPA will share the accession number and the consent group information with the DMC to create Data Submission Infrastructure for your study.
You will receive an automated email from dbGaP to complete Study Submission (see screenshots of the dbGaP email below).
➡️ Data Submitter Action Item 1: After receiving the automated email from dbGaP, complete the dbGaP submission process using guidance available in the dbGaP Study Configuration Process for Submission of Data to BDC (see a screenshot of the dbGaP Study Submission portal below). Study Configuration consists of a web form that collects a description of the study data, methods, and findings; inclusion/exclusion criteria; study history; references; attributions; and terms that will be indexed to enable users to search for your study in dbGaP Advanced Search.
Note: Gather all information ahead of the web form entry, as the current form does not have a “save” button for partial entry. Click here to download the example files for dbGaP submission.
✅ Results
Once you finish your study configuration, dbGaP will curate your submission and may contact you for questions. Once dbGaP completes its curation process, you will receive an email from dbGaP to approve and complete your study registration.
Note: While waiting for dbGaP curation, please proceed with data submission to BDC (steps 3 and 4 below) to reduce the time to ingest and release the data.
Data preparation can happen before, during, or after the study registration process and must be completed to submit data to BDC. This step has one action item for all data submitters and a second action item for submitters of omics and phenotypic data types.
➡️ Data Submitter Action Item 1: Prepare supplemental documentation to accompany the data submission (“data package”) according to the Instructions for Preparing Clinical Research Study Datasets for Submission to the NHLBI, including:
Protocols
Survey Instruments
Data/Metadata model, if applicable
Datasets Readme*
Specify data file name and variable name for “subject ID” and “age”
Dataset organization: if the datasets are organized in multiple sub-folders, include a Readme file that describes the relationship of the sub-folders, i.e., whether they are independent (e.g., multiple phases or visits), main studies with ancillary studies, or overlapping (e.g., /raw data and /harmonized data, where the /harmonized data is a subset of the /raw data).
Additional Supplemental documentation to reproduce study results
* Supported documentation formats for data dictionaries and models are .csv, tab-delimited, XML, JSON, and other machine-readable formats. PDF and SAS file formats are not machine-readable and are discouraged. File names should not include spaces or special characters.
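The file-naming rule above (no spaces or special characters) could be checked with a small helper like this sketch; the regex and example names are illustrative:

```python
import re

# Allow letters, digits, underscore, hyphen, and dot-separated extension parts.
VALID_NAME = re.compile(r"^[A-Za-z0-9_\-]+(\.[A-Za-z0-9]+)*$")

def valid_filename(name: str) -> bool:
    """True if the file name contains no spaces or special characters."""
    return bool(VALID_NAME.match(name))

assert valid_filename("study_visit1_data.csv")
assert not valid_filename("study visit 1.csv")  # space
assert not valid_filename("data(final).csv")    # parentheses
```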
➡️ Data Submitter Action Item 2: Only for Omics and Phenotypic data types, prepare the data files per the dbGaP Study Submission Guidance.
Data submission has two action items for data submitters. This process can happen in parallel with Data Submitter Action Item 1 from Step 2. The data submission process begins by filling out the BDC contact form: https://biodatacatalyst.nhlbi.nih.gov/contact.
➡️ Data Submitter Action Item 1: Request bucket creation by filling out the BDC contact form using the following information:
Your institutional email address used for NIH eRA Commons
Subject: Data Submission
Type: Data Submission (select in the dropdown menu)
In the body of the message, 1) include your dbGaP PHS accession number and 2) request access for read/write permission to the assigned cloud bucket
In the rare case that your institution cannot access cloud services hosted by Google or Amazon, request assistance for direct data upload from your data package location (e.g., SFTP transfer)
Data upload may not begin until your data is prepared (see Step 3: Data Preparation) and you receive an invitation from dbGaP to complete your study submission and configuration (see the Results section in Step 2).
➡️ Data Submitter Action Item 2: Access the cloud bucket created for your study. You will receive a secure email from the Information Technology Applications Center (ITAC) team at NHLBI that provides the URL to activate the access with user ID and password (see screenshot below):
Follow the links and instructions in the email to activate the Amazon Web Service (AWS) S3 web interface.
If you have any questions or issues about accessing the buckets, please contact nhlbi.dmc.concierge@rti.org
➡️ Data Submitter Action Item 3: Upload datasets to the cloud bucket created for your study. Once you have access, upload the datasets for each consent group to the corresponding bucket (e.g., xxxx-c1) as described in the dbGaP 2b file.
Once you have selected the bucket for a consent group, use the “Upload” button to upload data files.
If you choose to use the GCP platform, see screenshot below (“upload” highlighted)
Once your data package is uploaded successfully, the data go through quality checks before ingestion and release. If issues are found, the DMC will contact you and assist in resolving the issues before ingestion and release. There are three data submitter action items associated with this step.
➡️ Data Submitter Action Item 1: If the DMC contacts you about QC issues with the uploaded data, respond to their inquiries to resolve the issues.
➡️ Data Submitter Action Item 2: If requested by the DMC, resubmit the data package after all issues are resolved.
After the data clears the data quality checks, the ingestion and release process can take as little as 4-6 weeks. After the data is released, the DMC will notify you that your study is available for use by authorized individuals in BDC (study inventory).
➡️ Data Submitter Action Item 3: You are encouraged to log in and view your study data in BDC.
Contact the BioData Catalyst Data Management Core (DMC) via https://biodatacatalyst.nhlbi.nih.gov/contact and select “Data Submission” in the Type field.
Why do I need to submit to dbGaP when I don’t have genetic data?
Answer: Historically, the BDC registration and ingestion mechanisms were developed to support omics data for TOPMed. As BDC evolved to include clinical and other non-genomic data types, dbGaP registration continued to be the mechanism for registering studies and authorizing controlled data access. In collaboration with dbGaP, BDC established a registration process that allows us to continue leveraging the existing data access management and request mechanisms, but with the data being submitted to BDC rather than to dbGaP.
Do I need to upload my data to both dbGaP and BDC?
Answer: As stated in the , study “data files” are uploaded to BDC only (step 4), and “study level metadata and subject consent files” are uploaded to dbGaP (step 2).
Where is the dbGaP submission link?
Answer: The Submission Portal (SP) link is .
During dbGaP submission, “Sample Attributes” (6a/6b) is a required field, but I don’t have samples.
Answer: If your study doesn’t have samples, please use a dummy blank file.
Will the ‘record IDs’ from datasets be masked/transformed by the dbGaP team before publishing?
Answer: Each subject should be submitted with a single, unique, de-identified subject ID. dbGaP will assume the ‘record IDs’ are de-identified and will not mask or transform them before publishing. If the study dataset is collected from a cohort that has an existing dbGaP study (parent study or sub-studies), dbGaP will check that the IDs are the same, or will need a linkage file to link the IDs.
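As one illustration of assigning a single, unique, de-identified subject ID while keeping a private linkage file, the following sketch uses sequential IDs (the `SUBJ` prefix, ID width, and CSV layout are hypothetical, not a dbGaP requirement):

```python
import csv
import io

def assign_deid_ids(original_ids, prefix="SUBJ"):
    """Assign one unique de-identified ID per subject; duplicates of the
    same original ID collapse to a single de-identified ID."""
    mapping = {}
    for i, oid in enumerate(sorted(set(original_ids)), start=1):
        mapping[oid] = f"{prefix}{i:06d}"
    return mapping

def linkage_csv(mapping):
    """Render the private linkage file (original -> de-identified ID) as CSV.
    This file stays with the study team and is never part of the data upload."""
    buf = io.StringIO()
    w = csv.writer(buf)
    w.writerow(["original_id", "deid_id"])
    for oid, did in sorted(mapping.items()):
        w.writerow([oid, did])
    return buf.getvalue()

m = assign_deid_ids(["MRN-553", "MRN-102", "MRN-553"])
assert m == {"MRN-102": "SUBJ000001", "MRN-553": "SUBJ000002"}
```

In practice a randomized assignment may be preferable, since sequential IDs issued in sorted order can leak the ordering of the original identifiers.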
Is the data we uploaded to BioData Catalyst available now for researchers to request? We have a manuscript using the data which was recently accepted for publication, and we wanted to include a link in our manuscript to the dataset.
Answer: The uploaded data will go through the BDC ingestion process, which takes as little as 4-6 weeks before release.
What are the data types that are currently acceptable to BDC?
Answer: BioData Catalyst can ingest data of all types and sizes, including but not limited to genomic, proteomic, clinical/phenotypic, and imaging data.
How does BDC want the dates represented in the datafiles? Can we date shift or should they be converted to days from an index date?
Answer: BDC will accept any de-identification method for dates, as long as the method is documented so that others can understand and reuse the data. Please reference the .
How do we start the data upload? (steps)
Answer: Please reference the document for the information.
What data types does BioData Catalyst accept? Is it a suitable repository for my proposed multiple data types (including imaging) to include in my DMS Plan?
Answer: BioData Catalyst (BDC) is flexible to accept most data types related to human data, including imaging data. For non-human data, many NIH-supported generalist repositories are available as alternatives. You should work with your program officials to determine the best repository for your data.
I’m preparing a data sharing plan for an upcoming NHLBI proposal submission. Does data submission and storage have any associated costs or fees that I need to budget for?
Answer: BDC is envisioned as the central repository for NHLBI-supported studies, so for your DMS plan it would be good to reference BDC as the repository for your data. NHLBI covers the cost of storing data submitted to BDC for broad research reuse (“hosted data”). If you plan to use BDC to analyze or prepare your data for sharing, you will incur computational costs, which vary depending on the size of your data and the scale of your compute. More information on cloud costs is available at:
Describe the de-identification methods, see example file *
Does this submission include biospecimen?
Does this submission include imaging data?