BioBright Solution for Lab Data Automation

Fully automated solutions to acquire, secure and analyze laboratory instrument and CRO data

Automated lab data solutions to support data-centric research

In many labs, manual data processes are common. They are invariably time-consuming and reduce lab efficiency. Manual data wrangling also invites error, causing problems when the data is later used for AI/ML modeling and decision support.

The Dotmatics BioBright solution allows for full end-to-end data automation.

  • Files produced by in-house instruments, or generated at CROs, are automatically captured and uploaded to a secure cloud-based data vault (see the sketch after this list). Your cloud, our cloud, your choice.
  • The data and metadata are liberated from their original, proprietary source so that automated QA/QC checks can be performed
  • The data can then be used to power dashboards and analytics, providing operational and scientific insights
  • Machine learning enhances data analysis and equipment monitoring
  • Data can be sent via an open API into ELN experiments or other applications
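
As a rough illustration of the capture-and-upload step referenced in the first bullet, the sketch below watches an instrument PC's output folder and pushes each new file to cloud object storage. It uses the open-source watchdog and boto3 libraries; the watch folder, bucket name and key prefix are placeholders, and the actual Darwin Sync client is a packaged product, so you would not write code like this yourself.

```python
# Minimal sketch of automated instrument-file capture and cloud upload.
# Assumes the open-source `watchdog` and `boto3` packages; the watch folder,
# bucket name and key prefix are hypothetical, not BioBright defaults.
import time
from pathlib import Path

import boto3
from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

WATCH_DIR = Path("C:/instrument_output")   # folder the instrument writes to
BUCKET = "example-lab-data-vault"          # placeholder cloud bucket

s3 = boto3.client("s3")

class UploadNewFiles(FileSystemEventHandler):
    def on_created(self, event):
        if event.is_directory:
            return
        # A production client would also wait for the instrument to finish
        # writing the file and verify a checksum after upload.
        path = Path(event.src_path)
        key = f"raw/{path.name}"
        s3.upload_file(str(path), BUCKET, key)   # HTTPS protects data in transit
        print(f"uploaded {path.name} -> s3://{BUCKET}/{key}")

observer = Observer()
observer.schedule(UploadNewFiles(), str(WATCH_DIR), recursive=True)
observer.start()
try:
    while True:
        time.sleep(5)
finally:
    observer.stop()
    observer.join()
```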

With the solution in place, scientists are freed from mundane, error-prone data processing. Instead, they can concentrate on data analysis and the critical decision making that helps bring new therapies and materials to market.

Your Challenges

Can You...

Easily capture and secure your instrument files?

Or do your scientists lose hours a week collecting files from instruments, manually moving files around the network, and storing and organizing files before they can even start to analyze them?

Securely transfer files from CROs?

Or do your scientists exchange valuable results via email and SharePoint?

Do they spend hours finding and organizing CRO reports and understanding the results from irregular formats?

How much time is lost in your research cycle waiting for data to be sent or synchronized, or for results to be received?

Automatically extract, QA and process results?

How much valuable time do your scientists spend extracting the data and metadata from a wide variety of proprietary instrument output types?

Is your team forced to maintain an ever-growing and shifting ecosystem of proprietary formats? Are analysis pipelines breaking all the time?

Are errors being introduced on the data’s journey from files to data/metadata to analysis and experimental results, only to be detected late in the process, invalidating experiments?

Scale to keep up with output from your instruments?

High throughput labs can generate terabytes of files per day.

Can your scientists and their existing systems keep up, or are your processes and storage solutions becoming overwhelmed with the sheer volume of files and data?

Be sure your data is secure?

Is your data secure end-to-end? Are the files encrypted both in transit from the instrument and at rest in the data repository? Is it secured all the way to the visualization or analytics layers?

Who is responsible for securing the data? Is that overhead being delegated to the scientists? Who is responsible for keeping up with state-of-the-art protections and stewardship of your data?

Proactively monitor equipment trends?

Can you see trends in equipment performance in real-time across extended time periods (e.g. can you spot drift in a mass spectrometer)?

Are manual data capture processes and analytics too slow or siloed to allow proactive management, costing you time and money when action is delayed?

Quickly develop new dashboards and analytics for new science?

Can you serve new views of scientific data and high volume instrument outputs to your researchers, as fast as they develop and adopt new methods?

Are your scientists dependent on manually processing data into Excel at an unsustainable scale? Are they hindered in fully exploiting their scientific advances by a lack of relevant, labeled data for analytics?

Make fully informed decisions?

Can you see raw instrument data, experimental protocols and results all in one place?

Is it difficult to drill down from an interesting or questionable result back to the raw data and instrument outputs?

Capabilities

With BioBright you can

  • Acquire and synchronize hundreds of terabytes of files and equipment logs
  • Upload data from instrument PCs to secure storage
  • Acquire instrument data directly from instrument software systems (e.g. Empower, Unicorn)
  • Enable CROs to transmit results and reports in real-time
  • Securely store files and data in a cloud hosted data lake
  • Search and browse the document repository

  • Extract metadata from more than 200 file formats, revealing hidden equipment settings, reagents used, timed steps, image metadata and more
  • Automate data QA/QC workflows (a rule-based pass is sketched below)
  • Access equipment error logging and operational analytics in web dashboards
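
Purely to make the extraction and QC bullets concrete, here is a minimal, hedged sketch of a rule-based QC pass over extracted run metadata. The field names and thresholds are invented for illustration and are not BioBright's schema.

```python
# Illustrative rule-based QC pass over metadata extracted from a run.
# Field names and thresholds are hypothetical examples only.
from typing import Callable

QC_RULES: dict[str, Callable[[dict], bool]] = {
    "column_temperature_in_range": lambda run: 28.0 <= run["column_temp_c"] <= 32.0,
    "injection_volume_positive":   lambda run: run["injection_volume_ul"] > 0,
    "qc_standard_present":         lambda run: run["qc_standard_id"] is not None,
}

def run_qc(run_metadata: dict) -> list[str]:
    """Return the names of any QC rules the run fails."""
    return [name for name, rule in QC_RULES.items() if not rule(run_metadata)]

example_run = {"column_temp_c": 35.2, "injection_volume_ul": 10.0, "qc_standard_id": "STD-042"}
print("QC failures:", run_qc(example_run))   # -> ['column_temperature_in_range']
```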

  • Leverage full API connectivity for both search and data delivery to informatics applications (e.g. ELN, assay data management or statistical analysis/visualization packages); see the delivery sketch below
  • Rapidly develop lightweight cloud applications and dashboards
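
To show what API-first delivery can look like in practice, the hedged sketch below pushes a parsed result set to a downstream application over REST using the requests library. The endpoint, token handling and payload shape are placeholders; the real contract is defined by the BioBright API and the receiving application.

```python
# Hypothetical example of delivering parsed results to a downstream
# application (e.g. an ELN) over REST. The URL, token and payload schema
# are placeholders, not the actual BioBright or ELN API.
import requests

API_URL = "https://eln.example.com/api/v1/experiments/EXP-123/results"
TOKEN = "..."   # in practice, fetch from a secrets manager; never hard-code

payload = {
    "instrument_run_id": "RUN-2024-0042",
    "assay": "LCMS_purity",
    "results": [{"sample_id": "S-001", "purity_pct": 98.4}],
}

resp = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print("delivered with status", resp.status_code)
```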

  • Secure end-to-end encryption for files in transit and at rest (illustrated below)
  • Detect anomalous behavior by monitoring instrument computers and networks
  • Benefit from security developed with DARPA and verified by Sandia National Laboratories
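
The bullets above describe the security posture at a high level. As a generic illustration of the principle of encrypting data before it leaves the instrument PC, here is a sketch using the open-source cryptography package; this is not BioBright's DARPA-developed implementation, and key management in particular would be handled by the platform rather than in local code.

```python
# Generic illustration of client-side encryption before upload, using the
# open-source `cryptography` package. NOT BioBright's implementation; the
# file name is a placeholder and real key management is handled centrally.
from cryptography.fernet import Fernet

key = Fernet.generate_key()             # in practice: issued and rotated by the platform
cipher = Fernet(key)

with open("run_0042.raw", "rb") as f:
    ciphertext = cipher.encrypt(f.read())

with open("run_0042.raw.enc", "wb") as f:
    f.write(ciphertext)                 # only the encrypted copy leaves the PC

plaintext = cipher.decrypt(ciphertext)  # decryption on the authorized side
```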

  • Operational and analytical dashboards fed by real-time data and analytics
  • Scale to massive data volumes that are not feasible to analyze or visualize with manual methods
  • Automatically and continuously monitor instruments, allowing proactive intervention to maximize productivity and minimize the need to repeat runs
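
One simple way to surface the kind of drift mentioned above is a control chart over a recurring QC standard. The sketch below applies a 3-sigma rule to illustrative response values; the data, baseline window and threshold are assumptions, not a description of BioBright's monitoring analytics.

```python
# Control-chart style drift check over a recurring QC standard's response
# (e.g. internal-standard peak area per run). Data and thresholds are illustrative.
import numpy as np

responses = np.array([1.02, 0.99, 1.01, 0.98, 1.00, 0.97, 0.95, 0.93, 0.92, 0.90])

baseline = responses[:5]                      # early runs define the baseline
mean, std = baseline.mean(), baseline.std(ddof=1)
z_scores = (responses - mean) / std

for i, (value, z) in enumerate(zip(responses, z_scores)):
    flag = "DRIFT" if abs(z) > 3 else "ok"    # classic 3-sigma rule
    print(f"run {i:02d}: response={value:.2f}  z={z:+.1f}  {flag}")
```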

  • Quickly build new dashboards or lightweight cloud analytics applications to keep pace with the demands of new science and lab methods, using tools your data team already knows
  • Integrate 3rd party analytics into seamless automated workflows to derive results from the raw or parsed instrument data
  • Generate additional insights by automatically applying AI/ML analytics to the raw data
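
As one concrete, hedged example of applying off-the-shelf machine learning to per-run data, the sketch below uses scikit-learn's IsolationForest to flag an anomalous run. The features and data are invented, and IsolationForest is chosen only for illustration; it is not a statement about which models BioBright applies.

```python
# Illustrative anomaly detection over per-run feature vectors using
# scikit-learn's IsolationForest. Features and data are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# 200 historical runs described by two features, e.g. signal level and noise ratio
normal_runs = rng.normal(loc=[100.0, 0.5], scale=[5.0, 0.05], size=(200, 2))
odd_run = np.array([[60.0, 0.9]])            # far outside the historical envelope

model = IsolationForest(random_state=0).fit(normal_runs)
print(model.predict(odd_run))                # -> [-1]: flagged as anomalous
print(model.predict(normal_runs[:3]))        # -> mostly [1 1 1]: normal
```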

  • Raw data, experiment protocols and derived results are organized as a single logical collection
  • Simple data exploration: easy to find results and then drill down to the raw data and instrument files
  • Organize data and results generated with built-in or 3rd party analytics
  • The BioBright solution can automatically compile data packages using business rules, experiment metadata and workflow presets
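
As a toy illustration of rule-driven package assembly, the sketch below groups captured files by an experiment ID found in their extracted metadata and zips each group. The directory layout, metadata fields and package format are hypothetical.

```python
# Toy illustration of compiling data packages by grouping files on a shared
# experiment ID from extracted metadata. Paths, fields and layout are hypothetical.
import json
import zipfile
from pathlib import Path

DATA_DIR = Path("parsed_runs")               # one JSON metadata file per captured run

packages: dict[str, list[Path]] = {}
for meta_file in DATA_DIR.glob("*.json"):
    meta = json.loads(meta_file.read_text())
    packages.setdefault(meta["experiment_id"], []).append(meta_file)

for experiment_id, files in packages.items():
    with zipfile.ZipFile(f"{experiment_id}_package.zip", "w") as zf:
        for f in files:
            zf.write(f, arcname=f.name)
    print(f"{experiment_id}: packaged {len(files)} file(s)")
```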

Outcomes

Customer Success Stories

A mass spectrometry core facility at a large pharma company implemented the BioBright solution to capture and process the 700 GB to 1 TB of files it was producing per week. This replaced a process that required scientists to manually load their data into a data store and then manually analyze it, taking each scientist 2 to 3 days per week.

With the BioBright solution

  • Data collection was sped up 63-fold
  • Each scientist recovered two-thirds of the time formerly spent on manual processing
  • Data was automatically QC’d, with dashboards ready for review within seconds of each analysis completing
  • Operational analysis and dashboards allowed instrument drift to be detected and enabled scientists to develop novel system suitability metrics for high-value samples

 

A high content screening lab at a leading biotech had a legacy HCS data processing infrastructure that was slow, lossy and manual

  • Slow, as it relied on time-consuming, periodic scripted uploads of raw results files to an enterprise file management system, and from there to image analysis software
  • Lossy, because users could only transfer final results, and the native instrument software struggled to keep up with the data volume, crashing often and leading to data loss
  • Manual, as management of the image analysis server required manual data purging and manual transfer of aging data

The legacy infrastructure was replaced with the fast, lossless and insightful Dotmatics BioBright solution

  • Fast, as data transfer from the instruments to the BioBright cloud and to/from the image analysis software is fully automated and happens in real-time
  • Lossless, because the raw data, protocols AND the full results are captured as a single collection in the BioBright cloud repository as soon as they’re created. If a run crashes partway, the BioBright platform has already synced the partial data
  • Insightful, as real-time dashboards enable equipment monitoring and decision making

A large pharma’s bioreactor scale-up facility used manual collection methods to gather data from its reactors. Researchers used Excel to aggregate data across the various reactor sizes to make decisions about reactor set-up during the scale-up process. It took 2 to 3 days to gather all the data, analyze it and make decisions.

With the BioBright solution, the aggregated data became available fast enough to allow previously impossible analytics. For example, an AI model of dissolved oxygen (DO) crashes was applied to the readily available data and predicted the likelihood of a crash up to two hours in advance, giving operators time to adjust the bioreactor and mitigate the potential problem.
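
The story does not describe the model itself, so purely to make the idea concrete, here is a hedged sketch of the general shape of such an early-warning predictor: rolling-window features from dissolved-oxygen readings feeding a classifier trained on historical crash labels. The features, synthetic data and choice of logistic regression are all assumptions for illustration.

```python
# Sketch of the general shape of a DO-crash early-warning model: windowed
# sensor features feeding a classifier trained on historical labels
# ("did a crash follow within two hours?"). Features, data and algorithm
# are illustrative; the actual model used in the story is not described here.
import numpy as np
from sklearn.linear_model import LogisticRegression

def window_features(do_trace: np.ndarray) -> np.ndarray:
    """Mean, slope and variability of one window of DO readings."""
    x = np.arange(len(do_trace))
    slope = np.polyfit(x, do_trace, 1)[0]
    return np.array([do_trace.mean(), slope, do_trace.std()])

rng = np.random.default_rng(1)
stable  = [window_features(40 + rng.normal(0, 0.5, 60)) for _ in range(50)]
failing = [window_features(40 - 0.2 * np.arange(60) + rng.normal(0, 0.5, 60))
           for _ in range(50)]
X = np.vstack(stable + failing)
y = np.array([0] * 50 + [1] * 50)            # 1 = crash followed this window

clf = LogisticRegression().fit(X, y)

live_window = 40 - 0.15 * np.arange(60) + rng.normal(0, 0.5, 60)
risk = clf.predict_proba([window_features(live_window)])[0, 1]
print(f"crash risk for the next window: {risk:.0%}")   # early enough to intervene
```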

Products

This solution consists of

  • Darwin Sync client installs on PCs connected to instruments, in the cloud or at CROs
  • Encrypts files and securely uploads to the cloud
  • Direct API connections for instrument software platforms (e.g. Empower, Unicorn)
  • Stores files in a secure data lake on the cloud
  • Search and browse for files
  • Parse and extract data and metadata from file and non-file based systems
  • Define and apply QA/QC workflows to the data
  • Build lightweight dashboards and visualization apps for operational insights (see the sketch after this list)
  • Rapidly integrate data into other applications via BioBright’s API-first approach (e.g. ELN, image analysis, assay data management, …)
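
The dashboard bullet above is deliberately generic; as one hedged example of how quickly a lightweight operational view can be stood up against synced data, here is a sketch using the open-source Streamlit library. The file name and columns are hypothetical, and this illustrates the pattern rather than describing BioBright's own dashboard tooling.

```python
# Minimal lightweight-dashboard sketch using the open-source Streamlit library
# as a stand-in. The CSV name and columns are hypothetical; this illustrates the
# pattern, not BioBright's own dashboard tooling.
# Run with:  streamlit run instrument_dashboard.py
import pandas as pd
import streamlit as st

runs = pd.read_csv("instrument_runs.csv", parse_dates=["completed_at"])

st.title("Instrument throughput and QC pass rate")
today = pd.Timestamp.today().date()
st.metric("Runs completed today", int((runs["completed_at"].dt.date == today).sum()))
st.line_chart(runs.set_index("completed_at")["qc_pass_rate"])
```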

Schedule a demo

We continuously add support for new formats.

Do you have a particular data format you need for your workflows?
Reach out to us!

We will analyze the file, show you the data that can be extracted, and explain how you can leverage it with Darwin Sync.