'Big Data' – Challenge or Chance for Lab Automation?
Lab automation enables efficient handling of large numbers of samples. The challenge, however, lies in making good use of the resulting ‘big data’. Current initiatives therefore aim for standardization - but can that be achieved? Discover the chances and challenges for laboratories in the digital age...
Time and cost pressure, a heterogeneous hardware landscape, rapidly expanding data volumes and a diversity of data formats depict the typical IT operating environment in automated lab workflows. Owing to the highly complex, lab-specific tasks involved, a single uniform long-term standard for pharmaceutical, biotechnology or clinical diagnostics labs is unlikely to emerge in the foreseeable future. Hence, there is a need for scalable systems, integrative platforms and standardized interfaces.
Substance libraries made up of potential active ingredient molecules for drug development currently contain over a hundred thousand compounds. Lab robots build these libraries based on a pre-defined set of synthesis rules. With the aid of lab automation systems, lab teams use high-throughput screening to process in the region of 10,000 samples per day.
The Advent of Lab Automation
Ultra high-throughput screening was developed at the beginning of the 1990s by Evotec in collaboration with international pharmaceutical producers such as Novartis and SmithKline Beecham. Using this technique, more than 100,000 samples can be processed on a daily basis in active ingredient research for the development of new pharmaceutical products. Each microtitration plate can contain up to 3,456 wells to facilitate efficient handling and archiving by automated systems.
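A rough back-of-the-envelope calculation puts these figures in perspective. The sample throughput and well count come from the text above; the helper function itself is purely illustrative:

```python
import math

def plates_needed(samples_per_day: int, wells_per_plate: int) -> int:
    """Minimum number of microtitration plates to hold one day's samples."""
    return math.ceil(samples_per_day / wells_per_plate)

# Ultra high-throughput screening: 100,000 samples/day on 3,456-well plates
print(plates_needed(100_000, 3_456))  # -> 29
```

In other words, a dense 3,456-well format compresses an entire day of ultra high-throughput screening into roughly 29 plates, which is what makes automated handling and archiving tractable.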
Laboratory information management systems (LIMS) have been in use since the 1980s to integrate lab workflows into the IT landscape, but higher levels of automation greatly increase the complexity of the demands placed on lab IT. This is particularly the case when automated workflows extend beyond PC- and microcontroller-based control of a single piece of equipment. Attention is currently focused primarily on process and data management as well as holistic lab management.
The Big Data Challenge
The term ‘big data’ describes the enormous amounts of data generated during automated processing; in the world of medicine and research, data volumes routinely extend well into the terabyte and petabyte range. Worldwide, data volumes are expected to double roughly every two years. The enormous growth in data resources for genome sequencing is one example from the life sciences.
The abundance of data also creates huge potential for the development of new drugs and life-sustaining medical analysis. Companies including Boehringer Ingelheim, CHDI, Evotec, Genentech, MedImmune/AstraZeneca, Ono Pharmaceutical and UCB have forged research alliances to develop new approaches to treating Alzheimer's disease, diabetes and cancer, and for palliative medicine.