While artificial intelligence (AI) is transforming healthcare, Data2Life focuses its unique technology on the vast amount of information generated by the various stakeholders (researchers, healthcare providers, and patients), applying its machine learning platform to the most difficult data in healthcare (unstructured data) and harmonizing it into a set of insights that become real world evidence. This increased visibility helps focus research and clinical trials, supports drug development and repositioning, personalizes treatments, improves the evaluation of risk exposure, and saves on IT costs.

The Necessity of a Multi-Dimensional Real World Evidence Perspective

Data2Life changes the paradigm of data-driven evidence by adding the ‘Patient Voice’ side by side with traditional datasets. This approach is one of Data2Life’s main differentiators in the landscape of Real World Evidence analytics. The company’s fundamental ability to distill and analyze patient-generated data from various sources and integrate it into traditional data sets is key. Data2Life has invested heavily in AI-driven analytics, focusing on proprietary deep learning algorithms to form a multi-dimensional healthcare intelligence platform, unique in its scalable approach to medical data sets, whether structured (medical codes) or unstructured (free text).

Data2Life’s data path:

Data Crawling – Data2Life’s data-engineering stage. It coordinates and executes the crawling of Data2Life’s multi-dimensional healthcare data, composed of clinical electronic health records (EHR), hundreds of millions of patient-generated social media posts, regulatory data, and medical literature.
Data Lake Clustering – At this stage, the engineered data is aggregated in a big-data cluster where raw copies of all ingested data are stored and metadata is populated for future data enhancement and analytics.

NLP Pipeline – An automated set of artificial intelligence components that cleanses, normalizes, analyzes, enhances, classifies, and transforms the raw data sources found in the Data Lake into processed data to be used by the Data2Life products.

Data Mart – A structured database used for fast, common, repeatable queries against the pre-processed dataset. On top of this, Data2Life’s Data Mart also features a set of publications spanning over 100 medical and drug-based ontologies, which allows the normalization of data from different data sources.

For pharmas, adopting a patient-centric approach through a portfolio of products or reports is a medical necessity. To best support this need, we at Data2Life embrace the common data-scientist saying: “you’re only as good as your data”! In other words, if your dataset is scientifically authoritative, high-quality, substantial in quantity, and freshly updated, then your insights will be accurate and relevant. Conversely, if your data is poor in any of these dimensions, the quality of the outcome will drop dramatically. To make sure our data is indeed at its best, we have meticulously processed our clinical data streams (Electronic Medical Records), aggregating and curating around 60 million clinical records. We are also carefully harvesting our social media sources and analyses, as well as our regulatory data providers (FDA, Health Canada, and EMA). As of today, Data2Life has developed a unique capability to seamlessly “stack” these diversified sources into a single aggregated and normalized “uber data-source”. This has allowed us to distill a rich set of marketing, strategy, comparative, and risk-related signal-detection insights.
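The crawl-to-mart data path described above can be sketched as a minimal pipeline. This is an illustrative toy, not Data2Life’s implementation: the `DataLake` class, the two-entry `ONTOLOGY` map, and all field names are hypothetical stand-ins for how raw documents might be stored with metadata, then cleansed, ontology-normalized, and classified into data-mart rows.

```python
# Minimal sketch of the crawl -> data lake -> NLP pipeline -> data mart flow.
# All source names, fields, and the tiny ontology map are hypothetical examples.
import re
from dataclasses import dataclass, field

@dataclass
class DataLake:
    """Stores raw copies of every ingested document plus basic metadata."""
    raw: list = field(default_factory=list)

    def ingest(self, source: str, text: str) -> None:
        # Keep the raw copy untouched; populate metadata for later analytics.
        self.raw.append({"source": source, "text": text,
                         "meta": {"length": len(text)}})

# Hypothetical mini-ontology mapping free-text drug mentions to one normalized code.
ONTOLOGY = {"aspirin": "RXNORM:1191", "acetylsalicylic acid": "RXNORM:1191"}

def nlp_pipeline(doc: dict) -> dict:
    """Cleanse, normalize, and classify one raw document into a data-mart row."""
    text = re.sub(r"\s+", " ", doc["text"]).strip().lower()                    # cleanse
    drugs = sorted({code for term, code in ONTOLOGY.items() if term in text})  # normalize
    kind = "patient_post" if doc["source"] == "social" else "clinical_record"  # classify
    return {"source": doc["source"], "kind": kind, "drug_codes": drugs}

lake = DataLake()
lake.ingest("ehr", "Patient prescribed  Acetylsalicylic Acid 81mg daily.")
lake.ingest("social", "Switched to aspirin last week, headaches are gone!")

data_mart = [nlp_pipeline(doc) for doc in lake.raw]
# Both surface forms normalize to the same ontology code, which is what lets
# different data sources be "stacked" into a single queryable dataset.
```

Note how the ontology step is what makes the stacking possible: two different surface forms of the same drug end up under one code, so downstream queries see a single harmonized source.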
Exclusive Identification and Analysis of the ‘Patient Voice’

Drug comparison today is performed using siloed, structured data sources: reviews of purchased databases that typically require heavy IT support to load, cleanse, update, and make available for analysis. This involves many professionals with a variety of skill sets to review and understand the data, including clinicians, statisticians, epidemiologists, product experts, database administrators, ETL developers, and more. The amount of effort required to get answers to specific questions or business problems is typically an issue for most multi-source data analytics platforms. This effort is IT time-consuming and often does not provide the flexibility and speed most business units demand.

Data2Life’s patient-listening process consists of patient-generated data gathered from online communities, power groups and discussion forums, mobile apps, and monitoring devices, yielding hundreds of millions of patient-generated texts, reports, and posts. Beyond relieving the IT burden, it takes significant experience to separate a genuine patient report from what data scientists usually call “noise”: typically, social media posts that do not meet the qualifying attributes needed to form datasets that are rich, deep, and authoritative. This is a unique advantage that enriches our data assets and allows Data2Life to claim it delivers sound patient-generated insights.

Data-Driven Real World Evidence – Enabled by Scalable Healthcare Technology Solutions

Unlike longitudinal, expensive clinical trials and analysis services, Data2Life’s healthcare technology solutions provide real world insights and evidence about marketed products (drugs and devices). Products are analyzed by their therapeutic alternatives, usage patterns, patient experiences, safety risks in specific populations, and other relevant factors.
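To make the noise-filtering idea concrete, here is a hedged sketch of qualifying a social post as a genuine patient report. The heuristics (first-person voice plus a drug mention plus a reported effect) and the term lists are invented for illustration only; they are not Data2Life’s actual qualifying attributes, which would be far richer.

```python
# Illustrative separation of genuine patient reports from social-media "noise".
# The qualifying heuristics and term lists below are hypothetical examples.
import re

DRUG_TERMS = {"metformin", "ibuprofen"}              # hypothetical watch-list
EFFECT_TERMS = {"nausea", "dizzy", "rash", "relief"}  # hypothetical outcomes

def is_patient_report(post: str) -> bool:
    """A post qualifies only if it is first-person AND names a drug AND an effect."""
    words = set(re.findall(r"[a-z]+", post.lower()))
    first_person = bool(words & {"i", "my", "me"})
    return first_person and bool(words & DRUG_TERMS) and bool(words & EFFECT_TERMS)

posts = [
    "I started metformin and my nausea is awful.",  # genuine first-person report
    "Buy cheap ibuprofen online, best prices!!",    # spam: no voice, no effect
    "New study published on diabetes care.",        # news: no drug, no effect
]
reports = [p for p in posts if is_patient_report(p)]
```

Only the first post survives the filter; the spam and news items are discarded as noise, which is the kind of qualification step that keeps the patient-generated dataset authoritative.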
For various types of scientific users, Data2Life offers slice-and-dice data analysis that enables them to detect patterns and assess various hypotheses: pipeline prioritization for R&D teams, early detection of safety risks, therapeutic opportunities for current molecules, and new indications for existing drugs. Data2Life deploys home-grown data handling and analytics, including NLP (Natural Language Processing) algorithms for physicians’ and patients’ free-text reports, as well as surveillance algorithms and predictive analytics. The product’s analytical process uses machine learning and pattern-recognition algorithms to extract findings and make them accessible to the end business decision maker in the form of reports and data interaction tools.
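The slice-and-dice idea can be illustrated with a toy rollup over pre-processed data-mart rows. The records and dimension names below are made up; the point is only to show how grouping by chosen dimensions surfaces a dominant (drug, event) pair of the kind a signal-detection review would flag.

```python
# Toy slice-and-dice over a pre-processed data mart: count how often each
# (drug, reported event) pair appears. All records here are fabricated.
from collections import Counter

data_mart = [
    {"source": "ehr",    "drug": "drugA", "event": "headache"},
    {"source": "social", "drug": "drugA", "event": "headache"},
    {"source": "social", "drug": "drugA", "event": "headache"},
    {"source": "ehr",    "drug": "drugB", "event": "rash"},
]

def slice_and_dice(records, by):
    """Group records by the given dimensions and count occurrences."""
    return Counter(tuple(r[k] for k in by) for r in records)

# Slicing by drug and event across all sources: the drugA/headache pair
# dominates, the sort of pattern that might prompt a closer safety review.
signals = slice_and_dice(data_mart, by=("drug", "event"))
```

Changing the `by` tuple (say, to `("source", "drug")`) re-slices the same data along different dimensions, which is the flexibility the text contrasts with IT-heavy, single-purpose database reviews.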