How Looker Approaches Data in the Time of COVID-19
By Daniel Mintz
Chief Data Evangelist at Looker
April 21, 2020 — Santa Cruz, CA
Since COVID-19 started to spread around the world, data has been in the spotlight. With such a fast-moving virus, reliable data has been very hard to come by. Frontline healthcare workers have rightly been focused on saving lives, and haven’t had time to pause for data collection.
But many others have jumped in to help use data to understand, as best we can, what exactly is happening. Public health workers, journalists, academics, and even grassroots groups of volunteers have done the hard work of finding, collecting, cleaning, understanding, and maintaining critical datasets.
As we’ve worked with our customers, partners, and communities who are adapting to this new reality, the requests for data about what is happening have been relentless. Ecommerce companies are using COVID data to adapt to radically increased usage; restaurant chains are using it as they retool to focus on delivery; governmental agencies are using it to plan so they can mitigate the impacts of COVID on their citizens. And of course, organizations across the healthcare space — hospital systems, labs, and insurers — are using it to understand how to prepare their business and save lives.
But with so many different entities collecting data, and each presenting it in slightly different ways, the work required to unify that data and make sense of it is considerable. And even once you’ve got things working, building a data pipeline that keeps the data fresh while monitoring for schema or methodology changes is a big lift.
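To make the "big lift" concrete: a pipeline ingesting these datasets typically needs to notice when an upstream provider changes the shape of its data. The sketch below is purely illustrative (the function and column names are hypothetical, not Looker's or Google's actual tooling); it shows the kind of schema-drift check such a pipeline might run before each load:

```python
# Minimal, illustrative sketch of a schema-drift check for an ingestion
# pipeline. A real pipeline would also track column types, nullability,
# and methodology notes, not just column names.

def detect_schema_drift(known_columns, incoming_columns):
    """Compare the last-known column set against a fresh snapshot.

    Returns a dict describing added and removed columns so the pipeline
    can alert a human instead of silently mis-loading the data.
    """
    known = set(known_columns)
    incoming = set(incoming_columns)
    return {
        "added": sorted(incoming - known),
        "removed": sorted(known - incoming),
        "changed": known != incoming,
    }

# Hypothetical example: an upstream provider renames "deaths"
# to "cumulative_deaths" between two daily snapshots.
drift = detect_schema_drift(
    known_columns=["date", "region", "cases", "deaths"],
    incoming_columns=["date", "region", "cases", "cumulative_deaths"],
)
if drift["changed"]:
    # In a real pipeline this would page an operator or pause the load.
    print(f"Schema drift detected: +{drift['added']} -{drift['removed']}")
```

Checks like this are what turn a one-off analysis into a pipeline that can be trusted to stay fresh.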
Our colleagues at the Google Cloud Public Datasets program have used their existing tools to centralize the data. They’re adding new datasets continuously, and have made queries against these datasets free on Google BigQuery.
Continue reading here: https://looker.com/blog/data-in-the-time-of-covid-19