CRESCYNT Toolbox – Data Cleaning


Data cleaning. Data cleansing. Data preparation. Data wrangling. Data munging.

Garbage In, Garbage Out.

If you’re like most people, your data is self-cleaning, meaning: you clean it yourself! We often hear that 80% of our “data time” is spent on data cleaning, leaving just 20% for analysis. Wouldn’t it be great to work through data prep faster and keep more of our data time for analysis, exploration, visualization, and next steps?

Here we look over the landscape of tools to consider, then come back to where our feet may be right now to offer specific suggestions for workbook users – lessons learned the hard way over a long time.

The end goal is for our data to be accurate, human-readable, machine-readable, and calculation-ready.

Software for data cleaning:

RapidMiner may be the best free (for academia) non-coding tool available right now. It was built for data mining, but that doesn’t have to be your purpose for it to work hard for you. Its diagram interface is very helpful, and it almost facilitates a “workflow discovery” process as you incrementally try, tweak, build, and re-use workflow paths that grow as you clean. It makes quick work of plotting histograms for each data column so you can instantly SEE distributions, zeros, outliers, and the number of valid entries. It also records and tracks commands (like a baby Jupyter notebook). When pulling in raw datasets, it automatically keeps the originals intact: RapidMiner makes changes only to a copy of the raw data, and you can then export the finished files for use with other software. It’s really helpful for joining data from multiple sources and pulling subsets for output data files. See RapidMiner Studio: Data Prep.
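
(For those who prefer code to a GUI, the same quick column-profiling idea can be sketched in a few lines of pandas. This is only an illustration – the file name and its columns are placeholders, not RapidMiner output.)

```python
# A rough pandas sketch of per-column profiling: summary stats, missing values,
# zeros, and a quick histogram for every numeric column.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("survey_copy.csv")             # work on a copy; leave the raw file untouched

print(df.describe(include="all"))               # ranges, means, and counts of valid entries
print(df.isna().sum())                          # missing values per column
print((df.select_dtypes("number") == 0).sum())  # zeros per numeric column

df.hist(figsize=(10, 8))                        # one quick histogram per numeric column
plt.tight_layout()
plt.show()
```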

R is popular in the domain sciences and has a number of powerful packages that help with data cleaning. Make use of RStudio as you clean and manipulate data with dplyr and tidyr. New packages are frequently released, such as assertr, janitor, and dataMaid. A great thing about R is its active community and its support for learning. Check out this swirl tutorial on Getting and Cleaning Data – or access it through DataCamp. The most comprehensive list of courses on R for data cleaning is here, via R-bloggers. There’s lovely guidance on data wrangling in R by Hadley Wickham – useful even outside of R.

Data cleaning tool recommendations by KDnuggets, Quora, and Varonis are a little dated and business-oriented, but these survivors may be worth investigating:

  • Trifacta Wrangler was built for desktop use, and designed for many steps of data wrangling: cleaning and beyond. See intro video, datasheet, demo with Tableau.
  • DataCleaner – community or commercial versions; can use SQL databases. Mostly designed for business applications; videos show what it can do.
  • OpenRefine gets the legacy spotlight (it was Google Refine… now community-maintained). Free, open source, and still in use. Here’s a recent walkthrough. It helps fix messy text and categorical data, but is less useful for other kinds of science research data (a short pandas sketch of the same idea follows this list).
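
(If you’d rather script that kind of cleanup, here is a minimal pandas sketch of the same idea – trimming whitespace, collapsing case, and mapping spelling variants onto one canonical label. The file, column, and variant map below are hypothetical.)

```python
import pandas as pd

df = pd.read_csv("observations.csv")   # a hypothetical modifiable copy of your data

# Normalize a messy categorical text column: trim whitespace, collapse case,
# then map known spelling variants onto a single canonical label.
col = "habitat_type"
df[col] = df[col].str.strip().str.lower()
variants = {"fore-reef": "forereef", "fore reef": "forereef", "back reef": "backreef"}
df[col] = df[col].replace(variants)

print(df[col].value_counts(dropna=False))   # review the categories that remain
```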

There are some great tools worth borrowing that started in data journalism:

  • Tabula is “a tool for liberating data tables trapped inside PDF files” – it extracts text-based PDFs (not scans) into data tables.
  • csvkit is “a suite of command-line tools for converting to and working with CSV, the king of tabular file formats.” Helpful for converting Excel to CSV cleanly, CSV to JSON, JSON to CSV, working with SQL databases, and more (a pandas sketch of this kind of conversion follows this list).
  • agate is “a Python data analysis library that is optimized for humans instead of machines…. an alternative to numpy and pandas that solves real-world problems with readable code.” Here’s the cookbook.
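
(As a taste of those conversions in code, here is a short pandas sketch that reads one sheet of an Excel workbook and writes clean CSV and JSON copies. The file and sheet names are made up; reading .xlsx files also requires the openpyxl package.)

```python
import pandas as pd

# Read one sheet of an Excel workbook, then write tidy CSV and JSON versions.
df = pd.read_excel("field_data.xlsx", sheet_name="2016_survey")
df.to_csv("field_data_2016.csv", index=False)
df.to_json("field_data_2016.json", orient="records", indent=2)
```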

Finally, Python itself is clearly a very powerful open-source tool for data cleaning. Look into it with this DataCamp course, pandas and other Python libraries, or this Kaggle competition walkthrough.
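
(To give a flavor of what pandas handles, here is a minimal cleaning sketch – the file and column names are placeholders: coerce text to numbers, strip stray whitespace, count the gaps, and drop exact duplicate rows.)

```python
import pandas as pd

df = pd.read_csv("raw_data.csv")    # a hypothetical raw export

# Coerce a numeric column: anything non-numeric becomes NaN instead of silently mixing types.
df["temperature_c"] = pd.to_numeric(df["temperature_c"], errors="coerce")

# Tidy a text column: consistent string type, no stray leading/trailing whitespace.
df["site_id"] = df["site_id"].astype("string").str.strip()

print(df.isna().sum())              # where the gaps are, per column
df = df.drop_duplicates()           # remove exact duplicate rows

df.to_csv("clean_data.csv", index=False)
```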

Manual Data Munging. If you’re using Excel, Open Office, or Google Sheets to clean your data (e.g., small complex datasets common to many kinds of research), you may know all the tricks you need. For those newer to data editing, here are some tips.

  • To start: save a copy of your original file with a new name (e.g., tack on “initials-mod” plus the current date: YYYYMMDD). Then make your original file read-only to protect it. Pretend it’s in an untouchable vault. Use only your modifiable copy.
  • Create a Changes page where you record the edits you make in the order you make them. This also lets you scribble notes for changes you plan to make or items you need to track down but haven’t yet executed (Done and To-Do lists).
  • First edit: if you don’t have a unique ID for each row, add a first column with a simple numeric sequence before doing anything else.
  • Create a copy of that spreadsheet page, leave the original intact, and make modifications only to the newly copied page. If each new page is created on the left, older pages accumulate to the right, where they are less vulnerable to accidental editing. Name each tab usefully.
  • Second edit: if your column headings take up more than one row, consolidate that information into one row. Do not merge cells. Include units, but no special characters or spaces: use only letters, numbers, dashes, and underscores.
  • Add a Data Definitions page to record your old column headings, your new column headings, and explain what each column heading means. Include units here and also in column headings where possible.
  • In cells with text entries, do not use bare commas. Either use semicolons and dashes instead of commas in your text, or enclose text entries in quotation marks; otherwise bare commas create havoc when exporting to and importing from CSV. (For how this quoting looks in code, see the sketch after this list.)
  • Add a Comments column, usually at the end of other columns, to record any notes that apply to individual rows or a subset of rows. Hit Save, now and often.
  • Now you’re free to sort each column to find data entry typos (e.g., misplaced decimals), inconsistent formats, or missing values. The danger here is failing to select the entire spreadsheet before sorting – always select the square northwest of cell A1 (or convert the spreadsheet to a table). This is where you’ll be glad you numbered each row at the start, so you can compare against the original.
  • If there’s a short note, such as a data source credit, that MUST accompany the page, park it in the column header row to the right of the meaningful headers so it won’t get sorted, lost, or confused with actual data.
  • If you use formulas, document the formulas in your Data Definitions page (replace cell references with column_names), and copy-paste as value-only as soon as practical.
  • Make sure there is only one kind of data in each column: do not mix numeric and text entries. Instead, create extra columns if needed.
  • Workbooks should be saved each day of editing with that day’s date (as YYYYMMDD) as part of the filename so you can get back to an older copy. At the end of your session clean up your Changes page, moving To-Do to Done and planning next steps.
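
As promised above, here is a small sketch (file and column names are invented) of how two of these tips look once you move from the workbook to code: checking for mixed-type columns, and quoting text fields on export so embedded commas can’t break the CSV.

```python
import csv
import pandas as pd

df = pd.read_csv("mydata_mod_copy.csv")   # your modifiable copy, never the original

# Columns that mix numbers and text show up with dtype "object" – worth a closer look.
print(df.dtypes)

# Quote every non-numeric field on export so embedded commas can't break the CSV.
df.to_csv("mydata_mod_export.csv", index=False, quoting=csv.QUOTE_NONNUMERIC)
```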

Find more spreadsheet guidance here (a set of guidelines recently developed for participants in another project – good links to more resources at its end).

Beyond Workbooks. If you can execute and document your data cleaning workflows in a workbook like Excel, Open Office, or Google Sheets, then you can take your data cleaning to the next level. Knowing steps and sequences appropriate for your specific kinds of datasets will help enormously when you want to convert to using tools such as RapidMiner, R, or Python that can help with some automation and much bigger datasets.
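
As one small, hedged example of that conversion (the file and columns here are made up), the habits above translate almost directly into a script: keep the original untouched, add a row ID first, tidy the headers, and save a dated copy.

```python
import datetime
import pandas as pd

raw = pd.read_csv("original_survey.csv")   # the read-only original stays untouched
df = raw.copy()

# First edit: a unique ID for every row.
df.insert(0, "row_id", list(range(1, len(df) + 1)))

# Tidy headers: one row, units included, no spaces or special characters.
df.columns = (df.columns.str.strip()
                        .str.replace(r"[^\w]+", "_", regex=True)
                        .str.strip("_"))

# Save a dated copy, mirroring the YYYYMMDD workbook convention above.
stamp = datetime.date.today().strftime("%Y%m%d")
df.to_csv(f"survey_mod_{stamp}.csv", index=False)
```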

Want more depth? Check out Data Preparation Tips, Tricks, and Tools: An Interview with the Insiders. “If you are not good at data preparation, you are NOT a good data scientist…. The validity of any analysis is resting almost completely on the preparation.” – Claudia Perlich

Happy scrubbing! Email or comment with your own favorite tips. Cheers, Ouida Meier

 


CRESCYNT Toolbox – Discovery of Online Datasets

[Screenshot: data discovery at cinergi.sdsc.edu]

Announcing recent progress for data discovery in support of coral reef research!

Take advantage of this valuable community resource: a data discovery search engine with a special nose for locating coral reef research data sources: cinergi.sdsc.edu.

A major way CRESCYNT has made progress is by serving as a collective coral reef use case for EarthCube groups that are building great new software tools. One of those is a project called CINERGI. It registers resources – especially online repositories and individual online datasets, plus documents and software tools – and then enriches the descriptors to make the resources more searchable. The datasets themselves stay in place: a record of the dataset’s location and description is registered and augmented for better finding and filtering. Registered datasets and other resources, of course, keep whatever access and use license their authors have given them.

CINERGI already has over a million data sources registered, and over 11,000 of these are specifically coral reef datasets and data repositories. The interface now also features a geoportal to support spatial search options.

The CINERGI search tool is now able to incorporate ANY online resources you wish, so if you don’t find your favorite resources or want to connect your own publications, data, data products, software, code, and other resources, please contribute. If it’s a coral-related resource, be sure to include the word “coral” somewhere in your title or description so it can be retrieved that way later as well. (Great retrieval starts with great metadata!)

To add new resources: Go to cinergi.sdsc.edu, and click on CONTRIBUTE. Fill in ESPECIALLY the first fields – title, description, and URL – then as much of the rest as you can.

Try it out!

Thanks to EarthCube, the CINERGI Data Discovery Hub, and the great crew at the San Diego Supercomputer Center and partners for making this valuable tool possible for coral reef research and other geoscience communities. Here are slides and a video to learn more.

 


CRESCYNT Toolbox – Workflows as Collaboration Space and Workbench Blueprint

Scientists need better ways to analyze and integrate their data and collaborate with other scientists; new computing technologies and tools can help with this. However, it’s difficult to overcome the challenge of disparate perspectives and the absence of a common vocabulary: this is true of multidisciplinary science teams, and true when scientists try to talk with computer scientists. Workflows, as a way to help design and implement a workbench, are needed both as a collaboration space and a blueprint for implementation.

Take a look at a recent presentation to the EarthCube science committee (video) or an earlier presentation offered at ASLO 2017 (slides and voice) to see a flexible and low-tech way to simultaneously (1) facilitate necessary sci-tech interactions for your own lab and (2) begin to sketch out a blueprint for work that needs to be done. Subsequent technical implementation is possible with new tools, including Common Workflow Language (CWL) as a set of specifications, Docker as modular and shareable containers for either fully developed tools or small pieces of code, and Nextflow as an efficient and highly scalable workflow language to make the computational work happen. Look for a post in the near future by Mahdi Belcaid describing the technical implementation of these workflows.

OPPORTUNITY! We will be hosting one or two in-person skills-training workshops in the coming months, with expenses covered by our NSF EarthCube CRESCYNT grant. The workshops will focus particularly on training early-career professionals and will work through some challenging coral reef use cases and their cyberinfrastructure needs. We collected some great use cases at ICRS, but would like additional cases to consider, so we invite you to describe your own research challenges through this Google form. Please contact us for more on this, or other issues. Thanks!


CRESCYNT Toolbox – Data Repositories – Estate Planning for your Data

“Hypotheses come and go but data remain.” – Ramón y Cajal

Taking care of our data for the long term is not just good practice – allowing us to share our data, defend our work, reassess conclusions, collaborate with colleagues, and examine broader scales of space and time – it’s also estate planning for our data, and a primary way of communicating with future scientists and managers.


Here are some great options for long-term data storage, highlighting repositories friendly to coral reef science.

First, there are some important repository networks useful for coral reef data – these can unify standards and offer collective search portals: we like DataONE (members here) and bioCaddie (members here).

KNB – the Knowledge Network for Biocomplexity – offers open and private data uploads, with an ecological orientation. A DataONE network member.

NOAA CoRIS: Coral Reef Information System – often free to use and can accept coral reef related data beyond NOAA’s own data; contact them first.

BCO-DMO – Biological and Chemical Oceanography Data Management Office – if you have an NSF grant that requires data storage here, you’re fortunate. Good data management guidelines and metadata templates, excellent support staff. Now a DataONE member.

Dataverse – supported by Harvard endowments. There are multiple organizational dataverses – the Harvard Dataverse is free to use. bioCaddie member.

Zenodo – free to use, supported by the European Commission (this is a small slice of CERN’s enormous repository for the Large Hadron Collider). Assigns DOIs. We invite you to include the “Coral Reef” community when you upload. bioCaddie member.

NCBI – the National Center for Biotechnology Information is very broadly accepted for ‘omics data of all types. A bioCaddie member.

DataCite – not a repository, but if you upload a dataset at a repository that does not assign its own DOIs, you can get one at DataCite and include it when publishing your datasets.

We’ve not listed more costly repositories such as Dryad (focused on journal requirements) or repositories restricted to institutions. What about other storage options such as GitHub, Amazon Web Services, websites? Those have important uses, but are not curated repositories with long-term funding streams, so are not the best data legacy options.

Most of these repositories allow either private (closed) or public (open) access, or later conversion to open access. Some have APIs for automated access within workflows. These are repositories we really like for storing and accessing coral reef work. Share your favorite long-term data repository – or experiences with any of the repositories listed here – in the comments.
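
(As an example of what API access can look like, here is a small sketch against Zenodo’s public records endpoint – a simple search for coral reef records. The query and printed fields are illustrative; check Zenodo’s API documentation for the full options.)

```python
import requests

# Search Zenodo's public REST API for coral-reef-related records.
resp = requests.get(
    "https://zenodo.org/api/records",
    params={"q": "coral reef", "size": 5},
    timeout=30,
)
resp.raise_for_status()

for hit in resp.json()["hits"]["hits"]:
    print(hit["metadata"]["title"], "->", hit.get("doi", "no DOI listed"))
```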


CoralNet: deploying deep learning in the shallow seas – by Oscar Beijbom


Having dedicated my PhD to automating the annotation of coral reef survey images, I have seen my fair share of surveys and talked to my fair share of coral ecologists. In these conversations, I always heard the same story: collecting survey images is quick, fun and exciting. Annotating them is, on the other hand, slow, boring, and excruciating.

When I started CoralNet (coralnet.ucsd.edu) back in 2012 the main goal was to make the manual annotation work less tedious by deploying automated annotators alongside human experts. These automated annotators were trained on previously annotated data using what was then the state-of-the-art in computer vision and machine learning. Experiments indicated that around 50% of the annotation work could be done automatically without sacrificing the quality of the ecological indicators (Beijbom et al. PLoS ONE 2015).

The Alpha version of CoralNet was thus created and started gaining popularity across the community. I think this was partly due to the promise of reduced annotation burden, but also because it offered a convenient online system for keeping track of and managing the annotation work. By the time we started working on the Beta release this summer, the Alpha site had over 300,000 images with over 5 million point annotations – all provided by the global coral community.

There was, however, a second purpose of creating CoralNet Alpha. Even back in 2012 the machine learning methods of the day were data-hungry. Basically, the more data you have, the better the algorithms will perform. Therefore, the second purpose of creating CoralNet was quite simply to let the data come to me rather than me chasing people down to get my hands on their data.

At the same time, the CoralNet Alpha site was starting to buckle under increased usage. Long queues started to build up in the computer vision backend as power users such as NOAA CREP and the Catlin Seaview Survey uploaded tens of thousands of images to the site for analysis assistance. The time was ripe for an update.

As it turned out, the timing was fortunate. A revolution has happened in the last few years with the development of so-called deep convolutional neural networks. These immensely powerful and large nets are capable of learning from vast databases to achieve vastly superior performance compared to methods from the previous generation.

During my postdoc at UC Berkeley last year, I researched ways to adapt this new technology to the coral reef image annotation task in the development of CoralNet Beta. Leaning on the vast database accumulated in CoralNet Alpha, I tuned a net with 14 hidden layers and 150 million parameters to recognize over 1,000 types of coral substrates. The results, which are in preparation for publication, indicate that the annotation work can be automated to between 80% and 100%, depending on the survey. Remarkably, in some situations the classifier is more consistent with the human annotators than those annotators are with themselves. Indeed, we show that the combination of confident machine predictions with human annotations beats both the human and the machine alone!
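
(For readers curious what this kind of fine-tuning looks like in code, here is a generic transfer-learning sketch in PyTorch. It is not CoralNet’s actual code or architecture – just the common pattern of starting from a pretrained network and retraining its final layer on annotated survey-image patches.)

```python
import torch
import torchvision

# Generic transfer-learning pattern (illustrative only, not CoralNet's implementation):
# start from a network pretrained on a large image corpus, then retrain the final
# layer to predict substrate labels for annotated survey-image patches.
model = torchvision.models.resnet50(weights="IMAGENET1K_V2")
model.fc = torch.nn.Linear(model.fc.in_features, 1000)   # ~1,000 substrate classes

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
loss_fn = torch.nn.CrossEntropyLoss()

# train_loader would yield (patches, labels) batches built from the point annotations:
# for patches, labels in train_loader:
#     optimizer.zero_grad()
#     loss = loss_fn(model(patches), labels)
#     loss.backward()
#     optimizer.step()
```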

Using funding from NOAA CREP and CRCP, I worked together with UCSD alumnus Stephen Chan to develop CoralNet Beta: a major update that includes migration of all hardware to Amazon Web Services, and a brand new, highly parallelizable computer vision backend. Using the new computer vision backend, the 350,000 images on the site were re-annotated in one week! Software updates include improved search, import, export and visualization tools.

With the new release in place we are happy to welcome new users to the site; the more data the merrier!

_____________

– Many thanks to Oscar Beijbom for this guest posting as well as significant technological contributions to the analysis and understanding of coral reefs. You can find Dr. Beijbom on GitHub, or see more of his projects and publications here. You can also find a series of video tutorials on using CoralNet (featuring the original Alpha interface) on CoralNet’s vimeo channel, and technical details about the new Beta version in the release notes.

 


CRESCYNT Toolbox – Disaster Planning and Recovery

With computers, the question is not whether they will fail, but when.

tl;dr – It’s very practical to have cloud storage backup in addition to still-useful external hard drive backup routines. Here are some secure cloud alternatives.

Personal note. I’ve had hard drive failures due to lightning strike; simultaneous death of mirrored hard drives within a RAID; drenching from an upper-floor emergency shower left flowing by a disgruntled chemistry student; and most recently, demise of my laptop by sudden immersion in salt water (don’t ask). By some intersection of luck and diligence, on each occasion recent backups were available for data recovery. In the most recent rebuild, it was a revelation how much work is now backed up via regular entry into the casual cloud.

This latest digital landing was mercifully soft (…cloudlike). Because of work portability, my recent sequential backup habit has been to a paid unshared Dropbox account; $10/mo is a bargain for peace of mind (beyond a certain size, restoration is not drag-and-drop). A surprising number of files these days are embedded in multiple team projects – much on Google Drive – so all of that was available, with revision history. Group conversations and files were on Slack and email. One auxiliary brain (iPhone) was in a waterproof case with cloud backup, and another auxiliary brain (project/task tracking) was in a web app, KanbanFlow. Past years of long-term archives were already on external hard drives in two different cities. GitHub is an amazing place to develop, document, recover and share work in progress and products, but it is not a long-term curated data repository. For valuable datasets, the rule is to simplify formats, attach metadata, and update media periodically.

Thinking about your own locations for data storage and access? Check out this review of more secure alternatives to – and apps on top of – Dropbox. Some, like OwnCloud, can serve as both storage and linked access for platforms like Agave. A strength of some current analytical platforms is that they can access multiple data storage locations; for example, Open Science Framework can access Dropbox, Google Drive, GitHub, Box, figshare, and now Dataverse and Amazon Web Services as well.

A collaborator recently pointed out that the expense of any particular type of data storage is really the expense of its backup processes: frequency, automation, security, and combination of archiving media. Justifying the expense can come down to this question: What would it cost to replace these data? Some things are more priceless than others.

Disaster Planning and Recovery tools. To go beyond data recovery in your planning, here’s an online guide for IT disaster recovery planning and cyberattacks. How much of a problem is this really? See Google’s real-time attack map (hit “play”). Better to plan than fear. You did update those default passwords on your devices, yes?

Feel free to share your own digital-disaster-recovery story in the comments.
