CRESCYNT Data Science For Coral Reefs Workshop 2 – Data Integration and Team Science


We’re extremely pleased to be able to offer two workshops in March 2018 at NCEAS. The second is CRESCYNT Data Science for Coral Reefs Workshop 2: Data Modeling, Data Integration and Team Science. Apply here.

When: March 12-15, 2018
Where: NCEAS, Santa Barbara, CA

Workshop description:

This workshop is recommended for early- to mid-career and senior scientists interested in applying technical skills to collaborative research questions and committed to subsequently sharing what they learn. Participants will learn how to structure and combine heterogeneous datasets relevant to coral reef scientists in a collaborative way. Days 1 and 2 of the workshop will cover reproducible workflows using R/RStudio and RMarkdown, collaborative coding with GitHub, strategies for team research, data modeling and data wrangling, and advanced data integration and visualization tools. Participants will then spend 2 days working in small teams to integrate various coral reef datasets, practicing the skills learned and developing workflows for data tidying and integration.

The workshop is limited to 20 participants. We encourage you to apply via this form. Workshop costs will be covered with support from the NSF EarthCube CRESCYNT RCN. We anticipate widely sharing workshop outcomes, including workflows and recommendations. Anticipate some significant pre-workshop prep effort.

Related posts: Learning to Love R More and R Resources for Visualization

>>>Go to the blog Masterpost or the CRESCYNT website or NSF EarthCube.<<<


CRESCYNT Data Science for Coral Reefs Workshop 1 – Data Rescue


We’re extremely pleased to be able to offer two workshops in March 2018 at NCEAS. The first is CRESCYNT Data Science for Coral Reefs Workshop 1: Data Rescue. Apply here.

When: March 7-10, 2018
Where: NCEAS, Santa Barbara, California, USA

Workshop description:

Recommended for senior scientists with rich “dark” data on coral reefs that needs to be harvested and made accessible in an open repository. Students or staff working with senior scientists are also encouraged to apply. Days 1 and 2 of the workshop will cover the basic principles of data archiving and data repositories, including Darwin Core and EML metadata formats, how to write good metadata, how to archive data on the KNB data repository and elsewhere, data preservation workflows and best practices, and how to improve data discoverability and reusability. Additionally, participants will spend approximately 2 days working in pairs to archive their own data using these principles, so applying with a team member from your research group is highly recommended.

The workshop is limited to 20 participants. We encourage you to apply via this form. Workshop costs will be covered with support from the NSF EarthCube CRESCYNT RCN. Participants will publish data during the workshop process, and we anticipate widely sharing workshop outcomes, including workflows and recommendations. Because coral reef science embodies a wide range of data types (spreadsheets, images, videos, field notes, large ‘omics text files, etc.), anticipate some significant pre-workshop prep effort.

Related post: CRESCYNT Toolbox – Estate Planning for Your Data

>>>Go to the blog Masterpost or the CRESCYNT website or NSF EarthCube.<<<


CRESCYNT Toolbox – Data Cleaning


Data cleaning. Data cleansing. Data preparation. Data wrangling. Data munging.

Garbage In, Garbage Out.

If you’re like most people, your data is self-cleaning, meaning: you clean it yourself! We often hear that 80% of our “data time” is spent in data cleaning to enable 20% in analysis. Wouldn’t it be great to work through data prep faster and keep more of our data time for analysis, exploration, visualization, and next steps?

Here we look over the landscape of tools to consider, then come back to where our feet may be right now to offer specific suggestions for workbook users – lessons learned the hard way over a long time.

The end goal is for our data to be accurate, human-readable, machine-readable, and calculation-ready.

Software for data cleaning:

RapidMiner may be the best free (for academia) non-coding tool available right now. It was built for data mining, which doesn’t have to be your purpose for it to work hard for you. It has a diagram interface that’s very helpful. It almost facilitates a “workflow discovery” process as you incrementally try, tweak, build, and re-use workflow paths that grow during the process of data cleaning. It makes quick work of plotting histograms for each data column to instantly SEE distributions, zeros, outliers, and number of valid entries. It also records and tracks commands (like a baby Jupyter notebook). When pulling in raw datasets, it automatically keeps the originals intact: RapidMiner makes changes only to a copy of the raw data, and then you can export the finished files to use with other software. It’s really helpful in joining data from multiple sources, and pulling subsets for output data files. RapidMiner Studio: Data Prep.

R is popular in domain sciences and has a number of powerful packages that help with data cleaning. Make use of RStudio as you clean and manipulate data with dplyr and tidyr. New packages are frequently released, such as assertr, janitor, and dataMaid. A great strength of R is its active community support for learning. Check out this swirl tutorial on Getting and Cleaning Data – or access it through DataCamp. The most comprehensive list of courses on R for data cleaning is here via R-bloggers. There’s lovely guidance for data wrangling in R by Hadley Wickham – useful even outside of R.
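To give a flavor of what these packages do, here is a minimal cleaning sketch, assuming dplyr, tidyr, and janitor are installed; the file and column names are hypothetical:

    # A minimal data-cleaning sketch in R; file and column names are hypothetical.
    library(dplyr)
    library(tidyr)
    library(janitor)

    reef <- read.csv("survey_raw.csv", stringsAsFactors = FALSE) %>%
      clean_names() %>%                          # janitor: consistent, machine-safe headers
      mutate(depth_m = as.numeric(depth_m)) %>%  # coerce a mixed column; NAs flag bad entries
      drop_na(site_id) %>%                       # tidyr: drop rows missing their unique ID
      distinct()                                 # dplyr: remove exact duplicate rows

    summary(reef)                                # quick scan for outliers and NA counts

Each step mirrors a manual cleaning task, which is what makes these packages such a natural next step beyond the workbook.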

Data cleaning tool recommendations by KDnuggets, Quora, and Varonis are a little dated and business-oriented, but these survivors may be worth investigating:

  • Trifacta Wrangler was built for desktop use, and designed for many steps of data wrangling: cleaning and beyond. See intro video, datasheet, demo with Tableau.
  • DataCleaner – community or commercial versions; can use SQL databases. Mostly designed for business applications; videos show what it can do.
  • OpenRefine gets the legacy spotlight (formerly Google Refine, now community-maintained). Free, open source, and still in use. Here’s a recent walkthrough. Helps fix messy text and categorical data; less useful for other kinds of science research data.

There are some great tools to borrow (okay, steal) that got their start in data journalism:

  • Tabula is “a tool for liberating data tables trapped inside PDF files” – it extracts text-based PDFs (not scans) into data tables.
  • csvkit is “a suite of command-line tools for converting to and working with CSV, the king of tabular file formats.” Helpful for converting Excel to CSV cleanly, CSV to JSON, JSON to CSV, working with SQL databases, and more; see the R sketch after this list for a rough in-R analogue.
  • agate is “a Python data analysis library that is optimized for humans instead of machines…. an alternative to numpy and pandas that solves real-world problems with readable code.” Here’s the cookbook.
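If you would rather stay in R than work at the command line, a rough analogue of some of these conversions takes only a few lines – a sketch assuming the readxl, readr, and jsonlite packages are installed; the file names are hypothetical:

    # Excel -> CSV and CSV -> JSON in R, loosely mirroring csvkit's
    # in2csv and csvjson tasks; file names are hypothetical.
    library(readxl)    # read .xlsx files without opening Excel
    library(readr)     # write_csv quotes text fields containing commas
    library(jsonlite)  # convert data frames to and from JSON

    sheet <- read_excel("field_notes.xlsx", sheet = 1)
    write_csv(sheet, "field_notes.csv")

    records <- read_csv("field_notes.csv")
    write(toJSON(records, pretty = TRUE), "field_notes.json")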

Finally, Python itself is clearly a very powerful open-source tool for data cleaning. Look into it with this DataCamp course, pandas and other Python libraries, or this Kaggle competition walkthrough.

Manual Data Munging. If you’re using Excel, Open Office, or Google Sheets to clean your data (e.g., small complex datasets common to many kinds of research), you may know all the tricks you need. For those newer to data editing, here are some tips.

  • To start: save a copy of your original file with a new name (e.g., tack on “initials-mod” plus the current date: YYYYMMDD). Then make your original file read-only to protect it. Pretend it’s in an untouchable vault. Use only your modifiable copy.
  • Create a Changes page where you record the edits you make in the order you make them. This also lets you scribble notes for changes you plan to make or items you need to track down but haven’t yet executed (Done and To-Do lists).
  • First edit: if you don’t have a unique ID for each row, add a first column with a simple numeric sequence before doing anything else.
  • Create a copy of that spreadsheet page, leave the original intact, and make modifications only to the newly copied page. If each new page is created on the left, older pages accumulate to the right, where they are less vulnerable to accidental editing. Name each tab usefully.
  • Second edit: if your column headings take up more than one row, consolidate that information into one row. Do not merge cells. Include units but no special characters or spaces: use only letters, numbers, dashes, and underscores.
  • Add a Data Definitions page to record your old column headings, your new column headings, and explain what each column heading means. Include units here and also in column headings where possible.
  • In cells with text entries, do not use bare commas. Either use semicolons and dashes instead of commas in your text, or enclose text entries in quotation marks (bare commas otherwise create havoc when exporting to and importing from CSV).
  • Add a Comments column, usually at the end of other columns, to record any notes that apply to individual rows or a subset of rows. Hit Save, now and often.
  • Now you’re free to sort each column to find data entry typos (e.g., misplaced decimals), inconsistent formats, or missing values. The danger here is failing to select the entire spreadsheet before sorting – always select the square northwest of cell A1 (or convert the spreadsheet to a table). This is where you’ll be glad you numbered each row at the start: to compare with the original.
  • If there’s a short note like data source credit that MUST accompany the page and must not get sorted, park it in the column header row to the right of the meaningful headers so it won’t get sorted, lost, or confused with actual data.
  • If you use formulas, document the formulas in your Data Definitions page (replace cells with column_names), and copy-paste as value-only as soon as practical.
  • Make sure there is only one kind of data in each column: do not mix numeric and text entries. Instead, create extra columns if needed. (The R sketch after this list shows how to check this programmatically.)
  • Workbooks should be saved each day of editing with that day’s date (as YYYYMMDD) as part of the filename so you can get back to an older copy. At the end of your session clean up your Changes page, moving To-Do to Done and planning next steps.
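Once a dataset is in this shape, a few lines of R can verify the rules above programmatically – a minimal sketch, assuming the readr package is installed; the file and column names are hypothetical:

    # Sanity checks for a cleaned spreadsheet exported to CSV; names hypothetical.
    library(readr)

    surveys <- read_csv("surveys-om-mod-20180301.csv")

    anyDuplicated(surveys$row_id)   # 0 means the row IDs are still unique
    sapply(surveys, class)          # one type per column: "character" where you
                                    # expected "numeric" flags mixed entries
    sort(unique(surveys$site_name)) # scan for typos and inconsistent categories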

Find more spreadsheet guidance here (a set of guidelines recently developed for participants in another project – good links to more resources at its end).

Beyond Workbooks. If you can execute and document your data cleaning workflows in a workbook like Excel, Open Office, or Google Sheets, then you can take your data cleaning to the next level. Knowing the steps and sequences appropriate for your specific kinds of datasets will help enormously when you want to move to tools such as RapidMiner, R, or Python, which offer some automation and can handle much bigger datasets.

Want more depth? Check out Data Preparation Tips, Tricks, and Tools: An Interview with the Insiders. “If you are not good at data preparation, you are NOT a good data scientist…. The validity of any analysis is resting almost completely on the preparation.” – Claudia Perlich

Happy scrubbing! Email or comment with your own favorite tips. Cheers, Ouida Meier

 

>>>Go to NSF EarthCube or the CRESCYNT website or the blog Masterpost.<<<


CRESCYNT Toolbox – Discovery of Online Datasets

Data discovery at cinergi.sdsc.edu

Announcing recent progress for data discovery in support of coral reef research!

Take advantage of this valuable community resource: a data discovery search engine with a special nose for locating coral reef research data sources: cinergi.sdsc.edu.

A major way CRESCYNT has made progress is by serving as a collective coral reef use case for EarthCube groups that are building great new software tools. One of those is a project called CINERGI. It registers resources – especially online repositories and individual online datasets, plus documents and software tools – and then enriches the descriptors to make the resources more searchable. The datasets themselves stay in place: a record of each dataset’s location and description is registered and augmented for better finding and filtering. Registered datasets and other resources, of course, keep whatever access and use license their authors have given them.

CINERGI already has over a million data sources registered, and over 11,000 of these are specifically coral reef datasets and data repositories. The interface now also features a geoportal to support spatial search options.

The CINERGI search tool is now able to incorporate ANY online resources you wish, so if you don’t find your favorite resources or want to connect your own publications, data, data products, software, code, and other resources, please contribute. If it’s a coral-related resource, be sure to include the word “coral” somewhere in your title or description so it can be retrieved that way later as well. (Great retrieval starts with great metadata!)

To add new resources: Go to cinergi.sdsc.edu, and click on CONTRIBUTE. Fill in ESPECIALLY the first fields – title, description, and URL – then as much of the rest as you can.

Try it out!

Thanks to EarthCube, the CINERGI Data Discovery Hub, and the great crew at the San Diego Supercomputer Center and partners for making this valuable tool possible for coral reef research and other geoscience communities. Here are slides and a video to learn more.

 

>>>Go to NSF EarthCube or the CRESCYNT website or the blog Masterpost.<<<


Chasing Coral is now on Netflix – A Powerful Film to See and Share

Coral Bleaching and its Aftermath – a scene from Chasing Coral. Credit: The Ocean Agency – XL Catlin Seaview Survey – Richard Vevers & Christophe Bailhache

Several of the coral reef scientists featured in the film Chasing Coral are CRESCYNT participants, including our PI, Dr Ruth D Gates, and we congratulate and thank them all for their eloquence, passion, deep experience, scientific integrity, and significant intellectual contributions to this powerful film. Chasing Coral’s producers are making it available free for public screenings, and its focus now is educating audiences and moving people to action.

“It’s not too late for coral reefs… indeed, for many other ecosystems that are facing challenges from climate change. It’s still possible to reduce the rate at which the climate is changing, and that’s within our power today.” – Dr Ove Hoegh-Guldberg

 

>>>Go to NSF EarthCube or the CRESCYNT website or the blog Masterpost.<<<


CRESCYNT at EarthCube All Hands Meeting 2017

EarthCube domain scientists, computer scientists, data scientists, and new members gathered in Seattle June 7-9, 2017 to communicate progress, connect over projects and science challenges, plan for future collaborative work, and welcome new participants.

Most of the presentations and posters from the meeting are available here. CRESCYNT program manager Ouida Meier delivered an invited talk on sci-tech matchmaking (video | slides), helped facilitate breakout sessions focused on clarifying requirements and resources for virtual workbenches (summary), and presented CRESCYNT coral reef use cases and workflow collaboration during a poster session. Discussion and collective brainstorming throughout the meeting were very dynamic and fruitful.


Download a larger PDF of the CRESCYNT poster – EarthCube AHM 2017.

Read more EarthCube in the News.

 

>>>Go to NSF EarthCube or the CRESCYNT website or the blog Masterpost.<<<
