Data never arrives in perfect shape. There is always missing information, a mix of formats, or content that is useless for your analysis. Data cleaning is the process of correcting and transforming values: standardizing formats, fixing encoding, removing unnecessary information, splitting columns, and extracting the relevant information.
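A few of the cleaning operations mentioned above (handling missing values, standardizing formats, splitting a column) can be sketched with pandas. The table and column names below are invented for illustration, not taken from the article:

```python
import pandas as pd

# A small, made-up messy table: a missing value, inconsistent
# casing, dates stored as strings, and a combined column.
df = pd.DataFrame({
    "name": ["ada lovelace", "Alan Turing", None],
    "joined": ["2021-01-05", "2021-02-17", "2021-03-10"],
    "city_country": ["London, UK", "Manchester, UK", "Zurich, CH"],
})

df = df.dropna(subset=["name"])              # remove unusable rows
df["name"] = df["name"].str.title()          # standardize casing
df["joined"] = pd.to_datetime(df["joined"])  # strings -> datetime

# Split one combined column into two, then drop the original.
df[["city", "country"]] = df["city_country"].str.split(", ", expand=True)
df = df.drop(columns=["city_country"])
print(df)
```

Each step maps to one of the cleaning tasks named above; a real pipeline would chain many more of them.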
We are currently experiencing the negative impacts of the novel coronavirus. This virus has quickly changed our lives and has left many of us feeling confused and fearful of what’s to come. As engineers and data scientists, we want to help make sense of the overwhelming amount of data.
Why should we care about data science? More and more data is being generated every day by smartphones, social media, health systems, banks, stores, online services, governments, sensors, and more. Every piece of information is saved ‘just in case.’ The available data has long outgrown what human brains can process; we need algorithms and machines to make sense of it.
Despite being a global catastrophe, the outbreak should be an eye-opening experience to reshuffle our priorities.
Why we should choose representative samples with error in mind when we build data visualizations. A brief overview of uncertain bar charts and uncertain ranked lists.
The type of data samples that populate our visualizations can add uncertainty to our results. Some common data displays like bar and pie charts work better than others for making that uncertainty understandable. This article explores how to understand our data samples and create the most suitable graphs for visualizing what they represent.
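One concrete way to make sampling uncertainty visible in a bar chart is to attach error bars derived from the sample itself. The sketch below uses a made-up sample and a normal-approximation 95% interval; the numbers are illustrative only:

```python
import math
import statistics

# Hypothetical sample: daily sales observed for one category.
sample = [12, 15, 11, 14, 13, 16, 12, 15]

mean = statistics.mean(sample)
# Standard error of the mean: sample stdev / sqrt(n).
sem = statistics.stdev(sample) / math.sqrt(len(sample))
# Normal-approximation 95% interval half-width.
ci95 = 1.96 * sem

print(f"bar height: {mean:.2f}, error bar: +/- {ci95:.2f}")
```

The bar's height is the sample mean, and the `ci95` value becomes the error-bar half-length, letting viewers judge how much the bar could plausibly move with a different sample.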
In general, the goals of data science are to understand data and generate predictive models that help us make better decisions. For a more thorough overview of data visualization, see “Data visualization and The Truthful Art.”
How to be prepared for the change that will transform the business landscape forever.
Worldwide access to vast amounts of data has changed the business landscape. Competitive marketing depends on knowing how to manage, process, and analyze that data. This article describes the path organizations need to take from collecting data to maximizing its use.
Today’s organizations are undergoing a challenging transformation process around their technical systems. The static software platforms that might have stored and processed a business’ data are no longer sustainable in the current web environment. Enterprises need cutting-edge technology to collect big data in real-time, analyze that data, and then get the information they need to stay competitive in today’s marketplace.
Why confusing these concepts has profound implications, from healthcare to business management
In correlated data, a pair of variables are related in that one thing is likely to change when the other does. This relationship might lead us to assume that a change to one thing causes the change in the other. This article clarifies that kind of faulty thinking by explaining correlation, causation, and the bias that often lumps the two together.
The human brain simplifies incoming information, so we can make sense of it. Our brains often do that by making assumptions about things based on slight relationships, or bias. But that thinking process isn’t foolproof. An example is when we mistake correlation for causation. Bias can make us conclude that one thing must cause another if both change in the same way at the same time. This article clears up the misconception that correlation equals causation by exploring both of those subjects and the human brain’s tendency toward bias.
A bird’s-eye view of the machine learning landscape.
The main goal of this article is to cover the most important concepts of machine learning and lay out the landscape. Readers will come away able to see what kind of solution matches a specific kind of problem, and should be able to find more specific knowledge when diving into a real-life project.
I’ll start with a definition that is 60 years old but still valid today:
The name is pretty self-explanatory, and the definition reinforces the same concept.
Data quality is a broad concept with multiple dimensions. I detail that information in another introductory article. This tutorial explores a real-life example. We identify what we want to improve, create the code to achieve our goals, and wrap up with some comments about things that can happen in real-life situations. To follow along, you need a basic understanding of Python.
Python Data Analysis Library (pandas) is an open-source, BSD-licensed library that provides high-performance, easy-to-use data structures and data analysis tools for the Python programming language.
You can install pandas by entering this command in a terminal: python3 -m pip install --upgrade pandas.
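After installing, a quick sanity check confirms that pandas imports and works. This tiny snippet is just a verification sketch, not part of the tutorial itself:

```python
import pandas as pd

# Print the installed version, then exercise a basic operation.
print(pd.__version__)
s = pd.Series([1, 2, 3, 4])
print(s.mean())  # 2.5
```

If the import succeeds and the mean prints, the installation is ready for the examples that follow.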
An amazing book about data visualization that I can’t recommend enough is The Truthful Art by Alberto Cairo.
In The Truthful Art, Cairo explains the principles of good data visualization. He describes five qualities that should be your foundation when you work with data visualization: truthful, functional, beautiful, insightful, and enlightening. Cairo also gives some great examples of biased and dishonest visualization.
Before I dive into the “Five Qualities of Great Visualizations,” there’s another related concept that I want to cover: data-ink ratio, introduced by Edward Tufte in The Visual Display of Quantitative Information.