What Is Data Profiling?

Data profiling is a process of examining data from an existing source and summarizing information about that data. Learn about its benefits!

By Garrett Alley · Jan. 21, 2019

Data profiling is the process of examining data from an existing source and summarizing information about that data. You profile data to determine the accuracy, completeness, and validity of your data. Data profiling can be done for many reasons, but it is most commonly used to assess data quality as part of a larger project. Data profiling is often combined with an ETL (Extract, Transform, and Load) process to move data from one system to another. When done properly, ETL and data profiling can be combined to cleanse, enrich, and move quality data to a target location.

For example, you might want to perform data profiling when migrating from a legacy system to a new system. Data profiling can help identify data quality issues that need to be handled in code when you move data into the new system. Or, you might want to perform data profiling as you move data into a data warehouse for business analytics. When data is moved to a data warehouse, ETL tools are typically used to move it, and data profiling can help identify which data quality issues must be fixed in the source and which can be fixed during the ETL process.

Why Profile Data?

Data profiling allows you to answer the following questions about your data:

  • Is the data complete? Are there blank or null values?
  • Is the data unique? How many distinct values are there? Is the data duplicated?
  • Are there anomalous patterns in your data? What is the distribution of patterns in your data? Are these the patterns you expect?
  • What range of values exist, and are they expected? What are the maximum, minimum, and average values for given data? Are these the ranges you expect?
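
As a rough illustration of how these questions map onto code, the sketch below computes basic profile metrics with pandas; the orders table and its columns are invented for the example, and a real profiling run would cover every table in the source.

    import pandas as pd

    # A small, made-up table to profile.
    orders = pd.DataFrame({
        "order_id": [1, 2, 2, 4],
        "amount": [19.99, 5.00, 5.00, None],
    })

    for col in orders.columns:
        s = orders[col]
        print(f"column: {col}")
        print(f"  nulls:    {s.isna().sum()} of {len(s)}")   # completeness
        print(f"  distinct: {s.nunique()}")                   # uniqueness
        print(f"  dupes:    {s.duplicated().sum()}")          # duplication
        if pd.api.types.is_numeric_dtype(s):
            # range checks: are min/max/mean what you expect?
            print(f"  min/max/mean: {s.min()} / {s.max()} / {s.mean()}")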

Answering these questions helps you ensure that you are maintaining quality data, which — companies are increasingly realizing — is the cornerstone of a thriving business. For more information, see our post on data quality.

How Do You Profile Data?

Data profiling can be performed in different ways, but there are three basic methods used to analyze the data.

  • Column profiling counts the number of times every value appears within each column in a table. This method helps to uncover the patterns within your data. (All three methods are sketched in code after this list.)

  • Cross-column profiling looks across columns to perform key and dependency analysis. Key analysis scans collections of values in a table to locate a potential primary key. Dependency analysis determines the dependent relationships within a data set. Together, these analyses determine the relationships and dependencies within a table.

  • Cross-table profiling looks across tables to identify potential foreign keys. It also attempts to determine the similarities and differences in syntax and data types between tables to determine which data might be redundant and which could be mapped together.
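
As a minimal sketch of these three methods, the snippet below applies each to two small pandas DataFrames. The customers and orders tables, their columns, and the simple heuristics are all invented for illustration; production profiling tools are considerably more thorough.

    import pandas as pd

    # Two small, made-up tables.
    customers = pd.DataFrame({
        "customer_id": [1, 2, 3, 4],
        "email": ["a@x.com", "b@x.com", None, "b@x.com"],
        "country": ["US", "US", "DE", "DE"],
    })
    orders = pd.DataFrame({
        "order_id": [10, 11, 12],
        "customer_id": [1, 2, 99],  # 99 has no matching customer
    })

    # Column profiling: how often does each value appear in each column?
    for col in customers.columns:
        print(customers[col].value_counts(dropna=False), "\n")

    # Cross-column profiling (key analysis): a column whose values are all
    # non-null and unique is a candidate primary key.
    candidate_keys = [col for col in customers.columns
                      if customers[col].notna().all() and customers[col].is_unique]
    print("candidate keys:", candidate_keys)  # ['customer_id']

    # Cross-column profiling (dependency analysis): A -> B holds if each
    # value of A maps to exactly one value of B.
    def determines(df, a, b):
        return bool((df.groupby(a)[b].nunique(dropna=False) <= 1).all())

    print("customer_id -> country:", determines(customers, "customer_id", "country"))

    # Cross-table profiling: orders.customer_id looks like a foreign key if
    # (almost) all of its values appear in customers.customer_id.
    unmatched = ~orders["customer_id"].isin(customers["customer_id"])
    print("orders with unknown customer:", int(unmatched.sum()))  # 1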

Rule validation is sometimes considered the final step in data profiling. This is a proactive step of adding rules that check for the correctness and integrity of the data that is entered into the system.
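
One simple way to picture rule validation is as a set of per-field checks applied to each incoming record. The sketch below is a hypothetical example; the rules, field names, and email pattern are assumptions, not a prescribed implementation.

    import re

    # Hypothetical rules keyed by field name; each returns True when valid.
    RULES = {
        "email": lambda v: v is not None
                 and re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", v) is not None,
        "amount": lambda v: v is not None and v >= 0,
    }

    def validate(record):
        """Return the names of fields that violate a rule."""
        return [field for field, rule in RULES.items()
                if field in record and not rule(record[field])]

    print(validate({"email": "not-an-email", "amount": -5}))  # ['email', 'amount']
    print(validate({"email": "a@x.com", "amount": 10}))       # []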

These methods may be performed manually by an analyst or automated by a service that runs the profiling queries for you.

Data Profiling Challenges

Data profiling is often difficult due to the sheer volume of data you’ll need to profile. This is especially true with a legacy system, which might have years of older data containing thousands of errors. Experts recommend segmenting your data as part of the profiling process so that you can see the forest for the trees.

If you manually perform your data profiling, you’ll need an expert to run numerous queries and sift through the results to gain meaningful insights about your data, which can eat up precious resources. In addition, you will likely only be able to check a subset of your overall data because it is too time-consuming to go through the entire data set.
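
One common mitigation, sketched below, is to profile a random sample rather than the full data set. The file name and sampling fraction here are invented, and sampling can of course miss rare errors.

    import pandas as pd

    df = pd.read_csv("legacy_export.csv")           # hypothetical export file
    sample = df.sample(frac=0.01, random_state=42)  # profile 1% of the rows
    print(sample.describe(include="all"))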

Published at DZone with permission of Garrett Alley, DZone MVB.
