Info Overload: How to Avoid Drowning in Crappy Data

The sinking of the RMS Titanic in 1912 is one of history’s most infamous maritime disasters. Over 1,500 lives were lost when the supposedly “unsinkable” ship struck an iceberg and plunged to the bottom of the North Atlantic on its maiden voyage.

The chain of human errors that led to this tragedy provides a sobering case study on the severe consequences of overconfidence and complacency. From dismissing iceberg warnings to inadequate lifeboat provisions, the Titanic catastrophe was riddled with poor decisions, mistakes, and oversight by the crew and leadership.

This preventable disaster shows how human error, on both an individual and organizational level, can lead to catastrophic outcomes. By examining the human failures that contributed to the demise of the Titanic, we can reflect on the hubris that often underlies disasters and hopefully work to prevent such needless loss of life in the future.

Human Error in Business Can Be Costly Too

Most businesses, of course, do not operate in such extreme conditions as the Titanic crew on their journey through the Atlantic. Yet, the mistakes we humans make compound over time and may cost businesses lost opportunities, reputation, clients, and, ultimately, a lot of money.

Internal processes and tools often help minimize the risk of errors and augment what we humans lack with machines that excel at consistency. The range of these potential errors is vast, but the most common issue is incorrect, incomplete, or duplicate data entered into systems. Operating with data that contains these problems results in poor forecasting and decision-making, wasted resources, negative customer experiences, and more.

A business operating like this is likely to lose profits and reputation. By learning from the Titanic tragedy, companies today can take proactive steps to prevent human error and avoid catastrophic outcomes. Robust processes, safety checks, and intelligent tools are crucial to minimizing mistakes and keeping operations running smoothly.

The Limits of Strict Data Validation

Many companies attempt to minimize data errors by building strict systems and processes for data entry. The idea is that validated, standardized data will be cleaner and more accurate. However, this engineering-driven approach often falls short in practice.

Engineers strive to create perfect forms where every field is validated. For example, an ideal form to add a company to a CRM would have 10–20 fields — logo, name, website, address, industry, etc. Each field would have rules to ensure data integrity — logos would need the correct file size, company names would have to be unique, and industries would be selected from a drop-down list.
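As a rough sketch of what this looks like in code (the field names, size limit, and industry list below are hypothetical, not a real CRM schema), every rule is reasonable on its own, yet together they block anything incomplete:

```python
# A minimal sketch of the "validate everything up front" approach.
# Field names, limits, and the industry list are hypothetical examples.

ALLOWED_INDUSTRIES = {"Software", "Finance", "Healthcare", "Retail"}
MAX_LOGO_BYTES = 512 * 1024  # hypothetical 512 KB cap on the logo upload

def validate_company(form: dict, existing_names: set[str]) -> list[str]:
    """Return a list of validation errors; an empty list means the form passes."""
    errors = []
    if not form.get("name"):
        errors.append("Company name is required.")
    elif form["name"] in existing_names:
        errors.append("Company name must be unique.")
    if not form.get("website"):
        errors.append("Website is required.")
    if form.get("industry") not in ALLOWED_INDUSTRIES:
        errors.append("Industry must be chosen from the approved list.")
    if len(form.get("logo", b"")) > MAX_LOGO_BYTES:
        errors.append("Logo file is too large.")
    # ...a dozen more rules like this, and the record cannot be saved
    # until every one of them passes.
    return errors
```

Each check makes sense in isolation; the friction comes from requiring all of them before a record can be saved at all.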

However, the users entering data rarely have time for this rigid process. A sales manager who just finished a meeting wants to quickly add some notes before her next appointment in 5 minutes. If forced to complete a complex form to log basic notes, she’ll skip the system, storing info in Notion or elsewhere.

When leadership sees low adoption, they’ll demand relaxed rules so users can at least enter a company name quickly. The result is duplicate, incomplete entries lacking crucial info — the opposite of clean data.

Balancing Validation with User Needs

The previous sections explored two problematic extremes: overly strict validation that blocks data entry, and minimal validation that lets low-quality data through. The key is finding a balance that maximizes data capture while keeping data quality achievable.

Strict validation stops people from entering data, leaving systems empty. Lax validation results in systems filled with duplicate, incomplete, and incorrect data. Both extremes render systems useless over time.

The top priority should be capturing as much data as possible into systems. But each record needs some minimal distinguishing attribute to enable clean-up later.

Instead of requiring users to fill 10–20 fields to add a company, ask for one vital distinguishing attribute, like a website URL. The name, address, industry, etc., can be filled in later through a separate clean-up workflow, which can be manual or utilize AI for automatic processing.
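A minimal sketch of that idea, assuming a simple in-memory store and queue (the names companies, enrichment_queue, and quick_add_company are illustrative, not a specific product's API):

```python
import uuid

# Sketch of "capture first, enrich later". Only the website is required at
# entry time; everything else is deferred to a clean-up workflow.

companies: dict[str, dict] = {}   # stand-in for the CRM's company table
enrichment_queue: list[str] = []  # stand-in for a background job queue

def quick_add_company(website: str, notes: str = "") -> str:
    """Create a bare-bones company record and schedule it for enrichment."""
    company_id = str(uuid.uuid4())
    companies[company_id] = {
        "website": website.strip().lower(),
        "notes": notes,
        "name": None,      # filled in later, manually or by an AI enrichment step
        "industry": None,
        "address": None,
    }
    enrichment_queue.append(company_id)
    return company_id

# The sales manager logs what she has in seconds...
quick_add_company("https://example.com", notes="Met at the conference, wants a demo")
# ...and the clean-up workflow works through enrichment_queue later.
```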

The key is determining the minimum validation needed to uniquely identify each record. If some records lack the basic unique attribute, such as a website, allow exceptions but track them closely for follow-up.
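One way to make the website serve as that unique key, sketched with hypothetical helper functions, is to normalize it before comparing and to flag records that arrive without one:

```python
from urllib.parse import urlparse

# Sketch of minimal uniqueness checks around the website attribute. The helper
# names and the needs_followup list are illustrative, not an existing library.

def normalize_website(url: str) -> str:
    """Reduce a URL to a comparable key: 'https://www.Example.com/about' -> 'example.com'."""
    parsed = urlparse(url if "//" in url else "//" + url)
    return parsed.netloc.lower().removeprefix("www.")

def find_duplicate(website: str, companies: dict[str, dict]) -> str | None:
    """Return the id of an existing record with the same normalized website, if any."""
    key = normalize_website(website)
    for company_id, record in companies.items():
        if normalize_website(record.get("website", "")) == key:
            return company_id
    return None

needs_followup: list[str] = []  # records saved without the unique attribute

def track_exception(company_id: str) -> None:
    """Allow the record without a website, but keep it on a follow-up list."""
    needs_followup.append(company_id)
```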

The goal shouldn’t be theoretical perfection in validation; it’s maximizing data capture while still enabling ongoing quality improvement. With creative, flexible requirements focused on essentials, systems can support business needs instead of hindering them.


The primary purpose of internal company tools is to capture valuable data, clean it up, help people navigate it, make sense of it, and provide alerts about important events.

These processes are often complex, requiring experienced teams to build automated and manual systems. The end result should be an extremely simple user experience on the front end, supported by rigorous enrichment and clean-up processes in the back end.
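A rough sketch of that split: the front end only needs the quick-add capture shown earlier, while a background pass fills in the rest. Here fetch_company_profile is a placeholder for whatever enrichment source a team actually uses (manual research, a data vendor, or an AI extraction step):

```python
# Sketch of the "simple front end, rigorous back end" split.
# fetch_company_profile is a placeholder, not a real API.

def fetch_company_profile(website: str) -> dict:
    """Placeholder lookup; a real system would call a data vendor, a scraper,
    or an AI extraction step and return fields like name, industry, address."""
    return {"name": None, "industry": None, "address": None}

def run_enrichment_pass(companies: dict[str, dict], queue: list[str]) -> None:
    """Drain the clean-up queue, filling in fields users were never asked for."""
    while queue:
        company_id = queue.pop()
        record = companies[company_id]
        profile = fetch_company_profile(record.get("website", ""))
        for field, value in profile.items():
            if record.get(field) is None and value is not None:
                record[field] = value
```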

You need analysts to understand needs and design systems, engineers to build them, and designers to craft intuitive interfaces.

If bad data goes in, bad results will come out. Solid data practices are the foundation for transforming raw data into usable business intelligence.

With quality data powering internal tools, companies can gain crucial visibility into operations, customers, markets, and more to drive strategic decisions. Data is at the heart of business success today.


Originally published on Medium.com