“I can’t trust the data in Salesforce.” It’s a statement every Administrator has heard at least once. With managers and executive leadership looking to make important data-driven decisions, it’s critical that all users trust the data.

There isn’t a perfectly clean database in existence. I recently saw someone on Twitter mention that their sales team was complaining the data in Salesforce was wrong. After further research, the Administrator found that the Excel spreadsheet being used to compare against Salesforce contained incorrect formulas, which produced the inaccurate numbers.

Salesforce tends to be scrutinized more heavily and is quick to receive the blame when things go wrong, probably because it’s an expensive tool that users don’t initially want to use.

So, how do we place trust in the data and Salesforce?

Users & Management Need to Trust YOU

First and foremost, everyone in your organization needs to trust you. As the subject matter expert, you need to be able to answer questions eloquently and put everyone at ease. Do this by listening to users and providing quick, effective feedback. Give awesome user training and always be available to those who need help.

Offer white-glove treatment to the executive team. Don’t sugarcoat anything, and take responsibility when things go wrong. You are the leader in this strange world, and leaders need to inspire the troops.

When the organization can trust you as a person based on their interactions with you, you’ll notice that everything runs more smoothly. You’ll be given grace to actively make things right. Take that opportunity to fix the issues around your company’s dirty, untrustworthy data.

Michael Hyatt, a great author, blogger and speaker, provides four steps to build (or rebuild) trust with others.

1. Keep your word
2. Tell the truth
3. Be transparent
4. Give without any strings attached

While these are not the only steps one should take to build trust, they provide a great starting point.

Safeguard & Clean the Data

Part of the job of a System Administrator is to manage data integrity in Salesforce. This means that part of our regular activity should be to update, translate, format, scrub, standardize and normalize (or whatever term you want to use) the data. But before regular cleaning begins, safeguards should be put into place.

Before jumping into this step, it’s important to have a conversation with stakeholders and end users to understand what dirty data is and what it means to the company. Every company’s definition of dirty data is different, based on its business processes and needs. Here are some questions to ask in order to understand where dirty data lives and how to clean it up.

1. What is the company’s definition of dirty data? – Before dirty data can be fixed, it needs to be defined. This should be a conversation with the key stakeholders responsible for the people doing data entry or for the company processes that live in Salesforce.

This is an important step because there may be legitimate business reasons for something like duplicate records. If that reason cannot be fixed or addressed through process improvement, then duplicate records should be excluded from the definition.

2. Where does the dirty data exist? – Using the business definition of dirty data, evaluate Salesforce to understand where this data lives. More than likely you’ve already done some of this analysis, but you may be surprised by how many areas of the system house poor data. The goal is to map where the dirty data exists so that the broken processes behind it can be fixed.

3. What needs to be done to fix this dirty data? – Now that there is an understanding of where the dirty data resides and the processes that may be perpetuating the problem, you can begin to evaluate the solution. This will include not only putting safeguards into place, but also the actual act of cleansing the data, training users and more.

Create and document a plan to fix the data before executing. This written plan should then be shared with stakeholders and management to help them see the severity of the issue and the strategic approach for fixing it (which builds trust).

Now that there is a plan in place, the real hands-on work can begin. While it may be tempting to jump straight into fixing the actual data, the first step is to plug the holes creating the dirty data. If duplicate management or prevention is a top priority and an area that needs fixing, first get a framework set up in Salesforce that will prevent users from creating duplicate records. This patches that hole and allows you to fix the data itself without worrying about more dirty data pouring into the system.
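To size the problem before configuring any duplicate or matching rules, a quick aggregate query can count how many records share the same key field. The sketch below is anonymous Apex against the standard Lead object and Email field, chosen purely for illustration; substitute whatever object and field your own definition of a duplicate is based on.

    // Anonymous Apex sketch: count how many email addresses appear on more than one Lead.
    // Lead and Email are standard; swap in the object/field that defines a duplicate in your org.
    List<AggregateResult> dupes = [
        SELECT Email, COUNT(Id) dupeCount
        FROM Lead
        WHERE Email != null
        GROUP BY Email
        HAVING COUNT(Id) > 1
    ];
    System.debug(dupes.size() + ' email addresses are attached to more than one Lead.');

Running the same check again after the prevention framework goes live also gives you a simple before-and-after number to share with stakeholders.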

Validation rules are a heavily overlooked feature which, when implemented correctly, can ensure the accuracy of data on a record by preventing conflicting values, or by requiring that certain fields be populated before a record is saved under a specific condition. Dependent picklists are another great way to ensure accurate data. By reducing the number of picklist values through dependencies, users are more likely to select a value that is accurate and meaningful to them and to other users.
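For example, the error condition formula below would block saving an Opportunity as Closed Lost unless a reason has been captured. It is a sketch only: StageName is a standard field, while Loss_Reason__c is a hypothetical custom picklist used purely for illustration.

    /* Validation rule error condition (Opportunity): block the save when the
       deal is Closed Lost but no loss reason has been recorded. */
    AND(
        ISPICKVAL(StageName, "Closed Lost"),
        ISBLANK(TEXT(Loss_Reason__c))
    )

Paired with a clear error message on the Loss Reason field, a rule like this enforces the data standard at the moment of entry rather than during a cleanup pass months later.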

Add Context to Data

Cleansing data is not a quick process, nor is it a one-time process. Even after the holes are patched and the database is cleaned, the critical data points will need to be monitored and regularly cleansed. During this ongoing process, users will still need to leverage the data in Salesforce to help make decisions. This is where adding context to the data comes into play.
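One lightweight way to monitor those critical data points is a recurring check, whether a scheduled report or a quick query like the anonymous Apex sketch below, that counts records violating the agreed-upon definition of clean. Opportunity, IsClosed and Amount are standard fields chosen for illustration; substitute the fields your own definition cares about.

    // Anonymous Apex sketch: a simple recurring data-quality check --
    // how many open Opportunities are missing an Amount?
    Integer missingAmount = [
        SELECT COUNT()
        FROM Opportunity
        WHERE IsClosed = false AND Amount = null
    ];
    System.debug(missingAmount + ' open opportunities have no Amount.');

Trending that number over time is also an easy way to show leadership that the cleanup effort is working.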

I’ve mentioned before that the really great Salesforce Administrators don’t just run the report and hand over the data. Instead, they provide context which helps the person viewing the data understand what they are looking at. When the database is extremely dirty, contextual information is vitally important.


Explain what this data shows and why. What does it include or exclude and what should be taken into consideration when evaluating the data?

For example, suppose an executive asks for an opportunity report showing won deals in the last six months, and you know reps are not updating the Close Date promptly, so it tends to be off by approximately three months. That caveat should be provided along with the report results. In addition, include a note explaining how you are fixing the issue and when you expect it to be fixed.
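For concreteness, that report boils down to a query like the anonymous Apex sketch below (the fields are standard; the report itself is hypothetical). The caveat lives in the date filter: if CloseDate lags reality by roughly three months, the six-month window is quietly including and excluding the wrong deals, which is exactly the context the executive needs.

    // Anonymous Apex sketch: the query behind the hypothetical "won deals, last six months" report.
    // CloseDate is the field reps update late, so the date filter is the caveat to call out.
    List<Opportunity> wonDeals = [
        SELECT Name, Amount, CloseDate
        FROM Opportunity
        WHERE IsWon = true AND CloseDate = LAST_N_MONTHS:6
    ];
    System.debug(wonDeals.size() + ' won deals fall inside the six-month window.');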

Providing this type of information continues to build trust while providing the decision maker with all of the caveats of the data so that an informed decision can be made.

Simplify, Train, Repeat

Every process that is generating dirty data should be scrutinized to determine why. The ultimate goal should be to simplify the process and/or the page layout of the specific records used in that process. This ensures that users have everything they need to complete the task successfully without being overwhelmed, since overwhelm itself produces dirty data.

Humans can only process so much information at a time before feeling overwhelmed. This is called cognitive overload, and it sits within an entire psychological theory called cognitive load theory.

Cognitive load theory has been designed to provide guidelines intended to assist in the presentation of information in a manner that encourages learner activities that optimize intellectual performance. – John Sweller

This article does a great job of explaining the elements and application of cognitive load theory to schoolwork, but it can easily be extrapolated to any other situation, including users working in Salesforce.

With this theory in mind, we can begin to remove fields and field values from objects and page layouts to reduce the likelihood of cognitive overload, which leads to skipped fields and inaccurate selections. Users shouldn’t feel overwhelmed in Salesforce. Seek to understand the minimum information needed on a record and work to get the page layout showing just those fields and values.

Page layout design is also an important factor in cognitive overload. Fields should be arranged in a logical order, and the layout itself should be scannable and easy to read. Leverage sections and blank spaces to design a layout that works for your users.

The last point is one of the most important in the entire post: ongoing user training. Training cannot be overlooked. Regardless of how long users have been with the company or how long a process has existed, consistent, clearly defined training is the only way that data and processes will stay clean.

Provide training the way that users want to consume it. This may mean that you are producing multiple types of training (such as video, webinar, hard-copy, etc.) for a single topic. But if users are leveraging the materials and learning along the way, the time it takes to produce each of these training materials is well worth the investment.

Dirty data is a killer. Let’s prevent it from killing our orgs and frustrating users. Let’s build trust in data.

Perhaps you had a revelation after reading this post, or you have had some experience with this topic. Leave a comment below to share your experience or ask a question.
