Reproduced from a July 2017 article on logikcull.com
Almost all the data in your possession is ROT: “redundant, obsolete, or trivial” data that rarely serves a purpose. According to some estimates, 70 percent or more of stored data is ROT. Yet many organizations hold on to useless data for extremely long periods, at significant cost, often because it can be difficult to tell the proverbial “baby” from the “bathwater.” Throw out both, and you could find yourself facing stiff regulatory repercussions, spoliation sanctions, or worse.
Balancing these competing imperatives is rarely straightforward. Many regulations require preservation of a wide range of information while others, such as Europe’s “right to be forgotten,” demand data’s erasure. In the realm of litigation, large organizations can face dozens, even hundreds, of legal holds at any time, while the threat of pending litigation can force even the smallest business to preserve large amounts of data years before a suit is filed.
Finally, the massive growth of data, driven by the proliferation of mobile devices, the Internet of Things, and technology’s growing ability to monitor, catalog, and retain more and more information, means that these issues won’t be going away anytime soon.
To get an idea of how the experts are tackling these issues, Logikcull recently spoke to John Isaza, a partner at Rimon Law. The head of that innovative firm’s information governance and records management practice, Isaza is recognized as a leader on records and information management, eDiscovery, and legal holds. In addition to his practice, Isaza has also created software, called Virgo, to aid organizations in retention compliance across over 200 global jurisdictions.
In our conversation, a lightly edited version of which follows, Isaza walks us through two case studies involving some of the biggest data challenges today: defensible data disposition and the treatment of “big data.”
Logikcull: Given your experience in the field, how would you describe the state of data preservation today?
Isaza: Here, I have a good historical perspective because I’ve been in this space since 2001, even though I’ve been a lawyer since the early 1990s. In 2001, we were still dealing with records in the traditional sense of the word: paper storage. This was also when we started transitioning into the electronic environment for management of records. So, for several years after, the industry struggled with trying to retain records in the old sense of the word in this electronic environment.
What’s been happening over the last five years is that there’s been a complete paradigm shift from a focus on records to preservation of information.
Now, my clients are increasingly focused on figuring out how to manage all this information for the organization, with the ultimate goal of retaining what they must keep under regulatory requirements while balancing what they must dispose of under global requirements such as the General Data Protection Regulation, effective May 2018, which requires you to dispose of information containing private data once it’s no longer needed.
“It is definitely an industry in transition and we are right in the epicenter of it right now.”
The industry right now is trying to figure out how to balance retention of big data with global requirements around disposition of data that is redundant, obsolete, trivial, or contains private information. So, it is definitely an industry in transition and we are right in the epicenter of it right now.
Logikcull: And both over-broad preservation and poor preservation can have significant consequences…
Isaza: I’ll give you a great example.
The GDPR requires you to have policies and processes in place around the protection of information and the management of that protected information. A violation under the GDPR of those requirements could result in a sanction that’s as high as four percent of your global gross revenue—which is a huge number. For many organizations, that could be their entire profit margin in a given year! So we are really concerned about making sure that we are going to be compliant with those kinds of requirements while also recognizing that we have businesses to run. At the same time, we have the ability to make ourselves much more efficient by mining big data; so, we are very cognizant of that.
We have to devise policies, procedures, and processes that can meet those requirements, which carry such high potential sanctions, while also meeting the organization’s need to stay competitive in the market.
Logikcull: What are the main drivers around information governance and data preservation? Is it regulations such as GDPR, the threat of litigation and possible sanctions, or other considerations?
Isaza: It’s actually driven by business needs, first and foremost, because the business needs to decide what records or information it needs, at the end of the day, to stay afloat and competitive.
In addition to that, it’s driven by regulatory requirements. There are literally thousands of them. This is why I have a software company devoted to this issue; it tells you specifically not only how long you need to retain certain kinds of information, but also the format, location, storage, and privacy requirements, etc. There are prescriptive regulatory requirements driving all this.
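To make the shape of such prescriptive requirements concrete, here is a minimal, purely hypothetical sketch in Python of how a retention requirement might be represented and compared across jurisdictions. This is not Virgo’s data model or any real regulation; the field names, citations, and periods are invented for illustration.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class RetentionRequirement:
    record_type: str          # e.g., "payroll records"
    jurisdiction: str         # e.g., "US-CA", "DE"
    citation: str             # the rule imposing the requirement (illustrative here)
    retention_years: int      # minimum retention period
    format_rules: str         # e.g., "electronic copy acceptable"
    privacy_rules: str        # e.g., "contains personal data; dispose once no longer needed"

def longest_retention(requirements: List[RetentionRequirement]) -> int:
    """When several jurisdictions cover the same record type, schedules
    typically key off the longest applicable retention period."""
    return max(r.retention_years for r in requirements)

# Illustrative entries only; the citations and periods are placeholders.
payroll_reqs = [
    RetentionRequirement("payroll records", "US-CA", "illustrative state requirement", 4,
                         "electronic copy acceptable", "contains personal data"),
    RetentionRequirement("payroll records", "DE", "illustrative EU-member requirement", 10,
                         "original or certified copy", "contains personal data"),
]
print(longest_retention(payroll_reqs))  # -> 10
```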
When it comes to over-retention, with the changes in the Federal Rules of Civil Procedure in 2006 and now, more recently, in 2015, there’s always been a concern about discovery because if the data exists, it’s discoverable. So obviously it would behoove organizations to get rid of information that they don’t need, which, out of context, could wind up presenting issues for the organization.
The third and final driver these days is the GDPR and similar global regulations, which in essence are saying: “We have the right to be forgotten. We want all the data gone from your systems. We don’t want all that private information in your systems because we, Europe, don’t trust you, the U.S. company, to manage our data.”
So it is a combination of these factors that results in the current tempest.
Logikcull: Speaking of the 2015 amendments to the Federal Rules, we’ve been doing a series of interviews with “eDiscovery judges” and have noticed something of a split between judges with regard to the amendments and preservation. Magistrate Judge Maas from the Southern District of New York, for example, told us that when he speaks in front of a corporate audience, he advises against over-preservation: companies don’t need to keep this stuff around, and it will only become a problem later on when litigation arises.
But we also spoke to Judge Scheindlin, likewise of the S.D.N.Y., and she said that she didn’t expect the 2015 amendments to make much of a difference in how companies preserve ESI or to change their behavior around preservation, at least with regard to the language about proportionality in preservation. She mostly anticipates that people will keep preserving broadly, particularly if there’s any anticipated threat of litigation, which could come long before an actual lawsuit is filed.
How do you see these amendments impacting your clients?
Isaza: I think that the amendments are giving us more room to design policy that’s defensible around disposition. I tend to be more in line with Judge Maas. I have presented with Magistrate Judge Ronald Hedges, Judge Facciola, and other judges who have taken the position of: “What the heck is that data doing there?”
So, I respectfully disagree with Judge Scheindlin. I’m hoping that she’s not right, because I think there’s a lot of opportunity for abuse in discovery. Organizations need to be more proactive to prevent that sort of abuse. The rules are now making it harder to be abusive in that space while at the same time they’re giving us the freedom to design policies that allow our clients to dispose of data that is really not necessary. The prime example would be data that’s purely for disaster recovery, which you should be able to dispose of routinely without fear that its disposal runs afoul of your legal hold processes.
“The rules are now making it harder to be abusive in that space while at the same time they’re giving us the freedom to design policies that allow our clients to dispose of data that is really not necessary.”
Logikcull: Which is something you have experience doing. At LegalWeek West last month, you presented a case study on data disposition. Can you give us a quick overview of that process?
Isaza: In that situation, we had 45,000 backup tapes that needed to be disposed of. They were costing, in hard dollars alone, about $25,000 a month to retain. Plus there were all the other costs associated with the physical retention of that information, such as making sure people stayed trained and making sure there were ways to translate the data if it was ever needed.
In my opinion, it was a complete waste of money because they were, primarily, pure disaster recovery tapes. The Federal Rules of Civil Procedure were intended, under the 2006 rule, to give you a safe harbor. In 2015, it’s no longer called “safe harbor,” but that concept is implicit. I think it’s in the committee notes that for anything that’s purely for disaster recovery, and that’s otherwise retained within the organization in other systems, there’s no reason to keep that redundancy year-round.
This was, to me, an example of low-hanging fruit in terms of being able to dispose of information that wasn’t necessary. That’s an oversimplified view of the complexity of the analysis we did for that client. There were other factors involved, such as whether they knew the tapes were purely disaster recovery tapes, whether the data contained anything unique that wasn’t available elsewhere, what types of legal holds they had in place, how broad those holds were, in how many different states, etc. So it was a very complex analysis, but the bottom line is that it is, believe it or not, low-hanging fruit by comparison to other big data retention issues.
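For readers who want a feel for how that kind of screening might be structured, here is a minimal sketch in Python of the disposition factors described above: purely disaster recovery, duplicated elsewhere, and not subject to a legal hold. The class and field names are hypothetical, and a real analysis involves counsel and far more nuance than three booleans.

```python
from dataclasses import dataclass

# Hypothetical sketch of the screening factors described in the interview.
@dataclass
class BackupTape:
    tape_id: str
    disaster_recovery_only: bool        # created solely for disaster recovery?
    content_duplicated_elsewhere: bool  # same data retained in live systems?
    subject_to_legal_hold: bool         # does any hold, in any jurisdiction, touch it?

def eligible_for_disposition(tape: BackupTape) -> bool:
    """A tape is a disposition candidate only if every screening question clears."""
    return (tape.disaster_recovery_only
            and tape.content_duplicated_elsewhere
            and not tape.subject_to_legal_hold)

tapes = [
    BackupTape("T-00001", True, True, False),
    BackupTape("T-00002", True, False, False),  # unique content -> keep for review
    BackupTape("T-00003", True, True, True),    # on hold -> keep
]
print([t.tape_id for t in tapes if eligible_for_disposition(t)])  # -> ['T-00001']
```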
Logikcull: What lessons do you have for smaller firms and businesses that might be hanging on to data? It might not be data on the same scale as what you work with normally, but they’re wondering how long they have to keep things around. What suggestions would you give them?
Isaza: I do have concerns about smaller to medium-sized companies not having the resources to develop a more robust program around these issues. Yet their liability and their exposure are still just as high. Under the GDPR, for example, a sanction of four percent of global gross revenue could sink a small-to-medium-sized organization entirely, depending on the level of information it’s managing. If you’re a small practitioner in the healthcare sector, for example, any time there’s a potential breach to your data, it’s a huge problem that could be enough to make you go under.
“[I]f you’re a small practitioner… any time there’s a potential breach to your data, it’s a huge problem that could be enough to make you go under.”
One of the reasons I developed my software was to give smaller and medium-sized organizations access to the retention requirements they need. For me, the retention requirements, or your records retention schedule, are your first line of defense; you need to know what you have to keep so that you can then decide what you can dispose of on a more routine basis.
Along those lines, we recommend an information lifecycle model that allows you to retain information in three different states. One of them is temporary, say, for example, when something first goes into your inbox. The next is work-in-progress, where you have given yourself maybe three years to determine whether something is going to rise to the level of critical information or a record. The third is the portion that needs to be preserved in accordance with retention requirements to meet the business needs of your organization, which is going to be no more than five percent of all your data.
That lifecycle model would allow small-to-medium-sized organizations to dispose of 95 percent of their data within a specified time period.
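As a rough illustration, the three-state model described above might be modeled along these lines. This is a sketch only: the three-year work-in-progress window comes from the interview, while the names and the decision logic are invented for illustration.

```python
from datetime import date, timedelta
from enum import Enum
from typing import Optional

# Sketch of the three lifecycle states described in the interview; names are illustrative.
class LifecycleState(Enum):
    TEMPORARY = "temporary"                # e.g., something just arrived in an inbox
    WORK_IN_PROGRESS = "work_in_progress"  # under evaluation, roughly three years
    RECORD = "record"                      # kept per the records retention schedule

WIP_WINDOW = timedelta(days=3 * 365)  # the ~three-year evaluation window from the interview

def next_action(state: LifecycleState, created: date, declared_record: bool,
                today: Optional[date] = None) -> str:
    """Return a suggested next step for an item under the three-state model."""
    today = today or date.today()
    if state is LifecycleState.RECORD or declared_record:
        return "retain per the records retention schedule"
    if state is LifecycleState.TEMPORARY:
        return "triage: promote to work-in-progress or dispose"
    if today - created > WIP_WINDOW:
        return "dispose: never rose to the level of a record"
    return "keep in work-in-progress pending evaluation"

# Example: a four-year-old work-in-progress item that was never declared a record.
print(next_action(LifecycleState.WORK_IN_PROGRESS, date(2013, 6, 1), False,
                  today=date(2017, 7, 1)))
# -> dispose: never rose to the level of a record
```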
We are trying to develop this into programs that organizations can use without necessarily having to go out and hire a team of lawyers, though I have to tell you that I do have small clients who keep me on retainer to deal with these issues. It’s unfortunately just a cost of doing business for them.
Logikcull: What do you see ahead in terms of information governance and data preservation and retention? I know that you’ve been writing about big data and social media. How do you see those areas coming into play?
Isaza: I have a book coming out in August, published by the American Bar Association, a “Handbook on Global Social Media Law for Business Lawyers.” Basically, what we’re doing is identifying the “gotchas” in social media that impact every organization. The book covers issues in human resources, intellectual property, ownership of data, and governance around social media. We’re even dealing with issues such as fake news and the effect that has on organizations. Then we’re taking those U.S. considerations and morphing them into considerations for other parts of the world, like Europe, Asia, Latin America, and even Russia. We’re even dealing with the possibility of cyberterrorism being used in social media.
Social media is one of those platforms where organizations need to keep their eyes wide open as they enter it. When they engage in social media, they need to make sure that they have policies and procedures that inform employees about what’s proper and what belongs to them versus what belongs to the organization.
From a corporate standpoint, we address what sort of liabilities the organization exposes itself to by having a presence on social media, which then results in records retention requirements.
With regards to big data, what we’re trying to do these days is create buckets in the records retention schedule that allow the organization to develop a retention program for what’s called “big data.” That would be information in your systems whose use you don’t really know just yet, but that you know you’re going to need to mine.
So, we’re trying to figure out a way to develop programs or buckets in your retention schedule that will allow you to keep that information and dispose of everything else that’s redundant, obsolete or trivial that wouldn’t otherwise have a value to the organization. To me, that’s a big challenge for organizations right now—figuring out a way to carve out buckets for that big data that should be retained within their data warehouses or any of their systems.
“To me, that’s a big challenge for organizations right now—figuring out a way to carve out buckets for that big data that should be retained…”
Logikcull: Is there a contradiction there? Big data is all trivial until you know what to do with it, right? How do you go about deciding what unstructured data might be useful to preserve for the future versus data that is not going to be useful? Is that a case-by-case analysis, depending on the needs of the business?
Isaza: What we’ve challenged our clients to do, and we’ve done successfully with many of our clients, is to say to them, “Okay, we’ve got all the business functions here in the room. I challenge you to tell me what big data you need.”
Here’s an example.
I had a client, a global manufacturing company, where we went through this exercise, and of all the functions represented in the room, only the people from finance and marketing came back saying they needed big data for something. So, we’re trying to get our clients to think proactively about what kind of big data they need rather than just having it be treated as trivial information.
In that instance, marketing said there was a certain amount of data that they needed, and they needed it for about three years. If it was older than three years, they didn’t use it anymore. That allowed me to then go back to them and say, “Within the systems of the organization, if we were to look at a data map of the organization, what systems contain the data that you need, so that I can carve out an exception for you?”
We did the same thing with the finance department. The finance department said they generally needed it for at least 10 years; anything older than 10 years they no longer used. So we were able to create a bucket for the finance department and have them identify within their data map what systems they would use to retain their big data.
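Here is a minimal sketch, in Python, of how those function-specific buckets might look once tied to systems identified on a data map. The three- and ten-year periods come from the marketing and finance examples above; the function and system names are purely hypothetical.

```python
from datetime import date, timedelta

# Hypothetical big-data buckets keyed by business function. The retention periods
# mirror the marketing (3-year) and finance (10-year) examples in the interview;
# the system names stand in for whatever the data map actually identifies.
RETENTION_BUCKETS = {
    "marketing": {"retention_years": 3, "systems": ["crm_analytics", "web_clickstream"]},
    "finance":   {"retention_years": 10, "systems": ["erp_warehouse", "transaction_archive"]},
}

def past_retention(function: str, data_created: date, today: date) -> bool:
    """True if big data owned by this function has aged out of its bucket."""
    years = RETENTION_BUCKETS[function]["retention_years"]
    return today - data_created > timedelta(days=365 * years)

print(past_retention("marketing", date(2013, 5, 1), date(2017, 7, 1)))  # -> True
print(past_retention("finance", date(2013, 5, 1), date(2017, 7, 1)))    # -> False
```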
That’s our approach right now. I’m not saying that it’s foolproof, but it’s certainly a step in the right direction.
This post was authored by Casey C. Sullivan, who leads education and awareness efforts at Logikcull. You can reach him at [email protected] or on Twitter at @caseycsull.