By now, hopefully it’s clear that developing a Legal Vendor Cyber Risk Management (LVCRM) program to evaluate your legal vendors should be a priority as part of maturing your overall legal operations. In this section, we discuss the decisions you’ll make as you supplement or augment your existing capacities in this regard.
Most legal operations groups lack the capabilities and tools needed for LVCRM, yet they are responsible for most, if not all, of the diligence process. To overcome this challenge, think critically about the balance between internal resources, desired outcomes and the risks being addressed. This part of the LVCRM Best Practices Guide should set the stage for decisions on how much external support to acquire, as very few – if any – legal operations departments can handle this process on their own and still meet best practice standards.
Assess Your Situation
Before setting out on this journey, you should take stock of your existing posture, establish a solid understanding of internally available resources and fully comprehend the costs and long-term implications of engaging any internal resources.
Be sure to consider the following areas when approaching a build or buy decision:
- Capacity: Do you have sufficient resources to dedicate to the process, including the nonobvious time requirements around communication, content creation, and engaging internal and external stakeholders? Supplementing your LVCRM by choosing a vendor that offers managed service support will allow you to better balance the competing priorities faced by your internal resources.
- Expertise: Cyber risk assessments require technical cyber security expertise. In many cases, some of this capacity can come from internal enterprise information security resources, but these resources often come with restrictions that may be a nonstarter for legal operations professionals. Legal operations are unlikely to have cyber security subject matter experts (SMEs) on staff. People with these skills are in high demand, making them difficult to find and expensive to secure. Consider these long-term, fully burdened costs as you weigh the decision to hire or contract these skilled professionals.
- Tools: Performing LVCRM at any scale will require support from one or more technology tools. Rudimentary or small-scale programs can be run in a mostly manual fashion using standard business tools such as email, spreadsheets and perhaps a single shared repository, such as SharePoint. Beyond more than a couple dozen vendors, however, you will likely need tools to help with the creation, distribution, collection, validation, analysis, storage and tracking of vendor cyber risk efforts. If these tools are not already in place, you need to consider the acquisition, maintenance, performance and other elements of any tools you decide to acquire to either supplement your team or provide outright augmentation.
What It Takes
Building an LVCRM program requires you to secure not only people but also application-specific technology and expertise. Regardless of whether you build the program in-house or secure external support, you must address the core elements of building this program. These elements generally fall into one of two cost areas:
- Startup Costs
Elements you need prior to beginning any cyber risk assessments.
- Operating Costs
Elements you need as your assessment program continues to operate.
While these core elements should be present in your program regardless of how much of your program is run in-house, the decision to operate internally will impact not only the actual cost of these core elements but also who bears the cost. Keep this in mind as you consider each of the following LVCRM components.
- Assessment: The underlying core of any LVCRM program is the assessment itself. Everything else – any software for handling workflow around the assessment, any analysis that comes out of the assessment, or any other downstream remediation or risk decisions – is based squarely on this fundamental collateral. Many organizations have developed their own internal assessment questionnaire. Often, this has been done in an ad hoc manner over time, without a concerted effort to adhere to a recognized standard control framework, such as the National Institute of Standards and Technology (NIST) Cybersecurity Framework or Center for Internet Security (CIS) Controls.
Furthermore, these assessments often feature questions that can unintentionally complicate the process of both completing and reviewing assessments. Consider these common assessment question structures and their unintended consequences:
- Yes/No/Not Applicable Questions
Writing questions that are easily answered with these common responses can seem like a good approach because it limits the number of choices and leads to straightforward questions. Unfortunately, it also creates constraints that drive assessment creators to quickly balloon the number of questions or leave valuable details obscured because they don’t fit cleanly into a yes/no approach.
- Open-Ended Questions
As a response to the constraints of yes/no questions, it can be tempting to include open-ended questions that encourage a more narrative response. These require humans not only to write the questions but also to read and interpret the responses once they have been received, which significantly increases the time it takes both to respond to and to analyze assessments. It also creates opportunities for confusion when responses don’t fully or accurately address the question.
- Scaled Response Questions
While a rating scale can be incredibly helpful to better understand a vendor’s posture, presenting a set of choices without explicitly defining them can lead to confusion and miscommunication. What you may think qualifies as a 2 on the scale may be what a vendor considers a 4 on the same scale. Avoid these challenges by defining any answers against a known set of standards.
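One way to avoid scale ambiguity is to pair every numeric rating with an explicit, agreed definition. The sketch below is illustrative only: the maturity descriptions are invented examples, not drawn from any specific standard, but they show how a defined scale removes the guesswork from a vendor's "4."

```python
# Illustrative sketch: anchoring each scale value to an explicit definition
# so vendors and reviewers interpret ratings the same way. The maturity
# labels below are hypothetical, not taken from any published standard.

SCALE_DEFINITIONS = {
    1: "No control in place; no documented plans to implement one.",
    2: "Control partially implemented; no formal documentation.",
    3: "Control implemented and documented; not regularly reviewed.",
    4: "Control implemented, documented, and reviewed at least annually.",
    5: "Control implemented, documented, reviewed, and independently audited.",
}

def describe_response(question, rating):
    """Pair a numeric rating with its agreed definition for reporting."""
    definition = SCALE_DEFINITIONS.get(rating)
    if definition is None:
        raise ValueError(f"Rating {rating} is outside the defined scale 1-5")
    return f"{question}: {rating} - {definition}"

print(describe_response("Access control policy maturity", 4))
```

Because both sides reference the same definitions, a vendor's self-rating and your reviewer's interpretation stay aligned, and out-of-range answers are rejected rather than silently recorded.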
- Controls and Content: Because internally developed questionnaires are often drawn from contributions from different business units, they can feel disjointed and suffer from a lack of ownership with regard to the content (or, conversely, significant territorialism over the content, creating internal friction when making modifications). Updating content becomes a similarly difficult proposition, with the complexity of the exercise increasing exponentially as additional stakeholders are added.
- Software Tools: As mentioned earlier, the sheer scope of the assessment and vendor population will quickly out-scale manual processes such as email, spreadsheets and shared file storage. Modern software tools can help solve these issues of scale, but not all are created equal.
Some software tools handle only the workflow components and do not provide any assessments or support for assessment creation – these pieces of content must be developed independently and brought to the platform.
Some software tools handle only certain parts of the risk management life cycle (namely Collect and potentially Validate), leaving your team to struggle to analyze, remediate and monitor with either a manual process or another software solution. Other software solutions are built for more general-purpose governance, risk and compliance support – making them too bulky to gracefully tackle a vendor risk management problem – or require high levels of commitment to custom module development, training or both.
In selecting the software, look for a solution that:
- Is tailored to your needs;
- Supports the security standards you’re seeking to evaluate your vendors against;
- Requires little to no custom development or platform-specific training; and
- Supports as many of the vendor risk management life cycle phases as possible.
While there may be no such thing as a perfect solution, those built to address your problems will perform better than those built for another purpose.
- Process Support: As you engage your vendors, they will inevitably have many questions about the assessment process. Often these questions begin before you even distribute the first assessment, as your vendors learn of your intention to begin an assessment process. Without external support in the form of consulting or managed services, your team will handle each of these questions, so ensure that you have the capacity to answer questions ranging from the premise of the exercise to exceptions or exemptions to technical details of the assessment content. Clear and consistent communication, combined with strong expectation setting, will make the process go more smoothly, but there will always be hiccups. A solid dose of empathy when working with your vendors can help smooth out these bumps in the road.
In addition to the headcount necessary to support the communication around these efforts, consider developing a knowledge base of standardized responses to common questions. This will ensure consistency and fairness across your program and reduce response times and levels of effort. More advanced solutions may contain metrics around this service desk style approach and may even leverage software specifically built to handle these issues, with tickets, tracking, knowledge base support and additional functionality. If these capabilities are not presently available in your enterprise, consider acquiring them or acquiring a managed service provider that can provide them to your vendors on your behalf.
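A knowledge base of standardized responses can be as simple as keyword-matched entries. The minimal sketch below is a hypothetical illustration (the questions, keywords and answers are invented); a real service desk tool would add tickets, tracking and metrics on top of this lookup.

```python
# Minimal sketch of a knowledge base of standardized answers to common
# vendor questions, keyed by keywords. All entries are invented examples;
# a production service desk would add ticketing, tracking, and metrics.

KNOWLEDGE_BASE = [
    {"keywords": {"deadline", "due", "extension"},
     "answer": "Assessments are due 30 days after distribution; extensions are granted case by case."},
    {"keywords": {"scope", "exempt", "exemption"},
     "answer": "All vendors handling client data are in scope; exemption requests go to the program lead."},
    {"keywords": {"format", "portal", "submit"},
     "answer": "Responses are submitted through the assessment portal, not by email."},
]

def standard_answer(question):
    """Return the standardized answer whose keywords best match the question,
    or None if no entry matches (signaling a human should respond)."""
    words = set(question.lower().split())
    best = max(KNOWLEDGE_BASE, key=lambda entry: len(entry["keywords"] & words))
    return best["answer"] if best["keywords"] & words else None

print(standard_answer("Can we get an extension before the due date?"))
```

Routing every vendor question through a shared lookup like this keeps answers consistent and fair across the program, and the unmatched (`None`) cases identify which new questions deserve a standardized entry.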
- Outcomes and Remediation: In addition to handling the content of the assessment, communication regarding the outcome of the assessment and any required or requested remediation will add additional burden to your team. Consider looking for this communication capability within a software solution or managed services support that will help to track and communicate these needs. In many ways, the long tail of these components can be more burdensome than questions regarding the assessment itself because each remediation plan is tailored to a specific vendor and carries its own activities, timelines and monitoring needs. At scale, managing these open threads can quickly overwhelm even the most dedicated team.
Given the large number of vendors in need of assessment and the often limited time and resources for corporate legal departments to complete their risk assessments, it can be tempting to leverage existing reports and resources to help cover more of this ground more quickly. Engaging with these functions can have great value but can also present unique challenges.
Internal Security Teams
Many large enterprises have equally large enterprise information security operations, often performing similar cyber risk assessment functions. Unfortunately for corporate legal departments, collaborating with these internal resources is not always as easy as one would hope. In some instances, engaging these internal resources requires participation in a procurement process that simply does not work from the legal operations perspective. In other cases, the scope of the cyber risk assessments available through internal channels is either too large or too small. Internal resources also tend to suffer from a challenge of velocity, with risk assessments frequently taking somewhere between six weeks and six months to fully complete. Navigating these challenges can be so difficult, time-consuming and costly that obtaining your own risk assessment capacity often makes better business sense, but this does not mean that you should leave your internal cyber colleagues completely in the dark.
Engaging these internal enterprise security teams can have significant benefits. One strong way to offer a path forward is to work with your internal resources to ensure that their highest priority risk areas are addressed in the process that you are building. This may take the form of a minimum set of controls expected to be in place with third-party vendors or a certain set of questions addressed within the process.
Certifications and Accreditations
In an admirable effort to standardize controls and validate their implementation, several security-specific attestations are available in the market, and you have doubtless already encountered them. Chief among these is the ISO/IEC 27001:2013 certification, typically performed by a third-party auditing firm on behalf of a vendor. This certification standard consists of a systematic representation of an organization’s information security practices, as evaluated against the International Organization for Standardization (ISO) control set and validated by an independent certification authority or auditor. Achieving this certification is not a small undertaking and often represents significant effort in terms of both time and financial resources. That said, it has a couple of elements that make it difficult to rely on as the sole source of truth for a meaningful cyber risk implementation.
First, the breadth, depth and cost of the ISO certification process can be prohibitive for many mid-sized and smaller vendors. A recent study indicated that only 9% of all law firms have achieved this ISO certification, meaning that you’re left to do your own assessment on the remaining 91% of firms.
Second, the ISO certification process is heavily dependent on the scope of the process. Clearly understanding what is in scope for a given assessment will help you better determine where any gaps in coverage may exist. For example, it is not uncommon for an ISO 27001 certification to be limited to internal systems, applications and services. It may not cover external services, including web-based Software-as-a-Service solutions and contract employees (including attorneys). Assessing these out-of-scope elements is left to you.
- System and Organization Controls 2
Developed by the American Institute of Certified Public Accountants, the System and Organization Controls (SOC) 2 audit is a comprehensive assessment of the system-level controls of a service organization (as opposed to SOC 1, which focuses on financial controls and reporting). The SOC 2 audit is available in both Type 1 and Type 2: Type 1 reviews an organization’s controls against the trust services criteria, and Type 2 includes the same coverage as Type 1 and also tests those controls to validate their implementation.
Similar to the ISO certification discussed earlier, SOC 2 Type 2 reports are developed to be thorough assessments of an organization’s security posture but also have shortcomings. They are conducted by an independent third party and offered as a proxy report for overall security posture. As with ISO, scope can be a significant area of concern when reviewing a SOC 2 Type 2 report. Anything not explicitly included in the scope of the report should be considered out of scope and thus completely unaccounted for in its contents. Perhaps this is not a high-risk consideration for your relationship with a given vendor because all your interactions are covered under the scope of the SOC 2 Type 2 report. If not, however, you must conduct your own assessment of out-of-scope items.
In addition to the scope challenges, SOC 2 Type 2 reports are frequently written in close collaboration with the organization, often to the point where any potential findings of risk or areas of concern are minimized or excluded. Indeed, it is rare to find SOC 2 Type 2 reports that contain any negative findings. If they do, the findings tend to be inconsequential, appearing only as token content for that section of the report.
Generally, both ISO 27001 certification and SOC 2 Type 2 reports should be considered useful but not sufficient. Unless your entire relationship with the vendor in question is addressed in the scope of the certification or report, additional risk assessments will be necessary to diligently address areas of concern.
- Shared Assessments Standardized Information Gathering
These challenges often drive organizations to consider premade and readily available assessment vehicles, such as the Shared Assessments Standardized Information Gathering (SIG) questionnaire. The SIG, as it’s commonly known, is not as standardized as it might seem. To allow organizations to create more tailored assessments, Shared Assessments has introduced the ability to scope and tailor questionnaires beyond the three pre-scoped SIGs already offered – Lite, Core and Full. Because of this, there is less and less overlap between the assessments as organizations mix and match questions from the various pre-scoped parts, resulting in gaps between what vendors may have previously answered and what an organization is looking to have vendors complete. This often defeats the reusability purpose of the SIG, and the only way to be prepared to respond to any question is for vendors to complete the Full SIG, which comprises over 1,400 questions.
Furthermore, the SIG assessments include several of the assessment question structures, mentioned earlier, that can make interpreting results more complicated. Namely, much of the SIG relies on a Yes/No/Not Applicable response, followed by a scaled response for questions answered in the affirmative. Some SIG questions are mapped to standards, such as the NIST Cybersecurity Framework or CIS Controls, but others are not, further compounding the difficulty of understanding what the answers mean against your chosen standard. Because of the SIG’s standardized nature and licensing agreements, organizations cannot modify SIG questions to better meet their needs, which can limit flexibility and scope.
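When only some questions map to your chosen standard, a simple coverage check can show where an assessment leaves gaps. The sketch below uses the five NIST CSF 1.1 function names, which are real, but the question IDs and their mappings are invented for illustration.

```python
# Hedged sketch: mapping questionnaire items to a chosen control framework
# (here, the five top-level NIST CSF 1.1 functions) to expose coverage gaps.
# The question IDs and mappings below are hypothetical examples.

NIST_CSF_FUNCTIONS = {"Identify", "Protect", "Detect", "Respond", "Recover"}

QUESTION_MAPPING = {
    "Q1": "Protect",   # e.g., encryption at rest
    "Q2": "Protect",   # e.g., access control
    "Q3": "Detect",    # e.g., log monitoring
    "Q4": None,        # unmapped question: its meaning against the standard is unclear
}

def coverage_gaps(mapping):
    """Return the framework functions that no mapped question addresses."""
    covered = {fn for fn in mapping.values() if fn is not None}
    return NIST_CSF_FUNCTIONS - covered

print(sorted(coverage_gaps(QUESTION_MAPPING)))
```

With the sample mapping above, Identify, Respond and Recover come back uncovered, signaling areas where supplemental questions would be needed; unmapped items such as Q4 contribute nothing to coverage, which is precisely the interpretation problem described above.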
Evaluating the landscape of third-party risk tools and services can be overwhelming. Each one claims a meaningful differentiation from the others, yet they can appear very similar once you review each one’s website or marketing sheet or talk to their representatives at a conference booth. In the next section, we’ll discuss the three pillars of people, process and technology to help evaluate which partners will be the best fit for your team.