Thursday, April 29, 2010

Blame the Bullets, not PowerPoint

Blog readers probably know I am not a big fan of PowerPoint presentations. I sympathize with many points in the recent article We Have Met the Enemy and He Is PowerPoint, which resurrects the December 2009 story by Richard Engel titled So what is the actual surge strategy? I think it is important to focus, however, on the core problem with PowerPoint presentations: bullets.

Bullets are related to the main PowerPoint problem, which is having the medium drive the message rather than having the message drive the medium. When you create a PowerPoint presentation that relies on bullets to deliver a message, you essentially cripple the intellect of anyone attending the presentation.

I thought about this yesterday while listening to Johnny Cash. Let's imagine Johnny wanted to explain the devotion someone feels for his significant other. If his default thinking involved creating a PowerPoint presentation every time he wanted to communicate, the bullets might look something like this:

Title: Reasons I Walk the Line

  • Key points about me:


    • My heart: keep a close watch

    • My eyes: keep open

    • My "ends": "keep out for ties that bind"

    • Easy for me to be true

    • I'm a fool


  • Proof I'm true:


    • End each day alone

    • You're on my mind


      • Day

      • Night


    • Happiness

    • I'd turn the tide for you

    • I walk the line


  • Reasons you keep me true:


    • You've got "a way"

    • You "give me cause"

    • You're mine




Or, instead of delivering this disaster (which would probably take 5 minutes), Johnny sings "I Walk the Line" in 2 minutes 44 seconds. Which approach is more effective, efficient, and powerful? This doesn't mean we should all start singing when we need to deliver a message. Rather, determine the message first, and then select a medium. Don't lead with PowerPoint.

Saturday, April 24, 2010

Review of The Rootkit Arsenal Posted

Amazon.com just posted my five star review of The Rootkit Arsenal by Bill Blunden. I received this book last year but didn't get a chance to finish it until this week, thanks to several long plane flights. From the review:

Disclaimer: Bill mentions me and my book "Real Digital Forensics" on pages xxvi and 493. He sent me a free review copy of his book.

"Wow." That summarizes my review of "The Rootkit Arsenal" (TRA) by Bill Blunden. If you're a security person and you plan to read one seriously technical book this year, make it TRA. If you decide to really focus your attention, and try the examples in the book, you will be able to write Windows rootkits. Even without taking a hands-on approach, you will learn why you can't trust computers to defend themselves or report their condition in a trustworthy manner.

Snort Near Real Time Detection Project

I don't think many people noticed this story, but on Thursday Sourcefire Labs published A New Detection Framework on the VRT blog and an NRT page on their labs site. I had a small part in this development due to the Incident Detection Summit I organized late last year. Sourcefire sent an army of developers (I think they had the biggest contingent) to the conference and clearly enjoyed participating. During the event they spoke to participants from multiple security teams and had follow-up discussions with several of us.

One item we emphasized with Sourcefire was the need for analysis of file contents, not just network traffic. As Matt mentions in his latest post, Mike Cloppert and his team have used these approaches very effectively and have even published components of their work as open source projects like Vortex by Charles Smutz. In my NSM in products post last year I called this extracted content and listed it as one of the forms of NSM data.

What does this mean? The basic idea is that you extract content from network traffic, analyze it, record metadata, and so on, and then provide that information to a security analyst. That may sound like an anti-malware approach, but the idea is to provide indicators, not necessarily block transmission. In any case, Sourcefire published a presentation on their site on what their beta code can do. I'm really glad to see them working on this problem and sharing results in a form that interested parties can download and test.
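As a rough sketch of the extracted-content idea, imagine hashing each file pulled from network traffic and recording metadata as an indicator for the analyst, rather than blocking anything. The file name and payload below are invented for illustration; this is not Sourcefire's code.

```python
import hashlib
import json

def build_indicator(filename, payload):
    """Record metadata about one piece of extracted content.

    The output is an indicator record for a security analyst; nothing
    here blocks or modifies the transmission itself."""
    return {
        "filename": filename,
        "size": len(payload),
        "sha1": hashlib.sha1(payload).hexdigest(),
        "magic": payload[:4].hex(),  # first bytes hint at the file type
    }

# Pretend this PDF header arrived as content extracted from a session.
record = build_indicator("invoice.pdf", b"%PDF-1.4 ...")
print(json.dumps(record, indent=2))
```

An analyst (or an automated job) could then compare those hashes and magic bytes against known-bad indicators.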

Thoughts on New OMB FISMA Memo

I read the new OMB memorandum M-10-15, "FY 2010 Reporting Instructions for the Federal Information Security Management Act and Agency Privacy Management." This InformationWeek article pretty well summarizes the memo, but I'd like to share a few thoughts.

Long-time blog readers should know I've been writing about FISMA for five years, calling it a "joke," "a jobs program for so-called security companies without the technical skills to operationally defend systems," and other kind words. Any departure from the previous implementation is a welcome change.

However, it's critical to remember that control monitoring is not threat monitoring. Let's take a look at the OMB letter to see if we can see what is really changing for FISMA implementation.

For FY 2010, FISMA reporting for agencies through CyberScope, due November 15, 2010, will follow a three-tiered approach:

1. Data feeds directly from security management tools
2. Government-wide benchmarking on security posture
3. Agency-specific interviews


I wonder how long it will be before CyberScope is compromised.

Turning to the three points, what does #1 really mean?

Beginning January 1, 2011, agencies will be required to report on this new information monthly. The new data feeds will include summary information, not detailed information, in the following areas for CIOs:

• Inventory
• Systems and Services
• Hardware
• Software
• External Connections
• Security Training
• Identity Management and Access

So it looks like OMB is requiring agencies to basically report asset inventory information, training status for employees, and some IDM information? And monthly? I guess if you're moving from a three-year cycle to a monthly cycle, that sounds "continuous," but monthly in the modern enterprise is recognized as a snapshot.

How about #2?

A set of questions on the security posture of the agencies will also be asked in CyberScope. All agencies, except microagencies, will be required to respond to these questions in addition to the data feeds described above.

Now I see OMB will be asking agencies questions, which they will have to answer?

And #3:

As a follow-up to the questions described above, a team of government security specialists will interview all agencies individually on their respective security postures.

This looks like another question-and-answer session, except I expect OMB to spend time with the problem cases identified in steps 1 and 2.

Let's be clear: there's no "continuous monitoring" happening here. This is basic housekeeping, although the scale of the government and bureaucratic inertia make this a difficult problem. I hope this is only the first round of change.

I found the frequently asked questions to be more interesting than the main memo.

30. Why should agencies conduct continuous monitoring of their security controls?

Continuous monitoring of security controls is a cost-effective and important part of managing enterprise risk and maintaining an accurate understanding of the security risks confronting your agency’s information systems. Continuous monitoring of security controls is required as part of the security authorization process to ensure controls remain effective over time (e.g., after the initial security authorization or reauthorization of an information system) in the face of changing threats, missions, environments of operation, and technologies.


Aha, finally we see it in print: "continuous monitoring of security controls." There's no continuous monitoring of threats here. Furthermore, I'm wondering why OMB considers asset inventory, training, and IDM to be so crucial to security risks. Sure, they are important, but where's the real "security" in those controls? In other words, OMB could still observe controls, but those controls could be implementations of filtering Web proxies, firewalls, anti-malware, and other traditional security measures.

36. Must Government contractors abide by FISMA requirements?

Yes... Because FISMA applies to both information and information systems used by the agency, contractors, and other organizations and sources, it has somewhat broader applicability than prior security law. That is, agency information security programs apply to all organizations (sources) which possess or use Federal information – or which operate, use, or have access to Federal information systems (whether automated or manual) – on behalf of a Federal agency. Such other organizations may include contractors, grantees, State and local Governments, industry partners, providers of software subscription services, etc. FISMA, therefore, underscores longstanding OMB policy concerning sharing Government information and interconnecting systems.


This concerns me. Is the government further pushing on contractors to adopt FISMA in private business?

FISMA is unambiguous regarding the extent to which security authorizations and annual IT security assessments apply. To the extent that contractor, state, or grantee systems process, store, or house Federal Government information (for which the agency continues to be responsible for maintaining control), their security controls must be assessed against the same NIST criteria and standards as if they were a Government-owned or -operated system. The security authorization boundary for these systems must be carefully mapped to ensure that Federal information:

(a) is adequately protected,

(b) is segregated from the contractor, state or grantee corporate infrastructure, and

(c) there is an interconnection security agreement in place to address connections from the contractor, state or grantee system containing the agency information to systems external to the security authorization boundary.


It's probably going to take a .gov-savvy lawyer to really explain what these points mean, but private enterprises working with government data should take a close look at these new FISMA developments.

Tuesday, April 20, 2010

Still Looking for Infrastructure Administrator for GE-CIRT

Two months ago I posted Information Security Jobs in GE-CIRT and Other GE Teams. I've almost filled all of the roles, or have candidates for all roles in play, with the exception of one -- Information Security Infrastructure Engineer (1147859).

We're looking for someone to design, build, and run infrastructure to support GE-CIRT functions. As you might expect, we don't need someone with Windows experience. Beyond Unix-like operating systems, we are interested in someone with MySQL experience. You must be a US citizen who lives near our Michigan AMSTC or can relocate at your own cost.

If you are interested, please visit www.ge.com/careers and apply for role 1147859. Thank you.

Sunday, April 18, 2010

Review of Handbook of Digital Forensics and Investigation Posted

Amazon.com just posted my four star review of Handbook of Digital Forensics and Investigation by Eoghan Casey and colleagues. From the review:

I've probably read and reviewed a dozen or so good digital forensics books over the last decade, and I've written a few books on that topic or related ones. The Handbook of Digital Forensics and Investigation (HODFAI) is a solid technical overview of multiple digital forensics disciplines. This book will introduce the reader to a variety of topics and techniques that a modern investigator is likely to apply in the enterprise. Because the book is a collection of sections by multiple authors, some of the coverage is uneven. Nevertheless, I recommend HODFAI as a single volume introduction to modern digital forensics.

Review of The Victorian Internet Posted

Amazon.com just posted my five star review of The Victorian Internet by Tom Standage. From the review:

Tom Standage mentions chronocentricity on p 213 as "the egotism that one's own generation is poised on the very cusp of history." Comparing modern times to the past, he says "if any generation has the right to claim that it bore the full bewildering, world-shrinking brunt of such a revolution, it is not us -- it is our nineteenth-century forbears." Commentator Gary Hoover defines chronocentricity as being "obsessed with our own era, considering it the most important or most dynamic time ever." Being a history major, I find The Victorian Internet (TVI) to be an enlightening antidote to chronocentricity, and I recommend it to anyone trying to better understand modern times through the lens of history.

Measurement Over Models

Most blog readers know I strongly prefer measurement over models. In digital security, I think too many practitioners prefer to substitute their own opinions for data, i.e., "defense by belief" instead of "defense by fact." I found an example of a conflict between the two mindsets in Test flights raise hope for European air traffic:

Dutch airline KLM said inspection of an airliner after a test flight showed no damage to engines or evidence of dangerous ash concentrations. Germany's Lufthansa also reported problem-free test flights...

"We hung up filters in the engines to filter the air. We checked whether there was ash in them and all looked good," said a KLM spokeswoman. "We've also checked whether there was deposit on the plane, such as the wings. Yesterday's plane was all well..."

German airline Air Berlin was quoted as expressing irritation at the way the shutdown was decided.

"We are amazed that the results of the test flights done by Lufthansa and Air Berlin have not had any bearing on the decision-making of the air safety authorities," Chief Executive Joachim Hunold told the mass circulation Bild am Sonntag paper.

"The closure of the air space happened purely because of the data of a computer simulation at the Volcanic Ash Advisory Center in London."


I understand that safety officials need to make decisions based on the best information available at the time the decision needs to be made. However, when that information changes, the decision maker should re-evaluate his or her position. This reminds me of the silly policies mandated by various rule-makers regarding password complexity and frequency of change. They are basically disconnected from the modern attack and exploitation environment. That thinking recalls a time when guessing credentials or brute-forcing passwords took weeks, rather than happening in near real time, and was the prevalent way to compromise a system.

Returning to the volcano cloud -- I'm sure safety officials think they are acting in the best interests of passengers, but I don't see the airlines about to take actions that jeopardize their customers. Furthermore, customers who would be wary about flying through or near the ash cloud could decide not to do so. The problem is that safety officials bear none of the cost of their decisions while airlines and customers do.

Friday, April 16, 2010

Vulnerable Sites Database: More Intrusion as a Service

Last year I blogged about Shodan, and today thanks to Team Cymru I learned of the latest evolution of Intrusion as a Service. It's called the Vulnerable Sites Database.

According to the site, to be listed as a vulnerable site a submitter must provide "1. site name 2. vulnerability or JPG proof." This reminds me of a Web defacement archive where the submitter demonstrates having defaced a Web site, but with www.vs-db.info we get details like "local file inclusion" or "SQL injection."

All we need now is to pair the search capability of a site like Shodan with the vulnerability data for an entire site as provided by the Vulnerable Sites Database. How about a cross-reference against sites currently whitelisted by Web proxy providers and others who use reputation to permit access? Something like:

Select sites where the reputation is GOOD, that are hosted in the US, and are vulnerable to SQL injection?

Next, exploit vulnerable sites and use them for hosting malware, acting as command and control servers, and so on.
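In code, the cross-reference I described might look like the following sketch. The site names, fields, and data sets are invented; neither Shodan nor www.vs-db.info actually exposes data in this form.

```python
# Hypothetical reputation whitelist, keyed by site name.
reputation = {
    "shop.example.com": "GOOD",
    "blog.example.org": "GOOD",
    "spam.example.net": "BAD",
}

# Hypothetical records from a vulnerable-sites listing.
vulnerable = [
    {"site": "shop.example.com", "country": "US", "vuln": "SQL injection"},
    {"site": "spam.example.net", "country": "US", "vuln": "SQL injection"},
    {"site": "blog.example.org", "country": "DE", "vuln": "local file inclusion"},
]

# "Select sites where the reputation is GOOD, that are hosted in the US,
# and are vulnerable to SQL injection."
targets = [v["site"] for v in vulnerable
           if reputation.get(v["site"]) == "GOOD"
           and v["country"] == "US"
           and v["vuln"] == "SQL injection"]
print(targets)
```

The point is how little work the join requires once both data sets exist, which is exactly why the combination worries me.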

Neat as it was, I thought Shodan was dangerous enough to attract LE attention and be shut down. I wonder how long www.vs-db.info will last. A site like the one I just described would probably really cross the line. I hope.

Update: Thanks to @jeremiahg for pointing me towards www.xssed.com.

"Cyber insecurity is the paramount national security risk."

Thanks to @borroff I read a fascinating article titled Cybersecurity and National Policy by Dan Geer. The title of my blog post is an excerpt from this article, posted in the Harvard National Security Journal on 7 April. This could be my favorite article of the year, and it proves to me that Dan Geer's writing has the highest signal-to-noise ratio of any security author, period.

(Personal note: I remember seeing Dan speak at a conference, and he apologized for reading his remarks rather than speaking extemporaneously. He said he respected our time too much to not read his remarks, since he wanted to conserve time and words.)

I've reproduced my favorite excerpts below in an attempt to summarize his argument.

First, security is a means, not an end. Therefore, a cybersecurity policy discussion must necessarily be about the means to a set of desirable ends and about affecting the future. Accordingly, security is about risk management, and the legitimate purpose of risk management is to improve the future, not to explain the past.

Second, unless and until we devise a scorekeeping mechanism that apprises spectators of the state of play on the field, security will remain the province of “The Few”. Sometimes leaving everything to The Few is positive, but not here as, amongst other things, demand for security expertise so outstrips supply that the charlatan fraction is rising.

Third, the problems of cybersecurity are the same as many other problems in certain respects, yet critically different in others... these differences include the original owner continuing to possess stolen data after the thief takes it, and law enforcement lacking the ability to work at the speed of light.


Security is a forward-looking function, requiring a scorecard (sound familiar?) with problems that are both common and unique.

[B]ecause the United States’s ability to project power depends on information technology, cyber insecurity is the paramount national security risk...

[R]emember the definition of a free country: a place where that which is not forbidden is permitted. As we consider the pursuit of cybersecurity, we will return to that idea time and time again; I believe that we are now faced with “Freedom, Security, Convenience: Choose Two”

Dan then outlines three national security risks:

[W]hat types of risks rose to such a level that they could legitimately be considered national security concerns[?]...

The first is any mechanism that, to operate correctly, must be a single point of function, thereby containing a single point of failure...

[The second] national security scale risk is cascade failure, and cascade failure is so much easier to detonate in a monoculture...

[The third is that it] is simply not possible to provide product or supply chain assurance without a surveillance state...


Dan next provides us with what I may adopt as my own definition of security:

I currently define security as the absence of unmitigatable surprise.

This definition resonates with me, although it could be twisted for some odd consequences. Could one simply choose to never feel surprised in order to feel secure? I hope not! Dan provides some conclusions next:

[1] our paramount aim cannot be risk avoidance but rather risk absorption — the ability to operate in degraded states, in both micro and macro spheres, to take as an axiom that our opponents have and will penetrate our systems at all levels, and to be prepared to adjust accordingly...

[2] free society rulemaking will trail modalities of risk by increasing margins...

[3] if the tariff of security is paid, it will be paid in the coin of privacy...

[4] market demand is not going to provide, in and of itself, a solution.


I believe these are true. While explaining the third conclusion Dan notes:

It has been said over and over for twenty years, “If only we could make government’s procurement engine drive the market toward secure products.” This, ladies and gentlemen, is a pleasant fiction.

That is also true! I'm going to skip his discussion of government action and list three essential capabilities:

[T]he ability to operate in a degraded state is an essential capability for government systems and private sector systems.

A second essential capability is a means to assure broad awareness of the gravity of the situation...

There is a third essential, one that flows from recognizing the limits of central action in a decentralized world, and that is some measure of personal responsibility and involvement.


Dan concludes with:

For me, I will take freedom over security and I will take security over convenience.

I highly encourage reading the whole article. I skipped Dan's discussion of "regulation, taxation, and insurance pricing," but that is also worth understanding.

Thursday, April 15, 2010

Response to Dan Geer Article on APT

A few people sent me a link to Dan Geer's article Advanced Persistent Threat. Dan is one of my Three Wise Men, along with Ross Anderson and Gene Spafford. I'll reproduce a few excerpts and respond.

Let us define the term for the purpose of this article as follows: A targeted effort to obtain or change information by means that are difficult to discover, difficult to remove, and difficult to attribute.

That describes APT's methodology, but APT is not an effort -- it's a proper noun, i.e., a specific party.

Given that the offense has the advantage of no legacy drag, the offense's ability to insert innovation into its product mix is unconstrained. By contrast, the CIO who does the least that can be gotten away with only increases the frequency of having to do something, not the net total work deficit pending.

In other words, the offense expends work whenever innovation is needed; the defense expends work each day and never catches up.

This "least expensive defense" is not insane, just ineffective because the offense is a sentient being with a strategic advantage.


I love the characterization of offense as having "no legacy drag," and "defense expends work each day and never catches up." That perfectly describes the advantage of offense over defense.

Even if you don't think the advanced persistent threat is all that advanced, realize that if this is so, it is only because it doesn't have to be when your defenses don't require it to be. Even more central, do not think that the supplier of defensive weapons will ever have weapons to thwart (the deployment of) offensive weapons that are sufficiently well targeted to hit only some people, some computers, some data.

Dan nicely counters the argument that some make, namely "APT doesn't sound so 'advanced.'"

The advanced persistent threat, which is to say the offense that enjoys a permanent advantage and is already funding its R&D out of revenue, will win as long as you try to block what he does. You have to change the rules. You have to block his success from even being possible, not exchange volleys of ever better tools designed in response to his. You have to concentrate on outcomes, you have to pre-empt, you have to be your own intelligence agency, you have to instrument your enterprise, you have to instrument your data.

In one paragraph Dan reminds us to change the plane, be field-assessed, not control-compliant (outcomes over inputs), and build intelligence and instrumentation.

With data, not networks or infrastructure, as the unit of surveillance and action, an adaptable approach to data security is possible. Not another shield for every arrow, but a comprehensive fortress of information control and risk management -- a unifying framework that can best be described as Enterprise Information Protection (EIP).

EIP unifies data-leak prevention, network access control, encryption policy and enforcement, audit and forensics, and all the other wayward data protection technologies from their present state of functional silos into an extensible platform supported by policy and operational practices.


Dan's conclusion seems too short, which is probably the result of the constraints imposed by writing for NetworkWorld. I don't think an enterprise that adopts his approach will beat APT. Stopping this threat requires direct and indirect pressure in a threat-centric approach, not a vulnerability-centric approach.

Last Chance for TCP/IP Weapons School 2.0 in Las Vegas

Yesterday I returned home from teaching TCP/IP Weapons School 2.0 in Barcelona for Black Hat. I'd like to thank Black Hat and my students for a great class. I believe the current format, a mix of methodology, labs, and spontaneous 15-20 minute presentations answering whatever questions the students have, is working really well. I plan to retire the current cases this year and develop TWS3 with new cases for teaching in 2011.

My last class of the year will be at Black Hat USA 2010 Training on 25-28 July 2010 at Caesars Palace in Las Vegas, NV. I will be teaching two sessions of TCP/IP Weapons School 2.0, one on the weekend and one during the week.

Registration is now open. Black Hat has four remaining price points and deadlines for registration.

  • Early ends 1 May

  • Regular ends 1 Jul

  • Late ends 22 Jul

  • Onsite starts at the conference


Seats are filling -- it pays to register early!

If you review the Sample Lab I posted last year, you'll see this class is all about developing an investigative mindset through hands-on analysis, using tools you can take back to your work. Furthermore, you can take the class materials back to work: an 84-page investigation guide, a 25-page student workbook, and a 120-page teacher's guide, plus the DVD. I have been speaking with other trainers who are adopting this format after deciding they are also tired of the PowerPoint slide parade.

Feedback from my earlier sessions was great. Two examples:

"Truly awesome -- Richard's class was packed full of content and presented in an understandable manner." (Comment from student, 28 Jul 09)

"In six years of attending Black Hat (seven courses taken) Richard was the best instructor." (Comment from student, 28 Jul 09)

If you've attended a TCP/IP Weapons School class before 2009, you are most welcome in the new one. Unless you attended my Black Hat training in 2009, you will not see any repeat material whatsoever in TWS2. Older TWS classes covered network traffic and attacks at various levels of the OSI model. TWS2 is more like a forensics class, with network, log, and related evidence.

I plan to retire TWS2 after Vegas this year and teach TWS3 in 2011, if Black Hat invites me back.

I recently described differences between my class and SANS if that is a concern.

I look forward to seeing you. Thank you.

Bejtlich on Visible Risk Podcast

My friend Rocky DeStefano from Visible Risk posted the video (streaming) and audio (.mp3, 124 MB) of a discussion he hosted on the advanced persistent threat. Mike Cloppert, Rob Lee, Shawn Carpenter, and I discussed APT for about an hour on video and about an hour and a half on audio. Let Rocky know what you think as a comment here or via Twitter to @visiblerisk.

One comment -- slightly before the 24:00 mark, Rob made a remark about "what you and I respond to in the Air Force was laughable at this point, compared to what we're seeing today, actual intelligence being pulled back, potential nation state actors, potential organized crime, earning thousands or millions of dollars..." I disagree with part of that comment and agree with part of that comment. For the "disagree" part: Rob was stationed in the 609th, which was not the AFCERT. In the AFCERT we detected and responded to nation state activity of the caliber we see today. I don't know what the 609th dealt with. For the "agree" part: in 1998 it was much rarer to see organized crime operating at the level they do today. I didn't respond during the video because I didn't feel the need to interrupt any time I didn't fully agree with a speaker, and this exchange was mostly between Rob and Mike!

Tuesday, April 06, 2010

Defense Security Service Publishes 2009 Report on "Targeting U.S. Technologies"

Thanks to Team Cymru I learned of a new Defense Security Service report titled Targeting U.S. Technologies: A Trend Analysis of Reporting from Defense Industry. The report seems to be the 2009 edition, which covers reporting from 2008. I'll have to watch for a 2010 version. From the report:

The Defense Security Service (DSS) works with defense industry to protect critical technologies and information. Defense contractors with access to classified material are required to identify and report suspicious contacts and potential collection attempts as mandated in the National Industrial Security Program Operating Manual (NISPOM). DSS publishes this annual report based on an analysis of suspicious contact reports (SCRs) that DSS considers indicative of efforts to target defense-related information.

The executive summary offers these bullet points:

  • East Asia and Pacific-originated contacts continued to generate the greatest number of suspicious reports attributable to a specific region of origin. For the fifth year in a row, reporting with an East Asia and Pacific nexus far exceeded those from any other region suggesting a continuing, concerted, and growing effort to exploit contacts within United States industry for competitive, economic, and military advantage.

  • Aggressive collection attempts by commercial actors continued to surge. In FY08, commercial entities attempted to collect defense technology at a rate nearly double that of governmental or individual collector affiliations. This trend likely represents a purposeful attempt to make the contacts seem more innocuous, shifting focus from government collectors to commercial or non-traditional entities.

  • Collectors continued bold and overt exploitation of the Internet to acquire information via direct requests. Facilitated by ever increasing world wide connectivity, the ease of inundating industry with overt email requests and webpage submissions made direct requests a premier vehicle for solicitation and/or collection. While not all direct requests for information or services represent organized collection attempts, exploitation of this medium provides collectors an efficient, low-cost, high-gain opportunity to acquire classified or restricted information.

  • Unmanned aerial vehicle (UAV) technology has emerged as a priority target of aggressive collectors from multiple regions. In FY08, DSS noticed a significant increase in exploitation attempts against UAV systems and technologies at CDCs. Targeting of UAVs is non-region specific, broadly based, and spans all phases of research, development, and deployment. It is highly likely that this interest and probable targeting is the direct result of a growing and increasingly competitive world market for UAV systems.


This report is good background and support for your threat-centric security measures.

BeyondTrust Report on Removing Administrator: Correct?

Last week BeyondTrust published a report titled BeyondTrust 2009 Microsoft Vulnerability Analysis. The report offers several interesting conclusions:

[R]emoving administrator rights will better protect companies against the exploitation of:

  • 90% of critical Windows 7 vulnerabilities reported to date

  • 100% of Microsoft Office vulnerabilities reported in 2009

  • 94% of Internet Explorer and 100% of Internet Explorer 8 vulnerabilities reported in 2009

  • 64% of all Microsoft vulnerabilities reported in 2009


Initially I was pleased to read these results. Then I read BeyondTrust's methodology.

This report uses information found in the individual Security Bulletins to classify vulnerabilities by Severity Rating, Vulnerability Impact, Affected Software, as well as to determine if removing administrator rights will mitigate a vulnerability. A vulnerability is considered mitigated by removing administrator rights if the following sentence is located in the Security Bulletin’s Mitigating Factors section

Users whose accounts are configured to have fewer user rights on the system could be less impacted than users who operate with administrative user rights.
(emphasis added)

"Could be less impacted?" In other words, BeyondTrust didn't do any testing. They just read Microsoft vulnerability reports, checked for that sentence, and published the results. I would be more comfortable with their conclusions if they conducted exploitation tests against suitable targets to determine if administrator rights made a difference or not.
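Their methodology boils down to a string match. Here is a minimal sketch of that classification logic, using invented bulletin excerpts rather than real Microsoft text; it shows how a vulnerability gets counted as "mitigated" without any exploitation testing.

```python
# The stock sentence BeyondTrust's report says it searched for in each
# bulletin's Mitigating Factors section.
MITIGATION_SENTENCE = ("Users whose accounts are configured to have fewer "
                       "user rights on the system could be less impacted "
                       "than users who operate with administrative user "
                       "rights.")

def mitigated_by_removing_admin(bulletin_text):
    """Classify a bulletin the way the report describes: the vulnerability
    counts as mitigated if the stock sentence appears in the text.
    No testing against a live target is involved."""
    return MITIGATION_SENTENCE in bulletin_text

# Hypothetical bulletin excerpts, not real Microsoft bulletins.
bulletins = {
    "MS09-001": "Mitigating Factors: " + MITIGATION_SENTENCE,
    "MS09-002": "Mitigating Factors: None identified.",
}
mitigated = [bid for bid, text in bulletins.items()
             if mitigated_by_removing_admin(text)]
print(mitigated)
```

A single `in` check per bulletin is the entire analysis, which is why I'd want to see actual exploitation tests before trusting the percentages.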

This doesn't necessarily mean BeyondTrust is wrong. Removing administrator rights does help reduce exposures, but testing is required against modern exploitation methods to determine just how effective that countermeasure is.