Thursday, June 28, 2018

Why Do SOCs Look Like This?

When you hear the word "SOC," or the phrase "security operations center," what image comes to mind? Do you think of analysts sitting at desks, all facing forward, toward giant screens? Why is this?

The following image is from the outstanding movie Apollo 13, a docudrama about the troubled 1970 mission to the moon.


It's a screen capture from the go for launch sequence. It shows mission control in Houston, Texas. If you'd like to see video of the actual center from 1970, check out This Is Mission Control.

Mission control looks remarkably like a SOC, doesn't it? When builders of computer security operations centers imagined what their "mission control" rooms would look like, perhaps they had Houston in mind?

Or perhaps they thought of the 1983 movie War Games?


Reality was far more boring, however:


I toured NORAD under Cheyenne Mountain in 1989, I believe, while visiting the Air Force Academy as a high school senior. I can confirm it did not look like the movie depiction!

Let's return to mission control. Look at the resources available to personnel manning the mission control room. The big screens depict two main forms of data: telemetry and video of the rocket. What about the individual screens, where people sit? They are largely customized. Each station presents data or buttons specific to the role of the person sitting there. Listen to Ed Harris' character calling out the stations: booster, retro, vital, etc. For example:


This is one of the key differences between mission control and any modern computerized operations center. In the 1960s and 1970s, workstations (literally, places where people worked) had to be customized. They lacked the technology to have generic workstations where customization was done via screen, keyboard, and mouse. They also lacked the ability to display video on demand, and relied on large television screens. Personnel with specific functions sat at specific locations, because that was literally the only way they could perform their jobs.

With the advent of modern computing, every workstation is instantly customizable. There is no need to specialize the physical station. Anyone can sit anywhere, assuming the computing environment allows a user's workspace to follow the logon. In fact, modern computing allows users to sit in spaces outside their offices. A modern mission control could be distributed.

With that in mind, what does the current version of mission control look like? Here is a picture of the modern Johnson Space Center's mission control room.



It looks similar to the 1960s-1970s version, except it's dominated by screens, keyboards, and mice.

What strikes me about every image of a "SOC" that I've ever seen is that no one is looking at the big screens. They are almost always deployed for an audience. No one in an operational role looks at them.

There are exceptions. Check out the Arizona Department of Transportation operations center.


Their "big screen" is a composite of 24 smaller screens showing traffic and roadways. No one is looking at the screen, but that sort of display is perfect for the human eye.

It's a variant of Edward Tufte's "small multiples" idea. There is no text. The eye can easily discern whether there is a lot of traffic, little traffic, or an accident. It's likely more for the benefit of an audience, but it works decently well.

Compare those screens to what one is likely to encounter in a cyber SOC. In addition to a "pew pew" map and a "spinning globe of doom," it will likely look like this, from R3 Cybersecurity:


The big screens are a waste of time. No one is standing near them. No one sitting at their workstations can read what the screens show. They are purely for an audience, who can't discern what they show either.

The bottom line for this post is that if you're going to build a "SOC," don't build it based on what you've seen in the movies, or in other industries, or what a consultancy recommends. Spend some time determining your SOC's purpose, and let the workflow drive the physical setting. You may determine you don't even need a "SOC," either physically or logically, based on a maturing understanding of a SOC's mission. That's a topic for a future post!

Monday, June 25, 2018

Bejtlich on the APT1 Report: No Hack Back

Before reading the rest of this post, I suggest reading Mandiant/FireEye's statement Doing Our Part -- Without Hacking Back.

I would like to add my own color to this situation.

First, at no time when I worked for Mandiant or FireEye, or afterwards, was there ever a notion that we would hack into adversary systems. During my six-year tenure, we were publicly and privately a "no hack back" company. I never heard anyone talk about hack back operations. No one ever intimated we had imagery of APT1 actors taken with their own laptop cameras. No one even said that would be a good idea.

Second, I would never have testified or written, repeatedly, about our company's stance on not hacking back if I knew we secretly did otherwise. I have quit jobs because I had fundamental disagreements with company policy or practice. I worked for Mandiant from 2011 through the end of 2013, when FireEye acquired Mandiant, and stayed until last year (2017). I never considered quitting Mandiant or FireEye due to a disconnect between public statements and private conduct.

Third, I was personally involved with briefings to the press, in public and in private, concerning the APT1 report. I provided the voiceover for a five-minute YouTube video called APT1: Exposing One of China's Cyber Espionage Units. That video was one of the most sensitive, if not the most sensitive, aspects of releasing the report. We showed the world how we could intercept adversary communications and reconstruct them. There was internal debate about whether we should do that. We decided to cover the practice in the report, as Christopher Glyer Tweeted:


In none of these briefings to the press did we show pictures or video from adversary laptops. We did show the video that we published to YouTube.

Fourth, I privately contacted former Mandiant personnel with whom I worked during the time of the APT1 report creation and distribution. Their reaction to Mr Sanger's allegations ranged from "I've never heard of that" to "completely false." I deliberately asked former Mandiant colleagues, like myself, in case current Mandiant or FireEye employees had been told not to talk to outsiders about the case.

What do I think happened here? I agree with the theory that Mr Sanger mistook the reconstructed RDP sessions for some sort of "camera access." I have no idea about the "bros" or "leather jackets" comments!

In the spirit of full disclosure, prior to publication, Mr Sanger tried to reach me to discuss his book via email. I was sick and told him I had to pass. Ellen Nakashima also contacted me; I believe she was doing research for the book. She asked a few questions about the origin of the term APT, which I answered. I do not have the book so I do not know if I am cited, or if my message was included.

The bottom line is that Mandiant and FireEye did not conduct any hack back for the APT1 report.

Update: Some of you wondered about Ellen's role. I confirmed last night that she was working on her own project.

Tuesday, May 15, 2018

Bejtlich Joining Splunk


Since posting Bejtlich Moves On I've been rebalancing work, family, and personal life. I invested in my martial arts interests, helped more with home duties, and consulted through TaoSecurity.

Today I'm pleased to announce that, effective Monday May 21st 2018, I'm joining the Splunk team. I will be Senior Director for Security and Intelligence Operations, reporting to our CISO, Joel Fulton. I will help build teams to perform detection and monitoring operations, digital forensics and incident response, and threat intelligence. I remain in the northern Virginia area and will align with the Splunk presence in Tyson's Corner.

I'm very excited by this opportunity for four reasons. First, the areas for which I will be responsible are my favorite aspects of security. Long-time blog readers know I'm happiest detecting and responding to intruders! Second, I already know several people at the company, one of whom began this journey by Tweeting about opportunities at Splunk! These colleagues are top notch, and I was similarly impressed by the people I met during my interviews in San Francisco and San Jose.

Third, I respect Splunk as a company. I first used the products over ten years ago, and when I tried them again recently they worked spectacularly, as I expected. Fourth, my new role allows me to be a leader in the areas I know well, like enterprise defense and digital operational art, while building understanding in areas I want to learn, like cloud technologies, DevOps, and security outside enterprise constraints.

I'll have more to say about my role and team soon. Right now I can share that this job focuses on defending the Splunk enterprise and its customers. I do not expect to spend a lot of time in sales cycles. I will likely host visitors in the Tyson's area from time to time. I do not plan to speak as much with the press as I did at Mandiant and FireEye. I'm pleased to return to operational defense, rather than advise on geopolitical strategy.

If this news interests you, please check our open job listings in information technology. As a company we continue to grow, and I'm thrilled to see what happens next!

Monday, May 07, 2018

Trying Splunk Cloud

I first used Splunk over ten years ago, but the first time I blogged about it was in 2008. I described how to install Splunk on Ubuntu 8.04. Today I decided to try the Splunk Cloud.

Splunk Cloud is the company's hosted Splunk offering, residing in Amazon Web Services (AWS). You can register for a 15-day free trial of Splunk Cloud that will index 5 GB per day.

If you would like to follow along, you will need a computer with a Web browser to interact with Splunk Cloud. (There may be ways to interact via API, but I do not cover that here.)

I will collect logs from a virtual machine running Debian 9, inside Oracle VirtualBox.

First I registered for the free Splunk Cloud trial online.

After I had a Splunk Cloud instance running, I consulted the documentation for Forward data to Splunk Cloud from Linux. I am running a "self-serviced" instance and not a "managed instance," i.e., I am the administrator in this situation.

I learned that I needed to install a software package called the Splunk Universal Forwarder on my Linux VM.

I downloaded a 64-bit Linux 2.6+ kernel .deb file to the Downloads directory in my home directory on the Linux VM.

richard@debian:~$ cd Downloads/

richard@debian:~/Downloads$ ls

splunkforwarder-7.1.0-2e75b3406c5b-linux-2.6-amd64.deb

With elevated permissions I created a directory for the .deb, changed into the directory, and installed the .deb using dpkg.

richard@debian:~/Downloads$ sudo bash
[sudo] password for richard: 

root@debian:/home/richard/Downloads# mkdir /opt/splunkforwarder

root@debian:/home/richard/Downloads# mv splunkforwarder-7.1.0-2e75b3406c5b-linux-2.6-amd64.deb /opt/splunkforwarder/

root@debian:/home/richard/Downloads# cd /opt/splunkforwarder/

root@debian:/opt/splunkforwarder# ls

splunkforwarder-7.1.0-2e75b3406c5b-linux-2.6-amd64.deb

root@debian:/opt/splunkforwarder# dpkg -i splunkforwarder-7.1.0-2e75b3406c5b-linux-2.6-amd64.deb 

Selecting previously unselected package splunkforwarder.
(Reading database ... 141030 files and directories currently installed.)
Preparing to unpack splunkforwarder-7.1.0-2e75b3406c5b-linux-2.6-amd64.deb ...
Unpacking splunkforwarder (7.1.0) ...
Setting up splunkforwarder (7.1.0) ...
complete

root@debian:/opt/splunkforwarder# ls
bin        license-eula.txt
copyright.txt  openssl
etc        README-splunk.txt
ftr        share
include        splunkforwarder-7.1.0-2e75b3406c5b-linux-2.6-amd64.deb
lib        splunkforwarder-7.1.0-2e75b3406c5b-linux-2.6-x86_64-manifest

Next I changed into the bin directory, ran the splunk binary, and accepted the EULA.

root@debian:/opt/splunkforwarder# cd bin/

root@debian:/opt/splunkforwarder/bin# ls

btool   copyright.txt   openssl slim   splunkmon
btprobe   genRootCA.sh   pid_check.sh splunk   srm
bzip2   genSignedServerCert.sh  scripts splunkd
classify  genWebCert.sh   setSplunkEnv splunkdj

root@debian:/opt/splunkforwarder/bin# ./splunk start

SPLUNK SOFTWARE LICENSE AGREEMENT

THIS SPLUNK SOFTWARE LICENSE AGREEMENT ("AGREEMENT") GOVERNS THE LICENSING,
INSTALLATION AND USE OF SPLUNK SOFTWARE. BY DOWNLOADING AND/OR INSTALLING SPLUNK
SOFTWARE: (A) YOU ARE INDICATING THAT YOU HAVE READ AND UNDERSTAND THIS

...

Splunk Software License Agreement 04.24.2018

Do you agree with this license? [y/n]: y

Now I had to set an administrator password for this Universal Forwarder instance. I will refer to it as "mypassword" in the examples that follow although Splunk does not echo it to the screen below.

This appears to be your first time running this version of Splunk.

An Admin password must be set before installation proceeds.
Password must contain at least:
   * 8 total printable ASCII character(s).
Please enter a new password: 
Please confirm new password: 

Splunk> Map. Reduce. Recycle.

Checking prerequisites...
Checking mgmt port [8089]: open
Creating: /opt/splunkforwarder/var/lib/splunk
Creating: /opt/splunkforwarder/var/run/splunk
Creating: /opt/splunkforwarder/var/run/splunk/appserver/i18n
Creating: /opt/splunkforwarder/var/run/splunk/appserver/modules/static/css
Creating: /opt/splunkforwarder/var/run/splunk/upload
Creating: /opt/splunkforwarder/var/spool/splunk
Creating: /opt/splunkforwarder/var/spool/dirmoncache
Creating: /opt/splunkforwarder/var/lib/splunk/authDb
Creating: /opt/splunkforwarder/var/lib/splunk/hashDb
New certs have been generated in '/opt/splunkforwarder/etc/auth'.
Checking conf files for problems...
Done
Checking default conf files for edits...
Validating installed files against hashes from '/opt/splunkforwarder/splunkforwarder-7.1.0-2e75b3406c5b-linux-2.6-x86_64-manifest'
All installed files intact.
Done
All preliminary checks passed.

Starting splunk server daemon (splunkd)...  
Done
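
As an aside, if you want to script this step, the splunk binary accepts flags to skip the interactive license prompt. A minimal sketch, which I did not use here (note that recent versions may still require you to set an admin password on first start):

./splunk start --accept-license --answer-yes --no-prompt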

With that done, I had to return to the Splunk Cloud Web site, and click the link to "Download Universal Forwarder Credentials" to download a splunkclouduf.spl file. As noted in the documentation, splunkclouduf.spl is a "credentials file, which contains a custom certificate for your Splunk Cloud deployment. The universal forwarder credentials are different from the credentials that you use to log into Splunk Cloud."

After downloading the splunkclouduf.spl file, I installed it. Note that I pass "admin" as the user and "mypassword" as the password here. After installing it, I restarted the universal forwarder.

root@debian:/opt/splunkforwarder/bin# ./splunk install app /home/richard/Downloads/splunkclouduf.spl -auth admin:mypassword

App '/home/richard/Downloads/splunkclouduf.spl' installed 

root@debian:/opt/splunkforwarder/bin# ./splunk restart
Stopping splunkd...
Shutting down.  Please wait, as this may take a few minutes.
.......
Stopping splunk helpers...

Done.

Splunk> Map. Reduce. Recycle.

Checking prerequisites...
Checking mgmt port [8089]: open
Checking conf files for problems...
Done
Checking default conf files for edits...
Validating installed files against hashes from '/opt/splunkforwarder/splunkforwarder-7.1.0-2e75b3406c5b-linux-2.6-x86_64-manifest'
All installed files intact.
Done
All preliminary checks passed.

Starting splunk server daemon (splunkd)...  
Done

It's time to take the final steps to get data into Splunk Cloud. I need to point the universal forwarder at forwarder management in the Splunk Cloud deployment. Observe the input-prd-p-XXXX.cloud.splunk.com value in the command below. You obtain this (mine is masked with XXXX) from the URL for your Splunk Cloud deployment, e.g., https://prd-p-XXXX.cloud.splunk.com. Note that you have to add "input-" before the fully qualified domain name used by the Splunk Cloud instance.

root@debian:/opt/splunkforwarder/bin# ./splunk set deploy-poll input-prd-p-XXXX.cloud.splunk.com:8089

Your session is invalid.  Please login.
Splunk username: admin
Password: 
Configuration updated.
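
Under the hood, I believe this command records a deployment client stanza similar to the sketch below, typically in /opt/splunkforwarder/etc/system/local/deploymentclient.conf (the file location is my assumption; inspect your own instance to confirm):

[target-broker:deploymentServer]
targetUri = input-prd-p-XXXX.cloud.splunk.com:8089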

Once again I restarted the universal forwarder. I'm not sure whether I could have skipped the intermediate restarts and done a single restart at the end.

root@debian:/opt/splunkforwarder/bin# ./splunk restart
Stopping splunkd...
Shutting down.  Please wait, as this may take a few minutes.
.......
Stopping splunk helpers...

Done.

Splunk> Map. Reduce. Recycle.

Checking prerequisites...
Checking mgmt port [8089]: open
Checking conf files for problems...
Done
Checking default conf files for edits...
Validating installed files against hashes from '/opt/splunkforwarder/splunkforwarder-7.1.0-2e75b3406c5b-linux-2.6-x86_64-manifest'
All installed files intact.
Done
All preliminary checks passed.

Starting splunk server daemon (splunkd)...  
Done

Finally I need to tell the universal forwarder to watch some logs on this Linux system. I tell it to monitor the /var/log directory and restart one more time.

root@debian:/opt/splunkforwarder/bin# ./splunk add monitor /var/log
Your session is invalid.  Please login.
Splunk username: admin
Password: 
Added monitor of '/var/log'.
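
For reference, this command stores a monitor stanza in an inputs.conf file, something like the sketch below. I believe the universal forwarder writes it under /opt/splunkforwarder/etc/apps/search/local/inputs.conf, but check your own instance:

[monitor:///var/log]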

root@debian:/opt/splunkforwarder/bin# ./splunk restart

Stopping splunkd...
Shutting down.  Please wait, as this may take a few minutes.
...............
Stopping splunk helpers...

Done.

Splunk> Map. Reduce. Recycle.

Checking prerequisites...
Checking mgmt port [8089]: open
Checking conf files for problems...
Done
Checking default conf files for edits...
Validating installed files against hashes from '/opt/splunkforwarder/splunkforwarder-7.1.0-2e75b3406c5b-linux-2.6-x86_64-manifest'
All installed files intact.
Done
All preliminary checks passed.

Starting splunk server daemon (splunkd)...  
Done
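
Before returning to the Web interface, you can sanity-check the forwarder from the command line. These commands prompt for the admin credentials, and your output will differ from mine, so I show only the commands:

./splunk list forward-server
./splunk list monitor

The first lists the receiving indexers configured by the credentials app, and the second lists the files and directories being monitored.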

At this point I return to the Splunk Cloud Web interface and click the "search" feature. I see Splunk is indexing some data.


I run a search for "host=debian" and find my logs.


Not too bad! Have you tried Splunk Cloud? What do you think? Leave me a comment below.

Update: I installed the Universal Forwarder on FreeBSD 11.1 using the method above (except with a FreeBSD .tgz) and everything seems to be working!

Monday, February 26, 2018

Importing Pcap into Security Onion

Within the last week, Doug Burks of Security Onion (SO) added a new script that revolutionizes the use case for his amazing open source network security monitoring platform.

I have always used SO in a live production mode, meaning I deploy a SO sensor sniffing a live network interface. As the multitude of SO components observe network traffic, they generate, store, and display various forms of NSM data for use by analysts.

The problem with this model is that it could not be used for processing stored network traffic. If one simply replayed the traffic from a .pcap file, the new traffic would be assigned contemporary timestamps by the various tools observing the traffic.

While all of the NSM tools in SO have the independent capability to read stored .pcap files, there was no unified way to integrate their output into the SO platform.

Therefore, for years, there has not been a way to import .pcap files into SO -- until last week!

Here is how I tested the new so-import-pcap script. First, I made sure I was running Security Onion Elastic Stack Release Candidate 2 (14.04.5.8 ISO) or later. Next I downloaded the script using wget from https://github.com/Security-Onion-Solutions/securityonion-elastic/blob/master/usr/sbin/so-import-pcap.
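
Note that the link above is the GitHub Web view of the script. To fetch the script itself with wget, you probably want the raw version, something like the following (the raw URL is my assumption based on GitHub's usual layout):

wget https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion-elastic/master/usr/sbin/so-import-pcap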

I continued as follows:

richard@so1:~$ sudo cp so-import-pcap /usr/sbin/

richard@so1:~$ sudo chmod 755 /usr/sbin/so-import-pcap

I tried running the script against two of the sample files packaged with SO, but ran into issues with both.

richard@so1:~$ sudo so-import-pcap /opt/samples/10k.pcap

so-import-pcap

Please wait while...
...creating temp pcap for processing.
mergecap: Error reading /opt/samples/10k.pcap: The file appears to be damaged or corrupt
(pcap: File has 263718464-byte packet, bigger than maximum of 262144)
Error while merging!

I checked the file with capinfos.

richard@so1:~$ capinfos /opt/samples/10k.pcap
capinfos: An error occurred after reading 17046 packets from "/opt/samples/10k.pcap": The file appears to be damaged or corrupt.
(pcap: File has 263718464-byte packet, bigger than maximum of 262144)

Capinfos confirmed the problem. Let's try another!

richard@so1:~$ sudo so-import-pcap /opt/samples/zeus-sample-1.pcap

so-import-pcap

Please wait while...
...creating temp pcap for processing.
mergecap: Error reading /opt/samples/zeus-sample-1.pcap: The file appears to be damaged or corrupt
(pcap: File has 1984391168-byte packet, bigger than maximum of 262144)
Error while merging!

Another bad file. Trying a third!

richard@so1:~$ sudo so-import-pcap /opt/samples/evidence03.pcap

so-import-pcap

Please wait while...
...creating temp pcap for processing.
...setting sguild debug to 2 and restarting sguild.
...configuring syslog-ng to pick up sguild logs.
...disabling syslog output in barnyard.
...configuring logstash to parse sguild logs (this may take a few minutes, but should only need to be done once)...done.
...stopping curator.
...disabling curator.
...stopping ossec_agent.
...disabling ossec_agent.
...stopping Bro sniffing process.
...disabling Bro sniffing process.
...stopping IDS sniffing process.
...disabling IDS sniffing process.
...stopping netsniff-ng.
...disabling netsniff-ng.
...adjusting CapMe to allow pcaps up to 50 years old.
...analyzing traffic with Snort.
...analyzing traffic with Bro.
...writing /nsm/sensor_data/so1-eth1/dailylogs/2009-12-28/snort.log.1261958400

Import complete!

You can use this hyperlink to view data in the time range of your import:
https://localhost/app/kibana#/dashboard/94b52620-342a-11e7-9d52-4f090484f59e?_g=(refreshInterval:(display:Off,pause:!f,value:0),time:(from:'2009-12-28T00:00:00.000Z',mode:absolute,to:'2009-12-29T00:00:00.000Z'))

or you can manually set your Time Range to be:
From: 2009-12-28    To: 2009-12-29


Incidentally here is the capinfos output for this trace.

richard@so1:~$ capinfos /opt/samples/evidence03.pcap
File name:           /opt/samples/evidence03.pcap
File type:           Wireshark/tcpdump/... - pcap
File encapsulation:  Ethernet
Packet size limit:   file hdr: 65535 bytes
Number of packets:   1778
File size:           1537 kB
Data size:           1508 kB
Capture duration:    171 seconds
Start time:          Mon Dec 28 04:08:01 2009
End time:            Mon Dec 28 04:10:52 2009
Data byte rate:      8814 bytes/s
Data bit rate:       70 kbps
Average packet size: 848.57 bytes
Average packet rate: 10 packets/sec
SHA1:                34e5369c8151cf11a48732fed82f690c79d2b253
RIPEMD160:           afb2a911b4b3e38bc2967a9129f0a11639ebe97f
MD5:                 f8a01fbe84ef960d7cbd793e0c52a6c9
Strict time order:   True

That worked! Now to see what I can find in the SO interface.

I accessed the Kibana application and changed the timeframe to include the timestamps in the trace.


Here's another screenshot. Again I had to adjust for the proper time range.


Very cool! However, I did not find any IDS alerts. This made me wonder if there was a problem with alert processing. I decided to run the script on a new .pcap:

richard@so1:~$ sudo so-import-pcap /opt/samples/emerging-all.pcap

so-import-pcap

Please wait while...
...creating temp pcap for processing.
...analyzing traffic with Snort.
...analyzing traffic with Bro.
...writing /nsm/sensor_data/so1-eth1/dailylogs/2010-01-27/snort.log.1264550400

Import complete!

You can use this hyperlink to view data in the time range of your import:
https://localhost/app/kibana#/dashboard/94b52620-342a-11e7-9d52-4f090484f59e?_g=(refreshInterval:(display:Off,pause:!f,value:0),time:(from:'2010-01-27T00:00:00.000Z',mode:absolute,to:'2010-01-28T00:00:00.000Z'))

or you can manually set your Time Range to be:
From: 2010-01-27    To: 2010-01-28

When I searched the interface for NIDS alerts (after adjusting the time range), I found results:


The alerts show up in Sguil, too!



This is a wonderful development for the Security Onion community. Being able to import .pcap files and analyze them with the standard SO tools and processes, while preserving timestamps, makes SO a viable network forensics platform.

This thread in the mailing list covers the new script.

I suggest running it on an evaluation system, probably in a virtual machine. I did all my testing on VirtualBox. Check it out!

Monday, January 22, 2018

Lies and More Lies

Following the release of the Spectre and Meltdown CPU attacks, the security community wondered if other researchers would find related speculative execution attacks. When the following appeared, we were concerned:

"Skyfall and Solace

More vulnerabilities in modern computers.

Following the recent release of the Meltdown and Spectre vulnerabilities, CVE-2017-5175, CVE-2017-5753 and CVE-2017-5754, there has been considerable speculation as to whether all the issues described can be fully mitigated. 

Skyfall and Solace are two speculative attacks based on the work highlighted by Meltdown and Spectre.

Full details are still under embargo and will be published soon when chip manufacturers and Operating System vendors have prepared patches.

Watch this space..."

It turns out this was a hoax. The latest version of the site says, in part:

"With little more than a couple of quickly registered domain names, thousands of people were hooked...

Skyfall

The idea here was to suggest a link to Intel's Skylake processor.

Solace

The idea here was to suggest a link to the Solaris operating system.

Copy the styling of the original Meltdown and Spectre sites and add a couple of favicons based loosely on the Intel and Solaris logos and I was nearly done.

The final step was to add on https, because if a site's got an SSL certificate it must be legitimate, and the bait was set."

The problem with this "explanation" is that it wasn't just a logo, domain name and SSL certificate. The "security professional" who created this site outright lied, as shown at the top of this post. Don't fall for his false narrative.

I'm not naming names or linking to the sites here, because the person responsible already thinks he's too clever.

Tuesday, January 16, 2018

Addressing Innumeracy in Reporting

Anyone involved in cybersecurity reporting needs a strong sense of numeracy, or mathematical literacy. I see two sorts of examples of innumeracy repeatedly in the media.

The first involves the time value of money. Recently CNN claimed Amazon CEO Jeff Bezos was the "richest person in history" and Recode said Bezos was "now worth more than Bill Gates ever was." Thankfully both Richard Stiennon and Noah Kirsch recognized the foolishness of these reports, correctly noting that Bezos would only rank number 17 on a list where wealth was adjusted for inflation.

This failure to recognize the time value of money is pervasive. Just today I heard the host of a podcast claim that the 1998 Jackie Chan movie Rush Hour was "the top grossing martial arts film of all time." According to Box Office Mojo, Rush Hour earned $244,386,864 worldwide. Adjusting for inflation, in 2017 dollars that's $367,509,865.67 -- impressive!

For comparison, I researched the box office returns for Bruce Lee's Enter the Dragon. Box Office Mojo lacked data, but I found a 2017 article stating his 1973 movie earned "$25 million in the U.S. and $90 million worldwide, excluding Hong Kong." If I adjust the worldwide figure of $90 million for inflation, in 2017 dollars that's $496,864,864.86 -- making Enter the Dragon easily more successful than Rush Hour.
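
For those who want to check the math, the adjustment is a simple ratio of price levels. Here is a rough sketch using approximate annual average CPI-U values (the CPI figures are my approximations; official numbers may differ slightly):

adjusted dollars = nominal dollars x (CPI in target year / CPI in original year)

Enter the Dragon: $90,000,000 x (245.1 / 44.4) = approximately $496.8 million in 2017 dollars
Rush Hour: $244,386,864 x (245.1 / 163.0) = approximately $367.5 million in 2017 dollars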

If you're wondering about Crouching Tiger, Hidden Dragon, that 2000 movie earned $213,525,736 worldwide. That movie earned less than Rush Hour, and arrived two years later, so it's not worth doing the inflation math.

The take-away is that any time you are comparing dollars from different time periods, you must adjust for inflation for your comparisons to have any meaning whatsoever.

Chart by @CanadianFlags
The second sort of innumeracy I'd like to highlight today also involves money, but in a slightly different way. This involves changes in values over time.

For example, a company may grow revenue from 2015 to 2016, with 2015 revenue being $100,000 and 2016 being $200,000. That's a 100% gain.

If the company grows another $100,000 from 2016 to 2017, from $200,000 to $300,000, the growth rate has declined to 50%. To have maintained a 100% growth rate, the company needed to make $400,000 in 2017.

That same $100,000 increase isn't so great when compared to the new base value.
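
Expressed as a formula:

growth rate = (new value - old value) / old value

2015 to 2016: ($200,000 - $100,000) / $100,000 = 100%
2016 to 2017: ($300,000 - $200,000) / $200,000 = 50%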

We see the same dynamic at play when tracking the growth of individual stocks or market indices over time.

CNN wrote a story about the 1,000 point rise in the Dow Jones Industrial Average over a period of 7 days, from 25,000 to 26,000. One person Tweeted the chart at the above right, asking "is that healthy?" My answer -- you need a proper chart!

My second reaction was "that's a jump, but it's only 1,000 / 25,000 = 4%." Yes, 4% in 7 days is a lot, but that doesn't even rate in the top 20 one-day percentage gains or losses over the life of the index.

If the DJIA had gained 1,000 points in 7 days 5 years ago, when the market was at 13,649, a rise to 14,649 would have been a 7.3% gain. 20 years ago the market was roughly 3,310, so a 1,000 point rise to 4,310 would have been a massive 30.2% gain.

A better way to depict the growth in the DJIA would be to use a logarithmic chart. The charts below show a linear version on the top and a logarithmic version below it.



Using barcharts.com, I drew the last 30 years of the DJIA at the top using a linear Y axis, meaning there is equal distance between 2,000 and 4,000, 4,000 and 6,000, and so on. The blue line shows the slope of the growth.

I then drew the same period using a logarithmic Y axis, meaning the percentage gains from one line to another are equal. For example, a 100% increase from 1,000 to 2,000 occupies the same distance as the 100% increase from 5,000 to 10,000. The green line shows the slope of the growth.
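
The underlying math: on a logarithmic axis, vertical distance is proportional to the logarithm of the value, so equal ratios occupy equal distances.

log10(2,000) - log10(1,000) = log10(2) = approximately 0.301
log10(10,000) - log10(5,000) = log10(2) = approximately 0.301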

I put the blue and green lines on both charts to permit comparison of the slopes. As you can see, the growth, when properly indicated using a log chart and the green line, is less than the exaggerations introduced by the linear chart blue line.

There is indeed an upturn recently in the log chart, but the growth is probably on trend over time.

While we're talking about the market, let's take one minute to smack down the old trope that "what goes up must come down." There is no "law of gravity" in investing, at least for the US market as a whole.

The best example I have seen of the reality of the situation is this 2017 article titled The Dow’s tumultuous 120-year history, in one chart. Here is the chart:

Chart by Chris Kacher, managing director of MoKa Investors

What an amazing story. The title of the article should not be gloomy. It should be triumphant. Despite two World Wars, a Cold War, wars in Korea, Vietnam, the Middle East, and elsewhere, assassinations of world leaders, market depressions and recessions, and so on, the trend line is up, and up in a big way. While the DJIA doesn't represent the entire US market, it captures enough of it to be representative. This is why I do not bet against the US market over the long term. (And yes I recognize that the market and the economy are different.)

Individual companies may disappear, and the DJIA has indeed been changed many times over the years. However, those changes were made so that the index roughly reflected the makeup of the economy. Is it perfect? No. Does it capture the overall directional trend line since 1896? Yes.

Please keep in mind these two sorts of innumeracy -- the time value of money, and the importance of percentage changes over time -- when dealing with numbers and time.