April 20, 2017 By Dan Reichert

Best Practices for Creating Custom Logs - Part II

Diving Deeper

Now that you have an overview of custom logs and what goes into a good logging practice from Part I of this series, it’s time to look further into what you should log in your system and why. This will be broken up into two parts: the first covers timestamps and content, and the second covers syntax and documentation.

Timestamp

The first and most critical component of just about any log syntax is the timestamp - the “when”. A timestamp is important because it tells you exactly when an event took place in the system and was logged. Without this component, you’ll be relying on your log analysis solution to stamp each entry based on when it arrived. Adding a timestamp at the exact point an entry is logged ensures you consistently and accurately place the entry at the point in time it occurred. RFC 3339 defines the standard date and time format on the internet. Your timestamp should include year, month, day, hour, minute, second, and timezone. Optionally, you may also want to include sub-second precision, depending on how precise your logs need to be for analysis. For Sumo Logic, you can read about the supported timestamp formats here - Timestamps, Time Zones, Time Ranges, and Date Formats.
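As a quick illustration, here is a minimal sketch (Python, standard library only) of producing an RFC 3339-style timestamp with timezone offset and millisecond precision; the function name is just for illustration:

```python
from datetime import datetime, timezone

def rfc3339_now() -> str:
    """Return the current local time as an RFC 3339 timestamp with milliseconds."""
    now = datetime.now(timezone.utc).astimezone()  # local time, with UTC offset attached
    return now.isoformat(timespec="milliseconds")  # e.g. 2017-04-10T09:50:32.123-07:00

print(rfc3339_now())
```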

Log Content

To capture what happened, your log content can include data such as the severity of the event (e.g., low, medium, high, or 1 through 5), success or failure, status codes, resource URIs, or anything else that will help you or your organization know exactly what happened in an event. You should be able to take a single log message or entry out of a log file and know most or all of the critical information without depending on the log file’s name, its storage location, or automatic metadata tagging from your tool. Your logs should tell a story. If they’re complex, they should also be documented, as discussed later on.
Bad Logs
As a bad example, you may have a log entry like this:
2017-04-10 09:50:32 -0700 Success
While you know that on April 10, 2017 at 9:50 AM (UTC-07:00) an event happened and it was a success, you don’t really know anything else. If you know your system inside and out, you may know exactly what was successful; however, if you handed these logs over to a peer to do some analysis, they may be completely clueless!
Good Logs
Once you add some more details, the picture starts coming together:
2017-04-10 09:50:32 -0700 GET /checkout/flights/ Success
From these changes you know that on April 10th, a GET method was successfully performed on the resource /checkout/flights/. Finally, you may need to know who was involved and where. While the previous log example can technically provide a decent amount of information, especially if you have a tiny environment, it’s always good to provide as much detail as possible, since you don’t know what you may need in the future. For example, usernames and user IPs are good to log:
2017-04-10 09:50:32 -0700 dan12345 10.0.24.123 GET /checkout/flights/ Success
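As a sketch of how an application might emit entries in this shape, the following uses Python’s standard logging module. The field names (user, client_ip, method, resource, outcome) simply mirror the example line above and are assumptions for illustration, not a prescribed format:

```python
import logging

# Timestamp first, then the event fields, matching the example entry above.
formatter = logging.Formatter(
    fmt="%(asctime)s %(user)s %(client_ip)s %(method)s %(resource)s %(outcome)s",
    datefmt="%Y-%m-%d %H:%M:%S %z",
)

handler = logging.StreamHandler()
handler.setFormatter(formatter)

logger = logging.getLogger("checkout")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Event details are passed via `extra` so the formatter can place them in the line.
logger.info(
    "request completed",
    extra={
        "user": "dan12345",
        "client_ip": "10.0.24.123",
        "method": "GET",
        "resource": "/checkout/flights/",
        "outcome": "Success",
    },
)
```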
Telling the Story
Now you have even more details about what happened. A username or IP may individually be enough, but sometimes (especially for security) you’ll want to capture as much as you can about the user, since user accounts can be hacked and/or accessed from other IPs. You have just about enough at this point to really tell a story. To make sure you know whatever you can about the event, you also want to know where things were logged. Again, while your logging tool may do this for you automatically, there are many factors that may affect the integrity of that metadata, and it’s best to have your raw messages tell as much as possible. To complete this, let’s add the gateway that logged the entry:
2017-04-10 09:50:32 -0700 dan12345 10.0.24.123 GET /checkout/flights/ credit.payments.io Success
Now you know that this was performed on a gateway named credit.payments.io. If you have multiple gateways or containers, you may come to a point of needing to identify which one to fix. Omitting this data from your log may result in a headache trying to track down exactly where an event occurred. This was just one example of the basics of a log. You can add as much detail to the entry as you need to capture any insight you want now or in the future. For example, you may want to know other information about this event. How many flights were purchased?
2017-04-10 09:50:32 -0700 dan12345 10.0.24.123 GET /checkout/flights/ credit.payments.io Success 2
Where 2 is the number of flights. What was the total value of the flights purchased?
2017-04-10 09:50:32 -0700 dan12345 10.0.24.123 GET /checkout/flights/ credit.payments.io Success 2 241.98
Where 2 is the number of flights, which totaled $241.98. Now that you know what to put into your custom logs, you should also consider deciding on a standard syntax throughout your logs. This will be covered in the last part of this series on best practices for creating custom logs.
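Because the entry follows one consistent, space-delimited layout, each field is easy to extract at analysis time. Below is a minimal parsing sketch in Python; the field names are assumptions made for illustration, not part of any prescribed format:

```python
import re

# One named group per field in the example entry, in the order they appear.
LOG_PATTERN = re.compile(
    r"(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} [+-]\d{4}) "
    r"(?P<user>\S+) (?P<client_ip>\S+) (?P<method>\S+) (?P<resource>\S+) "
    r"(?P<gateway>\S+) (?P<outcome>\S+) (?P<flights>\d+) (?P<total>[\d.]+)"
)

entry = ("2017-04-10 09:50:32 -0700 dan12345 10.0.24.123 GET /checkout/flights/ "
         "credit.payments.io Success 2 241.98")

match = LOG_PATTERN.match(entry)
if match:
    fields = match.groupdict()
    print(fields["gateway"], fields["outcome"], fields["total"])
    # credit.payments.io Success 241.98
```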


Dan Reichert

Dan Reichert is a Sales Engineer at Sumo Logic with over a decade of experience in technology in the US Army, IBM, and various startups. He is a graduate of the iSchool at Syracuse University with a master’s degree in Information Management and of the University of Central Florida with a bachelor’s degree in Information Systems Technology. He is an AWS Certified Solutions Architect - Associate.
