How to Analyze MongoDB Logs for Troubleshooting

Last updated: October 2024

MongoDB is a cross-platform, document-oriented database program. It’s classified as a NoSQL database, which means it’s nontabular and stores data differently from relational databases like MySQL. MongoDB uses JSON-like documents with optional schemas, making it easy to store and work with data without worrying about relationships and tables.

Many large companies use MongoDB, and it’s easy to see why. MongoDB is easy to scale and can outperform a SQL database for many workloads. If you’re looking for a database that won’t cost you time thinking about relationships and scalability, MongoDB is a great option. It’s a leading open-source NoSQL database written in C++, which is another reason to choose it: it’s free and performs well. In this post, I’ll explain how you can analyze MongoDB log messages and use the results of your analysis to solve problems.

What Are MongoDB Logs?

Like any other database, such as MySQL, MongoDB logs messages while it runs. Used well, these logs can save your business: knowing about a problem before it becomes a problem means you can fix it before it affects anyone. For example, if you’re using MongoDB as the database behind your website, the logs can help you resolve an error before it ever reaches users.

For this to work, you might need a tool to monitor MongoDB logs and send notifications in real time. This is where a log management tool like SolarWinds® Loggly® can help. Loggly provides a variety of ways to quickly visualize and analyze log data. This lets you organize, search, and alert on log data and detect issues in your application and infrastructure before there’s a user impact.


What Types of Event Messages Should I Monitor in MongoDB Logs?

It’s important to know what’s useful in MongoDB logs. Not every log message will help solve the problem you’re facing in your application, and MongoDB emits a lot of them. For the most part, you’ll be looking for log messages at the fatal, error, warning, and debug levels.

According to the official MongoDB documentation, log messages have severity levels ranging from fatal down to debug, with debug being the lowest level.

Starting in MongoDB 4.4, mongod / mongos instances output all log messages in structured JSON format. Log entries are written as a series of key-value pairs, where each key indicates a log message field type, such as “severity,” and each corresponding value records the associated logging information for that field type, such as “informational.”

MongoDB Log Types

MongoDB provides various types of logs to help you monitor and troubleshoot the database. Each type of log serves a specific purpose, and examples include:

Operation Logs

Operation logs track MongoDB’s activities and operations. They provide insights into the database’s performance, the queries executed, and the status of different operations. For example, they capture detailed information, such as CRUD operations, queries, updates, inserts, and deletes.

Error Logs

Error logs capture information about errors and warnings encountered by the MongoDB server. Examples may include failed operations, system failures, and other critical issues. These logs are essential for diagnosing and resolving problems that may affect the database’s stability and performance.

Audit Logs

Audit logs provide a detailed record of security-related events, such as who accessed the database, what actions were performed, and when. These logs are crucial for compliance and security monitoring.

MongoDB Log Levels

MongoDB uses log levels to determine the verbosity and type of information logged by the MongoDB server.

  • DEBUG: provides the most detailed information about MongoDB’s internal operations
  • INFO: gives messages about the MongoDB server’s normal operations like startup and shutdown
  • WARN: highlights potentially harmful situations that might not immediately cause an error but could lead to issues if not addressed
  • ERROR: reports serious issues that have caused an operation to fail
  • FATAL: signals critical errors that need immediate attention as they typically cause the MongoDB process to terminate
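In the structured JSON format used since MongoDB 4.4, these severities appear as single-letter codes under the s key (F, E, W, I, and D1–D5). As a rough sketch (this helper is illustrative, not part of MongoDB or mtools), you can filter parsed log entries down to a chosen severity or worse in a few lines of Python:

```python
import json

# Single-letter severity codes from MongoDB's structured JSON logs,
# ordered from most to least severe. "D" covers debug levels D1-D5.
SEVERITY_ORDER = ["F", "E", "W", "I", "D"]

def at_least(entry, level):
    """Return True if a parsed log entry is at `level` severity or worse."""
    code = entry.get("s", "D")[0]  # "D1".."D5" all map to "D"
    return SEVERITY_ORDER.index(code) <= SEVERITY_ORDER.index(level)

def filter_log(lines, level="W"):
    """Keep only entries at warning severity or worse from JSON log lines."""
    entries = (json.loads(line) for line in lines if line.strip())
    return [e for e in entries if at_least(e, level)]

log_lines = [
    '{"s": "I", "c": "NETWORK", "msg": "Listening on"}',
    '{"s": "E", "c": "STORAGE", "msg": "Write failed"}',
    '{"s": "W", "c": "CONTROL", "msg": "Low disk space"}',
]
print([e["msg"] for e in filter_log(log_lines)])
# prints ['Write failed', 'Low disk space']
```

This way, routine informational chatter drops out, leaving only the warnings and errors worth investigating.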

Understanding MongoDB Log Messages

MongoDB logs can be found in the MongoDB log files at /var/log/mongodb/mongodb.log. If you can’t find the log files from this location, you can check the mongodb.conf. This is the configuration file that specifies where the logs are stored. You can locate mongodb.conf by navigating to /etc/mongodb.conf. Once you’ve found the mongodb.conf file, you can look for the logpath, which will specify the directory where your log file is located.

The first thing you’ll need to know is how the log file is structured. As mentioned earlier, the log file is structured in JSON format. Log entries are written as a series of key-value pairs, where each key indicates a log message field type, such as “severity.” Each corresponding value records the associated logging information for that field type, such as “informational.” Previously, log entries were output as plain text, which wasn’t always easy to read. The following is an example log message in JSON format as it would appear in the MongoDB log file:

{
  "t": {
    "$date": "2020-12-12T15:16:17.180+00:00"
  },
  "s": "I",
  "c": "NETWORK",
  "id": 12345,
  "ctx": "listener",
  "msg": "Listening on",
  "attr": {
    "address": "127.0.0.1"
  }
}

In this log, you can see the s key, which represents severity; the severity field type indicates the severity level associated with the logged event, and here its value I stands for informational. The c key represents the component; the component field type indicates the category a logged event belongs to, such as network. In this example, the component has a corresponding value of NETWORK, which tells you the network component was responsible for this particular message.
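Because each entry is plain JSON, any JSON library can pull those fields out programmatically. Here’s a minimal Python sketch using the example entry above:

```python
import json

# The example log entry from above, exactly as mongod would write it
line = '''{
  "t": {"$date": "2020-12-12T15:16:17.180+00:00"},
  "s": "I",
  "c": "NETWORK",
  "id": 12345,
  "ctx": "listener",
  "msg": "Listening on",
  "attr": {"address": "127.0.0.1"}
}'''

entry = json.loads(line)
# Each key is a field type: s = severity, c = component, msg = the message
print(entry["s"], entry["c"], entry["msg"], entry["attr"]["address"])
# prints: I NETWORK Listening on 127.0.0.1
```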

Setting Up Logging in MongoDB

Here’s a list of steps for configuring MongoDB Logging:

1. Find the MongoDB configuration file. It’s usually located at /etc/mongod.conf or /etc/mongodb.conf.

2. Configure the log path in the configuration file. Set the systemLog.path option to specify the location of the log file.

systemLog:
  destination: file
  path: /var/log/mongodb/mongod.log
  logAppend: true #Append logs to the file instead of overwriting

3. Set log verbosity levels. Configure the verbosity of logs using the systemLog.verbosity setting. Verbosity levels range from 0 to 5, where 0 is the default level and 5 is the most verbose.

systemLog:
  verbosity: 1

4. Start or restart MongoDB. Apply the new settings by starting or restarting the MongoDB service.

Analyzing a MongoDB Log File

Now that you understand what a MongoDB log message is and what information to expect from it, let’s analyze a log file. When you’re working with MongoDB, you have several options for opening and analyzing the log file. One option is to use the Unix cat command to open the file.

$ cat mongod.log | jq

#Output
------------------------------------------------------

{
  "t": {
    "$date": "2020-12-12T15:16:17.180+00:00"
  },
  "s": "F",
  "c": "NETWORK",
  "id": 1002,
  "ctx": "listener",
  "msg": "Failed to connect",
  "attr": {
    "address": "127.0.0.1"
  }
}

Here, the output is piped to jq, a lightweight and flexible command-line JSON processor. It pretty-prints the messages on the terminal in a readable format. If you want only the most recent log entry, you can run a command combining cat, tail, and jq.

$ cat mongod.log | tail -1 | jq

We already know the MongoDB log file lives at /var/log/mongodb/mongodb.log, and we’ve opened it. Depending on the problems on your server, you’re likely to see different messages from the one shown here.

Using mtools to Analyze MongoDB Logs

While it’s fine to use Unix commands to read and analyze MongoDB logs, some tools are built specifically to help you analyze log files better. One very useful open-source tool is mtools.

mtools is a collection of helper scripts developed in Python designed to parse and filter MongoDB log files, visualize information from log files, and quickly set up complex MongoDB test environments on a local machine. Let’s say you have slow queries running against MongoDB affecting the database’s performance. Using mtools, you can get an idea of where MongoDB is slowing down. As a first step, look at the “queries” section of mloginfo. Here’s an example output, created with the following mtools command:

mloginfo mongod.log --queries

QUERIES

namespace                    pattern                                      count    min (ms)    max (ms)    mean (ms)    95%-ile (ms)    sum (ms)

serverside.scrum_master      {"datetime_used": {"$ne": 1}}                    20       15753       17083        16434         17039.3      328692
serverside.django_session    {"_id": 1}                                       562         101        1512          317           842.6      178168
serverside.user              {"_types": 1, "emails.email": 1}                 700         101        1262          201          684.85      162311
local.slaves                 {"_id": 1, "host": 1, "ns": 1}                   131         101        1048          310           819.5       40738
serverside.email_alerts      {"_types": 1, "email": 1, "pp_user_id": 1}        13         153       11639         2465          8865.2       32053
serverside.sign_up           {"_id": 1}                                        77         103         843          269           551.0       20761
serverside.user_credits      {"_id": 1}                                         6         204         900          369          763.75        2218
serverside.counters          {"_id": 1, "_types": 1}                            8         121         500          263           470.6        2111
serverside.auth_sessions     {"session_key": 1}                                 7         111         684          277           645.6        1940
serverside.credit_card       {"_id": 1}                                         5         145         764          368           705.0        1840
serverside.email_alerts      {"_types": 1, "request_code": 1}                   6         143         459          277           415.0        1663

Each line from left to right shows the namespace, query pattern, and various statistics of this particular namespace/pattern combination. The rows are sorted by the “sum” column in descending order. Sorting using sum shows where the database spent most of its time when the query was executed. The example shows around half the total time is spent on $ne-type queries in the serverside.scrum_master collection. $ne queries are inefficient because they can’t use an index, resulting in a high number of documents scanned and the query taking more time to finish.

In fact, all of the queries took at least 15 seconds (“min” column). The “count” column also shows only 20 of the queries were executed, yet these queries contributed to a large amount of the total time spent, more than double the time of the 700 email queries on serverside.user. This information can help you know where the problem is coming from and be able to solve it.
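You can approximate mloginfo’s “sum” view directly from MongoDB’s structured logs, since slow operations are logged as COMMAND-component entries with a duration attribute. The following Python sketch groups reported durations by namespace; treat the exact attr field names (ns, durationMillis) as assumptions to verify against your own log file:

```python
import json
from collections import defaultdict

def slow_query_totals(lines):
    """Sum reported durations (ms) per namespace from structured log lines."""
    totals = defaultdict(int)
    for line in lines:
        entry = json.loads(line)
        attr = entry.get("attr", {})
        if entry.get("c") == "COMMAND" and "durationMillis" in attr:
            totals[attr.get("ns", "unknown")] += attr["durationMillis"]
    # Sort like mloginfo's "sum" column: biggest time sink first
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Simplified sample entries; real ones carry many more fields
lines = [
    '{"c": "COMMAND", "attr": {"ns": "serverside.scrum_master", "durationMillis": 15753}}',
    '{"c": "COMMAND", "attr": {"ns": "serverside.user", "durationMillis": 201}}',
    '{"c": "COMMAND", "attr": {"ns": "serverside.scrum_master", "durationMillis": 17083}}',
    '{"c": "NETWORK", "attr": {}}',
]
print(slow_query_totals(lines))
# prints [('serverside.scrum_master', 32836), ('serverside.user', 201)]
```

Sorting by the summed duration surfaces the same insight as the mloginfo report: the namespace at the top of the list is where the database spends most of its time.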

Conclusion

Just like any other log messages, MongoDB logs are very helpful when it comes to troubleshooting issues. You can display the log file contents in the terminal for analysis, but even though this works, it’s not easy to read or search data that way, and it gets even worse when the log file holds a large amount of data. A tool like mtools can help you do more, thanks to the many commands it ships with, such as mloginfo, mlogfilter, mplotqueries, mlogvis, and mlaunch. Despite these great commands, the tool has limitations, one of which is that it still displays data on the terminal.

It can be better if you use a system built just for log management and analysis and can provide you with the information you need to resolve an issue or analyze the usage of your MongoDB. A log management tool like SolarWinds Loggly can help simplify collecting and searching log data, so you can focus on using the log data to quickly identify and resolve issues. Loggly provides multiple ways to visualize and analyze log data to help you detect and understand issues in MongoDB as well as issues in other systems and applications.

While setting up and configuring MongoDB logs is crucial for maintaining your database’s health and performance, using an advanced observability tool can significantly enhance your monitoring and troubleshooting capabilities. SolarWinds Observability goes beyond monitoring to provide holistic visibility across your entire infrastructure. It helps you correlate logs with user, application, infrastructure, and network performance metrics and other critical data to anticipate bottlenecks and enable you to take action to prevent emerging issues. If you’re curious, try SolarWinds Observability and see how much easier performance management can be.


This post was written by Mathews Musukuma. Mathews is a software engineer with experience in web and application development. Some of his skills include Python/Django, JavaScript, and Ionic Framework. Over time, Mathews has also developed interest in technical content writing.
