Django: Logging Quickstart in Django 1.3

Logging is one of those things that always seems to involve a lot of boilerplate code that I have to look up online each time.  Although logging is very customizable, most of the time I simply need to log to a file or files.  Therefore, I'm writing this in hopes of compiling a simple configuration that can be dropped into a project.  My understanding is still rudimentary, so feel free to drop a comment.

A logging framework is built into Django 1.3, which makes it easier to integrate logging into a given project.  The framework contains three main components: formatters, handlers, and loggers.  Formatters determine how the output is displayed when it is logged.  Handlers determine where output is logged, whether that is a file, the console, or an email.  Loggers are associated with handlers and are the objects that one interacts with when logging something.  As in the example below, more than one formatter, handler, or logger can be defined in a project.

In settings.py:
import os

# choose a path that is your virtual environment root
VENV_ROOT = os.path.join('/','web','myvenv')

LOGGING = {
    'version': 1,
    'disable_existing_loggers': True,
    'formatters': {
        'verbose': {
            'format': '%(levelname)s %(asctime)s %(module)s %(process)d %(thread)d %(message)s'
        },
        'simple': {
            'format': '%(levelname)s %(message)s'
        },
    },
    'handlers': {
        'file_userlogins': {                # define and name a handler
            'level': 'DEBUG',
            'class': 'logging.FileHandler', # set the logging class to log to a file
            'formatter': 'verbose',         # define the formatter to associate
            'filename': os.path.join(VENV_ROOT, 'log', 'userlogins.log') # log file
        },

        'file_usersaves': {                 # define and name a second handler
            'level': 'DEBUG',
            'class': 'logging.FileHandler', # set the logging class to log to a file
            'formatter': 'verbose',         # define the formatter to associate
            'filename': os.path.join(VENV_ROOT, 'log', 'usersaves.log')  # log file
        },
    },
    'loggers': {
        'logview.userlogins': {              # define a logger - give it a name
            'handlers': ['file_userlogins'], # specify what handler to associate
            'level': 'INFO',                 # specify the logging level
            'propagate': True,
        },     

        'logview.usersaves': {               # define another logger
            'handlers': ['file_usersaves'],  # associate a different handler
            'level': 'INFO',                 # specify the logging level
            'propagate': True,
        },        
    }       
}
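
One gotcha worth noting: logging.FileHandler opens its log file when the configuration is loaded, but it will not create the log directory for you.  A minimal sketch (my own addition, assuming the VENV_ROOT defined above) that can sit next to the LOGGING dict in settings.py:

# make sure the log directory exists; FileHandler creates files but not folders
LOG_DIR = os.path.join(VENV_ROOT, 'log')
if not os.path.isdir(LOG_DIR):
    os.makedirs(LOG_DIR)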

In application code such as in views.py:
import logging   # import the required logging module
logger_logins = logging.getLogger('logview.userlogins')  # logger from settings.py
logger_logins.info('Log info')
# pass variables lazily; note this DEBUG call is filtered out here because the logger level is INFO
logger_logins.debug('Log debug information %s', 'can pass in variables')

# we can log to a different file using the other logger
logger_saves = logging.getLogger('logview.usersaves')
logger_saves.error('log an error')
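
To make the example a little more concrete, here is a sketch of where a logger like 'logview.userlogins' might actually get called: hooking Django's user_logged_in signal, which was also introduced in Django 1.3.  The module name and the logged fields are my own assumptions, not part of the configuration above.

# signals.py (hypothetical module name) - log each successful login
import logging

from django.contrib.auth.signals import user_logged_in  # new in Django 1.3

logger = logging.getLogger('logview.userlogins')

def log_user_login(sender, request, user, **kwargs):
    # INFO clears the logger's INFO threshold, so this lands in userlogins.log
    logger.info('user %s logged in from %s', user.username,
                request.META.get('REMOTE_ADDR'))

user_logged_in.connect(log_user_login)

Since Django 1.3 has no AppConfig.ready() hook, this module would need to be imported from somewhere that runs at startup (models.py is the usual spot at that point in time).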

Comments

  1. What I've started doing in every file is a simple "logger = logging.getLogger(__name__)". That __name__ translates into the module name (so something like "yourproject.yourmodule"). Consistent naming everywhere!

    I personally haven't had any use for separate loggers in one file, yet.

  2. Thanks Reinout. That seems like a nice convention (a quick sketch of it is included below).

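A minimal sketch of the getLogger(__name__) convention from the comment above, assuming a project package named myproject (the name and layout are hypothetical):

# myproject/views.py (hypothetical module)
import logging

logger = logging.getLogger(__name__)  # named after the module, e.g. 'myproject.views'
logger.info('module-level logger in action')

# In settings.py, a single catch-all logger keyed on the package name
# picks up every 'myproject.*' logger created this way:
#     'loggers': {
#         'myproject': {
#             'handlers': ['file_userlogins'],
#             'level': 'INFO',
#             'propagate': True,
#         },
#     }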
