I found a really neat use for Python context managers in the codebase at my new job over at Custobar. We run large, sometimes long-running Elasticsearch indexing tasks with Celery, and exceptions inside these tasks would occasionally clog up our Sentry. Someone smarter than me had written a context manager that queues any exceptions and does some magic before handing them to Sentry. I liked the idea and tried writing a super simple context manager of my own, just to make the Celery tasks in my projects a bit cleaner.
SentryContextManager
I came up with the following:
# core.utils.sentry.py
from sentry_sdk import capture_exception


class SentryContextManager:
    """Context manager to capture exceptions and send them to Sentry."""

    def __enter__(self):
        pass

    def __exit__(self, exc_type, exc_value, traceback):
        if exc_type is not None:
            capture_exception(exc_value)
            print("Exception captured by Sentry: ")
            print(exc_type, exc_value, traceback)
        return True
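The magic is in the return value of __exit__: returning a truthy value tells Python to suppress the exception, so execution simply carries on after the with block. Here is a minimal, Sentry-free sketch of just that behaviour (the Suppress class is a throwaway name for illustration):

class Suppress:
    def __enter__(self):
        pass

    def __exit__(self, exc_type, exc_value, traceback):
        # A truthy return value tells Python to swallow the exception.
        return True


with Suppress():
    raise ValueError("boom")

print("still running")  # this line is reached, because __exit__ returned True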
Then I created a single ready-to-use instance to import elsewhere:
# core.utils.__init__.py
from core.utils.sentry import SentryContextManager

send_exceptions_to_sentry = SentryContextManager()
and finally I could replace this:
# core.tasks.py
from celery import shared_task
from sentry_sdk import capture_exception

@shared_task
def my_task():
    try:
        division_by_zero = 1 / 0
    except Exception as e:
        capture_exception(e)
    print("This always prints!")
with this:
# core.tasks.py
from celery import shared_task
from core.utils import send_exceptions_to_sentry

@shared_task
def my_task():
    with send_exceptions_to_sentry:
        division_by_zero = 1 / 0
    print("This always prints!")
I like the simplicity and readability of the context manager approach a lot. This is obviously not useful everywhere, but for simple cases like this it works great!
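As an aside, if you prefer a function over a class, the same idea can be written with contextlib.contextmanager. This is just a rough sketch (sentry_capture is a name I made up), not what I actually use:

from contextlib import contextmanager

from sentry_sdk import capture_exception


@contextmanager
def sentry_capture():
    try:
        yield
    except Exception as exc:
        # Exception rather than BaseException, so Ctrl-C and SystemExit still propagate.
        capture_exception(exc)

Usage is the same apart from the parentheses: with sentry_capture(): ...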