
Conversation

@solnic solnic commented Jan 2, 2026

Initial work on support for Logs.

This follows the requirements described in https://develop.sentry.dev/sdk/telemetry/logs/


Closes #906
Closes #907
Closes #908

@solnic solnic linked an issue Jan 2, 2026 that may be closed by this pull request
@solnic solnic force-pushed the 906-sentry-logger-handler-for-structured-logging branch 15 times, most recently from bcacf6e to bcb816d on January 5, 2026 12:17
@solnic solnic marked this pull request as ready for review January 5, 2026 12:21
@solnic solnic mentioned this pull request Jan 5, 2026
@solnic solnic force-pushed the 906-sentry-logger-handler-for-structured-logging branch from bcb816d to 094f947 on January 6, 2026 12:31
@solnic solnic force-pushed the 906-sentry-logger-handler-for-structured-logging branch from 094f947 to 9e1f449 on January 7, 2026 07:49
@solnic solnic force-pushed the 906-sentry-logger-handler-for-structured-logging branch from 9e1f449 to 3258a25 on January 7, 2026 10:17

@whatyouhide whatyouhide left a comment

I think there are quite a few things to think about here and a couple of race conditions in the buffer process, but I'm excited to get this into the SDK.

@doc since: "12.0.0"
@spec send_log_events([LogEvent.t()]) ::
        {:ok, envelope_id :: String.t()} | {:error, ClientError.t()}
def send_log_events([]), do: {:ok, ""}

If we have a "log batch" struct, would it make sense to have send_log_batch instead of this?

| ClientReport.t()
| Event.t()
| LogBatch.t()
| LogEvent.t()

When do we have an envelope with log events in it instead of a log batch?

Creates a new envelope containing log events.
According to the Sentry Logs Protocol, log events are sent in batches
within a single envelope item with content_type "application/vnd.sentry.items.log+json".

Nits:

Suggested change
within a single envelope item with content_type "application/vnd.sentry.items.log+json".
within a single envelope item with content type `application/vnd.sentry.items.log+json`.

within a single envelope item with content_type "application/vnd.sentry.items.log+json".
All log events are wrapped in a single item with { items: [...] }.
"""
@doc since: "11.0.0"

Typo?

Suggested change
@doc since: "11.0.0"
@doc since: "12.0.0"
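
For context on the envelope format quoted above, here is a minimal sketch of how the batched "log" item could be assembled. The header keys are my reading of the linked protocol docs; the module name, the pre-serialized event maps, and the use of Jason for encoding are illustrative assumptions, not the PR's actual code.

    defmodule LogEnvelopeItemSketch do
      @moduledoc false
      # Sketch only: wraps already-serialized log event maps into the single
      # envelope item described in the doc text above.

      @content_type "application/vnd.sentry.items.log+json"

      def build(log_event_maps) when is_list(log_event_maps) do
        # All events go into one payload under the "items" key.
        payload = Jason.encode!(%{items: log_event_maps})

        headers = %{
          type: "log",
          item_count: length(log_event_maps),
          content_type: @content_type
        }

        {headers, payload}
      end
    end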

@impl GenServer
def handle_cast({:add_event, event}, state) do
  # Check if queue is at max capacity
  if length(state.events) >= @max_queue_size do

This is going to be called constantly. Can we keep track of the current number of events instead of recomputing length(state.events) every time?

else
  events = [event | state.events]

  if length(events) >= state.max_events do

If we do what I suggest above, we don't need to calculate length(events) again here (it's just that number + 1).
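
A rough sketch of the counter approach suggested in the two comments above; the field names (:events, :event_count) and the flush/1 helper are assumptions for illustration, not the PR's actual struct or API.

    @impl GenServer
    def handle_cast({:add_event, event}, state) do
      cond do
        # Hard cap reached: drop the event without traversing the list.
        state.event_count >= @max_queue_size ->
          {:noreply, state}

        # Adding this event fills the batch, so flush right away.
        state.event_count + 1 >= state.max_events ->
          flush([event | state.events])
          {:noreply, %{state | events: [], event_count: 0}}

        # Otherwise just prepend and bump the counter.
        true ->
          new_state = %{
            state
            | events: [event | state.events],
              event_count: state.event_count + 1
          }

          {:noreply, new_state}
      end
    end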

@impl GenServer
def handle_call(:flush, _from, state) do
  send_events(state.events)
  cancel_timer(state.timer_ref)

There's an (unlikely) race condition here where the timer fires while we're in this function and cancel_timer/1 has no effect, leading to the :flush message already sitting in the message queue and being handled right after we flush here.

Two solutions:

  • Switch to gen_statem, which makes all of this pretty easy with its built-in timeout handling.
  • Drain the :flush_timeout message with a receive do :flush_timeout -> :ok after 0 -> :ok end or something similar (sketched below).
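
A sketch of the second option, assuming the timer was started with Process.send_after(self(), :flush_timeout, interval) and that cancel_timer/1 is the buffer's own private helper:

    defp cancel_timer(nil), do: :ok

    defp cancel_timer(timer_ref) do
      # Process.cancel_timer/1 returns false when the timer already fired,
      # in which case the message may be sitting in our mailbox; drain it
      # with a zero-timeout receive so it isn't handled after this flush.
      unless Process.cancel_timer(timer_ref) do
        receive do
          :flush_timeout -> :ok
        after
          0 -> :ok
        end
      end

      :ok
    end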

  do_send_events(events)
else
  # Send asynchronously to avoid blocking in production
  Task.start(fn -> do_send_events(events) end)

Can we start this task under a Task.Supervisor with a configured number of :max_children? We run the (low) risk of blowing things up here if these tasks become backlogged and this GenServer keeps spawning new ones.
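
A sketch of the supervised-task approach; Sentry.LogTaskSupervisor is a made-up name, while :max_children and the {:error, :max_children} return are standard Task.Supervisor behavior.

    # In the SDK's supervision tree:
    children = [
      {Task.Supervisor, name: Sentry.LogTaskSupervisor, max_children: 5}
    ]

    # In the buffer process, instead of a bare Task.start/1:
    case Task.Supervisor.start_child(Sentry.LogTaskSupervisor, fn -> do_send_events(events) end) do
      {:ok, _pid} ->
        :ok

      # Supervisor saturated: fall back to sending inline (or drop the batch).
      {:error, :max_children} ->
        do_send_events(events)
    end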

@solnic solnic force-pushed the 906-sentry-logger-handler-for-structured-logging branch from 3258a25 to 4ea6119 on January 8, 2026 09:03
@solnic solnic force-pushed the 906-sentry-logger-handler-for-structured-logging branch from 7c929ff to 515fa87 on January 8, 2026 09:27

@sl0thentr0py sl0thentr0py left a comment

2 comments about schema changes

@dingsdax dingsdax removed their request for review January 12, 2026 08:43
@solnic solnic requested a review from sl0thentr0py January 14, 2026 12:58
@solnic solnic requested a review from whatyouhide January 14, 2026 14:13

Development

Successfully merging this pull request may close these issues.

  • Support for parameterized log payloads
  • Support for default attributes in Structured Logging
  • Sentry logger handler for Structured Logging
