class Datadog::Profiling::Scheduler

Periodically (every DEFAULT_INTERVAL_SECONDS) takes data from the `Recorder` and pushes it to all configured `Exporter`s. Runs on its own background thread.
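The drain-and-export flow can be sketched with self-contained stubs (`StubRecorder`, `StubExporter`, and `StubFlush` are hypothetical stand-ins for illustration, not the gem's classes): one scheduler tick drains the recorder once and hands the same flush to every exporter.

```ruby
StubFlush = Struct.new(:event_count)

# Stand-in for Recorder: holds buffered events until drained
class StubRecorder
  def initialize(event_count)
    @event_count = event_count
  end

  # Drains all buffered events, analogous to Recorder#flush
  def flush
    drained = StubFlush.new(@event_count)
    @event_count = 0
    drained
  end

  def empty?
    @event_count.zero?
  end
end

# Stand-in for Exporter: records what it was asked to export
class StubExporter
  attr_reader :received

  def initialize
    @received = []
  end

  def export(flush)
    @received << flush
  end
end

recorder  = StubRecorder.new(3)
exporters = [StubExporter.new, StubExporter.new]

# One scheduler tick: drain the recorder once, push the same flush to every exporter
flush = recorder.flush
exporters.each { |exporter| exporter.export(flush) } if flush.event_count > 0

puts exporters.map { |e| e.received.length }.inspect # => [1, 1]
puts recorder.empty?                                 # => true
```

Note that the recorder is drained exactly once per tick; all exporters receive the same flush object.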

Constants

DEFAULT_INTERVAL_SECONDS
MINIMUM_INTERVAL_SECONDS
PROFILE_DURATION_THRESHOLD_SECONDS

Profiles with a duration below this threshold are not reported.

Attributes

exporters[R]
recorder[R]

Public Class Methods

new(recorder, exporters, fork_policy: Workers::Async::Thread::FORK_POLICY_RESTART, interval: DEFAULT_INTERVAL_SECONDS, enabled: true)
# File lib/ddtrace/profiling/scheduler.rb, line 26
def initialize(
  recorder,
  exporters,
  fork_policy: Workers::Async::Thread::FORK_POLICY_RESTART, # Restart in forks by default
  interval: DEFAULT_INTERVAL_SECONDS,
  enabled: true
)
  @recorder = recorder
  @exporters = [exporters].flatten

  # Workers::Async::Thread settings
  self.fork_policy = fork_policy

  # Workers::IntervalLoop settings
  self.loop_base_interval = interval

  # Workers::Polling settings
  self.enabled = enabled
end

Public Instance Methods

after_fork()
# File lib/ddtrace/profiling/scheduler.rb, line 69
def after_fork
  # Clear recorder's buffers by flushing events.
  # Objects from parent process will copy-on-write,
  # and we don't want to send events for the wrong process.
  recorder.flush
end

loop_back_off?()
# File lib/ddtrace/profiling/scheduler.rb, line 65
def loop_back_off?
  false
end
loop_wait_before_first_iteration?()

Configure Workers::IntervalLoop to not report immediately when scheduler starts

When a scheduler gets created (or reset), we don't want it to try to flush immediately; we want it to wait for the loop interval first. This avoids an issue where, if the application had just started but this thread took a bit longer to spin up, the recorder would already contain a few samples and the scheduler would report a mostly-empty profile.

# File lib/ddtrace/profiling/scheduler.rb, line 81
def loop_wait_before_first_iteration?
  true
end
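The effect of returning `true` here can be illustrated with a toy loop (`run_ticks` is a hypothetical helper written for this example, not the `Workers::IntervalLoop` implementation): waiting first shifts the work to the end of each interval instead of the start.

```ruby
# Toy model of an interval loop: each tick either waits then works,
# or works then waits, depending on the wait_first flag.
def run_ticks(wait_first:, ticks:)
  log = []
  ticks.times do
    log << :wait if wait_first
    log << :work
    log << :wait unless wait_first
  end
  log
end

puts run_ticks(wait_first: true,  ticks: 2).inspect # => [:wait, :work, :wait, :work]
puts run_ticks(wait_first: false, ticks: 2).inspect # => [:work, :wait, :work, :wait]
```

With `wait_first: true` (the scheduler's choice), the first flush only happens after a full interval has elapsed, so the first reported profile covers a full profiling period.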

perform()
# File lib/ddtrace/profiling/scheduler.rb, line 50
def perform
  # A profiling flush may be called while the VM is shutting down, to report the last profile. When we do so,
  # we impose a strict timeout. This means this last profile may or may not be sent, depending on whether the
  # flush can finish within that strict timeout.
  # This can be somewhat confusing (why did it not get reported?), so let's at least log what happened.
  interrupted = true

  begin
    flush_and_wait
    interrupted = false
  ensure
    Datadog.logger.debug('#flush was interrupted or failed before it could complete') if interrupted
  end
end

start()
# File lib/ddtrace/profiling/scheduler.rb, line 46
def start
  perform
end

work_pending?()
# File lib/ddtrace/profiling/scheduler.rb, line 85
def work_pending?
  !recorder.empty?
end

Private Instance Methods

duration_below_threshold?(flush)
# File lib/ddtrace/profiling/scheduler.rb, line 129
def duration_below_threshold?(flush)
  (flush.finish - flush.start) < PROFILE_DURATION_THRESHOLD_SECONDS
end
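The threshold check can be illustrated with a stub flush (`FakeFlush` and the 1.0s threshold are hypothetical values chosen for the example; the real cutoff is `PROFILE_DURATION_THRESHOLD_SECONDS`):

```ruby
# Stub flush carrying start/finish timestamps (as floats) and an event count
FakeFlush = Struct.new(:start, :finish, :event_count)

threshold = 1.0 # hypothetical stand-in for PROFILE_DURATION_THRESHOLD_SECONDS

# Same comparison as duration_below_threshold?
below_threshold = ->(flush) { (flush.finish - flush.start) < threshold }

puts below_threshold.call(FakeFlush.new(0.0, 0.4, 10)) # => true  (profile skipped)
puts below_threshold.call(FakeFlush.new(0.0, 5.0, 10)) # => false (profile exported)
```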

flush_and_wait()
# File lib/ddtrace/profiling/scheduler.rb, line 91
def flush_and_wait
  run_time = Datadog::Utils::Time.measure do
    flush_events
  end

  # Update wait time to try to wake consistently on time.
  # Don't drop below the minimum interval.
  self.loop_wait_time = [loop_base_interval - run_time, MINIMUM_INTERVAL_SECONDS].max
end
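The wait-time adjustment can be worked through with illustrative numbers (the 60s base and 1s minimum below are assumed for the example, not taken from the gem's constants): the next sleep shrinks by however long the flush took, so wake-ups stay on a consistent cadence, but it never drops below the minimum.

```ruby
base_interval    = 60.0 # stands in for DEFAULT_INTERVAL_SECONDS (illustrative)
minimum_interval = 1.0  # stands in for MINIMUM_INTERVAL_SECONDS (illustrative)

# Same computation as flush_and_wait: subtract the flush's run time
# from the base interval, clamped to the minimum.
next_wait = lambda do |run_time|
  [base_interval - run_time, minimum_interval].max
end

puts next_wait.call(2.5)  # => 57.5
puts next_wait.call(75.0) # => 1.0 (a flush longer than the interval clamps to the minimum)
```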

flush_events()
# File lib/ddtrace/profiling/scheduler.rb, line 101
def flush_events
  # Get events from recorder
  flush = recorder.flush

  if duration_below_threshold?(flush)
    Datadog.logger.debug do
      "Skipped exporting profiling events as profile duration is below minimum (#{flush.event_count} events skipped)"
    end

    return flush
  end

  # Send events to each exporter
  if flush.event_count > 0
    exporters.each do |exporter|
      begin
        exporter.export(flush)
      rescue StandardError => e
        Datadog.logger.error(
          "Unable to export #{flush.event_count} profiling events. Cause: #{e} Location: #{Array(e.backtrace).first}"
        )
      end
    end
  end

  flush
end