Paullgdc/native spans#1937
Draft
paullegranddc wants to merge 143 commits into
Conversation
# What does this PR do? This PR allows sending process_tags through the Remote Configuration payload. It comes with DataDog/dd-trace-php#3658 testing the feature in the PHP tracer. # Motivation Process tags must be sent by dd-trace-php for every product, including RC. The PHP tracer uses libdatadog to send RC payloads. # How to test the change? The change is mainly validated in dd-trace-php. I also added a test here and updated the one impacted by the addition of process_tags to the `.proto`. Co-authored-by: louis.tricot <louis.tricot@datadoghq.com>
# What does this PR do? We send the target triple as the `target_triple` tag. This is fine, but runtime platform is a more appropriate name, and it maintains parity with profiling: [feat(profiling): add runtime_platform tag automatically](#954) # How to test the change? The corresponding unit tests have been updated. Co-authored-by: gyuheon.oh <gyuheon.oh@datadoghq.com>
…ues (#1722) # What does this PR do? * Change the header map type passed throughout data-pipeline and trace utils from `Hashmap<&'static str, String>` to `http::HeaderMap`. This should not cause extra allocations for fixed header names, as the header names for string values are "const constructed" and trivially copyable. In fact it should cause fewer allocations, as header values are now `http::HeaderValue` instead of `String`: the static ones don't require an allocation, and clone becomes a shallow copy. # Motivation OTLP support requires the ability to define extra headers sent with the payload in configuration. # Additional Notes The first iteration I went through created a `Hashmap<http::HeaderName, String>`, but this does not work, as `http::HeaderName` implements `Borrow<str>` but does not hash like the `&str` it represents (see hyperium/http#824)
# What does this PR do? This PR replaces a bunch of sequentially consistent atomic accesses on ops counters with weaker relaxed accesses, cleaning up a leftover TODO. # Motivation The motivation for using the weakest applicable memory ordering is twofold: 1. Performance: relaxed accesses compile to normal, non-atomic loads and stores on standard platforms (x86_64 and arm64 in particular). Whether this particular change has any performance impact is less obvious. 2. Readability: I think my main motivation is that I find it _easier_, at least as a reader, to reason about weaker orderings. For example, a relaxed access indicates that there's no other unsynchronized data that this atomic protects or interacts with, which enables local reasoning (you don't have to care about what other threads might be doing). Sequentially consistent accesses are the converse: they induce a global order involving all other seqcst accesses to this atomic, which is a strong and far-reaching assumption. # Additional Notes This atomic is a counter, which is the poster child for `Relaxed` ordering (you usually only need the atomicity). This counter doesn't protect or interact with unsynchronized memory, so there's no reason to use a stronger ordering. # How to test the change? Should see no difference in behavior, except maybe in performance. Co-authored-by: yann.hamdaoui <yann.hamdaoui@datadoghq.com>
# What does this PR do? Use ephemeral branches rather than a long-lived release branch. # Motivation Having a long-lived branch for releases led to several problems with very little benefit.
# What does this PR do? This adds the strings "thread id" and "thread name" as well-known strings in both Rust and FFI. # Motivation These strings are used by at least PHP, Python, and Ruby as label keys. # Additional Notes None; this is straightforward. # How to test the change? Existing tests were updated; use existing tests. Co-authored-by: levi.morrison <levi.morrison@datadoghq.com>
…#1468) # What does this PR do? In the FFI headers on Windows, this replaces `"extern "` with `"extern __declspec(dllimport) "`. # Motivation These static variables result in a crash if you use them without `__declspec(dllimport) `. # Additional Notes This went unnoticed because none of the examples run on Windows in CI. I am planning to look into running these as part of CI (that work is still in draft). # How to test the change? Build and note you no longer get a crash when using these static vars on Windows 😆 Co-authored-by: gleocadie <gregory.leocadie@datadoghq.com> Co-authored-by: levi.morrison <levi.morrison@datadoghq.com>
# What does this PR do? Only include the compression features actually used by the crate. # Motivation We determined that adding the tracer flare to dd-trace-py was the primary cause that pushed the library size over what is acceptable for datadog-lambda-python. We found that `zip` was including all compression methods by default, which take up a lot of space and are unused. ``` # Before ❯ ls -hal target/release/libdatadog_tracer_flare.* -rw-r--r--@ 1 brett.langdon staff 7.3K Mar 18 12:01 target/release/libdatadog_tracer_flare.d -rw-r--r--@ 1 brett.langdon staff 3.6M Mar 18 12:01 target/release/libdatadog_tracer_flare.rlib # After ❯ ls -hal target/release/libdatadog_tracer_flare.* -rw-r--r--@ 1 brett.langdon staff 7.3K Mar 18 12:02 target/release/libdatadog_tracer_flare.d -rw-r--r--@ 1 brett.langdon staff 3.3M Mar 18 12:00 target/release/libdatadog_tracer_flare.rlib ``` Co-authored-by: brett.langdon <brett.langdon@datadoghq.com>
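The trimming described above would look roughly like this in `Cargo.toml` (the `deflate` feature name is an assumption for illustration; check the `zip` crate's manifest for the exact feature set it exposes):

```toml
[dependencies]
# Before: default features pull in every compression backend.
# zip = "2"

# After: opt out of defaults and enable only what is actually used.
zip = { version = "2", default-features = false, features = ["deflate"] }
```

The same `cargo tree -e features` inspection that surfaces this kind of bloat works for any dependency, not just `zip`.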
# What does this PR do? Avoid waiting for macOS runners when releasing. # Motivation Since the release process only modifies dependencies and changelogs, running the tests on macOS is a waste of time. Co-authored-by: julio.gonzalez <julio.gonzalez@datadoghq.com>
This PR merges the release branch to main Co-authored-by: dd-octo-sts[bot] <200755185+dd-octo-sts[bot]@users.noreply.github.com>
# What does this PR do? Adds a batch variant of `ddog_prof_ProfilesDictionary_insert_str` called `ddog_prof_ProfilesDictionary_insert_strs` that interns a slice of CharSlices into an existing ProfilesDictionary in a single call, writing the resulting StringId2s to a caller-provided MutSlice. # Motivation Every profiler adopting `ProfilesDictionary` needs to insert their known label keys, so may as well make it convenient. # Additional Notes Taegyun and I thought of this while going over some memory allocations, and they had unexpected large allocations for adding 3 label keys. This is because the profile was aggregated over multiple processes, but also it reports the full 1 MiB of virtual memory per shard of the sharded set, and these label keys were the first in their respective buckets. This function doesn't really change that, but it at least makes it more convenient to intern all label strings up front. # How to test the change? Tests were added to existing suites, can just run `cargo test` or nextest etc. Co-authored-by: levi.morrison <levi.morrison@datadoghq.com>
…environments (#1447) # What does this PR do? Implements a thread-based sidecar connection mode as an alternative to the existing subprocess mode. When enabled, the sidecar runs as a Tokio thread within the PHP process rather than as a separate subprocess. **Key implementation details:** - New `thread` connection mode alongside existing `subprocess` mode - Uses an abstract Unix socket (Linux) or named pipe (Windows) for IPC between the PHP-FPM master thread listener and worker processes - The master UID is encoded in the socket/pipe name to support cross-user scenarios (e.g. FPM master as root, workers as `www-data`) - SHM open mode is configurable via a global hook (`set_shm_open_mode`) to support cross-user shared memory access via `fchown`/`SO_PEERCRED` - Orphan promotion: if the master's thread listener is unavailable, a worker can promote itself to master - Uses `current_thread` Tokio runtime to avoid spawning additional OS threads beyond the single listener thread - Windows support via named pipes (where subprocess mode had limitations) # How to test the change? Tested via the `dd-trace-php` integration test suite: - `SidecarThreadModeTest`: verifies multi-request tracing works in thread mode - `SidecarThreadModeRootTest`: verifies cross-user SHM access when FPM master runs as root - `.phpt` unit tests for connection mode configuration and auto-fallback behavior Co-authored-by: bob.weinand <bob.weinand@datadoghq.com>
## What does this PR do? We're migrating Datadog repositories from Codecov to [Datadog Code Coverage](https://docs.datadoghq.com/code_analysis/code_coverage/) for tracking test coverage. This PR is the first step: it adds a Datadog coverage upload **alongside** the existing Codecov upload so we can run both systems in parallel and verify parity before switching over. ## Changes - Added a `DataDog/coverage-upload-github-action@v1` step to the `coverage` workflow, immediately after the existing Codecov upload step. - The existing Codecov upload is **unchanged** — nothing is removed or modified. - The Datadog upload uses `continue-on-error: true`, so it will never block CI even if it fails. ## Why are we doing this? As part of a company-wide effort, we're consolidating code coverage reporting into Datadog's own Code Coverage product. This gives us: - Coverage data integrated directly into Datadog CI Visibility - PR gates and coverage checks natively in Datadog - No dependency on a third-party service (Codecov) for coverage reporting ## Validation CI has run on this PR and both uploads completed successfully. Coverage numbers match: | System | Coverage | |--------|----------| | Codecov | 71.24% | | Datadog | 71.25% | The 0.01% difference is within expected tolerance (rounding differences in line counting). ## Next steps (not in this PR) Once this PR is merged and we've confirmed Datadog coverage is stable over several commits: 1. Remove the Codecov upload step and `CODECOV_TOKEN` secret 2. Remove `.codecov.yml` 3. Optionally configure PR gates in `code-coverage.datadog.yml` ## No action needed from reviewers beyond normal review This is a low-risk, additive change. The new step runs independently of the existing CI pipeline and cannot cause test failures. Co-authored-by: bjorn.antonsson <bjorn.antonsson@datadoghq.com>
# What does this PR do? Ran a [fuzzer](https://github.com/DataDog/obfuscation-parity-tester/tree/fuzzer/crates/fuzzer) to find output differences between this obfuscator and the agent's obfuscator, then fixed issues one by one, even the nonsensical edge cases. # Motivation Reach 100% parity between obfuscation libs. # Additional Notes - Semver check shows breaking changes because the url crate's dependencies were implementing their traits, but it's not a real breaking change # How to test the change? Here is the list of inputs fixed in this PR (one per line). These are obviously not valid URLs, but we need to produce exactly the same outputs as the agent even in these cases. ``` ჸ ! !#ჸ !?ჸ !ჸ # #!ჸ ## #% #'ჸ #\u0001 #\u0001ჸ #ჸ % % %30ჸ %802 . .# .#ჸ ../ჸ /ჸ 0 : : :#\u0001 <! ?# ?#ჸ ?#ჸ ?ჸ ?ჸ#ჸ A:ჸ C:# C:\u0001 [ჸ \"! \\ \\ჸ \u0001 \u0001C: \u0010 \u0010ჸ ჸ ჸ# ჸ#! ჸ#% ჸ#%\u0001 ჸ#'ჸ ჸ#0 ჸ#\u0010 %#ꦿô�¿¿𭄄!!ۓ͡(\u0002ۓߤꬃ ჸ?# ჸ?% ჸ?ჸ झ#\u0003\n䕞\u0006ô�¿¿̿筚͡➑\u0002{ô�¿¿ô�¿¿' झ#\u0003\n䕞\u0006ô�¿¿̿筚͡➑\u0002{ô�¿¿ô�¿¿' झ#\u0003\n䕞\u0006ô�¿¿̿筚͡➑\u0002{ô�¿¿ô�¿¿' ``` Co-authored-by: oscar.ledauphin <oscar.ledauphin@datadoghq.com>
# What does this PR do? Ran a [fuzzer](https://github.com/DataDog/obfuscation-parity-tester/blob/main/crates/fuzzer/src/lib.rs) to find output differences between this obfuscator and the agent's obfuscator, then fixed issues one by one, even the nonsensical edge cases. # Motivation Reach 100% parity between obfuscation libs. # How to test the change? Here are the inputs this PR fixes: redis: `\u000bჸ` redis: `ჸ\n\tჸ` redis_quantize: `\r\n` redis_quantize: `\t` redis_quantize: `ꭺ` redis_quantize: `` redis_quantize: `ᛓᾜਝ\u001b੨` Co-authored-by: oscar.ledauphin <oscar.ledauphin@datadoghq.com>
# What does this PR do? When a crate doesn't exist in the baseline (new crate being added), `cargo semver-checks` exits with code 1 and prints `package 'X' not found`. The semver-level script had no handler for this case, causing it to exit with `Error: unknown level ()`. Added an `elif` branch to detect this message and treat it as a `minor` change — a new crate is purely additive. # Motivation Fixes the semver-check CI failure on #1624, which adds the new `libdd-http-client` crate. # Additional Notes N/A # How to test the change? Re-run the semver-check workflow on #1624. Co-authored-by: yann.hamdaoui <yann.hamdaoui@datadoghq.com>
`dd-trace-php` needs to propagate container_tags_hash through DBM when a config is enabled. This hash is in the response headers of the agent's `/info` endpoint. However, dd-trace-php uses libdatadog to call that endpoint, so this PR adds a way to retrieve the container tags hash header from the agent `/info` call. Co-authored-by: louis.tricot <louis.tricot@datadoghq.com>
# What does this PR do? It enables all warnings and sets them as errors in order to prevent logic errors. # Motivation While fiddling with the FFI interface I found a bug which was not detected by the compiler, where an incompatible cast was compiled away. Co-authored-by: julio.gonzalez <julio.gonzalez@datadoghq.com>
# What does this PR do? Saves errno prior to signal handling, and restores it before chaining. # Motivation We save/restore errno on signal handling. This is good practice, even though this is in a crash context. # Additional Notes Using this crate: https://crates.io/crates/errno Neat crate, cross-platform, and compared to the standard library's errno utilities: ``` This crate provides these extra features: 1. No heap allocations 2. Optional #![no_std] support 3. A set_errno function ``` # How to test the change? None as a unit test. It is hard to test the main signal handler function as a unit test. I could have made a `with_errno_preserved` wrapper around the signal handling function and written a test for the wrapper, but I felt that doing so would add more complexity than the value it brings. However, no strong opinions on this; happy to implement it if a second opinion thinks it is a good idea. There is a bin test in a following PR: [chore(crashtracking): add integration test for errno preservation](#1768) Co-authored-by: gyuheon.oh <gyuheon.oh@datadoghq.com>
) # What does this PR do? Adds an integration test that checks that errno is preserved before and after the crashtracker signal handler. # Motivation It's good practice to preserve errno, especially in the case where chained handlers do something with it. This was done in [chore(crashtracking): preserve errno for crashtracker](#1767). We should test this. # Additional Notes There already exist integration tests that chain handlers. However, this test introduces file writing and verifying logic, and I do not want to rip the existing chained-handler tests out of the "simple test" harness. The added upside is that this lets us write a test for one specific flow and keep each test atomic in its responsibility, instead of complicating the verification logic. # How to test the change? Run the bin test Co-authored-by: gyuheon.oh <gyuheon.oh@datadoghq.com>
# What does this PR do? Update rustls and hyper-rustls to update transitive dependencies on `aws-lc-sys`, `aws-lc-fips-sys`, `aws-lc-rs`. In addition to updating those two direct dependencies I also had to run `cargo update aws-lc-fips-sys` to get it to use a new enough version. # Motivation https://github.com/DataDog/libdatadog/security/dependabot/45 https://github.com/DataDog/libdatadog/security/dependabot/46 https://github.com/DataDog/libdatadog/security/dependabot/47 https://github.com/DataDog/libdatadog/security/dependabot/48 # Additional Notes This update also removes outbound HTTP requests from our unit tests for `mini-agent` gated tests. They were unnecessarily making connections to `example.com` to test the `webpki-roots` fallback. We can verify the fallback is working correctly without the requests. Co-authored-by: edmund.kump <edmund.kump@datadoghq.com>
Everything we used tarpc for was dispatching ... and not much else. But it also imposed some constraints on the messaging stream; for example, it did not directly allow us to apply backpressure on the read stream. Finally, the old code, under some not-understood circumstances, would have file descriptors pile up in the sink without ever being associated with a message. (I suspect when a message was dropped on the sender side?!) This proposes a radically different approach based on message passing instead of streaming: - SOCK_SEQPACKET is used on Linux. - macOS does not support this, so we fall back to a dgram socketpair, which effectively results in the same thing. - Windows uses named pipes in message mode. The messaging approach strongly ties file descriptors to the passed text, structurally eliminating the possibility of file descriptors leaking. It also avoids a manual length-delimiting codec and stream buffering. It generally avoids all buffering, except the send and receive buffers. In fact, previously we had multiple additional buffering channels around the different executors - and still, in the end, we execute everything serially... Avoiding tarpc also allows us to trivially tie some metadata directly to the connection. This can still be improved upon; e.g. session_ids are now fundamentally tied to the connection. Finally, we also improve connection state stability when messages are dropped: the SidecarOutbox will buffer state-critical data and resubmit it when space becomes available. Co-authored-by: bob.weinand <bob.weinand@datadoghq.com>
…t alert (#1774) # What does this PR do? Just bumping `reqwest` and a transitive dependency on `quinn-proto` for https://github.com/DataDog/libdatadog/security/dependabot/49 Co-authored-by: edmund.kump <edmund.kump@datadoghq.com>
# What does this PR do? Update tar for https://github.com/DataDog/libdatadog/security/dependabot/50 https://github.com/DataDog/libdatadog/security/dependabot/51 Also update the MSRV for clippy-annotation-reporter to match the current workspace MSRV. # Additional Notes As the branch name suggests, I was also going to update jsonwebtoken, time, and idna for https://github.com/DataDog/libdatadog/security/dependabot/40 https://github.com/DataDog/libdatadog/security/dependabot/41 https://github.com/DataDog/libdatadog/security/dependabot/42 https://github.com/DataDog/libdatadog/security/dependabot/44 That isn't currently possible with our MSRV and 2021 edition. I'll create tickets for common components to follow up. Co-authored-by: edmund.kump <edmund.kump@datadoghq.com>
Turns out the ProcessToken was the right authority all along. Fixing APMS-18332. Co-authored-by: bob.weinand <bob.weinand@datadoghq.com>
# What does this PR do? Rename stats serialization name from GrpcStatusCode to GRPCStatusCode to match agent code
…nal handler execution (#1771) # What does this PR do? Guards SIGCHLD and SIGPIPE during crashtracker signal handler execution # Motivation During execution of the signal handler, it cannot be guaranteed that the signal is handled without SA_NODEFER, thus it also cannot be guaranteed that signals like SIGCHLD and SIGPIPE will _not_ be emitted during this handler as a result of the handler itself. At the same time, it isn't known whether it is safe to merely block all signals, as the user's own handler will be given the chance to execute after ours. Thus, we need to prevent the emission of signals we might create (and cannot be created during a signal handler except by our own execution) and defer any other signals. To put it another way, it is conceivable that the crash handling code will emit SIGCHLD or SIGPIPE, and instead of risking responding to those signals, it needs to suppress them. On the other hand, it can't just "block" (`sigprocmask()`) those signals because this will only defer them to the next handler. # Additional Notes This was originally implemented in: [Crashtracker receiver is spawned on crash](#692) but subsequently removed. # How to test the change? Unit tests for saguard.rs. Integration test will be in a following PR Co-authored-by: gyuheon.oh <gyuheon.oh@datadoghq.com>
…#1708) # What does this PR do? Implements missing features from the agent's SQL obfuscation. Ran a [fuzzer](https://github.com/DataDog/obfuscation-parity-tester/tree/fuzzer/crates/fuzzer) to find output differences between this obfuscator and the agent's obfuscator, then fixed issues one by one, even the nonsensical edge cases. # Motivation Reach 100% parity between obfuscation libs. Co-authored-by: oscar.ledauphin <oscar.ledauphin@datadoghq.com>
# What does this PR do? Adds an allocation size tracking allocator that can be used to benchmark memory used by functions. # Motivation Measure all the things... Co-authored-by: bjorn.antonsson <bjorn.antonsson@datadoghq.com>
Fixes dd-trace-php. Co-authored-by: bob.weinand <bob.weinand@datadoghq.com>
…nt-computed-stats (#1900) # What does this PR do? Treat an empty string as a falsy value for the `Datadog-Client-Computed-Stats` header. # Motivation We want to support agent computed stats in the Serverless Compatibility Layer. Currently when the `Datadog-Client-Computed-Stats` header is sent it always disables agent computed stats, even when the value of the header is an empty string. https://datadoghq.atlassian.net/browse/SVLS-8789 # Additional Notes - [Java Tracer sends an empty string](https://github.com/DataDog/dd-trace-java/blob/6e28457d70c41bf847c4af7b3ba4e2f6c1371070/dd-trace-core/src/main/java/datadog/trace/common/writer/ddagent/DDAgentApi.java#L110-L117) for `Datadog-Client-Computed-Stats` header where other tracers omit the header altogether - Go Agent [treats an empty string as a falsy value](https://github.com/DataDog/datadog-agent/blob/76aff83162011a15e5ee50295ac835f708e8ffa9/pkg/trace/api/api.go#L1049) - See DataDog/serverless-components#51 for adding agent computed stats in the Serverless Compatibility Layer. # How to test the change? 
Added a debug log in a test build: Before change with `DD_TRACE_STATS_COMPUTATION_ENABLED=false` ``` DEBUG datadog_trace_agent::trace_processor: Resolved tracer header tags: TracerHeaderTags { lang: "java", lang_version: "21.0.6", lang_interpreter: "OpenJDK 64-Bit Server VM", lang_vendor: "Microsoft", tracer_version: "1.61.1~e32291a78b", container_id: "", client_computed_top_level: true, client_computed_stats: true, dropped_p0_traces: 0, dropped_p0_spans: 0 } ``` After change with `DD_TRACE_STATS_COMPUTATION_ENABLED=false` ``` DEBUG datadog_trace_agent::trace_processor: Resolved tracer header tags: TracerHeaderTags { lang: "java", lang_version: "21.0.6", lang_interpreter: "OpenJDK 64-Bit Server VM", lang_vendor: "Microsoft", tracer_version: "1.61.1~e32291a78b", container_id: "", client_computed_top_level: true, client_computed_stats: false, dropped_p0_traces: 0, dropped_p0_spans: 0 } ``` Co-authored-by: duncan.harvey <duncan.harvey@datadoghq.com>
# What does this PR do? Fix unbound variable when going through the changelog creation path.
# What does this PR do? This updates from rustc-hash 1.1 to 2.1.2. # Motivation This is general maintenance but the hash quality is better in some cases, leading to improvements in bench `profile_add_sample2_frames_x1000`: ``` On main: run 1: 253.57 µs run 2: 266.87 µs run 3: 262.89 µs On this branch: run 1: 239.81 µs run 2: 233.25 µs run 3: 243.97 µs ``` On real code (not adding the same specific thing over and over again), your results may be better or worse. # Additional Notes I have a commit in 2.1.2, though it's not particularly relevant: it removes an unreachable panic from the generated code. # How to test the change? Regular testing applies. Co-authored-by: levi.morrison <levi.morrison@datadoghq.com>
As per https://datadoghq.atlassian.net/browse/DEBUG-5324. Co-authored-by: bob.weinand <bob.weinand@datadoghq.com>
…#1919) # What does this PR do? Refactor cfg statements so everything works with all features enabled. # Motivation The publishing job requires that all tests pass with `--all-features` and `--no-default-features`. Co-authored-by: julio.gonzalez <julio.gonzalez@datadoghq.com>
# What does this PR do? Makes all CI workflows **dynamic** — they detect which crates were changed/affected by a PR and only run jobs for those crates, skipping everything on pushes to `main` where nothing changed. ### Key changes **New `crates-reporter` composite action** (replaces `changed-crates`) - The old action was a bash-heavy composite action doing crate detection in shell script - The new one is a compiled Rust binary that uses `cargo-metadata` to detect changed crates **and** compute transitive dependants (affected crates), giving a richer output: `crates`, `affected_crates`, `crates_count`, `affected_crates_count`, `status` - A new `ci-shared` library crate provides shared logic (`git`, `workspace`, `crate_detection`, `github_output`) reused across multiple action binaries - A shared `Cargo.toml`/`Cargo.lock` workspace at `.github/actions/` replaces per-action workspaces **`test.yml`** — unit tests now only run on affected packages (direct + transitive dependants), not the full workspace. The `cross-centos7` job was made dynamic: it builds the nextest command conditionally based on `$PACKAGES`/`$CRASHTRACKER_FEATURE` from the setup stage **`test-ffi.yml`** (new, split from `test.yml`) — FFI jobs extracted into their own workflow, also driven by the setup stage; only runs when `-ffi` crates are affected **`lint.yml`** — `rustfmt` now runs only on directly changed crates; `clippy` runs on affected crates (since a change can introduce warnings in dependants) **`miri.yml`** — Miri now runs only on affected crates on PRs; still runs `--workspace` on pushes to `main` **`fuzz.yml`** — fuzz jobs now filter to only the fuzz-capable crates that were affected # Motivation Previously, every PR triggered CI jobs against the entire workspace regardless of what changed. 
With the current approach: - Reduces CI runtime and cost by skipping unaffected crates - Uses transitive dependency analysis (not just direct changes), so a change in a shared crate still triggers tests for its dependants - Replaces a fragile bash-based crate detector with a reliable, tested Rust implementation using `cargo-metadata` - Splits FFI testing into its own workflow for a cleaner separation of concerns, and runs both test jobs (nextest and FFI) in parallel # Results The new pipeline has been tested with two crates: libdd-common, a foundational crate with many transitive dependants (almost the whole workspace), and libdd-data-pipeline-ffi, which has only one. | Workflow | libdd-common | libdd-data-pipeline-ffi | |----------|----------------------------|-----------------------------------------| | Fuzz | ~22m (95%) | ~13m (92%) | | Miri | ~1m (4%) | **~23m (92%)** | | Test | ~1m (4%) | **~12m (43%)** | | Lint | -1m (-14%, setup overhead) | ~6m (50%) | | **Total wall-clock** | **~0m (0%)** | **11m 6s (39%)** | Co-authored-by: iunanua <igor.unanua@datadoghq.com> Co-authored-by: julio.gonzalez <julio.gonzalez@datadoghq.com>
# What does this PR do? Fixes a case in which the affected-crates variable is formatted as a multiline string, causing GitHub to fail because it is unable to process it.
…bilities-impl (#1924)) (#1925) This PR merges the release branch to main Co-authored-by: dd-octo-sts[bot] <200755185+dd-octo-sts[bot]@users.noreply.github.com> Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: hoolioh <107922352+hoolioh@users.noreply.github.com>
Follow-up to #1817. Co-authored-by: bob.weinand <bob.weinand@datadoghq.com>
# What does this PR do? Derives PartialEq and Eq on TracerMetadata, enabling equality comparisons between instances using `==` and `!=`. # Motivation Adding these standard library traits enables equality assertions without requiring callers to implement their own field-by-field comparison. Needed for DataDog/serverless-components#51 (comment) # How to test the change? Unit tests Co-authored-by: duncan.harvey <duncan.harvey@datadoghq.com>
# What does this PR do? Crates should be able to compile and run the tests with `--no-default-features` and `--all-features` # Motivation While trying to publish a new crate I found that libdd-common was not passing the release stage due to a failure when compiling with `--all-features`. Co-authored-by: julio.gonzalez <julio.gonzalez@datadoghq.com>
# What does this PR do? Per title # Motivation A few months ago, dd-trace-java accidentally dropped two thirds of its test base. Twice. Following these incidents, we realized that with our growing test base, it was relatively easy to lose part of it, since a green CI always signals "no need to check further" (which is fine). So we implemented a safeguard in Test Optimization that triggers an alert whenever a repo stops reporting its tests. And this monitor caught another occurrence a few weeks later ... The caveat is that we do not have a reliable way to trigger it: once a commit is pushed, it can take several minutes, even hours, before all the tests are reported. So we implemented it with a compromise: every day we must see the full test base. Consequently, we need a schedule that runs the test base once a day (including weekends, a limitation of Datadog monitors) to avoid triggering an alert on days without any commit. I also adapted the concurrency clause to have all runs running on pushes to `main`. Co-authored-by: charles.debeauchesne <charles.debeauchesne@datadoghq.com>
… the crate (#1399) # What does this PR do? Make the necessary amendments to publish the tracer flare crate. Co-authored-by: anais-raison <77939650+anais-raison@users.noreply.github.com> Co-authored-by: julio.gonzalez <julio.gonzalez@datadoghq.com>
# What does this PR do? Add a Claude skill to create a new release. Co-authored-by: vianney.ruhlmann <vianney.ruhlmann@datadoghq.com>
# What does this PR do? This comment is no longer relevant, as we now collect stacks on macOS with [chore(crashtracking): emit a best effort stacktrace for Mac](#1645)