• jcgrillo 6 minutes ago

    Why do all these things use such damnably inefficient wire formats?

    For metrics, we're shipping a bunch of numbers over the wire, with some string tags. So why not something like:

      message Measurements {
        uint32 metric_id = 1;
        uint64 t0_seconds = 2;
        uint32 t0_nanoseconds = 3;
    repeated uint64 delta_ns = 4 [packed = true];
    repeated int64 values = 5 [packed = true];
      }
    
    Where delta_ns represents a series of deltas from timestamp t0 and values has the same length as delta_ns. Tags could be sent separately:

      message Tags {
        uint32 metric_id = 1;
        repeated string tags = 2;
      }
    
    That way you only have to send the tags if they change, and the values are encoded efficiently. I bet you could get really nice granular monitoring, e.g. sub-ms precision, quite cheaply this way.

    Obviously there are further optimizations we could make if, e.g., we know the values will respond nicely to delta encoding.
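
    The delta trick can be sketched in a few lines. Below is a hypothetical illustration of why packed varint deltas are so compact (the encoder mirrors protobuf's base-128 varints; all names are made up):

```python
# Sketch: encode timestamps as varint deltas from t0, mirroring the packed
# `delta_ns` field in the proposed Measurements message. Illustrative only.

def encode_varint(n: int) -> bytes:
    """Protobuf-style base-128 varint encoding of a non-negative integer."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # set continuation bit, more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def pack_deltas(timestamps_ns: list[int]) -> bytes:
    """Encode sorted nanosecond timestamps as varint deltas from the first."""
    t0 = timestamps_ns[0]
    payload = bytearray()
    for t in timestamps_ns[1:]:
        payload += encode_varint(t - t0)
    return bytes(payload)

# 100 samples 1 ms apart: each small delta takes 3-4 bytes instead of 8.
ts = [1_700_000_000_000_000_000 + i * 1_000_000 for i in range(100)]
print(len(pack_deltas(ts)), "bytes vs", 8 * 99, "for raw uint64 timestamps")
```

    Delta-encoding the values field as well, as suggested above, would compound the savings.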

    • pranay01 4 hours ago

      I think the biggest value I see with OpenTelemetry is the ability to instrument your code and telemetry pipeline once (using the otel collector) and then choose a backend and visualisation framework which meets your requirements.

      For example at SigNoz [0], we support OpenTelemetry format natively from Day 1 and use ClickHouse as the datastore layer which makes it very performant for aggregation and analytics queries.

      There are alternative approaches like what Loki and Tempo do with a blob-storage-based framework.

      If your data is instrumented with Otel, you can easily switch between open source projects like SigNoz or Loki/Tempo/Grafana which IMO is very powerful.

      We have seen users switch from another backend to SigNoz within a matter of hours, when they are instrumented with Otel. This makes testing and evaluating new products super efficient.

      Otherwise, just the effort required to switch instrumentation to another vendor would have been enough to never even think about evaluating another product.

      (Full Disclosure : I am one of the maintainers at SigNoz)

      [0] https://github.com/signoz/signoz

      • PeterCorless 20 minutes ago

        This is definitely the same sort of take we have at StarTree (Apache Pinot, but equivalent to the Clickhouse story you have above). We're working now on making sure StarTree is compliant with PromQL (we just demoed this at Current 2024), plus also TraceQL and LogQL. Because, not surprisingly, query semantics actually matter!

        https://thenewstack.io/reimagining-observability-the-case-fo...

        • nevon 3 hours ago

          Related to this, as a library author it's great that the library can now come pre-instrumented and the application can use whatever backend they want. Previously, I would have to expose some kind of event emitter or hooks that the application would have to integrate against.

          • pranay01 2 hours ago

            Yeah, totally agree this is very helpful

        • pards 4 hours ago

          I tried to introduce OTel in a greenfield system at a large Canadian bank. Corporate IT pushed back hard because they'd heard a lot of "negative feedback" about it at a Dynatrace conference. No surprises there.

          Corporate IT were not interested in reducing vendor lock-in; in fact, they asked us to ditch the OTel Collector in favour of Dynatrace OneAgent even though we could have integrated with Dynatrace without it.

          • sofixa 3 hours ago

            > they'd heard a lot of "negative feedback" about it at a Dynatrace conference

            It's funny because Dynatrace fully supports OpenTelemetry, even having a distribution of the OpenTelemetry Collector.

          • demurgos 18 hours ago

            OpenTelemetry is nice overall as there are libraries for multiple platforms. I introduced it this year for a web game platform with servers in Node, Java, PHP and Rust, and it all worked roughly similarly, which made it good for consistency.

            I like how OpenTelemetry decouples the signal sink from the context, compared to other structured logging libs where you wrap your sink in layers. The main thing that I dislike is the auto-instrumentation of third-party libraries: it works great most of the time, but when it doesn't it's hard to debug. Maintainers of the different OpenTelemetry repos are fairly active and respond quickly.

            It's still relatively recent, but I would recommend OpenTelemetry if you're looking for an observability framework.

            • nicholasjarnold 20 hours ago

              I can confirm that this is a pretty good way. Building out a basic distributed tracing solution with OTEL, jaeger and the relevant Spring Boot configuration and dependencies was quite a pleasant experience once you figure out the relevant-for-your-use-cases set of dependencies. It's one of those nice things that Just Works™, at least for Java 17 and 21 and Spring Boot 3.2 (iirc) or greater.

              There appeared to be a wide array of library and framework support across various stacks, but I can only attest personally to the quality of the above setup (Java, Boot, etc.).

              • ljm 19 hours ago

                > once you figure out the relevant-for-your-use-cases set of dependencies

                > It's one of those nice things that Just Works™

                did it Just Work™ or did you have to do work to make it Just Work™?

                • barake 18 hours ago

                  Java has really good OTel coverage across tons of libraries. It should mostly Just Work™, though you'll still need to consider sampling strategies, what metrics you actually want to collect, etc.

                  Would say .NET isn't too far behind. Especially since there are built-in observability primitives and Microsoft is big on OTel. ASP.NET Core and other first party libraries already emit OTel compliant metrics and traces out of the box. Instrumenting an application is pretty straightforward.

                  I have less experience with the other languages. Can say there is plenty of opportunity to contribute upstream in a meaningful way. The OpenTelemetry SIGs are very welcoming and all the meetings are open.

                  Full disclosure: work at Grafana Labs on OpenTelemetry instrumentation

                  • moxious 4 hours ago

                    they say good technology makes "easy things easy, and hard things possible".

                    Personally, I think a lot of those Java built-in libraries make easy things easy. But will it Just Work™ Out Of The Box™?

                    Hey we're engineers right? We know that the right answer to every question, bar none is "it depends on what you're trying to do" right? ;)

                  • roshbhatia 18 hours ago

                    In my experience, it just worked -- I was at an org that ran 3rd party Java services alongside our normal array of microservices (which all used our internal instrumentation library that wrapped OTEL), and using the OTEL autoinstrumentation for those 3rd party services was pretty trivial to get set up and running (just wrap the command to run the app with the OTEL wrapper and hand it a collector URL). Granted -- we had already invested in OTEL elsewhere and were familiar with many of the situations in which it didn't just work.

                    • azthecx 18 hours ago

                      I had a very similar experience with a Quarkus REST API where it's supported very well out of the box; we just had to point it to the appropriate otel collector endpoint and traces are created and propagated automatically.

                  • aramattamara 14 hours ago

                    The problem with OpenTelemetry is that it's really only good for tracing. Metrics and logs were kinda bungee-strapped on later: very inefficient and clunky to use.

                    PS: And devs (Lightstep?) seem to really like the "Open" prefix: OpenTracing + OpenCensus = OpenTelemetry.

                    • prabhatsharma 2 hours ago

                      OpenTelemetry is definitely a good thing that will help reduce vendor lock-in and exploitative practices from some vendors when they see that the customer is locked in due to proprietary code instrumentation. In addition, OpenTelemetry autoinstrumentation is fantastic and allows one to get started with zero-code instrumentation.

                      Going back to the basics - 12 factor app principles must also be adhered to in scenarios where OpenTelemetry might not be an option for observability. E.g. logging is not very mature in OpenTelemetry for all languages as of now. Sending logs to stdout provides a good way to allow the infrastructure to capture logs in a vendor-neutral way using standard log forwarders of your choice like fluentbit and otel-collector. Refer - https://12factor.net/logs
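
                      For illustration, the stdout approach is as simple as writing one JSON object per line and letting the forwarder do the rest (a minimal sketch; the field names are arbitrary):

```python
# Minimal 12-factor-style structured logging: one JSON object per line
# on stdout, to be picked up by fluentbit, otel-collector, etc.
import json
import sys
import time

def log(level: str, message: str, **fields):
    record = {"ts": time.time(), "level": level, "msg": message, **fields}
    sys.stdout.write(json.dumps(record) + "\n")

log("info", "order created", order_id="A-123", duration_ms=12.5)
```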

                      OTLP is a great leveler in terms of choice that allows people to switch backends seamlessly and will force vendors to be nice to customers and ensure that enough value is provided for the price.

                      For those who are using Kubernetes, you should check out the OpenTelemetry Operator, which allows you to autoinstrument your applications written in Java, NodeJS, Python, PHP and Go by adding a single annotation to your manifest file. Check an example of autoinstrumentation here -

                                                                       /-> review (python)
                      frontend (go) -> shop (nodejs) -> product (java) <
                                                                       \-> price (dotnet)

                      Check for complete code - https://github.com/openobserve/hotcommerce
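
                      For reference, the single annotation looks roughly like this (a sketch; the deployment name and image are placeholders, and it assumes an Instrumentation resource already exists in the namespace):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: product
spec:
  replicas: 1
  selector:
    matchLabels:
      app: product
  template:
    metadata:
      labels:
        app: product
      annotations:
        # The OpenTelemetry Operator sees this and injects the
        # Java auto-instrumentation agent at pod startup.
        instrumentation.opentelemetry.io/inject-java: "true"
    spec:
      containers:
        - name: product
          image: example/product:latest
```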

                      p.s. An OpenObserve maintainer here.

                      • PeterCorless 9 minutes ago

                        I'd argue that since most observability use cases are "write once, read hardly ever", they aren't really transactional (OLTP-oriented). You're doing inserts at a high rate, quick scans in real time, and a few exploratory reads when you need them. Generally you're not doing upserts, overwriting existing data; you're doing time-series, where each "moment" has to be captured atomically.

                        Given that, it makes sense that if you have an OLAP datastore, like Apache Pinot, you can do the same as OLTP and better: faster aggregations, and quicker large-range or full-table scans.

                        (Disclosure: I work at StarTree, which is powered by Apache Pinot)

                      • kbouck 19 hours ago

                        For anyone that has built more complex collector pipelines, I'm curious to know the tech stack:

                          - otel collector?
                          - kafka (or other mq)?
                          - cribl?
                          - vector?
                          - other?
                        • azthecx 18 hours ago

                          What sort of complexity do you need? I used them at my previous job and am implementing it at the current one. I have never heard of the last three you mention.

                          The otel collector is very useful for gathering multiple different sources. E.g. I am at a big corporation where we both have a department-level Grafana stack (Prometheus, Loki, etc.) and need to also send the data to Dynatrace. With the otel collector these things are a minor configuration away.

                          For Kafka, if you mean tracing through Kafka messages, we previously did it by propagating context in message headers. Done in a shared team-level library, the effort was minimal.
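
                          The header propagation amounts to passing a W3C traceparent value along with each message. A rough sketch (done manually here for clarity; in practice the OpenTelemetry SDK's propagator fills the headers in for you):

```python
# Sketch: propagate trace context through Kafka-style message headers
# using the W3C Trace Context `traceparent` format.
import os

def make_traceparent(trace_id: bytes = None, span_id: bytes = None) -> str:
    """Build a traceparent header value: version-traceid-spanid-flags."""
    trace_id = trace_id or os.urandom(16)
    span_id = span_id or os.urandom(8)
    return f"00-{trace_id.hex()}-{span_id.hex()}-01"

def parse_traceparent(value: str) -> tuple:
    """Extract the trace id and parent span id from an incoming header."""
    _version, trace_id, span_id, _flags = value.split("-")
    return trace_id, span_id

# Producer side: attach the header to the outgoing message.
headers = [("traceparent", make_traceparent().encode())]
# Consumer side: read it back and continue the same trace.
trace_id, parent_span = parse_traceparent(headers[0][1].decode())
```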

                          • bripkens 5 hours ago

                            The OpenTelemetry collector works very well for us – even for internal pipelines. You can build upon a wide array of supported collector components; extending it with your own is also relatively straightforward. Plus, you can hook up Kafka (and others) for data ingress/egress.

                          • PeterZaitsev 16 hours ago

                            If you're looking for a target for your OTEL data, check out Coroot too - https://coroot.com/ In addition to OTEL visualization, it can use eBPF to generate traces for applications where OpenTelemetry can't be installed.

                            • candiddevmike 16 hours ago

                              This is hypocritical content marketing from a company that doesn't want you to be vendor neutral. As seen by the laughable use of hyperlinks to their products but no links when mentioning Prometheus or elasticsearch.

                              OTEL is great, I just wish the CNCF had better alternatives to Grafana labs.

                              • najork 16 hours ago

                                Check out Perses: https://github.com/perses/perses

                                Less mature than Grafana but recently accepted by the CNCF as a sandbox project, hopefully a positive leading indicator of success.

                                • bripkens 5 hours ago

                                  Perses is an excellent step towards vendor neutrality. We at Dash0 are basing our dashboarding data model entirely on Perses to allow you to bring/take away as much of your configuration as possible.

                                  The team around Perses did a solid job coming up with the data model and making it look very Kubernetes manifest-like. This makes for good consistency, especially when configuring via Kubernetes CRs.

                                  • physicles 9 hours ago

                                    Didn't know about Perses. Looks promising! I've had one foot out the door with Grafana for a couple years -- always felt heavy and not-quite-there (especially the Loki log explorer), and IMHO they made alerts far less usable with the redesign in version 8.

                                  • scubbo 16 hours ago

                                    Well, no, they don't _want_ you to be vendor neutral. But they allow and support you to do so - unlike DataDog.

                                  • exabrial 19 hours ago

                                    JMX -> Jolokia -> Telegraf -> the-older-TICK-stack-before-influxdb-rewrote-it-for-the-3rd-time-progressively-making-it-worse-each-time

                                    • mmanciop 5 hours ago

                                      I can 100% confirm that OpenTelemetry is a fantastic project to get rid of most observability lock-in.

                                      For context: I am the Head of Product at Dash0, a recently-launched observability product 100% based on OpenTelemetry. (And Dash0 is not even the first observability product based on OpenTelemetry that I've worked on.)

                                      OTLP as a wire protocol goes a long way in ensuring that your telemetry can be ingested by a variety of vendors, and software like the OpenTelemetry Collector enables you to forward the same data to multiple backends at the same time.
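
                                      As a sketch, such a fan-out is just a few lines of collector configuration (the endpoints are placeholders; the prometheusremotewrite exporter ships in the contrib distribution):

```yaml
receivers:
  otlp:
    protocols:
      grpc: {}

exporters:
  otlphttp/vendor:
    endpoint: https://otlp.example-vendor.com
  prometheusremotewrite:
    endpoint: http://prometheus:9090/api/v1/write

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [otlphttp/vendor, prometheusremotewrite]
```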

                                      Semantic conventions, when implemented correctly by the instrumentations, put the burden of "making telemetry look right" on the side of the vendor, and that is a fantastic development for the practice of observability.

                                      However, of course, there is more to vendor lock-in than "can it ingest the same data". The two other biggest sources of lock in are:

                                      1) Query languages: Vendors that use proprietary query languages lock your alerting rules and dashboards (and institutional knowledge!) behind them. There is no "official" OpenTelemetry query language, but at Dash0 we found that PromQL suffices to do all types of alerting and dashboards. (Yes, even for logs and traces!)

                                      2) Integrations with your company processes, e.g., reporting or on-call.
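
                                      To make point 1 concrete, a portable alert expression written in PromQL might look like this (metric and label names are hypothetical):

```
# Alert when the 5xx ratio over the last 5 minutes exceeds 5%.
sum(rate(http_server_requests_total{status=~"5.."}[5m]))
  /
sum(rate(http_server_requests_total[5m])) > 0.05
```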