Plumbr
@JavaPlumbr
Followers: 2K
Following: 566
Media: 219
Statuses: 1K
Release fast. Respond faster. Intelligent alerts, distributed traces and root-cause detection tell you what's gone wrong
Estonia
Joined November 2011
🎉🎉🎉
Splunk has acquired @JavaPlumbr and agreed to acquire @TeamRigor, two companies with deep expertise and intellectual property that extend our end-to-end observability solution to be the most comprehensive, practical and proven in the industry. Read on to learn more. #splunkconf20
We are excited to share that Plumbr has been acquired by Splunk! Here's the announcement on our blog: https://t.co/iaRCd8EcMG Additional context from Tim Tully, Splunk CTO:
We just launched Node.js support via integration with the vendor-neutral OpenTelemetry instrumentation agent: https://t.co/79Mc99Ruhs
Looking forward to having virtual threads in the JDK. In addition to the performance improvements, they will also be easier to monitor, as they build on existing Java abstractions (unlike RxJava, Reactor, etc.).
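For context, a minimal sketch of what that point looks like with the Loom API that eventually shipped in JDK 21 (the thread names and numbers here are illustrative, not from the tweet): every task still runs on an ordinary java.lang.Thread, so names, stack traces and thread dumps keep working with existing tooling.

```java
import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;

public class VirtualThreadsSketch {
    public static void main(String[] args) {
        // Virtual threads are still java.lang.Thread instances, so existing
        // abstractions (names, stack traces, thread dumps) apply unchanged.
        ThreadFactory factory = Thread.ofVirtual().name("request-handler-", 0).factory();

        try (ExecutorService executor = Executors.newThreadPerTaskExecutor(factory)) {
            for (int i = 0; i < 10_000; i++) {
                int requestId = i;
                executor.submit(() -> {
                    // A blocking call just parks the virtual thread; the carrier
                    // (platform) thread is released to run other tasks.
                    Thread.sleep(Duration.ofMillis(10));
                    System.out.printf("request %d handled on %s (virtual=%b)%n",
                            requestId, Thread.currentThread().getName(),
                            Thread.currentThread().isVirtual());
                    return null;
                });
            }
        } // close() waits for all submitted tasks to complete
    }
}
```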
New UX metrics from browsers coming soon to Plumbr:
* Largest Contentful Paint
* First Input Delay
* Cumulative Layout Shift
Any other metrics that you would like to see? Let us know! More info here: https://t.co/yexcpJsXml
The Plumbr Universal Agent got a significant update, expanding bottleneck coverage to slow HTTP requests:
What are the “three pillars of observability”? → Metrics, logs, and distributed traces.
How do you achieve a holistic three-pillar monitoring solution with Plumbr? →
We have some exciting product news - Plumbr can now automatically explain *every* bottleneck in your Java code that makes your application or API slow:
I'm no expert in this area, but I've read a bunch of JVM GC tuning guides and I think @JavaPlumbr's Java GC handbook is easily the best:
Have applications or APIs that are implemented in #python? Want to be notified when they have impactful errors or performance bottlenecks? Plumbr can now also be used with Python applications:
Never found time to figure out how distributed tracing works? Read our latest post covering the basics of tracing - https://t.co/ZW5lIqPQdF
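The details are in the linked post, but the core idea fits in a few lines of plain Java: one trace id shared by all work done for a request, a span id per hop, and propagation of both to downstream calls. A bare-bones sketch follows (the URL is a placeholder, the traceparent header follows the W3C Trace Context format, and an agent such as Plumbr's would do this instrumentation automatically rather than by hand):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.HexFormat;
import java.util.concurrent.ThreadLocalRandom;

public class TracingBasics {
    public static void main(String[] args) throws Exception {
        // A trace is identified by one trace id; every unit of work within it
        // (a "span") gets its own span id and records its parent.
        String traceId = randomHex(16);  // 16 bytes -> 32 hex chars
        String spanId  = randomHex(8);   //  8 bytes -> 16 hex chars

        // Propagation is just passing those ids downstream, here as a
        // W3C Trace Context 'traceparent' header on an outgoing HTTP call.
        String traceparent = "00-" + traceId + "-" + spanId + "-01";

        HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com/api"))
                .header("traceparent", traceparent)
                .GET()
                .build();

        long start = System.nanoTime();
        HttpResponse<Void> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.discarding());
        long durationMicros = (System.nanoTime() - start) / 1_000;

        // The receiving service starts a child span with the same trace id,
        // which is what lets a tracing backend stitch the request back together.
        System.out.printf("span %s of trace %s: HTTP %d in %d us%n",
                spanId, traceId, response.statusCode(), durationMicros);
    }

    private static String randomHex(int numBytes) {
        byte[] bytes = new byte[numBytes];
        ThreadLocalRandom.current().nextBytes(bytes);
        return HexFormat.of().formatHex(bytes);
    }
}
```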
Tracing is an excellent means to gain observability into your application at runtime. We make adopting tracing as simple as possible - check it out via
Throughput-based performance model predicting resource saturation for scaling: https://t.co/BQe4Jh8VeF
#devops
Performance modelling can be a daunting exercise. See a practical example of how system monitoring and tracing were used to build and verify a model for scaling the service up and down - https://t.co/BQe4Jh8VeF
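The model itself lives in the linked post; as a rough illustration of the general approach, here is a back-of-the-envelope version built on the utilization law (per-instance utilization ≈ throughput per instance × mean service time), with made-up numbers. The post's actual formulation may well differ.

```java
public class SaturationModel {
    public static void main(String[] args) {
        // Inputs you can read off monitoring and tracing data. Numbers are made up.
        double throughputPerSec   = 600;     // observed requests per second across the service
        double meanServiceTimeSec = 0.015;   // mean busy time per request on one instance
        int    instances          = 16;

        // Utilization law: each instance is busy (throughput / N) * serviceTime of the time.
        double utilization = (throughputPerSec / instances) * meanServiceTimeSec;
        System.out.printf("predicted utilization per instance: %.0f%%%n", utilization * 100);

        // Saturation: the throughput at which per-instance utilization reaches 100%.
        double saturationThroughput = instances / meanServiceTimeSec;
        System.out.printf("predicted saturation point: %.0f req/s%n", saturationThroughput);

        // Scaling decision: how many instances keep utilization under a target at peak load?
        double targetUtilization = 0.7;
        double expectedPeak = 3_000; // req/s
        int needed = (int) Math.ceil(expectedPeak * meanServiceTimeSec / targetUtilization);
        System.out.printf("instances needed for %.0f req/s at <=%.0f%% utilization: %d%n",
                expectedPeak, targetUtilization * 100, needed);
    }
}
```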
Seems to have worked like a charm. So if you want to get rid of your users, do the same.
Apparently there are use cases where poor performance is desirable - like "shadowbanning" users by deliberately adding a sleep(5000) to request processing for such users -
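Purely as an illustration of the mechanism being described (not an endorsement), a hypothetical servlet filter sketch; the user ids and the jakarta.servlet namespace are assumptions, and older containers would use javax.servlet instead. To a monitoring agent, these requests simply show up as consistently slow for particular users.

```java
import jakarta.servlet.Filter;
import jakarta.servlet.FilterChain;
import jakarta.servlet.ServletException;
import jakarta.servlet.ServletRequest;
import jakarta.servlet.ServletResponse;
import jakarta.servlet.http.HttpServletRequest;
import java.io.IOException;
import java.util.Set;

// Hypothetical filter illustrating the "shadowban by latency" idea from the tweet:
// requests from flagged users are deliberately delayed before normal processing.
public class ShadowbanFilter implements Filter {

    // In a real system this would come from a data store; hard-coded for the sketch.
    private static final Set<String> SHADOWBANNED_USERS = Set.of("troll42", "spammer7");

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        String user = ((HttpServletRequest) request).getRemoteUser();
        if (user != null && SHADOWBANNED_USERS.contains(user)) {
            try {
                Thread.sleep(5000); // the deliberate 5-second penalty mentioned in the tweet
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        chain.doFilter(request, response); // everyone else proceeds untouched
    }
}
```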
Discover simple ways to improve the signal quality of your alerting - https://t.co/HVAWFtrVIi