news

First the E-Bike, Next the Flying Car

Carbon fiber composites are incredibly strong for their weight; that’s why they’re key to the newest aircraft designs. However, they’re only strong in one direction, so they’re generally layered or woven in grid patterns before being shaped into structures. That means one set of fibers carries the load some of the time, and another set carries it at other times—which is not the most efficient use of the material. In 2014, Hemant Bheda was CEO of Quantum Polymers, a company that makes extruded plastic rods, plates, and other shapes for machined parts.
Read more

Re-Architecting the Video Gatekeeper

This is the story of how the Content Setup Engineering team used Hollow, a Netflix OSS technology, to re-architect and simplify an essential component in our content pipeline — delivering a large amount of business value in the process. Each movie and show on the Netflix service is carefully curated to ensure an optimal viewing experience. The team responsible for this curation is Title Operations. Title Operations confirms, among other things, that a title meets a set of minimum requirements; only then is it allowed to go live on the service.
Read more

MTTR is dead, long live CIRT

The game is changing for the IT ops community, which means the rules of the past make less and less sense. Organizations need accurate, understandable, and actionable metrics in the right context to measure operations performance and drive critical business transformation. The more customers use modern tools, and the more varied the incidents they manage, the less sense it makes to smash all those different incidents into one bucket and compute an average resolution time as the measure of ops performance. Yet that is what IT has been doing for a long time.
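The problem with single-bucket averages can be made concrete with a toy calculation; the incident counts and resolution times below are hypothetical, invented purely to illustrate the point:

```python
from statistics import mean, median

# Hypothetical resolution times in minutes for two very different
# incident types: routine alerts and rare, complex outages.
quick_alerts = [2, 3, 2, 4, 3, 2, 3, 2]
complex_outages = [240, 310]

all_incidents = quick_alerts + complex_outages

# Smashing everything into one bucket yields an "MTTR" of ~57 minutes,
# which describes neither population: routine alerts resolve in minutes,
# and real outages take hours.
print(round(mean(all_incidents), 1))  # 57.1
print(median(all_incidents))          # 3.0
```

The mean is dominated by two outliers while the median reflects only the routine alerts, so neither single number represents ops performance across both incident types.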
Read more

A quick guide to GitLab CI/CD pipelines

Automation is essential for successful DevOps teams, and CI/CD pipelines are a big part of that journey. At its most basic level, a pipeline gets code from point A to point B. What makes a better pipeline is how quickly and efficiently it accomplishes this task. A CI/CD pipeline automates steps in the SDLC like builds, tests, and deployments. When a team takes advantage of automated pipelines, they simplify the handoff process and decrease the chance of human error, creating faster iterations and better quality code.
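The build-test-deploy flow described above can be sketched as a minimal `.gitlab-ci.yml`; the job names, stage layout, and script commands here are illustrative assumptions, not taken from the guide itself:

```yaml
# Minimal illustrative pipeline: stages run in order,
# and jobs within the same stage run in parallel.
stages:
  - build
  - test
  - deploy

build-job:
  stage: build
  script:
    - echo "Compiling the application..."

test-job:
  stage: test
  script:
    - echo "Running the test suite..."

deploy-job:
  stage: deploy
  script:
    - echo "Deploying the build..."
  only:
    - main
```

Because the `test` stage only runs after `build` succeeds, and `deploy` only after `test`, the pipeline itself enforces the handoff that would otherwise be done by hand.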
Read more

Knative & Linkerd Support, JSON logging, and more in Ambassador 0.73

We’re releasing Ambassador 0.73 today, with native support for the Linkerd 2.0 service mesh and the Knative serverless platform. Ambassador focuses on the ingress (“north-south”) use case for traffic management within modern cloud-native applications, and accordingly we’ve had integrations for quite some time with service meshes that focus on the service-to-service (“east-west”) use cases, such as Istio, Consul, and Linkerd. At the recent KubeCon EU, we saw increased interest in Linkerd, along with some potentially avoidable friction in the integration, so we’ve improved the operator experience around this (more details below).
Read more

How to Use New Relic for Performance Engineering and Load Testing

Performance engineering and load testing are critical parts of any modern software organization’s toolset. In fact, it’s increasingly common to see companies field dedicated load testing teams and environments—and many companies that don’t have such processes in place are quickly evolving in that direction. Driven by key performance indicators (KPIs), performance engineering and load testing for software applications serve three main goals. While there are plenty of tools available for generating the user load for a performance test, the New Relic platform (particularly New Relic APM, New Relic Infrastructure, and New Relic Browser) provides in-depth monitoring and features that can give crucial insights into the analysis of such tests—from browser response times to user sessions to application speed to utilization of backend resources.
Read more

Announcing TraefikEE v1.1

It’s been two months since the general availability of the 1.0 version of Traefik Enterprise Edition. Encouraged by its successful launch, and propelled by the immense feedback we received from customers, the team started to work on 1.1 right away. Traefik Enterprise Edition is a new platform built on top of Traefik, the popular open-source cloud-native edge router, designed for business-critical deployments. It adds clustering features to satisfy the needs of enterprise customers.
Read more

Zoncolan: How Facebook uses static analysis to detect and prevent security issues

Facebook’s web codebase currently contains millions of lines of Hack code. To handle the sheer volume of code, we build sophisticated systems and tools to augment the comprehensive reviews our security engineers conduct. Today, we are sharing the details of one of those tools, called Zoncolan, for the first time. Zoncolan helps security engineers scale their work by using static analysis to automatically examine our code and detect potentially dangerous security or privacy issues.
Read more

Grafana Labs Teams Use Jaeger to Improve Query Performance Up to 10x

Grafana Labs works every day to break traditional data boundaries with metric-visualization tools accessible across entire organizations. It began as a pure open-source project and has since expanded into supported subscription services. The Grafana open-source project is a platform for monitoring and analyzing time series data. There are also subscription offerings such as the supported Grafana Enterprise version. Grafana Labs’ engineers service more than 150,000 active installations. Users include companies such as PayPal, eBay and Booking.
Read more

Key metrics for monitoring Consul

HashiCorp Consul is agent-based cluster management software that addresses the challenge of sharing network and configuration details across a distributed system. Consul handles service discovery and configuration for potentially massive clusters of hosts, spread across multiple datacenters. Consul was released in 2014, and organizations have adopted it for its service discovery capabilities, distributed key-value store, and automated health checks, among other features (including, recently, a service mesh). Monitoring Consul is necessary for making sure that up-to-date network and configuration details are reaching all hosts in your cluster, allowing them to communicate with one another and perform the work of your distributed applications.
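As a sketch of the service discovery and health-check features mentioned above, a service can be registered with a Consul agent via a JSON service definition; the service name, port, and health endpoint here are hypothetical:

```json
{
  "service": {
    "name": "web",
    "port": 8080,
    "check": {
      "http": "http://localhost:8080/health",
      "interval": "10s"
    }
  }
}
```

With a definition like this loaded, the agent polls the HTTP endpoint every 10 seconds and marks the service unhealthy on failure, which is exactly the kind of cluster-wide state that monitoring Consul itself helps keep trustworthy.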
Read more