Report: Security hole in macOS Keychain puts passwords at risk

Apple released macOS High Sierra on Monday, which should have put the spotlight on the Mac this week after last week’s iOS 11 and iPhone 8 releases. But a report from a security researcher at Synack puts a bit of a damper on the High Sierra launch.

Patrick Wardle, Synack’s head of research, posted a video on Monday that shows how code he wrote can be used to get passwords from macOS’s Keychain. Keychain is the password manager built into macOS, and it usually requires a master password to access it. But Wardle’s code was able to access Keychain and collect passwords, as demonstrated in the video he posted.

Read more here:: IT news – Security

Source: Deloitte Breach Affected All Company Email, Admin Accounts

Deloitte, one of the world’s “big four” accounting firms, has acknowledged a breach of its internal email systems, British news outlet The Guardian revealed today. Deloitte has sought to downplay the incident, saying it impacted “very few” clients. But according to a source close to the investigation, the breach dates back to at least the fall of 2016, and involves the compromise of all administrator accounts at the company as well as Deloitte’s entire internal email system.

In a story published Monday morning, The Guardian said a breach at Deloitte involved usernames, passwords and personal data on the accountancy’s top blue-chip clients.

“The Guardian understands Deloitte clients across all of these sectors had material in the company email system that was breached,” The Guardian’s Nick Hopkins wrote. “The companies include household names as well as US government departments. So far, six of Deloitte’s clients have been told their information was ‘impacted’ by the hack.”

In a statement sent to KrebsOnSecurity, Deloitte acknowledged a “cyber incident” involving unauthorized access to its email platform.

“The review of that platform is complete,” the statement reads. “Importantly, the review enabled us to understand precisely what information was at risk and what the hacker actually did and to determine that only very few clients were impacted [and] no disruption has occurred to client businesses, to Deloitte’s ability to continue to serve clients, or to consumers.”

However, a person with direct knowledge of the incident said the company in fact does not yet know precisely when the intrusion occurred, or for how long the hackers were inside its systems.

This source, speaking on condition of anonymity, said the team investigating the breach focused their attention on a company office in Nashville known as the “Hermitage,” where the breach is thought to have begun.

The source confirmed The Guardian’s reporting that current estimates put the intrusion sometime in the fall of 2016, and added that investigators still are not certain they have completely evicted the intruders from the network.

Indeed, it appears that Deloitte has known something was not right for some time. According to this source, the company sent out a “mandatory password reset” email on Oct. 13, 2016 to all Deloitte employees in the United States. The notice stated that employee passwords and personal identification numbers (PINs) needed to be changed by Oct. 17, 2016, and that employees who failed to do so would be unable to access email or other Deloitte applications. The message also included advice on how to pick complex passwords:

A screen shot of the mandatory password reset email Deloitte sent to all U.S. employees in Oct. 2016, around the time sources say the breach was first discovered.

The source told KrebsOnSecurity they were coming forward with information about the breach because, “I think it’s unfortunate how we have handled this and swept it under the rug. It wasn’t a small amount of emails like reported. They accessed the entire email database and all admin accounts. But we never notified our advisory clients or our cyber intel clients.”

“Cyber intel” refers to Deloitte’s Cyber Intelligence Centre, which provides 24/7 “business-focused operational security” to a number of big companies, including CSAA Insurance, FedEx, Invesco, and St. Joseph’s Healthcare System, among others.

This same source said forensic investigators identified several gigabytes of data being exfiltrated to a server in the United Kingdom. The source further said the hackers had free rein in the network for “a long time” and that the company still does not know exactly how much total data was taken.

In its statement about the incident, Deloitte said it responded by “implementing its comprehensive security protocol and initiating an intensive and thorough review which included mobilizing a team of cyber-security and confidentiality experts inside and outside of Deloitte.” Additionally, the company said it contacted governmental authorities immediately after it became aware of the incident, and that it contacted each of the “very few clients impacted.”

“Deloitte remains deeply committed to ensuring that its cyber-security defenses are best in class, to investing heavily in protecting confidential information and to continually reviewing and enhancing cyber security,” the statement concludes.

Deloitte has not yet responded to follow-up requests for comment. The Guardian reported that Deloitte notified six affected clients, but Deloitte has not yet said publicly when it notified those customers.

Deloitte has a significant cybersecurity consulting practice globally, wherein it advises many of its clients on how best to secure their systems and sensitive data from hackers. In 2012, Deloitte was ranked #1 globally in security consulting based on revenue.

Deloitte refers to one or more member firms of Deloitte Touche Tohmatsu Limited, a private company based in the United Kingdom. According to the company’s Web site, Deloitte has more than 263,000 employees at member firms delivering services in audit and assurance, tax, consulting, financial advisory, risk advisory, and related services in more than 150 countries and territories. Revenues for the fiscal year 2017 were $38.8 billion.

The breach at the big-four accountancy comes on the heels of a massive breach at big-three consumer credit bureau Equifax. That incident involved several months of unauthorized access in which intruders stole Social Security numbers, birth dates, and addresses on 143 million Americans.

This is a developing story. Any updates will be posted as available, and noted with update timestamps.

Read more here:: KrebsOnSecurity

Loihi is Intel’s brainy chip designed to outthink your PC’s Core CPU

Intel is reportedly preparing to fabricate “Loihi,” a self-learning “brain chip” that mimics how the human intellect functions, as a foundation for further developments in artificial intelligence.

Intel said in a statement Monday that Loihi, named after an active undersea volcano south of the island of Hawaii, includes a total of 130,000 silicon “neurons” connected by 130 million “synapses,” the junctions that connect neurons within the human brain. The Loihi chip, which Wired reported will be manufactured next month on Intel’s 14-nm process technology, will be shared with leading universities and research institutions next year in a bid to advance AI development, Intel said.

Read more here:: IT news – Hardware Systems

macOS High Sierra: How to turn off website tracking in Safari 11

Ever do something on the web, like shop for shoes, and then notice that every other website you visit has ads for shoes? That’s the result of website tracking. It’s a little creepy, the idea that you’re essentially being followed on the web, targeted with advertising. (And yes, Macworld.com is guilty as charged.)

Apple’s Safari uses WebKit as its engine to present websites through a browser window. WebKit has features to reduce the amount of tracking a site does, and the latest feature is called Intelligent Tracking Prevention. ITP cuts down on the ability of a site to do cross-site tracking, which leads to the experience I mentioned above.

Read more here:: IT news – Internet

macOS High Sierra: How to use Reader mode in Safari 11

Safari’s Reader mode is a way for users to peruse a webpage without distractions from ads, videos, sponsored content links, and other web elements you may not consider essential to the article you are reading.

In Safari 10 (the version that comes with macOS Sierra), Reader mode has to be enabled manually (View > Show Reader or press Shift-Command-R). With Safari 11 in macOS High Sierra and Sierra, you can set the browser to open most pages in Reader mode. (Some pages, like the homepage of news sites, can’t be opened in Reader mode. But the individual articles can.)

Here’s how to set Reader mode in Safari 11 so that any Reader-compatible pages on a particular website open automatically in Reader mode when you visit them.

Read more here:: IT news – Internet

Canadian Man Gets 9 Months Detention for Serial Swattings, Bomb Threats

A 19-year-old Canadian man was found guilty of making almost three dozen fraudulent calls to emergency services across North America in 2013 and 2014. The false alarms, two of which targeted this author, involved phoning in phony bomb threats and multiple attempts at “swatting,” a dangerous hoax in which the perpetrator spoofs a call about a hostage situation or other violent crime in progress in the hopes of tricking police into responding at a particular address with deadly force.

Curtis Gervais of Ottawa was 16 when he began his swatting spree, which prompted police departments across the United States and Canada to respond to fake bomb threats and active shooter reports at a number of schools and residences.

Gervais, who taunted swatting targets using the Twitter accounts “ProbablyOnion” and “ProbablyOnion2,” got such a high off of his escapades that he hung out a for-hire shingle on Twitter, offering to swat anyone with the following tweet:

Several Twitter users apparently took him up on that offer. On March 9, 2014, @ProbablyOnion started sending me rude and annoying messages on Twitter. A month later (and several weeks after blocking him on Twitter), I received a phone call from the local police department. It was early in the morning on Apr. 10, and the cops wanted to know if everything was okay at our address.

Since this was not the first time someone had called in a fake hostage situation at my home, the call I received came from the police department’s non-emergency number, and they were unsurprised when I told them that the Krebs manor and all of its inhabitants were just fine.

Minutes after my local police department received that fake notification, @ProbablyOnion was bragging on Twitter about swatting me, including me on his public messages: “You have 5 hostages? And you will kill 1 hostage every 6 times and the police have 25 minutes to get you $100k in clear plastic.” Another message read: “Good morning! Just dispatched a swat team to your house, they didn’t even call you this time, hahaha.”

I told this user privately that targeting an investigative reporter maybe wasn’t the brightest idea, and that he was likely to wind up in jail soon. On May 7, @ProbablyOnion tried to get the swat team to visit my home again, and once again without success. “How’s your door?” he tweeted. I replied: “Door’s fine, Curtis. But I’m guessing yours won’t be soon. Nice opsec!”

I was referring to a document that had just been leaked on Pastebin, which identified @ProbablyOnion as a 19-year-old named Curtis Gervais from Ontario. @ProbablyOnion laughed it off but didn’t deny the accuracy of the information, except to tweet that the document got his age wrong.

A day later, @ProbablyOnion would post his final tweet before being arrested: “Still awaiting for the horsies to bash down my door,” a taunting reference to the Royal Canadian Mounted Police (RCMP).

A Sept. 14, 2017 article in the Ottawa Citizen doesn’t name Gervais because it is against the law in Canada to name individuals charged with or convicted of crimes committed while they are a minor. But the story quite clearly refers to Gervais, who reportedly is now married and expecting a child.

The Citizen says the teenager was arrested by Ottawa police after the U.S. FBI traced his Internet address to his parents’ home. The story notes that “the hacker” and his family have maintained his innocence throughout the trial, and that they plan to appeal the verdict. Gervais’ attorneys reportedly claimed the youth was framed by the hacker collective Anonymous, but the judge in the case was unconvinced.

Apparently, Ontario Court Justice Mitch Hoffman handed down a lenient sentence in part because of more than 900 hours of volunteer service the accused had performed in recent years. From the story:

Hoffman said that troublesome 16-year-old was hard to reconcile with the 19-year-old, recently married and soon-to-be father who stood in court before him, accompanied in court Thursday by his wife, father and mother.

“He has a bright future ahead of him if he uses his high level of computer skills and high intellect in a pro-social way,” Hoffman said. “If he does not, he has a penitentiary cell waiting for him if he uses his skills to criminal ends.”

According to the article, the teen will serve six months of his nine-month sentence at a youth group home and three months at home “under strict restrictions, including the forfeiture of a home computer used to carry out the cyber pranks.” He also is barred from using Twitter or Skype during his 18-month probation period.

Most people involved in swatting and making bomb threats are young males under the age of 18 — the age when kids seem to have little appreciation for or care about the seriousness of their actions. According to the FBI, each swatting incident costs emergency responders approximately $10,000. Each hoax also unnecessarily endangers the lives of the responders and the public.

In February 2017, another 19-year-old — a man from Long Beach, Calif. named Eric “Cosmo the God” Taylor — was sentenced to three years’ probation for his role in swatting my home in Northern Virginia in 2013. Taylor was among several men involved in making a false report to my local police department at the time about a supposed hostage situation at our house. In response, a heavily-armed police force surrounded my home and put me in handcuffs at gunpoint before the police realized it was all a dangerous hoax.

Read more here:: KrebsOnSecurity

SAP buys Gigya to boost customer identity access management offering

SAP is giving its business software users a new way to track their customers, with the acquisition of customer identity management specialist Gigya.

If your business isn’t already a Gigya customer, then you’re most likely to have seen its name flickering in your browser’s status or address bar as you log in to consumer websites. Banks, hotel chains, media companies and e-commerce stores use its opt-in registration service to track their customers’ identities and provide them access to services.

SAP wants to combine the 1.3 billion identities Gigya holds with the data-matching capabilities of SAP Hybris Profile, the multichannel customer profiling module of its Hybris e-commerce platform. The two companies have been testing the combination on a smaller scale since 2013, when they began offering an integration service for businesses that were joint clients.

Read more here:: IT news – Internet

Unmetered Mitigation: DDoS Protection Without Limits

This is the week of Cloudflare’s seventh birthday. It’s become a tradition for us to announce a series of products each day of this week and bring major new benefits to our customers. We’re beginning with one I’m especially proud of: Unmetered Mitigation.

Cloudflare runs one of the largest networks in the world. One of our key services is DDoS mitigation and we deflect a new DDoS attack aimed at our customers every three minutes. We do this with over 15 terabits per second of DDoS mitigation capacity. That’s more than the publicly announced capacity of every other DDoS mitigation service we’re aware of combined. And we’re continuing to invest in our network to expand capacity at an accelerating rate.

Surge Pricing

Virtually every Cloudflare competitor will send you a bigger bill if you are unlucky enough to get targeted by an attack. We’ve seen examples of small businesses that survive massive attacks to then be crippled by the bills other DDoS mitigation vendors sent them. From the beginning of Cloudflare’s history, it never felt right that you should have to pay more if you came under an attack. That feels barely a step above extortion.

With today’s announcement we are eliminating this industry standard of ‘surge pricing’ for DDoS attacks. Why should customers pay more just to defend themselves? Charging more when the customer is experiencing a painful attack feels wrong; just as surge pricing when it rains hurts ride-sharing customers when they need a ride the most.

End of the FINT

That said, from our early days, we would sometimes fail customers off our network if the size of an attack they received got large enough that it affected other customers. Internally, we referred to this as FINTing (for Fail INTernal) a customer.

The standards for when a customer would get FINTed were situation dependent. We had rough thresholds depending on what plan they were on, but the general rule was to keep a customer online unless the size of the attack impacted other customers. For customers on higher tiered plans, when our automated systems didn’t handle the attacks themselves, our technical operations team could take manual steps to protect them.

Every morning I receive a list of all the customers that were FINTed the day before. Over the last four years the number of FINTs has dwindled. The reality is that our network today is at such a scale that we are able to mitigate even the largest DDoS attacks without it impacting other customers. This is almost always handled automatically. And, when manual intervention is required, our techops team has gotten skilled enough that it isn’t overly taxing.

Aligning With Our Customers

So today, on the first day of our Birthday Week celebration, we make it official for all our customers: Cloudflare will no longer terminate customers, regardless of the size of the DDoS attacks they receive, regardless of the plan level they use. And, unlike the prevailing practice in the industry, we will never jack up your bill after the attack. Doing so, frankly, is perverse.

We call this Unmetered Mitigation. It stems from a basic idea: you shouldn’t have to pay more to be protected from bullies who try and silence you online. Regardless of what Cloudflare plan you use — Free, Pro, Business, or Enterprise — we will never tell you to go away or that you need to pay us more because of the size of an attack.

Cloudflare’s higher tier plans will continue to offer more sophisticated reporting, tools, and customer support to better tune our protections against whatever threats you face online. But volumetric DDoS mitigation is now officially unlimited and unmetered.

Setting the New Standard

Back in 2014, during Cloudflare’s birthday week, we announced that we were making encryption free for all our customers. We did it because it was the right thing to do and we’d finally developed the technical systems we needed to do it at scale. At the time, people said we were crazy. I’m proud of the fact that, three years later, the rest of the industry has followed our lead and encryption by default has become the standard.

I’m hopeful the same will happen with DDoS mitigation. If the rest of the industry moves away from the practice of surge pricing and builds DDoS mitigation in by default then it would largely end DDoS attacks for good. We took a step down that path today and hope, like with encryption, the rest of the industry will follow.

Want to know more? Read No Scrubs: The Architecture That Made Unmetered Mitigation Possible and Meet Gatebot – a bot that allows us to sleep.

Read more here:: CloudFlare

No Scrubs: The Architecture That Made Unmetered Mitigation Possible

When building a DDoS mitigation service it’s incredibly tempting to think that the solution is scrubbing centers or scrubbing servers. I, too, thought that was a good idea in the beginning, but experience has shown that there are serious pitfalls to this approach.

A scrubbing server is a dedicated machine that receives all network traffic destined for an IP address and attempts to filter good traffic from bad. Ideally, the scrubbing server will only forward non-DDoS packets to the Internet application being attacked. A scrubbing center is a dedicated location filled with scrubbing servers.

Three Problems With Scrubbers

The three most pressing problems with scrubbing are bandwidth, cost, and knowledge.

The bandwidth problem is easy to see. As DDoS attacks have scaled past 1 Tbps, having that much network capacity available is problematic. Provisioning and maintaining multiple terabits per second of bandwidth for DDoS mitigation is expensive and complicated. And it needs to be located in the right place on the Internet to receive and absorb an attack. If it’s not, then attack traffic will need to be received at one location, scrubbed, and then clean traffic forwarded to the real server; with a limited number of locations, that can introduce enormous delays.

Imagine for a moment you’ve built a small number of scrubbing centers, and each center is connected to the Internet with many Gbps of connectivity. When a DDoS attack occurs that center needs to be able to handle potentially 100s of Gbps of attack traffic at line rate. That means exotic network and server hardware. Everything from the line cards in routers, to the network adapter cards in the servers, to the servers themselves is going to be very expensive.

This (and bandwidth above) is one of the reasons DDoS mitigation has traditionally cost so much and been billed by attack size.

The final problem, knowledge, is the most easily overlooked. When you set out to build a scrubbing server you are building something that has to separate good packets from bad.

At first this seems easy (let’s filter out all TCP ACK packets for non-established connections, for example), and low-level engineers are easy to excite about writing high-performance code to do that. But attackers are not stupid: they’ll throw legitimate-looking traffic at a scrubbing server, and it gets harder and harder to distinguish good from bad.
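For instance, the “easy” first cut is often a single stateful firewall rule like the sketch below, expressed here through iptables from Python purely for illustration. This is not any vendor’s actual scrubbing code, it assumes root and the conntrack match module, and a determined attacker can craft traffic that sails right past it:

    # Naive first attempt: drop TCP packets that claim to belong to a
    # connection the kernel's connection tracker has never seen.
    # Illustrative only -- requires root and the conntrack match module.
    import subprocess

    subprocess.run(
        ["iptables", "-A", "INPUT", "-p", "tcp", "!", "--syn",
         "-m", "conntrack", "--ctstate", "NEW", "-j", "DROP"],
        check=True,
    )

Rules like this stop the crudest out-of-state ACK floods, but as soon as the attacker completes real handshakes or replays plausible application traffic, stateless and connection-level checks stop being enough, which is exactly the knowledge problem described here.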

At that point, scrubbing engineers need to become protocol experts at all levels of the stack. That means you have to build a competency in all levels of TCP/IP, DNS, HTTP, TLS, etc. And that’s hard.

The bottom line is scrubbing centers and exotic hardware are great marketing. But, like citadels of medieval times, they are monumentally expensive and outdated, overwhelmed by better weapons and warfighting techniques.

And many DDoS mitigation services that use scrubbing centers operate in an offline mode. They are only enabled when a DDoS occurs. This typically means that an Internet application will succumb to the DDoS attack before its traffic is diverted to the scrubbing center.

Just imagine citizens fleeing to hide behind the walls of the citadel under fire from an approaching army.

Better, Cheaper, Smarter

There’s a subtler point about not having dedicated scrubbers: it forces us to build better software. If a scrubbing server becomes overwhelmed or fails then only the customer being scrubbed is affected, but when the mitigation happens on the very servers running the core service it has to work and be effective.

I spoke above about the ‘knowledge gap’ that comes about with dedicated DDoS scrubbing. The Cloudflare approach means that if bad traffic gets through, say a flood of bad DNS packets, then it reaches a service owned and operated by people who are experts in that domain. If a DNS flood gets through our DDoS protection, it hits our custom DNS server, RRDNS, and the engineers who work on it can bring their expertise to bear.

This makes an enormous difference because the result is either improved DDoS scrubbing or a change to the software (e.g. the DNS stack) that improves its performance under load. We’ve lived that story many, many times and the entire software stack has improved because of it.

The approach Cloudflare took to DDoS mitigation is rather simple: make every single server in Cloudflare participate in mitigation, load balance DDoS attacks across the data centers and servers within them and then apply smarts to the handling of packets. These are the same servers, processors and cores handling our entire service.

Eliminating scrubbing centers and hardware completely changes the cost of building a DDoS mitigation service.

We currently have around 15 Tbps of network capacity worldwide, but this capacity doesn’t require exotic network hardware. We are able to use low-cost or commodity networking equipment bound together using network automation to handle normal and DDoS traffic. Just as Google originally built its service by writing software that tied commodity servers together into a super (search) computer, our architecture binds commodity servers together into one giant network device.

By building the world’s most peered network we’ve built this capacity at reasonable cost and more importantly are able to handle attack traffic globally wherever it originates with low latency links. No scrubbing solution is able to say the same.

And because Cloudflare manages DNS for our customers and uses an Anycast network, attack traffic originating from botnets is automatically distributed across our global network. Each data center deals with a portion of DDoS traffic.

Within each data center DDoS traffic is load balanced across multiple servers running our service. Each server handles a portion of the DDoS traffic. This spreading of DDoS traffic means that a single DDoS attack will be handled by a large number of individual servers across the world.

And as Cloudflare grows our DDoS mitigation capacity grows automatically, and because our DDoS mitigation is built into our stack it is always on. We mitigate a new DDoS attack every three minutes with no downtime for Internet applications and have no need to ‘switch over’ to a scrubbing center.

Inside a Server

Once all this global and local load balancing has occurred packets do finally hit a network adapter card in a server. It’s here that Cloudflare’s custom DDoS mitigation stack comes into play.

Over the years we’ve learned how to automatically detect and mitigate anything the internet can throw at us. For most attacks, we rely on dynamically managing iptables, the standard Linux firewall. We’ve spoken about the most effective techniques in the past. iptables has a number of very powerful features, which we select depending on the specific attack vector. In our experience, xt_bpf, ipset, hashlimit and connlimit are the most useful iptables modules.
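To make that concrete, here is a minimal sketch of what dynamically managed rules built on those modules can look like. This is not Cloudflare’s actual tooling; the set names, ports and thresholds are invented for illustration, and it assumes root plus stock iptables and ipset binaries:

    # Illustrative sketch only -- not Cloudflare's code.
    import subprocess

    def run(cmd):
        # Apply one firewall command, raising if it fails.
        subprocess.run(cmd, check=True)

    def block_sources(ips, set_name="attack_sources"):
        # ipset: one iptables rule can match a large, dynamically updated
        # set of attacking IPs instead of thousands of individual rules.
        run(["ipset", "create", set_name, "hash:ip", "-exist"])
        for ip in ips:
            run(["ipset", "add", set_name, ip, "-exist"])
        run(["iptables", "-I", "INPUT", "-m", "set",
             "--match-set", set_name, "src", "-j", "DROP"])

    def rate_limit_dns_flood(limit="10000/second"):
        # hashlimit: drop sources exceeding a per-source packet rate,
        # here applied to a hypothetical UDP/53 flood.
        run(["iptables", "-I", "INPUT", "-p", "udp", "--dport", "53",
             "-m", "hashlimit", "--hashlimit-above", limit,
             "--hashlimit-mode", "srcip", "--hashlimit-name", "dns_flood",
             "-j", "DROP"])

The same pattern extends to xt_bpf (attaching a compiled BPF filter to a rule) and connlimit (capping concurrent connections per source); the point is that rules are generated, applied and withdrawn programmatically as an attack evolves, rather than maintained by hand.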

For very large attacks, though, the Linux kernel is not fast enough. To relieve the kernel from processing an excessive number of packets, we experimented with various kernel bypass techniques and settled on a partial kernel bypass interface: Solarflare’s EF_VI.

With EF_VI we can offload the processing of our firewall rules to a user-space program, and we can easily process millions of packets per second on each server while keeping CPU usage low. This allows us to withstand the largest attacks without affecting our multi-tenant service.

Open Source

Cloudflare’s vision is to help build a better Internet. Fixing DDoS is a part of it. While we can’t do much about the bandwidth and cost needed to operate on the Internet, we can, and do, help with the knowledge gap. We’ve been relentlessly documenting the most important and dangerous attacks we’ve encountered, fighting botnets and open sourcing critical pieces of our DDoS infrastructure.

We’ve open sourced various tools, from very low-level projects like our BPF Tools, which we use to fight DNS and SYN floods, to contributions to OpenResty, a performant application framework on top of NGINX that is great for building L7 defenses.

Further Reading

Cloudflare has written a great deal about DDoS mitigation in the past. Some example blog posts: How Cloudflare’s Architecture Allows Us to Scale to Stop the Largest Attacks, Reflections on reflection (attacks), The Daily DDoS: Ten Days of Massive Attacks, and The Internet is Hostile: Building a More Resilient Network.

And if you want to go deeper, my colleague Marek Majkowski dives into the code we use for DDoS mitigation.

Conclusion

Cloudflare’s DDoS mitigation architecture and custom software make Unmetered Mitigation possible. With them we can withstand the largest DDoS attacks, and as our network grows, our DDoS mitigation capability grows with it.

Read more here:: CloudFlare

Meet Gatebot – a bot that allows us to sleep

In the past, we’ve spoken about how Cloudflare is architected to sustain the largest DDoS attacks. During traffic surges we spread the traffic across a very large number of edge servers. This architecture allows us to avoid having a single choke point because the traffic gets distributed externally across multiple datacenters and internally across multiple servers. We do that by employing Anycast and ECMP.

We don’t use separate scrubbing boxes or specialized hardware – every one of our edge servers can perform advanced traffic filtering if the need arises. This allows us to scale up our DDoS capacity as we grow. Each of the new servers we add to our datacenters increases our maximum theoretical DDoS “scrubbing” power. It also scales down nicely – in smaller datacenters we don’t have to overinvest in expensive dedicated hardware.

During normal operations our attitude to attacks is rather pragmatic. Since the inbound traffic is distributed across hundreds of servers we can survive periodic spikes and small attacks without doing anything. Vanilla Linux is remarkably resilient against unexpected network events. This is especially true since kernel 4.4 when the performance of SYN cookies was greatly improved.

But at some point, malicious traffic volume can become so large that we must take the load off the networking stack. We have to minimize the amount of CPU spent on dealing with attack packets. Cloudflare operates a multi-tenant service and we must always have enough processing power to serve valid traffic. We can’t afford to starve our HTTP proxy (nginx) or custom DNS server (named RRDNS, written in Go) of CPU. When the attack size crosses a predefined threshold (which varies greatly depending on specific attack type), we must intervene.

Mitigations

During large attacks we deploy mitigations to reduce the CPU consumed by malicious traffic. We have multiple layers of defense, each tuned to a specific attack vector.

First, there is “scattering”. Since we control DNS resolution we are able to move the domains we serve between IP addresses (we call this “scattering”). This is an effective technique as long as the attack doesn’t follow the updated DNS resolutions, which is often the case for L3 attacks where the attacker has hardcoded the IP address of the target.
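Conceptually, scattering is little more than re-publishing DNS so that the attacked hostname resolves into a different slice of address space. The sketch below is purely illustrative; the pool names, documentation-range addresses and publish_record callback are hypothetical, not how our DNS pipeline is actually implemented:

    # Conceptual sketch of "scattering" -- hypothetical names and data.
    import random

    ADDRESS_POOLS = {
        "pool-a": ["198.51.100.10", "198.51.100.11"],
        "pool-b": ["203.0.113.20", "203.0.113.21"],
    }
    current_pool = {"www.example.com": "pool-a"}

    def scatter(hostname, publish_record):
        # Move the attacked hostname off the pool currently being hit and
        # publish fresh A records through the authoritative DNS.
        attacked = current_pool[hostname]
        new_pool = random.choice([p for p in ADDRESS_POOLS if p != attacked])
        current_pool[hostname] = new_pool
        for ip in ADDRESS_POOLS[new_pool]:
            publish_record(hostname, "A", ip)
        return new_pool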

Next, there is a wide range of mitigation techniques that leverage iptables, the firewall built into the Linux kernel. But we don’t use it like a conventional firewall, with a static set of rules. We continuously add, tweak and remove rules based on specific attack characteristics. Over the years we have mastered the most effective iptables extensions:

  • xt_bpf
  • ipsets
  • hashlimits
  • connlimit

To make the most of iptables, we built a system to manage the iptables configuration across our entire fleet, allowing us to rapidly deploy rules everywhere. This fits our architecture nicely: due to Anycast, an attack against a single IP will be delivered to multiple locations. Running iptables rules for that IP on all servers makes sense.

Using stock iptables gives us plenty of confidence. When possible we prefer to use off-the-shelf tools to deal with attacks.

Sometimes, though, even this is not sufficient. iptables is fast in the general case, but it has its limits. During very large attacks, exceeding 1M packets per second per server, we shift the attack traffic from kernel iptables to a kernel-bypass user space program (which we call floodgate). We use a partial kernel bypass solution built on the Solarflare EF_VI interface. With this, each server can process more than 5M attack packets per second while consuming only a single CPU core. With floodgate we have a comfortable amount of CPU left for our applications, even during the largest network events.
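The decision about where a given flood should be handled can be thought of as a simple escalation ladder. The sketch below is a loose illustration with made-up function names; the thresholds echo the figures above but are not floodgate’s real interface or configuration:

    # Loose illustration of the escalation ladder -- names here are
    # assumptions, not Cloudflare's actual tooling.
    IPTABLES_LIMIT_PPS = 1_000_000   # roughly where kernel filtering maxes out
    FLOODGATE_LIMIT_PPS = 5_000_000  # what a single bypass core can absorb

    def choose_layer(measured_pps):
        # Small floods: the kernel firewall is cheap and sufficient.
        if measured_pps < IPTABLES_LIMIT_PPS:
            return "iptables"
        # Big floods: hand the matching traffic to the user-space
        # kernel-bypass program so the kernel never touches it.
        if measured_pps < FLOODGATE_LIMIT_PPS:
            return "floodgate"
        # Beyond that, rely on the global load balancing described earlier
        # to spread the attack across more servers and data centers.
        return "floodgate-plus-rebalance"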

Finally, there are a number of tweaks we can make at the HTTP layer. For specific attacks we disable HTTP keep-alives, forcing attackers to re-establish TCP sessions for each request. This sacrifices a bit of performance for valid traffic as well, but it is a surprisingly powerful tool for throttling many attacks. For other attack patterns we turn the “I’m under attack” mode on, forcing attacking clients to hit our JavaScript challenge page.
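As a toy illustration of the keep-alive lever only (this uses Python’s standard http.server rather than our nginx-based proxy, and the under_attack flag is a stand-in for whatever signal flips the mode on):

    # Toy illustration using the standard library, not Cloudflare's proxy.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    under_attack = True  # hypothetical toggle fed by attack detection

    class Handler(BaseHTTPRequestHandler):
        protocol_version = "HTTP/1.1"  # keep-alives on by default

        def do_GET(self):
            body = b"hello\n"
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            if under_attack:
                # Force every request to pay for a fresh TCP handshake.
                self.send_header("Connection", "close")
                self.close_connection = True
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()

In exchange for a little extra latency on legitimate keep-alive traffic, every attacking client now has to complete a TCP handshake per request, which is the throttling effect described above.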

Manual attack handling

Early on, these mitigations were applied manually by our tireless SREs. Unfortunately, it turns out that humans under stress… well, make mistakes. We learned this the hard way – one of the most famous incidents happened in March 2013, when a simple typo brought our whole network down.

Humans are also not great at applying precise rules. As our systems grew and mitigations became more complex, with many specific toggles, our SREs got overwhelmed by the details. It was challenging to present all the specific information about an attack to the operator, and we often applied overly broad mitigations that unnecessarily affected legitimate traffic. All that changed with the introduction of Gatebot.

Meet Gatebot

To aid our SREs we developed a fully automatic mitigation system. We call it Gatebot[1].

The main goal of Gatebot was to automate as much of the mitigation workflow as possible. That means: to observe the network and note the anomalies, understand the targets of attacks and their metadata (such as the type of customer involved), and perform appropriate mitigation action.

Nowadays we have multiple Gatebot instances – we call them “mitigation pipelines”. Each pipeline has three parts (sketched in code after the three parts below):

1) “attack detection” or “signal” – A dedicated system detects anomalies in network traffic. This is usually done by sampling a small fraction of the network packets hitting our network, and analyzing them using streaming algorithms. With this we have a real-time view of the current status of the network. This part of the stack is written in Golang, and even though it only examines the sampled packets, it’s pretty CPU intensive. It might comfort you to know that at this very moment two big Xeon servers burn all of their combined 48 Skylake CPU cores toiling away counting packets and performing sophisticated analytics looking for attacks.

2) “reactive automation” or “business logic”. For each anomaly (attack) we determine who the target is, whether we can mitigate it, and with what parameters. Depending on the specific pipeline, the business logic may be anything from a trivial procedure to a multi-step process requiring a number of database lookups and potentially confirmation from a human operator. This code is not performance critical and is written in Python. To make it more accessible and readable by others in the company, we developed a simple functional, reactive programming engine. It helps us keep the code clean and understandable, even as we add more steps, more pipelines and more complex logic. To give you a flavour of the complexity: imagine how the system should behave if a customer upgrades their plan during an attack.

3) “mitigation”. The previous step feeds specific mitigation instructions into the centralized mitigation management systems. The mitigations are deployed across the world to our servers, applications, customer settings and, in some cases, to the network hardware.
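Stitched together, a pipeline of that shape might look roughly like the sketch below. Every name, threshold and sample rate here is hypothetical; it is meant only to show how the three parts hand off to one another, not Gatebot’s real code:

    # Hypothetical end-to-end sketch of a detection/logic/mitigation pipeline.
    from collections import Counter

    SAMPLE_RATE = 1 / 8192           # fraction of packets actually inspected
    ATTACK_THRESHOLD_PPS = 500_000   # estimated rate that triggers action

    def detect(sampled_packets, window_seconds):
        # 1) "signal": estimate per-destination packet rates from samples.
        per_target = Counter(pkt["dst_ip"] for pkt in sampled_packets)
        estimated = {ip: count / SAMPLE_RATE / window_seconds
                     for ip, count in per_target.items()}
        return [ip for ip, pps in estimated.items()
                if pps > ATTACK_THRESHOLD_PPS]

    def decide(target_ip, lookup_customer):
        # 2) "business logic": map the target to a customer and pick a
        #    mitigation appropriate to their configuration; real pipelines
        #    may also require confirmation from a human operator.
        customer = lookup_customer(target_ip)
        if customer is None:
            return None
        return {"action": "rate_limit", "target": target_ip,
                "plan": customer["plan"]}

    def mitigate(instruction, deploy):
        # 3) "mitigation": hand the instruction to the centralized system
        #    that pushes rules out to every server (and, in some cases,
        #    to network hardware).
        if instruction is not None:
            deploy(instruction)

The separation matters: the detection stage can stay fast and simple, while the awkward, customer-specific questions live in the business-logic stage.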

Sleeping at night

Gatebot operates constantly, without breaks for lunch. For the iptables mitigation pipelines alone, Gatebot got engaged between 30 and 1,500 times a day. Here is a chart of mitigations per day over the last six months:

Gatebot is much faster and much more precise than even our most experienced SREs. Without Gatebot we wouldn’t be able to operate our service with the appropriate level of confidence. Furthermore, Gatebot has proved to be remarkably adaptable – we started by automating handling of Layer 3 attacks, but soon we proved that the general model works well for automating other things. Today we have more than 10 separate Gatebot instances doing everything from mitigating Layer 7 attacks to informing our Customer Support team of misbehaving customer origin servers.

Since Gatebot’s inception we have learned a great deal from the “detection / logic / mitigation” workflow. We reused this model in our Automatic Network System, which is used to relieve network congestion[2].

Gatebot allows us to protect our users no matter what plan they are on. Whether you are on a Free, Pro, Business or Enterprise plan, Gatebot is working for you. This is why we can afford to provide the same level of DDoS protection for all our customers[3].


Does dealing with attacks sound interesting? Join our world-famous DDoS team in London, Austin, San Francisco and our elite office in Warsaw, Poland.


  1. Fun fact: all our components in this area are called “gate-something”, like: gatekeeper, gatesetter, floodgate, gatewatcher, gateman… Who said that naming things must be hard?

  2. Some of us have argued that this system should be called Netbot.

  3. Note: there are caveats. Ask your Success Engineer for specifics!

Read more here:: CloudFlare

Additional information regarding the recent CCleaner APT security incident

We would like to update our customers and the general public on the latest findings in the investigation of the recent CCleaner security incident. As published in our previous blog posts (here and here), analysis of the CnC server showed that the incident was in fact an Advanced Persistent Threat (APT) attack targeting specific high-tech and telecommunications companies. That is, despite the fact that CCleaner is a consumer product, the purpose was not to go after consumers and their data; instead, CCleaner customers were used to gain access to the corporate networks of select large enterprises.

Read more here:: Avast