Dangers of accidental open access to applications and databases.

Author: Keshav Kamble

Sophisticated breaches require real effort on the part of hackers, but some of the easiest breaches stem from unfortunate events in which applications, or parts of applications, are accidentally left open to public access. In today's era of intense monitoring, enterprise IT deployments are routinely scanned in real time by illegitimate scanners and attackers. The smallest mistake, such as leaving an application or database in its default configuration, can therefore expose the entire enterprise to outside attacks and breaches.
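To make the point concrete, here is a minimal, hypothetical sketch of a configuration audit that flags services still running with risky out-of-the-box settings. The service names and configuration fields are illustrative assumptions, not any real product's schema:

```python
# Illustrative sketch: flag services that still match risky default settings.
# Service names and config keys are hypothetical examples.

RISKY_DEFAULTS = {
    "mongodb": {"auth_enabled": False, "bind_ip": "0.0.0.0"},
    "tomcat":  {"manager_app_exposed": True, "default_credentials": True},
}

def audit(service: str, config: dict) -> list:
    """Return findings where the live config still matches a risky default."""
    findings = []
    for key, risky_value in RISKY_DEFAULTS.get(service, {}).items():
        if config.get(key) == risky_value:
            findings.append(f"{service}: '{key}' is still at its risky default ({risky_value!r})")
    return findings

# A MongoDB instance left open to the world, with authentication disabled:
print(audit("mongodb", {"auth_enabled": False, "bind_ip": "0.0.0.0"}))
```

A scheduled check of this kind, run against every deployed instance, is one simple guard against the "left in default configuration" mistakes described above.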

The graph above, published in a Verizon Enterprise Services study, shows the number of breaches that occurred in the 2017-18 time frame by industry sector. Clearly, not all of these attacks were due to misconfiguration or accidental exposure. What is more telling is the intensity of effort attackers undertook to breach enterprises. The study also shows the level of difficulty of each hack and the length of time each breach went undetected.

A staggering 87 percent of the breaches took just minutes to penetrate and 68 percent of the breaches went undiscovered for months or more.

This comprehensive study points to one problem: the lack of real-time threat monitoring and interception at, or inside, the application and database levels. As we all know, most high-profile breaches occurred despite strong perimeter security.

Thoughts and analysis

As a CISO or a CIO, I am sure the Verizon Enterprise study is quite familiar to you, but at the same time it causes tremendous discomfort. At points, it might even create doubts about your own preparedness.

At Avocado Systems research labs, we performed some interesting experiments on a public cloud, and the observations are equally interesting. For the experiment, the lab deployed a multi-tier application ecosystem involving Tomcat servers with Java applications and two layers of databases, in both clustered and single instances. All applications were enabled for protection and monitoring by the Avocado Security Platform, but certain parts of select applications were configured, via micro-policy actions, to be monitored only rather than protected. The Avocado Security Orchestrator generated real-time maps of access to these applications by nefarious clients all across the world, along with real-time maps of nefarious attempts to penetrate the protected applications.
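The distinction between "monitor only" and "protect" actions can be sketched as a tiny policy engine. Avocado's actual micro-policies are proprietary, so the names and structure below are purely illustrative assumptions:

```python
# Hypothetical sketch of the two micro-policy modes: "monitor" records an
# access but lets it through; "protect" records it and blocks it if it is
# not sanctioned. Names are illustrative, not Avocado's real API.

from dataclasses import dataclass, field

@dataclass
class MicroPolicy:
    surface: str                      # e.g. a Tomcat servlet or a MySQL sub-component
    action: str                       # "monitor" or "protect"
    events: list = field(default_factory=list)

    def on_access(self, client_ip: str, allowed: bool) -> str:
        self.events.append((client_ip, allowed))   # always record, for the access map
        if self.action == "protect" and not allowed:
            return "blocked"                        # intercept and mitigate
        return "observed"                           # monitor-only: let it through

tomcat_part = MicroPolicy("tomcat:/manager", action="monitor")
mysql_part  = MicroPolicy("mysql:replication", action="protect")

print(tomcat_part.on_access("203.0.113.9", allowed=False))  # observed
print(mysql_part.on_access("203.0.113.9", allowed=False))   # blocked
```

In the experiment described above, the monitor-only surfaces are what produced the world map of nefarious access attempts, while the protected surfaces intercepted them.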

So let me offer a few observations:

  1. Exposed parts of the Tomcat application were accessed by nefarious clients.

This is a real-time representation over a period of four days. All of these attempts were closely monitored by micro-policies applied to a part of the Tomcat application. The image clearly shows the devastating repercussions of mistakes in perimeter security, as well as of misconfigurations at the application level.

  2. In every instance, protected sub-components of MySQL and MongoDB intercepted and mitigated penetration attempts.

Attacks intercepted and mitigated by the MySQL database deployed in the UK:

Attacks intercepted and mitigated by the MongoS shard deployed in the UK:

Again, a picture is worth a thousand words! Avocado's innovative pico-segmentation, which brings the deepest-level threat monitoring and interception capabilities into applications, caught every attempt on every part of the databases and applications, despite intentional errors designed to give threats an opportunity to penetrate the web layers.

Application security and DevSecOps automation certainly provide you the capability to build real-time threat monitoring and response. It is time to reassess your organization's security methods and practices and adopt new methods of enterprise application security. Avocado is always here to help.

Innovation is our way to bring confidence to all the hardworking security professionals, business owners and executives.

For detailed reports of the study, please contact us at info@avocadosystems.net or call 1-844-778-7955.

Marriott-Starwood: How can a threat survive that long?

Author: Keshav Kamble

In yet another sensational announcement, Marriott International revealed that hackers breached its Starwood reservation system and stole the personal data of up to 500 million guests. The New York Times reported, “The assault started as far back as 2014, and was one of the largest known thefts of personal records, second only to a 2013 breach of Yahoo that affected three billion user accounts and larger than a 2017 episode involving the credit bureau Equifax.”

If you have not yet read the whole story, you can read it at the New York Times. Obviously, the FBI and many private forensics agencies are involved. Preliminary indications are that the threat had been doing its job since 2014 and remained active until September 2018, when it was detected due to unauthorized access to Starwood's registry. Starwood was acquired by Marriott for $13.6B in 2016.

Customers across the globe have started filing lawsuits against Marriott, including European customers under GDPR data-protection rules. At this rate, just as Equifax required $400M for even a partial cleanup of its breach, Marriott is looking at hundreds of millions of dollars in damages.

If you are a business executive, your second question would be: how on earth can a breach survive, undetected, that long? The breach could be state-sponsored intelligence collection, privately funded industrial espionage, an insider job, or even merely somebody's mind game. But still, how can the threat stay undetected for so long?

Thoughts and analysis

As a business executive, you have spent a large part of your professional life building and rebuilding your business to better serve your clients. A breach of this nature can destroy decades of goodwill and successful enterprise operations.

So let me offer a few observations:

  1. The malicious software (malware/Advanced Persistent Threat) might have been part of Starwood's enterprise IT ecosystem before the acquisition. Despite what I am sure was some of the best perimeter security money could buy, Starwood's security might have failed to detect it. While most enterprises focus on perimeter security, many do not have strong internal application security. Consequently, reliance on superior perimeter security alone can, in the end, be catastrophic.
  2. A polymorphic-malware-infested application ecosystem: this style of malware or virus architecture generates highly dynamic signatures. When the malware layers itself on top of the application, it becomes highly difficult to differentiate the real application from the malware. Because today's enterprises have very loosely defined application sanctioning and verification methods built into their IT security practice, they struggle to detect, much less remediate, this kind of threat.
  3. False positives in security screening: false positives and false negatives in threat interception go uninvestigated about 90 percent of the time, because investigating them requires expensive manual expertise. Most security methods deployed by enterprises lack deterministic threat interception that could reduce or eliminate false positives and automate the verification process.
  4. Last but not least: if administrative credentials were stolen, then Starwood's security methods could not stop data exfiltration, because the access was privileged. One of the biggest deficiencies in enterprise security today is the inability to detect access violations committed by applications with administrative permissions. How do you protect when credentials are stolen?
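On the last observation: one common mitigation is to stop trusting the credential alone and instead compare each session's behavior against a baseline for its role. The sketch below is a deliberately simplified illustration of that idea; the baselines and threshold factor are made-up numbers, not values from any real deployment:

```python
# Sketch: catch misuse of *valid* admin credentials by flagging sessions
# whose data egress far exceeds the role's historical baseline.
# Baselines and the threshold factor are illustrative assumptions.

BASELINE_EGRESS_MB = {"admin": 50, "app": 5}   # typical per-session egress by role

def exfiltration_suspected(role: str, egress_mb: float, factor: float = 10.0) -> bool:
    """Flag a session moving far more data than the role's baseline allows."""
    return egress_mb > BASELINE_EGRESS_MB.get(role, 1) * factor

# Valid admin credentials, but 400 GB pulled in one session:
print(exfiltration_suspected("admin", 400_000))   # True
```

A behavioral check like this would have raised an alarm on a bulk copy of a 500-million-guest reservation database regardless of how the credentials were obtained.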

The root cause will be known to the world after the forensics are analyzed and the attack footprint is identified. Detecting the source of the threat is a monumental task.

However, application security and DevSecOps automation certainly provide you the capability to build real-time threat response. The holiday season is fast approaching. It is time to reassess the security methods and practices of your enterprise. Avocado is always here to help.

Happy Holidays to all the hardworking security professionals, business owners and executives.

Step up your security posture with real time threat response.

Author: Keshav Kamble

Incident Response (IR) is a thought-provoking topic for discussions with CISOs, CIOs, and security architects. Consider a scenario where a burglary is in progress. The alarms are sounding and the monitoring cameras are recording the act in real time. Unfortunately, the burglars are masked men acting swiftly. They complete their crime and flee before the first responders even arrive at the scene, despite the fact that the response time (RT) is supposed to be under four minutes in parts of California.

Now consider the same scenario but the burglars are 10 million times faster and the response time is still a few minutes. Not much would be left to salvage.
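The arithmetic behind that analogy is worth making explicit: a human-scale response window shrunk by a factor of ten million lands in the microsecond range.

```python
# Worked version of the analogy above: a 4-minute response time versus an
# attacker "10 million times faster" implies a microsecond response window.

human_response_s = 4 * 60                  # 240 seconds
speedup = 10_000_000                       # attacker speed factor from the analogy
required_response_s = human_response_s / speedup

print(required_response_s)                 # 2.4e-05 s, i.e. 24 microseconds
```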

This is the state of many enterprise Security Operations Centers (SOCs) today. Incident response is a critical part of the SOC's responsibilities, but security operators are not equipped with the right tools to respond to data breaches and application security threats in microseconds rather than, at best, minutes.

What are the problems?

The advanced Security Operations Center's primary tasks are continuous monitoring, incident and breach response, and upgrading the enterprise security posture based on a continuous feedback loop. For such complex tasks, a multitude of tools and monitoring platforms is available. Aided by Security Information and Event Management (SIEM), log analytics, transaction correlation, behavior analytics, and other tools, SOCs have been constantly trying to stay ahead of the threats. Despite the availability of these tools, three main problems with SOC responses remain unresolved:

  1. Incident response is very slow.
  2. Incident response is based on limited or no understanding of the affected application and threats.
  3. There are large numbers of false positives.


Mr. Rishi Shah, CEO of AptusCare, provided a great analogy based on his deep knowledge of the healthcare business. “Medical doctors and surgeons diagnose and perform surgeries with deep knowledge and visibility into organs and cells. To respond to emerging threats in enterprise applications, security professionals also need tools for deep visibility into application ecosystems and threats in real time!”

What are the options?

I completely agree with Mr. Shah, who pointed to the recent example of the ransomware attack on the Fetal Diagnostic Institute of the Pacific, which affected more than 40 thousand patient records. For security professionals to respond to emerging threats in real time, they require powerful platforms with real-time capabilities that identify threats, remediate them, and provide deep application visibility.

Unfortunately, current methods for threat interception are inadequate. They are too complicated, too analytical, too slow, and they too often fail. If intercepting threats is so critical, why don't existing solutions do it better?

Colonel USAF (Ret.) Michael Hodge emphatically suggests, "For real-time incident response, any viable solution must optimize on the following two variables:

  1. The certainty of threat interception 
  2. The time it takes to mitigate the threat

In addition, these capabilities must be available with zero impact to application architecture, performance, and DevSecOps."

To learn more about Avocado and how we can help you step up your security posture with real-time threat response, visit https://www.avocadosys.com. You can also email us at hello@avocadosystems.net to set up a personalized demo.

Step up your threat detection with a deterministic security platform.

Author: Keshav Kamble

Over the past few months, I have spoken with dozens of partners, customers, and security visionaries. Everyone believes that data center and cloud security ecosystems are evolving rapidly. While everyone has a different opinion on how to approach their security issues, they all agree on one central point: if we can catch a threat, we can deal with it.

What companies need now are smart approaches to threat detection.

Unfortunately, current methods for threat interception are inadequate. They are too complicated, too analytical, too slow, and they too often fail.

If intercepting threats is so critical, why don’t existing solutions do it better?

It’s clear that you can’t deal with something you don’t know exists. As a result, any viable solution must optimize on two variables:

1) The certainty of threat interception 

2) The time it takes to intercept the threat

Cloud security requires an innovative approach to threat detection focused on getting these two variables right.

The first step requires identifying the threat deterministically. Deterministic threat detection also means solving the grey-area problems around the growing number of false positives that security staffs waste resources dealing with.

The second step involves extracting the right detail from the event to understand the threat and its source. There is a delicate balance here: too little detail about a threat means dealing with it poorly; too much means dealing with it too slowly. That is why we group threats efficiently and collect just the right amount of information to trigger the right actions.
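These two steps can be sketched in a few lines: deterministic identification by exact match against sanctioned behavior, then grouping threat events so each group carries a bounded amount of context. All names, calls, and the context limit below are illustrative assumptions:

```python
# Sketch of the two steps above: deterministic classification (allowlist,
# no scoring) and grouping with a bounded sample of context per group.
# Service names and call names are hypothetical.

from collections import defaultdict

ALLOWED_CALLS = {("orders-svc", "db.query"), ("orders-svc", "cache.get")}

def classify(service: str, call: str) -> str:
    # Deterministic: anything not on the allowlist is a threat, full stop.
    return "ok" if (service, call) in ALLOWED_CALLS else "threat"

def group_threats(events):
    """Group threat events by (service, call), keeping at most 3 sample sources."""
    groups = defaultdict(lambda: {"count": 0, "sources": []})
    for service, call, src in events:
        if classify(service, call) == "threat":
            g = groups[(service, call)]
            g["count"] += 1
            if len(g["sources"]) < 3:      # the "right amount" of detail, not all of it
                g["sources"].append(src)
    return dict(groups)

events = [("orders-svc", "os.exec", "203.0.113.5"),
          ("orders-svc", "os.exec", "203.0.113.7"),
          ("orders-svc", "db.query", "10.0.0.2")]
print(group_threats(events))
```

The bounded `sources` list reflects the balance described above: enough context to act on a threat group, not so much that acting becomes slow.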

What comes after threat interception?

Next, keep the threat in suspension until the right action is identified. This has to be lightning fast: without fast suspension, applications are slowed and scalability is reduced.

Finally, isolate the threat using a granular approach. Current methods of preventing the lateral movement of threats inside the application ecosystem are ineffective. Granular segmentation, on the other hand, stops the lateral spread of threats and also protects the application and data systems.

So keep monitoring, interception, and mitigation in mind when you consider your application security.  Look for a reliable threat identification and interception system that gives you greater control over how you mitigate your threats.

For more information, read the Forrester report on the New Wave of Security Technology.



The Emergence of Deep Application Security, Segmentation and Compliance

Author: Keshav Kamble

Various business regulatory bodies have defined methods of sensitive data handling based on the nature of the business. Rapidly changing compute, network and storage environments such as private, public and hybrid cloud necessitate constant upgrades in the security and compliance clauses. Highly scalable and dynamic application architectures such as containers, micro-services and third party API driven architectures are making DevOps efficient but at the same time increasing the complexity of security, segmentation and business compliance. In such environments, preparing for business compliance audits, producing all the required monitoring data and passing the compliance audits has become excruciatingly painful and expensive.

Not a day goes by without a data breach frightening businesses and customers. Hackers may have the names and Social Security numbers of 143 million Americans after a massive breach of the credit reporting agency Equifax. The subsequent consequences for the lives of customers, for Equifax as a company, and for its executives and shareholders can hardly be imagined. "This breach has shaken the faith of the world in credit rating agencies," said Jarrod Hicks, an intellectual property lawyer at Zilka-Kotab of San Jose, CA.


While business compliance is mandatory, the aim is not just checking off a list of requirements to pass an audit. The real test is keeping the data safe and preserving customer trust; the ultimate goal is to build a strong and thriving business.

Some of the most vulnerable business sectors include the payment card industry, health care, finance, banking, fintech, and any other business that deals with sensitive customer information. The security operations departments of those businesses follow three common, critical, and quite challenging clauses:

  1. Application and data separation, also known as segmentation.
  2. Data encryption at rest and in flight.
  3. Detailed reporting on security threats, events, and modifications in the IT environment.

As per the recommendations, the intent of segmentation is to prevent out-of-scope systems from being able to communicate with systems in the sensitive data environment or impact the security of those systems. Segmentation is typically achieved by technologies and process controls that enforce separation between the sensitive critical systems and out-of-scope systems. When properly implemented, a segmented (out-of-scope) system component could not impact the security of the critical systems, even if an attacker obtained administrative access on that out-of-scope system.

The existence of separate network segments alone does not automatically create compliant segmentation. Segmentation is achieved via purpose-built controls that specifically create and enforce separation and prevent compromises originating from the out-of-scope network(s) from reaching critical systems. To help support ongoing security, such technologies must be implemented properly, with specific configuration settings and processes that ensure ongoing secure management of the technology. These controls should be part of annual verification and testing to confirm that they are operating effectively.
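A purpose-built segmentation control, reduced to its essence, is a deny-by-default rule set over zones. The sketch below illustrates the intent described above; the zone names and the single sanctioned flow are hypothetical:

```python
# Minimal sketch of a segmentation control: traffic into the sensitive zone
# is denied by default, and only explicitly sanctioned flows are allowed.
# Zone names and flows are hypothetical examples.

ZONES = {"web-frontend": "out-of-scope",
         "payment-gw":   "sensitive",
         "cardholder-db": "sensitive"}

ALLOWED_FLOWS = {("payment-gw", "cardholder-db")}   # the only sanctioned path

def connection_allowed(src: str, dst: str) -> bool:
    if ZONES.get(dst) != "sensitive":
        return True                      # traffic among out-of-scope systems
    return (src, dst) in ALLOWED_FLOWS   # deny-by-default into the sensitive zone

print(connection_allowed("web-frontend", "cardholder-db"))  # False: blocked
print(connection_allowed("payment-gw", "cardholder-db"))    # True: sanctioned
```

With a control of this shape, a compromised out-of-scope system cannot reach the critical systems even with administrative access on its own host, which is exactly the property the recommendations call for.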

On the topic of encryption of data inflight, the recommendations are very clear on the migration of applications and systems from SSL/early versions of TLS (1.0 and 1.1) to TLS 1.2 as of June 2017. All businesses with sensitive applications and data are required to upgrade no later than June 30, 2018. This migration should be backed up with a complete Risk Mitigation and Migration Plan.
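For a Python-based service, one concrete way to meet this TLS migration requirement is to build a client context that refuses SSL and TLS 1.0/1.1 outright. This uses the standard-library `ssl` module; `ssl.TLSVersion` requires Python 3.7 or later:

```python
# Enforce the TLS 1.2 floor described above in a Python client:
# the context will refuse to negotiate SSLv3, TLS 1.0, or TLS 1.1.

import ssl

context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # reject anything below TLS 1.2

print(context.minimum_version == ssl.TLSVersion.TLSv1_2)  # True
```

Equivalent floor settings exist in most stacks (e.g. connector attributes in Tomcat, `ssl_protocols` in nginx), and should be rolled out as part of the migration plan mentioned above.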

Now let’s talk about multiple challenges which might force you to compromise security and compliance of your critical applications and data.

  1. Large number of legacy applications mixed with new applications.
  2. Lack of reliable application segmentation methods for data center, cloud or multi-cloud based applications.
  3. Complexity of upgrades of encryption methods for data inflight.
  4. Heterogeneous, multi-layer threat monitoring, interception and mitigation mechanisms.
  5. Virtual Patching of major software systems and open source software.
  6. Lack of automation to achieve compliance.

Let's dive a little deeper on the technology side. We are all witnessing attacks of phenomenal proportions on enterprises, attacks that shatter economies. Without a drastic shift in thought processes, the security, segmentation, and compliance problems are not going to be resolved. The solution has to reach to the bottom of the vulnerabilities; hence the emergence of the bottom-up approach! In Avocado terms, the solution has to come from within the application itself.

The applications need to be empowered "seamlessly" to secure the lowest-level attack surfaces inside the application; the segmentation has to be pico-segmentation, which comes from within the application; and the compliance has to be deterministic in nature. With such deterministic application protection, segmentation, and compliance capabilities comes the power of deep application visualization, threat visualization, and threat interception and mitigation. Enforcing compliance for data in flight, segmentation, and virtual patching for your old and new applications happens at the click of a button, from within the application itself. Best of all, this method of application empowerment does not require applications to be rebuilt, recompiled, or re-engineered at all. The approach is not only deterministic in nature but also brings huge scalability and performance benefits to your critical enterprise applications. Those applications can be monolithic, virtualized, containerized (micro-services), serverless, or just API-driven; the method works seamlessly and effectively for all application architectures.

Many open-source application platforms, such as Tomcat, are major backbones of enterprise applications. The responsibility for securing them and enforcing compliance rests with enterprise application and security operations. Pico-segmentation goes deep inside the application, virtually divides it into a large number of threat surfaces, and enforces segmentation, security, and compliance at that deep level. Security operations personnel gain control of the deep security and compliance of their application ecosystems, with deep application visibility and threat visibility.

Today, we are suffering but solutions are fast emerging. Of course, the truth is out there!

How to protect applications from third party services and APIs?

Author: Keshav Kamble

Application architecture is rapidly changing from monolithic, virtual-machine-based to containerized, micro-services-based. Micro-services provide the agility and DevOps freedom that IT managers need. Scaled-out distributed applications consume local as well as web-based services through service commissioning and subscription models. Web-based services, and REST APIs for consuming them, have been largely accepted as an ideal model by the development and security operations communities.
