Why are MongoDB deployments under attack?

Author: Keshav Kamble

Introduction

Critical observations and thoughts.

Today, I want to discuss the most pressing problems afflicting MongoDB-based applications. Worldwide, attacks on MongoDB, including holding databases hostage, grew multi-fold in 2017 alone. By some analyses, within one week of January 2017, these incidents grew to 28,000.


WannaCrypt: What should you do?

Author: Keshav Kamble

I cannot emphasize enough that botnets and ransomware remain among the biggest threats to businesses. The damage to the economy is enormous. Just about two days ago, I responded to a blog by Dale Drew (CISO of Level 3 Communications and a well-regarded security expert) – 256: WHY HEALTHCARE SECURITY IS VULNERABLE AND BOTNETS & RANSOMWARE REMAIN OUR BIGGEST THREAT – and highlighted the severity due to the international nature of ransomware.

WannaCrypt ransomware is creating havoc on internet-connected Windows PCs and servers and bringing down businesses one after the other. Here are some of the most important facts you need to know about this threat.

What does it do?

WannaCrypt enters PCs, laptops, servers, and devices running unpatched Windows 7, Windows 8, or earlier versions of the Windows operating system through an SMB v1 vulnerability. It encrypts all the files on the device and reports the details of the hostage device to its command-and-control center.

A red and white warning window is displayed with the details of the threat:

  1. Warns you that your data files, documents, pictures, etc. have been encrypted. A few of them can be decrypted for free, but for the others you must pay a certain number of bitcoins for decryption.
  2. States the number of days you have to pay the ransom; not paying within the given time limit triggers deletion of your files.
  3. Shows the amount of time left for you to pay the ransom.
  4. Displays a link for paying the ransom at the bottom of the warning page.

Some more details of the vulnerability, as described by the Microsoft Security Protection Center, are given here.

Some technical details:
  1. This ransomware exploits Microsoft Server Message Block 1.0 (SMB v1) server vulnerabilities to enter a system, and uses the same protocol to spread to other systems.
  2. Operating systems targeted: unpatched versions of Windows, including Windows 7, Windows 8, and Windows Vista.
  3. Alternative names of the ransomware: WannaCrypt, WanaCrypt0r 2.0, Wanna Decryptor.
  4. According to Microsoft security analysts, when run it tries to communicate with the URL iuqerfsodp9ifjaposdfjhgosurijfaewrwergwea.com on port 80.
  5. It creates services such as %SystemRoot%\tasksche.exe and mssecsvc2.0.
  6. It appends the .WNCRY extension to all encrypted files.
  7. It looks for files of almost all important types by their extensions; it does not use wildcards in filenames for encryption.
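As an illustration of point 6 above, a responder could sweep a volume for the extension WannaCrypt appends to encrypted files. This is a minimal sketch, not part of any official tooling; the function name and usage are assumptions.

```python
import os

def find_encrypted_files(root):
    """Walk a directory tree and collect paths carrying the .WNCRY
    extension that WannaCrypt appends to files it has encrypted."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.lower().endswith(".wncry"):
                hits.append(os.path.join(dirpath, name))
    return hits
```

Running something like this across shared volumes gives a quick inventory of affected files before deciding on isolation or recovery steps.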
What should you know if affected?
  1. The ransom amount seems to differ from case to case.
  2. There is no guarantee or assurance that your files will be decrypted after the ransom is paid. Ransomware and Malware Delivery Networks (MDNs) are complex mechanisms; destruction and recovery depend on multiple network and connectivity factors.
  3. As far as possible, keep your system connected to the network if you plan to pay the ransom. Again, this is not an encouragement; it is a complex decision based on your business-critical operations.
  4. If you make any special observation, post it to this blog or any blog of your choice and share the information.
What should you do to avoid infection?
  1. Keep all unaffected systems offline, or move them to a different segment of the network with appropriate segmentation or firewalls.
  2. Get the latest software updates for your Windows systems.
  3. Install up-to-date security patches.
  4. Use segmentation technology for your servers that provides protection for legacy protocols as well.
  5. Always follow strong and frequent data backup procedures for archival.

There are resources available to help you avoid infection by this ransomware. Please ask your IT experts about effective segmentation and firewalling for security. For more information, please email info@avocadosystems.net

Deterministic Application and Data Security

Author: Keshav Kamble

Application security is a relatively new technology compared to traditional network security. Its gravity and importance have increased multi-fold with the rise of public, private, and hybrid cloud environments, where the underlying infrastructure such as compute, network, and storage may or may not belong to the application owners. Large numbers of legacy applications are being ported to cloud environments. Then there are cloud-native applications, which are completely developed, tested, and productized in cloud environments. In other words, your application requires special consideration for security. How big is the problem? The answer has two variants. Financially, it will be about $7 billion by the year 2021. Not alarmed? Technologically, it will kill the Digital Transformation and Industry 4.0 movement, worth $380 billion by the year 2021 (digital transformation and Industry 4.0 market sizes per Gartner). Now that we are on the same page, let's discuss Deterministic Application and Data Security.


Pico-Segmentation of Application Instances

Author: Keshav Kamble

Modern multi-tier application design, with web, application, and database tiers, has vastly expanded the ratio of east-west traffic (traffic within a data center, i.e., server-to-server) to north-south traffic (client-to-server traffic between the data center and the rest of the network). By some estimates, data centers may carry five times as much east-west traffic as north-south traffic, since hundreds of web-tier, application-tier, and database-tier servers constantly communicate to deliver services.


Why Pico-Segmentation Matters

Author: Keshav Kamble


First, Some Thoughts…

More than half of my professional friends are in the security compliance business. We always manage to become entwined in heated debates centered on whether PCI DSS is 'too loose' in defining how sensitive workloads must be separated from the rest of the workloads in computing environments.

Not only PCI DSS data, but Personally Identifiable Information (PII) and other sensitive proprietary data require careful handling and separation from other workloads in data centers or computing environments. The most common method followed by Security Operations is adding masses of rules to segment firewall appliances and relying on them to work.

That might tick the compliance box, but guess what? That method has failed spectacularly to protect PCI/PII and proprietary sensitive data from nefarious access by hackers or insiders. More recently, there have been attempts to separate sensitive applications/workloads and their correlated processes through host-based rules and routing tables based on various criteria, leaving major limitations in effectiveness and scalability.

When it comes to securing applications for the Industry 4.0 concept and massive digital transformation, left-brain, rules-based thinking is inadequate. What's needed is out-of-the-box thinking, directly from the right brain. Omnipresent security technology that can deterministically secure applications in private data centers, enable their migration to the cloud, and secure them in public cloud environments is a must. The three legs of the application security tripod are determinism, scalability, and freedom from performance bottlenecks. This is the dramatic entry of The Last Samurai ☺; the last warrior in the chain to fight and kill the cyber-threat that has penetrated multiple layers of security. But let's stop the analogy here.

The new deterministic and penetration-proof method of segmentation is called 'pico-segmentation' (aka descriptor segmentation). The central idea is the application's resources, which are the main targets of cyber-attacks: data handles such as file descriptors, logical elements such as threads, and communication channels such as sockets and pipes. This new methodology seamlessly secures these resources through segmentation while in tandem providing the deepest level of visibility into threats and powerful capabilities to kill them entirely.

Definition of Pico-Segment

Applications utilize sets of Operating System resources such as logical threads, file descriptors, socket and channel descriptors, etc. A Pico-Segment (See Figure 1) or Descriptor Segment can be defined as an impenetrable segment, formed of such resources based on their standard, proprietary, and relational context attributes for secure interaction.
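To make the definition concrete, the sketch below models a pico-segment as a whitelist of OS resource identifiers. This is purely illustrative: the `PicoSegment` class and its methods are hypothetical names of my own, and a real implementation would enforce the policy at the OS or runtime layer rather than in application code.

```python
class PicoSegment:
    """Toy model of a pico-segment: the set of OS resources
    (file descriptors, threads, sockets, pipes) that one
    application component is allowed to touch."""

    def __init__(self, name):
        self.name = name
        self.allowed = set()  # resource identifiers granted to this segment

    def grant(self, resource_id):
        """Admit a resource, e.g. ('fd', 3) or ('socket', 5432)."""
        self.allowed.add(resource_id)

    def check(self, resource_id):
        """Deterministic decision: a resource is either inside the
        segment or its use is a policy violation -- no heuristics."""
        return resource_id in self.allowed
```

Because membership is an exact set test, interception of out-of-segment access is deterministic rather than probabilistic, which is the property emphasized below.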

Granted that cyber security involves higher mathematics, computer science, and engineering, including out-of-the-box thinking, I am not going to over-describe the pico-segment; we can certainly have deeper discussions on it later. The most important question, however, is: what are the advantages of this method?

Advantages
  1. Security at the lowest attack surface, which means security with promise. An infinitesimally small (again, a mathematical term) attack surface translates to the highest certainty of attack interception, so this method provides a deterministic form of security and segmentation. In fact, one application can be divided into multiple pico-segments, each protected independently, further stopping the spread of a threat inside the application as well.

  2. Pico-segmentation security is delivered through the Avocado Security Platform, which is completely independent of the underlying PaaS, connectivity, and perimeter security. The application carries its own security along with it.

  3. Highly scalable, as it is neither centralized nor delivered through appliances.

  4. Removes performance bottlenecks from application security, since every application and its resources are independently secured.

  5. By now it should be clear that this is a sharp way to simplify the security architecture of your multi-tiered applications.

  6. “I Want To Believe” – the X-Files fan in you is waking up: this method enables our next generation of cloud-centric workloads and high-value assets, such as databases.

As always, it's with great passion that I share these and other meaningful insights in our continual quest to keep cybercrime at bay.

Feel free to email me at kkamble@avocadosystems.net

Keshav K.

What is Application Discovery? Why is it Important?

Author: Keshav Kamble

The Industry 4.0 phenomenon is happening as we speak. Cloud-based e-commerce and the setup of IT and application systems for businesses will be a single click away. We are talking about times when the cloud will be an integral part of every business, small or big.

It's not only about new and emerging applications and technologies; emerging and legacy applications will need to co-exist and inter-operate in cloud environments. And just as with scale and performance, security is high on the agenda.

How is our journey to life-on-cloud looking so far?
Data center computing, storage, and network environments have been growing in magnitude and complexity. Ecosystems of complex workloads made up of applications from diverse software vendors add to the mix of already overwhelming security challenges. Unexpected damage from Advanced Persistent Threats (APTs), Shadow IT, and employees' use of unsanctioned applications has skyrocketed year over year.

Pressures of operational excellence, security compliance, and high availability of services under reduced and constrained budgets are the status quo, taxing the creativity of IT managers and executives alike. The simple saying applies: 'what can't be counted can't be controlled'. Such is the state of the large number of applications in data center and computing environments today.

All these issues pose unwieldy problems and risks while migrating your enterprise workloads to cloud environments, not to mention the requirement to re-architect and/or re-engineer existing applications.

Snapshot: Present-day migration of workloads to cloud environments

The following highlights the high-level steps of a typical migration process:

  1. Investigative activity, screening, and application inventory
  2. Target environment, PaaS (Platform as a Service) and security architecture selection
  3. Multi-stage migration
  4. Testing and performance checks for each migration stage
  5. Back to step #3: this process continues until all intended workloads are securely moved to the cloud and full interoperability is achieved

However, within the process of application inventory – as one of many steps before migrating workloads to a cloud environment – organizations are required to compile a list of all sanctioned applications, dependent applications, storage requirements, security classification and connectivity requirements, among others.

The action appears simple, but it is quite the contrary. Cumbersome tasks remain in compiling the entire list of applications and their dependencies. Moreover, the large number of legacy applications, for which support and documentation are virtually non-existent, adds painful, time-consuming agony to the process. How can this be achieved with fluid efficiency? It's definitely not simple.

So what’s the answer?
Shooting straight and simply put: the answer is Application Auto-Discovery. This intelligent mechanism enables all existing and new applications to identify themselves for easy discovery. The application auto-discovery process helps identify and list all applications, their processes, and their communication and application dependencies.

This entails full descriptive identifiers for the applications, including application names, associated file names, types (binary, JVM, etc.), underlying platforms used (Java, Python, etc.), communication processes, mathematical and un-spoofable signatures of each executable, script, and binary file, and the physical path of each file. It also captures workload location attributes such as VM details and container details, including IP addresses, container IDs, and process IDs of application workloads.
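One way to produce the 'un-spoofable signatures' mentioned above is a cryptographic digest of each file's contents. The sketch below is an assumption about how such an inventory record might look, using SHA-256 from the Python standard library; the function names and record fields are illustrative, not taken from any actual discovery product.

```python
import hashlib
import os

def fingerprint_file(path, chunk_size=65536):
    """Return the SHA-256 hex digest of a file, read in chunks so
    large binaries do not need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def inventory(paths):
    """Build a minimal inventory record per file: name, absolute
    path, size, and content signature."""
    return [
        {
            "name": os.path.basename(p),
            "path": os.path.abspath(p),
            "size": os.path.getsize(p),
            "sha256": fingerprint_file(p),
        }
        for p in paths
    ]
```

Because the signature is derived from file contents rather than the filename, a renamed or tampered binary produces a different digest, which is what makes this kind of inventory hard to spoof.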

This type of unprecedented precision in Application Auto-Discovery empowers IT Managers, System Integrators, and Architects to flawlessly plan their activities for securing applications, as well as migrations to hybrid and public clouds.

This method provides much more than a baseline listing of sanctioned or unsanctioned applications, and creates laser focused efficiency while delivering a simpler and effective process.

Application Auto-Discovery helps simplify a variety of processes and achieve Operational Excellence across multiple areas, including:

  • Application security architecture, design, and management
  • Selection of PaaS architectures, where a specific PaaS can be chosen or tuned based on your application inventory and details
  • Consolidation and secure migration of applications to varied cloud environments
  • Capacity planning for High Value Assets (HVAs) such as PCI and PII databases

In summary, I can't emphasize enough how integrated Application Auto-Discovery helps ease the burden of understanding application ecosystems and their complex dependencies. IT managers and IT security managers are empowered to estimate their cloud migration efforts while in tandem understanding how to provision the right kind of protection for their entire set of legacy and emerging applications. Now that's a 'win-win'.