What is Application Discovery? Why is it Important?

Author: Keshav Kamble

The Industry 4.0 phenomenon is happening as we speak. Cloud-based e-commerce, and setting up IT and application systems for businesses, will soon be a single click away. We are talking about times when the cloud will be an integral part of every business, small or big.

It’s not only about new and emerging applications and technologies: emerging and legacy applications will need to coexist and interoperate in cloud environments. And just as with scale and performance, security is high on the agenda.

How is our journey to life-on-cloud looking so far?
Data center computing, storage, and network environments have been growing in magnitude and complexity. Ecosystems of complex workloads, made up of applications from diverse software vendors, add to the mix of already overwhelming security challenges. Unexpected damage from Advanced Persistent Threats (APTs), Shadow IT, and employees’ use of unsanctioned applications has skyrocketed year over year.

The pressures of operational excellence, security compliance, and high availability of services under reduced and constrained budgets are the status quo, taxing the creativity of IT managers and executives alike. The simple saying applies: ‘what can’t be counted can’t be controlled’. Such is the state of the large number of applications in data center and computing environments today.

All these issues pose unwieldy problems and risks when migrating your enterprise workloads to cloud environments, not to mention the requirement to re-architect and/or re-engineer existing applications.

Snapshot: Present-day migration of workloads to cloud environments

The following highlights the high-level steps of a typical migration process:

  1. Investigative activity, screening, and application inventory
  2. Target environment, PaaS (Platform as a Service) and security architecture selection
  3. Multi-stage migration
  4. Testing and performance checks for each migration stage
  5. Back to step #3: the process repeats until all intended workloads are securely moved to the cloud and full interoperability is achieved
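Steps 3 through 5 form a loop, and can be sketched as one. The function and callback names below (`migrate`, `verify`) are illustrative stand-ins, not any real migration tool’s API:

```python
def migrate_in_stages(workloads, migrate, verify, stage_size=10, max_retries=3):
    """Sketch of steps 3-5: migrate workloads in stages, test each stage,
    and loop until everything intended has moved (or retries run out)."""
    remaining = [(w, 0) for w in workloads]
    migrated, abandoned = [], []
    while remaining:                                   # step 5: back to step 3
        stage, remaining = remaining[:stage_size], remaining[stage_size:]
        for w, tries in stage:
            migrate(w)                                 # step 3: one migration stage
            if verify(w):                              # step 4: testing / perf checks
                migrated.append(w)
            elif tries + 1 < max_retries:
                remaining.append((w, tries + 1))       # retry in a later stage
            else:
                abandoned.append(w)                    # candidate for re-architecting
    return migrated, abandoned
```

Workloads that repeatedly fail verification fall out of the loop, which is where the re-architecting and re-engineering mentioned earlier comes in.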

As part of the application inventory step, one of many before migrating workloads to a cloud environment, organizations must compile a list of all sanctioned applications, dependent applications, storage requirements, security classifications, and connectivity requirements, among others.
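A minimal inventory record might capture exactly the fields listed above. The schema here is illustrative, not any standard format:

```python
from dataclasses import dataclass, field

@dataclass
class AppInventoryRecord:
    """One entry in the pre-migration application inventory."""
    name: str
    sanctioned: bool                                    # approved for use by the org?
    dependencies: list = field(default_factory=list)    # dependent applications
    storage_gb: float = 0.0                             # storage requirements
    security_class: str = "internal"                    # e.g. public / internal / PCI / PII
    connectivity: list = field(default_factory=list)    # required ports and peers

# A hypothetical entry for illustration
inventory = [
    AppInventoryRecord("billing-api", sanctioned=True,
                       dependencies=["billing-db"],
                       storage_gb=200.0, security_class="PCI",
                       connectivity=["tcp/5432 -> billing-db"]),
]
```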

The task appears simple; in reality it is quite the contrary. Assembling the complete list of applications and their dependencies remains cumbersome. Moreover, the large number of legacy applications, for which support and documentation are virtually non-existent, adds painful, time-consuming agony to the process. How can this be achieved efficiently? It’s definitely not simple.

So what’s the answer?
Shooting straight and simply put: the answer is Application Auto-Discovery. This intelligent mechanism enables all existing and new applications to identify themselves for easy discovery. The auto-discovery process identifies and lists all applications, their processes, and their communication and application dependencies.

This entails full descriptive identifiers for each application: application name, associated file names, type (binary, JVM, etc.), underlying platform (Java, Python, etc.), communication processes, mathematical and un-spoofable signatures of each executable, script, and binary file, and the physical path of each file. It also captures workload location attributes such as VM details and container details, including IP addresses, container IDs, and process IDs of application workloads.

This type of unprecedented precision in Application Auto-Discovery empowers IT Managers, System Integrators, and Architects to flawlessly plan their activities for securing applications, as well as migrations to hybrid and public clouds.

This method provides much more than a baseline listing of sanctioned and unsanctioned applications, and creates laser-focused efficiency while delivering a simpler, more effective process.

Application Auto-Discovery helps simplify a variety of processes and achieve Operational Excellence across multiple areas, including:

  • Application Security Architecture, Design and Management
  • Selection of PaaS architectures: based on your application inventory and details, a specific PaaS can be chosen or tuned
  • Consolidation and secure migration of applications to varied cloud environments
  • Capacity planning for High Value Assets (HVAs) such as PCI and PII databases

In summary, I can’t emphasize enough how integrated Application Auto-Discovery eases the burden of understanding application ecosystems and their complex dependencies. It also empowers IT managers and IT security managers to estimate their cloud migration efforts while provisioning the right kind of protection for their entire set of legacy and emerging applications. Now that’s a ‘win-win’.

How Do We Simplify East-West Security? The Imperative Path is Upon Us.

Author: Keshav Kamble


Ahhh, the endless saga of streamlining the hazards around application security today and in the emerging Industry 4.0 phenomenon. With that, let’s talk about the East-West component of the conundrum. But first, it’s important to clarify the difference between ‘East-West’ and ‘North-South’ traffic in a typical data center environment. By definition, North-South traffic is the communication between server applications deployed inside data centers and internet-based client applications. Theoretically, it can also include inter-data-center traffic.

East-West communication can be loosely defined as traffic between application instances within the data center. Most often, East-West traffic is triggered by North-South traffic. For example, in a search engine’s data center, one search query from an internet-based end user can result in a large amount of internal communication between multiple application servers, all attempting to resolve the query in the best possible manner.

Various studies of data center traffic statistics suggest that the ratio of North-South to East-West traffic is roughly 20:80. Clearly, data centers are designed to scale and perform swiftly by deploying faster computing, storage, and connectivity solutions, which in turn provide quicker execution. Enterprise data centers are virtualized and multi-tenant, based on various purpose-driven factors.
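Under these definitions, classifying a flow reduces to checking whether both endpoints sit inside the data center’s address space. The internal networks below are illustrative, not any real deployment:

```python
import ipaddress

# Illustrative internal address space for a data center
INTERNAL_NETS = [ipaddress.ip_network("10.0.0.0/8"),
                 ipaddress.ip_network("192.168.0.0/16")]

def is_internal(ip):
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in INTERNAL_NETS)

def classify_flow(src_ip, dst_ip):
    """East-West if both endpoints are inside the data center,
    North-South if one side is an external (internet) client."""
    if is_internal(src_ip) and is_internal(dst_ip):
        return "east-west"
    return "north-south"

classify_flow("10.1.2.3", "10.4.5.6")   # -> "east-west"
classify_flow("8.8.8.8", "10.1.2.3")    # -> "north-south"
```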

80% of total traffic in a data center is internally generated and consumed by assorted applications within the data center. A mere 20% of traffic comes from the outside (e.g. via the internet), and then makes its way back out. Why do I bring this point to light? To emphasize the threat surface and vulnerabilities associated with a data center’s application ecosystem. Virtualized, multi-tenant data centers, be they service provider or enterprise data centers, require complex internal hierarchies of services to secure and scale them. One can imagine the substantial complexity of deploying Service Function Chains (SFCs) for all East-West traffic.
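To see why chaining bottlenecks East-West traffic, model an SFC as services applied in sequence: every internal flow pays the latency of every service in the chain, and 80% of traffic takes that path. The service names and latency numbers below are made up for illustration, not measurements:

```python
# Each service in the chain adds its own processing latency (ms).
# Names and numbers are illustrative only.
CHAIN = [
    ("segment_firewall", 0.3),
    ("application_firewall", 0.5),
    ("ips_ids", 0.8),
    ("load_balancer", 0.2),
]

def traverse_chain(packet, chain=CHAIN):
    """Pass a packet through every service function; return total added latency."""
    latency = 0.0
    for name, cost in chain:
        latency += cost            # every East-West flow pays the full chain
    return latency

# With ~80% of traffic East-West, the whole chain sits on the hot path:
# roughly 1.8 ms of added latency per flow, per traversal, in this toy model.
```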

In 2014, we were designing connectivity solutions within the data center using 40GbE and 100GbE form factors on servers; such was the need for performance and bandwidth. To the contrary, it turned out that SFC performance was bottlenecking the already virtualized computing and storage environments. Even a chain of security services alone was overwhelmingly frustrating, let alone the NAT and load balancers.

At one point, adding a security SFC felt as though we were deliberately adding choke points, knowing it was inadequate to protect the workloads. Inefficient application security and segmentation deficiencies have compromised applications in many ways. It was a matter of ‘when’, not ‘if’, the deployment would get compromised. But then again, everything was done in the name of compliance.

A typical hierarchy of security services included an Edge Firewall, Segment Firewall, Application Firewall, DPI services (IPS/IDS with a limited set of functionality), and Monitoring. I’ll not mention the vendors and appliance names here, but many of the services were aggregated, and in some cases the same appliance would perform different services depending on where it was deployed.

Virtualized multi-tenant data centers unfortunately suffer from the performance and expansion limitations of the virtual security appliances that make up these security service chains. And it does not stop there: architectural complexity further hinders scalability and manageability, and increases costs.

How About an Out-of-the-Box Approach?

One which provides the same or better functionality in a real-time, deterministic manner, while removing performance bottlenecks and complexity and allowing near-infinite scalability? Those with a good comprehension of scaled-out distributed systems will instantly understand this concept. The question is, where do you start breaking down, and how far down do you go? The answer is out there! (Just like the saying goes, “The truth is out there!”, this coming from the X-Files fan in me.)

Another way of looking at it is a little more complicated, but mathematical in nature. It starts with defining the term ‘Threat Surface’: the number of vulnerabilities (of each kind) associated with a software module under consideration. For simplicity, the unit I assigned is τ (from the Greek τρωτό, trotó, ‘vulnerable’). Consider a vast application ecosystem with a large number of diverse applications interacting with one another. The Threat Surface of such a system is massive. Even a Next Generation Firewall (NGFW) would crumble under its own weight if the application environment is not streamlined. Therefore, security provisioning by appliances or chains of services (SFCs) won’t be scalable, deterministic, or real-time in nature.
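The text does not give a formula, but a simple additive model makes the point: if each module contributes its vulnerability count in τ, and every pairwise interaction exposes the modules to each other, total surface grows quickly with ecosystem size. The model below is my own assumption for illustration, not the author’s mathematics:

```python
def threat_surface(modules, interactions):
    """Toy model: total surface (in units of τ) = sum of per-module
    vulnerabilities, plus a penalty for every pair of modules that talk."""
    base = sum(modules.values())
    # Assume each interaction exposes the less-vulnerable side's
    # vulnerabilities to the other module.
    coupling = sum(min(modules[a], modules[b]) for a, b in interactions)
    return base + coupling

# Hypothetical ecosystem: vulnerability counts and communication links
apps = {"web": 4, "api": 6, "db": 3, "cache": 2}
links = [("web", "api"), ("api", "db"), ("api", "cache")]
# threat_surface(apps, links) -> 15 + (4 + 3 + 2) = 24 τ
```

Adding one more interconnected application adds not just its own τ but a coupling term per link, which is why a sprawling, unstreamlined environment overwhelms perimeter appliances.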

What we need is an approach, and a thought process, where applications get an impenetrable, deterministic layer of protection built into the application itself. The application can be legacy or new, simple or complex, web tier or database tier, data center or cloud based; the intelligent segmentation and deterministic security capabilities are selected by an Application Security Administrator working with DevOps. Once chosen, the application security springs into action whenever the application comes up, stays with the application, moves with the application, and disperses with it. With that, the application not only defends itself from legacy and current threats but also addresses emerging ones.
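Here is a minimal sketch of that idea, assuming a policy chosen up front travels with the application and is enforced at its entry points. The decorator and the policy shape are hypothetical, not any vendor’s API:

```python
import functools

class AppSecurityPolicy:
    """Segmentation rules selected by a security admin; the policy object is
    created with the app and moves wherever the app is deployed."""
    def __init__(self, allowed_peers):
        self.allowed_peers = set(allowed_peers)

    def permits(self, peer):
        return peer in self.allowed_peers

def self_protected(policy):
    """Wrap a request handler so protection activates with the app itself."""
    def decorator(handler):
        @functools.wraps(handler)
        def guarded(peer, *args, **kwargs):
            if not policy.permits(peer):      # deterministic, in-app check
                raise PermissionError(f"{peer} not permitted by policy")
            return handler(peer, *args, **kwargs)
        return guarded
    return decorator

policy = AppSecurityPolicy(allowed_peers={"billing-api"})

@self_protected(policy)
def handle_request(peer, payload):
    return f"processed {payload} for {peer}"
```

Because the guard is part of the application, it comes up, moves, and disperses with the workload, with no external appliance on the data path.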

It clearly is not as simple as I’ve spelled out here. It requires complex mathematical analysis of applications, their attributes, communications, and security aspects, including methods to parameterize them. The aim is not simply to inherit the technology used by Service Function Chains, namely the security services, but to develop more spoof-proof methods to protect resources in a real-time and deterministic manner.

And There You Have It:

Your applications and systems of applications have self-protection capabilities!

This can be a lifetime exercise, or even a PhD thesis for some. But what comes out of it is simplified security architecture with:

  • Application self-protection capabilities, minus the bottlenecks of security service appliances, which are removed entirely

  • Applications are enabled to carry their own protection anywhere – namely private, public or hybrid cloud environments

  • Infinite scale: as provisioning starts at the lowest level

  • Speed of rise-to-action as fast as the application itself: no more worries about micro-workloads or containerized workloads

  • Highly programmable while empowering auto-formation capabilities along with the application itself

  • Costs and/or Total Cost of Ownership (TCO) reduced by more than 80%

A Compelling Result:

Now, your data center or cloud deployment at the access layer becomes much simpler, smarter, faster, and more agile. It should take form as illustrated in the following figure.

Other Advantages:

There are many, including:

  • Simplified DevOps via OpenShift, Cloud Foundry, and others

  • Deployments and upgrade management using Puppet and Chef

  • Next Generation security delivered & managed in a simplified manner

  • Provides more time for security engineers to focus on understanding emerging threats, vs. struggling with security layers

  • Built-in intelligence sharing across hierarchies, including Application to Application, B2B, and B2C for business excellence

  • Built-in forensic extraction capabilities: providing added capabilities for security analytics to share threat intelligence across organizations

Wrapping it All Up:

I believe the ultimate success of the Industry 4.0 phenomenon, which will take us to a completely cloud-based business and e-commerce ecosystem with IoT-based mechanization and industrial controls, depends entirely upon how experts view IT infrastructure and application security. Instead of trying to patch existing security provisioning techniques, we must adopt emerging security methods that are more effective, efficient, scalable, and seamless. Cybercrime isn’t going away, and we must remain aware and diligent; but we can heighten our defenses and revolutionize our options to defy the hazards.