Thursday, August 4, 2022

20 years of Vulnerability Management - Why we've failed and continue to do so.




Cyber Security: Keeping Pace with Change.

Getting breached can really ruin your day. Actually, it normally happens on a Friday evening just as you are about to chill for the weekend. The cause of most breaches is not rocket science; it's more to do with the poor approach we have accepted because we underestimate the threat actor. An attacker does not scan your website/network once a quarter with a commercial or open source scanner, or perform an annual penetration test against your systems to see if there is any low hanging fruit, so how do we expect to defend against such an adversary using that approach?

Systems change more frequently now than ever due to the ease of cloud deployments and the speed of software releases driven by iterative development techniques. This increased rate of change means exposures can manifest quickly without the organisation even being aware of them in the first place. Many organisations don't know what they have exposed on the public Internet.

We need to keep pace with change, be it a cloud environment, deployed software, a new feature or a network architecture change.

The below applies across the full stack, from network and cloud environments to APIs, web applications and mobile apps - it's all software!

Let's talk about the root of all risk - change. Risk is the probability of loss or injury. If the world was static and nothing changed we would not need to continuously assess risk. Change gives rise to risk.....

Change occurs when

A system does not change: over time critical vulnerabilities are discovered and patches are released. "Yesterday I was secure, today I have a critical risk." I did not change anything; the world around me did.

A system changes: new features deployed, new services exposed, a larger attack surface, more exposed, more to attack, more headaches (obviously).

We need to keep pace with change (keeping pace with potential risks).

Traditional tool-based/consultant-based approaches have failed to keep pace due to a lack of depth, coverage or frequency of change detection. Scanners alone suffer from coverage and accuracy issues, and some "poor sod" ends up spending their days in validation purgatory. False positives are the "white noise" of vulnerability management.

  • Validation of severity and prioritization needs to be tasked somewhere in the management cycle. If not by the solution you are using, then somewhere else. 
  • Risk-based vulnerability intel is key for prioritization. Focus on what is actively exploited in the wild, not all of the vulnerabilities. All vulnerabilities are not created equal (a minimal prioritization sketch follows below).
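To make the "actively exploited" point concrete, here is a minimal Python sketch of that prioritization step. The known-exploited set and the findings are illustrative placeholders; in practice you might populate the set from a known-exploited-vulnerabilities feed such as CISA's KEV catalogue.

    # Illustrative only: prioritize findings that appear in a known-exploited list.
    known_exploited = {"CVE-2021-44228", "CVE-2017-0144"}  # e.g. loaded from a KEV-style feed

    findings = [
        {"cve": "CVE-2021-44228", "cvss": 10.0, "asset": "payments-api"},
        {"cve": "CVE-2020-11023", "cvss": 6.1, "asset": "marketing-site"},
        {"cve": "CVE-2017-0144", "cvss": 8.1, "asset": "legacy-fileserver"},
    ]

    def priority(finding):
        # Exploited-in-the-wild first, raw CVSS as the tie-breaker.
        return (finding["cve"] in known_exploited, finding["cvss"])

    for f in sorted(findings, key=priority, reverse=True):
        label = "EXPLOITED" if f["cve"] in known_exploited else "backlog"
        print(f"{label:9} {f['cve']}  CVSS {f['cvss']}  {f['asset']}")

The point is simply that an exploited medium-severity issue outranks an unexploited critical in the remediation queue.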

So what's wrong? Why are we up the creek without a paddle? Systems are still being breached by advanced attackers (AKA finding exposed remote login services with default credentials, unpatched systems or insecure code!! 💀💀😀😎).

Let’s look at current ways to dynamically assess systems for cyber security.

Penetration Test

Manual assessment of a system, coupling the use of automated tools, scripts and expertise.


Strengths: Logical issues. Accurate / (should be) False positive free. Complex exploits, Support.

Weaknesses: Not scalable, Expensive, Not on-demand, Does not fit with DevOps etc. Point-in-time scan. No Metrics??

 

Vulnerability Management

Automation/Software testing software – scanners

 

Strengths: Scale/Volume, On-demand, DevOps

Weaknesses: Accuracy, Risk Rating, Coverage, Depth (Logical vulnerabilities). Requires Expertise to validate output. Metrics are poor, require multiple tools.

 

Hybrid /PTaaS (Penetration Testing as a Service)

Automation augmented with Expertise coupled with Attack Surface Management


Strengths: Complex issues, Logical exploits, False positive Free, Scale/Volume, On-demand, DevOps, Accuracy, Coverage, Metrics, Support. Scale via automation. Depth via expertise.

Weaknesses: Potentially more costly up front than automation (but return on investment is high due to validated vulnerability data being received, fewer false positives and better coverage).


 Why is traditional Vulnerability Management Failing – The basics.

  • Reliance on Software to test software (scanners) alone is folly! – Scanners alone don’t work.
  • Automation accuracy is not as strong as human accuracy – Our attackers are humans.
  • Scale vs Depth – Scanners do scale, Humans “do” depth. – Our enemies do Depth every time and are focused.
  • Change is constant – Consultant based security does not keep pace with change. – Our enemies love change.

What vulnerability management should look like…

  • On-demand: Assurance of coverage & depth of testing on demand. – DevOps, Security Team, Deployment process
  • Continuous & Accurate: Continuous assessments detecting and validating new vulnerabilities all the time.
  • Good for: Metrics, Risk lifecycle tracking, TTR Metrics, Root Cause etc etc
  • Integration: Continuous flow of validated vulnerability intelligence into your SoC/bug tracker/GRC systems – situational awareness. Cloud integrations to keep pace with systems spinning up and flux (see the webhook sketch after this list).
  • Full stack: “Hackers don’t give a S*#t”. Risk can be in web or hosting infrastructure, internal or external systems. Multiple tools for the same purpose? Multiple data sets? No complete picture of risk. We need risk convergence.
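As a rough illustration of the integration point above, here is a minimal sketch that forwards a validated finding to a generic ticketing webhook. The endpoint, token and payload fields are assumptions, not a real API; swap in your own bug tracker or SIEM integration.

    # Illustrative only: push a validated finding to a hypothetical ticketing webhook.
    import json
    import urllib.request

    TICKET_WEBHOOK = "https://tickets.example.com/api/issues"   # placeholder endpoint

    def raise_ticket(finding):
        payload = {
            "title": f"[{finding['severity']}] {finding['name']} on {finding['asset']}",
            "description": finding["evidence"],
            "labels": ["vulnerability", "validated"],
        }
        request = urllib.request.Request(
            TICKET_WEBHOOK,
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json",
                     "Authorization": "Bearer <token>"},        # placeholder credential
            method="POST",
        )
        with urllib.request.urlopen(request) as response:       # wrap in try/except in real use
            return response.status

    # raise_ticket({"severity": "Critical", "name": "SQL injection",
    #               "asset": "api.example.com", "evidence": "validated proof of concept"})

The key design point is that only validated findings cross this boundary, so the ticket queue never fills with false positives.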

Shift Left?

We talk a lot about Shift Left: moving security practices closer to the developer, which helps us catch vulnerabilities earlier in the lifecycle. This paradigm is designed to result in quicker bug detection, more efficiency and less potential impact to deployed live systems.


Shift Left: Enable and assist developers to build and deploy secure code and systems. Prevention. Catch early, don't deploy vulnerable systems.

Shift Right: Detection, vigilance, detecting currently unknown vulnerabilities. Detect "the next CVE", "Log4Shell" or framework vulnerability, and also mop up anything we missed in pre-prod.

Even the risk profile of a static system can change. Today's secure environment is at risk tomorrow via a vulnerability we're not aware of yet. - Fight the future.

Thursday, May 12, 2022

Five Ways You Can Make Your Vulnerability Management (VM) Program Smart Now

 




So you are convinced that you need to adopt a "Smart" Vulnerability Management (VM) approach but you are not quite sure how to get started or even what to shoot for. Here are Five Very Important Steps you need to take to bring on the "Smart".

 

Number 1 - Understand Business Goals and Then Automate Ranked Alerts

Yes, take a step back and think holistically about how your business runs and what business processes are most critical to achieving your enterprise goals. Talk to your business line leaders and operational staff. Hit the whiteboard and talk through "what if" scenarios. Rank all of your business concerns as they pertain to any potential exposures to your attack surface. Then take on a Smart VM Platform that enables you to rank and automate each alert type across each IT layer so you receive automated business-ranked alerts. This is all done in the set-up stage. This is necessary. This is not sufficient – read on.
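As a rough sketch of what that set-up stage can look like in practice, the snippet below ranks alerts using a business-criticality mapping agreed at the whiteboard. The asset names, weights and ordering are purely illustrative assumptions.

    # Illustrative only: business-ranked alerting configured at set-up time.
    BUSINESS_RANK = {           # outcome of the whiteboard "what if" exercise
        "payments-gateway": 1,  # revenue-critical
        "customer-portal": 2,
        "brochure-site": 3,
    }
    SEVERITY_WEIGHT = {"critical": 1, "high": 2, "medium": 3, "low": 4}

    def alert_rank(asset, severity):
        # Lower rank = page someone now; unknown assets default to mid-ranking.
        return BUSINESS_RANK.get(asset, 3) * SEVERITY_WEIGHT[severity]

    alerts = [("brochure-site", "critical"), ("payments-gateway", "high")]
    for asset, severity in sorted(alerts, key=lambda a: alert_rank(*a)):
        print(alert_rank(asset, severity), asset, severity)

Note how a high on the payments gateway outranks a critical on the brochure site - that is the whole point of business-ranked alerting.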

 

Number 2 - Make Sure It's 100% Accurate

Want to ensure you get zero confidence from your support team when you present alerts? Send them the automated alerts with no validation and let them spend days chasing false positives. You need to get Smart about the burden of noise generated by automated alerts. You need to adopt a platform that integrates security specialists who rule out false positives BEFORE they are presented. In 2022, running your VM program virtually false-positive free is doable. VM with virtually 100% accuracy IS smart.

           

Number 3 - Don’t Waste Anyone’s Time – Give them the Whole Snapshot and Show Them Clearly What Matters Most

It's easy to follow the typical IT stack layered specialist approach: one automated scanning tool for web applications, one tool for API scanning, one tool for network and devices, one ad hoc request for a pen test. For the past 10 years, most global enterprises have taken on the layered point-solution approach and then spent mountains of time cobbling together fractured intelligence reports across the attack surface. In 2022, that is no longer acceptable, nor is it Smart VM. There are full stack VM platforms that present your security posture in one snapshot. They are pre-built to provide one single touchstone of truth that shows your security team AND your operational support team what issues need resolving now. Can we agree to buck the point solution tradition and take on Smart Full Stack VM now?

 

Number 4 - Understand Your Operational Support's Daily Workflow (DO NOT INTERRUPT IT) and Become a Part of It

The vernacular of "Smart" typically places a high emphasis on the intelligence it produces, but when we run a VM program we have a higher standard. We have to make the enterprise itself resilient. We have to continuously ensure that the important vulnerabilities are remediated in a timely manner. And the way we do that is to take Smart approaches when integrating with support staff's daily workflow. This can be as simple as asking the support team how they like to take in their ticket information for seamless resolution. To achieve that seamless workflow integration in 2022, there are Smart VM platforms that integrate with whatever system your support team uses. And like the alert engine – it's all automated. It's all Smart.

 

Number 5 - Don’t Be An Alert Engine – Be a Remediation Engine

Congrats if you have completed the above Four Steps. Now here's a challenge. On the one side you have continuous, ranked, business-intelligent alerts and on the other side you have IT operational support staff who are not security experts but who are required to remediate the issue. So how do you get security specialist remediation guidance into the hands of the IT support staff? The good news once again is that there are Smart VM platforms that can integrate security specialist validation, not only to rule out false positives but to provide timely, contextualized guidance on how to resolve the pressing issue at hand. With a Smart approach, that guidance can be integrated into the ticketing system for easy access or can be just a phone call away for verbal, step-by-step remediation guidance. And you get bonus Smart points when you adopt proactive security specialist guidance, where bad programming patterns are noted and best practice guidance is deployed before a vulnerability is actually picked up.

 

Be Smart, Be Bold

If you take these Five Significant Steps to Smart VM, allow yourself to walk with a bit of swagger. If you have now delivered to your company a proactive, continuous and business-intelligent remediation machine, and you have a resilient enterprise to show for it, your Smart VM Program entitles you to bragging rights. If you don't have your Smart VM swagger yet, let's talk.

Wednesday, August 25, 2021

Attack Surface Management - What's old is new again!!

Attack Surface Management (ASM), a new sexy approach to cyber security visibility. 

"How about we try to see what systems are exposed to the public Internet  so we can make sure they are being secured."

ASM is not vulnerability management (detection of cyber security weaknesses) but rather takes a step back to answer the question, "What do I need to secure?" It can also help identify the SBoM (Software Bill of Materials) across deployed systems.

Attack Surface Management (ASM) provides you with the ability to see all services exposed to the public Internet across your global estate. As new systems are deployed, decommissioned or changed, ASM can inform you of the event. This is done in real time and on a continuous basis in most cases.

I wrote a blog in 2018 when we first introduced Edgescan's ASM solution, which has since evolved to include both API discovery and multi-region monitoring.

API discovery locates exposed API endpoints using multilayered probing techniques. In many cases organizations simply don't know what APIs they have exposed; this can be due to poor asset management or the fact that some web application frameworks deploy an API by default.

Multi-region monitoring performs ASM from different source IPs globally to help you understand if there are any geo-related traffic controls you may not see by scanning from a single geo-IP.

The value of ASM is to provide real-time information as systems change and to help identify and alert you to items which may require attention, such as exposed services, insecure protocols, rogue deployments, outdated software and so on.

Features we employ in edgescan ASM are as follows:

  • Fast network host discovery and asynchronous port scanning across the whole global perimeter, allowing the identification of networking devices, platforms, operating systems, databases and applications (a minimal asynchronous probing sketch follows this list).
  • Mapping and indexable results which help determine which service ports are present and listening for transactions. This can result in detecting exposed ports, vulnerable services or misconfigured firewalls.
  • Customizable scan profiling – to help us be specific about the services and systems you care about, say a random high port system or specific service in a specific region.
  • Service detection – discovery of exposed services based on response fingerprints and identifiers, resulting in discovery of older or deprecated exposed systems. Coupled with continuous vulnerability management this is very effective for rapid detection of weaknesses due to vulnerable and outdated software.
  • On demand live retests on exposed ports. As you close off exposures you may want on-demand probing to ensure you have fixed the exposure.
  • Historical host information for point-in-time reads of endpoints. Detailing a history of discoveries can assist with incident reporting and root cause analysis.
  • Detection of misconfigured ACL's or Firewall rules leading to service exposure resulting in weakness.
  • Customizable targeted alerting, which notifies you automatically of any potential exposures (e-mail, webhook, SMS) in real time.
  • IoT detection; as we know there is a lot of vulnerable IoT deployed out there, much of it connected to corporate networks and much of it with little or no security controls enabled.
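As a rough illustration of the first feature above, here is a minimal asynchronous port-probing sketch in Python. Hosts and ports are placeholders; a real ASM service layers service fingerprinting, scheduling, change detection and alerting on top of this kind of probe.

    # Illustrative only: asynchronous TCP probing of candidate hosts and ports.
    import asyncio

    async def probe(host, port, timeout=2.0):
        try:
            _, writer = await asyncio.wait_for(asyncio.open_connection(host, port), timeout)
            writer.close()
            await writer.wait_closed()
            return host, port, "open"
        except (OSError, asyncio.TimeoutError):
            return host, port, "closed/filtered"

    async def sweep(hosts, ports):
        results = await asyncio.gather(*(probe(h, p) for h in hosts for p in ports))
        for host, port, state in results:
            print(f"{host}:{port} {state}")

    # asyncio.run(sweep(["192.0.2.10", "192.0.2.11"], [22, 80, 443, 3389]))

Running probes concurrently is what makes sweeping an entire global perimeter on a continuous basis feasible.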


We have observed very effective cyber security programs when ASM is coupled with continuous full stack vulnerability management, in particular when newly discovered assets from ASM are automatically assessed for vulnerabilities. In effect, ASM and vulnerability management working together...resulting in rapid vulnerability detection and response.

For real precision and fidelity, ASM combined with full stack vulnerability coverage is required. ASM is not an application security or a network security solution, but full stack visibility.

Edgescan ASM is in many cases included as a feature and is available with Edgescan's Vulnerability Intelligence Service. More at www.edgescan.com








Tuesday, June 15, 2021

Edgescan, why we do what we do.....

 


The cyber security industry is full of solutions to make you more secure. Some are unproven and other approaches work if deployed properly. Our industry is very fragmented. For example, a recent "Cyber Defense" award I noticed has 195 categories!

I suppose we need to ask ourselves as companies from time to time why we do what we do? 

So, the following post is, I guess, the reason we developed Edgescan and why we believe it's a decent solution to help organizations improve and be more resilient in relation to cyber security and system protection....


Vulnerability scanning alone did not work.

The idea of software testing software for vulnerabilities is a good one, but both sides of the equation may have bugs. Bugs on one side (the target) may result in vulnerabilities, whilst bugs on the other side (the scanner) may result in false negatives and false positives.

Accuracy: To that end we built edgescan as a combination of automation to discover vulnerabilities at scale, but when certain types of potential vulnerability are discovered it informs a human to validate and triage the issue. The result is that we have no false positives and the discovered issues are risk rated appropriately.
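A minimal sketch of that "automation finds, human confirms" flow is below. The vulnerability classes routed to the analyst queue and the confidence threshold are illustrative assumptions, not edgescan's actual triage rules.

    # Illustrative only: route certain finding classes to a human validation queue.
    NEEDS_HUMAN_REVIEW = {"sql_injection", "auth_bypass", "ssrf"}   # assumed high-impact classes
    validation_queue = []

    def publish(finding):
        print("validated finding:", finding["class"], "on", finding["asset"])

    def handle_finding(finding):
        if finding["class"] in NEEDS_HUMAN_REVIEW or finding["confidence"] < 0.9:
            validation_queue.append(finding)     # an analyst validates and risk-rates it
        else:
            publish(finding)                     # low-risk, high-confidence: publish directly

    handle_finding({"class": "sql_injection", "confidence": 0.95, "asset": "api.example.com"})
    handle_finding({"class": "missing_header", "confidence": 0.99, "asset": "www.example.com"})
    print(len(validation_queue), "finding(s) awaiting human validation")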

Coverage: The human element of edgescan makes sure the assessments are getting the coverage they need to be successful. Even in functional unit or system testing when developing software, 100% coverage is extremely hard to achieve; it requires following every logical flow of code in an application, which could be hundreds or thousands of permutations. To make this challenge even more complex, different technologies require different types of automation, be they APIs, JavaScript-heavy frameworks or generic n-tier applications.


Splitting vulnerability management into Silos of network and application vulnerability intelligence is not intelligent.

When defending the enterprise we need full stack visibility. Why? "Hackers don't give a S*it". We need to understand what risks and blind spots are present and make sure we have nothing exposed which can be used against us.

Combining network, host and web application vulnerabilities in a single view provides this. Even better, it's validated and provides a single source of truth. Full stack visibility provides the ability to prioritize mitigation across the entire tech stack rather than using different sources of vulnerability data from different providers.


Accuracy and "noise suppression" would help people move more efficiently and quickly

Most folks would agree that receiving a feed of accurate and triaged vulnerability intel helps make decisions very quickly. It helps with priority and answers the question "Which vulns should we fix today?" Removing false positives and applying appropriate or custom risk ratings is what we call "noise suppression"; it cuts through the noise to help organizations be more effective. Also, when vulnerability data is used to kick off an automated process, it had better be accurate!!!

Traditional penetration testing was not scalable and "clunky"

Traditional penetration testing requires contracts, is not immediate and results in a PDF as the output. It is slow, clunky and expensive. Delivering penetration testing via the same portal as vulnerability management allows you to go deep and get a complete picture. Having penetration testing via the portal also provides the ability to retest mitigated vulnerabilities on demand, rather than waiting for a consultant, and it can be invoked via automation.

Metrics and trending data are required for measuring improvement.

The idea of having an extensible platform with the ability to extract and view validated/accurate vulnerability data on demand and integrate with any other ticketing or GRC system was important. This helps with vulnerability lifecycle management and development pipeline integration.

Bug bounties are good but are a compliance and GDPR risk and not very controllable.

Bug bounty platforms use NDAs to trade bounty hunter silence for the possibility of a payout. If this NDA is broken there is no real recourse. Suing a bounty hunter in a third world country won't pay your GDPR fine!!

Bug bounty platforms may violate California and federal labor law, and the EU's General Data Protection Regulation (GDPR). Your vulnerability data (and possibly client PII) is on the random laptops of bounty hunters globally, with no governance and possibly no encryption. Do your clients understand their data could be on a random hunter's laptop in, say, Pakistan?

Good article here: https://www.csoonline.com/article/3535888/bug-bounty-platforms-buy-researcher-silence-violate-labor-laws-critics-say.html

Attack Surface Management (ASM) & API discovery is important

We built ASM and API discovery in 2017 believing visibility is super important. Being informed in real time of exposures and rogue deployments as they happen is key to continuous resilience. We can't secure what we can't see.

More here: 

https://info.edgescan.com/hubfs/Datasheets/Attack%20Surface%20Management%20Datasheet.pdf 

https://www.edgescan.com/services/api-security-testing/

Support for technical staff is important

We decided to deliver support to our clients. We don't expect our clients to be cyber security experts. Everyone on the Edgescan support team is a seasoned penetration tester, thanks to our monthly internal rotation of teams between Edgescan support, consultancy, SAST, software security and other work our clients require that isn't suitable for a SaaS.


Validator v False Positive


Be Safe,

- ek

 

Tuesday, May 25, 2021

HSE Hack - What should we do now......personal opinion

What I would do to make the HSE a more resilient organization from a cyber standpoint......



This is somewhat an open letter to my government on how to secure *our* data. I do not cover compliance or certification but more practical "Must-have" items.

Awareness & Resilience (and budget)

Folks who write the cheques need to understand the value and importance of cyber security. It's not a "tax" or an "insurance"; it's a process through which we try to ensure we are somewhat resilient to breach. Breach is nine times out of ten more expensive than multiple years of cyber spend.

Embrace cyber security! "Hackers don't give a shit" and if you are weak you will be hit. Cyber-Resilience and awareness may not prevent breach but it may limit the extent of the breach and enable us to act in a timely manner before the genie is out of the bottle. 

Investment in cyber security is paramount given the potential losses from fraud and breach recovery. Compliance is not security; focus needs to be on practical technical controls and a technical framework.

Asset Management and Attack surface Management - Identify and prioritize - Risk 

Maintain a list of what assets you have (Data and systems), What's the bill of materials for your network or system? 

We can't secure what we can't measure. Tracking of system resilience is of key importance. Deploy continuous monitoring and management of your external Internet-facing estate. This will help detect weaknesses and exposures as they arise. Real-time attack surface management is a simple but very effective solution to understand what can be hacked at any point in time.

Establish an asset register and an IT BOM (bill of materials). Identify critical assets (systems and data). Layer stronger controls around such systems. Perform threat modeling exercises around critical systems to identify cyber chokepoints and audit points to detect malice.

Threat Awareness - Intelligence

Deploy a solution to monitor lateral movement, brute forcing and typical indicator of compromise (IoC) traffic and artefacts. Threat awareness is important both to help detect post-breach activities and to surface internal threats and weaknesses. Early detection is important in terms of limiting breach.

Processing of logs. Maintaining of logs. Tracking what's important.

Ensure we are auditing transactions, traffic and events on core systems. Such audit logs need to be consolidated and monitored for anomalies. Log scraping looking for errors and non-standard events would be a great start, as would logging non-idempotent transactions, authentication between users and systems, and authentication between the systems themselves.
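As a rough illustration, the sketch below scrapes an auth log for brute-force-style anomalies. The log format, regex and threshold are assumptions; adapt them to whatever consolidated audit logs you maintain.

    # Illustrative only: count failed logins per source IP and flag likely brute force.
    import re
    from collections import Counter

    FAILED = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")
    THRESHOLD = 20   # assumed alerting threshold

    def scan(log_path):
        counts = Counter()
        with open(log_path) as log:
            for line in log:
                match = FAILED.search(line)
                if match:
                    counts[match.group(1)] += 1
        for source, hits in counts.most_common():
            if hits >= THRESHOLD:
                print(f"ALERT possible brute force: {hits} failures from {source}")

    # scan("/var/log/auth.log")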

Vulnerability Management

Detect weaknesses as they occur. Patching, web application and API weaknesses, exposed remote access services, administration consoles and weak cryptography all need to be tracked continuously. Key to this solution being effective is accuracy. Solutions with guaranteed accuracy are preferred, resulting in a reduction of "white noise" so we can focus on real issues. The majority of ransomware leverages CVEs to exploit target systems. Full stack vulnerability management makes systems more resilient to such attacks.

Focus on a risk-based approach to patching and addressing weakness. "All vulnerabilities are not created equal." Focus on what matters: critical systems and data first, moving down the list.

Penetration testing

Hackers manually probe systems and they are expert operators. Using software alone to assess security is never going to work. To level the playing field we need to fight fire with fire. Today's cybercrime consists of working professionals and industrialized capability. We need to be the same. Penetration testing consists of manual "deep dive" assessments using human intelligence, simulating a determined attacker. It is generally more effective in uncovering weakness, but it is expensive and not as scalable.

Metrics & Measure improvement

Record improvement. What's difficult? What's taking a long time? Which cyber security activities are taking a long time and are challenging? Which systems cause the most cyber security effort? Which systems are historically more problematic and require the most attention?

Which layer (network or application) has the highest risk density, and where do we focus our efforts? Examine vulnerability types, be they patching, developer or architecture related. Figure out the root cause in order to focus training and awareness on preventing the bugs and errors which manifest as weaknesses.

Patch

Every year thousands of CVEs (Common Vulnerabilities and Exposures) are discovered. Systems thought secure today suffer from a critical risk tomorrow. Constant tracking is required, along with constant vulnerability management to detect issues and risk-based patching to address them. Establish a patching programme. Use automation if possible.

Email and Internet Browsing Security

Lock down email systems and deploy an email security service to help minimize exposure. Lock down users' browsing access to a whitelist of legitimate sites.

Data Encryption and secure Storage

Data which is critical to the business, sensitive in nature or contains PII needs to be encrypted, with a suitable key management solution in place. Passwords should be stored in an unrecoverable way (salted and hashed); a minimal sketch follows.
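Below is a minimal sketch of salted, un-recoverable password storage using Python's standard library scrypt. The cost parameters are illustrative rather than a policy recommendation, and dedicated libraries such as bcrypt or argon2 are equally valid choices.

    # Illustrative only: salted, un-recoverable password storage with stdlib scrypt.
    import hashlib
    import hmac
    import os

    def hash_password(password):
        salt = os.urandom(16)                      # unique salt per password
        digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
        return salt, digest                        # store both; never store the password itself

    def verify(password, salt, digest):
        candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
        return hmac.compare_digest(candidate, digest)

    salt, digest = hash_password("correct horse battery staple")
    print(verify("correct horse battery staple", salt, digest))   # True
    print(verify("guess", salt, digest))                          # False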

Backup Frequently

Backing up of data and systems is undervalued and is paramount to restoring after a breach. The frequency of backup has a bearing on loss: more frequent backups mean a smaller window of exposure. Try to deploy a real-time backup solution if possible. Backups should be stored in a secure part of the network which requires authentication etc. to limit the chance of malware affecting backup repositories.

Authentication and Limitation & Zero Trust

Enable multi-factor authentication (MFA) for critical systems, be it certificate based combined with a password or other means. Ensure system-to-system authentication is also enabled and adopt a "zero trust" model. IP-limit traffic between systems from an architectural standpoint in order to make the network more hierarchical and less "flat". This can limit the spread of infection.


The extent of this problem is only growing based on the statistics we produce every year alongside other organizations. 

More statistics can be found here including the Verizon DBIR and Edgescan Vulnerability Stats Report 2021.....

https://www.edgescan.com/company/blog/






Tuesday, May 18, 2021

The HSE Data Breach and the State of Irish Cyber Security


Many years ago, shortly after I founded the Irish chapter of OWASP ( http://www.owasp.org ) (in 2007??), we were delivering free application and software development classes to anyone who wanted them. It was a local, low-key affair but every class we delivered was "sold out". We had 60-80 folks, mostly developers, willing to spend 4-5 hours learning the fundamentals of secure application development and testing.


I suppose we felt cyber security was an important issue because that's what we did. At the time many folks in business felt cyber security was an overhead or a "tax" and did not give it much time.


A few years later (late 2010), when the foundation of the NCSC (National Cyber Security Centre) was announced, a few of us (local OWASP Ireland leaders) wrote a number of emails to the Irish government offering free cyber security training. As we were working for a non-profit (501(c)(3)) charity (OWASP), we thought we could do this locally and "move the dial". The result was.....nothing. We got no response.


Since then I've always wanted Ireland to have a "kite mark" for cyber security and secure application development. This is something I've proposed to many "talking heads" in government and industry over the years, but everyone likes to talk and few actually do.


This could be free or tax deductible for employers and be of massive benefit.



In 2018, myself, Tony Clarke (CISO, Marken) and David Cahill (AIB) tried to reignite this idea...again, no response. We also wrote an open letter to the government discussing the partnership model....as follows...

Tuesday, March 30, 2021

BBQ Cyber Security Thoughts......


During lockdown, I've taken to standing over the BBQ staring at the temperature gauge, lifting the lid occasionally and slow cooking various meats. Given the lockdown situation this provided a focal point for the day; something to attend to for the afternoon. 

When standing there in a mindful stasis things go through your head, these are some of mine...


  • "Software testing Software, who thought that would work?"
  • "Using systems with potential vulnerabilities to discover potential vulnerabilities in systems"
  • "Shift Left would make more sense if development was linear"
  • "The reliance on automation to defend against a human adversary, sounds fair.....💀"
  • "We cant improve what we cant measure; We cant secure what we cant see."
  • "We accept false positives in scanners (Software getting it wrong) but we don't accept vulnerabilities (Software getting it wrong)." - Software testing software.
  • "The DevSecOps elephant in the room is "Validation"
  • "Change gives rise to Risk. Change occurs when a system does not change & When a system changes (duh!!)….Over time critical vulnerabilities are discovered. Patches are released. Yesterday I was secure, Today I’ve a Critical Risk. Need to patch/Redeploy. Also....when a system changes: New features deployed, new services exposed, larger attack surface, more exposed, more to attack, more headaches this also gives risk to risk."

  • "Scale vs Depth – Scanners do scale, Humans “do” depth. – Our enemies "do" depth every time and are focused."

  • "Automation accuracy is not a strong as human accuracy – Our attackers are humans."
  • "Shift Left, Shift Right,  Not just pushing left, need to push both directions. Eg A System is live, nothing changes but might be vulnerable tomorrow." 
  • Shift Left: Prevention. Catch Early. Shift Right: Detection, Vigilance
  • Shift Left: Enable & Assist developers build and deploy secure code & systems. Shift Right: Detect “the next CVE” and also mop-up anything that we missed in pre-prod.
  • We’re protecting our systems against breach by humans, not scanners right!!







Wednesday, March 10, 2021

Edgescan Weasel - Our new Web Security Scanning Tech

 

Web Application Scanning...Evolution

For the past 24 months Edgescan has been developing a new web scanning engine, namely "Weasel". It's a core component of the web security aspect of the edgescan SaaS. We built it for many reasons:

  • Faster Assessment speed.
  • Increased coverage.
  • Better Accuracy.
  • More user control and configuration.
  • Improved API support and navigation.
  • More metrics.
  • Javascript/Single-Page-Application (SPA) improvement.
  • Improved content discovery.
  • Dynamic Learning

A cool thing about Weasel is that it has a dedicated team consisting not only of developers but also analysts and researchers. This was exciting, as some of our penetration testers trained and pushed the engine while our developers implemented ongoing changes. Developing a web scanning engine is certainly a treadmill and a never-ending process. Change is good, and to change often is to live well.

Dynamic Learning - One aspect that is exciting for us is the idea of continuously integrated test cases: ensuring that as new vulnerabilities are discovered they are included in our scanning without the need for client interaction or lengthy delays between version releases, while also ensuring known vulnerability test cases have up-to-date proofs of concept as research is published. - Keeping pace with change.

Scalability - In some cases clients have hundreds or thousands of web-layer targets. Weasel provides the ability to deliver a policy-based service per application, ensuring bandwidth throttling and scheduled scan windows while also delivering finesse and precision, with high quality, advanced proofs of concept reflected in cleaner intel delivered to the client.

Advanced automated content discovery - SPA indexing; development, configuration and backup file endpoint discovery. Time after time in internal and external testing we have discovered sensitive content leading to critical risk vulnerabilities, which is continuously added to our checks, resulting in automated detection.

Better Accuracy - Our engine uses both dynamic and static vectors to find vulnerabilities. We've worked hard on defining powerful testing vectors in order to test for vulnerabilities more efficiently, but also to deliver coverage in a shorter timeframe. Of course, as ever, all findings are validated via the Edgescan core technology, with expert validation in addition if required.

API discovery and assessment: Weasel automatically searches for API manifest/Swagger files in order to detect unknown APIs. API detection is a little more involved than just Swagger file detection, as is discussed here, but once a manifest is discovered edgescan parses the file to understand how to use and navigate the API, and hence test it.

With the introduction of our new Weasel scanning engine, coupled with Edgescan's full stack coverage, we're pretty excited that we are leading the market in relation to continuous vulnerability intelligence.

There is lots more to discuss at a later date.....

Edgescan Review:

https://www.itsecurityguru.org/2021/04/21/product-review-edgescan-makes-fullstack-vulnerability-management-easy/





Wednesday, September 9, 2020

Application Security Validation Pitfalls, False Positives and Misconceptions

I recently did a webinar with one of our senior security warriors, James Mullen discussing where automated validation works and where it doesn't. 

We also discussed false positives in both technical and logical vulnerabilities. 

This is worth tuning into if you want to understand the constraints of automation, where it falls down and why we think reliance on automation alone for vulnerability management is a poor idea; we currently still need "the human element".


Check it out if you want to learn more..

Tuesday, September 1, 2020

 


What’s the worst that can happen…..An Ode to Risk

Risk is a widely used word in many walks of life, but do we understand what it means?

"Risk involves uncertainty about the effects/implications of an activity with respect to something that humans value (such as health, well-being, wealth, property or the environment), often focusing on negative, undesirable consequences."

Cyber security often talks about risk.... 

A high-risk vulnerability or the risk of an event occurring. So, risk is related to the statistical occurrence of an event and the negative outcome. We often talk about likelihood and impact: the chance of something happening and the effect of it happening.

As CISOs or cyber security professionals we try to first address items with the highest risk, or combination of likelihood and impact; we call this prioritization.

The reason we need to prioritize is that we can't fix all the issues, and not every vulnerability is created equal. We all have limited capacity, budget and resources; we need to do the best we can with what we have (a tiny worked example follows).
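Here is a tiny worked example of that prioritization: score each risk as likelihood x impact and spend limited effort top-down. The risks and scores are illustrative.

    # Illustrative only: score = likelihood x impact, then work the list top-down.
    risks = [
        {"name": "Unpatched VPN appliance on the perimeter", "likelihood": 5, "impact": 5},
        {"name": "Verbose error messages on an intranet app", "likelihood": 3, "impact": 2},
        {"name": "Weak TLS cipher on a legacy API", "likelihood": 2, "impact": 3},
    ]

    for risk in sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True):
        print(risk["likelihood"] * risk["impact"], risk["name"])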

We try to discover risks via reviews of designs, procedures, technical system reviews and testing. Some of these activities are up-front and others are recurring, in order to keep pace with change in the environments we control and the environments we don't [control].

Keeping pace with risk is hard; we simply don't have the man-power or budget to focus deeply on all risks to the business. Again, we need to focus on risks which are impactful or have a high chance of occurring.

Automation is good for scale and frequency (keeping pace); we can use automation to detect vulnerabilities but it's weak at determining actual risk (and alone is prone to false positives). The determination of risk is contextual, based on the likelihood, the impact to the systems in question and ultimately the business impact.

Automation is not good at context. Risk is all about context. Without context we can’t determine priority. Without priority we can’t focus on what matters to the business.

In order to move the cybersecurity dial, improve resilience, detect threats and weakness I believe a combination of automation and human intelligence is required. 

At edgescan our mantra is “let’s automate like crazy, but never at the cost of accuracy”.

Accuracy is the combination of a few things…1. No false positives, 2. Appropriate risk rating & 3. Depth of coverage.

Combining these aspects results in reliable vulnerability intelligence.

Vulnerability intelligence is actionable, prioritized and helps focus on what matters. – a core aspect of the edgescan approach.

 

 

 

 

Thursday, May 28, 2020

Edgescan inclusion in the Verizon DBiR

For the third year running, Edgescan contributed to the Verizon DBiR. The DBiR is recognized as the de facto cyber report, casting a wide net across all types of cyber security incidents and breaches, including vulnerability management in both infrastructure and applications.


Edgescan vulnerability data is curated, validated and sanitized, and reflects tens of thousands of assessments we deliver globally across the full stack to our clients.


As stated by Gabriel Basset of Verizon "I think there’s a positive story around how vulnerability scanning, patching, and filtering are preventing exploiting vulns from being the easiest way to cause a breach but that asset management is needed to identify and patch unpatched systems..."


A few things that stand out to me in the report are as follows:

Nearly half of breaches involved hacking and 70% of breaches involved external threat actors. To me this makes sense, as in our experience most large enterprises have at least one critical vulnerability living in their estate, and the majority of risk (as per our research) is in the web layer/layer 7 - websites, applications and APIs.


Of a 977-breach sample space, the majority of threat actors were associated with organized crime. These folks are professional, determined hackers. It's how they make their living. They don't care where the vulnerability resides in the stack. An automated approach to vulnerability management alone won't ensure your defense.



Using software/tools alone to defend against experienced humans won't result in robust security.

This is the case in particular when the actors we are trying to defend against are very skilled and determined, professional blackhat folks, if you will.

Human error was cited as a significant contributor to system insecurity and breach in the 2020 DBiR report.

Misconfiguration takes the prize for main contributor: "They are now equally as common as Social breaches and more common than Malware, and are truly ubiquitous across all industries," according to the report authors.

What we see in Edgescan is pretty much aligned with this metric. Misconfigurations are a common vulnerability and not going away anytime soon. Insecure deployments, misconfigured frameworks, directory listing and data exposure via errors are all cousins, and all have been steadily increasing over the past number of years.

The concept of continuous assessment, profiling and validation is key to detecting such issues. Generally they are not difficult to detect or fix, but if we don't know about them we're leaving the door open for someone else to use.



Wednesday, April 8, 2020

API Detection and Assessment: What they don't tell you in class...


APIs (Application Programming Interfaces) are backend services which expose an interface that can be used to connect to and transact with, or read/write information to and from, a backend system. They are super useful and a great architectural decision, delivering flexibility and extensibility of a service.

APIs deliver functionality once the client service knows how to "talk" to the API. APIs generally sit behind an HTTP port and can't be "seen" in the way a website can, but they may deliver an equal level of value and functionality to the requesting client.
Many websites use an API, but the user does not invoke the API directly; rather, the website/app is a proxy for the API. APIs are not built to be human readable, like a website, but rather machine readable.

There are two challenges relating to API security assessment:

1. API Discovery: Do we have an inventory of all APIs deployed on the public Internet?
You may have APIs hosted on systems behind HTTP ports that are undiscovered. They may be well known, but they may also be old or development deployments which have been forgotten about. We can't secure what we don't know about.

Adequate assessment involves coverage of entire corporate ranges (CIDR ranges), large lists of IPs and domain names (FQDNs), and using the multi-layer probing methodology detailed below:

API discovery is a combination of both host-layer and web-layer investigation. Some APIs are easier to discover than others.

Discovering API artifacts
Discovery of APIs may require multiple layers of probing if we don't know how to invoke a given API. API identification across many levels is required to accurately provide a confidence interval as to whether an API is present or not.

Detection probes (in edgescan) include the following (a minimal probing sketch follows this list):

  • Known API format requests
  • HTTP status type checks
  • TLS certificate checks
  • API format requests (SOAP/JSON etc.)
  • Standard and non-standard API indicators
  • Manifest file detection
  • Hostname checks
  • Cert common name checks
  • Common API routes detection
  • API description files (Swagger/WADL)
  • SOAP protocol detection
  • JSON/XML response analysis
  • API endpoint metadata detection
  • API routes in HTTP attributes
  • Cookie-based API detection
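As a rough illustration of a couple of those probe layers, the sketch below checks common API description-file paths and whether responses look like JSON/XML rather than HTML. The paths and heuristics are assumptions, not edgescan's actual probe set.

    # Illustrative only: probe common description-file paths and sniff JSON/XML responses.
    import json
    import urllib.request

    CANDIDATE_PATHS = ["/swagger.json", "/openapi.json", "/api-docs", "/application.wadl"]

    def probe_api(base_url):
        hits = []
        for path in CANDIDATE_PATHS:
            try:
                with urllib.request.urlopen(base_url + path, timeout=5) as response:
                    body = response.read(2048).decode(errors="replace")
                    content_type = response.headers.get("Content-Type", "")
            except OSError:
                continue                      # 4xx/5xx and connection errors end up here
            looks_like_api = "json" in content_type or "xml" in content_type
            try:
                json.loads(body)              # a parseable JSON body is a strong indicator
                looks_like_api = True
            except ValueError:
                pass
            if looks_like_api:
                hits.append((path, content_type))
        return hits                           # each hit raises confidence that an API is present

    # print(probe_api("https://target.example.com"))

Combining several such weak signals is what lets a discovery engine report a confidence level rather than a simple yes/no.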

2. API Assessment: Keeping pace with change and development.

Assessment of APIs can be difficult, as the assessment methodology requires knowledge of how to communicate with and invoke the API.

Running a simple web scanner against an API simply does not work. A scanner would just hit an initial URL and not know how to invoke or traverse the various API calls.

Good API assessment should have the ability to read/ingest descriptor files in order to understand how to communicate with and invoke the API. Once this is done, a scanner can assess the API method calls.

As the development team alters and changes the API, the assessment technology can read the newly updated descriptor file and assess the API, including the new changes – keeping pace with change.

Assessment of vulnerabilities specific to APIs is also important. Items discussed in the OWASP API Top 10 are an important aspect of true API-specific testing.

DevOps: In a DevOps environment, the descriptor file can be used to determine changes/deltas since the last deployment of the API and only assess the changes, saving valuable time in a fast DevOps environment - iterative testing when frequent change occurs. A minimal delta sketch follows.
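Here is a minimal sketch of that delta idea: compare two OpenAPI/Swagger descriptor files and list only the added operations so they can be assessed first. The file names are placeholders.

    # Illustrative only: diff two OpenAPI/Swagger descriptor files and assess only what changed.
    import json

    HTTP_METHODS = {"get", "put", "post", "delete", "patch", "head", "options"}

    def operations(spec_path):
        with open(spec_path) as spec_file:
            spec = json.load(spec_file)
        return {(method.upper(), path)
                for path, item in spec.get("paths", {}).items()
                for method in item
                if method in HTTP_METHODS}

    previous = operations("api-v1.json")      # placeholder file names
    current = operations("api-v2.json")

    for method, path in sorted(current - previous):
        print("new or changed operation to assess:", method, path)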



For more on edgescan's API services see: