Staff are increasingly mobile, and remote work has been a growing trend for many years; as a result, employee systems may be on the corporate network rarely or never. When a laptop is off the corporate network, its audit logs and other activity cannot always be collected in near real time, because there is no connection to the internal SIEM that typically sits on that network. Corporate SIEM systems contain sensitive information, so they are rarely exposed on open networks and must be protected from tampering and unauthorized viewing. This leaves endpoints exposed, whether to a staff member doing something they should not, or to compromise by an external party while on an open network such as the Wi-Fi at a cafe, hotel, or airport. If the system is compromised, no log or alert information can reach the corporate SIEM, and the security team won’t know that one of their employees was just hacked. In most setups, the employee must connect over VPN, or visit an office and join the LAN, before logs can be sent to the corporate SIEM; by then the system may already be compromised and can spread malware onto the corporate network, turning one infected laptop into a larger-scale incident. Many attacks begin with a seemingly innocuous event: an unpatched system, a user clicking a malicious link that installs malware, or a remote attacker exploiting a weak system setting or a new zero-day vulnerability. Some attacks try to hide on a user’s system until it reconnects to the corporate network, but there will still be subtle traces of activity, such as software installs and process execution, that can be detected and reported on.

So how can Snare help with this problem?

Snare can collect logs from an employee’s system in near real time over the internet, securely over TLS using mutual authentication keys, to our Snare Collector/Reflector technology. The Collector/Reflector can be open to the internet while accepting only authorized connections from Snare agents: any system without the relevant authorization keys is refused. Combined with strict TLS certificate validation, the destination connection can be trusted and the log data sent securely to the central SIEM. The connection works much like a VPN does for a remote laptop joining the corporate network, but it is limited to the Snare agent and the Snare Collector/Reflector, and only for sending log data. Remote workers’ systems then get near real-time monitoring and audit log collection whenever they are on the internet, whether in a cafe, hotel, or airport, which are exactly the places where they are most exposed to remote exploitation. This supports early detection of incidents and breaches on the endpoint before they can spread to other users and the corporate network. The technology can be deployed on the corporate network or in the cloud, and logs can be reflected to other parts of the network and to multiple SIEM systems as needed, providing early warnings and reporting for the security team and any SOC the organisation has in place. Time to detection is always critical to containing a breach, which is why collecting the data in near real time matters so much to minimizing business impact.
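Conceptually, the agent-to-collector connection is just mutual TLS. The minimal Python sketch below shows the idea; the host name, port, and certificate file names are illustrative placeholders, not Snare’s actual configuration, which the agent manages for you.

```python
import socket
import ssl

COLLECTOR_HOST = "collector.example.com"  # hypothetical Collector/Reflector address
COLLECTOR_PORT = 6163                     # hypothetical TLS listening port

# Trust only the corporate CA and insist on validating the server certificate.
context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile="corp-ca.pem")
# Present a client certificate so the collector can authenticate this agent.
context.load_cert_chain(certfile="agent-cert.pem", keyfile="agent-key.pem")

with socket.create_connection((COLLECTOR_HOST, COLLECTOR_PORT)) as sock:
    with context.wrap_socket(sock, server_hostname=COLLECTOR_HOST) as tls:
        # A client without a valid key pair fails the handshake right here,
        # so unauthorized systems never get to deliver (or read) log data.
        tls.sendall(b"<13>Jan 01 00:00:00 laptop42 snare: example audit event\n")
```

Because both sides authenticate, the collector can safely sit on the open internet while still accepting traffic only from enrolled agents.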

The US intelligence community and national media concluded that “Russian hackers in 2016 worked to compromise state voting systems and the companies that provide voting software and machines to states” (FiveThirtyEight). What needs to be done to protect state and county election system infrastructure to ensure a fair election in 2020?

Election Systems Infrastructure is made up of the following components, with each component having a unique set of vulnerabilities, according to the Center for Internet Security and the National Conference of State Legislatures:

  • Voter registration systems provide voters with the opportunity to establish their eligibility and right to vote, and for states and local jurisdictions to maintain each voter’s record, often including assigning voters to the correct polling location.
  • Pollbooks assist election officials by providing voter registration information to workers at each polling location. Historically, these were binders that contained voter information and could be used to mark off voters when they arrived to vote. While paper pollbooks remain in use today, many pollbooks are electronic and aim to facilitate the check-in and verification process at in-person polling places. The primary cybersecurity-related risks to paper pollbooks come from the transmission of pollbook data to formatting and printing services.
  • Vote capture devices are the means by which actual votes are cast and recorded. Approaches vary greatly both across and within jurisdictions. Any given jurisdiction, and even a single polling place, is likely to have multiple methods for vote capture to accommodate both administrative decisions and the different needs of voters.
  • Vote tabulation is any aggregation or summation of votes. Vote tabulation is the aggregation of votes (e.g., cast vote records and vote summaries) for the purpose of generating totals and results report files.
  • Election results reporting and publishing: After votes are tabulated, results must be communicated both internally and to the public. The systems used for reporting and publishing are likely networked, and, in many cases, have public facing websites.

Transmission between components creates vulnerabilities. While securing election system components is important, one of the largest sources of vulnerabilities, and thus one of the most common methods of attack (attack vectors, in cybersecurity parlance), lies not in the systems but in the transmission of data between them. Weaknesses in communications protocols, or in their implementation, risk exposure or corruption of data, even for systems that are otherwise not network connected. For instance, while paper pollbooks wouldn’t typically have cybersecurity risks, if the data for the pollbooks is sent electronically to a printing service, that transmission introduces risks that must be addressed. https://www.cisecurity.org/wp-content/uploads/2018/02/CIS-Elections-eBook-15-Feb.pdf

National Security Organizations offer guidance on election security:

Organizations like the Center for Internet Security (CIS), the National Conference of State Legislatures (NCSL), the Multi-State Information Sharing and Analysis Center (MS-ISAC), and the National Institute of Standards and Technology (NIST) are providing election security guidance to states and counties.

The Center for Internet Security (CIS) and its partners published a handbook as part of a comprehensive, nationwide approach to protect the democratic institution of voting.  The handbook is about establishing a consistent, widely agreed-upon set of best practices for the security of systems infrastructure that supports elections. Some of those key guidelines include:

  • Applicable CIS Controls #5.1: Minimize and Sparingly Use Administrative Privileges and only use administrative accounts when they are required. Implement focused auditing on the use of administrative privileged functions and monitor for anomalous behavior.
  • Applicable CIS Controls #6.2: Ensure logging is enabled on the system: Validate audit log settings for each hardware device and the software installed on it, ensuring that logs include a date, timestamp, source addresses, destination addresses, and various other useful elements of each packet and/or transaction. Systems should record logs in a standardized format such as syslog entries or those outlined by the Common Event Expression initiative. If systems cannot generate logs in a standardized format, log normalization tools can be deployed to convert logs into such a format (a minimal normalization sketch follows this list).
  • Applicable CIS Controls #6.6: Use automated tools to assist in log management and where possible ensure logs are sent to a remote system: Deploy a SIEM (Security Information and Event Management) or log analytic tools for log aggregation and consolidation from multiple machines and for log correlation and analysis.
  • Applicable CIS Controls #9.2: Leverage Host-based Firewalls Apply host-based firewalls or port filtering tools on end systems, with a default-deny rule that drops all traffic except those services and ports that are explicitly allowed.
  • Applicable CIS Controls #12.2 Deploy Network Intrusion Detection System (IDS): On DMZ networks, configure monitoring systems (which may be built in to the IDS sensors or deployed as a separate technology) to record at least packet header information, and preferably full packet header and payloads of the traffic destined for or passing through the network border. This traffic should be sent to a properly configured Security Information Event Management (SIEM) or log analytics system
  • Applicable CIS Controls #14: Controlled Access Based on the Need to Know: The processes and tools used to track/control/prevent/correct secure access to critical assets (e.g., information, resources, systems) according to the formal determination of which persons, computers, and applications have a need and right to access these critical assets based on an approved classification.
  • Applicable CIS Controls #14.1: Implement Network Segmentation Based on Information Class: Segment the network based on the type and sensitivity of the information processed and stored. Use virtual LANs (VLANs) to protect and isolate information and processing with different protection requirements, with firewall filtering to ensure that only authorized individuals are able to communicate with the systems necessary to fulfill their specific responsibilities.
  • Applicable CIS Controls #16.10: Ensure that user activity is logged and monitored for abnormal activities: Profile each user’s typical account usage by determining normal time-of-day access and access duration. Reports should be generated that indicate users who have logged in during unusual hours or have exceeded their normal login duration. This includes flagging the use of the user’s credentials from a computer other than computers on which the user generally works.
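To make the normalization idea in control #6.2 concrete, here is a minimal sketch. It assumes an event has already been parsed into a simple dictionary; the field names are hypothetical, and real normalization tools handle far more source formats than this:

```python
from datetime import datetime, timezone

def to_syslog(event: dict, facility: int = 13, severity: int = 5) -> str:
    """Render a parsed event as an RFC 3164-style syslog line.

    `event` is a hypothetical normalized record; real tools map many
    vendor-specific formats onto a common shape like this one.
    """
    pri = facility * 8 + severity                      # syslog priority value
    ts = datetime.now(timezone.utc).strftime("%b %d %H:%M:%S")
    return (f"<{pri}>{ts} {event['host']} {event['app']}: "
            f"src={event['src_ip']} dst={event['dst_ip']} action={event['action']}")

print(to_syslog({"host": "dmz-fw1", "app": "fw", "src_ip": "203.0.113.9",
                 "dst_ip": "198.51.100.20", "action": "deny"}))
```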

Election Security Requires Funding and Investment.

On March 23, 2018, the Consolidated Appropriations Act of 2018 was signed into law, including $380 million in Help America Vote Act (HAVA) grants for states to make election security improvements. To receive the grant funds, states must provide at least a 5 percent match within two years of receiving the federal funds and submit a state plan detailing how the funds are to be used. Every state received a base of $3 million, with the remaining funds disbursed using the voting-age-population formula described in Sections 101 and 103 of HAVA. This means that states received anywhere from $3 million to $34 million, depending on population (see this chart for state-by-state details). http://www.ncsl.org/research/elections-and-campaigns/election-security-state-policies.aspx

The replacement of election equipment will continue into 2019. Although not a new trend, the requirement of paper ballots or a paper trail may be central in 2019 legislation. Bills in Indiana (HB 1315, SB 570), Missouri (HB 543, SB 113), Mississippi (HB 28), New York (SB 308), South Carolina (HB 3304, HB 3043, HB 3302, SB 182, SB 183, SB 140), Texas (SB 277, HB 22) all deal with phasing out paperless voting machines or requiring a paper trail for new equipment. Some bills include an appropriation. New Hampshire HB 345 would require new ballot-counting equipment to be acquired at regular intervals, and bills in Texas (HB 362) and Wyoming (HB 21) would create grant funds to assist local governments with purchasing new voting equipment.       http://www.ncsl.org/research/elections-and-campaigns/the-canvass-january-2019.aspx

Open Source Election System Technology Efforts

Election officials in Los Angeles County gave final approval last Tuesday to a new system for counting ballots, Voting Solutions for All People (VSAP) Tally 1.0, designed to make the upcoming elections more secure. VSAP Tally 1.0 is an open-source platform that runs on technology owned by the county rather than a private vendor, and it is the first publicly owned, open-source election tally system certified under the California voting system standards. Certification involved rigorous functional and security testing by the Secretary of State’s staff as well as a certified voting system test lab, ensuring that the new system complies with the California Voting System Standards (CVSS)… John Sebes, chief technology officer of the Open Source Election Technology Institute, points out that “their intention is to make it freely available to other organizations, which it is not as of now. It’s open source in the sense that it was paid for by public funds and the intent is to share it.” https://hub.packtpub.com/vsap-tally-1-0-a-new-open-source-vote-counting-system-by-la-county-gets-final-state-approval/

The OSET (Open Source Election Technology) Institute researches, develops, and releases innovative election software as public technology (i.e., publicly available open-source technology subject to an OSI-accredited license) in order to increase verification, accuracy, security, and transparency, and to ensure that ballots are counted as cast. The mission of the OSET Institute, a nonpartisan, nonprofit election technology research, development, and education organization, is to increase confidence in elections and their outcomes in order to preserve the operational continuity of democracy, ultimately worldwide, and because everyone deserves a better voting experience. The Institute’s goal is to help defend democracy worldwide by ensuring the integrity, security, and usability of election administration technology; these principles guide its work. The result is ElectOS, a framework of public election technology available for any jurisdiction to adopt, adapt, and deploy for elections, whether done in-house or by an outside commercial systems integration organization (though not by the OSET Institute itself).

The Timing for Planning and Implementation of Election Security is Now!

Snare Advanced Threat Intelligence combines syslog data from Snare, covering Windows, Linux, Unix, OSX, routers, switches, and firewalls, with other data sources including external threat feeds such as STIX (Structured Threat Information eXpression) data on known cyber threats, directory and authentication data, server patching data, and backups. It also ingests data from cloud-based APIs, including Office 365 and Amazon Web Services. This enables customers to manage the security posture of all their systems regardless of location or type of data feed. All products are available either on premise or hosted in the cloud, and are offered as a subscription service.

Information Sources

Center for Internet Security:  eBook – A Handbook for Elections Infrastructure Security  https://www.cisecurity.org/wp-content/uploads/2018/02/CIS-Elections-eBook-14-Feb.pdf

Electronic Poll Books | e-Poll Books http://www.ncsl.org/research/elections-and-campaigns/electronic-pollbooks.aspx

Electronic Poll Books-California Code of Regulations https://www.sos.ca.gov/administration/regulations/current-regulations/poll-books/

As many know, CISA (the US Cybersecurity and Infrastructure Security Agency) recently issued an emergency directive on DNS infrastructure tampering.

While much of the directive relates to validating organisational DNS, password and MFA settings, one key aspect of the directive discusses the monitoring and management of authorised and unauthorised changes to the DNS environment. In order to meet this requirement, adequate logging should be in place to monitor changes to the DNS settings, and log data should include date/time information as well as information on who is making the changes. Snare can help meet this requirement in several ways.

The Snare enterprise agents can track all access and modification to the DNS settings on Windows and Unix systems.

The key aspects of the logs that can be collected are:

  • All user authentication activity. If the user logs into the system from the local console, Active Directory, or via ssh on Unix, then Snare can collect the relevant operating system audit events or kernel events to show that a specific user logged into the system. This data will include the source IP, authentication type, success or failure of the attempt, and the date and time stamp of the activity.
    • Microsoft has technical articles on how to configure your audit policy to generate the specific events, both on legacy 2003 systems and on newer 2008R2, 2012R2, 2016 and 2019 systems that support advanced audit policies.
    • All the events are quite detailed, and include:
      • Who made the changes,
      • What the changes were,
      • What zones were affected and obviously,
      • When these changes occurred.
  • The Microsoft custom event logs on Windows 2008R2, 2012R2, 2016 and 2019 also include DNS Server and DNS Client event log categories. The Snare agent will collect these using the default objectives. The events collected show additional changes to DNS records that can occur through either manual or dynamic updates associated with Active Directory DNS and zone files. A summary of the event types is:
    • 512, 513, 514, 515, 516 – ZONE_OP: can be part of major updates and changes to the zone files.
    • 519, 520 – DYNAMIC_UPDATE
    • 536 – CACHE_OP
    • 537, 540, 541 – Configuration: these events are the main area of concern for system changes.
    • 556 – SERVER_OP
    • 561 – ZONE_OP
  • The Snare agent for Windows will collect DNS Server logs as part of its default configuration.

  • As part of the installation process, the Windows agent can be told to manage the configuration of the Windows audit subsystem, to ensure that it generates the relevant administrative events. Alternatively, the Snare for Windows agent can be configured to be subservient to manually configured local policy or group policy settings. Note that unless the associated audit subsystem is appropriately configured, events may not be delivered to the Snare for Windows agent for processing.

  • On Unix systems, the DNS configuration files are usually flat text files. The Snare Linux agent can monitor them in two ways:
    • File watches: The agent can be configured to watch for any and all changes to specific files related to DNS configuration settings, and will raise kernel audit events on access or modification, including details of who accessed or changed the file and the date/time of the event. On Linux systems, configuration files related to bind, dnsmasq, or other DNS server tools may be monitored.
    • The default administrative objectives for the Linux agent track all user logins, administrative activity, and privileged commands. File watches are also configured for changes to the /etc directory, which hosts system-level configuration files for the operating system.

  • File Integrity Monitoring – The Snare Linux agent can also perform sha512 checksum operations on system configuration files, such as DNS configuration files, to watch for changes. This tracks new files, changed files, and deletion of files and directories being monitored. These events don’t show who made the change, but they do track the actual content and permission changes to files. FIM scans can run on a configurable schedule (e.g., once per hour or once per day) depending on the level of granularity wanted (a minimal checksum sketch follows this list).

  • Once the logs have been generated, it’s up to the SIEM and reporting systems to provide reports or alerts on the changes. Snare offers two complementary methods for this:
    • Snare Central – provides objective reports that look for the specific event IDs and produce output in tabular format as well as graphs and pie charts of the activity. These can be emailed on any schedule needed, as PDF reports, CSV or text output.

  • Snare Advanced Analytics – provides a view of changes as they occur in the system and updates the dashboard in near real time as the logs are collected.

  • As part of normal operations, all changes should be validated as approved activity per your standard operating procedures, and anything not approved should be escalated as an incident for investigation.
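As a rough illustration of the FIM approach described above (a sketch of the technique, not the Snare agent’s implementation), the following Python snippet baselines sha512 checksums for a hypothetical list of DNS-related files and reports new, changed, and deleted files on each run. Scheduling it hourly or daily, for example via cron, gives the configurable granularity mentioned:

```python
import hashlib
import json
from pathlib import Path

# Hypothetical watch list and baseline location; bind-style paths are examples only.
WATCHED = ["/etc/named.conf", "/etc/resolv.conf", "/etc/hosts"]
BASELINE = Path("/var/lib/fim-baseline.json")

def sha512_of(path: str) -> str:
    """Stream the file through sha512 so large files don't need to fit in memory."""
    h = hashlib.sha512()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def scan() -> dict:
    return {p: sha512_of(p) for p in WATCHED if Path(p).exists()}

# Compare the current scan against the stored baseline and report differences.
old = json.loads(BASELINE.read_text()) if BASELINE.exists() else {}
new = scan()
for path in sorted(set(old) | set(new)):
    if path not in new:
        print(f"DELETED  {path}")
    elif path not in old:
        print(f"NEW      {path}")
    elif old[path] != new[path]:
        print(f"CHANGED  {path}")
BASELINE.write_text(json.dumps(new))
```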

If your organisation needs help in this area and you would like more information, please contact our friendly sales team at snaresales@prophecyinternational.com for a chat on how we can help your business achieve a more effective and efficient CISA DNS monitoring solution.

Steve Challans

Chief Information Security Officer

https://www.snaresolutions.com


How good data management applies to log collection.

I love data. I was a math geek growing up and turned my affinity for statistics into a career. Intuitively, most of us know that data drives informed decision making, leading to better business outcomes. That only holds, however, if you do a good job collecting, managing, and interpreting that data, and often that doesn’t seem to be the case. Data management best practices apply to log collection, because that is essentially what it is. The caveat is that failing to collect certain logs can have dire consequences. Fortunately, there are plenty of tools that can strengthen your security posture by drastically improving the way you collect and manage your logs.

When it comes to log collection, two approaches seem to dominate the marketplace. The first is, “whatever we have to do to steer clear of negative consequences – particularly auditors.” These people take the Minimalist approach, collecting as little as possible to make managing the whole system as easy as possible. And who can blame them? Unless you are passionate about data management, you probably have dozens of other priorities you’d rather spend your valuable time on. But there is a lot of inherent risk in this approach, because you probably won’t have the information you need when the time comes. If you ride a motorcycle, or know somebody who does, you may have heard the phrase, “There are two types of riders: those who have been in an accident, and those who will be.” The wisdom is that an accident is inevitable and it’s best to be prepared; motorcycles are dangerous, after all. The axiom applies equally to cybersecurity, because there are really only two types of business: those that have been breached, and those that will be. It’s one thing to pass an audit; it’s another beast entirely to cope with a breach.

The other common approach I often see is “D. All of the Above.” If you have unlimited resources, it’s an awfully attractive option; after all, why risk it? When push comes to shove, if you have all the data at your disposal, you know the answers are in your archives somewhere. While I don’t totally take issue with this “Maximalist” approach, there are a lot of ways to improve on it. For starters, even a small environment generates at least 5 GB of logs a day, which works out to roughly 125-plus logs per second, or about 10.8 million logs per day. That is a lot of data, and larger environments produce thousands of times more. All of it means more overhead, from hardware costs to SIEM costs. When your solution charges by units of data ingested, ingesting everything not only costs your business unnecessarily, but for many organizations it makes their SIEM prohibitively expensive. We see it every day: a medium-sized enterprise goes with a market leader in security analytics only to see their bill go from a couple hundred thousand a year to several million. You only have to mingle at the RSA Conference to hear frustration after frustration, and it doesn’t have to be this way.
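A quick back-of-the-envelope check of those figures, assuming an average event size of roughly 460 bytes (an assumption for illustration; real event sizes vary widely):

```python
# Sanity-check the volumes quoted above.
bytes_per_day = 5 * 10**9          # ~5 GB of logs per day
avg_event_size = 460               # assumed average bytes per event
events_per_day = bytes_per_day / avg_event_size
events_per_second = events_per_day / 86_400
print(f"{events_per_day / 1e6:.1f} M events/day, {events_per_second:.0f} events/sec")
# -> roughly 10.9 M events/day and ~126 events/sec, in line with the figures above
```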

There are so many ways to focus log collection, but I have three favorites:

  • Log Truncation
  • Forensic Storage
  • Tiered Analytics

Log truncation is simple enough; Snare has a whole paper on how it works. What has always surprised me is the reluctance to do it. Log truncation is the removal of superfluous text from Windows event logs. Every Windows event carries a string of verbose text that has no forensic value. There is no need to collect it, and it will only bog down your network and your storage. We’ve seen environments where upwards of 70% of log data was verbose text that, if cut from the logs, would have saved the organization considerable cost. Paying to move and store data that your analytics tool will eventually ignore is, quite simply, a waste of money.
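To illustrate the idea (a toy sketch, not Snare’s truncation logic), the snippet below cuts a Windows logon event’s description at the first boilerplate marker. The marker strings are assumptions you would tune per event ID, since each event type repeats its own fixed explanatory text:

```python
# Windows event descriptions often end with fixed boilerplate that never
# changes between events of the same ID; cutting at the first marker keeps
# the variable, forensically useful fields and drops the rest.
BOILERPLATE_MARKERS = [
    "This event is generated when",     # assumed marker; appears in e.g. event 4624
    "Certificate information is only",  # assumed marker for another event family
]

def truncate(description: str) -> str:
    cut = len(description)
    for marker in BOILERPLATE_MARKERS:
        idx = description.find(marker)
        if idx != -1:
            cut = min(cut, idx)
    return description[:cut].rstrip()

msg = ("An account was successfully logged on. Subject: ... Logon Type: 3. "
       "This event is generated when a logon session is created...")
print(truncate(msg))  # keeps only the portion that differs between events
```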

Forensic storage is a trending topic in our industry. Data volume is growing at an incredible rate, and we need much of this data to piece together what happened in the unfortunate event of a breach. The problem is that trying to detect a breach by constantly sifting through all of that data increases mean time to detection (MTTD), which in turn increases mean time to response (MTTR). That’s where forensic storage comes in: forward all critical event logs to your security analytics platform, while keeping everything else with even a shred of forensic value on separate servers. The data is there if you need it, but it isn’t driving up hardware and software costs and, more importantly, it isn’t bogging down your analytics platform or your incident response team. Every study on data management has data scientists reporting that they spend over 70% of their time on data preparation.1 That is wild. Highly skilled, highly paid employees like data scientists should not be spending nearly three-quarters of their time on menial tasks, but that’s what happens when you inundate your systems with data. Forensic storage is an easily implemented solution that not only improves security KPIs but saves you money as well.
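The routing decision itself can be as simple as this sketch, where the set of “critical” Windows event IDs is an assumed watchlist you would tailor to your environment:

```python
# Security-critical event IDs go to the analytics platform; everything else
# goes to cheap forensic storage, searchable but off the hot path.
CRITICAL_EVENT_IDS = {4624, 4625, 4672, 4720, 1102}   # assumed watchlist

def route(event: dict) -> str:
    if event.get("event_id") in CRITICAL_EVENT_IDS:
        return "siem"            # full analytics, dashboards, alerting
    return "forensic_store"      # retained for investigations only

print(route({"event_id": 1102}))  # -> siem (audit log cleared)
print(route({"event_id": 5156}))  # -> forensic_store
```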

That brings us to tiered analytics. Tiered analytics, data dork that I am, is my favorite approach to data management, but it is also the most complex. While in theory companies of every size can take advantage of it, it becomes increasingly important as your organization grows, and a lot of companies already do it to a degree. When branch offices or individual departments have their own KPIs and dashboards while the same data feeds executive-level dashboards at corporate, that is tiered analytics. The approach gives your business insight into the data it generates at multiple levels, with varying degrees of detail and from varied perspectives.

SANS has a white paper written by Dave Shackleford that gives great examples from each tier. The dashboards and KPIs built for C-level executives, for example, would need to answer the following questions:

  • What is our overall risk posture?
  • What are our high-value targets?
  • What are the risks if our high-value targets are compromised?
  • What are the most cost-effective ways of reducing risks?

While IT management would need analytics to answer questions like:

  • What is really happening on the network?
  • Are any systems operating outside of policy?
  • Based on current system workloads can anything be virtualized?
  • What impact would an outage or downtime of a system have on the business?
  • Can we decommission any assets?

Then, another tier down, you have various monitoring and response teams looking to answer an almost unlimited number of questions, requiring a number of purpose-built dashboards and potentially custom KPIs:

  • Are website links in various emails from a known list of bad websites?
  • Are there network assets that should be monitored more frequently?
  • Are there changes to any host configuration settings or files closely tied to a website visit?
  • What is the impact on other network assets if this one is compromised?
  • Have there been any unusual examples of port and protocol usage?
  • Should we monitor some assets more frequently because of the amount of use they get?
  • Is an employee who is supposed to be on vacation logged in at the office?
  • Is there suspicious activity after a USB port was used?

These questions are obviously geared toward reducing MTTD and improving incident response. Some are also aimed at understanding the impact of individual assets on the business.2 Several questions require pulling together disparate data sources, not just logs. Workplace management software, for example, can help you identify when an employee on vacation in Barbados is also somehow at their machine in the office. STIX feeds can help you correlate activity on the network with malicious websites and IPs. Inundating your analytics tools with superfluous log data only makes it that much more tedious to bring in data from the rest of your business, which is already becoming an imperative. With a tiered analytics solution, you can pick and choose which data sources to bring in where, giving your teams easily digestible data sets to analyze and report on, increasing the efficiency of each business unit and drastically improving your security posture.
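As a toy example of that kind of cross-source correlation, here is a sketch joining a hypothetical leave calendar against office logon records; field names and data shapes are illustrative only:

```python
from datetime import date

# Hypothetical extracts from a workplace-management system and the logon logs.
on_vacation = {"jsmith": (date(2019, 2, 4), date(2019, 2, 15))}
office_logons = [
    {"user": "jsmith", "host": "HQ-WKS-042", "date": date(2019, 2, 6)},
    {"user": "adoyle", "host": "HQ-WKS-017", "date": date(2019, 2, 6)},
]

# Flag any office logon that falls inside the user's approved leave window.
for logon in office_logons:
    window = on_vacation.get(logon["user"])
    if window and window[0] <= logon["date"] <= window[1]:
        print(f"ALERT: {logon['user']} logged on at {logon['host']} "
              f"while on leave ({window[0]} to {window[1]})")
```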

You have only to look at the most recent Verizon Data Breach Investigations Report (2018) to see how little progress we’ve made in uncovering breaches: for 68% of businesses, it takes months or more to discover they’ve been breached.3 There is so much an organization can do to improve its posture that it can be daunting to even begin. The first step is better data management: make life easier for the people living and working in all that data. The second is to prioritize and work toward more sophisticated approaches, action item by action item. Truncation is an easy first step, and forensic storage is a no-brainer. After that comes security analytics, whose architecture will vary company to company but whose implementation will be critical to improving both an organization’s security posture and the cost-effectiveness of its security solutions. By improving the way we tackle today’s security challenges, we’ll be better equipped to meet tomorrow’s.


1 Cleaning Big Data: Most Time-consuming, Least Enjoyable Data Science Task, Survey Says: Gil Press – https://www.forbes.com/sites/gilpress/2016/03/23/data-preparation-most-time-consuming-least-enjoyable-data-science-task-survey-says/#44e4090d6f63

2 Shackleford, D. (2016, January). Using Analytics to Prevent Future Attacks and Breaches. Retrieved December 18, 2018.

3 Verizon. (2018, March). 2018 Data Breach Investigations Report. Retrieved November 27, 2018, from https://www.verizonenterprise.com/resources/reports/rp_DBIR_2018_Report_execsummary_en_xg.pdf

The new reality for Canadian businesses

The Personal Information Protection and Electronic Documents Act or PIPEDA applies to the collection, use or disclosure of personal information by every Canadian organization in the course of a commercial activity.

The Office of the Privacy Commissioner of Canada introduced new data breach reporting requirements that came into effect on November 1, 2018. The requirement was introduced because of “the number and frequency of significant data breaches over the past few years,” and because “mandatory breach reporting and notification will create an incentive for organizations to take security more seriously and bring enhanced transparency and accountability to how organizations manage personal information,” according to Commissioner Daniel Therrien.

The reporting requirement works in conjunction with the Privacy Act for the Federal Sector and the Personal Information Protection and Electronics Document Act (PIPEDA) for the private sector.

This new requirement applies to all businesses within Canada, and to organizations that collect the personal information of Canadians.
With this new requirement, organizations must:

  • Report to the Privacy Commissioner’s office any breach of security safeguards where it creates a “real risk of significant harm;”
  • Notify individuals affected by a breach of security safeguards where there is a real risk of significant harm;
  • Keep records of all breaches of security safeguards that affect the personal information under their control; and
  • Keep those records for two years.

The definition of real risk of significant harm includes humiliation, damage to reputation or relationships, and identity theft.

While the requirement refers to the reporting of a data breach, it also points to the need to improve the security posture of every organization so that the likelihood of a breach is minimized. There are numerous traditional security tools designed to protect our networks, such as firewalls and endpoint protection; however, given the number of breaches that have occurred, it is evident that they are not enough, and organizations need to be proactive and vigilant. This requires a tool designed to review all activity within the organization and to compare that activity day to day, such as a SIEM or analytics tool.
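Day-to-day comparison can start very simply. The sketch below compares event counts per category against the previous day, with an assumed threshold; real SIEM and analytics tools do this with proper baselines and statistics:

```python
# Toy day-over-day comparison of event volumes per category; large swings
# are a cheap first signal that something changed on the network.
yesterday = {"logon_failure": 112, "policy_change": 3, "process_start": 48210}
today     = {"logon_failure": 941, "policy_change": 2, "process_start": 47955}

for category in sorted(set(yesterday) | set(today)):
    prev, cur = yesterday.get(category, 0), today.get(category, 0)
    if prev and cur / prev >= 3:      # assumed threshold: 3x growth is notable
        print(f"REVIEW: {category} jumped from {prev} to {cur}")
```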

Where Snare Comes In

The Snare Product Suite by Prophecy has been designed to provide clear, concise and accurate reporting of all activity within your network.

Snare Agents are feature-rich, reliable, lightweight logging programs that can be installed on Windows, Linux, Solaris and OSX, with two further agents for text-based logs and an MS SQL agent, all sending the events and activity from your devices in near real time.

Snare Server provides data collection and reporting in real time, delivering the critical information required to monitor your organization’s network infrastructure. It also provides the ability to store and retrieve event data for historical review.

The Snare Analytics product provides organizations with a single pane of glass to review activity over time, check that systems are patched to prevent attacks via out-of-date software, and spot unusual activity and the escalation or improper use of admin privileges, allowing you to identify and respond to a potential breach before it escalates.

Want to find out more about how the Snare Product Suite can assist? Call today for more information, book a one-on-one demonstration, or request an evaluation.

State Governments manage and must protect a wide range of citizen information from cyber security threats, including credit card records, personal health information, employment records, revenue and tax information and election systems. With much of this information available online, State Departments and Agencies are a primary target for cyber-thieves. A 2017 cybersecurity report compiled by Verizon found that public-sector entities were the third-most common breach victims, behind financial and health care organizations.

Based on the number and severity of past cyber security breaches, States are keenly aware and have or are taking action to secure their networks and databases. According to a 2018 NASCIO Report – State CIO Top Ten Policy and Technology Priorities for 2018, security and risk management is the number one priority of State CIOs. While State Governments have acknowledged the security threat, different States are addressing the threat in different ways.

The Challenges: In addition to the increase in cyber security threats, States are challenged by limited budgets and competition for information security human resources. State Executives must determine how to protect not only State-level networks and information systems, but also the dozens of State Agencies they oversee. While it is not cost effective for every State Agency to separately fund and manage its own information security systems and staff, State CIOs must determine what level of security services and support they can and should provide to their State Agencies.

Steps Taken: Over the past decade, State Legislatures have created state-wide Offices of Information Technology (OIT), and mandated the staffing of Chief Information Officers (CIO), and Chief Information Security Officers (CISO). A 2018 Deloitte-NASCIO Cybersecurity Study reported that all 50 states now have a statewide CISO or equivalent. Based on information sourced from 50 State Web Sites, 23 States now offer Managed Security Services, with the majority of States providing Security Governance, Compliance Audits and InfoSec Training and Consulting. The most frequently offered Managed Security Services are:

  • Security Information & Event Management (SIEM)
  • Incident Management and Response
  • Firewall, Proxy and VPN Services
  • Intrusion Detection/Prevention (IDS/IPS)
  • Vulnerability/Pen Testing
  • Encryption/SSL/TLS/Certificates
  • Malware, Spam & Virus Filtering
  • Forensic Investigations

Alternative Business Models:

In addition to staffing State CIOs and CISOs with specific duties and responsibilities, an increasing number of States are consolidating oversight and management of State Agency IT resources under a single statewide Office of Information Technology. But there are different business implementation models offered by different States.

Education & Governance (only) Model, where State CISOs establish, oversee and facilitate statewide security management programs to ensure government information is adequately protected. Examples of responsibilities of the CISO position under state laws include:

  • creating statewide security policies and IT standards,
  • requiring information security plans and annual assessments or reporting, and
  • requiring periodic security awareness training for employees

National Associations, including the National Conference of State Legislatures, the National Association of State Chief Information Officers (NASCIO), and the Multi-State Information Sharing & Analysis Center, contribute significantly by identifying information security threats and best practices.

Brokerage Models differ depending on whether they are Sole Sourced or Multi-Vendor Sourced. The Texas Department of Information Resources (DIR), for example, contracted with AT&T to provide a comprehensive suite of Managed Security Services that give state agencies, local governments, school districts and other public entities access to resources to protect systems and data. Agencies can go to the DIR portal, identify the services they need and place an order for them.

An alternative model is to source a mix of security services from multiple vendors and coordinate the provision of these services to State Agencies. A 2018 NASCIO State CIO Survey showed 4 States already function as a broker of services, 5 see themselves migrating to primarily a broker of services and 16 see themselves offering some brokered services as well as providing services directly.

Managed Security Services: A number of States offer a range of managed security services to their State Agencies, most notably: Idaho, Iowa, Kentucky, Louisiana, Missouri, New Jersey, Pennsylvania, Tennessee, Vermont, but business models vary depending on whether they have centralized info security resources, including IT infrastructure, security systems and Infosec human resources, or whether infrastructure is centralized and Infosec resources are distributed, reporting to a centralized State OIT or reporting to a specific Agency.

Security Solutions for State OIT’s:

State Offices of Information Technology must balance the need for information security with limited budgets and human resources, and with the security software and services available from vendors that support their particular business model. Snare by Prophecy International is a vendor partner to State OITs, with over a decade of providing syslog collection, filtering and forwarding for Security Information & Event Management (SIEM). Snare Security Solutions address the two primary challenges faced by State OIT organizations, offering cost-effective, easy-to-deploy, and easy-to-use solutions. Snare’s Business Intelligence Platform, built on an Elasticsearch index, combines and correlates syslog events with a host of IT (ITSM, patch and backup histories) and third-party (STIX malware threats, firewalls, DNS, IDS/IPS) security sources for threat-hunting forensics. It includes a prebuilt KPI monitoring dashboard and a smart user interface, so users can build and share queries and reports through a multi-tenant premise or cloud platform. Offered as an op-ex subscription, Snare complements any State’s primary SIEM platform, integrating with Active Directory and supporting Single Sign-On.

View a pre-recorded demonstration of Snare Business Intelligence Dashboard by our Chief Product Officer here. To learn how Snare leverages Splunk, QRadar or another SIEM platform, go here.

In previous blogs, I’ve tongue-in-cheek (mostly) suggested our organisations would be a lot more protected from nefarious actors if we simply disconnected and went back to pen and paper. I may have also suggested that having employees makes enterprise security quite challenging. And Wi-Fi, visitors, BYOD, and IoT are also threat vectors: perhaps we should also get rid of them. Imagine the money we’d save.

OK, let’s assume you do need your internet connection, staff, and applications. How do we secure it all?

In earlier blogs, I’ve discussed a range of topics that look at different aspects of IT security and offered some thoughts on how best to go about building a secure and resilient organisation.

However, there’s a new kind of threat management technology emerging (we are one of the pioneers who invented it, so indulge me). It takes all of the feeds from small-footprint logging agents installed on every device and application in an organisation (think PCs, laptops, servers, and remote desktops) and intelligently profiles and flags areas of concern.

I’m not talking about SIEM here either in case you’re wondering. SIEM (security information and event management) collects the logs from our Snare agents and other syslog feeds from devices and applications, and then provides alerts and automatically remediates (in some instances) or identifies other security problems that need to be fixed.

You can see the hole. SIEM focuses on the data streams coming from the security apparatus but it doesn’t do a great job of building contextual insights from other data sources.

This is where threat intelligence comes into play.

A threat intelligence solution scans and collects everything that generates a log or provides intelligence on business operations.

It captures and secures log information coming from IT ticketing systems, configuration management databases (CMDB), change management systems, and structured threat information expression (STIX) data feeds to gain intelligence from threat actors, LDAP sources, group policy, system and application patching information, and backup status, as well as the traditional logs from Windows domain controllers, servers, desktops, mobile devices, webservers, and syslog feeds from firewalls, routers, switches, IP phones, and wireless access points.

So, pretty much anything that can have a logging agent installed on it or provide a syslog feed.

Effective logging agents (like ours at Snare) even log when someone tries to wipe a log to cover their tracks. Every log entry lands on a highly secure central log server in near real time, so even if an attacker deletes device logs, the agent has already collected and sent them to the central system. All of the malicious activity before the logs were deleted has already been captured and stored away from the system under attack.

Because the logs are kept secure on another system, away from the system under attack, we have a forensic record of what occurred. The threat intelligence system will generate an alert (either on the dashboard or sent to a recipient) and, when you compare the log records, the anomaly of missing device logs shows up as someone trying to cover their tracks. This information can then be correlated with other system and user activity as part of the incident management process.
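One simple way to surface that anomaly, sketched below under the assumption that each host’s events carry a monotonically increasing record number: if the central store holds records the device no longer has, the gap itself is the alert.

```python
def find_gaps(record_numbers):
    """Return (missing_from, missing_to) ranges in a sequence of record numbers."""
    gaps = []
    nums = sorted(record_numbers)
    for prev, cur in zip(nums, nums[1:]):
        if cur - prev > 1:
            gaps.append((prev + 1, cur - 1))
    return gaps

# The central store still holds records 1041-1043; after a wipe, the device's
# local log restarts at 1, so merging the two sequences exposes the gap.
print(find_gaps([1041, 1042, 1043, 1, 2]))  # -> [(3, 1040)]
```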

Once you have logs for everything, the challenge is making sense of that information. Until now, it’s been pretty difficult and often expensive.

Threat intelligence software helps to overcome that problem. It presents a cascading series of preformatted dashboards which provide visual alert cues to the health, or otherwise, of the network, devices, and applications generating logs.

The power of threat intelligence comes from two main areas:

  1. It collates vast amounts of log data into meaningful information. This information can be visualised on dashboards calibrated out of the box to highlight potential problems using predefined key performance indicators (KPIs) to find potential security incidents. Regardless of what kind of application, system or device is generating the log, it can offer summary and detailed insights, drilling down to the raw data.  Once baselines are established, you can customise further, perhaps desensitising certain alerts and filtering out other noise to reduce false positives. Or, you can increase sensitivity on systems or applications that have highly restricted access in certain security zones.  Additionally, you can easily plug in new log sources at any time from other applications that provide better context of activity or devices such as the new vending machine in the hall which polls an internet connection once a day.
  2. Threat intelligence looks across the entire log universe in your organisation, pulling data from many sources to help connect the dots on what is occurring. It looks for patterns and behaviours which indicate that an attack (internal or external) is being attempted, policies are breached, strange or unauthorised user activity is occurring, or a device or application isn’t behaving as expected. By reducing false positives, security teams can spend more time on real and important incidents.

While most security platforms will pick up obvious outside hacking behaviour like DDoS or multiple random user login attempts, they won’t see more subtle things: a successful change to a firewall policy at an uncharacteristic time of day; a legitimate user asking for password resets while their account is suspended during leave (common practice for people in financial roles); users being granted administrative access; an admin generating more user accounts or passwords than they normally do; or a switch or system being remotely switched off and on again multiple times, perhaps in an attempt to load a compromised boot file.
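The time-of-day case is easy to sketch. Assuming a learned baseline of typical working hours per administrator (hard-coded here purely for illustration), flagging an off-hours change is a one-line comparison:

```python
from datetime import datetime

# Assumed per-admin baseline, e.g. learned from months of activity history.
typical_hours = {"fw-admin": (8, 18)}   # active between 08:00 and 18:00

def is_anomalous(user: str, event_time: datetime) -> bool:
    """Flag activity outside the user's learned time-of-day window."""
    window = typical_hours.get(user)
    if window is None:
        return True   # no baseline yet: treat as worth a look
    start, end = window
    return not (start <= event_time.hour < end)

# A successful firewall policy change at 03:12 stands out against the baseline.
change = datetime(2019, 3, 5, 3, 12)
print(is_anomalous("fw-admin", change))   # -> True
```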

In short, threat intelligence solutions collect, store, and analyse everything. And, they increasingly apply machine learning to make connections within the data that simply wouldn’t be apparent to other systems, or even to highly skilled analysts as they often suffer information overload. Finding the proverbial needle in the haystack is the key.

The irony is that we’ve been insisting on capturing logs for decades, and who knows how many opportunities have been lost because we simply couldn’t act on the information in real time or understand it in the wider context of how our organisations operate. As organisations have grown and put more systems on the network, the logging load has increased exponentially.

This threat intelligence capability is being coined next-generation SIEM in the market. It’s pretty obvious that it will become pervasive technology very quickly, as the market needs more context around the security log data behind incidents.

Traditional SIEM is not going anywhere soon and clearly has a role to play but, increasingly, you will see the same information going to a next-generation SIEM with threat intelligence capability in the platform, which can also take some of its data feed from the traditional SIEMs.

Unless you’ve been out of contact with civilisation for the last few years, you’ll know about the Internet of Things (IoT).

Just to catch you up, it’s the advent of a myriad of devices which are not only connected to the internet but also, in many cases, generate data.

What sort of devices? Think about any smart device, or any monitored device or any internet-aware device. It could be any or all of the following, which can be found in most organisations:

  • vending machines that notify the operator when stock is low, cash boxes are full, or change is required
  • remotely-monitored exit signs that light the way to your fire exits.
  • IP phone systems
  • multifunction printers (a recent exploit has been uncovered which allows bad actors onto enterprise networks via unsecured fax lines connected to certain multifunction printers)
  • smart whiteboards and projectors
  • security swipe card systems
  • elevator and other building management and monitoring systems
  • unmanaged end user devices connected over the enterprise Wi-Fi network (a reasonably recent example was an internet-connected thermometer in a fish tank in a casino’s lobby, which let hackers access the company network and steal high roller data. I assume the fish denied everything. Or maybe they were just being koi. (Sorry.))
  • CCTV systems which may connect to third-party security providers
  • smart TVs, fridges and other appliances in the corporate kitchen, even though the ‘smart’ component often isn’t even used in a business kitchen setting.

And, as we know, where there’s an internet connection, there’s a threat vector.

The problem with IoT is the unstructured and unmanaged nature of these connected devices. In many cases, the manufacturers of these more general devices are mostly focused on the specific functionality of their appliance and may not even consider wider enterprise security ramifications.

Internet connections for many devices may be active by default, and often not able to be patched or managed as they are hard-soldered onto circuit boards. And, in some cases, you may not even know that a device is internet-aware and could be acting as a gateway onto your corporate network.

It’s fair to say that, for many organisations, worrying about being hacked via the smart TV or the Wi-Fi sound bar in the company boardroom is not top of mind.

So what’s the answer?

First, if you haven’t thought about it already, be aware that this is a threat vector. It’s one that only deliberate attackers would attempt to use, which makes any kind of breach probably quite serious.

Consider that it takes serious and direct effort to try to break into an enterprise network via a smart fridge or the CCTV system.

Second, identify and isolate these devices with network segmentation. Use any of the available technology tools to find devices that transmit or attempt to connect to the network or the internet, and determine the best course of action from there. If they need to remain connected (or you can’t turn the connectivity off), make sure they can only access quarantined parts of the network. If they’re wired devices, ensure patch panels are wired correctly and network leads aren’t accidentally plugged into secured or other production networks.

If devices transmit and receive wirelessly, ensure they can only communicate over guest or utility-rated network connections.

Third, (or maybe first depending on your approach) ensure your IT security management procedures and policies address IoT. Develop protocols and procedures around the receipt, activation, screening, and management of internet-enabled devices which are consistent with adding any other network-enabled devices. Make sure facility managers know about these protocols and procedures, as building management systems are increasingly the focus of external attacks.

Fourth, train people and ask them to acknowledge the policies you have in place. It’s important that staff, contractors, and visitors understand the implications of connecting any kind of device to any active network in the organisation, and that they don’t do it without permission.

Last, put technology in place to monitor, log, and notify you if there is suspicious activity on your networks. Many organisations are doing this anyway as part and parcel of managing IT security, but this is becoming more important in an IoT world. Logging tools and threat intelligence solutions are the cornerstone here.

While IoT offers many benefits when it comes to productivity, convenience, cost savings, and many more areas, it does open a whole new front when it comes to fighting cyberattacks and protecting organisational assets.

Incident management isn’t too far from most CISOs’ minds in any given day.

If you read the news, any news, you’d be forgiven for thinking incident equates to some kind of catastrophic breach. Well, that is an incident of course, but the reality is that in the IT management world, an incident is any kind of unplanned activity as it relates to IT infrastructure.

It can cover the newsworthy major security breaches, but more often, incidents are equipment failures, corrupted applications, incomplete backups and damaged end user devices. They can also be unauthorised data leaks, theft of equipment, computer viruses, breaches of internet usage policy or the intentional destruction or theft of data.

It’s a bit of a laundry list and you can see why policies and security management frameworks have become critical for dealing with them. Not all incidents are of equal importance to any one organisation and the process for dealing with them will always vary.

The over-arching standard most organisations work to is ISO27001/2. Actually, it’s a whole series of standards under the ISO27000 umbrella with some 45 documents going from high level to very detailed areas – but if you Google it you’ll get the gist.

Give or take a few sub-points, the information security management system (ISMS) standard lays out a framework to assess an incident (What sort of incident is it? How serious is it?), respond to it, eradicate it, restore whatever function was disrupted and, finally, review what happened and how the incident was dealt with to see if there are learnings and improvements to be made.

There are additional standards and regulations that different organisations will lay on top of this more general ISMS approach. For example, if you need to comply with the PCI DSS regulation because you access, store, transmit or manage cardholder data, then you need to run at least an annual incident response test to see how well your systems, policies and processes stand up in the case of a breach.

Likewise, banks and other large financial institutions can be required by regulators to prove their disaster recovery systems will stand up in the event of primary site failure.

An interesting side-note here. Policies also form part of an ITSM system. These are those extra documents you sign when you join an organisation, and cover areas like agreeing that you won’t use company-owned equipment to host Call of Duty games, or you won’t email data to a personal account, or comment about confidential company information on social media. Breaches of policy are also incidents, and the consequences are usually laid out in the agreements you sign on joining a firm.

Logging is good

Logging tools have an important role to play in not only flagging incidents as they happen, but also in providing an audit trail of events leading up to the incident. That might be a technical malfunction or a deliberate attempt to hack into the corporate network.

Logging tools like the ones we offer at Snare have a small footprint, are extremely durable, and can be dialled in to the specific characteristics of different kinds of networks and end-user activities. If you don’t use logging across your systems now, it’s definitely worth considering and will improve your ISMS significantly.

Managed Security Service Partners – where black and white turns grey

Last but not least, it’s worth quickly touching on the additional complexity that comes when your organisation signs up with a managed security service provider (MSSP).

It seems kind of obvious, but make sure you really have a handle on exactly where your responsibilities end and the MSSP’s begin. Often MSSPs will be tasked on edge security, but core security is up to you. Or they’re focused on enterprise systems, but not end user devices.

Make sure you map out exactly who is responsible for what and build detailed service level agreements to suit. It’s an increasingly common story (more so in mid-to-small-sized firms) that a lack of clarity about responsibilities between the parties leads to significant, expensive and sometimes company-ending incidents. Don’t assume the MSSP has you covered unless it’s in writing.

If you want to know more about our logging tools, you can find more information at https://www.snaresolutions.com/resources/ and you can find some great resources on ISO27001 here:

https://www.snaresolutions.com/wp-content/uploads/2018/05/Snare-for-ISO-27001.pdf.

For incident management specifically, NIST publishes a great guide. It’s 70 pages of goodness, not a difficult read, and it contains links to other good references for any security team.

https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-61r2.pdf

It seems like a silly question, but how many companies take the extra steps to know that the millions of lines of code in their solutions don’t have any known vulnerabilities? It’s easy to say your code is secure; it’s completely different to pay an accredited third party to review your applications to ensure they’re free from known vulnerabilities. With this in mind, Snare teamed with CA Veracode to put our Snare agent software through the Veracode Verified program, which reviews the executables and application source, with Veracode putting its own brand reputation behind the result. It is a lengthy process, and the first products to finish were our Snare Windows Agent version 5.1 and Snare Agent Manager v1.1.0, which achieved Veracode VL4 security compliance. VL4 status means there were no Very High, High or Medium risk vulnerabilities in the applications as reviewed by Veracode against the OWASP Top 10 and SANS Top 25 secure coding vulnerabilities. As part of the Verified program we have achieved Verified Standard status.

What exactly goes into being Veracode Verified? It’s a back and forth between us and Veracode: they go through our application, reviewing the code and checking it against a policy based on the OWASP Top 10 and SANS Top 25 known coding vulnerabilities, to provide assurance that the applications did not contain such vulnerabilities at the time of the scan. As part of the program, we are required to rescan on every release or every six months, whichever occurs first, to maintain Verified status. It’s now built into our development and release process, so the Windows Agent and Snare Agent Manager are constantly reviewed. Talk about an extra mile (or kilometer, for those of you on the metric system).

Our competitors haven’t taken this extra step, and while we understand why, it was important to us that our best-selling products are built securely and are free from known vulnerabilities. You can’t go a week anymore without a major breach making headlines, and vulnerabilities can often be found in the most unassuming places. So we went ahead and made sure that we are not only helping you secure your organization, but that we continue to do so with the most secure solutions on the market.

Check out Veracode’s website to learn more about being Verified. 

Check out our page on Snare Agents to learn more about the world’s favorite logging tool.