Google IT Cert – Week 29 – Defense in Depth

This is week 5 of Course 5, Defense Against the Digital Dark Arts. This is the 29th week of the IT Support Professional Certificate course from Google and Coursera. Next week looks to involve a series of videos and a final project relating to creating a culture of security awareness within an organization. This week, it’s all about layered defense, or “defense in depth.”

System Hardening

Intro to Defense in Depth

The concept of defense in depth means multiple, overlapping security systems should be deployed to protect IT systems. This creates redundancy in protection, which can help when one system fails. For example, if a firewall is compromised by an attacker, they may then have to find a vulnerability in your authentication system to continue with their attack.

This week we’re bringing all the security we’ve learned so far into a comprehensive body of security strategies.

Disabling Unnecessary Components

Remember zero-day vulnerabilities from earlier in the course: exploits that are not yet publicly known, so no patch exists for them. How do you protect your network from unknown threats?

An attack vector is the method or mechanism used to gain access to a system. This could be a network service or protocol, an email attachment, or user input.

An attack surface is “the sum of all the different attack vectors in a given system.” It is all the possible ways attackers could access your systems, known and unknown. You can never be certain you have accounted for your entire attack surface, and networks of any complexity are almost guaranteed to contain unknown vulnerabilities.

This means it is important to keep the attack surface as small as possible, and a good way to do this is to remove complexity from systems, reducing the possibility for unknown flaws.

Disabling unused services and protocols is a critical practice—if it isn’t necessary, turn it off. This same practice can be applied to access and ACLs—only allow access to users that really need it.

Reducing software deployments is another way of reducing complexity and attack surface. Minimizing the amount of active code on your systems will minimize the amount of exploitable code.

It is just as important to do this on desktops and laptops as it is on servers.

The example they give here is “Telnet access for a managed switch has no business being enabled in a real-world environment.”
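To make that concrete, here is a minimal sketch (not from the course) of auditing what a host actually exposes. It assumes the third-party psutil library is installed and compares listening TCP ports against a hypothetical approved list; anything outside that list is a candidate for being disabled.

```python
# Hedged sketch: audit listening TCP ports against an approved baseline.
# Requires the third-party psutil package (pip install psutil) and may need
# elevated privileges on some platforms. APPROVED_PORTS is a made-up example.
import psutil

APPROVED_PORTS = {22, 443}  # hypothetical: SSH and HTTPS are the only expected services

def unexpected_listeners():
    """Return (port, process name) pairs for listeners outside the baseline."""
    findings = []
    for conn in psutil.net_connections(kind="tcp"):
        if conn.status == psutil.CONN_LISTEN and conn.laddr.port not in APPROVED_PORTS:
            name = psutil.Process(conn.pid).name() if conn.pid else "unknown"
            findings.append((conn.laddr.port, name))
    return findings

if __name__ == "__main__":
    for port, proc in unexpected_listeners():
        print(f"Unexpected service listening on port {port}: {proc}")
```

Running something like this periodically, then disabling whatever shouldn’t be there (Telnet included), keeps the attack surface from quietly growing.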

Defense in depth is all about mitigation, and implementing layers of security. Try thinking about securing a component in the context of “how do we secure this if all the layers above it have failed?”

Host-Based Firewall

Host-based firewalls are an important part of an in-depth defense. Host-based firewalls protect individual hosts in potentially malicious environments, and will also protect from compromised peers inside a network.

Just as with a network-based firewall, host-based firewalls should also be configured for an implicit deny policy.

Minimizing the attack surface is aided by using host-based firewalls, by reducing what an outside attacker can access.

A bastion host is one that has been specifically hardened, typically by restricting allowed connections to specific IP ranges or networks. Bastion hosts are usually exposed to the internet, so they should be carefully locked down, but they can act sort of like a gateway to more sensitive services like domain controllers or authentication servers.

Applications allowed to run on a bastion host are only those that are absolutely necessary.

A host-based firewall configuration will probably include ACLs to allow access from the VPN subnet.
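As an illustration only (not a real firewall, and the addresses are made up), here is a minimal Python sketch of implicit deny: traffic is dropped unless it matches an explicit allow rule, such as one for a hypothetical VPN subnet.

```python
# Hedged sketch of implicit deny: every connection is rejected unless it
# matches an explicit allow rule. Rule fields and addresses are hypothetical.
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

@dataclass
class Rule:
    source: str  # CIDR allowed to connect, e.g. a VPN subnet
    port: int    # destination port the rule permits

ALLOW_RULES = [
    Rule("10.0.8.0/24", 22),   # SSH from the VPN subnet only
    Rule("10.0.8.0/24", 443),  # HTTPS from the VPN subnet only
]

def is_allowed(src_ip: str, dst_port: int) -> bool:
    """Return True only if some allow rule matches; otherwise implicitly deny."""
    for rule in ALLOW_RULES:
        if ip_address(src_ip) in ip_network(rule.source) and dst_port == rule.port:
            return True
    return False  # implicit deny: nothing matched, so the traffic is dropped

print(is_allowed("10.0.8.15", 22))    # True - matches the VPN SSH rule
print(is_allowed("203.0.113.9", 22))  # False - no rule, implicitly denied
```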

Be sure to use logging to monitor systems when using host-based firewalls: if users have admin rights on their machine, they will be able to control the firewall. If possible, disable the ability to deactivate firewalls.

Logging and Auditing

Logs and alerts are a critical part of network defense, and logs must be kept in ways that allow them to be accessed and analyzed to inform admins of how the network behaves. Large companies will have teams dedicated to log auditing and security analysis. Small organizations will rely on the IT team to perform these functions.

Most systems and services running on a host will generate logs of some kind; exactly what gets logged depends on the service’s function and how logging is configured.

Security information and event management (SIEM) systems are essentially centralized logging servers with enhanced analysis tools. A SIEM gathers the available logs from every system and centralizes them for management and security analysis.

Normalization is the process of converting logs from different systems and different formats into one uniform, standardized log structure. The example given is that a firewall log may use a dd/mm/yyyy format for timestamps while the authentication server uses mm/dd/yyyy. Normalization converts this kind of data to one standard across all logging devices and services.
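A minimal sketch of that normalization step, assuming (hypothetically) that the firewall logs dd/mm/yyyy, the auth server logs mm/dd/yyyy, and we standardize both to ISO 8601:

```python
# Hedged sketch: normalize timestamps from two hypothetical log sources
# (dd/mm/yyyy vs mm/dd/yyyy) into a single ISO 8601 format.
from datetime import datetime

SOURCE_FORMATS = {
    "firewall": "%d/%m/%Y %H:%M:%S",     # e.g. "31/01/2024 14:05:00"
    "auth_server": "%m/%d/%Y %H:%M:%S",  # e.g. "01/31/2024 14:05:00"
}

def normalize_timestamp(source: str, raw: str) -> str:
    """Parse a raw timestamp using its source's format and emit ISO 8601."""
    parsed = datetime.strptime(raw, SOURCE_FORMATS[source])
    return parsed.isoformat()

print(normalize_timestamp("firewall", "31/01/2024 14:05:00"))     # 2024-01-31T14:05:00
print(normalize_timestamp("auth_server", "01/31/2024 14:05:00"))  # 2024-01-31T14:05:00
```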

Logging too much data will be expensive to store and hard to sort through. Logging too little data is also bad because you may be missing critical data.

Using a centralized system for logging and analysis makes it easier to secure logs, which are often attacked after a breach. Attackers will try to alter logs to help cover their tracks and obfuscate attack vectors.

Once logs are centralized and standardized, you can write rules to automate alerts for types of events and behaviors. If you suddenly see a number of Windows machines attempting to connect to one server outside the network, this could indicate a malware attack.
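Once logs are normalized, a rule like that example can be a few lines of code. This sketch (record fields and the threshold are hypothetical) flags the case where many internal hosts suddenly connect to the same external destination:

```python
# Hedged sketch: flag an external destination contacted by many internal hosts.
# The records and the threshold are made-up examples of normalized log entries.
from collections import defaultdict

THRESHOLD = 3  # alert if this many distinct internal hosts hit one external IP

records = [
    {"src": "10.0.1.11", "dst": "203.0.113.50"},
    {"src": "10.0.1.12", "dst": "203.0.113.50"},
    {"src": "10.0.1.13", "dst": "203.0.113.50"},
    {"src": "10.0.1.14", "dst": "198.51.100.7"},
]

def suspicious_destinations(log_records):
    """Group internal sources by external destination and apply the threshold."""
    sources_by_dst = defaultdict(set)
    for rec in log_records:
        sources_by_dst[rec["dst"]].add(rec["src"])
    return {dst: srcs for dst, srcs in sources_by_dst.items() if len(srcs) >= THRESHOLD}

for dst, srcs in suspicious_destinations(records).items():
    print(f"ALERT: {len(srcs)} internal hosts connected to {dst}")
```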

Retention of logs will depend on your organization, network configuration and security situation. Retention also dictates your storage needs, another consideration for the organization.

Reading: Logging and Auditing

Read here about the open source logging solution rsyslog, and the enterprise solutions Splunk, IBM QRadar, and RSA Security Analytics.

Antimalware Protection

Malware is becoming more and more prevalent and sophisticated. Obviously, every IT professional needs an understanding of any antimalware solutions used in their organization. Many systems would be compromised in a matter of minutes if they were connected to the internet without any safeguard in place.

Most systems have some kind of basic firewall functions enabled, but there is a vast and growing number of attack traffic and vectors coursing through the internet.

Antivirus software has been around for a long time, but industry best-practices usually demand protections stronger than basic antivirus. There are not that many computer viruses circulating anymore, as they have been displaced by complex, often revenue-generating malware such as ransomware and cryptocurrency miners.

Antivirus programs are signature-based, meaning that they check potential threats against a database of known malware to see if it has similar features, such as a unique file hash or a file known to be associated with a prior infection.

Antivirus software will monitor things like new file creation or file modification, in order to detect behaviors that match a known malware signature. If any signatures are detected, the software will attempt to block the behavior or, if the infection has already happened, quarantine the infected files.
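In the simplest case, a “signature” can just be the hash of a known-bad file. A minimal sketch of that idea (the hash database here is an empty placeholder, not real signature data):

```python
# Hedged sketch: the simplest form of signature matching - compare a file's
# SHA-256 hash against a set of known-bad hashes. Real antivirus engines use
# far richer signatures and behavioral heuristics; this only shows the idea.
import hashlib
from pathlib import Path

KNOWN_BAD_HASHES = set()  # placeholder: would be populated from a vendor feed

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan(path: Path) -> bool:
    """Return True if the file's hash matches a known-bad signature."""
    return sha256_of(path) in KNOWN_BAD_HASHES
```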

The first problem with this process is the dependence on the virus signatures, and the second is that the database relies on the antivirus vendor discovering new threats quickly.

There is also the additional problem of the antivirus software itself becoming part of the attack surface. In 2012, vulnerabilities of this type were found in the Sophos antivirus engine. There is a reading about it below.

These problems aside, antivirus is still useful as it protects against the most common threats coming in over the internet.

Antivirus programs work on a blacklisting principle—blocking what is bad while allowing that which is good (theoretically).

Binary whitelisting software, however, only allows known “good” software to perform actions on the system. Everything is blocked except those programs that are on the whitelist. This is like the implicit-deny rule in ACLs.

There are a few ways binary whitelisting software can check whether software is legitimate (a minimal sketch of the hash-based approach follows the list).

  • Using the unique cryptographic hash of a binary. Used to whitelist individual executables.
  • Software signing certificates. Using the public key to check the validity of a certificate. Whitelisting software can be configured to trust a vendor’s certificates.
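Here is the sketch of the hash-based approach promised above. The allowlist contents are hypothetical; it simply inverts the antivirus example into an implicit deny.

```python
# Hedged sketch: binary whitelisting by cryptographic hash. The default is
# deny; only binaries whose SHA-256 hash appears in the allowlist may run.
import hashlib
from pathlib import Path

ALLOWED_HASHES = {
    # hypothetical entries, e.g. sha256_of(Path("/usr/bin/approved-tool"))
}

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def may_execute(path: Path) -> bool:
    """Implicit deny: allow execution only for whitelisted hashes."""
    return sha256_of(path) in ALLOWED_HASHES
```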

If an attacker is able to compromise a vendor’s code-signing certificates or signing process, your systems would still happily verify and whitelist any malware the attacker chose to sign and pass on to your organization.

Reading: Antimalware Protection

How long for your unprotected computer to become compromised?

Questioning traditional antivirus software.

The Sophos AV compromise.

Attackers bypassing whitelisting defenses.

Disk Encryption

Here is a more in-depth look at disk encryption. Full-disk encryption (FDE) automatically transforms all the data on a disk rendering it unreadable unless it is first unlocked with an encryption key. IT support may involve the design, deployment and troubleshooting of FDE systems.

FDE is a powerful tool in hardening systems against attack, and is especially useful in protecting mobile devices. FDE helps protect systems and devices even if an attacker has physical access to the device.

Machines with an FDE system still require access to some unencrypted files in order to boot, like the kernel and bootloader, which are kept in an unencrypted partition. This is a weak point in the system: an attacker with access to that partition could replace those boot files with malicious ones.

The Secure Boot Protocol, part of the UEFI specification, uses public key cryptography to secure the boot process and the files used. Secure boot uses a platform key, written to the firmware, to verify the boot files. Only properly signed and trusted files will be used for the boot process.
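Real Secure Boot works with X.509 certificates and UEFI signature databases, but the core verify-before-trust step looks roughly like this sketch, which uses the third-party cryptography library and Ed25519 keys purely for illustration:

```python
# Hedged sketch: verify a boot image's signature with a "platform key" before
# using it. Illustrative only; actual Secure Boot uses UEFI signature
# databases and X.509 certificates, not raw Ed25519 keys.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Stand-in for the platform key pair: private half held by the vendor,
# public half written to the firmware.
vendor_key = Ed25519PrivateKey.generate()
platform_public_key = vendor_key.public_key()

boot_image = b"...bootloader bytes..."
signature = vendor_key.sign(boot_image)  # done by the vendor at build time

def verify_boot_image(image: bytes, sig: bytes) -> bool:
    """Return True only if the image was signed by the trusted platform key."""
    try:
        platform_public_key.verify(sig, image)
        return True
    except InvalidSignature:
        return False

assert verify_boot_image(boot_image, signature)             # trusted image boots
assert not verify_boot_image(b"tampered image", signature)  # tampering is refused
```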

Microsoft’s FDE solution is BitLocker, and Apple’s is FileVault 2. There are many third-party FDE solutions as well.

FDE typically uses a layered secret key system: the user’s password protects a user key, which in turn is used to gain access to the master key that actually encrypts the disk. FDE systems that rely on passwords must be implemented with procedures to keep passwords safe and include plans in case a password is forgotten. Without the password, the data on the disk is no longer accessible.
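A minimal sketch of that key layering, using PBKDF2 to derive a key from the password and AES-GCM (via the third-party cryptography library) to wrap the master key. Real FDE implementations differ in many details; this only shows why losing the password means losing the data unless a key escrow exists.

```python
# Hedged sketch of an FDE-style key hierarchy: a password-derived key wraps
# (encrypts) the master key that would actually encrypt the disk.
import hashlib
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_user_key(password: str, salt: bytes) -> bytes:
    # PBKDF2-HMAC-SHA256; the iteration count is an arbitrary example value.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000, dklen=32)

# Setup: generate a random master key and wrap it with the password-derived key.
salt = os.urandom(16)
master_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
wrapped = AESGCM(derive_user_key("correct horse battery", salt)).encrypt(nonce, master_key, None)

def unlock(password: str) -> bytes:
    """Recover the master key; a wrong password raises an exception instead."""
    return AESGCM(derive_user_key(password, salt)).decrypt(nonce, wrapped, None)

assert unlock("correct horse battery") == master_key
```

A key escrow, in these terms, is just a second, securely stored copy of the master key that does not depend on the user’s password.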

Many enterprise FDE systems include a key escrow function, which securely stores encryption keys for authorized users to access later. Sysadmins can use this system to regain access to disks for which passwords have been lost.

Home directory or file-based encryption differs from FDE in that it only protects specified files or directories. This is less secure, but offers a convenience/usability trade-off. Attackers can still compromise system files that are not normally encrypted on file-based encrypted systems.

Reading: Disk Encryption

Check out BitLocker and FileVault 2, the Linux FDE system dm-crypt, as well as PGP, TrueCrypt and VeraCrypt.

>>>quiz

Heather, Self Learning

One must actively learn new things, says Heather. Thank you, Heather.


Application Hardening

Software Patch Management

Keeping software updated is extremely important and is a large part of any IT support team’s responsibility.

Software bugs and vulnerabilities are exploited by attackers in the time between discovery and patching. This means that patching software with security updates is critical to keeping systems secure.

The Heartbleed bug was disclosed in 2014 and was a critical flaw at the core of the open source TLS library OpenSSL. It allowed an attacker to compose a malformed “heartbeat” message containing an unverified length value; the vulnerable library would respond with that many bytes read from the target system’s memory, which were then transmitted back to the attacking system in the “heartbeat” response. This meant an attacker might be able to recover TLS session keys as well as login credentials.

This SSL library was widely used at the time, and while it was possible to recompile the software with the “heartbeat” functionality disabled, this was an impractical solution for most users. Waiting for a patch was necessary, which allowed attacks to proliferate.

Companies must have strict policies in place for deployment, patching and management of software. Platforms for software management allow sysadmins to see the state of their software across all systems.
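At its simplest, that visibility boils down to comparing an inventory against a minimum-version policy. A toy sketch with entirely made-up hosts, packages and versions:

```python
# Hedged sketch: flag hosts running software below a required minimum version.
# Inventory data and the version policy are hypothetical; real management
# platforms collect this automatically from agents on each host.

MINIMUM_VERSIONS = {"openssl": (3, 0, 13), "browser": (122, 0, 0)}

inventory = {
    "host-a": {"openssl": (3, 0, 11), "browser": (122, 0, 1)},
    "host-b": {"openssl": (3, 0, 14), "browser": (121, 5, 0)},
}

def out_of_date(inventory, policy):
    """Yield (host, package, installed, required) for anything below policy."""
    for host, packages in inventory.items():
        for package, installed in packages.items():
            required = policy.get(package)
            if required and installed < required:
                yield host, package, installed, required

for host, pkg, have, need in out_of_date(inventory, MINIMUM_VERSIONS):
    print(f"{host}: {pkg} {have} is below required {need}")
```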

Every device, from computers to routers to printers, can have security vulnerabilities, so firmware needs patching too. Some devices receive their own security patches, and it is also common for OS vendors to push out updates that address bugs in the firmware or drivers of attached devices. It is important to proceed carefully with these updates, as they can introduce new bugs or break other functionality.

Application Policies

Software is a large part of the attack surface in any system. Having application policies in place is critical to mitigating that attack surface. Policies will create boundaries for the users of an application, as well as help educate users in safer usage of that software.

A common policy, and often a strict requirement, is to only allow the use of the latest version of a piece of software. This will help with users who often consider installing updates a hassle, as it may require restarting the application.

Another common policy is to disallow classes of software that are considered problematic, such as file-sharing applications, as these are closely associated with piracy and malware.

Understanding what users need to do their jobs will inform software policies. A more secure environment will be established by recommending specific software for specific tasks.

Any binary whitelisting program deployment will likely require a business use case justification for software titles. That means you’ll have to make a pretty good argument for why Fortnite needs to be on your workstation.

Browser extensions should also be included in consideration of software use policies. Many workflows exist fully in web applications, so extensions that require full access to websites can be a major risk to security, as extension developers can view and modify any data on the site.

>>>quiz

Graded Assessments

>>>quiz



Case Study: Chrome OS

The Security Principles of Chrome OS

Chrome OS is an example of defense-in-depth security principles in practice. If one security layer is compromised, the others are still in effect.

Chrome OS runs on a Linux platform. The user primarily interacts with the Chrome browser and only has “user” permissions on the system. Since users do not have any admin-level permissions, the OS itself is protected by default from anything running as the user.

Chrome OS features automatic updates. Once an update is available, it is downloaded and installed in the background without any interaction with the user. The user will be prompted to restart the machine, and the update is implemented when the system is restarted.

Sandboxing means segregating each of the system services as well as tabs in the Chrome browser. They all run in individual processes, so that if a malicious site attacks a web session it will only affect that tab, not any other processes running on the system.

Recovery Mode is automatically started when the system detects file corruption or potential tampering by an attacker.

Powerwash allows the user to quickly reset the machine to its default factory settings. Because all the user data is stored in the cloud, this process removes all user files—downloads, documents, etc., stored on the hard drive.

Chrome OS Verified Boot

Similar to Secure Boot, Verified Boot checks the integrity of the OS being used. If the OS has been compromised, the system will detect it and refuse to boot. Verified Boot will also stop out-of-date versions of the OS from booting.

Here’s how it works:

Read-only firmware is installed at the factory and cannot be changed without physically altering the machine, which prevents remote attacks on it. This firmware contains the cryptographic keys needed to verify that the next component in the boot chain has been signed by a trusted source, and it includes only the minimum amount of code needed to do that job; each component in the chain then verifies the one that follows it.

Read-write firmware can be updated automatically when needed, and is verified by the read-only firmware. If it cannot be verified, the latest backup version is tried instead; if the backup cannot be verified either, recovery mode is invoked.

When the read-write firmware is verified, it is tasked with verifying the actual OS to be used, by confirming that the kernel is from a trusted source.

When the kernel is executed, it verifies the integrity of the root file system, which contains the actual OS system installation and any locally stored user data.

Chrome OS stores the root file system and the kernel on separate pairs of hard drive partitions. Updates are downloaded to the partition not currently in use. When the system is rebooted, the update is verified and that partition is used to boot the system.
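A toy sketch of that A/B partition scheme (all names are hypothetical, and a simple hash stands in for the verified-boot signature check): updates land on the inactive slot, and the system only switches slots once the staged image verifies.

```python
# Hedged sketch of A/B updates: write the update to the inactive slot,
# verify it, and switch the active slot on reboot. If verification fails,
# the system keeps booting the known-good slot.
import hashlib

slots = {"A": b"current os image", "B": b""}
active_slot = "A"

def stage_update(image: bytes) -> str:
    """Write the new image to whichever slot is not currently booted."""
    inactive = "B" if active_slot == "A" else "A"
    slots[inactive] = image
    return inactive

def reboot_into(staged_slot: str, expected_sha256: str) -> str:
    """Switch slots only if the staged image matches its expected hash."""
    global active_slot
    if hashlib.sha256(slots[staged_slot]).hexdigest() == expected_sha256:
        active_slot = staged_slot
    return active_slot  # unchanged if verification failed

new_image = b"updated os image"
staged = stage_update(new_image)
print(reboot_into(staged, hashlib.sha256(new_image).hexdigest()))  # "B"
```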

Chrome OS Data Encryption

Most user data in Chrome OS is stored in the cloud, but there is some local data, including downloads, bookmarks, cookies and caches. This data is encrypted using the user’s password and the Trusted Platform Module (TPM), a physical cryptographic device used by the OS to store cryptographic keys.

Newer versions of Chrome OS refer to the TPM as H1.

When a user logs into Chrome OS for the first time, the user’s password is used along with the H1 device to create an encrypted directory for that user’s data, called a vault. To decrypt the data in this directory, the specific machine must boot to Chrome OS and the user’s password must be used to log in.

If the Powerwash feature is activated the keys in the TPM (or H1) are wiped along with the rest of the data on the system. This makes it impossible for an attacker to recover information from the hard drive.

According to Google, Chrome OS is a great OS for security!

>>>Formative Quiz: Chrome OS

This quiz, though “formative” was 5 questions. Easy!


