It was not so long ago that IT asset management consisted of an individual walking around a company with clipboard in hand, putting numbered stickers on computers, printers and other physical assets, then recording each number on a form. That approach was hardly sufficient then, and today it’s absolutely archaic. Daniel Trauner, Director of Security at Axonius, is here to answer some pressing questions CISOs and security teams need answered in order to improve their IT asset management operations.
SC Media: What are some examples of things companies generally get wrong when they’re executing an asset management plan, and how would you recommend they do these tasks better?
Daniel Trauner, Director of Security, Axonius: One of the easiest mistakes to make when centralizing your asset management practice is picking one or two “primary key” values to serve as correlation identifiers without considering the context in which those values are used, or whether they actually make sense as a way to decide that two separate reports of an asset should be de-duplicated against one another.
Unfortunately, there’s no easy solution, but it’s important to avoid false positive correlations at all costs. This may seem counter-intuitive compared with tools such as dynamic security scanners, which often accept a fair number of false positive vulnerability reports in order to minimize the chance of false negatives – i.e., failing to report a valid vulnerability.
If you mistakenly match two unrelated assets based on, say, just a MAC (media access control) address, you can cause confusion for every user of your centralized asset database if it turns out that one of the devices was an unrelated virtual machine using a spoofed MAC. Always err on the side of not correlating things you aren’t sure are really the same device, and take a multi-dimensional, context-based approach to your correlation decisions.
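To make that concrete, here is a minimal Python sketch (purely illustrative, not any vendor’s algorithm) of a multi-dimensional correlation check that refuses to merge two asset reports on the strength of a single shared identifier such as a MAC address:

```python
# Illustrative multi-signal correlation check. The field names and the
# two-signal threshold are hypothetical choices, not a product's schema.
def should_correlate(a: dict, b: dict, min_matches: int = 2) -> bool:
    """Merge two asset reports only if enough independent identifiers agree."""
    signals = ("serial_number", "hostname", "mac_address", "cloud_instance_id")
    matches = sum(
        1 for key in signals
        if a.get(key) and b.get(key) and a[key] == b[key]
    )
    # Err on the side of NOT merging: one shared value (e.g., a spoofed
    # MAC on an unrelated VM) is never sufficient on its own.
    return matches >= min_matches
```

Requiring at least two independent signals to agree trades a few missed merges for a much lower risk of the false positive correlations described above.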
SC: Asset management is a lot of things; one thing it’s not is plug and play. What are some suggestions on how companies can do a better job of monitoring and executing their asset management program?
DT: Automate, automate, automate. Machines are great at doing exactly what they’ve been programmed to do (bugs aside, of course). You should focus on identifying all sources of asset data within your organization that offer some sort of programmatic API (application programming interface) access, and become familiar with those APIs so you can avoid the human error that inevitably creeps in when assets are tracked manually.
Some of the most common and important asset data sources today, such as the major cloud providers, often have extremely advanced, well-documented APIs. This should enable you to programmatically scan your environment for new assets or changes to existing assets, and to focus on identifying whichever types of assets are most relevant for your use cases.
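As an illustration, here is a minimal discovery sketch assuming AWS as the cloud provider and its boto3 Python SDK (the normalized record layout is a hypothetical choice, not any product’s schema):

```python
# Minimal asset-discovery sketch, assuming AWS and the boto3 SDK.
import boto3

def discover_ec2_assets(region="us-east-1"):
    """Enumerate EC2 instances and return normalized asset records."""
    ec2 = boto3.client("ec2", region_name=region)
    assets = []
    # Paginate so large fleets are enumerated completely, not truncated.
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                assets.append({
                    "source": "aws-ec2",
                    "id": instance["InstanceId"],
                    "type": instance["InstanceType"],
                    "state": instance["State"]["Name"],
                    "private_ip": instance.get("PrivateIpAddress"),
                    "first_seen_hint": instance["LaunchTime"].isoformat(),
                })
    return assets
```

Running a job like this on a schedule and diffing its output against the previous run surfaces both brand-new assets and changes to existing ones.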
SC: One of the killers of an asset management program is bad data. What are some tricks companies can use to ensure that they are finding the assets they need to monitor and, at the same time, removing assets from their database that have been decommissioned or removed from service?
DT: It’s always a good idea to track when an asset was first added to your database, as well as when it was “last seen” by any of the automated API integrations you have built to detect new assets. A number of other systems, such as your SIEM, may also be able to provide event-based information to help indicate that an asset was “last seen” or is otherwise still actively used. Especially if you’re automating the detection of new assets via an API, you can use what you know about the business purpose of those assets to automate the “decommissioning” process within your asset database.
For example, you may heavily use cloud-based virtual servers for your CI/CD system, and the entire fleet may autoscale up or down depending on the volume of parallel builds taking place, or on a set autoscaling schedule. Because these machines are constantly started and stopped (and may be used for only a few minutes to hours before being replaced with new machines), you can leverage “last seen” to conclude that any build server in that particular cloud environment older than X days is no longer relevant. And after some set retention period, you can automatically drop such a server from your asset inventory.
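A short Python sketch of that retention logic under the assumptions above (the field names, the role label, and the seven-day threshold standing in for “X days” are all illustrative):

```python
# "Last seen"-based cleanup sketch for short-lived autoscaled build servers.
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(days=7)  # stands in for the "X days" threshold

def prune_stale_build_servers(inventory: list[dict]) -> list[dict]:
    """Drop build servers not seen within the retention window."""
    now = datetime.now(timezone.utc)
    kept = []
    for asset in inventory:
        is_build_server = asset.get("role") == "ci-build-server"
        last_seen = asset["last_seen"]  # aware datetime set by your API integration
        if is_build_server and now - last_seen > STALE_AFTER:
            continue  # decommission: the autoscaler replaced this machine long ago
        kept.append(asset)
    return kept
```

Because the autoscaler, not a human, retires these machines, the same automation that detects them can safely retire their records.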
SC: How often should companies do formal reviews of their asset management program? Are there any tips on how to perform asset reviews?
DT: Early on — especially before the significant automation efforts that spare you from manually entering every new asset into your asset database — you should be performing regular reviews with the team involved in developing your asset database to correct bad data, verify that correlations are correct, and otherwise maintain appropriate data accuracy for all relevant fields, depending on your use cases.
It may help to measure the performance of a particular process that previously relied on asset data scattered across multiple disparate sources, both before and after the asset database becomes part of that process. If the process has become much faster and less error-prone, your asset database is doing its job of lowering the cost of performing that operation. If multiple teams rely on the same centralized asset database for different use cases, this measurement should extend to all of the processes involved.
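As a trivial illustration of that before-and-after comparison (the sample durations are entirely hypothetical):

```python
# Hypothetical before/after durations for one process, e.g. an incident
# responder answering "what is this machine?" during an investigation.
from statistics import mean

before = [45, 60, 38, 52]  # minutes per lookup across disparate sources
after = [5, 7, 4, 6]       # minutes per lookup against the central database

improvement = 1 - mean(after) / mean(before)
print(f"Mean lookup time dropped {improvement:.0%} after adopting the database")
```

The same pattern applies to any process metric you already record; the point is to instrument the process rather than rely on anecdotes.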
SC: As is the case for nearly all big-ticket purchases, it’s essential to have senior management on board for the purchase of resources, staff and consulting. What are some recommendations you have for convincing senior management that asset management resources are essential for the company’s compliance and cybersecurity survival?
DT: Aside from a direct proof-of-concept evaluation demonstrating a measurable performance improvement before and after the use of an accurate asset inventory on specific processes such as security incident response, consider that such a tool has wide applicability across multiple teams outside of security, including those involved in IT operations and compliance.
Multi-team applicability means that investing in an asset inventory early on – even if it is initially owned and managed by a single team – can lead to future growth across the company as new use cases emerge outside the original team. Multiple teams working with the same data will further improve the quality and effectiveness of the system, and lead to significantly lower maintenance costs over the long run vs. managing multiple disparate systems that involve the same data sources and potentially overlapping use cases.
Daniel Trauner is the Director of Security at Axonius – a cybersecurity asset management company – where he leads the implementation of security and IT practices for a distributed and rapidly growing team. Previously, he was the Director of Platform Security at Bugcrowd, where he worked with (and was sometimes a part of) the thousands of security researchers worldwide who collectively attempt to understand, break, and fix anything that companies will let them. Growing up, he was always the kid who had more fun knocking down Lego towers than actually building them. Outside of security, Daniel enjoys reading, writing, collecting art, and trying to solve problems that others consider to be Kobayashi Maru scenarios.