I was debating whether or not to publish this one. Upgrading FirePOWER from 5.3.x to 5.4.x is perhaps the trickiest of all upgrades I have ever done. Now, wait a sec, somebody will say: upgrade the Defense Center and then upgrade the SFR modules. How tough can that be? Well, conceptually, that’s exactly how it’s done, but we have to read a ton of documents in order to get it all done right. We have to take care of versions for both the DC and the SFRs, as well as the ASAs, free disk space, hypervisor versions, upgrade paths and so on. What bothers me most is that the upgrade documentation repeatedly contains sentences like “If something goes wrong, please contact Support.” I don’t like that, but the upgrade needs to be done, so let’s begin.
Our current version is 5.3.1, which can be verified by going to Help->About in our Defense Center:
Our SFR modules should be running about the same version. We verify this in the following portion of the DC: Devices->Device Management->Device->System:
Our task in this blog is to upgrade the DC to 5.4.1.x and the SFR modules to 5.4.0.x. These are the latest available versions at the time of writing. We will do an upgrade to 6.x later, as soon as that version becomes available.
Here is what we are going to do:
- Make sure that there are no health issues raised within the DC
- Reapply our policies
- Transfer our 5.3.x backups to external storage, because the upgrade will delete them
- Download required upgrade scripts and verify check sums
- Plan the upgrade for off-peak hours
- Transfer our upgrade files to the DC
- Read as many documents as possible on the topic
- Upgrade our DC to 5.4.0
- Upgrade the DC to 5.4.1.x
- Upgrade SFR modules to 5.4.0
- Upgrade modules to 5.4.0.x
Read the release notes, don’t be lazy!
I’m not kidding! This blog is a summary of the upgrade process, and by following it the upgrade should go smoothly. After all, it has been tested in both lab and production environments. However, there is plenty that can go wrong: version mismatches, health issues, unsupported browsers, plugins and so on. Always have a “Plan B” prepared: what to do if something goes wrong. The upgrade process itself is not that tough, once the preparation is done as it should be.
Taking care of health issues
Under the Health menu, we have to make sure that there are no issues: no warnings, errors or criticals. We do not proceed until all is green.
Here we have two critical events, but they turned out to be false positives. In this specific case, the DC showed unusually high memory usage on two SFR modules, but after investigation it turned out to be a bug in SFR version 5.3.1, which reported the wrong memory usage to the DC; the actual memory allocation was just fine. So we still have all green, although we have a critical reported by the DC. Again, don’t start the process until all warnings/errors/criticals are corrected.
Reapply our policies
Because the upgrade process will reboot the DC and all modules, we need to make sure that our policies are applied, so we don’t lose any changes we made. We should reapply policies after each upgrade step.
Transfer our backups to external storage
The migration process will delete all 5.3.x backups from the DC, so we have to transfer our backups to external location. Here we can read how to do a backup and transfer it to a safe external location.
Download packages and verify checksums
We have to have a valid subscription in order to download needed packages. These are actual shell scripts that will be executed on the DC or SFR modules. We need the following packages:
The first one is the DC upgrade from 5.3.1 to 5.4.0, and the second one is a patch that moves our basic 5.4.0 version to 5.4.1.x. The third one is the upgrade for the SFR module from 5.3.1 to 5.4.0, and finally, the last one moves the SFR version from 5.4.0 to 5.4.0.x. We need to verify the checksum of each of them, because we don’t want to break our upgrade process with a file that was corrupted during download. There are many tools available for calculating hashes. On Windows we can use the FCIV tool, for example; it has to be downloaded and installed. On Linux, the tools are already installed. From the Cisco site, we copy the hash values, for example SHA512, and then, once the file is downloaded, we calculate the hash ourselves. For example:
$ sha512sum Sourcefire_3D_Defense_Center_S3_Upgrade-5.4.0-763.sh
We verify the calculated sum against one we copied from Cisco site. If we have a match, we can proceed. We do this for all packages.
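Eyeballing long SHA-512 strings is error-prone, so the comparison can be scripted. A minimal sketch (the `verify_sum` helper is my own, not a Cisco tool; the file name and hash are placeholders):

```shell
# verify_sum FILE EXPECTED_SHA512
# Recomputes the SHA-512 of FILE and compares it with the value copied
# from the Cisco download page; returns non-zero on a mismatch.
verify_sum() {
    actual=$(sha512sum "$1" | awk '{print $1}')
    if [ "$actual" = "$2" ]; then
        echo "OK: $1"
    else
        echo "MISMATCH: $1 - download the file again" >&2
        return 1
    fi
}
```

Usage would then be `verify_sum Sourcefire_3D_Defense_Center_S3_Upgrade-5.4.0-763.sh "<hash copied from Cisco site>"`, repeated for each package.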
After all files are uploaded, we can see them listed in the DC:
Upgrade the Defense Center
This is where the action begins. Before we go to 5.4.1.x, we need to go to 5.4.0 first. In our scenario, we have a virtual DC. There are no special requirements as far as virtualization is concerned; supported platforms are VMware ESXi 5.1 and 5.5. Besides this, on the DC we have to have 300MB of free disk space on the / partition and at least 5.5GB of free disk space on the /Volume partition. This requirement is for upgrading the DC. For the SFR module upgrade, we need 100MB of free space on the SFR’s / partition and 3.5GB of free space on its /Volume partition. If we are upgrading SFR modules from the DC instead of the CLI, we need an additional 1GB of free space on the /Volume partition of the DC. We can verify our DC disk space status by going to Dashboards->Summary Dashboard->Status->Disk Usage:
Here is one thing for consideration: although this is a virtual platform, I didn’t find anything about making a snapshot of the DC virtual machine before the upgrade, just in case we are forced to revert if the upgrade fails. Perhaps this can be done, but regarding failed DC upgrades, the release notes say:
“If the update fails for any reason, the page displays an error message indicating the time and date of the failure, which script was running when the update failed, and instructions on how to contact Support. Do not restart the update. Caution: If you encounter any other issue with the update (for example, if a manual refresh of the Update Status page shows no progress for several minutes), do not restart the update. Instead, contact Support.”
So, since there is nothing about recovering from a failed update using snapshots, I would not try using them.
To initiate the upgrade, we go to System->Updates and click the update icon next to the update we are about to start:
The upgrade of the DC will reboot it. Before that, at some stage, we will be logged out, and when we log back in, we can track the upgrade process. It is important to note that the upgrade and restart of the DC will not interrupt traffic flow through the SFR modules; we just cannot manage the devices while the DC is upgrading. Once the upgrade starts, it will take some time to complete. Cisco does not provide any estimates, because the speed of the process depends on the hardware platform the DC runs on. Before we actually click that button, we have to keep in mind that we cannot roll back from 5.4.x to 5.3.x, because the upgrade process deletes all uninstaller scripts. If we want to go back, we have to (yet again) contact Support. Once the upgrade starts, we can monitor it but must not interrupt it! Unless we power off the virtual machine, which is a very, very bad idea! Let’s click the button…
When we do so, we select the DC we want to upgrade and click Install:
When the upgrade process begins, for a while we can see the process under running tasks:
At some point, the browser session will end and we have to log back in. When we do, we can track the update status:
By clicking “show log for current script” we can see a log for the upgrade process:
As we can see, the process really takes some time. So, let’s grab a cup of coffee …
Now, that’s what I’m talking about! Almost two hours:
Now the DC will reboot. We should clear our browser cache and restart the web session. When we log into our new DC, we need to accept the EULA. After that, let’s verify our DC version. Help->About produces the following output:
Now we may need to apply new intrusion rules and install the latest VDB and GeoDB. We may not be required to do so if these are already up to date. If we do, the rules are updated under System->Updates->Rule Updates.
The VDB is updated from System->Updates->Product Updates.
The GeoDB is updated from System->Updates->Geolocation Updates.
Finally, we need to reapply our policies to SFR devices. We already know how to do that.
Now we need to verify the upgrade process. Check the versions, health status, traffic logging, communication to SFR modules and so on…
This completes our first major task: we have upgraded the DC to version 5.4.0. Before we proceed with the SFR module upgrade, we are going to repeat the whole DC upgrade process, but this time going from 5.4.0 to 5.4.1.x. The process is almost the same, but instead of selecting the upgrade file, for example
we should choose the patch file, for example
This file is listed as a patch within the DC, with the appropriate version number:
Upgrade SFR modules
At this point we have our DC at version 5.4.1.x, but our modules are still at 5.3.1. We need to upgrade the modules to 5.4.0 first, and then to 5.4.0.x. In order to upgrade our modules to 5.4.x, the ASA software must be at least 9.3.1. Disk space requirements for module upgrades can be found in the DC upgrade section. Again, don’t be lazy and read the release notes for the versions that are about to be installed!
We have to keep in mind that during the upgrade, the SFR modules will reboot. This can cause a traffic flow interruption. With the ASA software modules, we have two options when sending traffic to the module:
- sfr fail-open
- sfr fail-close
The “sfr fail-open” option allows traffic to flow even if the module is down for some reason, such as an upgrade. With this option, if the module’s status is up, the traffic is sent to it; if the module is in any other state, the traffic won’t be sent to the module, but will instead pass through the ASA without inspection. With “sfr fail-close”, the traffic will be dropped while the module is down or rebooting.
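On the ASA, this choice lives in the service policy that redirects traffic to the module. A hedged example (the class-map name is illustrative; your existing policy names will differ):

```
class-map SFR_TRAFFIC
 match any
policy-map global_policy
 class SFR_TRAFFIC
  sfr fail-open
!
! To drop traffic instead while the module is down or rebooting, use:
!  sfr fail-close
```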
The above applies to the data plane, which means the traffic itself. The module also has a management plane. As we may remember from when we deployed these software modules, they share the Management interface with the physical ASA box. When the module reboots, we will not be able to access it for management or monitoring.
This is what we are about to do. First, we will monitor a simple ICMP echo request going through the ASA/SFR while upgrading the module from 5.3.1 to 5.4.0 with “sfr fail-open“. Then we will do the upgrade from 5.4.0 to 5.4.0.x with “sfr fail-close” and compare the results. In the first case, we lost only two pings, perhaps because of a CPU spike or something like that. I believe that no sessions were dropped, but I would not count on that in a production environment. In the second case, the traffic was dropped the whole time the module was upgrading.
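To see exactly when the replies stopped and resumed, it helps to timestamp the ping output. A small helper for that (my own sketch; the target host in the usage line is an example address):

```shell
# stamp: prefix every line read from stdin with the current time, so gaps
# in a ping log show precisely when traffic stopped during the module reboot.
stamp() {
    while IFS= read -r line; do
        printf '%s %s\n' "$(date '+%H:%M:%S')" "$line"
    done
}
```

Usage: `ping 192.0.2.10 | stamp | tee sfr-upgrade-ping.log`, started before clicking Install and stopped after the module is back up.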
We trigger the update process the same way we did with the DC: by selecting the appropriate update, clicking the install icon and selecting the desired module:
We have the option to select the module we want to upgrade:
It would be nice if we had at least one test module, so we can do the upgrade on that one. After we confirm that the upgrade was successful, we can then apply the upgrade to some or all other modules.
We have to confirm the action:
Now we wait and watch the progress:
During the module upgrade, no activities should be performed on the DC except for monitoring the upgrade process.
Fine. We have successfully upgraded our test module and now it’s time to go with the production upgrade. If we have a single ASA/SFR, the process is exactly the same. As we saw, we missed only two pings, and I suppose established connections won’t suffer if we are in fail-open mode. But Cisco says that we can expect traffic interruption, so we may need to plan for downtime. The estimated SFR module upgrade time is about 45 minutes, so we can expect the downtime to last close to that.
If we have an Active/Standby failover pair (most probably Active/Active works too, but I did not try that), then we should not have any downtime. With A/S failover, the SFR module is treated like any other interface. This means that we should first upgrade the module inside the standby unit. During that module upgrade, the active unit will pass the traffic, as usual. After the upgrade is successful, we initiate a manual failover, so the standby unit becomes active and starts passing the traffic. Then we upgrade the other module. After this upgrade is done, we can fail back to the previously active unit. This step is optional, though.
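On the CLI, the manual switchover in the steps above looks roughly like this (illustrative only; run the command on the unit that should take over):

```
! On the standby unit, once its SFR module is upgraded:
failover active
! This unit becomes active and passes traffic while the peer's
! module upgrades. Optionally, after the second module is done,
! run "failover active" on the originally active unit to fail back.
```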
Once we have upgraded our modules to 5.4.0, we repeat the same steps in order to do the upgrade to version 5.4.0.x.
Of course, we should not forget to reapply our policies after each upgrade!
That’s all for now. Until the 6.0 version, stay safe!
Thanks for reading.