tl;dr

If you use App Volumes and FSLogix, you need to add exclusions for FSLogix to your App Volumes snapvol.cfg file. Read this post written by Carlo Costanzo for the exclusions to add to your snapvol.cfg. If you’re interested in the gritty details, read on…

The Problem

After upgrading to FSLogix 2210 hotfix 4 (2.9.8884.27471), users started reporting issues when attempting to log into our environment. We’re using Horizon instant clones with FSLogix ODFC containers, DEM for the rest of the profile settings, and App Volumes to deliver applications. The underlying issue was that the previous session never completed the logoff process, so when users attempted to start a new Horizon session, they were presented with an FSLogix error that prevented them from logging in:

FSLogix Error Message

The FSLogix Cloud Cache log file showed entries like this:

[14:07:45.665][tid:000013c0.000019ec][ERROR:00000000] [Provider 2: 17680790259282527896] Unable to open write cache file: TimedWriteCache.cpp(117): [WCODE: 0x00000002] Failed to open data file: C:\ProgramData\FSLogix\Cache\user01_S-1-5-21-XXXXXXXXX-XXXXXXXXX-XXXXXXXXXXX-XXXXXXX\17680790259282527896.0

[14:07:45.665][tid:000013c0.000019ec][ERROR:00000000] [Provider 1: 2449361224450135397] Unable to open write cache file: TimedWriteCache.cpp(117): [WCODE: 0x00000002] Failed to open data file: C:\ProgramData\FSLogix\Cache\user01_S-1-5-21-XXXXXXXXX-XXXXXXXXX-XXXXXXXXXXX-XXXXXXX\2449361224450135397.0

Opening a console to the VM in vSphere showed it stuck on the Windows “Signing Out” screen. After speaking with others in the World of EUC Slack, it appears this issue can also manifest as extremely slow logoffs (30-45 minutes) rather than the permanently “hung” logoffs we experienced.

The Solution

After seven months of working the case with Microsoft Support, a solution finally emerged: FSLogix exclusions need to be added to your App Volumes snapvol.cfg file. This is the email from Microsoft (lightly edited for length and clarity):

When I go through the Procmon trace, I pick one specific file and follow what happens to it to check the timing, etc. During this inspection I noticed that the path suddenly changed from:

C:\ProgramData\FSLogix\Cache\user01_S-1-5-21-XXXXXXXXX-XXXXXXXXX-XXXXXXXXXXX-XXXXXXX\17680790259282527896.0

To:

C:\{00000000-0000-0000-0000-000000000000}\SVROOT\ProgramData\FSLogix\Cache\user01_S-1-5-21-XXXXXXXXX-XXXXXXXXX-XXXXXXXXXXX-XXXXXXX\17680790259282527896.0

The path is valid, and after some time it changes again… At first I had no idea what the new path was or where it came from, but it turns out this path comes from App Volumes (formerly VMware, now Omnissa). As mentioned, we did not catch the issue here, but with that knowledge I checked the dump again… As I shared before, the file in the FSLogix Cache was indeed missing.

So it’s clear now that App Volumes is causing this issue. I recommend configuring App Volumes to exclude the FSLogix components:

  • exclude_path=\Users
  • exclude_path=\Programdata\FSLogix\
  • exclude_process_path=\Program Files\FSLogix\Apps
  • exclude_process_name=frxcontext.exe
  • exclude_process_name=frxshell.exe
  • exclude_process_name=frxsvc.exe
  • exclude_process_name=frxccds.exe
  • exclude_process_name=frx.exe
  • exclude_process_name=ConfigurationTool.exe
  • exclude_process_name=frxtray.exe
  • exclude_registry=\REGISTRY\MACHINE\SOFTWARE\FSLogix
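For reference, these are plain one-per-line directives in the snapvol.cfg file itself, so the appended block ends up looking something like the sketch below (the # comment line is only there for readability, and assumes snapvol.cfg accepts comments the way the stock file’s existing # lines do):

# FSLogix exclusions (per Microsoft Support / Carlo Costanzo)
exclude_path=\Users
exclude_path=\Programdata\FSLogix\
exclude_process_path=\Program Files\FSLogix\Apps
exclude_process_name=frxcontext.exe
exclude_process_name=frxshell.exe
exclude_process_name=frxsvc.exe
exclude_process_name=frxccds.exe
exclude_process_name=frx.exe
exclude_process_name=ConfigurationTool.exe
exclude_process_name=frxtray.exe
exclude_registry=\REGISTRY\MACHINE\SOFTWARE\FSLogix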

Apparently, Microsoft Support found this guidance in a blog post Carlo Costanzo wrote on his blog, vCloudInfo, back in 2019. Bizarrely, it does not appear to be officially documented by either Microsoft or Omnissa. The source post mentions that recapturing all existing App Volumes packages is necessary, although I did not find that to be the case in our environment, perhaps because FSLogix is not deployed to our application capture VMs.

Again, this did not occur prior to the FSLogix 2210 HF4 release. I have not verified whether the issue still exists in the latest FSLogix release, 25.02.

To resolve this issue, update your App Volumes snapvol.cfg with the FSLogix exclusions outlined in Carlo’s blog and push a new image with the update. You should no longer be able to recreate the issue when users log off.
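If you want to sanity-check the golden image before pushing it, a short Python sketch like the one below can confirm that every exclusion actually made it into snapvol.cfg. The file path is an assumption, so point it at wherever snapvol.cfg lives on your agent or packaging machine, and treat this as a convenience check rather than an official tool:

from pathlib import Path

# Assumed snapvol.cfg location -- adjust for your App Volumes agent/template.
SNAPVOL_CFG = Path(r"C:\Program Files (x86)\CloudVolumes\Agent\Config\snapvol.cfg")

# The FSLogix exclusions from the Microsoft email / Carlo's post.
REQUIRED = [
    "exclude_path=\\Users",
    "exclude_path=\\Programdata\\FSLogix\\",
    "exclude_process_path=\\Program Files\\FSLogix\\Apps",
    "exclude_process_name=frxcontext.exe",
    "exclude_process_name=frxshell.exe",
    "exclude_process_name=frxsvc.exe",
    "exclude_process_name=frxccds.exe",
    "exclude_process_name=frx.exe",
    "exclude_process_name=ConfigurationTool.exe",
    "exclude_process_name=frxtray.exe",
    "exclude_registry=\\REGISTRY\\MACHINE\\SOFTWARE\\FSLogix",
]

# Compare case-insensitively against the lines already present in the file.
existing = {line.strip().lower() for line in SNAPVOL_CFG.read_text(errors="ignore").splitlines()}
missing = [entry for entry in REQUIRED if entry.lower() not in existing]

if missing:
    print("Missing FSLogix exclusions in snapvol.cfg:")
    for entry in missing:
        print("  " + entry)
else:
    print("All FSLogix exclusions are present.")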