I agree that this is a potential issue. A single global obd_timeout value doesn't align with configurations where, for example, one filesystem is local and another is remote, since they should really have different timeout values.
There are a few options that can be tried to resolve this problem without needing to wait for a patch and new release:
1) Try mounting the filesystems on a test client in the opposite order: mount the filesystem with the longer timeout (FS300) first and the one with the shorter timeout (FS100) second, then check lctl get_param timeout to see whether this client uses the 100s timeout. If yes, this could be put into production immediately without any further changes, except in the rare case where one filesystem is mounted inside the other. If the client still has a timeout of 300s, then FS100 is likely relying on the default obd_timeout of 100s rather than explicitly setting a timeout at all, and something more needs to be done.
2) As with #1 above, change the mount order to mount FS300 first and FS100 second, and also explicitly set the timeout parameter for FS100 via lctl conf_param <fsname>.sys.timeout=100, then check whether this allows the client to keep the shorter timeout.
3) Set the timeout for FS100 to 300s to match FS300, so that the servers will wait up to 300s for the pings to arrive. However, this will also increase the recovery time for FS100 and that may not be desirable for some configurations.
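The steps for options #1 and #2 can be sketched as follows. This is only a sketch: the MGS NIDs (mgs300@tcp, mgs100@tcp) and mount points are placeholders for whatever this cluster actually uses, while the FS100/FS300 names and the 100s value come from this issue:

```shell
# Option 1: mount the longer-timeout filesystem first on a test client
# (MGS NIDs and mount points below are placeholders)
mount -t lustre mgs300@tcp:/FS300 /mnt/fs300
mount -t lustre mgs100@tcp:/FS100 /mnt/fs100

# Check which timeout the client ended up with
lctl get_param timeout

# Option 2: in addition, explicitly set the timeout for FS100
# (run on the MGS node; this stores a persistent config parameter)
lctl conf_param FS100.sys.timeout=100
```

If option #1 works, no configuration change is needed at all; option #2 only adds the explicit conf_param in case FS100 was silently relying on the default.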
There are also potential code fixes for this problem. In particular, we discussed adding a per-target ping_interval tunable in /proc, similar to max_rpcs_in_flight and max_pages_per_rpc, that would allow setting the ping interval for a single filesystem explicitly.
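For comparison, the existing per-target tunables mentioned above are set like this (the OST/device names here are illustrative); the proposed ping_interval parameter does not exist yet, but would presumably follow the same pattern:

```shell
# Existing per-target tunables (target names are illustrative):
lctl set_param osc.FS100-OST0000-osc-*.max_rpcs_in_flight=8
lctl set_param osc.FS100-OST0000-osc-*.max_pages_per_rpc=256

# The proposed (hypothetical, not yet implemented) tunable could look like:
# lctl set_param osc.FS100-OST0000-osc-*.ping_interval=25
```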
Closing this as a duplicate of LU-9912; I've copied the CCs over already.