A UNIX host on which ClearCase has not been installed can use non-ClearCase access to read VOB data from a UNIX VOB server. Typically, the technique is as follows:
A UNIX host running ClearCase must export a view-extended pathname to the VOB mount point (for example, /view/exportvu/vobs/vegaproj). Edit the file /etc/exports.mvfs to specify this pathname.
One or more non-ClearCase hosts access the VOB through a view-extended pathname. For example, a host may have an entry in its file-system table that begins
mars:/view/exportvu/vobs/vegaproj /usr/vega nfs ...
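On the exporting host, the corresponding entry in /etc/exports.mvfs might look like the following sketch. The hostnames and the access option are illustrative, and the exact option syntax follows your platform's /etc/exports conventions:

```
# /etc/exports.mvfs on the ClearCase host (mars):
# export the VOB through the view named exportvu
/view/exportvu/vobs/vegaproj -access=pluto:saturn
```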
For information on setting up an export view, see Setting Up an Export View for Non-ClearCase Access.
Non-ClearCase access is available only on UNIX computers and carries several restrictions:
VOB access: Users on the non-ClearCase host can only read data from VOBs on UNIX VOB server hosts configured for non-ClearCase access; they cannot modify the VOB in any way. They are also restricted to using the element versions selected by the specified view. They cannot use version-extended or view-extended pathnames to access other versions of the VOB's elements.
Building: Although users cannot modify VOBs that are mounted through a view, they can write to view-private storage. Users can modify these view-private files with an editor and build them with a native make program or with scripts, though not with clearmake. Files created by such builds do not become derived objects; they are view-private files, unless developers take steps to convert them. (For more on this topic, see Building Software.)
Because clearmake does not run on the non-ClearCase host, configuration lookup and derived object sharing are not available on these hosts.
After a ClearCase view/VOB pair has been exported from a ClearCase system using NFS, any properly authorized NFS client system can access the files within that view/VOB pair. If the NFS client can mount the view/VOB pair directly with a mount command, you can also put that view/VOB pair (explicitly or implicitly) into a map used by the NFS client's automount daemon. Explicit entries name the exported view/VOB pair directly. Implicit entries may arise from wildcard syntax or other advanced automount features.
For example, using the typical automount wildcard syntax, suppose an indirect map is configured at /remote/viewname with a map file listing server:/view/viewname/vobs/&. This means that when a process on the NFS client accesses a subdirectory of /remote/viewname, the automount process performs an NFS mount from the corresponding subdirectory of server:/view/viewname/vobs.
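Under these assumptions, the master map and the indirect map might look like the following sketch (the hostname, map file name, and paths are illustrative; consult your automounter's documentation for the exact syntax it accepts):

```
# /etc/auto_master entry: mount points under /remote/viewname
# are served by the indirect map /etc/auto_view
/remote/viewname   /etc/auto_view

# /etc/auto_view: wildcard entry. The key (*) matches any
# subdirectory name accessed under /remote/viewname, and the
# ampersand (&) substitutes that same name on the server side.
*   server:/view/viewname/vobs/&
```

With such a map in place, accessing /remote/viewname/vegaproj would trigger an NFS mount of server:/view/viewname/vobs/vegaproj.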
NOTE: Listing the directory /remote/viewname usually shows only active mounts, not all possible mounts. This is similar to the result of listing /net for a hosts map.
If this type of map does not work correctly, verify that an explicit mount command works properly. If it does, the problem probably lies in the client automounter. Consult your NFS client's documentation for full details on map syntax.
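A manual mount such as the following can confirm that the export itself is usable. This is a sketch only: the hostname, view name, VOB tag, and mount point are illustrative, and the exact mount options vary by platform; run the commands as the root user on the NFS client.

```
mount server:/view/viewname/vobs/vegaproj /mnt/test
ls /mnt/test
umount /mnt/test
```

If these commands succeed but the automounted path does not, the map configuration, rather than the export, is the likely cause.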
NOTE: Using the -hosts map for automount access does not work properly if the root file system and a view/VOB pair are exported on a ClearCase server. Suppose an NFS client host tries to access /net/cchost/view/viewname/vobs/vobpath. The automounter mounts the server's root directory on /net/cchost, then tries to mount the view/VOB on /net/cchost/view/viewname/vobs/vobpath. However, /net/cchost/view has no subdirectories, because NFS exports do not follow local file-system mounts such as /view. This mount fails because the local client is unable to find a directory on which to mount the view/VOB pair.
For more information on automounting, see Using automount with ClearCase on UNIX.
Most NFS client implementations include caches to speed up access to frequently used file data and metadata. Newer client implementations typically cache more aggressively than older ones. When the NFS client believes its cache is valid, but something in the view or VOB has changed so that it is inconsistent with the cached data, the client may access the wrong file from the VOB.
A common inconsistency arises when a file is checked in from another view, or when the exporting view's config spec is changed. If, as a result, the view selects a new version of a file, the NFS client may not notice the change. The NFS client expects that any change in the name-to-file binding changes the time stamp of the directory that contains the file. In this case, the directory in the exporting view has not changed, but the file cataloged in that directory has changed versions. The NFS client may not revalidate its cached name-to-file binding (the association of the name with a certain version of the file) until it believes the directory has changed or the entry is pushed out of the cache because of capacity constraints.
Most NFS clients consider cached data valid for only a short period, typically 60 seconds or less. If waiting for the cache to expire does not resolve the inconsistency quickly enough, you can use one of the following methods to work around this restriction:
Create and remove a dummy file from the containing directory. This changes the directory time stamp, which invalidates the client's cache and forces the client to look up the new file version.
Disable the client's attribute cache (usually with the noac mount option). However, our testing indicates that this works only for some NFS V2 clients, and that it will increase the network traffic between the NFS client and the exporting view server. If your client uses NFS V3 by default, and you want to use noac, we recommend that you edit the mount options to request NFS V2.
As the root user, unmount the file system and then mount it again. This flushes the NFS client's cache for the file system. (Even if the unmount fails, the attempt flushes the cache.)
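The first workaround, creating and removing a dummy file, can be scripted as a minimal sketch like the following. The directory path and the dummy file name are illustrative:

```shell
#!/bin/sh
# Bump the timestamp of a VOB directory as seen through the
# exported view, so NFS clients revalidate their cached
# name-to-version bindings for entries in that directory.
DIR=${DIR:-.}                 # directory to bump; path is illustrative

touch "$DIR/.cc_cache_bump"   # adding an entry updates the directory mtime
rm -f "$DIR/.cc_cache_bump"   # removing it updates the mtime again

# The directory's contents are unchanged, but its modification
# time now differs, so the client discards its cached copy of
# the directory on its next attribute check.
```

Run from the NFS client against the automounted path, this forces the client to look up the current version of each name in the directory.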
We also recommend that you limit the dynamic nature of non-ClearCase access by using config specs that do not continually select new versions of files. For example, you can change the config spec for an exported view to contain a label-based rule rather than the /main/LATEST rule.
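The difference between a dynamic rule and a label-based rule in the exported view's config spec might look like the following sketch (REL1 is an illustrative label name):

```
# Dynamic: the view selects each new version as it is checked in,
# so NFS clients can be left with stale cached bindings
element * /main/LATEST

# Label-based: the view selects only versions labeled REL1,
# so the set of versions the NFS clients see is stable
element * REL1
```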
Because non-ClearCase access does not support NFS file locking for its files, application packages that use and require file locking do not work properly on files accessed with non-ClearCase access. Though file locking may work for view-private files on some UNIX operating systems running the MVFS, it may not work for VOB files and it does not work at all on some operating systems. An application package can hang if it insists on retrying lock requests until it can obtain a lock, and it can also be subject to file corruption if it continues when it cannot obtain a lock and multiple clients are modifying the same file. If your application requires file locking, use snapshot views or the ClearCase Web interface for access to VOB data. (See ClearCase Data and Non-ClearCase Hosts.)
Copyright © 2001 by Rational Software Corporation. All rights reserved.