Buffer tuning is how you tell z/OS how much memory VSAM may use to cache pieces of your cluster while your job or transaction runs. More is not always better: oversized buffer requests steal from other files in the same address space and can increase paging on memory-constrained LPARs. The art is to align BUFND (data buffers), BUFNI (index buffers), optional BUFSP (total buffer space in bytes), and STRNO (concurrent strings) with the access pattern you profiled. This page assumes you already know what a CI is and how splits work; here we connect those structural facts to I/O latency and show a repeatable way to evolve AMP parameters on a DD statement without folklore.
DASD latency dwarfs CPU time for small random reads. VSAM mitigates this by keeping recently used data and index control intervals in buffers, so repeated access hits memory instead of going back to the device. Sequential jobs benefit from read-ahead when buffers already hold the next CI before the program asks for it. Random jobs benefit when index buffers keep the root-to-sequence-set path warm across many keys in the same hot subtree.
BUFND sets the number of buffers for the data component. Think of each buffer as one CI-sized parking spot. If you have too few, VSAM may reuse a slot before your scan returns to a nearby record, causing reread from disk. If you have too many for tiny files, you waste memory without improving elapsed time.
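The slot-reuse effect is easy to see with a toy model. The sketch below treats buffer reuse as a plain LRU cache, which is a deliberate simplification of VSAM's actual buffer management; the CI counts and reference pattern are invented for illustration only.

```python
from collections import OrderedDict

def simulate(refs, nbuf):
    """Count buffer hits for a CI reference string under simple LRU reuse."""
    bufs = OrderedDict()              # CI number -> occupies a buffer slot
    hits = 0
    for ci in refs:
        if ci in bufs:
            hits += 1
            bufs.move_to_end(ci)      # mark as most recently used
        else:
            if len(bufs) >= nbuf:
                bufs.popitem(last=False)  # steal the least recently used slot
            bufs[ci] = True
    return hits

# A program that scans 10 CIs, then immediately scans them again.
refs = list(range(10)) * 2
print(simulate(refs, 5))   # too few slots: the second pass rereads everything -> 0
print(simulate(refs, 10))  # enough slots: the second pass is all hits -> 10
```

The model shows why "a few more buffers" is sometimes worthless and sometimes transformative: below the working-set size, LRU reuse evicts exactly the CI you are about to revisit.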
BUFNI sets buffers for the index component. Keyed random access with poor locality jumps around the dataset; good index buffering keeps higher index levels pinned so each GET does not reread the same index blocks. Duplicate-key chains and wide keys increase index traffic; plan BUFNI accordingly.
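A back-of-the-envelope estimate, assuming a hypothetical three-level index (root, intermediate level, sequence set), shows why pinning the upper levels matters for random keyed access. The GET count is invented for illustration.

```python
# Toy index-I/O estimate; level count and GET volume are assumptions.
index_levels = 3
gets = 100_000

# No useful index buffering: every GET rereads the whole root-to-sequence-set path.
cold_index_reads = gets * index_levels

# Root and intermediate levels stay buffered: only sequence-set CIs are read.
warm_index_reads = gets * 1

print(cold_index_reads)  # 300000
print(warm_index_reads)  # 100000
```

Keeping just the two upper levels warm cuts index reads by two thirds in this model, which is the intuition behind giving random workloads a generous BUFNI.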
BUFSP caps the total buffer space, in bytes, that VSAM may use for the data and index buffers combined; it does not by itself say how that space is divided between the components. It is useful when diagnostics or IBM manuals point to a buffer-space shortage during complex operations. Treat BUFSP as a ceiling to verify against, not the primary knob: size BUFND and BUFNI explicitly and let BUFSP confirm the total.
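Because BUFSP is stated in bytes, a quick arithmetic check tells you whether a candidate value even covers the explicit BUFND and BUFNI slots. The CI sizes and buffer counts below are invented for illustration, not recommendations.

```python
# Illustrative footprint check; all sizes and counts are assumed values.
data_cisz  = 4096          # data component CI size in bytes (assumption)
index_cisz = 1024          # index component CI size in bytes (assumption)
bufnd, bufni = 8, 5
bufsp = 20 * 1024          # a candidate BUFSP of 20K

explicit = bufnd * data_cisz + bufni * index_cisz
print(explicit)            # 37888 bytes needed by the explicit slots alone
print(bufsp >= explicit)   # False: this BUFSP would not cover them
```

When the check fails, either raise BUFSP or shrink the explicit buffer counts; letting the two disagree invites surprises that are hard to diagnose from the application side.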
STRNO declares how many concurrent requests (strings) VSAM should support against the cluster at once. Online regions with many tasks hitting the same cluster may benefit from thoughtful increases, but each string consumes storage and raises the minimum buffer counts. Coordinate with systems programmers before raising STRNO in CICS or IMS, because subsystem file definitions and global defaults also apply.
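As a rule of thumb from IBM's commonly documented defaults (roughly STRNO+1 data buffers and STRNO index buffers as the minimum), raising strings raises the memory floor too. The sketch below is arithmetic on that assumption, with invented CI sizes.

```python
# Hedged rule of thumb: minimum buffers scale with STRNO.
# CI sizes are illustrative assumptions, not shop standards.
data_cisz, index_cisz = 4096, 1024

for strno in (1, 2, 8):
    floor = (strno + 1) * data_cisz + strno * index_cisz
    print(strno, floor)
```

The jump from one to eight strings roughly quintuples the minimum buffer footprint in this model, which is why STRNO increases deserve the same memory scrutiny as BUFND and BUFNI.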
| Access | BUFND focus | BUFNI focus | Note |
|---|---|---|---|
| Random OLTP reads | Moderate; avoid over-fetching huge CIs without need | Higher to pin hot index levels | Watch STRNO if many concurrent strings |
| Sequential REPRO or reporting | Higher to stage sequential CIs | Lower if index walk is mostly forward and cached | Pair with larger CI only if splits allow |
| Mixed dynamic COBOL | Balanced | Balanced | Prototype with test jobs before production DD copy |
AMP parameters ride on the DD that allocates the VSAM cluster. Always verify parentheses and continuation rules with your shop JCL guide; the numbers below are illustrative, not universal best practices.
```
//STEP1   EXEC PGM=MYPROG
//KSDS    DD   DSN=ACCT.MASTER.KSDS,DISP=SHR,
//             AMP=('BUFND=8','BUFNI=5','BUFSP=20K','STRNO=2')
//SYSIN   DD   DUMMY
```
Larger CISZ means each buffer slot covers more user bytes, which can help sequential throughput but raises the memory cost per slot. CA tuning changes how often you transition across CA boundaries during scans; buffers do not remove the need for sensible FREESPACE. Think CI/CA for structural efficiency and AMP for runtime caching; both layers must agree for predictable service levels.
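The per-slot cost is simple arithmetic: the same BUFND at a larger CISZ multiplies the buffer footprint proportionally. The sizes below are illustrative, not shop standards.

```python
# Memory footprint of the same buffer count at two CI sizes (assumed values).
bufnd = 20
costs = {cisz: bufnd * cisz for cisz in (4096, 18432)}
for cisz, cost in costs.items():
    print(cisz, cost)   # 4096 -> 81920 bytes, 18432 -> 368640 bytes
```

A 4.5x larger CI makes the same twenty buffers 4.5x more expensive, so a CISZ increase for sequential throughput should usually be paired with a fresh look at BUFND rather than carried over unchanged.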
Batch jobs read AMP= from the DD statement. Online subsystems often supply equivalent parameters through file definitions or global VSAM options. A classic mismatch is tuning BUFNI beautifully in overnight batch while the daytime CICS file entry still uses conservative defaults; users see no improvement during business hours. Export your tuning decisions into both places: the JCL prolog for batch and the CICS or IMS resource definition for online. When platform teams roll out new z/OS levels, revisit defaults because maintenance can reset shipped samples—diff your RDOs after upgrades.
If CI splits dominate SMF or LISTCAT, raising BUFND is lipstick on a structural problem. If the catalog is on a stressed volume pair, index buffers cannot fix slow catalog I/O. If the application issues one GET per millisecond but each GET is followed by a syncpoint to a remote database, VSAM buffers will never be the bottleneck. Always confirm with profiling that VSAM wait time is material before spending political capital on AMP changes that steal memory from co-resident programs.
BUFND and BUFNI are like how many trays you keep on your desk for papers from the filing cabinet. If you have only one tray, you run back and forth all day. If you have enough trays for the papers you actually flip between, work feels instant. If you bring every tray in the building, you cannot fit on your chair. Pick the number of trays that matches your real homework pile.
1. Which AMP parameter primarily targets index buffering?
2. Why might raising BUFND help a long sequential read?
3. What risk accompanies blindly maximizing STRNO?