VSAM sequential versus random access

Sequential and random are two different contracts between your program and VSAM about how position moves through the cluster. Sequential means you mostly read the next or previous logical record in order. Random means you position by key, RRN, or RBA and read that record without walking everything in between. Performance tuning depends on which contract matches reality: a program that declares sequential but performs random jumps defeats VSAM's buffering and read-ahead optimizations and usually wastes I/O. This page compares the patterns in plain language, ties them to COBOL ACCESS phrases, sketches index behavior for KSDS, and gives decision hints for beginners who must talk to developers about batch and online workloads.

Side-by-side comparison

Sequential versus random at a glance
Aspect              | Sequential                                                        | Random
Typical I/O pattern | Ascending (or descending) CI chain; index walked once per level   | Jump per key; repeated index root visits without locality
Buffer friendliness | High reuse of the next CI via BUFND data buffers                  | Needs BUFNI index buffers to keep index levels stable across scattered keys
Best when           | Reports, extracts, full-file validation                           | OLTP lookups, sparse updates by business key

COBOL mapping refresher

In FILE-CONTROL, ACCESS IS SEQUENTIAL pairs with READ NEXT loops after an optional START for positioning. ACCESS IS RANDOM pairs with keyed READ statements in which you supply the record key before each call. ACCESS IS DYNAMIC allows both styles in one open session, following IBM's rules for when each verb is legal. OPEN INPUT versus OPEN I-O also gates whether updates are possible. Performance tuning begins by reading the FILE-CONTROL paragraph literally instead of assuming from the job name what the program does.
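A minimal FILE-CONTROL sketch makes the three contracts concrete. The file and key names (ORDER-FILE, ORD-KEY, WS-ORD-STATUS) are illustrative, not from any real system; only the ORGANIZATION, ACCESS MODE, RECORD KEY, and FILE STATUS phrases are the point.

```cobol
       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
      * Batch report or extract: walk the KSDS in key order.
           SELECT ORDER-FILE ASSIGN TO ORDFILE
               ORGANIZATION IS INDEXED
               ACCESS MODE IS SEQUENTIAL
               RECORD KEY IS ORD-KEY
               FILE STATUS IS WS-ORD-STATUS.
      * For random: change one phrase and supply ORD-KEY before each READ.
      *        ACCESS MODE IS RANDOM
      * For both styles in one open session:
      *        ACCESS MODE IS DYNAMIC
```

Everything else in the SELECT stays the same across the three modes, which is why reading this paragraph literally is the fastest way to learn what a program actually does.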

Skip sequential and special cases

Skip sequential is a hybrid: the program still moves forward in key order, but uses START to reposition past ranges it does not need instead of reading every record. It can reduce work when programs need every nth key or jump ahead by a step. Treat it as sequential tuning with extra care: incorrect KEY lengths or compare fields cause subtle skips. If your shop rarely uses it, document any usage clearly so future maintainers do not mislabel the job as plain sequential.
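The repositioning step looks like this in COBOL. This is a sketch with illustrative names (ORDER-FILE, ORD-KEY, NEXT-WANTED-KEY); it assumes ACCESS MODE IS DYNAMIC, or SEQUENTIAL with START, in FILE-CONTROL.

```cobol
      * Jump past an unneeded key range, then resume the ordered walk.
           MOVE NEXT-WANTED-KEY TO ORD-KEY
           START ORDER-FILE KEY IS GREATER THAN OR EQUAL TO ORD-KEY
               INVALID KEY
                   SET END-OF-FILE TO TRUE
           END-START
           READ ORDER-FILE NEXT RECORD
               AT END SET END-OF-FILE TO TRUE
           END-READ
```

Note that the START only positions; the following READ NEXT is what actually retrieves a record, and a wrong key length in the START comparison is exactly where the subtle skips mentioned above creep in.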

Implications for buffers and CISZ

Sequential-heavy

Favor data buffers that hold upcoming CIs and CISZ aligned to meaningful multiples so each physical read returns useful neighbors. Index buffers still matter at the start of the run and across CA boundaries, but the steady state is data-dominated.

Random-heavy

Favor index buffers to keep hot B-tree levels resident when keys concentrate in subtrees. Oversized CISZ may mean each random read pulls more neighbor bytes than the transaction needs; revisit CI tuning when latency matters.
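In JCL, these preferences usually surface as AMP overrides on the cluster's DD statement. The values below are illustrative placeholders, not recommendations; real numbers come from measurement, and the two DD statements show the two alternatives rather than one step.

```jcl
//* Sequential-heavy step: favor data buffers (BUFND).
//ORDFILE  DD DSN=PROD.ORDER.KSDS,DISP=SHR,
//            AMP=('BUFND=20','BUFNI=2')
//* Random-heavy step: favor index buffers (BUFNI).
//ORDFILE  DD DSN=PROD.ORDER.KSDS,DISP=SHR,
//            AMP=('BUFND=3','BUFNI=10')
```

The COBOL source does not change between the two runs; the access pattern declared in FILE-CONTROL tells you which AMP shape is likely to pay off.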

Common misunderstandings

  • Assuming VSAM "figures out" intent from READ verbs alone: ACCESS still matters in COBOL.
  • Treating dynamic access as free performance: it buys flexibility at the cost of extra complexity.
  • Ignoring descending sequential, which still benefits from sequential locality but walks the chain backward.

Production-shaped examples

Picture a nightly consolidation job that reads every open order in key sequence to build a summary tape for a partner. That job should be sequential end to end: one pass, minimal index churn after the initial positioning, and BUFND tuned for pipeline reads. Now picture an online authorization screen that retrieves one cardholder record per transaction by plastic number. That workload is random: each transaction is independent, keys are scattered, and sequential read-ahead would mostly prefetch irrelevant neighbors unless the keys are artificially clustered. Hybrid customer service tools that first pull one account then scan all statements for that account in one session are the classic argument for dynamic access: a single OPEN session avoids reopen costs while still allowing both keyed jumps and ordered walks. When you document performance requirements for new applications, attach narratives like these so operations can predict buffer and geometry needs before the first production load.
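The hybrid customer service pattern, a keyed jump followed by an ordered scan inside one OPEN, is a few lines of COBOL under ACCESS MODE IS DYNAMIC. All names here (CUST-FILE, CUST-KEY, WS-TARGET-KEY, the condition names) are illustrative.

```cobol
      * One OPEN, two styles: keyed jump, then ordered scan from that key.
           OPEN INPUT CUST-FILE
           MOVE WS-TARGET-KEY TO CUST-KEY
           READ CUST-FILE                       *> random: direct keyed read
               INVALID KEY SET REC-NOT-FOUND TO TRUE
           END-READ
           START CUST-FILE KEY IS GREATER THAN OR EQUAL TO CUST-KEY
           END-START
           PERFORM UNTIL END-OF-SCAN            *> sequential: ordered walk
               READ CUST-FILE NEXT RECORD
                   AT END SET END-OF-SCAN TO TRUE
               END-READ
           END-PERFORM
           CLOSE CUST-FILE
```

A single OPEN session avoids reopen costs, which is the whole argument for DYNAMIC in this shape of workload.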

Talking to systems programmers

Systems programmers will ask for SMF excerpts, buffer pool sizes, and LPAR paging rates when you claim a sequential job is slow. Random-heavy workloads trigger questions about index cache residency and coupling facility latency if sharing crosses LPARs. Speaking their vocabulary—EXCP, service time, disconnect time—speeds approvals for tuning changes. Bring a table that states ACCESS mode, percent of READ NEXT versus keyed READ, and average keys per transaction so the discussion stays grounded in observed ratios instead of opinions about COBOL style.

Practical exercises

  1. Trace one nightly job: count READ NEXT versus keyed READ in a listing or trace tool.
  2. On a sandbox KSDS, run the same extract sequential versus random and compare EXCP totals.
  3. Write a one-paragraph explanation for managers describing why random OLTP cannot be tuned like a SORT copy.

Explain like I'm five

Sequential is reading a story page by page. Random is flipping to a bookmarked page whenever someone shouts a page number. If you read the whole story, page order is fastest. If you only peek at three pages, flipping beats starting at chapter one every time.

Test your knowledge

1. A job reads 99% of records in key order once per night. Which ACCESS is the best default?

  • RANDOM
  • SEQUENTIAL
  • EXCLUSIVE only
  • OUTPUT

2. A CICS program reads one record per transaction by customer id. Which pattern dominates?

  • Sequential full file
  • Random keyed GET
  • BSAM only
  • PRINT

3. When is DYNAMIC preferable to forcing SEQUENTIAL?

  • Never
  • When the same program both scans a range and jumps by key in one open
  • Only for tape
  • Only when FILE STATUS is zeros
Published
Read time: 11 min
Author: MainframeMaster
Reviewed by: MainframeMaster team
Verified: IBM VSAM programming interfaces
Sources: IBM Enterprise COBOL FILE-CONTROL; DFSMS VSAM guides
Applies to: COBOL and VSAM KSDS; concepts extend to ESDS/RRDS