Data Collection

BAVU includes campaign GPS data collected by six different agencies (U.C. Berkeley; U.S.G.S.; Stanford; U.C. Davis; U. Alaska, Fairbanks; CalTrans) over the decade from 1993 to 2003. At U.C. Berkeley we occupy each benchmark in our campaign GPS networks yearly. When possible, we collect data for at least two continuous 24-hour sessions, with some occupations spanning as long as seven days. However, much of the study area lies in urban or suburban settings where GPS equipment cannot be left unattended, so occupation time is limited by what a human operator can manage logistically. For these sites, occupations range from 6 to 12 hours, depending on the travel time to the site and the efficiency of the operator. We frequently repeat surveys of these sites for a total of two observations each year. Other agencies contributing data to the BAVU dataset generally follow the same guidelines and provide at least 6 hours of data per site per day; however, a substantial portion of the CalTrans data is limited to 3 hours or less.

Processing Baselines

We process campaign GPS data using the GAMIT/GLOBK software package developed at the Massachusetts Institute of Technology, which uses double-difference phase observations to determine baseline distances and orientations between ground-based GPS receivers. Along with campaign data, we include about five global stations from the International GPS Service (IGS) network and four to six nearby continuous stations from the BARD network in our processing runs. Cycle slips are automatically identified and fixed using the AUTCLN routine within GAMIT. We use standard models for satellite radiation pressure and tropospheric delay. Ambiguities are fixed using the widelane combination followed by the narrowlane, with the final position based on the ionosphere-free linear combination (LC or L3). Baseline solutions are loosely constrained (100 m) until they are combined.
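The ionosphere-free combination mentioned above can be illustrated with a short sketch. This is our own illustration, not GAMIT code; the GPS L1/L2 carrier frequencies are standard values, and the example delay values are hypothetical.

```python
# Sketch of the ionosphere-free (LC / L3) linear combination of the
# GPS L1 and L2 carrier-phase observables, expressed in meters.
# Because the first-order ionospheric delay scales as 1/f^2, this
# combination cancels it.
F1 = 1575.42e6  # L1 carrier frequency (Hz)
F2 = 1227.60e6  # L2 carrier frequency (Hz)

def ionosphere_free(l1_m, l2_m):
    """Combine L1 and L2 phase ranges (meters) into LC."""
    a = F1**2 / (F1**2 - F2**2)   # ~ 2.546
    b = F2**2 / (F1**2 - F2**2)   # ~ 1.546
    return a * l1_m - b * l2_m

# With an ionospheric delay d on L1 and d * (F1/F2)**2 on L2,
# the combination recovers the geometric range:
rho, d = 20_000_000.0, 5.0        # geometric range and L1 iono delay (m)
lc = ionosphere_free(rho + d, rho + d * (F1 / F2)**2)
```

The widelane and narrowlane combinations used for ambiguity fixing are formed analogously from differences and sums of the two frequencies.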

Combining Solutions

We combine daily ambiguity-fixed, loosely constrained solutions using the Kalman filter approach implemented by GLOBK. We include data processed locally as well as solutions for the full IGS and BARD networks processed by and obtained from SOPAC at the Scripps Institution of Oceanography, U.C. San Diego. Using the Kalman filter, we combine all daily solutions to generate an average solution for each month, giving each observation equal weight. We then estimate the average linear velocity of each station in the network from these monthly files. We fix the final positions and velocities of the IGS stations into the ITRF2000 No-Net-Rotation global reference frame using the GLORG stabilization routine, allowing for rotation and translation of the network. To scale the errors, we follow the method used by the SCEC CMM 3.0 team [Robert King, pers. comm., 2003]. We add white noise to all stations with a magnitude of 2 mm/yr for the horizontal components and 5 mm/yr for the vertical component; this white noise should average out over the month-long time span of the data. To account for "benchmark wobble," we add Markov-process noise to the solutions with a magnitude of 1 mm/√yr.
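The distinguishing feature of the random-walk ("benchmark wobble") term is how it grows with time: its variance accumulates linearly, so its contribution to position scatter grows as the square root of the elapsed time. A minimal sketch of this behavior (our own illustration, not GLOBK code, using the 1 mm/√yr magnitude quoted above):

```python
import math

# Random-walk (Markov) "benchmark wobble" noise: for a process with
# magnitude RW mm per sqrt(yr), the position variance grows linearly
# with elapsed time, so the 1-sigma scatter grows as sqrt(t).
RW = 1.0  # mm / sqrt(yr)

def random_walk_sigma(t_yr):
    """1-sigma position scatter (mm) accumulated after t_yr years."""
    return RW * math.sqrt(t_yr)

# After 4 years the wobble contributes 2 mm of scatter:
sigma = random_walk_sigma(4.0)  # -> 2.0
```

This time dependence is why the random-walk term, unlike the white-noise term, does not average out as the observation span lengthens.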

For the Bay Area, we prefer to visualize velocities in a local reference frame centered on station LUTZ (a BARD continuous site on the Bay Block, roughly at the BAVU network centroid). This frame accentuates the gradient in deformation across the Bay Area and allows easy visual identification of differences between stations. We simply subtract LUTZ's ITRF2000 velocity from all stations and propagate the correlated uncertainties to obtain the error ellipses.
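The frame change for a single velocity component can be sketched as follows. The station values and covariance here are hypothetical; the propagation assumes the covariance between each station and LUTZ is available from the combined solution.

```python
import math

# Shift one velocity component into a LUTZ-fixed frame and propagate
# the correlated uncertainty:
#   v_rel = v_sta - v_lutz
#   var(v_rel) = var(v_sta) + var(v_lutz) - 2 * cov(v_sta, v_lutz)

def to_lutz_frame(v_sta, v_lutz, var_sta, var_lutz, cov):
    """Return (relative velocity, 1-sigma) for one component (mm/yr)."""
    v_rel = v_sta - v_lutz
    var_rel = var_sta + var_lutz - 2.0 * cov
    return v_rel, math.sqrt(var_rel)

# Hypothetical east components: a station moving 12.0 +/- 1.0 mm/yr,
# LUTZ moving 8.0 +/- 0.5 mm/yr, covariance 0.3 mm^2/yr^2.
v_rel, sigma_rel = to_lutz_frame(12.0, 8.0, 1.0**2, 0.5**2, 0.3)
# v_rel -> 4.0 mm/yr
```

Because nearby stations share common-mode errors, the covariance term reduces the relative uncertainty below what uncorrelated differencing would give, which is part of why a local frame sharpens station-to-station comparisons.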