Purpose of this web site
How to access the data
Table of Cruise Notes w/links to figures
Tables of variables recorded
Notes on recording and archiving data
During the GLOBEC years (1992-1999), the NMFS shipboard data was rarely
processed. The raw data was stored on disk and then, in most
cases, copied to 8mm tape and later to CDROM. Most of the GLOBEC cruises were
eventually processed with a MATLAB routine called "procescs.m", and the processed
data for those cruises was posted on both the GLOBEC and NEFSC web sites.
In January 2000, with GLOBEC field work complete,
we started developing
a method to routinely load the NEFSC SCS shipboard data into the ORACLE
database. This is a joint effort between the center's Data
Services and the Oceanography Branch. The conventions developed to archive
data in a standardized format are documented elsewhere.
Beginning in early 2001, we occasionally received salinity sample
files from Joyce Dennecour (Narr Lab) and compared the Niskin samples to
the flow-through values. See the link under "Notes on processing" below.
When the Henry Bigelow came on line in 2007, we started
sending alongtrack data to the Shipboard Automated Meteorological and
Oceanographic System (SAMOS) based at Florida State. In 2008, the ALBATROSS and
DELAWARE started sending data as well. A routine was devised to
download the netCDF data posted by the SAMOS system at Florida State and
convert it to the ORACLE tables at our lab. The advantage of this
system is that the data was already merged and cleaned to some extent.
The disadvantage at first was that not ALL the variables were being
sent; salinity, for example, was not one of the parameters originally
processed through this system. We are working out the bugs of this new
system in 2008 and hope to make it the new standard routine.
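As a rough illustration of the download-and-convert step, here is a minimal
Python sketch; the file URL, netCDF variable names, ORACLE credentials, and
table/column names are all hypothetical stand-ins, not the actual routine.

    # Sketch: fetch one SAMOS netCDF file and load a few variables into ORACLE.
    # URL, variable names, credentials, and schema below are placeholders.
    import urllib.request
    import netCDF4      # pip install netCDF4
    import cx_Oracle    # pip install cx_Oracle

    url = "https://samos.example.edu/data/cruise.nc"   # placeholder URL
    urllib.request.urlretrieve(url, "cruise.nc")

    ds = netCDF4.Dataset("cruise.nc")
    t = ds.variables["time"][:]    # variable names depend on the SAMOS file
    lat = ds.variables["lat"][:]
    lon = ds.variables["lon"][:]
    sst = ds.variables["SST"][:]
    ds.close()

    conn = cx_Oracle.connect("user/password@ourlab-db")  # placeholder DSN
    cur = conn.cursor()
    # "ALONGTRACK" and its columns are stand-ins for the real schema.
    cur.executemany(
        "INSERT INTO ALONGTRACK (obs_time, lat, lon, sst) VALUES (:1, :2, :3, :4)",
        list(zip(t.tolist(), lat.tolist(), lon.tolist(), sst.tolist())),
    )
    conn.commit()
    conn.close()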
ADCP processing has occasionally been done. Ideally, in the future, we would like to
integrate the velocity data into the same database.
Purpose of this web site
This site simply provides a listing of processed cruises (see
Table 1 below) with links to figures that were generated during the
processing. Figures include basic time series and ship track plots in order to
allow the user to browse any particular cruise and determine what data is
available. A log of notes associated with processing each cruise is
provided as well. The range of values specified as "acceptable" during
processing is documented in the log file. Both the raw and
processed data associated with each cruise are available as described
under "How to access the data" below. Note that the "processed" data is a
single ascii file with one-minute records of several variables (time, lat,
lon, airt, temp, salt, fluorescence, u-wind, v-wind, barop, and humidity)
that has also been loaded into the ORACLE database.
At the time of this writing, it excludes other data like bottom depth,
ship speed, etc. An estimate of those variables can be made from the raw files.
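For quick browsing outside the database, a minimal Python sketch for reading
one of these processed ascii files might look like this; the column order and
whitespace delimiter are assumptions based on the variable list above, so
check the cruise's .tpl and log files first.

    # Sketch: read a processed one-minute alongtrack file, e.g.
    # /jgofdata/ship/<cruise>/ship.dat. Column order and delimiter are
    # assumed, not verified -- see the cruise's .tpl file.
    import pandas as pd

    cols = ["time", "lat", "lon", "airt", "temp", "salt",
            "fluor", "u_wind", "v_wind", "barop", "humidity"]
    df = pd.read_csv("ship.dat", sep=r"\s+", names=cols, comment="#")
    print(df[["time", "temp", "salt"]].describe())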
How to access the data
One may get at the data in one of two ways:
Table 1. List of Cruises and links to figures and logs
1. Find the processed ascii file in
/jgofdata/ship/<cruise>/ship.dat (or ask me to post it on ftp if you do not have access to that disk).
2. Visit the "Alongtrack" section of the NEFSC Oceanography "data" page.
This allows users to extract the processed data needed by time and/or geographic box from the ORACLE database (a query sketch follows below).
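As an illustration of the database route, a minimal sketch of a time and
geographic-box extraction is below; the table and column names are
hypothetical stand-ins for the real schema.

    # Sketch: pull processed alongtrack records for a time window and lat/lon
    # box. "ALONGTRACK" and its columns are hypothetical stand-ins.
    import datetime
    import cx_Oracle   # pip install cx_Oracle

    conn = cx_Oracle.connect("user/password@ourlab-db")  # placeholder DSN
    cur = conn.cursor()
    cur.execute(
        """SELECT obs_time, lat, lon, temp, salt
             FROM ALONGTRACK
            WHERE obs_time BETWEEN :t0 AND :t1
              AND lat BETWEEN :lat0 AND :lat1
              AND lon BETWEEN :lon0 AND :lon1""",
        t0=datetime.datetime(2001, 5, 1), t1=datetime.datetime(2001, 5, 15),
        lat0=40.0, lat1=42.0, lon0=-71.0, lon1=-68.0,
    )
    for row in cur.fetchall():
        print(row)
    conn.close()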
Table 2. Raw Data Model
*note: wind speed and direction are converted to eastward and northward wind (meters/second) in the processed data file
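A minimal sketch of that conversion, assuming the usual meteorological
convention in which direction is the compass heading the wind blows from
(confirm this against the SCS files):

    # Sketch: wind speed/direction -> eastward (u) and northward (v)
    # components, assuming direction is degrees from true north that the
    # wind blows FROM -- confirm the convention against the SCS files.
    import math

    def wind_to_uv(speed_ms, direction_deg):
        rad = math.radians(direction_deg)
        u = -speed_ms * math.sin(rad)   # eastward (m/s)
        v = -speed_ms * math.cos(rad)   # northward (m/s)
        return u, v

    # A 10 m/s wind from the north blows toward the south: u = 0, v = -10.
    print(wind_to_uv(10.0, 0.0))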
Notes on recording and archiving data
Most files listed in Table 2 above are currently stored as raw files in
each cruise's compress directory on the NEFSC internal website, which is
not accessible to the public. The cruise directories are named with three
codes (a parsing sketch follows these notes):
where xxx is the ship code (e.g., "del"),
where YY is the two-digit year code, and
where ZZ is the cruise code; some cruises have multiple legs, denoted
l1, l2, etc.
Other directories at the same level as compress are raw, eventdata,
adcp, and misc, where the "eventdata" directory contains the eventlog files.
The raw files are basically the same as the aco files except they have a
slightly higher rate of sampling and
a lot of text characters in each record.
The adcp directory has
four types of RDI files (.cfg, XYr.ZZZ, XYp.ZZZ, and XYn.ZZZ)
with config, raw ping data, processed data, and navigation data, respectively;
ADCP data should appear here whenever the chief scientist requests it.
The misc directory usually has pictures (*.jpg).
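To illustrate the naming convention above, here is a small Python sketch
that parses a cruise directory name; it assumes the three codes are
concatenated as xxxYYZZ with an optional leg suffix, which should be
verified against the actual directory listing.

    # Sketch: parse a cruise directory name assumed to look like xxxYYZZ
    # plus an optional leg suffix (l1, l2, ...). The ordering is an
    # assumption -- verify against the real directories.
    import re

    CRUISE = re.compile(r"^([a-z]{3})(\d{2})(\d{2})(l\d+)?$")

    def parse_cruise(name):
        m = CRUISE.match(name)
        if m is None:
            raise ValueError("unrecognized cruise name: " + name)
        ship, year, cruise, leg = m.groups()
        return {"ship": ship, "year": year, "cruise": cruise, "leg": leg}

    # Hypothetical example: "del0105l2"
    print(parse_cruise("del0105l2"))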
The aco files are the "ascii compressed" version of appended .raw files, and
the ".tpl" files have a few lines explaining what is contained in each column.
"yrday1_gmt" happens to be the convention now used by the SCS people for
yearday. Other forms are:
yrday0_local, where noontime Jan 1st would be 0.5,
yrday1_local, where noontime Jan 1st would be 1.5, and
yrday0_utc, where "utc" stands for
"coordinated universal time"
and is the same as "gmt".
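Under that convention (noon GMT on Jan 1st = yearday 1.5), a minimal sketch
of the yrday1_gmt calculation:

    # Sketch: compute yrday1_gmt from a UTC timestamp, per the convention
    # above (noon on Jan 1st -> 1.5; the yrday0 forms subtract one day).
    import datetime

    def yrday1_gmt(t):
        jan1 = datetime.datetime(t.year, 1, 1, tzinfo=datetime.timezone.utc)
        return (t - jan1).total_seconds() / 86400.0 + 1.0

    noon_jan1 = datetime.datetime(2001, 1, 1, 12, tzinfo=datetime.timezone.utc)
    print(yrday1_gmt(noon_jan1))   # 1.5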
We have asked the SCS ETs to conform to these file structure standards
as much as possible. A consistent protocol will make it much easier to
process the data routinely.
Notes on processing
Bottom_depth is the most difficult variable to process since the record
is full of spikes, and very little has been done as yet about this.
Since bottom depth is fairly well documented for the NE shelf, I do not
attempt to improve that data.
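If one did want to despike the bottom_depth record, a rolling median filter
is a common first pass; the sketch below is an illustration only (it is not
part of the current procedure), and the window length and threshold are
arbitrary.

    # Sketch: flag bottom-depth spikes against a rolling median. Illustration
    # only; the 11-point window and 10 m threshold are arbitrary choices.
    import numpy as np
    from scipy.signal import medfilt

    def despike(depth, kernel=11, threshold_m=10.0):
        smooth = medfilt(depth, kernel_size=kernel)
        cleaned = np.asarray(depth, dtype=float).copy()
        cleaned[np.abs(depth - smooth) > threshold_m] = np.nan  # drop spikes
        return cleaned

    d = np.full(20, 80.0)
    d[10] = 300.0                      # one wild point
    print(np.isnan(despike(d))[10])    # True: the spike was flagged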
The step-by-step perl processing procedure is documented here.
In cases where the normal "compressed" data was not archived, an alternate
method of processing was developed in Oct 2000 using a PERL routine
which operates on the eventlog files.
In cases where salinity samples were taken (i.e., Niskin bottles), a
check against the flow-through values is conducted as described here.
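As an illustration of that check, the sketch below matches each bottle
sample to the nearest-in-time flow-through record and reports the mean
offset; the column names are hypothetical, and the actual procedure is the
one described at the link.

    # Sketch: compare Niskin bottle salinities to the flow-through record by
    # matching each bottle to the nearest one-minute sample in time.
    # Column names are hypothetical placeholders.
    import pandas as pd

    def salinity_offset(bottles, flowthru):
        # both DataFrames need a sortable "time" column
        merged = pd.merge_asof(
            bottles.sort_values("time"),
            flowthru.sort_values("time"),
            on="time",
            direction="nearest",
        )
        return float((merged["salt_bottle"] - merged["salt"]).mean())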