\item read arguments from a command line.
\end{itemize}
daq\_evtbuild must be executed before daq\_netmem (as daq\_evtbuild is the process that opens the buffers in shared memory).

\subsection{Checklist: what and how to check before production beam time}

\begin{itemize}
\item Check EB servers:
 \begin{itemize}
 \item Check that the EB servers lxhadeb01,2,3,4 are up and running.
 \item Check that both the HADES and GSI VLANs are available from the EB servers.
 \item Clean up all hard disks on all EB servers. If you need to archive some hld files, see subsection~\ref{eb_data_tape}.
 \item Check that the daq\_disks process is running in the background on each EB server. This process is started by a cron job at boot time. If daq\_disks is not running, you can start it yourself under the 'hadaq' account: \textbf{/home/hadaq/bin/daq\_disks -s 10 > /dev/null 2>\&1 \&}
 \item Check that the following mounts are present (if not, execute on lxhadesdaq as 'root': \textbf{mount -a}).
 \begin{itemize}
 \item 192.168.100.11:/home/hadaq/oper -> lxhadesdaq:/home/hadaq/oper/oper\_1
 \item 192.168.100.12:/home/hadaq/oper -> lxhadesdaq:/home/hadaq/oper/oper\_2
 \item 192.168.100.13:/home/hadaq/oper -> lxhadesdaq:/home/hadaq/oper/oper\_3
 \item 192.168.100.14:/home/hadaq/oper -> lxhadesdaq:/home/hadaq/oper/oper\_4
 \end{itemize}
 \item Check the connection to the Data Movers for the data archiving:
 \begin{itemize}
 \item Ask Horst Goeringer to create a new archive for the production beam time.
 \item First set a test path in the hadestest archive in lxhadesdaq:\~/trbsoft/daq/evtbuild/eb.conf: \textbf{RFIO\_PATH: rfiodaq:gstore:/hadaqtest/test001}
 \item \textbf{lxhadesdaq:\~/trbsoft/daq/evtbuild/start\_eb\_gbe.pl -e start -n 1-16 --rfio on}
 \item On the MEDM GUI you should see that all 16 EB processes have connected to 8 Data Movers (two connections per Data Mover). In the log file you should see the message \textbf{rfio\_fopen: opened connection to Data Mover: slxdm17} from each Event Builder. If there are error messages related to RFIO and the Data Movers, or more than two connections to the same Data Mover, contact Horst Goeringer.
 \item Change the archive name and path (RFIO\_PATH:) in lxhadesdaq:\~/trbsoft/daq/evtbuild/eb.conf to the new ones (see the example after this checklist).
 \end{itemize}
 \end{itemize}
\item Start the Perl scripts that insert data into the Oracle DB (pay attention to the database name inside the scripts: \$database):
 \begin{itemize}
 \item Update the Oracle DB with new subevent IDs and new boards (see subsection~\ref{ora_before_running_scripts}).
 \item Start lxhadesdaq:\~/trbsoft/daq/oracle/daq2ora\_client.pl (see subsection~\ref{ora_boards_script}).
 \item Start lxhadesdaq:/home/hadaq/oper/runinfo2ora.pl (see subsection~\ref{ora_run_script}).
 \end{itemize}
\end{itemize}
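The RFIO\_PATH: change mentioned in the checklist above amounts to editing a single line of eb.conf. For the connection test the file contains the test path quoted above:
\begin{verbatim}
RFIO_PATH: rfiodaq:gstore:/hadaqtest/test001
\end{verbatim}
and for production it is changed to point into the newly created archive, e.g.\ (the archive name and subdirectory below are placeholders, not a prescription):
\begin{verbatim}
RFIO_PATH: rfiodaq:gstore:/hadesoct10raw/oct10
\end{verbatim}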
\subsection{daq\_evtbuild options}
daq\_evtbuild can be executed with the following options:
\item [--lustre path\_to\_lustre] Output path for writing data on the Lustre cluster (if mounted).
\item [--orapath path] Path to eb\_runinfo2ora.txt for writing data to Oracle.
\item [--ignore] Ignore trigger mismatch conditions.
\item [--maxtrigmissmatch number] Maximum number of triggers allowed for mismatch.
\item [--multidisk] Write data to a disk number provided by daq\_disks via shared memory.
\end{description}
\item [Options for debugging] :
\end{description}
\end{description}
To support data writing to multiple disks, the dedicated daq\_disks server runs on each Event Builder server in the background.
Another important script, cleanup.pl, cleans up the disks when they become occupied up to 90\%.
Both processes are executed in the background at boot time.
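The 90\% threshold test that cleanup.pl applies can be expressed with the POSIX statvfs() call; the following is only a sketch of the idea in C (the mount point /data01 is an example path), not the actual Perl script:
\begin{verbatim}
/* Sketch: report when a disk is occupied above 90%
 * ("/data01" is an example mount point). */
#include <stdio.h>
#include <sys/statvfs.h>

int main(void)
{
    struct statvfs vfs;
    if (statvfs("/data01", &vfs) != 0) {
        perror("statvfs");
        return 1;
    }
    double used = 100.0 * (1.0 - (double)vfs.f_bavail
                                 / (double)vfs.f_blocks);
    if (used > 90.0)
        printf("disk %.1f%% full: time to clean up\n", used);
    return 0;
}
\end{verbatim}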
\subsection{daq\_netmem options}
\item[-c|--conf <path/name>] : Path to the config file (default: ../evtbuild/eb.conf).
\item[-e|--eb <start|stop>] : Start or stop Event Builders (default: start).
\item[-i|--ioc <start|stop>] : Start or stop IOCs (default: start).
 \item[-n|--nr <rangeOfEBs>] : Range of numbers of Event Builders to be started.
 \item[-v|--verb] : More verbose.
\end{description}
\item[Execution examples] :
\begin{description}
\item[Start EBs] : \\\verb!start_eb_gbe.pl -e start!
\item[Stop EBs] : \\\verb!start_eb_gbe.pl -e stop!
\item[Start 6 EBs with the numbers 0,1,2,3,5,7] : \\\verb!start_eb_gbe.pl -e start -n 0-3 -n 5 -n 7!
\item[Start Epics IOCs] : \\\verb!start_eb_gbe.pl -i start -n 1-16!
\item[Stop Epics IOCs] : \\\verb!start_eb_gbe.pl -i stop -n 1-16!
\end{description}
\end{description}
\subsection{Monitoring Event Builders}
There is a special script which can monitor the activity on the open EB ports. Usually the event builder writes a message to the log file when a given port did not receive any data: \textbf{Jun 13 15:27:38 lxhadeb02p DAQ<I>: NETMEM-2 <E> daq\_netmem: source 4, port 50006: no data received}. In addition, you can check all the ports yourself on (for example) lxhadeb02 by executing \textbf{/home/hadaq/bin/scan\_active\_ports.pl -e 2}. The script will read /tmp/eb2\_192.168.100.12.txt with all the ports and report the actual port number, the IP of the sender as well as the sender's port number. The file /tmp/eb2\_192.168.100.12.txt is copied to the EB server (lxhadeb02) by /home/hadaq/trbsoft/daq/evtbuild/start\_eb\_gbe.pl during the last EB startup.
As already mentioned, the monitoring of the EBs is based on the IOC processes running on each EB server.
\\Usually the monitoring is already running at vncserver lxhadesdaq:1. Before starting the monitoring one should set two environment variables:
\\\verb!export EPICS_CA_ADDR_LIST=192.168.103.255!
\\\verb!export EPICS_CA_AUTO_ADDR_LIST=NO!
\\The monitoring can be started by executing:
\\\verb!lxhadesdaq:/home/scs/Desktop/DAQ/EB_Monitor.desktop!
\end{itemize}
\end{itemize}
\subsection{Event Building compilation guide}
\textbf{hadaq} module needs \textbf{allParam} and \textbf{compat} modules.
Let's assume that the base directory where all the modules are located is /home/hadaq/daqsoftware/.
RFIO support is not included in the automake setup since this feature is only rarely needed (during beam time), thus configure will create a Makefile without the RFIO libs. To compile with RFIO, do the following after running configure:
\begin{itemize}
\item In evtbuild.c uncomment: \#define RFIO
\item Check if rawapin.h, rawcommn.h, rawclin.h are in the ``include'' dir
\begin{itemize}
\item Reason: This error occurs when the event builder application tries to open more than 128 sets of semaphores (when the standard setting is kernel.sem="250 32000 32 128"):
\begin{itemize}
\item 250 - SEMMSL - The maximum number of semaphores in a semaphore set
\item 32000 - SEMMNS - The maximum number of semaphores in the system
\item 32 - SEMOPM - The maximum number of operations in a single semop call
\item 128 - SEMMNI - The maximum number of semaphore sets (128 sets mean 64 shared memory segments since two semaphore sets are required per memory segment. In this case, daq\_evtbuild -m 65 will lead to an error)
\end{itemize}
\item Solution: sysctl -w kernel.sem="250 128000 32 512" (512 semaphore sets correspond to 256 shared memory segments, i.e.\ daq\_evtbuild -m works up to 256)
\end{itemize}
\subsection{Event Building software}
In the current HADES setup the Event Building system is a set of 16 processes distributed over 4 servers.
As shown in fig.~\ref{fig:ebproc} the data is received by the Receiver (daq\_netmem) and placed in a double buffer.
There is a separate shared memory segment (double buffer) for each incoming data stream.
The shared memory segments are opened by daq\_evtbuild, therefore daq\_evtbuild should be started first. Each shared memory segment is controlled by a ShmTrans structure which contains pointers to HadTuQueue structures. Each HadTuQueue structure controls a part of a double buffer for writing/reading. All the data coming to daq\_netmem are packed according to the HadTuQueue format with a header consisting of a size and decoding.
Thus the buffer is itself a HadTuQueue which contains inner HadTuQueues holding the data (subevents), see fig.~\ref{fig:ebstruct} and fig.~\ref{fig:ebqueue}. On request from the Builder (daq\_evtbuild) the pointers to the double buffer are swapped and the Builder can read the new data from the buffer, build the events and write them to the hld file or send them to the Data Movers via the RFIO mechanism. This is how it works in short.
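The creation and attachment of such a shared memory segment rely on standard POSIX calls. The following self-contained sketch (the segment name and size are made up; this is not the hadaq code itself) shows the pattern: daq\_evtbuild creates and sizes the segment, daq\_netmem attaches to the existing one, and both map it into their address space:
\begin{verbatim}
/* Sketch of the POSIX shared memory pattern; compile with
 * cc -o shm_sketch shm_sketch.c -lrt */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define SHM_NAME "/sketch_shm"       /* made-up name */
#define SHM_SIZE (2 * 1024 * 1024)   /* double buffer: 2 x 1 MB */

int main(void)
{
    /* creator side (daq_evtbuild): create and size the segment */
    int fd = shm_open(SHM_NAME, O_CREAT | O_EXCL | O_RDWR, 0666);
    if (fd >= 0) {
        if (ftruncate(fd, SHM_SIZE) == -1) { perror("ftruncate"); exit(1); }
    } else {
        /* client side (daq_netmem): attach to the existing segment */
        fd = shm_open(SHM_NAME, O_RDWR, 0666);
        if (fd == -1) { perror("shm_open"); exit(1); }
    }

    /* map the segment into the process address space */
    void *mem = mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED) { perror("mmap"); exit(1); }

    /* ... construct the HadTuQueues on top of 'mem' here ... */

    munmap(mem, SHM_SIZE);
    close(fd);
    return 0;
}
\end{verbatim}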
\item args.h/c - structure and functions for parsing the arguments for daq\_evtbuild.
\item hadtuqueue.h/c - structure and functions to manipulate HadTUQueue (Hades Transport Unit Queue). This queue is used to transport subevents.
\begin{itemize}
 \item conHadTuQueue(HadTuQueue *my, void *mem, size\_t size) - Construct a hadTuQueue to control the buffer which begins at the memory address the mem pointer points to.
 \item conHadTuQueue\_voidP(HadTuQueue *my, void *mem) - Construct a hadTuQueue for reading the buffer (called by daq\_evtbuild). As the queue itself was already created before, the header of the queue is read and the corresponding data of the hadTuQueue structure are set.
\item HadTuQueue\_push(HadTuQueue *my) - Used for writing to the buffer. Move run pointer to point at the free memory after the last element of the queue and update the size of the queue.
\item HadTuQueue\_pop(HadTuQueue *my) - Used for reading the buffer. Move run pointer to point at the next element of the queue.
\item HadTuQueue\_empty(HadTuQueue *my) - Check if the run pointer reached the end of the queue.
\item ShmTrans* ShmTrans\_open(char *name, size\_t size) - Get a pointer to an existing shared memory with name and size.
 \item ShmTrans\_recv(ShmTrans *shmem) - Get a pointer to the first element of the hadTuQueue in the buffer. If we have already run through the whole buffer, a switch of the buffers is requested.
\item ShmTrans\_send(ShmTrans *shmem) - Increment the run pointer to point at a free memory in the buffer after the last copied message.
 \item ShmTrans\_requestSpace(ShmTrans *shmem) - Here we switch the buffers (wrQueue and rdQueue pointers to buffers, see fig.~\ref{fig:ebstruct}) if it was requested (a sketch of this swap follows the present list).
 \item ShmTrans\_tryAlloc(ShmTrans *shmem, size\_t size) - Get a pointer to free memory after the last inserted message. If the space left is less than size, return NULL.
 \item ShmTrans\_free(ShmTrans *shmem) - Move the run pointer of the hadTuQueue structure (pointed to by rdQueue) to the beginning of the next internal hadTuQueue.
\end{itemize}
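The double-buffer swap performed by ShmTrans\_requestSpace() boils down to exchanging the wrQueue and rdQueue pointers. A self-contained illustration of this pointer swap (the structure and field names below are simplified stand-ins, not the actual hadaq types):
\begin{verbatim}
/* Sketch: double buffering by pointer swap. The receiver fills
 * wrQueue while the builder drains rdQueue; on a swap request
 * the two pointers are simply exchanged. */
#include <stdio.h>
#include <string.h>

typedef struct { char data[32]; } Half;

typedef struct {
    Half *wrQueue;   /* half currently written by the receiver */
    Half *rdQueue;   /* half currently read by the builder     */
} DoubleBuffer;

static void swapHalves(DoubleBuffer *db)
{
    Half *tmp   = db->wrQueue;
    db->wrQueue = db->rdQueue;
    db->rdQueue = tmp;
    db->wrQueue->data[0] = '\0';   /* new write half starts empty */
}

int main(void)
{
    Half a, b;
    strcpy(a.data, "subevents written earlier");
    b.data[0] = '\0';
    DoubleBuffer db = { &b, &a };

    printf("builder reads: %s\n", db.rdQueue->data);
    swapHalves(&db);   /* the swap requested by the builder */
    printf("receiver now fills the other half\n");
    return 0;
}
\end{verbatim}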
\textbf{Let's now see the main steps for daq\_netmem in more detail:}
\begin{itemize}
\item First we open the UDP ports and do all the necessary preparations for receiving the data from the network (for each incoming data stream):
\begin{itemize}
\item NetTrans\_create(); - Here we also initialize statistics of packets and messages for daq\_netmem
\begin{itemize}
\item rcvBufLenReq = 1 * (1 $<<$ 20); - Requested UDP socket buffer length, 1MB is quite enough.
\item setsockopt(... \&rcvBufLenReq ...)
 \item getsockopt(... \&rcvBufLenRet ...); - In case rcvBufLenRet is less than rcvBufLenReq you will get a warning: \textbf{UDP receive buffer length smaller than requested buffer length}. To fix it, execute under 'root': \textbf{sysctl -w net.core.rmem\_max=10485760} (see also the sketch after this list).
 \item bind(); - Bind the socket. Note that the EB's ports cannot be used by other applications and vice versa.
 \item my->mtuSize = 63 * 1024; - This is an important number which defines the Maximum Transfer Unit size for incoming UDP packets.
\end{itemize}
\end{itemize}
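The socket preparation described above can be reproduced with a self-contained sketch (port 50006 is taken from the log example in the monitoring subsection). Note that Linux caps the request at net.core.rmem\_max and that getsockopt() reports back twice the value actually set, which is why the returned length is compared against the doubled request:
\begin{verbatim}
/* Sketch of the per-stream UDP socket setup in daq_netmem. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd == -1) { perror("socket"); exit(1); }

    /* request a 1 MB receive buffer ... */
    int rcvBufLenReq = 1 << 20;
    setsockopt(fd, SOL_SOCKET, SO_RCVBUF,
               &rcvBufLenReq, sizeof(rcvBufLenReq));

    /* ... and check what the kernel actually granted */
    int rcvBufLenRet = 0;
    socklen_t len = sizeof(rcvBufLenRet);
    getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvBufLenRet, &len);
    if (rcvBufLenRet < 2 * rcvBufLenReq)
        fprintf(stderr, "UDP receive buffer length smaller "
                        "than requested buffer length\n");

    /* bind the receiving port; it must not be used elsewhere */
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(50006);
    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) == -1) {
        perror("bind"); exit(1);
    }
    return 0;
}
\end{verbatim}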
\begin{itemize}
\item ShmTrans\_open();
\begin{itemize}
 \item PsxShm\_open(... O\_RDWR ...); - Get the pointer to the shared memory already created by daq\_evtbuild, allocate the necessary memory for the structures.
 \item conHadTuQueue(my->wrQueue ...); - Construct a hadTuQueue structure to control writing to the buffer
 \item conHadTuQueue(my->rdQueue ...); - Construct a hadTuQueue structure to control reading from the buffer
\item sem\_open(); - Get semaphores created by daq\_evtbuild.
\end{itemize}
\end{itemize}
\item shm\_open(); - Establish a connection between a shared memory object and a file descriptor.
\item mmap(); - Map into memory.
\end{itemize}
 \item conHadTuQueue(my->wrQueue ...); - Construct a hadTuQueue structure to control writing to the buffer
 \item conHadTuQueue(my->rdQueue ...); - Construct a hadTuQueue structure to control reading from the buffer
 \item sem\_open(... O\_CREAT | O\_EXCL ...); - Create semaphores to control the access to shared memory (see the sketch after this list).
\end{itemize}
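The semaphore handshake follows the same creator/client split as the shared memory itself. A minimal sketch with a made-up semaphore name (the actual names used by hadaq are not shown here):
\begin{verbatim}
/* Sketch of the named-semaphore handshake; "/sketch_sem" is a
 * made-up name. Recall that two semaphore sets exist per memory
 * segment. Compile with cc -o sem_sketch sem_sketch.c -pthread */
#include <fcntl.h>
#include <semaphore.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* creator side (daq_evtbuild): O_CREAT | O_EXCL as in the text */
    sem_t *sem = sem_open("/sketch_sem", O_CREAT | O_EXCL, 0666, 1);
    if (sem == SEM_FAILED) {
        /* client side (daq_netmem): the semaphore already exists */
        sem = sem_open("/sketch_sem", 0);
        if (sem == SEM_FAILED) { perror("sem_open"); exit(1); }
    }

    sem_wait(sem);   /* gain access to the shared memory buffer */
    /* ... read or write the buffer here ... */
    sem_post(sem);   /* release the buffer again */

    sem_close(sem);
    return 0;
}
\end{verbatim}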
\item Worker\_addStatistic(); - Add statistics for monitoring EBs.
\item HadTuQueue\_front(my->rdQueue); - Get a pointer to the next internal hadTuQueue to be read or return NULL if end of queue/buffer is reached.
\item switchStorage(); - If we reached the end of the buffer for reading (end of external hadTuQueue) we request a double buffer switch (swap of pointers to read/write parts).
\end{itemize}
\item conHadTuQueue\_voidP(); - Construct a hadTuQueue structure to manipulate the internal hadTuQueue/buffer.
\item subEvt = HadTuQueue\_front(hadTuQueue[i]); - Get a pointer to a subevent from the internal hadTuQueue/buffer.
\item currId = SubEvt\_trigType(); - Get trigger type from the subevent of the first data source. The event builder startup script ensures that the first data source is always CTS.
\item currId = currId | (DAQVERSION $<<$ 12); - Add to the event ID the DAQ version number needed by the unpackers (see the worked example below).
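The resulting event ID is thus a simple bit composition: the trigger type sits in the low bits and the DAQ version is placed at bit 12 and above. A tiny worked example (the trigger type 0x1 and DAQVERSION 3 are example values, not prescribed ones):
\begin{verbatim}
/* Sketch: composing the event ID (example values only). */
#include <stdio.h>

#define DAQVERSION 3   /* example value */

int main(void)
{
    unsigned currId = 0x1;          /* trigger type from CTS subevent */
    currId |= (DAQVERSION << 12);   /* add the DAQ version */
    printf("event ID = 0x%04x\n", currId);   /* prints 0x3001 */
    return 0;
}
\end{verbatim}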
\subsection{Event Building control by EPICS IOC}
Off-line analysis places the following demands:
\begin{itemize}
\item Synchronization of hld files: all EB processes should open and close hld files at the same time (jitter of a couple of seconds is allowed).
\item All the hld files collected in parallel must have the same RUN ID.
where:
\begin{itemize}
\item \$maxFileSize - Master IOC generates a new RUN ID when this maximum file size is reached. This setting comes from eb.conf (EB\_FSIZE: 1500).
\item \$ebtype - Type of IOC: master/slave. There is only one master IOC, which corresponds to the EB process with the smallest number.
\item \$ebnum - Simply the number of the EB process.
\item dbLoadRecords("db/totalevtstat.db") - This record is loaded only for master IOC.
\item dbLoadRecords("db/genrunid.db","eb=\$ebnum") - This record is loaded only for master IOC.
\end{itemize}
During the startup of IOCs the start\_eb\_gbe.pl script copies st\_eb01.cmd files to the corresponding Event Builder servers - scs@lxhadeb01.gsi.de:/home/scs/ebctrl/ioc/iocBoot/iocebctrl/.
Then the startIOC() subroutine executes the following command remotely on the EB server via ssh: \textbf{bash; . /home/scs/.bashrc; cd \$ioc\_dir; screen -dmS \$screen\_name ../../bin/linux-x86\_64/ebctrl \$stcmd}. Here \$ioc\_dir = /home/scs/ebctrl/ioc/iocBoot/iocebctrl/, \$screen\_name = ioc\_eb02, \$stcmd = st\_eb02.cmd.
The real IOC startup output copied from the screen:
\end{itemize}
\subsection{Before running scripts}
\label{ora_before_running_scripts}
Before we start inserting the current boards into the Oracle Data Base we have to update the Oracle Data Base with all the existing information (in case there are new boards or different subevents).
\begin{itemize}
\item First we collect the information about all existing subevents and boards:
where 'db-hades' is the production database and 'db-hades-test' is a test database.
\subsection{Time stamp and current info on boards}
\label{ora_boards_script}
The DAQ startup script (startup.pl) writes the ascii file with all the active boards in the system and the time stamp.
The daq2ora\_client.pl script reads this ascii file on lxhadesdaq e.g. \\\verb!~/oper/daq2ora/daq2ora_2010-08-30_12.49.50.txt!
\item[-d|--daemon] : Run as a daemon.
\item[-o|--oracle] : Do insert to Oracle data base.
\item[-p|--sport port] : Port for status server.
 \item[-v|--verb] : More verbose.
\item[-f|--file file] : Given file for insertion to Oracle.
\end{description}
\item[More info] :
\end{description}
\subsection{RUN Start/Stop info}
\label{ora_run_script}
\begin{figure}
\centering
The script can be executed on lxhadesdaq with all the files from 16 EBs:
/home/hadaq/oper/runinfo2ora.pl -f /home/hadaq/oper/oper\_1/eb\_runinfo2ora\_1.txt \\ -f /home/hadaq/oper/oper\_2/eb\_runinfo2ora\_2.txt -f /home/hadaq/oper/oper\_3/eb\_runinfo2ora\_3.txt -f /home/hadaq/oper/oper\_4/eb\_runinfo2ora\_4.txt -f /home/hadaq/oper/oper\_1/eb\_runinfo2ora\_5.txt -f /home/hadaq/oper/oper\_2/eb\_runinfo2ora\_6.txt -f /home/hadaq/oper/oper\_3/eb\_runinfo2ora\_7.txt -f /home/hadaq/oper/oper\_4/eb\_runinfo2ora\_8.txt -f /home/hadaq/oper/oper\_1/eb\_runinfo2ora\_9.txt -f /home/hadaq/oper/oper\_2/eb\_runinfo2ora\_10.txt -f /home/hadaq/oper/oper\_3/eb\_runinfo2ora\_11.txt -f /home/hadaq/oper/oper\_4/eb\_runinfo2ora\_12.txt -f /home/hadaq/oper/oper\_4/eb\_runinfo2ora\_13.txt -f /home/hadaq/oper/oper\_2/eb\_runinfo2ora\_14.txt -f /home/hadaq/oper/oper\_3/eb\_runinfo2ora\_15.txt -f /home/hadaq/oper/oper\_1/eb\_runinfo2ora\_16.txt
\subsection{Archiving data from EB disks to tape}
\label{eb_data_tape}

There are two scripts which can be used to archive the data from the EB disks to tape.
These scripts can also be checked out from CVS:
\begin{itemize}
\item CVS/Root: :ext:hadaq@lxi001.gsi.de:/misc/hadesprojects/daq/cvsroot/
\item CVS/Repo: tools
\end{itemize}

The first script, \textbf{lxhadeb01:/home/hadaq/bin/archived\_data.pl}, searches for the files to be archived. Run \textbf{archived\_data.pl -h} for help.
For example, to check all hld files on tape from the archive hadesoct10raw with prefix 'be', one can execute on lxhadeb01: \textbf{archived\_data.pl -a hadesoct10raw -p be -o tape}. The script outputs several hld file lists:
\begin{itemize}
\item Files on TAPE (/tmp/Files\_on\_TAPE\_can\_be\_removed.txt):
\item Files on TAPE have different size:
\item Files in CACHE:
\item Files in CACHE have different size:
\item Files with other status:
\item Files with other status and different size:
\end{itemize}

The first file list (also copied to the file /tmp/Files\_on\_TAPE\_can\_be\_removed.txt) can be used for deleting the files from the EB disks, as those hld files are already archived on tape and have the correct sizes. Another list (Files in CACHE) indicates hld files which are still in the cache of a Data Mover and in the process of being archived.
The second script, \textbf{lxhadeb01:/home/hadaq/bin/data2tape.pl}, archives the data to tape. See \textbf{data2tape.pl -h} for help. Here are two examples of usage:
\begin{itemize}
\item Archive all hld files with prefixes 'st' and 'be' from September 4th, 2010 to the archive hadessep10raw:
\begin{itemize}
\item data2tape.pl -p be -p st -s 2010-09-04\_00:00:00 -e 2010-09-04\_23:59:59 -a hadessep10raw
\end{itemize}
\item Archive all hld files from the list /home/hadaq/kgoebel.txt to the archive "hadesuser/kgoebel/hld/":
\begin{itemize}
\item data2tape.pl -a hadesuser -d kgoebel/hld -f /home/hadaq/kgoebel.txt
\end{itemize}
\end{itemize}
\ No newline at end of file