Currently (2017), TRBNet-Hubs have an inherent weakness: the data flowing
into the hubs is not checked for sanity (there is, for example, no
CRC check). Every bogus network packet, for example one produced by
-TRBNet-Endpoints FPGAs which suffer from a voltage drop on the core supply, or
+TRBNet-Endpoint FPGAs which suffer from a voltage drop on the core supply, or
from an SEU, can cause the TRBNet-Hub to crash. In the long term it is planned
to reduce these crashes by sanity-checking the data (specifically the TRBNet-headers)
in the media interfaces.
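Conceptually, such a check could look like the following minimal sketch. It is
only an illustration in Python: the header layout, the CRC-8 polynomial and the
function names are assumptions, and the actual check would be implemented in
the VHDL of the media interfaces, not in software.
\begin{verbatim}
# Minimal sketch, not the real TRBNet logic: header layout, CRC-8 polynomial
# and names are assumed for illustration only; the real check would live in
# the VHDL of the media interface.

def crc8(data: bytes, poly: int = 0x07, init: int = 0x00) -> int:
    """Generic bitwise CRC-8 over a byte string."""
    crc = init
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def header_is_sane(header: bytes, received_crc: int) -> bool:
    """Discard the packet instead of forwarding it to the hub logic
    if the transmitted checksum does not match the recomputed one."""
    return crc8(header) == received_crc
\end{verbatim}
A media interface applying such a check would simply drop packets whose
headers fail the comparison, so that a header corrupted by a voltage drop or
an SEU never reaches the hub state machine.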
sources and generates from them a timing signal and the needed internal
TRBNet trigger, which is then transported to all slaves (in your case, all of
them are on the same TRB3). They react to the trigger and extract the data from
-the front end and transport it to the central FPGA, which is you *special*
+the front end and transport it to the central FPGA, which in your *special*
case (only one TRB3) is the same FPGA the CTS is running in. There the data
is collected from all four peripheral FPGAs and combined into a UDP frame,
which is then sent via many Ethernet packets to the Eventbuilder.
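On the receiving side, the Eventbuilder essentially listens for these UDP
frames on its network interface. The following Python sketch only illustrates
that idea; the port number and buffer size are assumptions, and it is not the
actual Eventbuilder software nor the real GbE/UDP format produced by the
central FPGA.
\begin{verbatim}
# Illustrative receiver sketch only; port and buffer size are assumptions,
# not the real Eventbuilder configuration.
import socket

UDP_PORT = 50000        # assumed port, must match the GbE configuration
BUFFER_SIZE = 65535     # one reassembled UDP datagram (the kernel puts the
                        # Ethernet/IP fragments back together)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", UDP_PORT))

while True:
    datagram, sender = sock.recvfrom(BUFFER_SIZE)
    # hand the received data over to the event building stage
    print(f"received {len(datagram)} bytes from {sender}")
\end{verbatim}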
urlcolor=darkblue]{hyperref}
\usepackage{cite}
+%%\usepackage{cancel}
+%%\usepackage{ulem}
+
% W{<width>}: fixed-width paragraph column, vertically and horizontally centred
\newcolumntype{W}[1]{>{\centering\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}
% L: plain stretchable tabularx column; C: horizontally centred X column
\newcolumntype{L}{>{\arraybackslash}X}
\newcolumntype{C}{>{\centering\arraybackslash}X}