The concept of using standard tools is not intended for production use, but for demonstration and teaching. Using (named) pipes would allow separating the processes better in time and CPU load.
The XML file contains the information for the RedBox recording process, not only the data structure of the rng files.
The XML config file contains the definitions of variable types, variable names and record types in separate sections.
All binary and bit interpretation is done with character tools working on hex output or "1001" strings, as with bitstruct. The data processing is done with gawk (version > 4.1) scripts. With standard tools it should be possible to define a set of information (variables) and to extract it, similar to an output (CSV export) configuration in ADS4. The application of the "Rules" (additional algorithmic statements) can also be included in an awk script. Currently this command line is used:
od -t x2 -v -j 512 --endian=big $datapath/$filemask | tr "[a-f]" "[A-F]" | gawk -f h0.awk | gawk -f h1.awk | ...
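As a minimal illustration of the first stage, assuming GNU coreutils od (the --endian option needs version 8.23 or newer), a four-byte sample can be dumped as big-endian 16-bit words:

```shell
# Dump four sample bytes as big-endian 16-bit words (sketch; the real
# pipeline reads $datapath/$filemask and skips a 512-byte header with -j).
printf '\020\003\020\002' > /tmp/sample.bin      # bytes 0x10 0x03 0x10 0x02
od -A n -t x2 -v --endian=big /tmp/sample.bin
# prints the two words: 1003 1002
```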
Use UsarExtractor to extract the tar archive with the rng files and mfr1.cfg (XML format). The program runs under 64-bit Linux and CoCalc.com; source code is available somewhere in git. Workaround for the "checksum error": use ADS4 to pre-load the usar file, then copy the rng and cfg files from the temp folder. The tool chain also works with such a single rng file.
Use od for the hex output; od has the advantage that it can adjust the byte order (which is needed here).
The tr step is only needed if upper case is required by the next processing steps.
The hex character stream is separated into bytes with the separator character ';' to allow searching for byte patterns in the next step. Without the separators, a search for 10 03 10 02 could get stuck on a sequence like 0100310020, where 1003102 matches at an odd (not even) character position.
Here the DLE unstuffing gsub("10;10","10"); and the search for the record separator are done. The separator used is RS=";10;03;10;02;": the byte sequence 0x10 0x03 0x10 0x02 marks the end of the last record and the start of a new one. Field separators are removed by gsub(";","",$0);. The output is hex records with the line separator "\n", to be used with other tools.
The output definition and the details for displaying curves in a graph are defined in an XML file by ADS4. This can be used as a definition of the record types and fields that should be decoded. For each field a decoding statement is generated. Currently this is generated from a shorter, reference-free definition format using an awk script (generate.awk). The output code, including some scaling, can be handled in this step. (Code examples can be found in bits.awk.)
In a histogram with 256 bins, the record length values are counted and at the end written to the file hist.txt. (The code can be found in h1.awk.)
Table lookup uses gawk arrays; accepting strings as indices is standard awk. Variables defined inside function() bodies are global unless listed as extra parameters (arrggg...). An @include "decode.inc" statement is used; the file is generated by generate.awk. Record type RT=0x00 is used for the header value definitions. True multi-dimensional arrays are a gawk extension; standard awk has two-dimensional indexing with x,y converted to a single string joined by SUBSEP. The check with Alpine Linux is still open; there only a restricted awk is provided. Currently the first record is lost.
The tool chain can also convert CadHeader.bin (RT=0xFF) into a sequence of gawk -v varname=value assignments, so that all header information is available in awk, supplied via the command line variable $details. Check cardheader.awk and the corresponding line in the script file.
Outlook: a dynamic model of a mass point and a Kalman filter to reduce the data; customer/driver info; loco information; wheel sensor correction; multiple scales/resolutions of information.