Our full technical support staff does not monitor this forum. If you need assistance from a member of our staff, please submit your question from the Ask a Question page.


Log in or register to post/reply in the forum.

How to mirror the old CardOut workflow in TableFile


mwassmer Nov 1, 2017 12:16 AM

Something I liked about CardOut is that the memory card effectively became an extension of internal datalogger memory, and it didn't require much, if any, manual intervention. When a new program was uploaded, everything remained in place as long as "Retain Data" was selected and the table structure didn't change. If the table structure did change, it forced a delete of the data in internal memory and on the card. I liked not having to go into File Control to manage the CRD files.

I can't figure out how to get TableFile (using Option 64) to work the same way. I'm particularly having trouble with the following:

  • When a new program has a table structure identical to the previous program's, a new file is created on CRD regardless of the TableFile parameters. If, for example, I had 100 days of data in the old file (before the new program) and 1 day of data in the new file (after the new program), I assume the retrieval tools would only retrieve the 1 day of new data...correct? If so, this limits the usefulness of the data retrieval tools.
  • When a new program has a different table structure than the previous program, a new file is created on CRD regardless of which TableFile parameters are used. Because new space keeps being allocated with each new program upload, I eventually get messages telling me that the card is full. If the new file overwrote the old file (as with CardOut), this wouldn't happen.

Is there a way to configure TableFile so that it works exactly like CardOut?


sonoautomated Nov 10, 2017 05:35 PM

Hi Mike,

The short answer is no. The TableFile and CardOut instructions have different advantages and different methods for writing data to a card. We recently published a blog article about writing data to a card; perhaps it has information that can help you.

https://www.campbellsci.com/blog/store-datalogger-data-to-memory-card

Regards,


mwassmer Nov 10, 2017 06:03 PM

The blog post does not discuss the issues I raised in the forum post, which is why I submitted the forum post.

Could you please provide a detailed response to the two issues I raised in the following bullet points?

  • When a new program has a table structure identical to the previous program's, a new file is created on CRD regardless of the TableFile parameters. If, for example, I had 100 days of data in the old file (before the new program) and 1 day of data in the new file (after the new program), I assume the retrieval tools would only retrieve the 1 day of new data...correct? If so, this limits the usefulness of the data retrieval tools. What is the recommended workflow for dealing with this problem?
  • When a new program has a different table structure than the previous program, a new file is created on CRD regardless of which TableFile parameters are used. Because new space keeps being allocated with each new program upload, I eventually get messages telling me that the card is full. If the new file overwrote the old file (as with CardOut), this wouldn't happen. What is the recommended workflow for avoiding these "card is full" issues?


DAM Nov 10, 2017 06:37 PM

As you are aware, TableFile using option 64 is a "hybrid" CardOut/TableFile combination implemented to accommodate what were, at the time, very large cards (>2 GB). Since no single file can exceed 2 GB, the intent was to break up the CardOut-style storage to deal with large cards while still providing the "extension of internal memory" behavior you point out.

Regarding your two bullet points above, the first (keeping the data if nothing changes) does seem like a reasonable thing to investigate. This is where the "hybridization" is perhaps biased too much toward TableFile rather than CardOut. I will look more into making an OS change here.

The second item is a bit trickier, because this would require deleting old data. We have taken the stand not to delete data from another program unless that option is selected in the download operation. This may be left in your hands at download time: if you know things have changed, select the delete existing files option.


mwassmer Nov 10, 2017 07:13 PM

Thank you for the helpful reply.

In my first bullet point, I asked a question that related back to my example: "I assume the retrieval tools would only retrieve the 1 day of new data...correct?" From your reply, I assume the answer is YES, but can you please confirm?

If CSI could write a supplemental blog post or white paper that outlines recommended workflows for common use cases, such as the two I outlined, that would be greatly appreciated. The first blog post was an excellent description of the motivation for and specifications of the new TableFile instruction.


DAM Nov 13, 2017 03:40 PM

The data collection tools within LoggerNet would only retrieve the data that is currently "visible" in the active file (the current day for your application). However, there is a file retrieval mechanism in LoggerNet that can be configured to collect the older data files.

So the answer to your question is yes. But there is a way to work around it.

Another option is to have the datalogger, under program control, FTP the old files to a server.
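As an illustration of that FTP approach, here is a minimal sketch in CRBasic. The server address, credentials, and remote path are placeholders, and the PutGetOption code is an assumption; check the FTPClient entry in the CRBasic help for your OS version before using it.

```crbasic
'Sketch only: server, credentials, and remote name are placeholders.
Public OutStat As Boolean, LastFileName As String * 64
Public FTPResult

'...inside the main Scan, after CallTable:
If OutStat Then
  'Send the just-closed file to the server. The final argument
  '(0, assumed here to mean "put") should be verified against
  'the CRBasic help for your datalogger OS.
  FTPResult = FTPClient ("ftp.example.com","user","password", _
                         LastFileName,"archive.dat",0)
EndIf
```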


Carolyn Nov 13, 2017 04:09 PM

Mike, 

It can be confusing to understand the differences between TableFile option 64 and CardOut.

In order to answer your question, I need to know what you mean by 'retrieval tools'. Do you mean LoggerNet Collect Now? Or are you talking about retrieving an entire closed file? I hope you mean the former (i.e., LoggerNet Collect Now) because that is very simple, and it is not handled differently between CardOut and TableFile option 64.

The operation of TableFile option 64 is very similar to CardOut in that space for the table is pre-allocated. The active file remains open while it is being written to, and is closed when the Interval condition is met (at which point space is allocated for a new table and a new file is opened). Each file written to the card can be up to 2 GB in size, and the most recent file is treated by the datalogger as an extension of internal memory. Thus, the data in the active file can be collected using software or accessed using data table access syntax (once a file is closed, it is not accessible in this way).

As for your second question regarding avoiding filling up the card with TableFile files, that is managed with the MaxFiles parameter of TableFile. Following the directions below, you can set up the files either to ring (overwrite the oldest files when the card is full) or to fill and stop, just as with CardOut.

The MaxFiles parameter is used to specify the maximum number of files to retain on the storage device:

- When MaxFiles is reached, the oldest file is deleted prior to writing the new one. If the destination drive is not large enough to accommodate MaxFiles, the datalogger adjusts MaxFiles internally (though the parameter is not changed in the instruction).

- If MaxFiles is set to -1, no limit is set for the maximum number of files that can be written until the storage device is full. Once the device is full, the oldest file is deleted prior to writing the new one. Thus, -1 is analogous to an auto-allocated ring memory mode.

- If MaxFiles is set to -2, there is no limit set for the maximum number of files that can be written, but once the storage device is full, no new files are written. Thus, -2 is analogous to an auto-allocated fill and stop mode.

- If MaxFiles is set to 0, FileName is not incremented, and the file is overwritten with a new file each time.
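To make this concrete, here is a minimal CRBasic sketch of a table using option 64 in ring mode (MaxFiles = -1). The table name, variable, and intervals are hypothetical; adapt them to your program.

```crbasic
'Sketch only: table name, variable, and intervals are hypothetical.
Public PTemp
Public OutStat As Boolean, LastFileName As String * 64

DataTable (OneMin,True,-1)
  DataInterval (0,1,Min,10)
  'Option 64 writes CardOut-style binary files to the card.
  'MaxFiles = -1 rings: the oldest file is deleted once the card
  'is full. A file is closed and a new one opened every 1 Day
  '(NumRecs = 0, TimeIntoInterval = 0, Interval = 1, Units = Day).
  TableFile ("CRD:OneMin",64,-1,0,0,1,Day,OutStat,LastFileName)
  Sample (1,PTemp,FP2)
EndTable
```

With MaxFiles = -2 instead, the same table would fill the card and then stop writing new files.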

The CRBasic Help files have recently been updated to hopefully explain this a little more clearly. 

The information below might be helpful:

TableFile option 64 Mode Operation

- File pre-allocated like CardOut
- Data copied from RAM in chunks to the open file (like CardOut)

Is one method of writing data to a card better than the other?

An advantage of the CardOut() instruction is that it is very simple to add to your program. 

In many applications, however, the TableFile() instruction with Option 64 has advantages over the CardOut() instruction.

These advantages include:

- Allowing multiple small files to be written from the same data table so that storage for a single table can exceed 2 GB. The TableFile() instruction controls the size of its output files through the NumRecs, TimeIntoInterval, and Interval parameters.

- Faster compile times when small file sizes are specified

- Easy retrieval of closed files via the File Control utility, FTP, or email

- Closed files are safe files. Any time a file is open for writing, it can become corrupted if a power loss occurs or if the writing is interrupted for any reason. TableFile() Option 64 will close the files at the programmed time or record interval. Once those files are closed, they are safe from this type of corruption.


Carolyn Nov 13, 2017 04:49 PM

BTW, in addition to the blog, there is also a white paper on TableFile option 64 that might be useful.

https://s.campbellsci.com/documents/us/technical-papers/write-high-frequency-data-to-cf-cards.pdf


mwassmer Nov 14, 2017 08:39 PM

Thank you for the reply, Carolyn.

In response to your question, I am referring to Collect Now, scheduled collection, etc.

Sometimes, I lose connectivity to my datalogger for a period of time. If a new file is created on the CRD before I restore connectivity, the data from the old (closed) file is not retrieved by the data retrieval tools. To avoid this problem, I make sure the file size is much larger than any anticipated connectivity gap.

With a large file size, the problem I encounter is that, when I upload a revised version of the program that either changes or doesn't change the data table structure, a new file is automatically created and all the space for the just closed file remains allocated. It doesn't take many program uploads before I end up with an "Insufficient Card memory for Table CRD:FileName.dat" error under "Card Status". The other problem I encounter is that a new file is created even if the data table doesn't change.


Carolyn Nov 15, 2017 05:11 PM

Mike,

Thanks for the clarifications. I better understand your frustration now. In the case of sending a new program, it will be best to manually delete the old files first. 

As Dave mentioned above, LoggerNet's File Retrieval tab has an option for retrieving and deleting the files. I have seen this work well for customers, provided that precautions are taken. Care must be taken with this approach because when the TableFile interval hits, the current file is closed and a new file is opened. During this window, there is a vulnerability to corruption if the card is accessed remotely; it is possible that LoggerNet will attempt to retrieve and/or delete a file that is not fully opened or closed. To reduce the chance of file corruption, you should put an offset into the file retrieval/deletion process, i.e., do the File Retrieval at 15 or 30 minutes into the hour, as opposed to on the hour. So, for example, if the Base Time is set at 12:15 am and the interval is set for 1 day, file retrieval will be attempted at 12:15 am, instead of at midnight.

The other thing that I recommend to avoid TableFile corruption via LoggerNet's file retrieval option is to have your program rename the file when the TableFile output is complete (use the LastFileName and OutStat parameters of TableFile), and then let LoggerNet retrieve and delete this renamed file.

Something like:

 If OutStat Then
   FileRename (LastFileName,NewFileName)
 EndIf

This is important because after the first file is closed and the second file is opened, there will always be two files with the same name (just a different number) on the CRD: one closed, and the new file opened for writing. There is no way to tell LoggerNet to retrieve and delete only the closed file, because you must use a wildcard for the file number.
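Putting the rename pattern into context, a fuller sketch might look like the following. The table name, the "done_" prefix, and the assumption that LastFileName begins with "CRD:" are all hypothetical; verify the returned path format against the CRBasic help.

```crbasic
'Sketch only: table name and "done_" prefix are hypothetical.
Public PTemp
Public OutStat As Boolean, LastFileName As String * 64

DataTable (OneMin,True,-1)
  DataInterval (0,1,Min,10)
  TableFile ("CRD:OneMin",64,-1,0,0,1,Day,OutStat,LastFileName)
  Sample (1,PTemp,FP2)
EndTable

BeginProg
  Scan (1,Sec,0,0)
    PTemp = 25  'placeholder measurement
    CallTable OneMin
    'When a file closes, rename it so File Retrieval can match
    'only finished files (e.g. CRD:done_*.dat) and never touch
    'the file currently open for writing. Mid() strips the
    'assumed "CRD:" prefix from the returned path.
    If OutStat Then
      FileRename (LastFileName,"CRD:done_" & Mid (LastFileName,5,59))
    EndIf
  NextScan
EndProg
```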


mwassmer Nov 15, 2017 08:47 PM

Thank you for the explanation, but I'm still quite confused. If I spent a few hours playing around with it, I might be able to come up with an acceptable workflow, but I don't think it's worth the trouble. I went back and read the blog post about CardOut vs TableFile, and it seems like the added complexity of using TableFile does not justify the benefits for my application. Therefore, I'll go back to CardOut for the foreseeable future.
