Writing to/overwriting specific line in .csv or .txt file in function node

I am using Node-RED to read a string of data from a serial device and write the data into a .csv file that can be read by a specific method in a script in Ignition Edge. I won’t go into detail about the Ignition side of things, but basically, the .csv file needs to be formatted exactly as follows (without the comments I’ve added for this post):

"#NAMES"
"Cycle number","Date","Time","Energy" //name of each column
"#TYPES"
"I","str","str", "I" //alias for javascript data class of each column
"#ROWS","4" //number of rows (must match actual number of rows)
"7621","01/31/25","10:47:54 AM","775"
"7622","01/31/25","10:49:07 AM","740"
"7623","01/31/25","10:49:58 AM","760"
"7624","01/31/25","10:50:57 AM","785"

I want this .csv file to always contain data from the 3000 most recent cycles of the serial device (the device outputs 1 string per cycle). I am able to use a function node to take the string and format it into the required format for each data row:

  7631       01/31/25 11:03:46 AM 765   //raw string from device
"7631","01/31/25","11:03:46 AM","765" //formatted for .csv file

But to keep only the 3000 most recent entries, on each cycle I need to append the new line of data to the last line of the .csv file, and “delete” line 6 of the .csv file (the oldest line of data).

A “nice to have” would be to update the number of rows in line 5 of the .csv file until the number of rows of data reaches 3000 (and until this point is reached, not delete the oldest data from line 6). But I could also just have dummy data in there to start. The machine will reach 3000 cycles pretty quickly.

The method I’m using in Ignition is pretty inflexible so I’m not able to only read certain lines of the .csv file there. I also can’t have this dataset become infinitely large in Ignition.

Is this possible to perform within a function node?

This is absolutely possible with a function node; it’s just a question of efficiency…
You mention that the device outputs one string per cycle and that the machine will hit 3000 cycles pretty quickly. What’s the rate of cycles, i.e. how often will this code be running?

In any case, my first thought is to split the CSV string into an array, one element per line, then drop the oldest (top) data-line element when there are too many values. Something like this:

var lines = msg.payload.split('\n');  // one element per line, plus a trailing '' because the file ends with a newline
if (lines.length > 4) lines.splice(5,1); // delete 1 element at index 5 (line 6, the oldest data row)
lines[4] = '"#ROWS","' + (lines.length-6) + '"'; // row count = total elements minus 5 header lines and the trailing empty element
msg.payload = lines.join('\n');
return msg;

I put together a flow to test this and it seems to be working (I used lines.length > 4 instead of 3004 just to test the code without a bunch of data).

Here’s that flow if you want to check my work:

[{"id":"8fef35d2be7e4286","type":"function","z":"a9c9e672800e138d","name":"set data","func":"msg.payload = '\"#NAMES\"\\n'+\n'\"Cycle number\", \"Date\", \"Time\", \"Energy\"\\n'+\n'\"#TYPES\"\\n'+\n'\"I\", \"str\", \"str\", \"I\"\\n'+\n'\"#ROWS\", \"4\"\\n'+\n'\"7621\", \"01/31/25\", \"10:47:54 AM\", \"775\"\\n'+\n'\"7622\", \"01/31/25\", \"10:49:07 AM\", \"740\"\\n'+\n'\"7623\", \"01/31/25\", \"10:49:58 AM\", \"760\"\\n'+\n'\"7624\", \"01/31/25\", \"10:50:57 AM\", \"785\"\\n';\nreturn msg;","outputs":1,"timeout":0,"noerr":0,"initialize":"","finalize":"","libs":[],"x":480,"y":560,"wires":[["1e554dbcf7930c77","6ca4869f4f78bcdf"]]},{"id":"7ef7ffe1469f6dbc","type":"inject","z":"a9c9e672800e138d","name":"","props":[],"repeat":"","crontab":"","once":false,"onceDelay":0.1,"topic":"","x":330,"y":560,"wires":[["8fef35d2be7e4286"]]},{"id":"1e554dbcf7930c77","type":"function","z":"a9c9e672800e138d","name":"drop extra lines","func":"var lines = msg.payload.split('\\n');\nif (lines.length > 4) lines.splice(5,1);\nlines[4] = '\"#ROWS\", \"' + (lines.length-6) + '\"';\nmsg.payload = lines.join('\\n');\nreturn msg;","outputs":1,"timeout":0,"noerr":0,"initialize":"","finalize":"","libs":[],"x":660,"y":560,"wires":[["bd68c622bde1b6e9"]]},{"id":"6ca4869f4f78bcdf","type":"debug","z":"a9c9e672800e138d","name":"","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"true","targetType":"full","statusVal":"","statusType":"auto","x":575,"y":600,"wires":[],"l":false},{"id":"bd68c622bde1b6e9","type":"debug","z":"a9c9e672800e138d","name":"","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"true","targetType":"full","statusVal":"","statusType":"auto","x":775,"y":560,"wires":[],"l":false}]

EDIT: For your “nice to have”, using arrays also gives you a convenient way to count the lines and update your "#ROWS" with just one extra line:

lines[4] = '"#ROWS","' + (lines.length-6) + '"';

Assuming I’m understanding what you’re looking for there.
Let me know if that does what you’re expecting.

The machine cycles roughly once per minute while it is running.

OK sweet, once per minute should be fine; no need to make it hyper-efficient.

Give this method a test and let us know if it does what you need; there are some other (possibly less elegant) approaches that come to mind if the array method doesn’t work.

Thank you. This would probably be easier if I had a full Ignition license and could work with a database, but I still haven’t gotten approval for it 🙁

Trying to wrap my head around this. What would your script in “drop extra lines” look like if you got rid of the “nice to have” and started with a .csv file that already has 3000 data rows?

Where in this flow would I append a new string to the array?

I was assuming you append it before putting it into this function node, and the function node just handles the size management and row counting. But that said, something simple like:

msg.payload += '\n' + newString;   // or newString + '\n' if your stored CSV already ends with a newline

Before doing the split would work, where newString is your new string and your main CSV is in msg.payload.


EDIT: If I’m understanding your question correctly: if the file already has 3000 or more lines, you’d probably need to put the splice in a loop, i.e.

while (lines.length > 3006) lines.splice(5,1);   // 3006 = 5 header lines + 3000 data rows + 1 trailing empty element

I haven’t tested that, just something to give you an idea of where to start.

I guess my concern with that approach is: wouldn’t I end up with a message that grows infinitely as more cycles are run?

No, that’s what the splice is there for — splice() removes items from the array.
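
For example, with a trimmed-down array (this is just standard Array.prototype.splice behaviour):

var a = ['"#NAMES"', '"Cycle number","Date","Time","Energy"', '"row1"', '"row2"', '"row3"'];
a.splice(2, 1);   // removes 1 element at index 2 ("row1", the oldest data row here)
// a is now ['"#NAMES"', '"Cycle number","Date","Time","Energy"', '"row2"', '"row3"'],
// so the array (and the string you store) stays bounded no matter how many cycles run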

Ah okay, so then would I need to store my working array as a context variable so I can reference (and update) it each cycle in that main function node?

Something like:

var array = flow.get("working_array");
array[3005] = msg.payload;                       // overwrite the spare last slot with the new row
while (array.length > 3005) array.splice(5, 1);  // drop the oldest data row, keeping 5 headers + 3000 rows
msg.payload = array.join('\n');
array.push('');                                  // restore the spare slot for the next cycle
flow.set("working_array", array);
return msg;

Assuming:

msg.payload = '"7631","01/31/25","11:03:46 AM","765"' // unique values each cycle

And “working_array” is an array stored in context data (starting out with 3000 rows of data or empty strings).

EDIT: I think you’d have to initialize your array with an extra row.

EDIT 2: And add a blank element to the end of your array after splicing each time, which I think you can do with .push().

EDIT 3: I see that you could just use .join to turn the array into a string, write that to your msg.payload, and then write msg.payload to the .csv file with a Write File node.

It might be easier to just store the string (the return value that becomes the file contents) and append new lines with the code I mentioned earlier, more like this:

var savedString = flow.get("working_string"); // assumes this was initialized with the header lines and no trailing newline
savedString += '\n' + msg.payload;            // append the new row
var array = savedString.split('\n');
while (array.length > 3005) array.splice(5, 1);     // 3005 = 5 header lines + 3000 data rows
array[4] = '"#ROWS","' + (array.length - 5) + '"';  // row count = total lines minus the 5 header lines
msg.payload = array.join('\n');
flow.set("working_string", msg.payload);      // store the updated CSV for the next cycle
return msg;

That way you can avoid having an array with empty strings.

My thinking is that if you were to overwrite the 3005th entry, then with, say, only 1000 real values so far, they would sit at the bottom of your list of rows rather than the top. That might not matter, but it’d make the CSV a bit less readable until your list reaches max capacity.

If you don’t mind scrolling your CSV then storing the array instead should work, just thinking about the user experience once the file is created.

I was able to get this all to work and am reading data from the CSV in Ignition!

On Start script:

var working_array = new Array(3006); // 5 header lines + 3000 data rows + 1 spare slot for the incoming row
working_array[0] = '"#NAMES"';
working_array[1] = '"cycle_number","date","time","AM/PM","act_time","act_energy","peak_power","abs_position","collapse_distance","pressure","mode","downspeed"';
working_array[2] = '"#TYPES"';
working_array[3] = '"I","str","str","str","I","I","d","d","d","I","str","d"';
working_array[4] = '"#ROWS","3000"';
// Fill the data rows with dummy values until real cycles replace them
for (var i = 5; i < working_array.length; i++) {
    working_array[i] = '"0","date","time","AM/PM","0","0","0.0","0.0","0.0","0","mode","0.0"';
}
flow.set("working_array", working_array);

On Message script:

var array = flow.get("working_array");
array[3005] = msg.payload;                       // overwrite the spare last slot with the newest row
while (array.length > 3005) array.splice(5, 1);  // drop the oldest data row, keeping 5 headers + 3000 rows
msg.payload = array.join('\n');                  // full file contents to pass to the Write File node
array.push('');                                  // add the spare slot back for the next cycle
flow.set("working_array", array);
return msg;

Then I write msg.payload to a .csv file with a Write File node.

I simulated some cycles with an Inject node and it doesn’t seem to slow things down at all, even when I’m injecting every couple of seconds.

I would attach my whole flow but there’s a bunch of other junk in there.

Thanks for this follow-up, looks good! Glad you were able to get it working and aren’t seeing any performance issues.
