[Plugin] CA User Scripts



This is a great plugin.

 

A question about scripts with cron jobs... I set up a "powerdown" script this morning that simply runs powerdown at 11am. What is the process for this running? I clicked Apply and then left it... do I then need to run the script in the background?

 

Also, when deleting a script with a cron job, I presume the cron job will also be taken care of?

 

Thanks.

Link to comment
22 minutes ago, johnieutah said:

A question about scripts with cron jobs... I set up a "powerdown" script this morning that simply runs powerdown at 11am. What is the process for this running? I clicked Apply and then left it... do I then need to run the script in the background?

Nope.  Set it and forget it

 

22 minutes ago, johnieutah said:

Also, when deleting a script with a cron job, I presume the cron job will also be taken care of?

It should. I'd be surprised if I forgot to handle that situation (it's been a while since I looked at the code), but if you see an update tonight, you'll know why.

Link to comment
51 minutes ago, johnieutah said:

OK, so I set up a simple script:

#!/bin/bash
powerdown

 

Set up the cron schedule to run powerdown at 11am:

0 11 * * *

 

And then hit Apply. But nothing happened... any way I can check the logs or something?!

 

Cheers.

 

 

Until it runs you won't see anything.  You can see the cron line added by 

cat /etc/cron.d/root


 

BTW, deleting a script won't actually delete the cron entry, but the first thing I do when executing a custom cron is check whether the script still exists, and if it doesn't, immediately exit. Far simpler programming. But anything you do that requires you to hit the Apply button will wind up deleting the cron entry.
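
A minimal sketch of that guard, with a hypothetical script path (the real path the plugin uses may differ):

#!/bin/bash
# Hypothetical cron wrapper: bail out immediately if the user script has been deleted
SCRIPT="/boot/config/plugins/user.scripts/scripts/powerdown_11am/script"   # placeholder path
[ -f "$SCRIPT" ] || exit 0
bash "$SCRIPT"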

Edited by Squid
Link to comment
  • 3 weeks later...

Hello,

 

I have had an issue recently... when I click on the log icon to see the output from the most recent scheduled run of any of my scripts, the pop-up shows, but the log never appears... it just hangs there and looks like it is waiting for my server...

 

I can download the zip of my logs correctly, but not the recent log...

 

Is there anything I should try to resolve this?

 

I am on the latest 6.4 RC...

 

Thanks!!

Link to comment
If they're too big to download, then you've got a different issue, namely that your browser is choking on trying to render the entire file.

Not near a computer right now but IIRC they're stored at /tmp/user.scripts/logs
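
If the pop-up keeps choking, a large log can be inspected from the console instead (a quick sketch; the file name is a placeholder, since each script gets its own log):

ls /tmp/user.scripts/logs
tail -n 100 /tmp/user.scripts/logs/yourScriptLog   # replace with the actual file name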



Thanks, they are located there... Yes, the browser is choking on the 10MB text files for some of my script logs... Is there another location for the smaller logs? Not the zip files, but the shorter logs you can see from the User Scripts plugin...

Thanks for your help!


Link to comment
  • 2 weeks later...
1 hour ago, wgstarks said:

I see that the .DS Cleanup script has a popup window and warning about the script aborting if the window is closed. Does this mean that I can't set the script to run on a schedule?

All scripts show that when run in the foreground.

Link to comment
  • 2 weeks later...

Is there a way to make a script run only if it is not already running? I haven't been able to pin down a unique name in ps to add a working check into the script. I intend to use rclone to copy any new files to the local server every 30 minutes or every hour or so. The problem is that sometimes a transfer takes longer than that to complete; the scripts I use on a different system can accommodate that, but I can't figure it out here.

Link to comment
1 hour ago, Squid said:

Safest way would be for your script to record its PID somewhere on start, delete the record on exit, and when starting check to see if the record exists and exit if it does.

 

Good idea, shouldn't need a PID then, just a file. Check for a file called "running"; if it's there, exit. Otherwise make a file called "running", run the script, and finally delete the file. I'm no scripting pro, does that sound right to everyone else? lol
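
A minimal sketch of that lock-file approach (the paths and the rclone command are placeholders), with a trap so the lock is removed even if the script errors out partway:

#!/bin/bash
LOCK=/tmp/rclone_copy.lock                       # placeholder lock file
if [ -f "$LOCK" ]; then
  echo "Previous run still active, exiting"
  exit 0
fi
touch "$LOCK"
trap 'rm -f "$LOCK"' EXIT                        # clean up the lock even on failure
rclone copy remote:media /mnt/user/media         # placeholder rclone command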

Link to comment
7 hours ago, monogoat said:

 

Good idea, shouldn't need a PID then, just a file. Check for a file called "running"; if it's there, exit. Otherwise make a file called "running", run the script, and finally delete the file. I'm no scripting pro, does that sound right to everyone else? lol

 

I was going to suggest the same. We used to call it a "file watcher" file. But saving the PID is the way it is normally done. 
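
And a sketch of the PID variant (same placeholders as above); checking that the recorded process is still alive means a stale file from a crashed run won't block future runs:

#!/bin/bash
PIDFILE=/tmp/rclone_copy.pid                     # placeholder PID file
if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
  exit 0                                         # previous run is still alive
fi
echo $$ > "$PIDFILE"
trap 'rm -f "$PIDFILE"' EXIT
rclone copy remote:media /mnt/user/media         # placeholder rclone command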

Link to comment
  • 3 weeks later...
43 minutes ago, uldise said:

Hi, and sorry if this was asked and answered before. I would like to run one script many times with different input variables. Is there any way to pass variables to a script?

https://forums.lime-technology.com/topic/48286-plugin-ca-user-scripts/?tab=comments#comment-475625

 

That's for running manually. For running on a schedule with different variables, you'll have to create separate scripts.
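
One way to keep the duplication small is to put the shared logic in a single file and make each scheduled script a one-line wrapper that passes its own argument (all paths here are placeholders):

#!/bin/bash
bash /boot/custom/backup.sh /mnt/user/share1   # scheduled script #1: wrapper around a shared script

#!/bin/bash
bash /boot/custom/backup.sh /mnt/user/share2   # scheduled script #2: same logic, different argument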

Edited by Squid
Link to comment
  • 3 weeks later...

I'm a Linux noob, but is it possible to write a script which logs my disk status (hdparm -C /dev/sdX) (spun up/spun down) every 15 or 60 minutes and writes the results to a file? (Maybe extend it with HDD temps as well, maybe even fan RPM?)

 

Preferably all the results (from every run) in the same file, or maybe separate disks to separate files?

(I know I can see a spin-down event in the extended log, but it doesn't tell me when a disk was spun up or down, for example.)

 

thank you very much!

Edited by LSL1337
Link to comment
On 11/16/2017 at 6:36 AM, LSL1337 said:

I'm a Linux noob, but is it possible to write a script which logs my disk status (hdparm -C /dev/sdX) (spun up/spun down) every 15 or 60 minutes and writes the results to a file? (Maybe extend it with HDD temps as well, maybe even fan RPM?)

 

Preferably all the results (from every run) in the same file, or maybe separate disks to separate files?

(I know I can see a spin-down event in the extended log, but it doesn't tell me when a disk was spun up or down, for example.)

 

thank you very much!

Without thinking too much:

 

#!/bin/bash
hdparm -C /dev/sdb >> /mnt/user/sharename/status.txt  #change path to suit
smartctl -a /dev/sdb | grep Temperature >> /mnt/user/sharename/status.txt
#repeat for more disks

sensors | grep Fan >> /mnt/user/sharename/status.txt

And run it at whatever schedule you choose.
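
Since the point is knowing when a disk changed state, a timestamped variant might look like this (a sketch using the same placeholder share path; adjust the device list to your disks):

#!/bin/bash
LOG=/mnt/user/sharename/status.txt               # change path to suit
date >> "$LOG"                                   # record when this sample was taken
for d in /dev/sdb /dev/sdc; do                   # list your disks here
    hdparm -C "$d" >> "$LOG"
    smartctl -a "$d" | grep -i temperature >> "$LOG"
done
sensors | grep -i fan >> "$LOG"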

Link to comment

Hi, may I ask how to properly use docker exec ... inside a script?

 

I'm hanging on this command:

 

docker exec -t apache bash

... it doesn't go further than this

 

The script, for example:

 

#!/bin/bash
echo step 0
docker exec -t apache bash
echo step 1

 

Step 0 is the end... I'm experimenting with this for automatic certbot renewals inside the apache docker, but first I need to get past this line ;)

 

Thanks ahead for any tip.

Link to comment
4 hours ago, alturismo said:

docker exec -t apache bash

... it doesn't go further than this

Because executing bash within a container by default means that it's interactive, and the system is effectively waiting for you to enter the commands.

Try this

docker exec -t apache Step 0
docker exec -t apache Step 1
docker exec -t apache Step 2
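
For the certbot goal specifically, the renewal can be run non-interactively in a single call, or several commands can be chained through one bash -c (a sketch, assuming certbot and a reload command are actually available inside the apache container):

#!/bin/bash
docker exec apache certbot renew --quiet                                      # single non-interactive command
docker exec apache bash -c "certbot renew --quiet && apachectl -k graceful"   # or chain commands in one shell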

 

Link to comment
