technologiq

Members
  • Posts

    61
  1. Hi all, my Nextcloud instance is fine other than cron, and I'm trying to use Nextcloud-cronjob, but it spits this back in the logs:

     -------------------------------------------------------------
     Executing Cron Tasks: Thu Apr 27 19:45:00 UTC 2023
     -------------------------------------------------------------
     > Nextcloud Container ID: 545788c4f8a8
     > Running Script: ./run-cron-php.sh
     Cannot write into "config" directory!
     This can usually be fixed by giving the web server write access to the config directory. But, if you prefer to keep the config.php file read only, set the option "config_is_read_only" to true in it. See https://docs.nextcloud.com/server/26/go.php?to=admin-config
     > Done
  2. Working now. I should have known to try the simple things before doing everything else. Thank you!
  3. root@Vault:~# ls -la /mnt/user/
     total 12
     drwx------   1 999    users    36 Nov 30 15:21 ./
     drwxr-xr-x  20 root   root    400 Nov 30 20:15 ../
     drwxrwxrwx   1 nobody users    43 Dec  7  2019 .Trash-99/
     drwxrwxrwx   1 nobody users    66 Sep 14  2019 CommunityApplicationsAppdataBackup/
     drwxrwxrwx   1 nobody users  4096 Nov 30 09:53 appdata/
     drwxrwxrwx   1 nobody users    99 Nov 27 14:42 backups/
     drwxr-xr-x+  1 nobody users    18 Mar 20  2022 documents/
     drwxrwxrwx   1 nobody users     6 Jun 16  2020 domains/
     drwxrwxrwx   1 999    users    30 Nov 23 20:47 downloads/
     drwxrwxrwx   1 nobody users   299 Jan  4  2022 isos/
     drwxrwxrwx   1 999    users    30 Mar 28  2022 media/
     drwxrwxrwx   1 nobody users   134 Jun 12 18:08 movies/
     drwxrwx---   1 nobody users   262 Nov 24 10:31 nextcloud/
     -rw-r--r--   1 nobody users     0 Nov 27 17:24 nextcloud.log
     drwxrwxrwx   1 nobody users    35 Jan  2  2022 photos/
     drwxrwxrwx   1 nobody users  4096 Aug  2  2020 projects/
     drwxrwxrwx   1 nobody users    78 Apr  2  2022 scans/
     drwxrwxrwx   1 nobody users    20 Nov 22 22:18 system/
     drwxrwxrwx   1 nobody users     6 Nov 24  2019 tv/
     root@Vault:~#

     Thanks for the assistance!
  4. I've tried all that. What you see now was just my most recent attempt at a fix.
  5. I've been troubleshooting this for days and am getting exhausted trying to resolve it. A few days ago my shares just stopped working: I can't access any shares from any Windows workstation. I've pored over the forums and have tried changing my SMB settings and share settings, adding users that match my Windows username, disabling SMB1 on Windows, rebooting both the server and the workstations, and removing any prior SMB connections to the server (net use /delete). I've deleted my Windows credentials, which didn't fix it, and re-added them manually, which also didn't fix it. I've attached my diagnostics. Any assistance would be appreciated.
  6. Going to wrap this post up. Ultimately I got into this mess because I didn't know what I was doing when adding the 2nd cache drive. After watching @SpaceInvaderOne's YouTube video on the subject I had a much better understanding. Fortunately for me I had the CA Appdata Backup/Restore plugin installed, which had 3 recent weekly backups. All I had to do was re-set up my cache and then restore the appdata folder to it. Once I did that, it was just a matter of reinstalling my previous dockers and I was good to go. This also forced me to re-evaluate my cache setup as well as wrap my head around cache pools and the prefer/yes/no settings. I ended up creating multiple new cache pools and now my unraid is running faster than ever and all my data is properly protected. Thank you @JorgeB for your help!
  7. When I go to the restore data tab, it shows it restoring to /mnt/cache/appdata and I can NOT change that location. But my appdata is located in /mnt/cache_ssd/appdata. I did do the restore and copied the files over, which worked, but it would have been nice to be able to choose where to restore.
  8. Is there any way to change the Destination folder? Or do I just have to copy everything before I reboot?
  9. I *do* have a couple of backups of appdata that were made using the CA Appdata plugin. At this point, would it just make more sense for me to get my cache pool mounted and formatted as new and then perform a restore?
  10. Thanks @JorgeB, I tried it again with the correct command and had the same results:

      root@Vault:/dev# btrfs-select-super -s 1 /dev/sdk1
      No valid Btrfs found on /dev/sdk1
      ERROR: open ctree failed
      root@Vault:/dev# btrfs-select-super -s 1 /dev/sdo1
      No valid Btrfs found on /dev/sdo1
      ERROR: open ctree failed
  11. root@Vault:/dev# btrfs-select-super -s 1 /dev/sdo
      No valid Btrfs found on /dev/sdo
      ERROR: open ctree failed
      root@Vault:/dev# btrfs-select-super -s 1 /dev/sdk
      No valid Btrfs found on /dev/sdk
      ERROR: open ctree failed

      Tried with the array started and stopped, with the same results.
  12. Thank you, diagnostics attached. vault-diagnostics-20221125-1127.zip
  13. I added a second cache drive and did NOT back up my current appdata (I am aware I am an idiot for not doing this). After a reboot (to 6.11.5) my docker list was empty and the appdata is also empty. Restoring previous containers does nothing to help since the config files are missing. The appdata share is set to prefer: cache. Am I f'ed and have to start from scratch, or can I get my appdata back?
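The "Cannot write into "config" directory" error in post 1 has two standard fixes: give the web server user write access to the config directory, or keep config.php read only and set 'config_is_read_only' => true in it, as the log itself suggests. A minimal sketch of the second fix, run here against a sample config.php rather than a real instance (on a typical container the real file is under /var/www/html/config/, but that path depends on the image you run):

```shell
# Create a stand-in config.php (a real one has many more entries).
cat > config.php <<'EOF'
<?php
$CONFIG = array (
  'instanceid' => 'example',
);
EOF

# Insert the option just before the closing parenthesis (GNU sed).
sed -i "s/^);/  'config_is_read_only' => true,\n);/" config.php
cat config.php
```

The alternative fix is chown'ing the config directory to the web server user inside the container; which user that is (www-data, abc, etc.) depends on the image.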
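In the ls -la output in post 3, /mnt/user/ itself is drwx------ and owned by UID 999 rather than nobody:users, which alone is enough to block SMB access to every share. On Unraid the usual repair is Tools > New Permissions (or the Docker Safe New Perms plugin); a rough shell equivalent is sketched below on a scratch directory, not on /mnt/user (the Unraid owner nobody:users is UID:GID 99:100 -- an assumption you should verify on your own system):

```shell
# Scratch directory standing in for a share; on Unraid this would be
# /mnt/user/<share>.
mkdir -p demo_share
chmod 700 demo_share                          # mimic the locked-down drwx------
chown 99:100 demo_share 2>/dev/null || true   # needs root; harmless if it fails here
chmod -R u+rwX,g+rwX,o+rwX demo_share         # reopen the tree (X: dirs only get x)
stat -c '%a' demo_share
```

The capital X in the mode string adds execute only to directories (and files already executable), so a recursive run doesn't mark plain data files executable.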
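For the restore-destination issue in post 7 (the plugin restores to /mnt/cache/appdata while the share actually lives on /mnt/cache_ssd/appdata), copying the restored tree across pools with attributes preserved is the workaround the poster used. A sketch on scratch directories, with the real Unraid paths swapped for local stand-ins:

```shell
# Stand-ins for /mnt/cache/appdata (where the plugin restored to) and
# /mnt/cache_ssd (the pool the appdata share actually uses).
mkdir -p mnt/cache/appdata mnt/cache_ssd
echo "config" > mnt/cache/appdata/app.cfg    # pretend restored container config

# -a preserves permissions, ownership and timestamps, which matters for
# container config files.
cp -a mnt/cache/appdata mnt/cache_ssd/
ls mnt/cache_ssd/appdata
```

On the real system this would be done with the containers stopped, and the share's pool assignment double-checked before restarting the Docker service.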