trachal

Members
  • Posts: 6
  • Gender: Undisclosed

trachal's Achievements: Noob (1/14)

Reputation: 0

  1. FYI – I was able to work around this issue by:
       • Enabling bridging in Network Settings
       • Deleting the custom Docker network on eth0
       • Recreating the custom Docker network on br0
       • Reassigning all the dockers to the new custom Docker network on br0
     All seems stable so far. (A command-line sketch of these steps follows the post list.)
     docker-network-insp.txt nas1-diagnostics-20230904-0218.zip
  2. Issues with a custom IPVLAN L3 Docker network after the upgrade. All my dockers run on a custom IPVLAN L3 network. After the upgrade, none of the dockers will start, and each produces this error in the log: "failed to create the ipvlan port: device or resource busy". The custom network is present and all dockers are assigned to it. If I roll back, everything works correctly as it always has. (A network-inspection sketch follows the post list.)
     root@NAS1:~# docker network ls
     NETWORK ID     NAME        DRIVER    SCOPE
     c672eff05a16   bridge      bridge    local
     3bbd8bb2ae5c   host        host      local
     de7e521a4485   ipvlan-l3   ipvlan    local
     f6fe78a2336c   none        null      local
     nas1-diagnostics-20230901-2323.zip
  3. I just replaced them both with two brand-new cables. Got disk 9 back, added disk 5 back to the array, and it is rebuilding. Will see how it looks when all is done. Thanks for the help.
  4. I checked the connections and swapped the cables. Disk 5 had failed (red X) and is now the unassigned device sdi. I think that happened after a few reboots.
  5. Version: 6.7.2. After an extended power outage I lost one drive (failed) and another with an "Unmountable: No file system" error. I tried xfs_repair on the drive with the bad file system and got the following:
     root@NAS1:~# xfs_repair -v /dev/md9
     Phase 1 - find and verify superblock...
     superblock read failed, offset 0, size 524288, ag 0, rval -1
     fatal error -- Input/output error
     The array starts and I can get to the remaining disk shares. I am fine with losing the data on these two drives; I just need help getting the array back with the remaining drives. nas1-diagnostics-20190808-1522.zip
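
A minimal shell sketch of the workaround described in post 1, assuming the custom network is named ipvlan-l3 (matching the docker network ls output above); the subnet and container name are placeholder assumptions, not values from the post:

  # Remove the old custom network bound to eth0
  # (containers attached to it must be stopped or disconnected first)
  docker network rm ipvlan-l3

  # Recreate it with br0 as the parent interface;
  # the subnet is an example value, not taken from the post
  docker network create -d ipvlan \
    -o parent=br0 \
    -o ipvlan_mode=l3 \
    --subnet=192.168.1.0/24 \
    ipvlan-l3

  # Reassign each container to the new network
  # (my-container is a hypothetical container name)
  docker network connect ipvlan-l3 my-container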
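
For the "device or resource busy" error in post 2, which often indicates the parent device already has an ipvlan port claimed on it, one way to confirm which parent interface and mode the existing network uses (the output values shown are assumptions):

  docker network inspect -f '{{json .Options}}' ipvlan-l3
  # Example output: {"ipvlan_mode":"l3","parent":"eth0"}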