climateprediction.net home page
Posts by lazlo_vii


1) Questions and Answers : Unix/Linux : Run Linux work units with Windows 10 WSL (Message 63497)
Posted 6 Feb 2021 by lazlo_vii
Post:
I needed to reboot, so I suspended the work unit with BoincTasks (at a checkpoint no less), thinking that would make it safe for a reboot.
But it wasn't. The work unit errored out upon restart.

So you have to use "boinccmd --quit" to shut it down safely.
That will make it difficult to use.



You could try issuing several "sync" commands in a row before the reboot. I have no idea how Win10 will handle it (it might even just ignore it) but it may be worth a try. I have a RAID array that does a constant 2-6 megabytes per second in writes for WCG OPN1 tasks, and no matter what I tried, CPDN tasks failed on every reboot because the work doesn't get written to disk before the system restarts. If WSL uses the kind of containerization that relies on your CPU's virtualization instructions, you might have the same problem I do.
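The idea above can be sketched as a small pre-reboot script; this is a minimal, best-effort sketch assuming a standard Linux userland inside WSL (the delay value is an arbitrary guess, not a tested number):

```shell
#!/bin/sh
# Flush filesystem buffers a few times before rebooting, giving suspended
# BOINC tasks a better chance of reaching the disk. Purely best-effort:
# "sync" only flushes kernel buffers, it cannot force an application to
# write a checkpoint.
sync
sleep 2   # arbitrary settling delay (an assumption, not a tuned value)
sync
sync
echo "buffers flushed; safe(r) to reboot now"
```

Whether this helps at all depends on why the writes are being lost; if WSL itself discards the virtual disk state on shutdown, no amount of syncing will save the task.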
2) Questions and Answers : Unix/Linux : Run Linux work units with Windows 10 WSL (Message 63486)
Posted 3 Feb 2021 by lazlo_vii
Post:
Here is the manual page for boinccmd:

https://boinc.berkeley.edu/wiki/Boinccmd_tool

That might help you get the most out of it. Another thought is editing/creating the remote_hosts.cfg file and adding your Windows IP address and computer name, e.g.:

# This file contains a list of hostnames or IP addresses (one per line)
# of remote hosts, that are allowed to connect and to control the local
# BOINC core client via GUI RPCs.
# Lines beginning with a # or a ; are treated like comments and will be
# ignored.
#
#host.example.com
#192.168.0.180
192.168.1.2
desktop-pc


Then you should be able to point the BOINC GUI to whatever IP address WSL is using.
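To illustrate, here is a minimal sketch of building such a file. The scratch path is deliberate so nothing on the real system is touched; on Ubuntu the live file typically sits at /etc/boinc-client/remote_hosts.cfg, and the client has to be restarted to pick it up. The IP addresses and hostname are examples only:

```shell
# Write a sample remote_hosts.cfg into a scratch directory; copy it to
# your BOINC data directory once you are happy with it.
tmpdir=$(mktemp -d)
cat > "$tmpdir/remote_hosts.cfg" <<'EOF'
# Hosts allowed to control this BOINC client via GUI RPCs
192.168.1.2
desktop-pc
EOF
# Show the active (non-comment) entries:
grep -v '^#' "$tmpdir/remote_hosts.cfg"
```

From the Windows side you could then connect the BOINC Manager to the WSL IP, or query state from any shell with something like `boinccmd --host <wsl-ip> --passwd <rpc-password> --get_state` (placeholders, not real values).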
3) Message boards : Number crunching : New work Discussion (Message 63339)
Posted 18 Jan 2021 by lazlo_vii
Post:
Here is the iostat output for my RAID10 array:

$ iostat -tymd /dev/sd[e-h] 300
Linux 5.8.0-38-generic (bsquad-host-1) 	01/18/21 	_x86_64_	(32 CPU)


01/18/21 21:52:45
Device             tps    MB_read/s    MB_wrtn/s    MB_dscd/s    MB_read    MB_wrtn    MB_dscd
sde              28.11         0.00         2.38         0.00          0        715          0
sdf              28.88         0.00         2.38         0.00          0        715          0
sdg              28.36         0.00         2.38         0.00          0        715          0
sdh              28.37         0.00         2.38         0.00          0        715          0


01/18/21 21:57:45
Device             tps    MB_read/s    MB_wrtn/s    MB_dscd/s    MB_read    MB_wrtn    MB_dscd
sde              26.81         0.00         2.17         0.00          0        649          0
sdf              27.05         0.00         2.16         0.00          0        647          0
sdg              26.16         0.00         2.16         0.00          0        647          0
sdh              27.00         0.00         2.17         0.00          0        649          0


01/18/21 22:02:45
Device             tps    MB_read/s    MB_wrtn/s    MB_dscd/s    MB_read    MB_wrtn    MB_dscd
sde              32.42         0.00         2.61         0.00          0        784          0
sdf              32.33         0.00         2.61         0.00          0        783          0
sdg              32.30         0.00         2.61         0.00          0        784          0
sdh              32.18         0.00         2.61         0.00          0        784          0


01/18/21 22:07:45
Device             tps    MB_read/s    MB_wrtn/s    MB_dscd/s    MB_read    MB_wrtn    MB_dscd
sde              26.28         0.00         2.29         0.00          0        687          0
sdf              27.41         0.00         2.29         0.00          0        686          0
sdg              26.60         0.00         2.29         0.00          0        686          0
sdh              27.51         0.00         2.29         0.00          0        687          0


Right now the only active workload on these disks is BOINC WCG.
4) Message boards : Number crunching : New work Discussion (Message 63327)
Posted 18 Jan 2021 by lazlo_vii
Post:
Even if I run 8 I seem to rarely get crashes when I reboot.

I only get a crash when I reboot after an Ubuntu update (mainly of the Linux kernel). To prevent that, suspend the work unit before the reboot, and it should work.


I have tried suspending CPDN tasks before rebooting, however I still had failures. I think it's because the disks I store all of my BOINC data on (for all four of my computers, and each computer runs at least two containers) are writing data almost constantly. All of my BOINC projects (60 threads of crunching) average about 180 MB of data written per minute across a four-disk RAID10 array. Since three of the systems use the storage across the network, I have to shut them down before I reboot the main server. If I suspend all of my BOINC projects on the main server and wait about five minutes before I reboot it's not quite so bad, but I still lose work sometimes. Also, sometimes I forget to stop everything before the reboot, and that is almost certain to cause failures for CPDN. So long story short, rebooting my network is a 10 or 15 minute job instead of a 2 minute job when I run CPDN tasks. I don't want to have to go through unusual procedures just because I run CPDN.

I wish I knew of an easy way to permanently raise the I/O priority of CPDN tasks or the containers I am running them in. That might help me isolate the cause. Another solution might be to move CPDN's storage off of the spinning disks and onto the SSDs, but I really don't want to add write-heavy workloads to them.
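For what it's worth, the I/O priority of an already-running process can be raised on Linux with `ionice` from util-linux; a minimal sketch (the process-name pattern is a guess at what CPDN science binaries might be called, so adjust it for your system, and note this does not survive a restart of the process):

```shell
# Move matching processes into the best-effort I/O class at its highest
# level (-c 2 -n 0). The realtime class (-c 1) would need root.
for pid in $(pgrep -f 'hadcm3|wah2' 2>/dev/null); do  # pattern is an assumption
    ionice -c 2 -n 0 -p "$pid"
done
# Check the I/O class and level of a process (here: this shell):
ionice -p $$
```

For something persistent, a systemd drop-in setting IOSchedulingClass/IOSchedulingPriority on the boinc-client service inside each container would survive reboots, at the cost of boosting every project in that container, not just CPDN.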
5) Message boards : Number crunching : New work Discussion (Message 63323)
Posted 17 Jan 2021 by lazlo_vii
Post:
Unless you isolate the workloads to physically coherent caches, which is not possible on all CPU architectures, you may not find a balance for mixing CPDN tasks with WCG ARP and MIP tasks while using all CPU cores. My testing on Ryzen 3000 CPUs has found that at best I should run one CPDN, ARP, and MIP task per 8 MB of isolated L3 cache. I don't even do that. I run each project and sub-project on dedicated three- or four-core segments of my CPUs because I don't want to deal with fine tuning in greater detail.

Don't take this as Gospel. It's just my own very limited testing.

If you are intent on mixing BOINC projects, I urge you to investigate on a per CPU model + OS basis. Do not assume that an Intel Haswell generation CPU + Ubuntu can handle the same mix as an AMD Bulldozer or Ryzen + CentOS. Prove to yourself what it can take and then, please, share it with all of us. I say this because different distros will use different kernel versions, among other things.

My biggest issue right now is that I have had numerous CPDN WUs fail lately. I think the cause (and I have no way to prove this because I cannot see the code) is CPDN setting a lower priority on disk I/O than other projects and other background tasks I have running. Even after a five minute waiting period before restarting a computer and its containers, tasks from CPDN have failed because they couldn't (wouldn't?) write their data to disk (because it was too busy?) before I rebooted. This is my main frustration with CPDN at the moment. Two days ago I decided to give up on CPDN for the short term (again) because it is just a waste of time and electricity to crunch numbers on tasks that can't survive a system reboot for security updates.
6) Message boards : Number crunching : Please fix the deadlines! (Message 63296)
Posted 7 Jan 2021 by lazlo_vii
Post:
C'est something you hardly ever use. In Linux I had to use the command prompt just to install something. It wasn't actually possible with the GUI. Even after I gave it permission, the "run as a program" didn't appear in the context menu. I guess there is something more buggy than Windows after all.


All of this complaining only underscores your ignorance of how computer hardware and software work. I thought ignorance was supposed to be blissful, but I guess not. Now, it's not my job to educate you, so I will not try...much. All you seem to know is how you think it should work, and until you stop believing your opinions and start embracing the facts, things will only get worse. The truth is simply this: the best way to do any job starts with using the right tool. Don't use a command line to browse the web or edit photos. Don't use a GUI to admin a cluster of servers. Don't think a drill, a saw, and a hammer are meant for the same jobs just because they can all make holes in things.

Learning to properly use your tools will make you much happier. Unless of course you are the kind of person that is only happy when they are complaining.
Explain to me why I could not install a program with the GUI. Double click downloaded file, the OS thinks it's a text file and opens it with a text editor not capable of such a large file and crashes. Right click file and go to properties, and allow it to run as a program. Double click it again. Still loads as text. Right click to select "run as program", option not there. This nonsense just doesn't happen in Windows.


The answers to your questions are just a few internet searches away.
7) Message boards : Number crunching : Please fix the deadlines! (Message 63289)
Posted 6 Jan 2021 by lazlo_vii
Post:
C'est something you hardly ever use. In Linux I had to use the command prompt just to install something. It wasn't actually possible with the GUI. Even after I gave it permission, the "run as a program" didn't appear in the context menu. I guess there is something more buggy than Windows after all.


All of this complaining only underscores your ignorance of how computer hardware and software work. I thought ignorance was supposed to be blissful, but I guess not. Now, it's not my job to educate you, so I will not try...much. All you seem to know is how you think it should work, and until you stop believing your opinions and start embracing the facts, things will only get worse. The truth is simply this: the best way to do any job starts with using the right tool. Don't use a command line to browse the web or edit photos. Don't use a GUI to admin a cluster of servers. Don't think a drill, a saw, and a hammer are meant for the same jobs just because they can all make holes in things.

Learning to properly use your tools will make you much happier. Unless of course you are the kind of person that is only happy when they are complaining.
8) Message boards : Number crunching : Please fix the deadlines! (Message 63279)
Posted 6 Jan 2021 by lazlo_vii
Post:
peter@nobox:~$ man man
peter@nobox:~$ man bash
peter@nobox:~$ man pwd
peter@nobox:~$ man export
peter@nobox:~$ man echo
peter@nobox:~$ echo $PATH
peter@nobox:~$ pwd
peter@nobox:~$ export PATH=$PATH:$PWD
peter@nobox:~$ echo $PATH
Microsoft stopped most command line usage a couple of decades ago.



Qu’est-ce que c’est "PowerShell?"
9) Message boards : Number crunching : AMD Ryzen 7 2700X taking 50 days to complete a project running 24/7 (Message 63269)
Posted 5 Jan 2021 by lazlo_vii
Post:
Something else to consider:

All Ryzen CPUs in the 1000 to 3000 series have L3 cache dedicated to each three- or four-core Core Complex (CCX), and these communicate over the Infinity Fabric (IF). IF speeds are not uniform for read and write operations: writes are slower than reads. Read that last sentence again and let it sink in. Transferring data across the IF to a different CCX will cause small but measurable delays in computation. While you will never notice these delays while browsing the web, streaming video, or doing any number of other short-term tasks, the delays become apparent when running tasks like CPDN that take days or weeks to finish. Isolating (AKA CPU pinning) tasks on specific CCXs will grant a small performance boost that over time will be quite significant for long-term workloads.
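As a rough illustration, pinning a process to one CCX can be done with `taskset` from util-linux; a minimal sketch, assuming the first CCX is logical CPUs 0-3 (that numbering is an assumption — check your own topology with `lscpu -e`, since SMT sibling numbering varies between systems):

```shell
# Show which logical CPUs the current shell is allowed to run on:
taskset -cp $$

# Launch a worker pinned to logical CPUs 0-3 (hypothetically one CCX),
# and have it report its own affinity:
taskset -c 0-3 sh -c 'taskset -cp $$'

# Re-pin an already-running process (the PID 1234 is a placeholder):
#   taskset -cp 0-3 1234
```

LXD's `limits.cpu` setting, used later in this thread, achieves the same effect at the container level, which is more convenient when a whole BOINC client should stay on one CCX.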

AMD "fixed" this for the Ryzen 5000 line of CPU's.

My own two cents:

Use LXD to isolate CPDN workloads on specific CCXs and do not turn off SMT (Simultaneous Multi-Threading, AKA Hyper-Threading). Instead set BOINC to use 3/8 of the threads in each CCX and let the OS use whichever logical cores it chooses for background tasks and thermal management.

You will have to do some digging through old articles about the 2700X to find the right settings (and maybe learn a bit about containerization) if you go this route, but you will see a long term performance boost over running 6 threads with no CPU isolation.
10) Message boards : Number crunching : Please fix the deadlines! (Message 63268)
Posted 5 Jan 2021 by lazlo_vii
Post:
I think the final straw was the command line not seeing a program in the current directory without me prefixing with ./ I class that as exceedingly unintuitive bad programming and won't be using it again.
http://www.catb.org/esr/writings/unix-koans/two_paths.html
I only got a C in English Literature, you'll have to explain what you mean. And you arsed up your quoting.



peter@nobox:~$ man man
peter@nobox:~$ man bash
peter@nobox:~$ man pwd
peter@nobox:~$ man export
peter@nobox:~$ man echo
peter@nobox:~$ echo $PATH
peter@nobox:~$ pwd
peter@nobox:~$ export PATH=$PATH:$PWD
peter@nobox:~$ echo $PATH
11) Message boards : Number crunching : Please fix the deadlines! (Message 63223)
Posted 30 Dec 2020 by lazlo_vii
Post:
Fair enough. I personally hate the windows interface. If that was the only reason there are instructions out there to make the Linux interface virtually identical to the Windows or the Mac one if it comes to that but it seemed too much faff to me when there is such a range of desktop environments to be used with Linux that it should be possible to find one that suits.
It's not the interface per se, but the whole way the OS works. Even installing a program requires faffing about giving things permission to execute. It's too locked down for my liking. I literally could not get a program to install on Linux, even using the command line, I had to use some kind of install manager that looked similar to Google Play on Android. I think the final straw was the command line not seeing a program in the current directory without me prefixing with ./ I class that as exceedingly unintuitive bad programming and won't be using it again.

http://www.catb.org/esr/writings/unix-koans/two_paths.html
12) Message boards : Number crunching : New work Discussion (Message 63172)
Posted 24 Dec 2020 by lazlo_vii
Post:
Yes and thank you. Any reason why?



If I had to guess (and yes, I am guessing) I would say it's because Unix used UDP 42 for the Host Name Server Protocol, where Windows uses it for all *.dll inter-process communication. Just a guess.
13) Message boards : Number crunching : Move tasks between computers (Message 63155)
Posted 21 Dec 2020 by lazlo_vii
Post:
I move BOINC tasks between computers at times. Instead of using a VM I use an LXD container. See the HOWTO in the Unix/Linux section of these boards.
14) Message boards : Number crunching : New work Discussion (Message 63124)
Posted 18 Dec 2020 by lazlo_vii
Post:
Bryn Mawr -

...Now, obviously, I cannot check the server for an uncleared lock file...


Try:
ls /var/run/lock
15) Questions and Answers : Unix/Linux : *** Running 32bit CPDN from 64bit Linux - Discussion *** (Message 63113)
Posted 15 Dec 2020 by lazlo_vii
Post:
Do Linux WU's run faster than Windows?


As none of the tasks currently sent out by CPDN run on both Linux and Windows, it is almost impossible to answer this question. From memory, when all tasks were multiplatform, I seem to remember that they would finish marginally faster on Windows machines than Linux ones, all other things being equal. But if you look at the BOINC forums, where this has been discussed on and off over the years, you will see it varies from project to project, and, while not relevant for CPDN, it can also vary between graphics cards when one operating system has much better drivers than the other for a particular card.

There are a few other people around who will remember the days when all tasks were multiplatform, and who may have better memories than I do about the difference between operating systems then. (It was in my early days of using BOINC and I knew a lot less about all of the different variables involved than I do now!)



I would like to add that while a Windows C compiler can make Linux executables and vice versa, neither can guarantee equal cross-platform performance. So it's best to use Windows to build programs for Windows and to use Linux to build programs for Linux.
16) Message boards : Number crunching : Please fix the deadlines! (Message 63102)
Posted 5 Dec 2020 by lazlo_vii
Post:
I split my Ryzen 9 3950 into 4 systems, each with 4 physical cores and, if needed, the 4 complementary logical cores from SMT. It works great.
17) Message boards : Number crunching : Please fix the deadlines! (Message 63090)
Posted 3 Dec 2020 by lazlo_vii
Post:
Peter,

I run BOINC on Ubuntu Linux and I use LXD containers to isolate projects and pin them to specific sets of CPUs. I did a small write-up about it here:

https://www.cpdn.org/forum_thread.php?id=9003

It's not for everyone. I see it as the easiest way to dedicate a single machine with lots of cores to multiple BOINC projects, by effectively turning it into a few smaller machines, each one using specific CPUs and, if needed, limits on RAM and disk usage.
18) Questions and Answers : Unix/Linux : HOWTO: Use Ubuntu and LXD to help manage and isolate BOINC workloads. (Message 63043)
Posted 27 Nov 2020 by lazlo_vii
Post:
Thanks for this. I can see how this may be useful for some. I only run other projects when no work is available for CPDN. What isn't clear to me is whether having the 32 bit libraries installed on the host OS means you don't need them on the cloud image downloaded?


In theory you shouldn't need to install them on the guest if they are installed on the host. On the other hand I like to keep my hosts as clean as I can. What helps one workload might be bad for another.

EDIT: And you are welcome!
19) Questions and Answers : Unix/Linux : HOWTO: Use Ubuntu and LXD to help manage and isolate BOINC workloads. (Message 63040)
Posted 27 Nov 2020 by lazlo_vii
Post:
OK, I messed up. I didn't know there was a 60 minute timer on editing a post, so if a mod would be so kind as to remove my post above I would be grateful.

Here I go again:

I am sure that as you worked through the Getting Started Guide for LXD you noticed that it has a lot of options. It can be used no matter what your computing scale is: from a single host to an entire data center. For now just focus on a simple deployment, and as you learn and test it out you can scale up if you want to.

Let's start a container, set limits on which CPU cores it can use, restart it, log into it, and install updates and stuff!

you@lxd.host:~$ lxc launch ubuntu:focal crunch-con1
you@lxd.host:~$ lxc config set crunch-con1 limits.cpu 0-3
you@lxd.host:~$ lxc restart crunch-con1
you@lxd.host:~$ lxc exec crunch-con1 bash
root@crunch-con1:~# apt update
root@crunch-con1:~# apt upgrade
root@crunch-con1:~# lscpu
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Byte Order:                      Little Endian
Address sizes:                   43 bits physical, 48 bits virtual
CPU(s):                          16
On-line CPU(s) list:             0-3 <<-- NOTICE: only these CPU cores are in use
Off-line CPU(s) list:            4-15 <<-- NOTICE: these CPU cores are not in use
Thread(s) per core:              0
Core(s) per socket:              8
Socket(s):                       1
NUMA node(s):                    1
Vendor ID:                       AuthenticAMD
...
...
root@crunch-con1:~# apt install boinc-client lib32ncurses6 lib32z1 lib32stdc++-7-dev
root@crunch-con1:~# nano /etc/boinc-client/gui_rpc_auth.cfg  <<-- Only needed if you want to set a password for remote management
root@crunch-con1:~# nano /etc/boinc-client/remote_hosts.cfg  <<-- Add the IP address(es) of the system(s) running the BOINC Manager to this file
root@crunch-con1:~# systemctl restart boinc-client
root@crunch-con1:~# exit
you@lxd.host:~$


Once you have created the container, set its CPU config the way you want it, and installed and configured any software you need, you can launch the BOINC Manager to connect to it. After you configure it you are ready to crunch!

You can find the IP address for all containers on your host like this:
laz@bsquad-host-1:~$ lxc list
+----------------+---------+---------------------+------+-----------+-----------+
|      NAME      |  STATE  |        IPV4         | IPV6 |   TYPE    | SNAPSHOTS |
+----------------+---------+---------------------+------+-----------+-----------+
| brute-squad-01 | RUNNING | 192.168.2.13 (eth0) |      | CONTAINER | 0         |
+----------------+---------+---------------------+------+-----------+-----------+
| brute-squad-02 | RUNNING | 192.168.2.14 (eth0) |      | CONTAINER | 0         |
+----------------+---------+---------------------+------+-----------+-----------+
| brute-squad-03 | RUNNING | 192.168.2.15 (eth0) |      | CONTAINER | 0         |
+----------------+---------+---------------------+------+-----------+-----------+
| brute-squad-04 | RUNNING | 192.168.2.16 (eth0) |      | CONTAINER | 0         |
+----------------+---------+---------------------+------+-----------+-----------+
laz@bsquad-host-1:~$


If you configured LXD to be available across the network, you can add a remote system with:
lxc remote add remote-hostname-or-IP-address

After you enter the password you set during lxd init, you can access a remote host with:
lxc list remote-hostname-or-IP-address:


Like this:
laz@desktop:~$ lxc list bsquad-host-1:
+----------------+---------+---------------------+------+-----------+-----------+
|      NAME      |  STATE  |        IPV4         | IPV6 |   TYPE    | SNAPSHOTS |
+----------------+---------+---------------------+------+-----------+-----------+
| brute-squad-01 | RUNNING | 192.168.2.13 (eth0) |      | CONTAINER | 0         |
+----------------+---------+---------------------+------+-----------+-----------+
| brute-squad-02 | RUNNING | 192.168.2.14 (eth0) |      | CONTAINER | 0         |
+----------------+---------+---------------------+------+-----------+-----------+
| brute-squad-03 | RUNNING | 192.168.2.15 (eth0) |      | CONTAINER | 0         |
+----------------+---------+---------------------+------+-----------+-----------+
| brute-squad-04 | RUNNING | 192.168.2.16 (eth0) |      | CONTAINER | 0         |
+----------------+---------+---------------------+------+-----------+-----------+
laz@desktop:~$


If I want to move a container, first I need to stop it, then move it, then reset the CPU limits, then start it:
laz@desktop:~$ lxc stop bsquad-host-1:brute-squad-04
laz@desktop:~$ lxc move bsquad-host-1:brute-squad-04 brute-squad-04
laz@desktop:~$ lxc list
+----------------+---------+------+------+-----------+-----------+
|      NAME      |  STATE  | IPV4 | IPV6 |   TYPE    | SNAPSHOTS |
+----------------+---------+------+------+-----------+-----------+
| brute-squad-04 | STOPPED |      |      | CONTAINER | 0         |
+----------------+---------+------+------+-----------+-----------+
laz@desktop:~$ lxc config set brute-squad-04 limits.cpu 4-7
laz@desktop:~$ lxc start brute-squad-04


So as you can imagine, LXD is a wonderful tool if you want to manage multiple BOINC projects, because you can put each one in its own container, assign it its own set of CPUs, and let it crunch away. If you need to upgrade the PC or something, you can move your work from one system to another. Be careful running in-progress tasks on CPUs of a different generation or manufacturer; they may fail. It would be best to let them finish before moving the container. If you also contribute to projects like Einstein@home or F@H you can use LXD to assign your GPUs to containers as well.
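For the GPU case, LXD passes host GPUs into containers through its device mechanism; a minimal sketch (the container name follows the examples above, and the id value is an assumption that depends on your hardware):

```shell
# Pass all host GPUs into the container:
lxc config device add crunch-con1 mygpu gpu

# Or pass a single card by id (id 0 is an assumption; list the cards
# on the host with "ls /dev/dri"):
#   lxc config device add crunch-con1 mygpu gpu id=0

# The container still needs the vendor's user-space drivers installed
# inside it before CUDA/OpenCL work will actually run.
```

These are host-configuration commands, so they only make sense against a running LXD daemon on your own machine.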

I hope you guys find this as handy as I do,

Laz
20) Questions and Answers : Unix/Linux : HOWTO: Use Ubuntu and LXD to help manage and isolate BOINC workloads. (Message 63038)
Posted 27 Nov 2020 by lazlo_vii
Post:
No matter what I want to tell myself, I am not a Computer Genius.

In the vast majority of cases, letting the applications and operating systems I run pick their own default behaviors leads to more productivity and less heartache. In my humble opinion this is not the case for BOINC. The way I see it, the BOINC time-sharing scheme is deeply flawed if you want to contribute to more than one project at a time. Also, BOINC tasks are not portable between systems, even if the systems all run the exact same CPU architecture.

In this guide I will show you how to use Ubuntu and the Linux Container Daemon to overcome these limitations. Using LXD I can have multiple containers running different BOINC projects with no time sharing, and isolate them from one another on my CPUs, all with just a few commands. Since all of my systems run Ryzen 3000 CPUs I can even shut down a container and move it between systems on my network if need be.

You will need to know how to get around on the Linux command line and have a good understanding of your hardware and your network layout to follow this guide effectively.
It is expected that you will read this entire guide, at least follow the important links and bookmark them for later reading, and above all back up your config files before you make any changes to your system(s). This guide assumes that all of your systems are on the same 192.168.0.0/24 subnet. I will not cover using boinccmd here and will instead focus on simple management using the BOINC Manager program.

If you run Ubuntu in a VM you will not get the full benefits of CPU isolation, because VMs use virtual CPUs and, to my knowledge, there is no way to know which vCPU is connected to which physical CPU or when that will change. That is up to the hypervisor, and you should consult the docs for whichever one you are using.

STEP 1: Configuring your host

By default LXD will place your containers in a private network that you cannot access from outside the containers. For the purpose of easy management we will create a bridge that will allow your containers to get an IP address from your home router and thereby allow the BOINC Manager to access them as needed. Starting with Ubuntu 18.04, network management is done with a program called Netplan. You can find the docs here:

https://netplan.io/reference/

Also look over the examples posted on that site if you get stuck.

First, back up your existing configuration file:
sudo mv /etc/netplan/00*.yaml /etc/netplan/00-config.original


If you ever want your old network configuration back, just remove our custom config file and rename your backup to 00-config.yaml.

To create a new config we will need to know the name of your network interface card. You can find this by running ip addr and looking at the device names in the output. See man ip for more info. Once you know what your NIC is called (for this guide I will use the name enp3s0), we just need to create the new config that specifies the bridge with:
sudo nano /etc/netplan/00-custom-config.yaml


It should look like this if you are using DHCP. (If you have a static IP address, adjust according to your needs):
network:
  version: 2
  ethernets:
    enp3s0:
      dhcp4: false
      dhcp6: false
  bridges:
    br0:
      interfaces: [enp3s0]
      dhcp4: true
      parameters:
        stp: true
        forward-delay: 0


To test the new config you can run sudo netplan generate, and if it doesn't return any errors, apply the new config with sudo netplan apply.
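After applying, it's worth confirming the bridge actually came up and got a DHCP lease; a quick check (br0 follows the example above, and the exact output depends on your system):

```shell
# List interfaces and their addresses in brief form. br0 should show
# state UP with an IPv4 address from your router, while enp3s0 carries
# no address of its own (it is enslaved to the bridge).
ip -brief addr

# Show which interfaces are members of the bridge:
ip link show master br0
```

If br0 never gets an address, double-check the interface name in the YAML and that your router's DHCP server can see the bridge's MAC address.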

Step 2: Install and configure LXD

The people who write LXD have put together what may be one of the finest examples of easy-to-use documentation in the whole FLOSS ecosystem, and I will not do them the injustice of butchering it for this guide. You can find the Getting Started guide and other important links here:

https://linuxcontainers.org/lxd/getting-started-cli/

The important thing to note is that when you run lxd init, you should tell it you do not want to create a new bridge and that you do want to use an existing bridge. If you followed the example above, it is called br0. Also, if you want to move containers between systems on your network, be sure to say yes when asked if you want LXD to be available on the network.

Next up will be creating containers and getting them ready to crunch work units.



©2024 climateprediction.net