# Migrate some public docs from GitLab public snippets

Arif Ali, 2022-01-25

## charm_venv_resolution.md
Fix issues with a charm's Python venv (for charms such as telegraf). The following snippet removes the old `.venv` and then recreates it from scratch based on the currently installed charm.
The following example is for the unit telegraf/144 and is run on the unit itself, ideally as the `root` user.
```bash
cd /var/lib/juju/agents/unit-telegraf-144/charm
mv ../.venv ../.venv.bak                                  # keep the old venv as a backup
mv wheelhouse/.bootstrapped wheelhouse/.bootstrapped.bak  # force a re-bootstrap of the charm deps
JUJU_CHARM_DIR=$PWD PYTHONPATH=$PWD/lib python3 -c 'import charms.layer.basic; charms.layer.basic.bootstrap_charm_deps()'
```
Below is sample output of what this should look like:
```
Reading package lists... Done
Building dependency tree
Reading state information... Done
build-essential is already the newest version (12.4ubuntu1).
python3-setuptools is already the newest version (39.0.1-2).
python3-yaml is already the newest version (3.12-1build2).
python3-wheel is already the newest version (0.30.0-0.2).
python3-dev is already the newest version (3.6.7-1~18.04).
python3-pip is already the newest version (9.0.1-2.3~ubuntu1.18.04.5).
0 upgraded, 0 newly installed, 0 to remove and 130 not upgraded.
Reading package lists... Done
Building dependency tree
Reading state information... Done
python3-netifaces is already the newest version (0.10.4-0.1build4).
python3-yaml is already the newest version (3.12-1build2).
python3-psutil is already the newest version (5.4.2-1ubuntu0.1).
0 upgraded, 0 newly installed, 0 to remove and 130 not upgraded.
Reading package lists... Done
Building dependency tree
Reading state information... Done
virtualenv is already the newest version (15.1.0+ds-1.1).
0 upgraded, 0 newly installed, 0 to remove and 130 not upgraded.
Already using interpreter /usr/bin/python3
Using base prefix '/usr'
New python executable in /var/lib/juju/agents/unit-telegraf-144/.venv/bin/python3
Also creating executable in /var/lib/juju/agents/unit-telegraf-144/.venv/bin/python
Please make sure you remove any previous custom paths from your /root/.pydistutils.cfg file.
Installing setuptools, pkg_resources, pip, wheel...done.
Collecting pip
Installing collected packages: pip
Found existing installation: pip 9.0.1
Uninstalling pip-9.0.1:
Successfully uninstalled pip-9.0.1
Successfully installed pip-18.1
Looking in links: wheelhouse
Collecting setuptools
Collecting setuptools-scm
Installing collected packages: setuptools, setuptools-scm
Found existing installation: setuptools 39.0.1
Uninstalling setuptools-39.0.1:
Successfully uninstalled setuptools-39.0.1
Successfully installed setuptools-41.6.0 setuptools-scm-1.17.0
Looking in links: wheelhouse
Processing ./wheelhouse/chardet-3.0.4.tar.gz
Processing ./wheelhouse/pyaml-19.12.0.tar.gz
Processing ./wheelhouse/PyYAML-5.3.tar.gz
Processing ./wheelhouse/urllib3-1.25.8.tar.gz
Processing ./wheelhouse/charms.reactive-1.3.0.tar.gz
Processing ./wheelhouse/certifi-2019.11.28.tar.gz
Processing ./wheelhouse/Jinja2-2.11.0.tar.gz
Processing ./wheelhouse/netaddr-0.7.19.tar.gz
Processing ./wheelhouse/setuptools_scm-1.17.0.tar.gz
Processing ./wheelhouse/requests-2.22.0.tar.gz
Processing ./wheelhouse/idna-2.8.tar.gz
Processing ./wheelhouse/six-1.14.0.tar.gz
Processing ./wheelhouse/pip-18.1.tar.gz
Installing build dependencies ... done
Processing ./wheelhouse/MarkupSafe-1.1.1.tar.gz
Processing ./wheelhouse/wheel-0.33.6.tar.gz
Processing ./wheelhouse/Tempita-0.5.2.tar.gz
Processing ./wheelhouse/setuptools-41.6.0.zip
Processing ./wheelhouse/charmhelpers-0.20.8.tar.gz
Collecting PyYAML (from pyaml==19.12.0)
Collecting charmhelpers>=0.5.0 (from charms.reactive==1.3.0)
Collecting MarkupSafe>=0.23 (from Jinja2==2.11.0)
Collecting idna<2.9,>=2.5 (from requests==2.22.0)
Building wheels for collected packages: chardet, pyaml, urllib3, charms.reactive, certifi, Jinja2, netaddr, setuptools-scm, requests, six, pip, wheel, Tempita, setuptools
Running setup.py bdist_wheel for chardet ... done
Stored in directory: /root/.cache/pip/wheels/27/5f/24/2e5bc688af87bffe0457fa16b8601c0d393ccbc9b068d6a396
Running setup.py bdist_wheel for pyaml ... done
Stored in directory: /root/.cache/pip/wheels/79/c5/be/f7eb7d0568d71e1086f493d8ec21e4359063afdfe861d21f69
Running setup.py bdist_wheel for urllib3 ... done
Stored in directory: /root/.cache/pip/wheels/23/94/93/4583b7ac7c053705ebd68f212acbd3269101fb83ca82921af4
Running setup.py bdist_wheel for charms.reactive ... done
Stored in directory: /root/.cache/pip/wheels/ec/2c/88/87d93ff0af94fe2fdb7d6fed373387fce01cf8fcf70b872edb
Running setup.py bdist_wheel for certifi ... done
Stored in directory: /root/.cache/pip/wheels/68/68/7c/e5c72a2ab27d6f47a37739f31266f4bbff18e914a1303a9f89
Running setup.py bdist_wheel for Jinja2 ... done
Stored in directory: /root/.cache/pip/wheels/52/03/30/d94e1a4322fad761a4c4f788f241076ef7bdad1d80eba607d1
Running setup.py bdist_wheel for netaddr ... done
Stored in directory: /root/.cache/pip/wheels/28/10/61/1d2b1f00d446322bfae4c76502775ac272f9207683d5ed7234
Running setup.py bdist_wheel for setuptools-scm ... done
Stored in directory: /root/.cache/pip/wheels/1a/ed/b2/64019918c81e2a2064d2239686495401fd9780461aff2a005f
Running setup.py bdist_wheel for requests ... done
Stored in directory: /root/.cache/pip/wheels/f4/a8/31/47d7cb71b5b5ccaa9a18a0a92f7e090027f68f46ff0f362c18
Running setup.py bdist_wheel for six ... done
Stored in directory: /root/.cache/pip/wheels/9e/fb/a8/9e8dba0d2311302df42fa26af88dd5b33c7e5875875219a4f1
Running setup.py bdist_wheel for pip ... done
Stored in directory: /root/.cache/pip/wheels/8d/88/eb/b66a604956f95523092e96cc5cf2d37943c3b64621b2e8d43e
Running setup.py bdist_wheel for wheel ... done
Stored in directory: /root/.cache/pip/wheels/a0/27/45/ccafd2fd5940f63eea98fcc6670f477243afa1c51247d5af6c
Running setup.py bdist_wheel for Tempita ... done
Stored in directory: /root/.cache/pip/wheels/fb/8a/b2/a3e73f6fa52fd1cf6db7c20380c8ef35e238bf8101addfd210
Running setup.py bdist_wheel for setuptools ... done
Stored in directory: /root/.cache/pip/wheels/f7/67/78/82b27b93488f9a401d5e86288924c5ab52bbd31fc9ad83c8d4
Successfully built chardet pyaml urllib3 charms.reactive certifi Jinja2 netaddr setuptools-scm requests six pip wheel Tempita setuptools
Installing collected packages: chardet, PyYAML, pyaml, urllib3, netaddr, Tempita, MarkupSafe, Jinja2, six, charmhelpers, charms.reactive, certifi, setuptools-scm, idna, requests, pip, wheel, setuptools
Successfully installed Jinja2-2.11.0 MarkupSafe-1.1.1 PyYAML-5.3 Tempita-0.5.2 certifi-2019.11.28 chardet-3.0.4 charmhelpers-0.20.8 charms.reactive-1.3.0 idna-2.8 netaddr-0.7.19 pip-18.1 pyaml-19.12.0 requests-2.22.0 setuptools-41.6.0 setuptools-scm-1.17.0 six-1.14.0 urllib3-1.25.8 wheel-0.33.6
Argument expected for the -c option
usage: /var/lib/juju/agents/unit-telegraf-144/.venv/bin/python [option] ... [-c cmd | -m mod | file | -] [arg] ...
Try `python -h' for more information.
```
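Once the venv has been rebuilt, a quick sanity check (a hedged extra step, using the telegraf/144 paths from the example above) is to import the reactive framework from the new venv and let the charm run a hook:
```bash
# Assumption: run from the Juju client; paths follow the telegraf/144 example.
juju ssh telegraf/144 'sudo /var/lib/juju/agents/unit-telegraf-144/.venv/bin/python3 -c "import charms.reactive"'
# Nudge the charm so it exercises the fresh venv end to end.
juju run --unit telegraf/144 'hooks/update-status'
```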

## devstack.md
This is on a freshly installed Ubuntu Server 18.04.3.
```
git clone https://opendev.org/openstack/devstack
cd devstack
IP_ADDR=`ip -o -4 a s ens3 | awk '{print $4}' | awk -F'/' '{print $1}'`
cat > local.conf << EOF
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=\$ADMIN_PASSWORD
RABBIT_PASSWORD=\$ADMIN_PASSWORD
SERVICE_PASSWORD=\$ADMIN_PASSWORD
HOST_IP=${IP_ADDR}
EOF
sudo apt -y remove python3-httplib2 python3-pyasn1 python3-pyasn1-modules
./stack.sh
```
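Before running the `openstack` CLI commands below, credentials need to be in the environment. DevStack ships an `openrc` file in the repo for this; a typical invocation from the `devstack` directory (the `admin admin` user/project pair is the usual default, adjust as needed):
```
# sourced from the devstack checkout directory
source openrc admin admin
```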
Once everything has been deployed, we add 2 rules to the default security group:
```
openstack security group rule create default --proto tcp --remote-ip 0.0.0.0/0 --dst-port 22
openstack security group rule create default --proto icmp --remote-ip 0.0.0.0/0
```
Now we can try logging in through the router's network namespace (the qrouter UUID and instance IP will differ in your environment):
```
sudo ip netns exec qrouter-25b37d91-c0fd-47b5-a502-8a2cf3cedd3b bash
ssh 10.0.0.12 -l cirros
```
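If you don't know the qrouter namespace name, listing the namespaces is a quick way to find it (a hedged helper, not part of the original notes):
```
ip netns list | grep qrouter
```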

## mysql_recover.md
# Recover Percona cluster
1. Identify the new master:
Identify the unit which has `safe_to_bootstrap=1`:
```
juju status mysql
```
**Note #1**: If all are '0', SSH to the one with the biggest sequence number. If all have the same seqno, SSH to any one of them, and then:
```
sudo vi /var/lib/percona-xtradb-cluster/grastate.dat
```
Change `safe_to_bootstrap` from '0' to '1'. To get the seqnos:
```
juju run --application=mysql cat /var/lib/percona-xtradb-cluster/grastate.dat
```
**Note #2**: If all are '-1', run `mysqld_safe --wsrep-recover` on all three units and compare; a log line will report the recovered position as `<uuid>:<seqno>`. Take note of that seqno.
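A hedged sketch of that recovery check (the error-log path is an assumption; Percona on Ubuntu typically logs to `/var/log/mysql/error.log`):
```
# Run on each mysql unit, with mysql stopped
sudo mysqld_safe --wsrep-recover
sudo grep -i 'recovered position' /var/log/mysql/error.log | tail -n1
# The line ends in <uuid>:<seqno>; bootstrap the unit with the highest seqno.
```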
# For the rest of the example, we assume the master is mysql/0
2. Before bootstrapping the master, it's a good idea to move the VIP there, and also to prevent Juju from trying to do any operations on the slaves. The following steps will stop MySQL and move the VIP away from the slaves:
```
juju run-action hacluster-mysql/1 --wait pause
juju run-action mysql/1 --wait pause
juju run-action hacluster-mysql/2 --wait pause
juju run-action mysql/2 --wait pause
```
**Note**: the unit numbers may differ in your environment.
Confirm mysql is stopped on those units and kill any mysqld processes if necessary. Also confirm that the VIP is now placed on the master unit.
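One hedged way to check where the VIP landed: the hacluster charm manages it with Pacemaker, so `crm status` should list the VIP resource (the exact output is environment-specific):
```
juju run --unit mysql/0 'sudo crm status'
```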
3. Bootstrap the master. Confirm everything is down, kill any leftover mysqld processes if necessary, and then:
```
sudo systemctl stop mysql.service
sudo systemctl start mysql@bootstrap.service # Bionic
sudo /etc/init.d/mysql bootstrap-pxc && sudo systemctl start mysql # Xenial
```
4. Run `SHOW GLOBAL STATUS` and confirm that the master is the Primary unit with a cluster size of 1 (`wsrep_cluster_status` and `wsrep_cluster_size`):
```
MYSQLPWD=$(juju run --unit mysql/0 leader-get mysql.passwd)
juju run --unit mysql/0 "mysql -uroot -p${MYSQLPWD} -e \"SHOW global status;\"" | grep -Ei "wsrep_cluster"
```
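The grep should return something like the following (illustrative values inferred from the checks above, not captured output):
```
wsrep_cluster_size      1
wsrep_cluster_status    Primary
```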
5. Run the update-status hooks and `juju status mysql` to confirm the master is now active:
```
juju run --application mysql "hooks/update-status" && juju run --application hacluster-mysql "hooks/update-status" && juju status mysql
```
Your cluster should be operational by now; the slaves still have to be added.
6. Start the first slave:
```
juju run-action mysql/1 --wait resume
juju run-action hacluster-mysql/1 --wait resume
```
7. Run `SHOW GLOBAL STATUS` and confirm that the master is the Primary unit and that the cluster size has increased by 1 (`wsrep_cluster_status` and `wsrep_cluster_size`):
```
MYSQLPWD=$(juju run --unit mysql/0 leader-get mysql.passwd)
juju run --unit mysql/0 "mysql -uroot -p${MYSQLPWD} -e \"SHOW global status;\"" | grep -Ei "wsrep_cluster"
```
8. Confirm the started slave is now active in Juju:
```
juju run --application mysql "hooks/update-status" && juju run --application hacluster-mysql "hooks/update-status" && juju status mysql
```
9. If the state is still not active, note that sometimes, on the slaves, the `systemctl status mysql` output shows as `failed` or `timed out` even if everything looks alright. This happens because the systemd unit times out before MySQL finishes resyncing. Restart the service if that's the case:
```
juju run --unit=mysql/1 "sudo systemctl restart mysql.service"
```
10. Confirm it is now active:
```
juju run --application mysql "hooks/update-status" && juju run --application hacluster-mysql "hooks/update-status" && juju status mysql
```
11. Repeat steps 6-10 for `mysql/2`.
12. Final check:
```
juju status mysql
```
**Note #1**: If any of the units shows `seeded file missing` at the end of the procedure, you can fix it like this:
```
juju run --unit=mysql/X 'echo "done" | sudo tee -a /var/lib/percona-xtradb-cluster/seeded && sudo chown mysql:mysql /var/lib/percona-xtradb-cluster/seeded'
```
**Note #2**: If one of the slaves doesn't start at all, showing something along the lines of "MySQL PID not found, pid_file detected/guessed: `/var/run/mysqld/mysqld.pid`", try this:
```
juju ssh mysql/X
sudo systemctl stop mysql
# stop/kill any pending mysqld processes, then:
sudo rm -rf /var/run/mysqld.*
sudo systemctl start mysql
# from the Juju client:
juju run --application mysql "hooks/update-status" && juju run --application hacluster-mysql "hooks/update-status" && juju status mysql
```
**Note #3**: As a last resort, if one of the slaves doesn't start at all, you might have to rebuild its DB from scratch. Use the following procedure:
```
juju run-action hacluster-mysql/X --wait pause
juju run-action mysql/X --wait pause
juju ssh mysql/X
```
Stop/kill any pending mysqld processes, then:
```
sudo mv /var/lib/percona-xtradb-cluster /var/lib/percona-xtradb-cluster.bak
sudo mkdir /var/lib/percona-xtradb-cluster
sudo chown mysql:mysql /var/lib/percona-xtradb-cluster
sudo chmod 700 /var/lib/percona-xtradb-cluster
juju run-action mysql/X --wait resume        # from the Juju client, not the unit
sudo du -sh /var/lib/percona-xtradb-cluster  # to watch replication (SST) progress
```
Once it's done, check that mysqld processes are running (`sudo ps -ef | grep mysqld`) and that the service shows as up (`sudo systemctl status mysql`). If the service shows as timed out (sometimes systemd times out before the sync finishes), restart it: `sudo systemctl restart mysql`
```
juju run-action hacluster-mysql/X --wait resume
juju run --application mysql "hooks/update-status" && juju run --application hacluster-mysql "hooks/update-status" && juju status mysql
```

## recover_innodb_cluster.md
Assume the following deployment, where we are trying to recover `mysql-innodb-cluster/0`:
```
mysql-innodb-cluster/0 active idle 0/lxd/6 10.0.1.137 Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
mysql-innodb-cluster/1 active idle 1/lxd/6 10.0.1.114 Unit is ready: Mode: R/W, Cluster is ONLINE and can tolerate up to ONE failure.
mysql-innodb-cluster/2* active idle 2/lxd/6 10.0.1.156 Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
```
1. Stop mysql on the unit `mysql-innodb-cluster/0` and move the data directory aside:
```
sudo systemctl stop mysql
sudo mv /var/lib/mysql /var/lib/mysql.old
```
1. Remove the member from the cluster:
```
juju run-action --wait mysql-innodb-cluster/leader remove-instance address=10.0.1.137
```
If the above command doesn't work, run it with the parameter `force=true`:
```
juju run-action --wait mysql-innodb-cluster/leader remove-instance address=10.0.1.137 force=true
```
Confirm it worked by checking that the IP has been removed from the output of:
```
juju run-action --wait mysql-innodb-cluster/leader cluster-status
```
1. Re-initialise the DB locally on the problematic node, i.e. `mysql-innodb-cluster/0`:
```
juju ssh mysql-innodb-cluster/0
# then, as root on the unit:
cd /var/lib
mv mysql mysql.old   # skip if the data directory was already moved aside in step 1
mkdir mysql
chown mysql:mysql mysql
chmod 700 mysql
mysqld --initialize
systemctl start mysql
```
Check the mysql status: `sudo systemctl status mysql`
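Note that `mysqld --initialize` generates a temporary root password in the error log; a hedged way to retrieve it if you need direct root access (the log path is the usual Ubuntu/MySQL default):
```
sudo grep 'temporary password' /var/log/mysql/error.log
```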
1. Clear the flags using Juju, to force the charm to re-create the cluster users:
```
juju run --unit mysql-innodb-cluster/0 -- charms.reactive clear_flag local.cluster.user-created
juju run --unit mysql-innodb-cluster/0 -- charms.reactive clear_flag local.cluster.all-users-created
juju run --unit mysql-innodb-cluster/0 -- ./hooks/update-status
```
After that, you can confirm it worked by getting the password:
```
juju run --unit mysql-innodb-cluster/leader leader-get cluster-password
```
Connect to the unit `mysql-innodb-cluster/0` and use the password above:
```
mysql -u clusteruser -p -e 'SELECT user,host FROM mysql.user'
```
1. Re-add instance to cluster:
```
juju run-action --wait mysql-innodb-cluster/leader add-instance address=10.0.1.137
juju run-action --wait mysql-innodb-cluster/leader cluster-status
```
Note: If the instance is not added to the cluster, use `mysqlsh` to do it with the steps below:
```
juju ssh mysql-innodb-cluster/2
mysqlsh clusteruser@10.0.1.156 --password=<clusterpassword> --cluster
cluster.add_instance("10.0.1.137:3306")
```
Choose the option `[C]lone` and answer `YES`.
You might need to configure the instance first if you see error output like the following:
"NOTE: Please use the `dba.configure_instance()` command to repair these issues."
If you see that output, run the command below from the `mysqlsh` CLI:
```
dba.configure_instance("clusteruser@10.0.1.137:3306")
```
Note: You will be asked for the password of the user `clusteruser`; after this step you can add the instance back to the cluster:
```
cluster.add_instance("10.0.1.137:3306")
```
Choose the option `[C]lone` and answer `YES`.
After that, check the cluster status:
```
juju run-action --wait mysql-innodb-cluster/leader cluster-status
```
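A hedged alternative for checking status directly, using mysqlsh's command-line integration from any healthy unit (substitute the real cluster password):
```
mysqlsh clusteruser@10.0.1.156 --password=<clusterpassword> -- cluster status
```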

## rpi4_hpl.md
# Running xhpl on Raspberry Pi 4
## Setup
* OS: Raspberry Pi OS
* Kernel: `Linux pi04.arif.local 5.4.44-v7l+ #1320 SMP Wed Jun 3 16:13:10 BST 2020 armv7l GNU/Linux`
* Bootloader:
  ```
  root@pi04:~# vcgencmd bootloader_version
  May 27 2020 18:47:29
  version d648db3968cd31d4948341e09cb8a925c49d2ea1 (release)
  timestamp 1590601649
  ```
* Everything else is stock
* Download the latest mpich, and compile using:
  ```
  tar xfz mpich-3.3.2.tar.gz
  cd mpich-3.3.2
  ./configure --prefix=/opt/mpich/3.3.2
  make -j 3
  sudo make install
  ```
* Download the latest OpenBLAS:
  ```
  unzip OpenBLAS.zip
  cd OpenBLAS-develop
  make -j 3
  sudo make install
  ```
* Download the latest HPL:
  ```
  tar xfz hpl-2.3.tar.gz
  cd hpl-2.3
  ```
## HPL.dat
```
HPLinpack benchmark input file
Innovative Computing Laboratory, University of Tennessee
HPL.out output file name (if any)
6 device out (6=stdout,7=stderr,file)
1 # of problems sizes (N)
19008 Ns
1 # of NBs
192 NBs
0 PMAP process mapping (0=Row-,1=Column-major)
1 # of process grids (P x Q)
1 Ps
1 Qs
16.0 threshold
1 # of panel fact
2 PFACTs (0=left, 1=Crout, 2=Right)
1 # of recursive stopping criterium
4 NBMINs (>= 1)
1 # of panels in recursion
2 NDIVs
1 # of recursive panel fact.
1 RFACTs (0=left, 1=Crout, 2=Right)
1 # of broadcast
1 BCASTs (0=1rg,1=1rM,2=2rg,3=2rM,4=Lng,5=LnM)
1 # of lookahead depth
1 DEPTHs (>=0)
2 SWAP (0=bin-exch,1=long,2=mix)
64 swapping threshold
0 L1 in (0=transposed,1=no-transposed) form
0 U in (0=transposed,1=no-transposed) form
1 Equilibration (0=no,1=yes)
8 memory alignment in double (> 0)
##### This line (no. 32) is ignored (it serves as a separator). ######
0 Number of additional problem sizes for PTRANS
1200 10000 30000 values of N
0 number of additional blocking sizes for PTRANS
40 9 8 13 13 20 16 32 64 values of NB
```
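A common rule of thumb for sizing N (an addition to the original notes, not from them) is N ≈ √(0.8 × RAM / 8), rounded down to a multiple of NB. For the 8 GB board that allows N up to roughly 29000; the N=19008 used here needs only 19008² × 8 bytes ≈ 2.9 GB, which matches the memory usage reported below.
```
# Hedged helper: suggest N for ~80% of 8 GB RAM, rounded to a multiple of NB=192
awk 'BEGIN { ram = 8 * 1024^3; nb = 192; n = int(sqrt(0.8 * ram / 8)); print int(n / nb) * nb }'
```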
## Make.rpi4-mpich
```
#
# -- High Performance Computing Linpack Benchmark (HPL)
# HPL - 2.3 - December 2, 2018
# Antoine P. Petitet
# University of Tennessee, Knoxville
# Innovative Computing Laboratory
# (C) Copyright 2000-2008 All Rights Reserved
#
# -- Copyright notice and Licensing terms:
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#
# 1. Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions, and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
#
# 3. All advertising materials mentioning features or use of this
# software must display the following acknowledgement:
# This product includes software developed at the University of
# Tennessee, Knoxville, Innovative Computing Laboratory.
#
# 4. The name of the University, the name of the Laboratory, or the
# names of its contributors may not be used to endorse or promote
# products derived from this software without specific written
# permission.
#
# -- Disclaimer:
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE UNIVERSITY
# OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# ######################################################################
#
# ----------------------------------------------------------------------
# - shell --------------------------------------------------------------
# ----------------------------------------------------------------------
#
SHELL = /bin/sh
#
CD = cd
CP = cp
LN_S = ln -fs
MKDIR = mkdir -p
RM = /bin/rm -f
TOUCH = touch
#
# ----------------------------------------------------------------------
# - Platform identifier ------------------------------------------------
# ----------------------------------------------------------------------
#
ARCH = rpi4-mpich
#
# ----------------------------------------------------------------------
# - HPL Directory Structure / HPL library ------------------------------
# ----------------------------------------------------------------------
#
TOPdir = $(HOME)/hpl/hpl-2.3
INCdir = $(TOPdir)/include
BINdir = $(TOPdir)/bin/$(ARCH)
LIBdir = $(TOPdir)/lib/$(ARCH)
#
HPLlib = $(LIBdir)/libhpl.a
#
# ----------------------------------------------------------------------
# - Message Passing library (MPI) --------------------------------------
# ----------------------------------------------------------------------
# MPinc tells the C compiler where to find the Message Passing library
# header files, MPlib is defined to be the name of the library to be
# used. The variable MPdir is only used for defining MPinc and MPlib.
#
MPdir = /opt/mpich/3.3.2
MPinc = -I$(MPdir)/include
MPlib = $(MPdir)/lib/libmpi.a
#
# ----------------------------------------------------------------------
# - Linear Algebra library (BLAS or VSIPL) -----------------------------
# ----------------------------------------------------------------------
# LAinc tells the C compiler where to find the Linear Algebra library
# header files, LAlib is defined to be the name of the library to be
# used. The variable LAdir is only used for defining LAinc and LAlib.
#
LAdir = /opt/OpenBLAS
LAinc = $(LAdir)/include
LAlib = $(LAdir)/lib/libopenblas.a -lpthread
#
# ----------------------------------------------------------------------
# - F77 / C interface --------------------------------------------------
# ----------------------------------------------------------------------
# You can skip this section if and only if you are not planning to use
# a BLAS library featuring a Fortran 77 interface. Otherwise, it is
# necessary to fill out the F2CDEFS variable with the appropriate
# options. **One and only one** option should be chosen in **each** of
# the 3 following categories:
#
# 1) name space (How C calls a Fortran 77 routine)
#
# -DAdd_ : all lower case and a suffixed underscore (Suns,
# Intel, ...), [default]
# -DNoChange : all lower case (IBM RS6000),
# -DUpCase : all upper case (Cray),
# -DAdd__ : the FORTRAN compiler in use is f2c.
#
# 2) C and Fortran 77 integer mapping
#
# -DF77_INTEGER=int : Fortran 77 INTEGER is a C int, [default]
# -DF77_INTEGER=long : Fortran 77 INTEGER is a C long,
# -DF77_INTEGER=short : Fortran 77 INTEGER is a C short.
#
# 3) Fortran 77 string handling
#
# -DStringSunStyle : The string address is passed at the string loca-
# tion on the stack, and the string length is then
# passed as an F77_INTEGER after all explicit
# stack arguments, [default]
# -DStringStructPtr : The address of a structure is passed by a
# Fortran 77 string, and the structure is of the
# form: struct {char *cp; F77_INTEGER len;},
# -DStringStructVal : A structure is passed by value for each Fortran
# 77 string, and the structure is of the form:
# struct {char *cp; F77_INTEGER len;},
# -DStringCrayStyle : Special option for Cray machines, which uses
# Cray fcd (fortran character descriptor) for
# interoperation.
#
F2CDEFS =
#
# ----------------------------------------------------------------------
# - HPL includes / libraries / specifics -------------------------------
# ----------------------------------------------------------------------
#
HPL_INCLUDES = -I$(INCdir) -I$(INCdir)/$(ARCH) -I$(LAinc) $(MPinc)
HPL_LIBS = $(HPLlib) $(LAlib) $(MPlib) -lrt -lbacktrace
#
# - Compile time options -----------------------------------------------
#
# -DHPL_COPY_L force the copy of the panel L before bcast;
# -DHPL_CALL_CBLAS call the cblas interface;
# -DHPL_CALL_VSIPL call the vsip library;
# -DHPL_DETAILED_TIMING enable detailed timers;
#
# By default HPL will:
# *) not copy L before broadcast,
# *) call the BLAS Fortran 77 interface,
# *) not display detailed timing information.
#
HPL_OPTS = -DHPL_DETAILED_TIMING -DHPL_PROGRESS_REPORT -DHPL_CALL_CBLAS
#
# ----------------------------------------------------------------------
#
HPL_DEFS = $(F2CDEFS) $(HPL_OPTS) $(HPL_INCLUDES)
#
# ----------------------------------------------------------------------
# - Compilers / linkers - Optimization flags ---------------------------
# ----------------------------------------------------------------------
#
CC = gcc
CCNOOPT = $(HPL_DEFS)
CCFLAGS = $(HPL_DEFS) -fomit-frame-pointer -O3 -funroll-loops -W -Wall
#
# On some platforms, it is necessary to use the Fortran linker to find
# the Fortran internals used in the BLAS library.
#
LINKER = $(CC)
LINKFLAGS = $(CCFLAGS)
#
ARCHIVER = ar
ARFLAGS = r
RANLIB = echo
#
# ----------------------------------------------------------------------
```
## Compiling and running
We can compile using the following command:
```
make arch=rpi4-mpich
```
My HPL.dat was in ~/hpl, and since my working directory was also ~/hpl, I ran the benchmark as follows:
```
OMP_NUM_THREADS=4 ./hpl-2.3/bin/rpi4-mpich/xhpl
```
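Since the binary is linked against the mpich we built, it can also be launched as MPI ranks instead of OpenBLAS threads; a hedged variant (this requires changing P and Q in HPL.dat so that P×Q matches the rank count):
```
# e.g. with P=2, Q=2 in HPL.dat
OMP_NUM_THREADS=1 /opt/mpich/3.3.2/bin/mpirun -np 4 ./hpl-2.3/bin/rpi4-mpich/xhpl
```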
Finally, here is my result from the above environment on the 8 GB board; the run used 2.82 GB:
```
================================================================================
HPLinpack 2.3 -- High-Performance Linpack benchmark -- December 2, 2018
Written by A. Petitet and R. Clint Whaley, Innovative Computing Laboratory, UTK
Modified by Piotr Luszczek, Innovative Computing Laboratory, UTK
Modified by Julien Langou, University of Colorado Denver
================================================================================
An explanation of the input/output parameters follows:
T/V : Wall time / encoded variant.
N : The order of the coefficient matrix A.
NB : The partitioning blocking factor.
P : The number of process rows.
Q : The number of process columns.
Time : Time in seconds to solve the linear system.
Gflops : Rate of execution for solving the linear system.
The following parameter values will be used:
N : 19008
NB : 192
PMAP : Row-major process mapping
P : 1
Q : 1
PFACT : Right
NBMIN : 4
NDIV : 2
RFACT : Crout
BCAST : 1ringM
DEPTH : 1
SWAP : Mix (threshold = 64)
L1 : transposed form
U : transposed form
EQUIL : yes
ALIGN : 8 double precision words
--------------------------------------------------------------------------------
- The matrix A is randomly generated for each test.
- The following scaled residual check will be computed:
||Ax-b||_oo / ( eps * ( || x ||_oo * || A ||_oo + || b ||_oo ) * N )
- The relative machine precision (eps) is taken to be 1.110223e-16
- Computational tests pass if scaled residuals are less than 16.0
Column=000000192 Fraction= 1.0% Gflops=1.352e+01
Column=000000384 Fraction= 2.0% Gflops=1.353e+01
Column=000000576 Fraction= 3.0% Gflops=1.355e+01
Column=000000768 Fraction= 4.0% Gflops=1.358e+01
Column=000000960 Fraction= 5.1% Gflops=1.360e+01
Column=000001152 Fraction= 6.1% Gflops=1.359e+01
Column=000001344 Fraction= 7.1% Gflops=1.359e+01
Column=000001536 Fraction= 8.1% Gflops=1.357e+01
Column=000001728 Fraction= 9.1% Gflops=1.357e+01
Column=000001920 Fraction=10.1% Gflops=1.356e+01
Column=000002112 Fraction=11.1% Gflops=1.356e+01
Column=000002304 Fraction=12.1% Gflops=1.356e+01
Column=000002496 Fraction=13.1% Gflops=1.355e+01
Column=000002688 Fraction=14.1% Gflops=1.355e+01
Column=000002880 Fraction=15.2% Gflops=1.353e+01
Column=000003072 Fraction=16.2% Gflops=1.353e+01
Column=000003264 Fraction=17.2% Gflops=1.353e+01
Column=000003456 Fraction=18.2% Gflops=1.352e+01
Column=000003648 Fraction=19.2% Gflops=1.352e+01
Column=000003840 Fraction=20.2% Gflops=1.352e+01
Column=000004032 Fraction=21.2% Gflops=1.351e+01
Column=000004224 Fraction=22.2% Gflops=1.351e+01
Column=000004416 Fraction=23.2% Gflops=1.351e+01
Column=000004608 Fraction=24.2% Gflops=1.350e+01
Column=000004800 Fraction=25.3% Gflops=1.350e+01
Column=000004992 Fraction=26.3% Gflops=1.349e+01
Column=000005184 Fraction=27.3% Gflops=1.349e+01
Column=000005376 Fraction=28.3% Gflops=1.349e+01
Column=000005568 Fraction=29.3% Gflops=1.348e+01
Column=000005760 Fraction=30.3% Gflops=1.348e+01
Column=000005952 Fraction=31.3% Gflops=1.347e+01
Column=000006144 Fraction=32.3% Gflops=1.346e+01
Column=000006336 Fraction=33.3% Gflops=1.346e+01
Column=000006528 Fraction=34.3% Gflops=1.345e+01
Column=000006720 Fraction=35.4% Gflops=1.345e+01
Column=000006912 Fraction=36.4% Gflops=1.344e+01
Column=000007104 Fraction=37.4% Gflops=1.344e+01
Column=000007296 Fraction=38.4% Gflops=1.343e+01
Column=000007488 Fraction=39.4% Gflops=1.343e+01
Column=000007680 Fraction=40.4% Gflops=1.343e+01
Column=000007872 Fraction=41.4% Gflops=1.343e+01
Column=000008064 Fraction=42.4% Gflops=1.342e+01
Column=000008256 Fraction=43.4% Gflops=1.342e+01
Column=000008448 Fraction=44.4% Gflops=1.341e+01
Column=000008640 Fraction=45.5% Gflops=1.341e+01
Column=000008832 Fraction=46.5% Gflops=1.341e+01
Column=000009024 Fraction=47.5% Gflops=1.340e+01
Column=000009216 Fraction=48.5% Gflops=1.340e+01
Column=000009408 Fraction=49.5% Gflops=1.340e+01
Column=000009600 Fraction=50.5% Gflops=1.339e+01
Column=000009792 Fraction=51.5% Gflops=1.339e+01
Column=000009984 Fraction=52.5% Gflops=1.339e+01
Column=000010176 Fraction=53.5% Gflops=1.339e+01
Column=000010368 Fraction=54.5% Gflops=1.338e+01
Column=000010560 Fraction=55.6% Gflops=1.338e+01
Column=000010752 Fraction=56.6% Gflops=1.338e+01
Column=000010944 Fraction=57.6% Gflops=1.337e+01
Column=000011136 Fraction=58.6% Gflops=1.337e+01
Column=000011328 Fraction=59.6% Gflops=1.337e+01
Column=000011520 Fraction=60.6% Gflops=1.336e+01
Column=000011712 Fraction=61.6% Gflops=1.336e+01
Column=000011904 Fraction=62.6% Gflops=1.336e+01
Column=000012096 Fraction=63.6% Gflops=1.335e+01
Column=000012288 Fraction=64.6% Gflops=1.335e+01
Column=000012480 Fraction=65.7% Gflops=1.335e+01
Column=000012672 Fraction=66.7% Gflops=1.334e+01
Column=000012864 Fraction=67.7% Gflops=1.334e+01
Column=000013056 Fraction=68.7% Gflops=1.334e+01
Column=000013248 Fraction=69.7% Gflops=1.333e+01
Column=000013440 Fraction=70.7% Gflops=1.333e+01
Column=000013632 Fraction=71.7% Gflops=1.333e+01
Column=000013824 Fraction=72.7% Gflops=1.333e+01
Column=000014016 Fraction=73.7% Gflops=1.332e+01
Column=000014208 Fraction=74.7% Gflops=1.332e+01
Column=000014400 Fraction=75.8% Gflops=1.332e+01
Column=000014592 Fraction=76.8% Gflops=1.331e+01
Column=000014784 Fraction=77.8% Gflops=1.331e+01
Column=000014976 Fraction=78.8% Gflops=1.331e+01
Column=000015168 Fraction=79.8% Gflops=1.331e+01
Column=000015360 Fraction=80.8% Gflops=1.330e+01
Column=000015552 Fraction=81.8% Gflops=1.330e+01
Column=000015744 Fraction=82.8% Gflops=1.330e+01
Column=000015936 Fraction=83.8% Gflops=1.330e+01
Column=000016128 Fraction=84.8% Gflops=1.330e+01
Column=000016320 Fraction=85.9% Gflops=1.330e+01
Column=000016512 Fraction=86.9% Gflops=1.329e+01
Column=000016704 Fraction=87.9% Gflops=1.329e+01
Column=000016896 Fraction=88.9% Gflops=1.329e+01
Column=000017088 Fraction=89.9% Gflops=1.329e+01
Column=000017280 Fraction=90.9% Gflops=1.329e+01
Column=000017472 Fraction=91.9% Gflops=1.329e+01
Column=000017664 Fraction=92.9% Gflops=1.329e+01
Column=000017856 Fraction=93.9% Gflops=1.328e+01
Column=000018048 Fraction=94.9% Gflops=1.328e+01
Column=000018240 Fraction=96.0% Gflops=1.328e+01
Column=000018432 Fraction=97.0% Gflops=1.328e+01
Column=000018624 Fraction=98.0% Gflops=1.328e+01
Column=000018816 Fraction=99.0% Gflops=1.328e+01
================================================================================
T/V N NB P Q Time Gflops
--------------------------------------------------------------------------------
WR11C2R4 19008 192 1 1 345.24 1.3263e+01
HPL_pdgesv() start time Fri Jun 5 16:39:38 2020
HPL_pdgesv() end time Fri Jun 5 16:45:23 2020
--VVV--VVV--VVV--VVV--VVV--VVV--VVV--VVV--VVV--VVV--VVV--VVV--VVV--VVV--VVV-
Max aggregated wall time rfact . . . : 9.47
+ Max aggregated wall time pfact . . : 3.14
+ Max aggregated wall time mxswp . . : 0.64
Max aggregated wall time update . . : 335.26
+ Max aggregated wall time laswp . . : 10.44
Max aggregated wall time up tr sv . : 0.49
--------------------------------------------------------------------------------
||Ax-b||_oo/(eps*(||A||_oo*||x||_oo+||b||_oo)*N)= 9.59421981e-04 ...... PASSED
================================================================================
Finished 1 tests with the following results:
1 tests completed and passed residual checks,
0 tests completed and failed residual checks,
0 tests skipped because of illegal input values.
--------------------------------------------------------------------------------
End of Tests.
================================================================================
```
The above result shows that we were able to achieve 13.26 GFlops.
**Note**: The system was not tuned, i.e. the following things were not changed:
* Reduction of any services running on the Pi
* CPU governor
* CPU overclocking

## ruqya_ayaah.md
Surah | Juz | Ayaah
------|-----|-------
al-Fatihah | | ALL
al-Bakarah | 1 | 1 - 5
al-Bakarah | 1 | 102
al-Bakarah | 1 | 137
al-Bakarah | 3 | 255
al-Bakarah | 3 | 284 - 286
al-Imraan | 3 | 1 - 5
al-Imraan | 3 | 85
an-Nisaa | 5 | 54
al-Anaam | 7 | 17
al-'Araaf | 8 | 54 - 56
al-'Araaf | 9 | 117 - 122
at-Tawba | 10 | 14
Yunus | 11 | 57
Yunus | 11 | 79 - 82
Bani-Isra'eel | 15 | 82
al-Kahf | 15 | 39
at-Taha | 16 | 65 - 69
al-Mu'minoon | 18 | 114 - 118
ash-Sha'oor | 19 | 80
Yaseen | 22 | 1 - 9
as-Saafaat | 23 | 1 - 10
Sajdaa | 24 | 44
al-Ahqaaf | 26 | 31
al-Mulk | 29 | 3
al-Qalam | 29 | 51-52
al-Ikhlaas | 30 | ALL
al-Falaq | 30 | ALL
an-Naas | 30 | ALL

## Recover the vault token for a ceph-osd unit
1. Get the relation's details:
```
juju run --unit ceph-osd/4 'relation-ids secrets-storage'
```
This will give you the relation id.
```
juju run --unit ceph-osd/4 'relation-get -r 311 - vault/0'
juju run --unit ceph-osd/4 'relation-get -r 311 - vault/1'
juju run --unit ceph-osd/4 'relation-get -r 311 - vault/2'
```
You need to query all the units (maybe just the leader is enough) to see which relation stores the `ceph-osd/4_role_id` and `ceph-osd/4_token` keys.
1. Now that you have the token, you need to find the last-token.
Install sqlite3 to be able to browse the unit's KV store (`apt install sqlite3`; not necessarily on the node, if that's not allowed), then fetch the last-token:
```
juju ssh ceph-osd/4 sudo -i
apt install sqlite3
sqlite3 /var/lib/juju/agents/unit-ceph-osd-4/charm/.unit-state.db 'select data from kv where key="last-token";'
```
There are probably easier ways to get this data, but this works for sure.
1. Match and update the last token id with the current one:
```
juju run --unit vault/2 -- 'relation-set -r 311 ceph-osd/4_token="WHAT you got from the DB"'
```
e.g.:
```
juju run --unit vault/2 -- 'relation-set -r 311 ceph-osd/4_token="s.bk3GMbPxKwgGXyODysVjihuA"'
```
Pay attention to the escape characters, dots, etc.
1. Make sure the correct relation data was updated:
```
juju run --unit ceph-osd/4 'relation-get -r 311 - vault/2'
```
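Once the relation data matches, a hedged final step is to nudge the charm to re-run its hooks, following the update-status pattern used elsewhere in these notes:
```
juju run --unit ceph-osd/4 'hooks/update-status'
```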