Feed aggregator
Deploying 11.2.0.4 on a recent ODA
11gR2 should be retired by now, and there is normally no need to deploy a new database using this version anymore. But in the real world, you may still need to do so for various reasons. 11gR2 is no longer supported on Oracle Database Appliance, but here is how you can set up a “brand new” Oracle Database 11gR2 environment on ODA. Obviously, I would only recommend doing this if there is no other option.
History
The first ODAs were designed for 11gR2, but support for this version disappeared nearly 2 years ago. There are still ODAs running 11gR2 databases, but mostly old ODAs deployed several years ago. Applying the latest ODA patch on top of 11gR2 databases is not a problem: your 11gR2 databases won’t be patched, but every other component will. Therefore, running the most recent 19.17 patch with 11gR2 databases is OK. But deploying a new infrastructure with X9-2 ODAs running 11gR2 databases wouldn’t make sense and is not supported. Furthermore, the ODA development team now focuses on modern features, mainly tied to Oracle 19c. Don’t expect to run these new features on old 11gR2 databases: most of the time it won’t work correctly, as Oracle no longer bothers testing on 11gR2.
2 solutions for running 11gR2 databases on ODA
If you absolutely need 11gR2, you currently have 2 solutions.
The first one is deploying an old set of binaries from an old patch, for example 19.10. It’s the easiest way to put 11gR2 on your ODA:
cd /opt/dbi/
unzip p23494997_1910000_Linux-x86-64.zip
odacli update-repository -f /opt/dbi/odacli-dcs-19.10.0.0.0-210115-DB-11.2.0.4.zip
odacli create-dbhome -v 11.2.0.4.210119
odacli list-dbhomes
ID Name DB Version Home Location Status
---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------
2a00d3bb-b042-4720-94a2-bef13bfaf5f5 OraDB19000_home1 19.15.0.0.220419 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1 CONFIGURED
4f56e25e-e228-4cc3-b827-b15b66c67143 OraDB19000_home4 19.16.0.0.220719 /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_4 CONFIGURED
306d949e-6a61-4d7f-83e2-b023a9c47586 OraDB11204_home2 11.2.0.4.210119 /u01/app/odaorahome/oracle/product/11.2.0.4/dbhome_2 CREATING
The second one, which we will focus on, is creating a new user-managed VM and manually deploying an 11gR2 DB home. I definitely prefer this solution because I don’t want to use an unsupported version on my ODA: I want to keep everything clean and supported. Creating a user-managed VM needs some work, but this VM won’t have any link to my ODA system; everything will run on its own inside the VM.
License considerations
You should be careful when using user-managed VMs with Oracle databases inside. If you run on Standard Edition, you normally have 1 license per socket on your system, and you can use as many cores as you want. When using Enterprise Edition, you will either limit the available number of cores on your ODA with odacli update-cpucore, or you will use CPU Pools for DB Systems and run all your DB Systems in these CPU Pools. Configuring a VM CPU pool (which is different from the DB Systems’ CPU pool) for databases in user-managed VMs is not compliant with the Oracle licensing model. You’d better limit the total CPU cores of your ODA in this case. For example, if you need 6 cores for 19c DB Systems and 2 cores for your 11gR2, configure 8 cores on your ODA with odacli update-cpucore and then configure the 2 CPU pools accordingly.
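As a quick sketch of that last step (the value 8 is just the example from above, adapt it to your own sizing and check the current setting first):
odacli describe-cpucore
odacli update-cpucore -c 8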
Test environment
My test environment is based on a brand new ODA X9-2S deployed with 19.16 and using Standard Edition. I will use 19c DB Systems for most databases, and I will need a user-managed VM for 11gR2. The purpose of this user-managed VM is to decommission an old 11gR2 server and put all Oracle databases onto the ODA. The application linked to this 11gR2 database will also be decommissioned, but later. That’s why there is no plan to migrate this database to a newer version.
Setting up the user-managed 11gR2 VM
I will need a VM CPU pool for this “old” system:
odacli create-cpupool -n cpupool4olddbs -c 2 -vm
odacli list-cpupools
Name Type Configured on Cores Associated resources Created Updated
-------------------- ------------------ ------------- ----- ----------------------- ------------------------ ----------
cpupool4olddbs VM srv-bd3 2 NONE 2022-11-29 14:50:28 CET 2022-11-29 14:50:28 CET
cpupool4dbsystems DB_SYSTEM_SHARED srv-bd3 8 NONE 2022-11-29 14:49:55 CET 2022-11-29 14:49:55 CET
My VM will need filesystems, so I first need to create 2 VM storages, one for DATA and one for the Recovery Area:
odacli create-vmstorage -n VMsDATA -dg DATA -s 300G
odacli create-vmstorage -n VMsRECO -dg RECO -s 100G
odacli list-vmstorages
Name Disk group Volume name Volume device Size Mount Point Created Updated
-------------------- --------------- --------------- ------------------------------ ---------- ----------------------------------- ------------------------ ------------------------
VMsRECO RECO VMSRECO /dev/asm/vmsreco-115 100.00 GB /u05/app/sharedrepo/vmsreco 2022-11-30 09:34:38 CET 2022-11-30 09:34:38 CET
VMsDATA DATA VMSDATA /dev/asm/vmsdata-211 300.00 GB /u05/app/sharedrepo/vmsdata 2022-11-30 09:34:04 CET 2022-11-30 09:34:04 CET
Now I will create vdisks in these VM storages:
odacli create-vdisk -n prlinux11-data -vms VMsDATA -s 200G
odacli create-vdisk -n prlinux11-reco -vms VMsRECO -s 70G
odacli list-vdisks
Name VM storage Size Shared Sparse Created Updated
-------------------- -------------------- ---------- ---------- ---------- ------------------------ ------------------------
prlinux11-reco VMsRECO 70.00 GB NO NO 2022-11-30 09:39:51 CET 2022-11-30 09:39:51 CET
prlinux11-data VMsDATA 200.00 GB NO NO 2022-11-30 09:41:06 CET 2022-11-30 09:41:06 CET
Now, let’s create the VM. I will use an Oracle Linux distribution provided as an ISO file. Creating this VM will create a boot disk on the VM storage VMsDATA, and connect the 2 vdisks previously created. I will use my ODA’s IP address to map the VNC port.
odacli create-vm -n PRLINUX11 -m 32G -src /mnt/nas/V1009690-01.iso -vd prlinux11-data,prlinux11-reco -vc 2 -cp cpupool4olddbs -vn prnetwork -vms VMsDATA -s 50G -g "vnc,listen=10.146.107.31"
odacli describe-job -i 5d94d0b1-da6c-43d0-89ca-98f6e5a89cfa
Job details
----------------------------------------------------------------
ID: 5d94d0b1-da6c-43d0-89ca-98f6e5a89cfa
Description: VM PRLINUX11 creation
Status: Success
Created: November 30, 2022 10:05:41 AM CET
Message:
Task Name Start Time End Time Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Validate dependency resources November 30, 2022 10:05:41 AM CET November 30, 2022 10:05:41 AM CET Success
Validate resource allocations November 30, 2022 10:05:41 AM CET November 30, 2022 10:05:41 AM CET Success
Allocate resources November 30, 2022 10:05:41 AM CET November 30, 2022 10:05:41 AM CET Success
Provision new VM November 30, 2022 10:05:41 AM CET November 30, 2022 10:05:43 AM CET Success
Add VM to Clusterware November 30, 2022 10:05:43 AM CET November 30, 2022 10:05:44 AM CET Success
Save domain in ACFS November 30, 2022 10:05:44 AM CET November 30, 2022 10:05:44 AM CET Success
Create VM metadata November 30, 2022 10:05:44 AM CET November 30, 2022 10:05:44 AM CET Success
Persist metadata November 30, 2022 10:05:44 AM CET November 30, 2022 10:05:44 AM CET Success
odacli describe-vm -n PRLINUX11 | grep Display
Display Port: 10.146.107.31:3
Creating a VM is fast, as nothing is really created at this point.
Now I can use a VNC client connected to 10.146.107.31:5903 and deploy a Linux distribution. I would recommend using the same OS as your ODA, meaning an Oracle Linux 7.9, but you can use an older one if needed. Linux setup on this VM is quite typical; additional settings and packages will be deployed later using the preinstall package provided by Oracle.
Once Linux is deployed, let’s check the disks and configure LVM according to OFA naming:
fdisk -l /dev/vdb | grep GB
Disk /dev/vdb: 214.7 GB, 214748364800 bytes, 419430400 sectors
fdisk -l /dev/vdc | grep GB
Disk /dev/vdc: 75.2 GB, 75161927680 bytes, 146800640 sectors
pvcreate /dev/vdb
pvcreate /dev/vdc
vgcreate vg_oradata /dev/vdb
vgcreate vg_orareco /dev/vdc
lvcreate -L 30G -n lv_u01 vg_oradata
lvcreate -L 160G -n lv_data vg_oradata
lvcreate -L 60G -n lv_reco vg_orareco
mkdir /u01
mkdir -p /u02/app/oracle/oradata
mkdir -p /u03/app/oracle
mkfs.ext4 /dev/mapper/vg_oradata-lv_u01
mkfs.ext4 /dev/mapper/vg_oradata-lv_data
mkfs.ext4 /dev/mapper/vg_orareco-lv_reco
echo "/dev/mapper/vg_oradata-lv_u01 /u01 ext4 defaults 1 2" >> /etc/fstab
echo "/dev/mapper/vg_oradata-lv_data /u02/app/oracle/oradata ext4 defaults 1 2" >> /etc/fstab
echo "/dev/mapper/vg_orareco-lv_reco /u03/app/oracle ext4 defaults 1 2" >> /etc/fstab
mount -a
df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 16G 0 16G 0% /dev
tmpfs 16G 0 16G 0% /dev/shm
tmpfs 16G 8.6M 16G 1% /run
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/mapper/ol-root 44G 1.4G 43G 4% /
/dev/vda1 1014M 184M 831M 19% /boot
tmpfs 3.2G 0 3.2G 0% /run/user/0
/dev/mapper/vg_oradata-lv_u01 30G 45M 28G 1% /u01
/dev/mapper/vg_oradata-lv_data 158G 61M 150G 1% /u02/app/oracle/oradata
/dev/mapper/vg_orareco-lv_reco 59G 53M 56G 1% /u03/app/oracle
Let’s copy the ISO file onto the server for package setup. I will mount this ISO file, configure a local repository on it, and install the preinstall package. Other packages will also be needed for 11gR2:
mkdir /install
scp root@10.168.1.54:/mnt/nas/V1009690-01.iso /install/
mkdir /mnt/iso
mount -o loop /install/V1009690-01.iso /mnt/iso
rm -f /etc/yum.repos.d/*
vi /etc/yum.repos.d/local-oel7.repo
[OEL790]
name=Oracle Linux 7.9 x86_64
baseurl=file:///mnt/iso
gpgkey=file:///mnt/iso/RPM-GPG-KEY
gpgcheck=1
enabled=1
yum install oracle-database-preinstall-19c.x86_64
...
yum install compat-libstdc++-33-3.2.3-72.el7.x86_64
yum install gcc-4.8.5-44.0.3.el7.x86_64
yum install gcc-c++-4.8.5-44.0.3.el7.x86_64
yum install mlocate
Now I will do the 11gR2 setup using an image from the old system and the cloning method: I don’t want any changes regarding the binaries.
scp root@10.168.1.54:/mnt/nas/root_SRV-BD2_dbhome_11gR2.tgz /install/
chmod -R 755 /install/
chown -R oracle:oinstall /u01/
chown -R oracle:oinstall /u02/
chown -R oracle:oinstall /u03/
su - oracle
mkdir -p /u01/app/oracle/product/11.2.0.4/
mkdir -p /u01/app/oracle/network/admin
mkdir /u01/app/oracle/local
cd /u01/app/oracle/product/11.2.0.4/
tar xzf /install/root_SRV-BD2_dbhome_11gR2.tgz
cd /u01/app/oracle/product/11.2.0.4/dbhome_1/clone/bin
/u01/app/oracle/product/11.2.0.4/dbhome_1/clone/bin/clone.pl ORACLE_BASE="/u01/app/oracle" ORACLE_HOME="/u01/app/oracle/product/11.2.0.4/dbhome_1" ORACLE_HOME_NAME=OraDB11gHome1 OSDBA_GROUP=dba OSOPER_GROUP=oper
...
exit
/u01/app/oraInventory/orainstRoot.sh
/u01/app/oracle/product/11.2.0.4/dbhome_1/root.sh
I will disable SELinux and the firewall, as I don’t need them:
vi /etc/selinux/config
SELINUX=disabled
systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
systemctl stop firewalld
reboot
A relink of the binaries was needed:
su - oracle
export ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1
$ORACLE_HOME/bin/relink all
Now I need to configure a default listener:
su - oracle
$ORACLE_HOME/bin/netca -silent -responsefile /u01/app/oracle/product/11.2.0.4/dbhome_1/assistants/netca/netca.rsp
Parsing command line arguments:
Parameter "silent" = true
Parameter "responsefile" = /u01/app/oracle/product/11.2.0.4/dbhome_1/assistants/netca/netca.rsp
Done parsing command line arguments.
Oracle Net Services Configuration:
Profile configuration complete.
Oracle Net Listener Startup:
Running Listener Control:
/u01/app/oracle/product/11.2.0.4/dbhome_1/bin/lsnrctl start LISTENER
Listener Control complete.
Listener started successfully.
Listener configuration complete.
Oracle Net Services configuration successful. The exit code is 0
My system is now ready for “new” 11gR2 databases.
Next steps
The system is now deployed and ready to host a first database. Creating a database is done either with dbca or by restoring a database from a backup. Using a backup is definitely the best idea to keep the database as close as possible to the source one. You may need to use db_file_name_convert to remap the old datafile structure to the new one. I would probably create a pfile from the source database, make the needed changes and start my new instance with this pfile.
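As an illustration only (the DB name and all paths below are hypothetical and must be adapted to your own structure), the few pfile entries that typically need attention before starting the new instance could look like this:
# hypothetical excerpt of the adapted pfile initDB11.ora
*.db_name='DB11'
*.control_files='/u02/app/oracle/oradata/DB11/control01.ctl','/u03/app/oracle/DB11/control02.ctl'
*.db_file_name_convert='/oradata/DB11','/u02/app/oracle/oradata/DB11'
*.log_file_name_convert='/oradata/DB11','/u03/app/oracle/DB11'
The instance can then be started in nomount or mount state with this pfile before restoring the backup.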
Conclusion
Only consider this solution if no other one is possible. Remember that it comes without any support and without any performance guarantee. For sure, I wouldn’t recommend using this solution for a production database, and I definitely advise migrating all your old databases to 19c. Don’t forget that 11gR2 is more than 15 years old now, and you shouldn’t be deploying it in 2023.
The article Deploying 11.2.0.4 on a recent ODA appeared first on the dbi Blog.
Upgrade OEM 13c from R4 to R5
Upgrading Enterprise Manager Cloud Control can be tricky. In this blog I describe an upgrade I did for a client.
The source version is OEM13c R4

And the target version will be EM13c R5
Before starting, I did the following tasks:
- Copy the key into the repository
oracle@oemserverp01:/home/oracle/emc135/ [oms13cR4] emctl config emkey -copy_to_repos
Oracle Enterprise Manager Cloud Control 13c Release 4
Copyright (c) 1996, 2020 Oracle Corporation. All rights reserved.
Enter Enterprise Manager Root (SYSMAN) Password :
The EMKey has been copied to the Management Repository. This operation will cause the EMKey to become unsecure.
After the required operation has been completed, secure the EMKey by running "emctl config emkey -remove_from_repos".
oracle@oemserverp01:/u00/app/oracle/product/13.1.0.0/swlib/ [oms13cR4]
- Set the parameter “_allow_insert_with_update_check” to true
SQL> alter system set "_allow_insert_with_update_check"=true scope=both;
System altered.
SQL>
You can now stop the OMS. In any case, the upgrade tool will ask you to do this when needed:
oracle@oemserverp01:/home/oracle/ [oms13cR4] emctl stop oms -all
Oracle Enterprise Manager Cloud Control 13c Release 4
Copyright (c) 1996, 2020 Oracle Corporation. All rights reserved.
Stopping Oracle Management Server...
WebTier Successfully Stopped
Oracle Management Server Successfully Stopped
Oracle Management Server is Down
JVMD Engine is Down
Stopping BI Publisher Server...
BI Publisher Server Successfully Stopped
AdminServer Successfully Stopped
BI Publisher Server is Down
oracle@oemserverp01:/home/oracle/ [oms13cR4] emctl status oms
Oracle Enterprise Manager Cloud Control 13c Release 4
Copyright (c) 1996, 2020 Oracle Corporation. All rights reserved.
WebTier is Down
Oracle Management Server is Down
JVMD Engine is Down
BI Publisher Server is Down
oracle@oemserverp01:/home/oracle/ [oms13cR4]
The following zip files were downloaded:
oracle@oemserverp01:/home/oracle/emc135/ [EMRE13C] ll
total 8363828
-rw-r--r--. 1 oracle oinstall 1897494675 Jan 23 09:22 em13500_linux64-2.zip
-rw-r--r--. 1 oracle oinstall 2130750885 Jan 23 09:26 em13500_linux64-3.zip
-rw-r--r--. 1 oracle oinstall 1454982045 Jan 23 09:29 em13500_linux64-4.zip
-rw-r--r--. 1 oracle oinstall 1494500710 Jan 23 09:31 em13500_linux64-5.zip
-rwxr-xr-x. 1 oracle oinstall 1586801533 Jan 23 09:19 em13500_linux64.bin
oracle@oemserverp01:/home/oracle/emc135/ [EMRE13C]
To launch the upgrade, run the executable em13500_linux64.bin (do a chmod +x em13500_linux64.bin first):
oracle@oemserverp01:/home/oracle/emc135/ [rdbms19c] pwd
/home/oracle/emc135
oracle@oemserverp01:/home/oracle/emc135/ [rdbms19c] ./em13500_linux64.bin
Then I had a look at the file specified in the error raised by the installer, and I deinstalled the agents on the corresponding servers:
oracle@oemserverp01:/u00/app/oracle/product/13.1.0.0/swlib/ [EMRE13C] cat /tmp/OraInstall2023-01-23_09-37-33AM/extensibility_check/agent_side_compatibility_check_errors.txt
============================================================================================================================
Plug-in Agent Unsupported Version deployed
============================================================================================================================
Oracle Database dbserver01.example.com:3872 13.2.1.0.0
Oracle Database dbserver02.example.com:3872 13.2.2.0.0
oracle@oemserverp01:/u00/app/oracle/product/13.1.0.0/swlib/ [EMRE13C]
And then I continued the upgrade.
Execute the root.sh file when required:
[root@oemserverp01 ~]# /u00/app/oracle/product/13.5.0.0/omsr5/allroot.sh
Starting to execute allroot.sh .........
Starting to execute /u00/app/oracle/product/13.5.0.0/omsr5/root.sh ......
Check /u00/app/oracle/product/13.5.0.0/omsr5/install/root_oemserverp01.example.com_2023-01-23_12-06-44.log for the output of root script
Finished execution of /u00/app/oracle/product/13.5.0.0/omsr5/root.sh ......
[root@oemserverp01 ~]#

After the end of the process, we can validate the new version of our OEM:

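As an additional cross-check from the command line (just a suggestion from my side, not a required step), the banner printed by any emctl call should now show Release 5 instead of Release 4:
emctl status oms
Oracle Enterprise Manager Cloud Control 13c Release 5
...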
Hope this may help.
The article Upgrade OEM 13c from R4 to R5 appeared first on the dbi Blog.
How to submit a talk for a conference
I have already given some talks at conferences, and I have also had the honor of selecting talks for a conference. During this talk selection I found some excellent examples of how to write a proposal, but unfortunately also some examples that could be improved. My own learnings from proposals will also flow into this post, because I know it’s not always easy to find the right words.
Let’s have a look at the things that can be improved. Let’s assume you want to submit a talk about High Availability in PostgreSQL, maybe a comparison of different tools.
Title
First of all, you need a title. It should spark the interest of the audience. Be creative!
“HA in PGSQL” is maybe not the best choice to spark someone’s interest. Of course, the summary of your talk is more important. But if the title does not catch me, I am not even really interested in the summary.
Summary
The summary is the heart of your proposal, the heart of your talk. Take some time to write it. It takes days to prepare your talk, so please take more than two minutes to write your summary.
One liner
“I will talk about high availability in PostgreSQL”
Let’s be honest. Do you know what this talk is about now? Is it about replication in general? Is it about one tool? Is it about several tools? Is it a comparison? Will there be examples?
With all due respect: this is not enough!
Agenda
Publishing the agenda in the summary may look self-explanatory to you. But don’t just put the agenda 1:1 into your proposal:
- Explaining PostgreSQL
- Explaining Method 1
- Explaining Method 2
- Conclusion
So now I have a better idea of what you want to talk about than with the one liner. But isn’t it worth putting a bit more detail into your submission? A bit more passion?
After a small introduction of PostgreSQL and its possibilities to reach high availability, we will have a look at the pros and cons of different solutions (patroni, repmanager and autofailover) to achieve even better availability of your data.
If you really want to put your agenda into the summary already (though I think that takes away a bit of the surprise), feel free to do so. But some more sentences to give a better understanding of the goal of this talk would be great.
Spelling
Yes, this is still a topic we have to talk about. For example, if you submit to a PostgreSQL conference and write “Postgre” or “PostgresSQL”, the talk selection committee will ask themselves if you’re really familiar with the topic. The same is valid for “Orcl” or “Oracel”.
Always ask somebody to read your proposal. Especially if you’re submitting in another language than your mother tongue. It’s not necessary to write as good as a native speaker. But it should at least be understandable.
“I talk my speach about patroni and high available is good to having” is understandable, but you can be sure your talk won’t be chosen. The talk committee will ask themselves if your language skills are good enough to give a talk in another language.
And if you don’t have someone to read your summary, even MS Word can help you to find typos.
About you
The “About you” section is important because it should spark the audience’s interest. It should also give the talk selection committee an idea of who you are.
So please think about a short but interesting “About You” entry.
Where are you from? How long have you been working in your field? Please don’t put your full CV into the “About you” section. But a bit more than “Hi, my name is Julia” would be great as well.
Deadline
Last but not least: maybe the most important part of your submission.
Make yourself familiar with the deadline for a Call for Papers (CfP). It’s usually published on the conference homepage. Furthermore, the CfP is announced on the mailing lists, e.g. for PostgreSQL on pgsql-announce@lists.postgresql.org. Feel free to subscribe to it on www.postgresql.org. You can also follow your favourite conference on social media (Twitter / LinkedIn…).
In general, there are also reminders that the CfP will close soon. You don’t need to submit on the first day, but please submit before the CfP closes.
Usually, the organisation teams have a timeline as well as some time pressure during the organisation period. So please understand that the voting for the talks will start immediately after the CfP is closed and that later submissions or direct messages to the talk selection committee cannot be considered.
First-time speakers
Don’t be shy. Try it: I am sure you have something interesting to talk about. Don’t be sad if your talk is not chosen; give it another try at another conference. There are always talk committees that look for newcomers to give them a chance. We all started as first-time speakers. So be brave, we look forward to meeting you.
Conclusion
This blog reflects my personal impression of the talk submissions I have read during the last years. If you find any similarities to your own summaries, I am sorry; I tried to invent my own examples.
Of course, it’s always your own decision how you want to propose a talk. Find your own style; it should reflect your personality. But always keep in mind: Do I think this proposal is interesting? Do I want to spend my time listening to that talk?
Have fun with your next or even first proposal!
The article How to submit a talk for a conference appeared first on the dbi Blog.
Configuring a SMTP relay on Oracle Linux Server v8
For some internal DMK (https://www.dbi-services.com/fr/produits/dmk-management-kit/) development I had to configure an SMTP relay on my VM. The purpose of the SMTP relay is to send emails locally from DMK without any authentication. These emails will then be transferred to the SMTP server created by one of my colleagues on our AWS Cloud. This SMTP server is of course using authentication. To do so I installed and configured postfix. In this blog, I just want to share my experience with this installation, hoping it might help someone.
Installation of postfix
My lab is running Oracle Linux Server release 8.6.
[root@srv-ol8-ora ~]# cat /etc/oracle-release
Oracle Linux Server release 8.6
I have no postfix currently installed on my VM.
[root@srv-ol8-ora ~]# rpm -qa | grep postfix
I installed it.
[root@srv-ol8-ora ~]# dnf install -y postfix Last metadata expiration check: 0:09:18 ago on Mon 23 Jan 2023 09:13:46 AM CET. Dependencies resolved. ============================================================================================================================================================================================================================== Package Architecture Version Repository Size ============================================================================================================================================================================================================================== Installing: postfix x86_64 2:3.5.8-4.el8 ol8_baseos_latest 1.5 M Installing dependencies: libicu x86_64 60.3-2.el8_1 ol8_baseos_latest 8.8 M Transaction Summary ============================================================================================================================================================================================================================== Install 2 Packages Total download size: 10 M Installed size: 36 M Downloading Packages: (1/2): postfix-3.5.8-4.el8.x86_64.rpm 15 MB/s | 1.5 MB 00:00 (2/2): libicu-60.3-2.el8_1.x86_64.rpm 33 MB/s | 8.8 MB 00:00 ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ Total 38 MB/s | 10 MB 00:00 Running transaction check Transaction check succeeded. Running transaction test Transaction test succeeded. Running transaction Preparing : 1/1 Installing : libicu-60.3-2.el8_1.x86_64 1/2 Running scriptlet: libicu-60.3-2.el8_1.x86_64 1/2 Running scriptlet: postfix-2:3.5.8-4.el8.x86_64 2/2 Installing : postfix-2:3.5.8-4.el8.x86_64 2/2 Running scriptlet: postfix-2:3.5.8-4.el8.x86_64 2/2 Verifying : libicu-60.3-2.el8_1.x86_64 1/2 Verifying : postfix-2:3.5.8-4.el8.x86_64 2/2 Installed: libicu-60.3-2.el8_1.x86_64 postfix-2:3.5.8-4.el8.x86_64 Complete!
The appropriate package has been installed.
[root@srv-ol8-ora ~]# rpm -qa | grep postfix
postfix-3.5.8-4.el8.x86_64
Firewall configuration
If running a local firewall, this one needs to be configured to allow SMTP traffic.
[root@srv-ol8-ora ~]# firewall-cmd --zone=public --add-service=smtp --permanent
success
[root@srv-ol8-ora ~]# firewall-cmd --reload
success
Or if the firewall is not needed, which is the case on my side, I have simply deactivated it.
[root@srv-ol8-ora postfix]# systemctl stop firewalld
[root@srv-ol8-ora postfix]# systemctl disable firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Remove sendmail
If installed, sendmail should be removed as per oracle documentation : https://docs.oracle.com/en/learn/oracle-linux-postfix/#introduction
On my lab VM, sendmail is not installed.
[root@srv-ol8-ora ~]# rpm -qa | grep sendmail
Set Postfix as the default Mail Transfer Agent
[root@srv-ol8-ora ~]# alternatives --set mta /usr/sbin/sendmail.postfix
Enable and start the Postfix service
[root@srv-ol8-ora ~]# systemctl enable --now postfix
Created symlink /etc/systemd/system/multi-user.target.wants/postfix.service → /usr/lib/systemd/system/postfix.service.
Install SASL
SASL (Simple Authentication and Security Layer) is mandatory for the Postfix SMTP client.
We need both of the following packages: cyrus-sasl and cyrus-sasl-plain. I checked which cyrus packages are already installed and available.
[root@srv-ol8-ora postfix]# dnf list cyrus-sasl cyrus-sasl-lib cyrus-sasl-plain
Last metadata expiration check: 1:59:10 ago on Mon 23 Jan 2023 09:13:46 AM CET.
Installed Packages
cyrus-sasl-lib.x86_64      2.1.27-6.el8_5      @ol8_baseos_latest
Available Packages
cyrus-sasl.i686            2.1.27-6.el8_5      ol8_baseos_latest
cyrus-sasl.src             2.1.27-6.el8_5      ol8_baseos_latest
cyrus-sasl.src             2.1.27-6.el8_5      ol8_appstream
cyrus-sasl.x86_64          2.1.27-6.el8_5      ol8_baseos_latest
cyrus-sasl-lib.i686        2.1.27-6.el8_5      ol8_baseos_latest
cyrus-sasl-plain.i686      2.1.27-6.el8_5      ol8_baseos_latest
cyrus-sasl-plain.x86_64    2.1.27-6.el8_5
I have installed both needed packages.
[root@srv-ol8-ora postfix]# dnf install cyrus-sasl cyrus-sasl-plain Last metadata expiration check: 1:59:36 ago on Mon 23 Jan 2023 09:13:46 AM CET. Dependencies resolved. ============================================================================================================================================================================================================================== Package Architecture Version Repository Size ============================================================================================================================================================================================================================== Installing: cyrus-sasl x86_64 2.1.27-6.el8_5 ol8_baseos_latest 96 k cyrus-sasl-plain x86_64 2.1.27-6.el8_5 ol8_baseos_latest 47 k Transaction Summary ============================================================================================================================================================================================================================== Install 2 Packages Total download size: 144 k Installed size: 194 k Is this ok [y/N]: y Downloading Packages: (1/2): cyrus-sasl-plain-2.1.27-6.el8_5.x86_64.rpm 946 kB/s | 47 kB 00:00 (2/2): cyrus-sasl-2.1.27-6.el8_5.x86_64.rpm 1.6 MB/s | 96 kB 00:00 ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ Total 2.2 MB/s | 144 kB 00:00 Running transaction check Transaction check succeeded. Running transaction test Transaction test succeeded. Running transaction Preparing : 1/1 Installing : cyrus-sasl-plain-2.1.27-6.el8_5.x86_64 1/2 Running scriptlet: cyrus-sasl-2.1.27-6.el8_5.x86_64 2/2 Installing : cyrus-sasl-2.1.27-6.el8_5.x86_64 2/2 Running scriptlet: cyrus-sasl-2.1.27-6.el8_5.x86_64 2/2 Verifying : cyrus-sasl-2.1.27-6.el8_5.x86_64 1/2 Verifying : cyrus-sasl-plain-2.1.27-6.el8_5.x86_64 2/2 Installed: cyrus-sasl-2.1.27-6.el8_5.x86_64 cyrus-sasl-plain-2.1.27-6.el8_5.x86_64 Complete!
Checking…
[root@srv-ol8-ora postfix]# dnf list cyrus-sasl cyrus-sasl-lib cyrus-sasl-plain
Last metadata expiration check: 1:59:46 ago on Mon 23 Jan 2023 09:13:46 AM CET.
Installed Packages
cyrus-sasl.x86_64          2.1.27-6.el8_5      @ol8_baseos_latest
cyrus-sasl-lib.x86_64      2.1.27-6.el8_5      @ol8_baseos_latest
cyrus-sasl-plain.x86_64    2.1.27-6.el8_5      @ol8_baseos_latest
Available Packages
cyrus-sasl.i686            2.1.27-6.el8_5      ol8_baseos_latest
cyrus-sasl.src             2.1.27-6.el8_5      ol8_baseos_latest
cyrus-sasl.src             2.1.27-6.el8_5      ol8_appstream
cyrus-sasl-lib.i686        2.1.27-6.el8_5      ol8_baseos_latest
cyrus-sasl-plain.i686      2.1.27-6.el8_5      ol8_baseos_latest
Configure postfix
I have added the following needed parameters to the postfix configuration, including the SMTP server name and a reference to a file (sasl_passwd) storing the username and password.
[root@srv-ol8-ora ~]# postconf -e "relayhost = [email-smtp.eu-central-1.amazonaws.com]:587"
[root@srv-ol8-ora ~]# postconf -e "smtp_sasl_auth_enable = yes"
[root@srv-ol8-ora ~]# postconf -e "smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd"
[root@srv-ol8-ora ~]# postconf -e "smtp_sasl_security_options = noanonymous"
[root@srv-ol8-ora ~]# postconf -e "smtp_sasl_tls_security_options = noanonymous"
[root@srv-ol8-ora ~]# postconf -e "smtp_use_tls = yes"
I have created the file storing the username and password for sending emails. The file is called sasl_passwd and is stored in /etc/postfix. The format is [<smtp_server_name>]:<port> <username>:<password>.
[root@srv-ol8-ora ~]# vi /etc/postfix/sasl_passwd
[root@srv-ol8-ora ~]# cat /etc/postfix/sasl_passwd
[email-smtp.eu-central-1.amazonaws.com]:587 A*****************H:B******************z
I have changed the file permissions:
[root@srv-ol8-ora ~]# chmod 600 /etc/postfix/sasl_passwd
I have run postmap on the file to create a database-like file that postfix can read:
[root@srv-ol8-ora ~]# postmap /etc/postfix/sasl_passwd
And finally reloaded the postfix configuration.
[root@srv-ol8-ora ~]# postfix reload
postfix/postfix-script: refreshing the Postfix mail system
Test
In order to run a test, I have installed mailx.
[root@srv-ol8-ora ~]# dnf list mailx Last metadata expiration check: 0:52:42 ago on Mon 23 Jan 2023 09:13:46 AM CET. Available Packages mailx.src 12.5-29.el8 ol8_baseos_latest mailx.x86_64 12.5-29.el8 ol8_baseos_latest [root@srv-ol8-ora ~]# dnf install mailx Last metadata expiration check: 0:52:56 ago on Mon 23 Jan 2023 09:13:46 AM CET. Dependencies resolved. ============================================================================================================================================================================================================================== Package Architecture Version Repository Size ============================================================================================================================================================================================================================== Installing: mailx x86_64 12.5-29.el8 ol8_baseos_latest 257 k Transaction Summary ============================================================================================================================================================================================================================== Install 1 Package Total download size: 257 k Installed size: 491 k Is this ok [y/N]: y Downloading Packages: mailx-12.5-29.el8.x86_64.rpm 5.1 MB/s | 257 kB 00:00 ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ Total 4.7 MB/s | 257 kB 00:00 Running transaction check Transaction check succeeded. Running transaction test Transaction test succeeded. Running transaction Preparing : 1/1 Installing : mailx-12.5-29.el8.x86_64 1/1 Running scriptlet: mailx-12.5-29.el8.x86_64 1/1 Verifying : mailx-12.5-29.el8.x86_64 1/1 Installed: mailx-12.5-29.el8.x86_64 Complete!
I ran a test sending an email to my email address.
[root@srv-ol8-ora ~]# echo "This is a DMK test through SMTP relay." | mailx -s "DMK test" marc.wagner@dbi-services.com
I checked the log file.
[root@srv-ol8-ora ~]# tail -f /var/log/maillog
Jan 23 11:18:32 srv-ol8-ora postfix/pickup[47181]: 09472140C214: uid=0 from=
Jan 23 11:18:32 srv-ol8-ora postfix/cleanup[47187]: 09472140C214: message-id=
Jan 23 11:18:32 srv-ol8-ora postfix/qmgr[47182]: 09472140C214: from=, size=512, nrcpt=1 (queue active)
Jan 23 11:18:32 srv-ol8-ora postfix/smtp[47184]: 09472140C214: to=, relay=email-smtp.eu-central-1.amazonaws.com[3.74.180.161]:587, delay=0.24, delays=0.01/0/0.14/0.1, dsn=5.0.0, status=bounced (host email-smtp.eu-central-1.amazonaws.com[3.74.180.161] said: 554 Message rejected: Email address is not verified. The following identities failed the check in region EU-CENTRAL-1: root@srv-ol8-ora.localdomain, root (in reply to end of DATA command))
Jan 23 11:18:32 srv-ol8-ora postfix/cleanup[47187]: 44B7C140C22A: message-id=
Jan 23 11:18:32 srv-ol8-ora postfix/bounce[47186]: 09472140C214: sender non-delivery notification: 44B7C140C22A
Jan 23 11:18:32 srv-ol8-ora postfix/qmgr[47182]: 44B7C140C22A: from=, size=3090, nrcpt=1 (queue active)
Jan 23 11:18:32 srv-ol8-ora postfix/qmgr[47182]: 09472140C214: removed
Jan 23 11:18:32 srv-ol8-ora postfix/local[47188]: 44B7C140C22A: to=, relay=local, delay=0.01, delays=0/0/0/0, dsn=2.0.0, status=sent (delivered to mailbox)
Jan 23 11:18:32 srv-ol8-ora postfix/qmgr[47182]: 44B7C140C22A: removed
The problem comes from the fact that the sender email address, root@srv-ol8-ora.localdomain, is not authorized on the AWS SMTP server. Using the -r option in the mailx command will not help, as it does not work with SMTP. And in any case, I would have the same problem when DMK sends an email.
This is why I had to change the sender in the postfix configuration.
Changing the sender in the postfix configuration
To do so, I have created a new configuration file, sender_email, in /etc/postfix mapping the sender email addresses. This file has been referenced in the postfix configuration through the sender_canonical_maps parameter.
[root@srv-ol8-ora postfix]# vi sender_email
[root@srv-ol8-ora postfix]# cat sender_email
root@srv-ol8-ora.localdomain marc.wagner@dbi-services.com
oracle@srv-ol8-ora.localdomain marc.wagner@dbi-services.com
[root@srv-ol8-ora postfix]# postconf -e "sender_canonical_maps = hash:/etc/postfix/sender_email"
[root@srv-ol8-ora postfix]# postmap /etc/postfix/sender_email
[root@srv-ol8-ora postfix]# postfix reload
postfix/postfix-script: refreshing the Postfix mail system
New test
I ran a mailx command again.
[root@srv-ol8-ora ~]# echo "This is a DMK test through SMTP relay." | mailx -s "DMK test" marc.wagner@dbi-services.com
And checked the log file again.
[root@srv-ol8-ora ~]# tail -f /var/log/maillog
Jan 23 12:02:45 srv-ol8-ora postfix/pickup[57484]: 46AC1140C215: uid=0 from=
Jan 23 12:02:45 srv-ol8-ora postfix/cleanup[57682]: 46AC1140C215: message-id=
Jan 23 12:02:45 srv-ol8-ora postfix/qmgr[57485]: 46AC1140C215: from=, size=508, nrcpt=1 (queue active)
Jan 23 12:02:45 srv-ol8-ora postfix/smtp[57684]: 46AC1140C215: to=, relay=email-smtp.eu-central-1.amazonaws.com[52.28.191.33]:587, delay=0.45, delays=0.03/0.06/0.13/0.23, dsn=2.0.0, status=sent (250 Ok 01070185de4b762a-1c453360-ba6f-49f2-bc7e-508941e8cc7e-000000)
Jan 23 12:02:45 srv-ol8-ora postfix/qmgr[57485]: 46AC1140C215: removed
As we can see in the log file, the email sending is now clean. I could also confirm this by receiving the email in my inbox. I could test and validate the same from the DMK software as well.
The article Configuring a SMTP relay on Oracle Linux Server v8 appeared first on the dbi Blog.
SQL Server: Heap page deallocation and IndexOptimize
HEAP tables are a type of data structure that, unlike clustered index tables, do not have a specific ordering of their rows, which can make inserts and deletes faster.
They are often used for temporary “staging” tables.
This is all fine. But if your table is never dropped and is reused over time for data loading, you need to be aware of how SQL Server manages HEAP pages.
I was recently asked by a customer why a HEAP table would consume significantly more disk space than the same table with a clustered index. Do you have any idea why?
HEAP page deallocation
Well, SQL Server does not deallocate pages when doing a DELETE on a heap. When a row is deleted, the space is not marked as reusable; as a result, the space remains allocated and consumes more disk space.
This may seem surprising but it is documented in Microsoft Docs.
See: Space that a table uses is not completely released after you use a DELETE statement to delete data from the table in SQL Server
After you use a DELETE statement in Microsoft SQL Server to delete data from a table, you may notice that the space that the table uses is not completely released.
There’s also a great article by Paul Randal on the topic:
SQLskills SQL101: Why does my heap have a bunch of empty pages?
The table I was asked about is a heap table with a size of approximately 130GB. This table experiences a high volume of data modification, with almost the entire table being deleted and inserted again on a daily basis.
In this specific scenario, the best workaround would be to perform a TRUNCATE statement on the table prior to data loading, as this would clear out any unused pages and release the space back to the operating system.
Another workaround, which is not mentioned in the documentation, is to periodically perform maintenance on the HEAP with a table REBUILD; this will defragment the heap and make the space more contiguous, as well as release any unused space back to the operating system.
IndexOptimize and HEAP fragmentation
The Ola Hallengren maintenance script is a widely-used open-source solution for maintaining the performance and integrity of SQL Server databases. One of the features of the script is its ability to rebuild indexes to fix fragmentation.
However, the script does not include any specific functionality for rebuilding heaps or fixing heap fragmentation.
It has been two years since an issue was opened on GitHub for this feature, and we are still waiting for it to be implemented.
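Until this is implemented, heaps have to be identified and handled manually. A possible starting point, just a sketch to be adapted to your own thresholds, is to list all heaps of a database together with their physical stats:
-- List every heap in the current database with its page and record counts
SELECT OBJECT_SCHEMA_NAME(ps.object_id) AS schema_name
     , OBJECT_NAME(ps.object_id)        AS table_name
     , ps.page_count
     , ps.record_count
     , ps.avg_page_space_used_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ps
JOIN sys.indexes AS i
    ON i.object_id = ps.object_id AND i.index_id = ps.index_id
WHERE i.index_id = 0 -- index_id 0 means heap
ORDER BY ps.page_count DESC;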
Heap empty pages and IndexOptimize demo
Here is an example of creating a HEAP table that ends up with empty but still allocated pages. Running the IndexOptimize procedure will not address this table, and therefore will not release this unused space.
use master
go
DROP DATABASE IF EXISTS Heapo;
CREATE DATABASE Heapo;
GO
USE Heapo;
GO
CREATE TABLE HeapTest (
[c1] INT IDENTITY
, [c2] VARCHAR(4000) DEFAULT replicate('a', 4000)
, [c3] VARCHAR(4000) DEFAULT ''
);
GO
-- Insert 3000 rows, which will allocate 1500 pages
INSERT INTO HeapTest DEFAULT VALUES;
GO 3000
In this table, we can observe that there are 3000 rows with 1500 pages.
SELECT [page_count], [record_count], [avg_page_space_used_in_percent]
FROM sys.dm_db_index_physical_stats (DB_ID (N'Heapo'), OBJECT_ID (N'HeapTest'), 0, DEFAULT, 'DETAILED');
GO

Let’s delete all the rows.
-- Delete all the rows
DELETE FROM HeapTest;
GO
Rows are still allocated:

Now, what happens if we run Ola Hallengren’s index maintenance script?
exec master.dbo.IndexOptimize @Databases = 'Heapo'

Same thing: IndexOptimize does not do anything about HEAPs.
Rebuilding the table will release the unused pages:
ALTER TABLE HeapTest REBUILD

HEAP tables in SQL Server are a powerful data structure that can be used to store temporary data or large volumes of data that are inserted, updated, and deleted frequently.
However, it is important for DBAs to be aware of the issues with deallocations and to manually address HEAP issues as needed.
One strategy is to periodically rebuild the HEAPs using the ALTER TABLE … REBUILD command.
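If there are many heaps, the statements can also be generated dynamically. Here is a minimal sketch (no error handling, and keep in mind that REBUILD takes a lock on the table):
-- Generate and execute an ALTER TABLE ... REBUILD for every heap in the current database
DECLARE @sql nvarchar(max) = N'';
SELECT @sql += N'ALTER TABLE ' + QUOTENAME(SCHEMA_NAME(t.schema_id)) + N'.' + QUOTENAME(t.name) + N' REBUILD;' + CHAR(10)
FROM sys.tables AS t
JOIN sys.indexes AS i
    ON i.object_id = t.object_id AND i.index_id = 0; -- heaps only
EXEC sys.sp_executesql @sql;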
The article SQL Server: Heap page deallocation and IndexOptimize appeared first on the dbi Blog.
Kubernetes Python client on AWS EKS Step by Step Demo
The Kubernetes Python client is used to perform operations on Kubernetes resources in the cluster from your Python code.
Pre-requisites:
- A Running Kubernetes Cluster (Note: I am using AWS EKS cluster)
- Install Kubernetes client for Python with pip by using command: pip install kubernetes
- kubectl utility to verify the resources (Optional)
All the code files used in this video are present in my GitHub repo, and the link is in the video description.
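As a minimal illustration of what the client looks like in code (this snippet is my own sketch, not taken from the video, and it assumes your kubeconfig already points to the EKS cluster, e.g. after running aws eks update-kubeconfig):
from kubernetes import client, config

# Load credentials from the local kubeconfig (~/.kube/config)
config.load_kube_config()

v1 = client.CoreV1Api()

# List all pods in all namespaces and print namespace and name
pods = v1.list_pod_for_all_namespaces(watch=False)
for pod in pods.items:
    print(pod.metadata.namespace, pod.metadata.name)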
Openssl: How to automate (without hitting the carriage return many times)
I think nearly everyone who administers some PCs or servers has used openssl. And almost everything there is straightforward.
To create your own key and certificate, just run:
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 \
-keyout privateKey.key -out certificate.crt
............+..+.+.................+............+.+......+........+.+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++*.+....................+......+.+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++*.......+.+.....+...+..........+...............+....................+.+...+..+..........+........+......+.+...+.....+...+.......+..+.+...+...........+....+..+.......+.....+...............+................+......+......+...+......+...+...+..+......+......+.........+....+........+............+..........+.....+...+.......+..+...+.............+...+......+..............+....+...........+....+..+.+..+...+.............+............+...+..+.........+...+...............+...+..........+.........+...+...+...+...............+.........+..+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
But the problem with that approach:
..........+.....+.......+.........+..+.............+.....+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++*.......+.............+.........+..+....+..+...+.+......+...+.....+.........+.+.....+.+.....+...+.+.....+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++*..+......+............................+.....+....+..+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:
State or Province Name (full name) [Some-State]:
Locality Name (eg, city) []:
Organization Name (eg, company) [Internet Widgits Pty Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:
Email Address []:
You have to hit carriage return for every prompt after the 5 dashes, and then your certificate looks quite ugly (see the Issuer and Subject lines below, which only contain the default values):
openssl x509 -text -in certificate.crt -noout
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
0b:01:9a:aa:f1:59:69:33:84:7e:cf:89:69:0c:d5:80:61:82:b5:28
Signature Algorithm: sha256WithRSAEncryption
Issuer: C = AU, ST = Some-State, O = Internet Widgits Pty Ltd
Validity
Not Before: Jan 22 15:54:43 2023 GMT
Not After : Jan 22 15:54:43 2024 GMT
Subject: C = AU, ST = Some-State, O = Internet Widgits Pty Ltd
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
00:c6:7d:5a:9f:97:3d:43:9b:e0:19:2f:46:31:5c:
82:f0:42:ac:da:a9:e8:d0:91:e0:01:98:05:52:cf:
1c:4e:77:53:1a:96:5c:6a:6f:ca:5c:61:a4:5f:14:
12:ed:69:ae:50:bb:99:28:48:df:bc:f6:76:c1:63:
2b:51:55:ad:bb:62:9f:3a:2b:1f:e7:c3:fd:bb:45:
04:c3:88:ee:b1:ba:c6:e2:f7:f1:80:5b:ef:eb:04:
fb:ec:82:89:39:c6:33:68:0d:3e:36:62:36:e0:a0:
ff:21:5f:74:ad:d2:4b:d4:5d:c4:67:6b:90:a0:8f:
1e:4c:80:31:30:2e:8e:5e:9d:62:8a:1d:45:84:5f:
d3:09:46:fe:4f:8d:68:c6:54:e4:51:da:e0:64:f8:
5d:af:01:2e:79:0c:fe:0b:0f:d6:2e:1b:e6:eb:09:
ca:cc:16:3d:92:53:ae:3b:ad:da:67:a5:ef:69:30:
7f:e7:53:7c:dd:23:59:c8:8c:6b:b0:a9:fa:fc:4c:
c1:44:cf:3f:2f:91:f4:8c:b6:7c:d9:ae:82:6d:96:
aa:bb:51:07:3c:2b:12:24:e4:a3:7d:9b:ee:4b:7e:
f4:02:0e:bc:b4:35:bd:73:dc:6b:b4:34:36:57:48:
72:f2:91:60:2d:79:d9:44:3c:77:76:eb:c7:8a:00:
5f:75
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Subject Key Identifier:
F1:77:6C:19:76:FB:E4:DD:50:2A:1E:01:BE:A1:5C:48:3D:5A:40:68
X509v3 Authority Key Identifier:
F1:77:6C:19:76:FB:E4:DD:50:2A:1E:01:BE:A1:5C:48:3D:5A:40:68
X509v3 Basic Constraints: critical
CA:TRUE
Signature Algorithm: sha256WithRSAEncryption
Signature Value:
8a:28:28:12:6c:1e:e5:54:86:9b:6e:90:7a:ca:aa:a7:a1:b3:
b1:43:02:44:e8:9a:59:b2:d6:6e:36:c6:51:3b:9b:f4:91:47:
40:6f:cf:6d:de:86:8d:dd:2f:9e:44:4c:f8:d3:5a:d3:3a:ef:
d5:0d:e1:10:b6:64:34:ee:03:4a:f2:de:ff:da:db:a3:93:20:
13:85:2a:d6:9b:b2:0e:2c:2e:9c:f9:71:ff:32:3b:c3:6b:0a:
e7:98:2d:30:c9:a6:47:b7:72:84:bb:52:23:11:d6:b7:90:cb:
98:cd:59:16:b5:8f:70:46:c1:95:90:01:2f:7f:9c:22:ac:29:
8d:14:97:76:dd:06:56:f8:22:9d:f4:00:9f:40:3c:fb:c2:95:
63:48:50:ee:ad:17:1b:54:6b:60:0c:d5:3e:66:3b:00:0e:7a:
33:99:cc:4a:f6:dc:d1:e3:40:ea:8c:66:df:7e:92:e1:a5:e5:
72:0e:89:ba:87:43:0c:56:70:8c:f2:9b:77:dd:ca:03:8e:24:
fd:6b:51:d2:3b:b2:df:e4:ff:c2:3c:cb:ab:2e:cd:82:f4:69:
ad:a3:81:d7:95:d0:68:e1:3f:fc:50:4d:8b:14:b2:82:8c:19:
2b:06:8a:0e:ef:21:4b:68:4f:e3:1d:53:64:62:97:c8:35:45:
01:54:d9:10
To avoid that, you just have to extend your command with the following parameter:
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 \
-keyout privateKey.key -out certificate.crt \
-subj "/C=de/CN=schroff.special/OU=MyDepartment"
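To quickly verify that the subject really ended up in the certificate, you can print just the subject line of the file created above:
openssl x509 -in certificate.crt -noout -subject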
Typical attributes can be found here:
(Common Name, Organization, Organizational Unit, Country, Location)
PostgreSQL 16: reserved_connections
The maximum number of connections that PostgreSQL will allow is controlled by the max_connections parameter. By default this allows a maximum of one hundred connections:
postgres=# show max_connections;
max_connections
-----------------
100
(1 row)
In reality not one hundred connections are allowed for normal users, because some of those connections are reserved for superusers. This is controlled by the superuser_reserved_connections parameter which, by default, is limited to three:
postgres=# show superuser_reserved_connections;
superuser_reserved_connections
--------------------------------
3
(1 row)
This means that 97 connections are available to “normal” users. What happens if we set the maximum number of connections to one? With the default configuration of three for superuser_reserved_connections this seems somewhat weird. Anyway, let’s do it and see what happens:
postgres@debian11pg:/home/postgres/ [150] psql -c "alter system set max_connections=1"
ALTER SYSTEM
postgres@debian11pg:/home/postgres/ [150] pg_ctl restart
waiting for server to shut down.... done
server stopped
waiting for server to start....postgres: superuser_reserved_connections (3) must be less than max_connections (1)
stopped waiting
pg_ctl: could not start server
Examine the log output.
So, even if PostgreSQL allows us to set this, the instance will not start afterwards. Removing this configuration will allow us to start the instance again:
postgres@debian11pg:/home/postgres/ [150] sed -i '/^max_connections/d' $PGDATA/postgresql.auto.conf
postgres@debian11pg:/home/postgres/ [150] psql -l
List of databases
Name | Owner | Encoding | Collate | Ctype | ICU Locale | Locale Provider | Access privileges
-----------+----------+----------+-------------+-------------+------------+-----------------+-----------------------
postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | | libc |
template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | | libc | =c/postgres +
| | | | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | | libc | =c/postgres +
| | | | | | | postgres=CTc/postgres
(3 rows)
A configuration which makes more sense is this:
postgres@debian11pg:/home/postgres/ [150] psql -c "alter system set max_connections=4"
postgres@debian11pg:/home/postgres/ [150] pg_ctl restart
postgres@debian11pg:/home/postgres/ [150] psql -c "show max_connections"
max_connections
-----------------
4
(1 row)
This will allow one “normal” connection and three superuser connections, which easily can be verified:
postgres@debian11pg:/home/postgres/ [150] psql -c "create user u with password 'u' login"
CREATE ROLE
postgres@debian11pg:/home/postgres/ [150] psql -U u postgres &
[1] 16639
postgres@debian11pg:/home/postgres/ [150] psql -U u postgres &
[2] 16641
17:20:32 postgres@debian11pg:/home/postgres/ [150] psql: error: connection to server on socket "/tmp/.s.PGSQL.5442" failed: FATAL: remaining connection slots are reserved for no
Super user connections do still work, of course:
postgres@debian11pg:/home/postgres/ [150] psql
psql (15.0)
Type "help" for help.
postgres=# select count(*) from pg_stat_activity
This is how much control you have over connections up to PostgreSQL 15. With PostgreSQL 16, you’ll have an additional option because a new parameter and a new role have been introduced. The new role is called pg_use_reserved_connections:
postgres=# select version();
version
------------------------------------------------------------------------------------------------------
PostgreSQL 16devel on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
(1 row)
postgres=# \du *reserved*
List of roles
Role name | Attributes | Member of
-----------------------------+--------------+-----------
pg_use_reserved_connections | Cannot login | {}
postgres=#
The new parameter is called reserved_connections, and it is disabled by default:
postgres=# \dconfig *reserved*
List of configuration parameters
Parameter | Value
--------------------------------+-------
reserved_connections | 0
superuser_reserved_connections | 3
(2 rows)
Granting that role to a user, setting the parameter and restarting the instance will give you the new feature:
postgres=# create user u with login password 'u';
CREATE ROLE
postgres=# create user x with login password 'x';
CREATE ROLE
postgres=# grant pg_use_reserved_connections to u;
GRANT ROLE
postgres=# alter system set max_connections = 5;
ALTER SYSTEM
postgres=# alter system set reserved_connections = 1;
ALTER SYSTEM
postgres=# \! pg_ctl restart
waiting for server to shut down.... done
server stopped
waiting for server to start....2023-01-22 06:46:41.828 CET - 1 - 17283 - - @ - 0LOG: redirecting log output to logging collector process
2023-01-22 06:46:41.828 CET - 2 - 17283 - - @ - 0HINT: Future log output will appear in directory "pg_log".
done
server started
postgres=# select 1;
FATAL: terminating connection due to administrator command
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Succeeded.
postgres=# \dconfig *connections*
List of configuration parameters
Parameter | Value
--------------------------------+-------
log_connections | off
log_disconnections | off
max_connections | 5
reserved_connections | 1
superuser_reserved_connections | 3
(5 rows)
With this configuration user “x” will only be able to create one connection. Any additional connection request will fail:
postgres@debian11pg:/home/postgres/ [pgdev] psql -U x postgres &
[2] 17310
postgres@debian11pg:/home/postgres/ [pgdev] psql (16devel)
Type "help" for help.
[2]+ Stopped psql -U x postgres
postgres@debian11pg:/home/postgres/ [pgdev] psql -U x postgres &
[3] 17312
postgres@debian11pg:/home/postgres/ [pgdev] psql: error: connection to server on socket "/tmp/.s.PGSQL.5432" failed: FATAL: remaining connection slots are reserved for roles with privileges of pg_use_reserved_connections
[3] Exit 2 psql -U x postgres
User “u” will be able to create one connection, which comes from the reserved connections. Any additional connection request will fail as well, as the remaining slots are reserved for super users:
postgres@debian11pg:/home/postgres/ [pgdev] psql -U u postgres &
[3] 17318
postgres@debian11pg:/home/postgres/ [pgdev] psql (16devel)
Type "help" for help.
[3]+ Stopped psql -U u postgres
postgres@debian11pg:/home/postgres/ [pgdev] psql -U u postgres &
[4] 17320
postgres@debian11pg:/home/postgres/ [pgdev] psql: error: connection to server on socket "/tmp/.s.PGSQL.5432" failed: FATAL: remaining connection slots are reserved for superusers
[4] Exit 2 psql -U u postgres
Nice, so in addition to reserving connection slots for super users, you can also do that for normal users if you configure the system accordingly. Starting with PostgreSQL 16, of course.
The article PostgreSQL 16: reserved_connections appeared first on the dbi Blog.
Different number of where condition function call between direct and indirect calls
A brief history of data quality or – the birth of OMrun. Part 2: Turning frustration into solution
Imagine a meeting room (pick the one without windows…) with +25 application managers lined up around one big table. One after the other, they would report the status of their project with focus on the next MDP. And as you can imagine, the projects were always on “green”, especially when the date of the MDP was still far away in the future.
But the closer the MDP date came, the more difficult it would be for application managers to “hide” their challenges in delivering their tasks on time and in quality. Since there were so many interfaces between all the applications, the fellow delivery managers had quite a good overview of who might be in trouble delivering and who wouldn’t.
For quite some time, Frank himself was part of this meeting and observed it with a mixture of joy and frustration. Joy, because it was kind of funny to see the project leaders stammering around while trying to explain their difficult project status and still being able to report “green”. And frustration, because these meetings were long and tiring and sometimes reminded one more of a theatrical experience than of a serious business meeting. But most of all: these meetings were inefficient and extremely expensive if you imagine that +25 senior delivery managers were sitting in one room for +3h every few weeks.
Frank was always fascinated by the possibilities of data-driven decision making, and it was no surprise that he took his chance to think a little further. And after some reasoning, a business idea came to his mind: Why do we need to rely on the assessments, wordy declarations and “opinions” of application managers when all the evidence is right there in the data?! Because the statement whether or not an interface is working, or whether or not an application is doing what it should be doing, can be derived directly from the data.
So how about using data as an early detection system? Shouldn’t it be possible to create a tool to automate this process? You would need a tool, Frank reasoned, that is able to at least:
- compare data based on rules
- cope with different data formats
- measure and display the rule violations for simple further assessment
This was the starting point and there was no stopping Frank in developing an instrument that would later evolve into the powerful framework OMrun. So thanks to a rather inefficient and tiring series of meetings, the world can now use a highly configurable standard framework not only to measure data quality in heterogeneous system environments but also to perform data migrations or anonymize data.
The rest is history – and still evolving!
The article A brief history of data quality or – the birth of OMrun. Part 2: Turning frustration into solution first appeared on the dbi Blog.
AWS EKS Crossplane for Beginners
Recursion - Why is anchor value not repeated
See you in St Louis for Alliance 2023!
The HEUG Alliance 2023 conference begins next month! I've been reviewing the schedule, and there are some incredible sessions on the agenda.
I am presenting the following sessions:
Monday, Feb 27th
9:00 AM - 10:00 AM: PeopleSoft Fluid: Zero to Hero in 60 Minutes
1:00 PM - 2:00 PM: Getting the most out of PeopleSoft PeopleTools: Tips and Techniques
Tuesday, Feb 28th
9:00 AM - 10:00 AM: PeopleSoft Integration Strategies
See you there!
Hashicorp Vault. Install and quick first configuration
Keeping and using passwords securely in scripts is an often overlooked task, at least in the initial stages of development.
This behavior poses serious security problems, and quite often, especially in scripts, one can find plaintext passwords.
If the project uses Git, the revisions that contain the passwords remain visible in the project history.
The use of a vault should be implemented in the first phase of every project.
This post describes how to install and use the HashiCorp Vault, in a Podman container.
Official vault container image:
https://github.com/hashicorp/docker-vault
https://hub.docker.com/_/vault
As these tests are made on a brand new cloud Ubuntu VM, let's start from the beginning by installing podman:
root@blog-vault:~# apt-get install podman
....
Creating the vault user:
root@blog-vault:~# useradd vault --home /home/vault --create-home --shell /bin/bash
root@blog-vault:~# sudo su - vault
vault@blog-vault:~$
root@blog-vault:~# id vault
uid=1002(vault) gid=1002(vault) groups=1002(vault)
root@blog-vault:~# loginctl enable-linger 1002
Pulling the vault container image with podman:
ubuntu@blog-vault:~$ podman login registry.connect.redhat.com
Username: your_username
Password: ********
Login Succeeded!
vault@blog-vault:~$ podman pull registry.connect.redhat.com/hashicorp/vault:1.12.1-ubi
Trying to pull registry.connect.redhat.com/hashicorp/vault:1.12.1-ubi...
Getting image source signatures
Copying blob 4a6625bea753 done
Copying blob 7c43afe89fe5 done
Copying blob 66304ec43437 done
Copying blob 35f159681384 done
Copying blob baed4ec80d13 done
Copying blob c4a8cf07892e done
Copying blob 7ea744a06738 done
Copying config 11bd373e3f done
Writing manifest to image destination
Storing signatures
11bd373e3fd8c82d9e86c5143572ca79c90962ccc72cb15fab4449790bcdbd35
vault@blog-vault:~$ podman images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.connect.redhat.com/hashicorp/vault 1.12.1-ubi 11bd373e3fd8 2 months ago 350 MB
vault@blog-vault:~$ mkdir -p $HOME/vault/logs
vault@blog-vault:~$ mkdir -p $HOME/vault/file
vault@blog-vault:~$ mkdir -p $HOME/vault/config
vault@blog-vault:~$ cat vault/config/local.json
{
"storage": {
"file": {
"path": "/vault/file"
}
},
"listener": [
{
"tcp": {
"address": "0.0.0.0:8200",
"tls_disable": true
}
}
],
"default_lease_ttl": "168h",
"max_lease_ttl": "720h",
"ui": true
}
Start the vault container with podman:
vault@blog-vault:~$ podman run --cap-add=IPC_LOCK \
-v $HOME/vault/logs:/vault/logs \
-v $HOME/vault/config:/vault/config \
-v $HOME/vault/file:/vault/file \
-e 'SKIP_CHOWN=true' \
-e 'SKIP_SETCAP=true' \
-p 8200:8200 vault server
==> Vault server configuration:
Cgo: disabled
Go Version: go1.19.2
Listener 1: tcp (addr: "0.0.0.0:8200", cluster address: "0.0.0.0:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
Log Level: info
Mlock: supported: true, enabled: true
Recovery Mode: false
Storage: file
Version: Vault v1.12.1, built 2022-10-27T12:32:05Z
Version Sha: e34f8a14fb7a88af4640b09f3ddbb5646b946d9c
==> Vault server started! Log data will stream in below:
2023-01-10T09:22:18.628Z [INFO] proxy environment: http_proxy="" https_proxy="" no_proxy=""
2023-01-10T09:22:18.628Z [WARN] no `api_addr` value specified in config or in VAULT_API_ADDR; falling back to detection if possible, but this value should be manually set
2023-01-10T09:22:18.651Z [INFO] core: Initializing version history cache for core
Let's init the vault and save the root_token in an environment variable for easy use. Save the whole JSON output in an init_vault.json file.
vault@blog-vault:~$ curl -s --request POST --data '{"secret_shares": 1, "secret_threshold": 1}' http://127.0.0.1:8200/v1/sys/init | jq .
{
"keys": [
"cc86645102f32c100ddb5239dd231457e4944d55139cc63d9e6eb73b23244f59"
],
"keys_base64": [
"zIZkUQLzLBAN21I53SMUV+SUTVUTnMY9nm63OyMkT1k="
],
"root_token": "hvs.lkg7mE8OSF35JjR5va4Q6vwE"
}
vault@blog-vault:~$ export VAULT_ROOT_TOKEN="hvs.lkg7mE8OSF35JjR5va4Q6vwE"
vault@blog-vault:~$ export VAULT_UNSEAL_TOKEN="zIZkUQLzLBAN21I53SMUV+SUTVUTnMY9nm63OyMkT1k="
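The init call above only prints the JSON; a small variant that also keeps a copy in the init_vault.json file mentioned above (a vault can only be initialized once, so this would replace the call above rather than follow it) could look like this:
# sketch: initialize the vault and keep the full JSON answer on disk
curl -s --request POST \
  --data '{"secret_shares": 1, "secret_threshold": 1}' \
  http://127.0.0.1:8200/v1/sys/init | tee init_vault.json | jq .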
Unseal the vault using the saved VAULT_UNSEAL_TOKEN variable:
vault@blog-vault:~$ curl -s --request POST --data '{"key": "'"$VAULT_UNSEAL_TOKEN"'}' http://127.0.0.1:8200/v1/sys/unseal | jq .
{
"type": "shamir",
"initialized": true,
"sealed": false,
"t": 1,
"n": 1,
"progress": 0,
"nonce": "",
"version": "1.12.1",
"build_date": "2022-10-27T12:32:05Z",
"migration": false,
"cluster_name": "vault-cluster-fa1e0851",
"cluster_id": "2dbcd016-63e7-82ac-e684-847d6584a508",
"recovery_seal": false,
"storage_type": "file"
}
vault@blog-vault:~$ curl -s http://127.0.0.1:8200/v1/sys/init | jq .
{
"initialized": true
}
Enable the AppRole auth method. More on the AppRole authentication method: https://developer.hashicorp.com/vault/docs/auth/approle
vault@blog-vault:~$ curl -s \
--header "X-Vault-Token: $VAULT_ROOT_TOKEN" \
--request POST \
--data '{"type": "approle"}' \
http://127.0.0.1:8200/v1/sys/auth/approle
Get the AppRole auth method information:
vault@blog-vault:~$ curl -s \
--header "X-Vault-Token: $VAULT_ROOT_TOKEN" \
--request GET \
http://127.0.0.1:8200/v1/sys/auth/approle | jq .
{
"options": {},
"plugin_version": "",
"running_plugin_version": "v1.12.1+builtin.vault",
"running_sha256": "",
"type": "approle",
"description": "",
"external_entropy_access": false,
"uuid": "f8c3835f-f6d0-cfab-34a0-6337aa3267a5",
"deprecation_status": "supported",
"config": {
"default_lease_ttl": 0,
"force_no_cache": false,
"max_lease_ttl": 0,
"token_type": "default-service"
},
"accessor": "auth_approle_f341e77c",
"local": false,
"seal_wrap": false,
"request_id": "16755bed-2c14-f0ae-8dd4-5e4d16b530ae",
"lease_id": "",
"renewable": false,
"lease_duration": 0,
"data": {
"accessor": "auth_approle_f341e77c",
"config": {
"default_lease_ttl": 0,
"force_no_cache": false,
"max_lease_ttl": 0,
"token_type": "default-service"
},
"deprecation_status": "supported",
"description": "",
"external_entropy_access": false,
"local": false,
"options": {},
"plugin_version": "",
"running_plugin_version": "v1.12.1+builtin.vault",
"running_sha256": "",
"seal_wrap": false,
"type": "approle",
"uuid": "f8c3835f-f6d0-cfab-34a0-6337aa3267a5"
},
"wrap_info": null,
"warnings": null,
"auth": null
}
At this moment the vault is configured and the AppRole authentication method is activated.
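As a quick sanity check (the /v1/sys/auth endpoint is part of the standard Vault HTTP API and lists the enabled auth methods), the approle/ entry should now show up:
vault@blog-vault:~$ curl -s --header "X-Vault-Token: $VAULT_ROOT_TOKEN" http://127.0.0.1:8200/v1/sys/auth | jq '.data | keys'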
Configure the vault for user API usage
Create a policy
Create the policy oci_policy with the rights read/update/create on the vault secret path:
vault@blog-vault:~$ curl -s \
--header "X-Vault-Token: $VAULT_ROOT_TOKEN" \
--request PUT \
--data '{"policy":"\npath \"secret/data/\" {\n capabilities = [\"create\", \"update\", \"read\" ]\n}\n\npath \"secret/data/oci/\" {\n capabilities = [\"create\", \"update\", \"read\" ]\n}\n"}' \
http://127.0.0.1:8200/v1/sys/policies/acl/oci_policy
vault@blog-vault:~$ curl -s \
--header "X-Vault-Token: $VAULT_ROOT_TOKEN" \
--request GET \
http://127.0.0.1:8200/v1/sys/policies/acl/oci_policy | jq .
{
"request_id": "5ecd0b6d-1034-908d-3146-322ee531a6ea",
"lease_id": "",
"renewable": false,
"lease_duration": 0,
"data": {
"name": "oci_policy",
"policy": "\npath \"secret/data/\" {\n capabilities = [\"create\", \"update\", \"read\" ]\n}\n\npath \"secret/data/oci/\" {\n capabilities = [\"read\"]\n}\n"
},
"wrap_info": null,
"warnings": null,
"auth": null
}
The KV secrets engine is used to store arbitrary secrets. We use version 2, as this is the latest at the time of writing:
vault@blog-vault:~$ curl -s \
--header "X-Vault-Token: $VAULT_ROOT_TOKEN" \
--request POST \
--data '{ "type":"kv-v2" }' \
http://127.0.0.1:8200/v1/sys/mounts/secret
Create a named role (oci_role):
vault@blog-vault:~$ curl -s \
--header "X-Vault-Token: $VAULT_ROOT_TOKEN" \
--request POST \
--data '{"policies": ["oci_policy"]}' \
http://127.0.0.1:8200/v1/auth/approle/role/oci_role
Fetch the role id of oci_role:
vault@blog-vault:~$ curl -s \
--header "X-Vault-Token: $VAULT_ROOT_TOKEN" \
http://127.0.0.1:8200/v1/auth/approle/role/oci_role/role-id | jq -r ".data"
{
"role_id": "68c22d2b-adf0-2f88-ec07-d7c495c51e30"
}
# save this value
vault@blog-vault:~$ export VAULT_ROLE_ID="68c22d2b-adf0-2f88-ec07-d7c495c51e30"
Create a new secret id for oci_role:
vault@blog-vault:~$ curl -s \
--header "X-Vault-Token: $VAULT_ROOT_TOKEN" \
--request POST \
http://127.0.0.1:8200/v1/auth/approle/role/oci_role/secret-id | jq -r ".data"
{
"secret_id": "d7602ddb-a2db-8e1d-47e3-4ee57e0a9140",
"secret_id_accessor": "dd56e8ea-8207-88c8-6c1d-9ff6b1e2ab54",
"secret_id_num_uses": 0,
"secret_id_ttl": 0
}
# save the secret_id value
vault@blog-vault:~$ export VAULT_SECRET_ID="d7602ddb-a2db-8e1d-47e3-4ee57e0a9140"
vault@blog-vault:~$ curl -s --request POST --data '{"role_id": "'"$VAULT_ROLE_ID"'", "secret_id":"'"$VAULT_SECRET_ID"'"}' http://127.0.0.1:8200/v1/auth/approle/login | jq -r ".auth"
{
"client_token": "hvs.CAESIDvS7Q9FG9tt_PWCFls0ya-vZzR9RwwmC4vT63fqZe0_Gh4KHGh2cy5RM0ZDeUpNM2VwbU51Qm9RVXo1ZGk4ZVQ",
"accessor": "2wXPJf3Irw245R6jXaNmR60a",
"policies": [
"default",
"oci_policy"
],
"token_policies": [
"default",
"oci_policy"
],
"metadata": {
"role_name": "oci_role"
},
"lease_duration": 604800,
"renewable": true,
"entity_id": "dfb3bb81-6d8a-0fbb-1f7f-ce4c29e98019",
"token_type": "service",
"orphan": true,
"mfa_requirement": null,
"num_uses": 0
}
# save the client token id
vault@blog-vault:~$ export VAULT_CLIENT_TOKEN="hvs.CAESIDvS7Q9FG9tt_PWCFls0ya-vZzR9RwwmC4vT63fqZe0_Gh4KHGh2cy5RM0ZDeUpNM2VwbU51Qm9RVXo1ZGk4ZVQ"
Create the key named password with the value my-long-password:
vault@blog-vault:~$ curl -s \
--header "X-Vault-Token: $VAULT_CLIENT_TOKEN" \
--request POST \
--data '{ "data": {"password": "my-long-password"} }' \
http://127.0.0.1:8200/v1/secret/data/oci | jq -r ".data"
{
"created_time": "2023-01-19T14:02:27.075843452Z",
"custom_metadata": null,
"deletion_time": "",
"destroyed": false,
"version": 1
}
vault@blog-vault:~$ curl -s \
--header "X-Vault-Token: $VAULT_CLIENT_TOKEN" \
--request GET \
http://127.0.0.1:8200/v1/secret/data/oci | jq -r ".data.data"
{
"password": "my-long-password"
}
At this point we are able to get a stored key:value pair using the fetched $VAULT_CLIENT_TOKEN.
Restart the vault container to validate that the vault contents are retained. Pay attention to the shutdown traces: when the shutdown is triggered, the vault is sealed.
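The shutdown can be triggered from a second shell by stopping the container; a minimal sketch (the container id is a placeholder, to be taken from the podman ps output):
# from another shell on the host
vault@blog-vault:~$ podman ps
vault@blog-vault:~$ podman stop <container id>
The server log of the running container then shows the sealing happening: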
vault@blog-vault:~$ ==> Vault shutdown triggered
2023-01-19T14:05:42.570Z [INFO] core: marked as sealed
2023-01-19T14:05:42.570Z [INFO] core: pre-seal teardown starting
2023-01-19T14:05:42.570Z [INFO] rollback: stopping rollback manager
2023-01-19T14:05:42.570Z [INFO] core: pre-seal teardown complete
2023-01-19T14:05:42.570Z [INFO] core: stopping cluster listeners
2023-01-19T14:05:42.570Z [INFO] core.cluster-listener: forwarding rpc listeners stopped
2023-01-19T14:05:42.609Z [INFO] core.cluster-listener: rpc listeners successfully shut down
2023-01-19T14:05:42.609Z [INFO] core: cluster listeners successfully shut down
2023-01-19T14:05:42.610Z [INFO] core: vault is sealed
Restart the vault container with podman:
vault@blog-vault:~$ podman run --cap-add=IPC_LOCK -v $HOME/vault/logs:/vault/logs -v $HOME/vault/config:/vault/config -v $HOME/vault/file:/vault/file -e 'SKIP_CHOWN=true' -e 'SKIP_SETCAP=true' -p 8200:8200 vault server
==> Vault server configuration:
Cgo: disabled
Go Version: go1.19.2
Listener 1: tcp (addr: "0.0.0.0:8200", cluster address: "0.0.0.0:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
Log Level: info
Mlock: supported: true, enabled: true
Recovery Mode: false
Storage: file
Version: Vault v1.12.1, built 2022-10-27T12:32:05Z
Version Sha: e34f8a14fb7a88af4640b09f3ddbb5646b946d9c
==> Vault server started! Log data will stream in below:
2023-01-19T14:06:04.376Z [INFO] proxy environment: http_proxy="" https_proxy="" no_proxy=""
2023-01-19T14:06:04.377Z [WARN] no api_addr value specified in config or in VAULT_API_ADDR; falling back to detection if possible, but this value should be manually set
2023-01-19T14:06:04.398Z [INFO] core: Initializing version history cache for core
For the values of these variables, see the initial configuration of the vault:
vault@blog-vault:~$ export VAULT_ROLE_ID="68c22d2b-adf0-2f88-ec07-d7c495c51e30"
vault@blog-vault:~$ export VAULT_SECRET_ID="d7602ddb-a2db-8e1d-47e3-4ee57e0a9140"
vault@blog-vault:~$ export VAULT_UNSEAL_TOKEN="zIZkUQLzLBAN21I53SMUV+SUTVUTnMY9nm63OyMkT1k="
vault@blog-vault:~$ curl -s http://127.0.0.1:8200/v1/sys/init | jq .
{
"initialized": true
}
vault@blog-vault:~$ curl -s --request POST --data '{"key": "'"$VAULT_UNSEAL_TOKEN"'"}' http://127.0.0.1:8200/v1/sys/unseal | jq .
{
"type": "shamir",
"initialized": true,
"sealed": false,
"t": 1,
"n": 1,
"progress": 0,
"nonce": "",
"version": "1.12.1",
"build_date": "2022-10-27T12:32:05Z",
"migration": false,
"cluster_name": "vault-cluster-3e0948ec",
"cluster_id": "1eb1ee65-f32c-7cf3-a5ec-7975e1f9240a",
"recovery_seal": false,
"storage_type": "file"
}
vault@blog-vault:~$ export VAULT_CLIENT_TOKEN=$(curl -s --request POST --data '{"role_id": "'"$VAULT_ROLE_ID"'", "secret_id":"'"$VAULT_SECRET_ID"'"}' http://127.0.0.1:8200/v1/auth/approle/login | jq -r ".auth.client_token")
vault@blog-vault:~$ export MY_PASSWORD=$(curl -s --header "X-Vault-Token: $VAULT_CLIENT_TOKEN" --request GET http://127.0.0.1:8200/v1/secret/data/oci | jq -r ".data.data.password")
vault@blog-vault:~$ echo $MY_PASSWORD
my-long-password
All API endpoints can receive the ?help=1 parameter to output the endpoint help:
vault@blog-vault:~$ curl -s --header "X-Vault-Token:$VAUL_ROOT_TOKEN" http://127.0.0.1:8200/v1/secret?help=1 | jq .
{
"help": "Request: config\nMatching Route: ^config$\n\nConfigures settings for the KV store\n\n## PARAMETERS\n\n cas_required (bool)\n\n If true, the backend will require the cas parameter to be set for each write\n\n delete_version_after (duration (sec))\n\n
….
Installing and using the Vault is quick and easy. Dealing with the problem of storing and using passwords later in a project is much more complicated and harder to retrofit.
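To close the loop with the introduction, here is a minimal sketch of how a script could fetch the password at runtime instead of hardcoding it. It only reuses the API calls shown above; VAULT_ADDR, VAULT_ROLE_ID and VAULT_SECRET_ID are assumed to be provided to the script, for example through a protected environment file:
#!/bin/bash
# sketch: read a secret from the vault at runtime instead of hardcoding it
set -euo pipefail
VAULT_ADDR=${VAULT_ADDR:-http://127.0.0.1:8200}
# log in with AppRole and extract the client token
CLIENT_TOKEN=$(curl -s --request POST \
  --data '{"role_id": "'"$VAULT_ROLE_ID"'", "secret_id": "'"$VAULT_SECRET_ID"'"}' \
  "$VAULT_ADDR/v1/auth/approle/login" | jq -r '.auth.client_token')
# read the password stored under secret/data/oci
MY_PASSWORD=$(curl -s --header "X-Vault-Token: $CLIENT_TOKEN" \
  "$VAULT_ADDR/v1/secret/data/oci" | jq -r '.data.data.password')
# use "$MY_PASSWORD" in the rest of the script instead of a plaintext value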
Happy scripting!
The article Hashicorp Vault. Install and quick first configuration first appeared on the dbi Blog.
Containers for Developers
As I am working on developing a component for YaK, I have to stick to standards on how to write Ansible code as well as many other constraints regarding security. I always thought of Docker as a server-side component that facilitates deployment and scaling, but it is much more than that: in this blog, I will use it on my laptop to run different tools to improve my code and, finally, server security.
Lint
Despite embedded constraints to enforce consistent formatting, Ansible leaves some flexibility in how you write the code, so two components written by two different people can look quite different. Thus, linting my code is a must to make it more consistent and more robust.
Of course, it is possible to ensure that the code is fine with GitLab pipelines on every commit or merge, but if you never linted your code and run it on a whole component, you will have lots of failures and warnings to deal with. It is preferable to do it step by step, or better said, role by role.
It is also possible to install ansible-lint inside the yakenv container, but package dependency issues might arise.
The best solution I found, for the time being, was to use a container image provided by Ansible developers: ansible creator.
With this image, we have the benefit that comes with containers:
- we do not touch our yakenv image
- it is fully functional out of the box, or out of the container :).
- it is updated and maintained by the community
Starting the container is easy as triggering the command below and mapping the volume to my GIT repo:
docker run --rm -it -v /home/ols/GIT:/data ghcr.io/ansible/creator-ee
Next, in the container, we can simply let ansible-lint find all files to lint like this:
$ cd /data/components/weblogic_domain
$ ansible-lint --offline
A summary of the analysis will be shown at the end of the report:
Failed after min profile: 16 failure(s), 27 warning(s) on 148 files.
As you can see, there is a lot of work to do. This is why I prefer to do it role by role:
ansible-lint --offline roles/common/jdk-install/
Once all issues and warnings have been resolved, it is possible to enable a pre-commit hook that will automatically lint the code.
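A minimal sketch of such a hook, saved as .git/hooks/pre-commit and made executable (the role path and the way ansible-lint is invoked inside the creator-ee image mirror the commands above and are assumptions, not taken from the original post):
#!/bin/bash
# sketch: lint the component inside the creator-ee container before every commit
set -euo pipefail
REPO_ROOT=$(git rev-parse --show-toplevel)
# a non-zero exit code from ansible-lint aborts the commit
docker run --rm -v "${REPO_ROOT}:/data" -w /data/components/weblogic_domain \
  ghcr.io/ansible/creator-ee ansible-lint --offline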
Security Scanning
Even though it is delivered without any warranty, I like to run the testssl.sh script to avoid bad configuration of the SSL part of the component. As for ansible-lint, I could install it inside the YaK container or on the target server, but there is another solution: using a pre-configured container with all the necessary libraries, like openssl, for example.
Running this container is also very easy:
docker run --rm -ti drwetter/testssl.sh https://<server ip>:8443
The last argument is the URL to test. Then, it will run for about 5 minutes and test protocols, ciphers, the certificate, HTTP responses and a few known vulnerabilities. It will also simulate many different clients and operating systems.
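If the report should be archived or post-processed, testssl.sh can also write its findings to a file; a minimal sketch (the --jsonfile and --severity options come from the testssl.sh documentation, and the mounted output path is only an example):
docker run --rm -ti -v "$PWD/reports:/reports" drwetter/testssl.sh \
  --jsonfile /reports/testssl-report.json --severity MEDIUM https://<server ip>:8443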
What Next?
Containerizing client applications is great. And it works on WSL just as it does on Linux (thanks, containers!). Nevertheless, analyzing, understanding and resolving the generated reports still requires a certain level of expertise.
ansible creator also comes with molecule, a testing framework, which I am also studying and which might be the subject of another blog post.
The article Containers for Developers first appeared on the dbi Blog.
A brief history of data quality or – the birth of OMrun. Part 1: Complexity is only the first name
It all started more than 15 years ago when the core business of OMIS AG (the previous owner of OMrun) was consulting services.
At this time, senior consultant and CIO Frank Zeindler was engaged in a mandate for a major Swiss bank. Frank was a project leader and responsible for the further development of an application in the credit and loan department.
In large organizations, the process to deploy new software is a rather complex task. And major Swiss banks – certainly in the 2000s – are very large organizations.
To illustrate this fact, quickly picture the "application landscape" of that bank. The application landscape was a display of the application architecture showing all applications and interfaces involved in running the bank. This application landscape also existed as a printed version. As such, it was a piece of paper with the approximate dimensions of 1 m x 2 m, and the descriptions of the "boxes" and "arrows" were in font size 6. To sum it up: a significant number of applications and interfaces are needed to run a major bank…
The release of new software or new interfaces within the application architecture was performed in so-called MDPs (major deployment packages). At that time, such release changes were planned three times a year. You may say that this is not exactly what you would call CI/CD – but that's just how it was back then…
To coordinate, manage and orchestrate such MDPs, a highly skilled, tough and well-informed person from management level was needed. That person – let's call him Mr. Smith – would run the MDP meetings in a military kind of style in order to manage this "flea circus" of dependencies, priorities, special requests, regulatory requirements and business needs.
The MDP status meetings were somehow feared amongst the involved application managers, because nobody wanted to be the target of Mr. Smith’s critical questions and nobody wanted to be the person to endanger the MDP because he or she could not deliver his or her application until the defined deadline.
Stay tuned, part 2 of “A brief history of data quality or – the birth of OMrun” is coming soon!
The article A brief history of data quality or – the birth of OMrun. Part 1: Complexity is only the first name first appeared on the dbi Blog.
VirtualBox 7.0.6
VirtualBox 7.0.6 maintenance release has arrived. The downloads and changelog are in the usual places. I’ve installed it on Windows 10 and 11 machines with no drama. I’ve also run Packer builds of all my Vagrant boxes, which you can find on Vagrant Cloud here. So far so good. I’ve not had any drama. I’ll … Continue reading "VirtualBox 7.0.6"
The post VirtualBox 7.0.6 first appeared on The ORACLE-BASE Blog.