I took these notes while setting up Grafana and InfluxDB on Proxmox.
I hit a few minor issues so thought I’d post it here as a mini “How To” or reference for others.
NOTE: If you are just looking for a simple and light-weight way to monitor Proxmox stats (including memory, CPU, disk for your LXCs and VMs), check out the brief section on “Pulse” at the end of this page!
This setup allows me to easily monitor my Proxmox host and the VMs and LXCs it runs via a nice Grafana dashboard, with the data/metrics stored in InfluxDB.
Note that the default user:password for Grafana is admin:admin
Configure Proxmox
Next you need to set the Metrics Server used by Proxmox; this tells Proxmox to send all metrics about itself and the VMs and LXCs it runs to InfluxDB.
This is set under “Datacenter” in the Proxmox UI:
This looked straightforward too, but there were conflicting opinions on how to do it. I initially went with UDP, which didn’t work for me; there was nowhere to set any authentication and I wasn’t allowing anonymous access to InfluxDB, so I switched to HTTP, which then let me specify the InfluxDB credentials.
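If you’d rather check or set this outside the UI, the settings end up in /etc/pve/status.cfg on the Proxmox host. A sketch of the HTTP version is below – the server address and token are placeholders, and it’s worth double-checking the exact field names against the Proxmox docs for your version:

influxdb: influxdb
   server 192.168.0.10
   port 8086
   influxdbproto http
   organization proxmox
   bucket proxmox
   token <your-influxdb-token>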
Configure InfluxDB
I created a “proxmox” organisation and a “proxmox” bucket in InfluxDB.
I then created an API key/Token specifically for that proxmox bucket, which I used in the above pic.
To verify things were working between Proxmox and InfluxDB, I took a look in the data explorer:
You can see in that pic that InfluxDB has data on my VMs and LXCs, which it must have received from Proxmox, so I then knew my remaining issues were with the connection between InfluxDB <-> Grafana.
Configure Grafana
Initially I was getting “InfluxDB returned error: Unauthorized error reading influxDB” – hence the check above to confirm that Proxmox -> InfluxDB was working ok.
I couldn’t see anywhere in this version of Grafana to specify the Token for InfluxDB though – other screenshots on the ‘net had & used that option, but it wasn’t available for me 🙁
After some reading I learned you could set the Token by creating a new Custom HTTP Header called “Authorization” with the value “Token BXx…….7yBkw==” (that’s the word Token, a space, then the full Token you got from InfluxDB, all set as the Value for a new Custom HTTP Header called Authorization…)
This seemed surprisingly flaky to me, but it worked.
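If you want to sanity-check the token outside Grafana, a quick curl against the InfluxDB v2 API should list your buckets – the host and token below are placeholders:

curl -s -H "Authorization: Token <your-influxdb-token>" "http://<influxdb-host>:8086/api/v2/buckets?org=proxmox"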
My (working) connection details look like this:
Prior to adding that HTTP Header, I was getting a successful connection but “0 measurements found”.
The dashboard itself is imported from grafana.com – you don’t need to sign up there or anything else, just enter the ID 10048 like in this pic and it’ll pull the Dashboard down:
Now I was finally able to see data being populated in Grafana from my Proxmox node & its VMs & LXCs:
Happy days.
The Pulse option
A possible alternative to the above Grafana and InfluxDB stack is to use “Pulse” – this was new to me and I have recently set it up too (you can never have enough monitoring!).
This is a very lightweight and more focused option that is really quick and easy to set up.
While the InfluxDB and Grafana approach can be extended to cover a vast range of monitoring and alerting for all sorts of things (I have set up and used it in several large companies I’ve worked for), if all you really want is Proxmox monitoring without those possibilities, Pulse looks perfect.
This post takes a look at setting up monitoring and alerting in Kubernetes, using Helm and Kubernetes Operators to deploy and configure Prometheus and Grafana.
This platform is quickly and easily deployed to the cluster using a Helm Chart, which in turn uses a Kubernetes Operator to set up all of the required resources in an existing Kubernetes Cluster.
I’m re-using the Minikube Kubernetes cluster with Helm that was built and described in previous posts here and here, but the same steps should work for any working Kubernetes & Helm setup.
An example Grafana Dashboard for Kubernetes monitoring is then imported, and we take a quick look at monitoring of Cluster components with other dashboards.
It may sound like Helm and Operators do much the same thing, but they are different and complementary:
Helm and Operators are complementary technologies. Helm is geared towards performing day-1 operations of templatization and deployment of Kubernetes YAMLs — in this case Operator deployment. Operator is geared towards handling day-2 operations of managing application workloads on Kubernetes.
I’m reusing the Minikube cluster from previous posts, so start it back up with:
minikube start
which outputs the following in the console:
🎉 minikube 1.10.1 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.10.1
💡 To disable this notice, run: 'minikube config set WantUpdateNotification false'
🙄 minikube v1.9.2 on Darwin 10.13.6
✨ Using the virtualbox driver based on existing profile
👍 Starting control plane node m01 in cluster minikube
🔄 Restarting existing virtualbox VM for "minikube" …
🐳 Preparing Kubernetes v1.18.0 on Docker 19.03.8 …
🌟 Enabling addons: dashboard, default-storageclass, helm-tiller, metrics-server, storage-provisioner
🏄 Done! kubectl is now configured to use "minikube"
this all looks ok, and includes the minikube addons I’d selected previously. Now a quick check to make sure my local helm repo is up to date:
helm repo update
I then used this command to find the latest version of the stable prometheus-operator via a helm search:
helm search stable/prometheus-operator --versions | head -2
there’s no doubt a neater/builtin way to find out the latest version, but this did the job – I’m going to install 8.13.8:
Installing the prometheus operator using Helm into a new dedicated “monitoring” namespace takes just this one command:
helm install stable/prometheus-operator --version=8.13.8 --name=monitoring --namespace=monitoring
Ooops
that should normally be it, but for me, this resulted in some issues along these lines:
Error: Get http://localhost:8080/version?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
– looks like Helm can’t communicate with Tiller any more; I confirmed this with a simple helm ls, which also failed with the same message. This shouldn’t be a problem when v3 of Helm goes “tillerless”, but to fix this quickly I simply re-enabled Tiller in my cluster via Minikube Addons, using the helm-tiller addon seen in the startup output earlier:
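minikube addons enable helm-tiller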
verified things worked again with helm ls, then the helm install... command worked and started to do its thing…
New Operator and Namespace
Keeping an eye on progress in my k8s dashboard, I can see the new “monitoring” namespace has been created, and the various Operator components are being downloaded, started up and configured:
you can also keep an eye on progress with: watch -d kubectl get po --namespace=monitoring
this takes a while on my machine, but eventually completes with this console output:
NOTES:
The Prometheus Operator has been installed. Check its status by running:
kubectl --namespace monitoring get pods -l "release=monitoring"
Visit https://github.com/coreos/prometheus-operator for instructions on how to create & configure Alertmanager and Prometheus instances using the Operator.
kubectl get po --namespace=monitoring shows the pods now running in the cluster, and for this quick example the easiest way to get access to the new Grafana instance is to forward the pod’s port 3000 to localhost like this:
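kubectl --namespace monitoring port-forward <grafana-pod-name> 3000:3000

(the exact pod name will be something like monitoring-grafana-xxxx in your cluster – kubectl get po --namespace=monitoring will show it)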
the default user for this Grafana is “admin” and the password for that user is “prom-operator“, so log in with those credentials…
Grafana Dashboards for Kubernetes
We can now use the ready-made Grafana dashboards, or add/import ones from the extensive online collection, like this one here for example: https://grafana.com/grafana/dashboards/6417 – simply save the JSON file
then go to Grafana and import it with these settings:
and you should now have a dashboard showing some pretty helpful stats on your kubernetes cluster, its health and resource usage:
Finally a very quick look at some of the other inbuilt dashboards – you can use and adjust these to monitor all of the components that comprise your cluster and set up alerting when limits or triggers are reached:
All done & next steps
There’s a whole lot more that can be done here, and many other ways to get to this point, but I found this pretty quick and easy.
I’ve only been looking at monitoring of k8s resources here, but you can obviously set up grafana dashboards for many other things, like monitoring your deployed applications. Many applications (and charts and operators) come with prom endpoints built in, and can easily and automatically be added to your monitoring and alerting dashboards along with other datasources.
Helm and Tiller – what they are, when & why you’d maybe use them
Helm and Tiller – prep, install and Helm Charts
Deploying Jenkins via Helm Charts
and WordPress w/MariaDB too
Wrap up
The below are mostly my technical notes from this session, with some added blurb/explanation.
Helm and Tiller – what they are, when & why you’d maybe use them
From the Helm site:
“Helm helps you manage Kubernetes applications — Helm Charts help you define, install, and upgrade even the most complex Kubernetes application. Charts are easy to create, version, share, and publish — so start using Helm and stop the copy-and-paste.”
Helm is basically a package manager for Kubernetes applications. You can choose from a large list of Stable (or not so!) ready-made packages and use the Helm Charts to quickly and easily deploy them to your own Kubernetes Cluster.
This makes light work of some very complex deployment tasks, and it’s also possible to extend these ready-made charts to suit your needs, and to write your own Charts from scratch, or pass your own values to override default ones, or… many other interesting options!
For this session we are looking at installing Helm, reviewing some example Helm Charts and deploying a few “vanilla” ones to the cluster we created in the first half of the session. We also touch upon the life-cycle of Helm Charts – it’s similar to Docker’s – and point out some of the ways this could be extended and customised to suit your needs – more on this at a later date hopefully.
Helm and Tiller – prep, install and Helm Charts
First, installing Helm – it’s as easy as this, run on your laptop/host that’s running the Minikube k8s we set up earlier:
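The original screenshot isn’t reproduced here, but in the Helm 2 era this was a one-liner along these lines (treat the exact script URL as indicative, as the install method has changed since):

curl https://raw.githubusercontent.com/helm/helm/master/scripts/get | bash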
Tiller is the server-side part of Helm and is deployed inside your k8s cluster. It’s set to be removed with the release of Helm 3, but the basic functionality won’t really change. More details here: https://helm.sh/blog/helm-3-preview-pt1/
Next we do the Tiller prep & install – add RBAC for tiller, deploy via helm and take a look at the running pods:
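Again the screenshots aren’t reproduced here, but the standard Helm 2 steps were roughly these – the “tiller” service account name is the usual convention rather than anything mandatory:

kubectl --namespace kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller
kubectl --namespace kube-system get pods | grep -i tiller

Deploying Jenkins via Helm Charts
With Tiller up, deploying Jenkins with the release name “jenki” (which the commands below assume) would have been along the lines of:
helm install --name jenki stable/jenkins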
now get the URL for the Jenkins service from Minikube:
minikube service --url=true jenki-jenkins
Hit that URL in your browser, and grab the password in UI from Pods > Jenki and log in to Jenkins with the user “admin”:
That’s a Jenkins instance deployed via Helm and Tiller and a Helm Chart to our Kubernetes Cluster running inside Minikube via a VirtualBox VM… all done in a few minutes. And it’s all customisable, repeatable, highly scalable and awesome.
and WordPress w/MariaDB too
This was the “bonus demo” if my laptop wasn’t on fire – and thanks to some rapid cleaning up it managed fine – showing how quickly we could deploy a functional WordPress with MariaDB backend to our k8s cluster using the Helm Chart.
To prepare for this I did a helm ls to see all the things I had running, then helm delete --purge jenki, gave it a while to recover, then had to do
kubectl delete pods <jenkinpod>
before starting the WordPress Chart deployment with
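helm install --name wordpress stable/wordpress

(the release name here is illustrative – the stable/wordpress chart deploys WordPress and its MariaDB backend for you)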
That’s it – we covered a lot in this session, and plan to use this as a platform to explore Helm in more detail later, writing our own Helm Charts and providing our own customisations to them.
Update: this follow-on post runs through setting up Jenkins with Helm then creating Jenkins Pipelines that dynamically provision dockerised Jenkins Agents:
This is the first of two posts on Kubernetes and Helm Charts, focusing on setting up a local development environment for Kubernetes using Minikube, then exploring Helm for package management and quickly and easily deploying several applications to the cluster – NGINX, Jenkins, WordPress with a MariaDB backend, MySQL and Redis.
The content is taken from the practical/demo session I wrote and published in Github here:
One of the key objectives and challenges here was getting a useful local Kubernetes environment up and running as quickly and easily as possible for as wide an audience as we could – there’s so much to the Kubernetes ecosystem that it’s very easy to get side-tracked, and we could have (happily) spent a long time discussing the myriad of alternative possible solutions.
We plan to go “deeper” on all of this in future sessions and have an in-depth Helm session in the works, but for this session we were focused on creating a practical starting point.
</ramble>
Don
What is covered here:
Minikube – what it is (& isn’t) & why you’d use it (or not)
Kubernetes and Minikube components and concepts
setup for Mac and Linux
creating a first Kubernetes cluster in Minikube
minikube addons – what they are and how they can help you
minikube docker env – using DOCKER_HOST with minikube VM
Kubernetes dashboard with Heapster and Metrics Server – made easy by Minikube
kubectl – some examples and alternatives
example app – “hello (Kubernetes) world” minikube style with NGINX, scaling your world
Helm and Tiller – what they are, when & why you’d maybe use them
Helm and Tiller – prep, install and Helm Charts
Deploying Jenkins via Helm Charts
and WordPress w/MariaDB too
wrap up
Minikube – what it is (& isn’t) & why you’d use it (or not)
What it is, why you’d use it etc.
Local development of k8s – runs a single node Kubernetes cluster in a Virtual Machine on your laptop/PC.
All about making things easy for local development, it is not a production solution, or even close to it.
There are many other ways to run k8s, they all have their pros and cons and use cases. The slides from the Meetup covered this in more detail and include links for further info – they are available here:
Cleanup/prep – if required, remove any previous cluster & settings
minikube delete; rm -rf ~/.minikube
Creating a first Kubernetes cluster in Minikube
Here we create a first Kubernetes cluster with Minikube, then take a look around in & outside of the VM.
With the above initial setup done, it’s as simple as running this in a shell:
minikube start
Note you could optionally give this Cluster a name, if you are likely to have more than one for different branches of development for example. This is also where you could specify the VM provider if you want to use something other than VirtualBox – there are more details here:
This should produce output like the following, and it may well take a few minutes as the VM is downloaded and started, then a stack of Docker images are started up inside that….
At this point you should be able to see the minikube VM running in the VirtualBox GUI:
Now it’s running, we can connect from our local shell directly to the one inside the running VM by simply issuing:
minikube ssh
This will put you inside the VM where the Kubernetes Cluster is being run, and we can see and interact with the running components, for example:
docker images
should show all of the downloaded images:
and you could do this to see the running containers:
docker ps
Quitting out of the VM puts us back on the local host, where we can use kubectl to query the status of the Minikube cluster – the initial setup has told kubectl about the Minikube-managed Kubernetes Cluster, meaning there’s no other setup required here:
kubectl cluster-info
kubectl get nodes
kubectl describe nodes
minikube addons – what they are and how they can help you
Show some of the ways minikube makes things easier for local dev
First, take a moment to look around these two local folders:
ls -al ~/.minikube; ls -al ~/.kube
These are where Minikube keeps its settings and the VM Image, and where kubectl settings are persisted – and updated by Minikube.
With Minikube you’ve often got the option to either use kubectl directly, or to use some Minikube built-in features to make your life easier.
Addons are one of these features, allowing you to very easily add – or remove – functionality from the cluster like this:
minikube addons list
minikube addons enable heapster
minikube addons enable metrics-server
With those three lines we’ve taken a look at the available addons and their current status, and selected to enable both heapster and the metrics server. This was done to give us cpu and mem stats in the Kubernetes Dashboard, which we will set up in a moment. The output should look something like this:
minikube config view
shows the current state of the config – i.e. what changes have been made – so we can keep track of them easily.
kubectl --namespace kube-system get pods
now we can enable the dashboard:
minikube addons enable dashboard
and check again to see the current state
minikube addons list
we’ll connect to the Dashboard and take a look around in a moment, but first…
minikube docker env – using the DOCKER_HOST in your minikube VM – how & why
Minikube docker-env – setup local docker client to use minikube docker host
We’re going to look at connecting our local docker client to the docker host inside the Minikube VM. This is made easy by:
minikube docker-env
if you run that command on its own it will show you the settings it will export, and you can set them by doing:
eval $(minikube docker-env)
From then on, in that shell, your local docker commands will use the docker host inside Minikube.
This is very useful for debugging and local development – when you change and deploy anything to your Kubernetes Cluster, you can easily tail the logs or check for errors or issues. You can also do all of this via the dashboard or kubectl too if you prefer, but it’s another handy and powerful feature from Minikube.
The following image shows the result of running this command:
so we can now use our local docker client to run docker commands like…
docker ps
docker ps | grep -i metrics
docker logs -f <some container id>
etc.
Kubernetes dashboard with Heapster and Metrics Server – made easy by Minikube
Minikube k8s dashboard – here we will start up the k8s dashboard and take a look around.
We’ve delayed starting the dashboard up until after we enabled the metrics-server & heapster components we deployed earlier. By doing it in this order, the dashboard will automatically detect and use these components, giving us cpu & mem stats and a nicer looking dash, with no additional config required.
Starting the dashboard simply involved running
minikube dashboard
and waiting for a minute…
That should fire up your browser automatically, then you can take a look around at things like Default namespace > Nodes
and in the namespace kube-system > Deployments
and kube-system > Pods
You can see the logs and statuses of everything running in your k8s cluster – from the core components we covered at the start, to the dashboard, metrics and heapster we enabled recently, and the application we’re going to deploy and scale up soon.
kubectl – some examples and alternatives
# kubectl command line – look at kubectl and keep an eye on things
kubectl get deployment -n kube-system
kubectl get pods -o wide -n kube-system
kubectl get services
kubectl
example app – “hello (Kubernetes) world” minikube style with NGINX, scaling your world
Now we’ll deploy the most basic application we can – a “Hello World” style NGINX docker image.
It’s as simple as this, where nginx is the name of the docker image you want to deploy, hello-nginx is the label you want to give it, and port 80 is where you want it to listen:
kubectl run hello-nginx --image=nginx --port=80
that shouldn’t take long, and you can watch the progress like this:
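The screenshot isn’t reproduced here, but one way is:

kubectl get pods --watch

To complete the “scaling your world” part, exposing and scaling the deployment goes roughly like this – a sketch, and note that newer kubectl versions create a bare Pod from kubectl run rather than a Deployment, so these assume the older Deployment-generating behaviour:

kubectl expose deployment hello-nginx --type=NodePort
kubectl scale deployment hello-nginx --replicas=3
minikube service --url=true hello-nginx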
In Part I, Information Radiators, I covered what they are, what the main benefits are, and the approach I usually use to set them up. This post goes in to more technical detail on how I extract this data from Jenkins.
My usual setup/architecture for Jenkins Information Radiators goes something along these lines:
TV screens running Mozilla Firefox or Google Chrome in Kiosk Mode, and Tab Mix Plus set up to rotate tabs (if required)
JSP Pages served via Tomcat on Linux server (which also runs the data extracting script described below)
MySQL database on Linux server – contains tables with data pulled from Jenkins and other sources, and the config data too (which URLs to monitor)
And you’ll need some Jenkins instances/jobs to monitor too, obviously 🙂
The Jenkins XML API is very useful for automating tasks like this – if you simply append “/api/xml” to a Jenkins job URL, it will serve up an XML version. Note there is also a JSON API and a CLI and plenty of other options, but I’m using what suits me.
The Jenkins XML API
For example, if you go to one of your Jenkins jobs and add /api/xml like this:
That XML contains loads of very useful information inside handy XML tag descriptions – you just need a way to get at that data and then you can present it as you like…
XPath queries and the Jenkins XML API
So, to automate that, I used to extend that approach to query Jenkins via the XML API, using XPath queries to bring back just the data I actually wanted, quite like querying a database.
For example, wget’ing this URL would return just the current value of the <building> tag in the above XML:
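The original URL isn’t shown here, but the format – with a hypothetical host and job name – was along these lines:

wget -qO- "http://jenkins.example.com:8080/job/myjob/lastBuild/api/xml?xpath=//building"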
e.g. “true” or “false” – this was very useful and easy to do, but the functionality was removed/disabled in recent versions of Jenkins for security reasons, meaning that my processes that used it needed to be rewritten 🙁
Extracting the data – Plan B…
So, here’s the new solution I went for – the real scripts/methods do some error handling and cleaning up etc, but I’m just highlighting the main functions and the high level logic behind each of them here:
get_urls method:
query a table in MySQL that contains a list of the job names and URLs to monitor
for each $JOB_NAME found, it calls the get_file method, passing it the URL as a parameter (a rough sketch follows below).
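A minimal sketch of how that might look, assuming a hypothetical jenkins_jobs table with job_name and job_url columns:

get_urls() {
  # -N drops the column-header row from the mysql output
  mysql -N --user=monitor --password="$DB_PASS" dev \
    -e "SELECT job_name, job_url FROM jenkins_jobs" |
  while read -r JOB_NAME JOB_URL; do
    get_file "$JOB_URL"
  done
}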
get_file method:
this takes a URL param, and uses curl to fetch and save the XML data from that URL to a temporary file (“xmlfile”):
curl -sL "$1" | xmllint --format - > xmlfile
Note I’m using “xmllint --format” there to nicely format the XML data, which makes processing it later much easier.
get_data method:
this first calls “get_if_building” (see below) to see if the job is currently running or not, then it does:
TRUE_VAR="true"
if [[ "$IS_BUILDING" == "$TRUE_VAR" ]]; then
RESULT_TEXT="building..."
else
RESULT_TEXT=`grep "result>" xmlfile | awk -F\> '{print $2}' | awk -F\< '{print $1}'`
fi
get_if_building method:
this simply checks and sets the IS_BUILDING var like so:
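I haven’t kept the original line for this, but matching the style of the get_data snippet above it would be something like:

IS_BUILDING=`grep "<building>" xmlfile | awk -F\> '{print $2}' | awk -F\< '{print $1}'`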
My script then updates the MySQL database with the results from each check: success/failure, date, build number, user, change details etc
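As a rough illustration only – the table and column names here are hypothetical – each check ends with something along these lines:

mysql --user=monitor --password="$DB_PASS" dev -e "INSERT INTO job_results (job_name, result, checked) VALUES ('$JOB_NAME', '$RESULT_TEXT', NOW());"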
I then have JSP pages that read data from that table, and translate things like true/false in to HTML that sets the background colours (Red, Amber, Green), and shows the appropriate blocks and details per job.
If you have a few browsers/TVs or monitors showing these strategically placed around the office, developers get rapid feedback on the result of their code changes, which speeds up development, increases quality and reduces development time and costs – and they can be fun to watch and set up too 🙂
I previously wrote this page, Some PHP examples, detailing roughly how they were put together. This time I wanted to create a searchable database of famous quotations, and to focus on the MySQL side of things a bit more too (so that next time I will have a note of how I did it!).
I found a very nice CSV data file on http://thewebminer.com/download for free – I don’t really do Facebook much and don’t have a Twitter account so I thought/hoped they’d settle for a blog post in exchange…
After installing and setting up MySQL, connect to your database…
mysql --user=myuser --password=myusualpassword dev
-- or connect without specifying a database/schema and do "show databases;"
mysql> show tables;
+---------------+
| Tables_in_dev |
+---------------+
| areacodes |
| dictionary |
+---------------+
2 rows in set (0.02 sec)
For this little app, I want to create a new table with a field for each column in the CSV file, plus an auto_increment id field to make fetching random rows easier:
CREATE TABLE quotes (
id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
quote varchar(800),
author varchar(100),
genre varchar(100)
);
mysql> show tables;
+---------------+
| Tables_in_dev |
+---------------+
| areacodes |
| dictionary |
| quotes |
+---------------+
3 rows in set (0.00 sec)
mysql> describe quotes;
+--------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+--------+--------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| quote | varchar(800) | YES | | NULL | |
| author | varchar(100) | YES | | NULL | |
| genre | varchar(100) | YES | | NULL | |
+--------+--------------+------+-----+---------+----------------+
4 rows in set (0.06 sec)
ok, the table looks good, so I can load the CSV data file – note that I’ve got the “quotes35000.csv” file I downloaded from http://thewebminer.com/download sitting in the current directory:
mysql> load data local infile 'quotes35000.csv' into table quotes
-> fields terminated by ';'
-> lines terminated by '\n'
-> (quote, author, genre);
Query OK, 35002 rows affected (1.54 sec)
Records: 35002 Deleted: 0 Skipped: 0 Warnings: 0
that looks like it went well (“35002 rows affected”), time to check it:
mysql> select count(*) from quotes;
+----------+
| count(*) |
+----------+
| 35002 |
+----------+
1 row in set (0.04 sec)
mysql> select * from quotes where author like '%Einstein' and genre like 'attitude%';
+------+-----------------------------------------------------+-----------------+-----------+
| id | quote | author | genre |
+------+-----------------------------------------------------+-----------------+-----------+
| 4647 | Weakness of attitude becomes weakness of character. | Albert Einstein | attitude  |
+------+-----------------------------------------------------+-----------------+-----------+
1 row in set (0.02 sec)
All looking good, the row count and the returned query match what I’d expect having looked at the contents of the CSV file.
I also want to do a “random quote of the day thing”, so looked in to ways to do this in MySQL – my initial thought was to use something basic like “ORDER BY RAND() LIMIT 0,1;” to bring back one random row, but I guessed there may be better ways.
mysql> SELECT * FROM quotes WHERE id >= (SELECT FLOOR( MAX(id) * RAND()) FROM quotes ) ORDER BY id LIMIT 1;
+-----+--------------------------------------------------------------------------+-----------------+-------+
| id | quote | author | genre |
+-----+--------------------------------------------------------------------------+-----------------+-------+
| 84  | Men are like wine - some turn to vinegar, but the best improve with age. | Pope John XXIII | age   |
+-----+--------------------------------------------------------------------------+-----------------+-------+
1 row in set (1.54 sec)
then this…
mysql> SELECT * FROM quotes ORDER BY RAND() LIMIT 0,1;
+-------+-------------------------------------------------------------------------------------+----------------+-------+
| id | quote | author | genre |
+-------+-------------------------------------------------------------------------------------+----------------+-------+
| 29470 | The good die young, because they see it's no use living if you have got to be good. | John Barrymore | good  |
+-------+-------------------------------------------------------------------------------------+----------------+-------+
1 row in set (0.73 sec)
and found to my surprise that in this case, looking at the timings, the simple approach looks to be faster – probably because of the relatively small table and its simple structure?
Anyway, that’s the database side of things sorted, the next part is to put together some PHP code to allow searching for quotes based on author, partial quote or genre, and to write a simple “random quote” generator kind of thing.
My main interest in this is in doing both day-to-day maintenance tasks to support environments, and in scripting monitoring and preventative Jenkins jobs that report on various aspects of Oracle Database servers – these automated database monitors have proved very worthwhile, and often identify upcoming issues before they cause problems (e.g. expiring users, table spaces filling up, disabled constraints and triggers, etc.).
Find and kill sessions:
Connect:
sqlplus / as sysdba
Set the line size so things look better:
set linesize 999
Then run a query to show active users:
SELECT s.osuser, s.status, s.process, s.machine, s.inst_id, s.sid, s.serial#, p.spid, s.username, s.program FROM gv$session s JOIN gv$process p ON p.addr = s.paddr AND p.inst_id = s.inst_id WHERE s.type != 'BACKGROUND';
If you want to kill one, use the SID and SERIAL from the above:
ALTER SYSTEM KILL SESSION '{SID},{SERIAL}';
———————–
Tablespaces; finding, resizing and autoextending:
sqlplus / as sysdba
set linesize 999
List Oracle Database tablespace files:
SELECT FILE_NAME as FNAME, TABLESPACE_NAME as TSPACE,BYTES, AUTOEXTENSIBLE as AUTOEX, MAXBYTES as MAXB,INCREMENT_BY as INC FROM DBA_DATA_FILES;
From the above, get the file name for the Table Space that needs altered, and do something like this:
ALTER DATABASE DATAFILE '{/path to above TS file, eg /ora/path/undotbs_0001.dbf}' AUTOEXTEND ON NEXT 64m MAXSIZE 2G;
Starting and stopping databases…
sqlplus / as sysdba
startup
shutdown immediate
———————–
Find invalid objects:
Optionally filtered by owner(s) and without synonyms…
select owner || '.' || object_name || '[' || object_type || ']'
from dba_objects
where status = 'INVALID'
and object_type != 'SYNONYM'
and owner in ('SYSTEM','SYS','TOOLS','DEVUSER');
———————–
Check Constraints and Triggers:
SELECT * FROM all_constraints WHERE status <> 'ENABLED';
or filter by users:
SELECT * FROM all_constraints WHERE owner = 'ARBOR' and status <> 'ENABLED';
Triggers are similar:
select * from all_triggers where status <> 'ENABLED';
———————–
Check for locked/locking users:
those already locked:
select * from dba_users where username in ('SYSTEM','SYS','TOOLS','DEVUSER') and lock_date is not null;
or those about to be locked (I add this to my Jenkins Database monitoring jobs so you get some warning…):
select * from dba_users where expiry_date < (trunc(SYSDATE) + 7) and lock_date is null;
———————–
Check the Oracle Wallet:
Check to see if encryption is present:
select * from dba_encrypted_columns;
if that brings something back, then you can check the state of the Oracle Wallet:
SELECT status from v$encryption_wallet where status not like 'OPEN';
———————–
Running SQL scripts from Shell scripts:
This can be done in various ways, but I tend to either use this approach to simply run a file and exit:
echo "About to run ${SCRIPT_NAME} on ${SERVER}..."
echo exit | sqlplus ${DB_USER}/${DB_PASSWORD}@${SERVER} @/path/to/sql/scripts/${SCRIPT_NAME}.sql
echo "Script ${SCRIPT_NAME} complete." # now check the return code etc.
or sometimes a HEREDOC is more suitable, something like this example for checking database links work:
echo "Checking ${DBLINK} link for user ${DB_USER}..."
DBLINK_CHECK=$(sqlplus -s -l ${DB_USER}/${DB_PASS}@${ADM_DBASE} <<EOF
set echo off heading off feedback off
SELECT 'Link works' from dual@${DBLINK};
exit;
EOF
)
if [ $? -ne 0 ]
then
  echo "ERROR: Checking link ${DBLINK} as ${DB_USER} FAILED"
fi
———————–
If you find any of these useful or would like to suggest additions or changes please let me know.