A collection of Python scripts that I have built to do things using the OCI APIs.
To use the OCI APIs, first install the OCI Python SDK using pip:
prompt> pip3 install oci
You should also have an OCI config set up locally. If you have multiple named profiles, for example for multiple tenancies or user accounts, those profiles can be used with most of these scripts.
These scripts:
- oci-analyze-exacs-costs-by-database.py
- oci-exacs-storage-used-csv.py
are designed to use a combination of OCI APIs as well as Python built-in and third-party libraries to "slice and dice" metric data from OCI, specifically the StorageUsed metric for a list of databases running on ExaCS. As additional API calls are added, more information, such as cost analysis data, could be included.
At the moment, with a compartment OCID, the script will do the following:
- Establish an OCI session using OCI Config information on the local machine (profiles supported too)
- Pull high level data for each ExaCS rack in the compartment
- List each database on each rack, and pull the StorageUsed metric for a number of days
- Average the data over the time period and output to the screen or a CSV file
Argument parsing (argparse) and CSV writing (csv) are among the Python built-ins used.
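The pull-and-average flow above can be sketched roughly as follows. This is a minimal illustration, not the scripts' actual code: the `oci_database` namespace, the query window, and the profile handling are assumptions, and the real scripts take these values via argparse.

```python
from statistics import mean

def average_metric(datapoints):
    """Average the 'value' field of a list of metric datapoints."""
    if not datapoints:
        return 0.0
    return mean(dp["value"] for dp in datapoints)

def pull_storage_used(compartment_ocid, days=7, profile="DEFAULT"):
    """Sketch of pulling StorageUsed via the Monitoring API (needs a valid OCI config)."""
    import datetime
    import oci  # pip3 install oci
    config = oci.config.from_file(profile_name=profile)
    monitoring = oci.monitoring.MonitoringClient(config)
    now = datetime.datetime.utcnow()
    details = oci.monitoring.models.SummarizeMetricsDataDetails(
        namespace="oci_database",  # assumption: the metric namespace may differ
        query="StorageUsed[1d].mean()",
        start_time=(now - datetime.timedelta(days=days)).isoformat() + "Z",
        end_time=now.isoformat() + "Z",
    )
    # One item per metric stream (i.e., per database); average its datapoints.
    for item in monitoring.summarize_metrics_data(compartment_ocid, details).data:
        points = [{"value": p.value} for p in item.aggregated_datapoints]
        print(item.dimensions.get("resourceName"), average_metric(points))
```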
This script (oci-policy-analyze-python.py) pulls all IAM policies from a tenancy or compartment hierarchy and organizes them by:
- Special Policies (admit/define/endorse)
- Dynamic Group Policies
- Service Policies
- Regular Policies
The script attempts to parse each statement into a list of tuples. Each tuple looks like:
(Subject) (Verb) (Resource) (Location) (Conditions)
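A parse along those lines can be sketched with a single regular expression. This is a hypothetical re-implementation for illustration; the real script's parsing logic may differ.

```python
import re

# One pattern for "Allow <subject> to <verb> <resource> in <location> [where <conditions>]".
STMT_RE = re.compile(
    r"^allow\s+(?P<subject>.+?)\s+to\s+(?P<verb>inspect|read|use|manage)\s+"
    r"(?P<resource>\S+)\s+in\s+(?P<location>.+?)"
    r"(?:\s+where\s+(?P<conditions>.+))?$",
    re.IGNORECASE,
)

def parse_statement(stmt):
    """Split one IAM policy statement into a (subject, verb, resource, location, conditions) tuple."""
    m = STMT_RE.match(stmt.strip())
    if not m:
        return None  # special statements (admit/define/endorse) need their own handling
    return (
        m.group("subject").lower(),
        m.group("verb").lower(),
        m.group("resource").lower(),
        m.group("location").lower(),
        (m.group("conditions") or "").lower(),
    )
```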
The script also outputs the result to a local file called output-ocid.json. This can be used to run jq commands for additional filtering.
Caching - To make large tenancies easier to work with when you want to run multiple filters or extractions, you can read the entire policy set from a cache using -c. Running the script creates or updates the cache; subsequent runs with -c attempt to load from it, so filter commands are near-instantaneous.
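The cache behavior described above amounts to a simple JSON round-trip. A minimal sketch, assuming a hypothetical cache file name (the real script's name and format may differ):

```python
import json
import os

CACHE_FILE = "policy-cache.json"  # assumption: the actual cache file name may differ

def save_cache(statements, path=CACHE_FILE):
    """Write the full parsed policy set to a local JSON cache."""
    with open(path, "w") as f:
        json.dump(statements, f)

def load_cache(path=CACHE_FILE):
    """Return the cached policy set, or None if no cache exists yet (run without -c first)."""
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return json.load(f)
```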
Output - To write the filtered output to a JSON file, provide -w. This will take whatever is filtered and put out to a JSON file for further analysis.
Filters - The internal tuple format the script parses statements into makes them easy to filter. The script supports filters via these parameters:
- [-sf SUBJECTFILTER]
- [-vf VERBFILTER]
- [-rf RESOURCEFILTER]
- [-lf LOCATIONFILTER]
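With statements held as tuples, each filter is just a case-insensitive substring match against one tuple field. A minimal sketch of how those four parameters could combine (not the script's actual code):

```python
def filter_statements(statements, subjectfilter=None, verbfilter=None,
                      resourcefilter=None, locationfilter=None):
    """Keep only tuples whose fields contain the given filter text (case-insensitive).

    Each statement is a (subject, verb, resource, location, conditions) tuple;
    filters left as None are not applied, so the flags compose freely.
    """
    filters = [(0, subjectfilter), (1, verbfilter),
               (2, resourcefilter), (3, locationfilter)]
    result = []
    for stmt in statements:
        if all(f is None or f.lower() in stmt[i].lower() for i, f in filters):
            result.append(stmt)
    return result
```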
The script starts wherever you tell it in the compartment hierarchy and recurses through all compartments. To run it at the tenancy root, omit -o. To start within a compartment hierarchy, pass -o compartment_ocid.
Optionally, if you use profiles in your OCI config (i.e., other than DEFAULT), pass -pr/--profile to select one. Omit it if you only have a DEFAULT profile defined.
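The compartment recursion can be sketched two ways: a pure depth-first walk, or letting the SDK do the recursion server-side with `compartment_id_in_subtree=True`. A minimal illustration under those assumptions (not the script's actual code):

```python
def walk_compartments(children_by_parent, root):
    """Depth-first list of compartment OCIDs under root (pure helper for illustration)."""
    found = [root]
    for child in children_by_parent.get(root, []):
        found.extend(walk_compartments(children_by_parent, child))
    return found

def list_all_compartments(root_ocid, profile="DEFAULT"):
    """Sketch: fetch the whole subtree in one paginated call (needs a valid OCI config)."""
    import oci  # pip3 install oci
    config = oci.config.from_file(profile_name=profile)
    identity = oci.identity.IdentityClient(config)
    resp = oci.pagination.list_call_get_all_results(
        identity.list_compartments, root_ocid,
        compartment_id_in_subtree=True, lifecycle_state="ACTIVE")
    return resp.data
```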
FUTURE: This code could be put into an OCI Function that maintains a current policy DB in Autonomous DB, with the ability to develop new policies as needed.
python3 oci-policy-analyze-python.py -o ocid1.tenancy.oc1..zzzzzzzzzz
python3 oci-policy-analyze-python.py -o ocid1.compartment.oc1..zzzzzzzzzz
python3 oci-policy-analyze-python.py --profile CUSTOMER -o ocid1.tenancy.oc1..zzzzzzzzz
The flags below can be used independently or in tandem:
- -sf/--subjectfilter   Filter all statement subjects by this text
- -vf/--verbfilter      Filter all verbs (inspect, read, use, manage) by this text
- -rf/--resourcefilter  Filter all resources (eg database or stream-family) by this text
- -lf/--locationfilter  Filter all locations (eg compartment name) by this text
# Filter statements by group ABC and verb manage
python3 oci-policy-analyze-python.py -o ocid1.compartment.oc1..zzzzzzzzzz -sf ABC -vf manage
# Filter alternate OCI profile tenancy level by compartment DEF
python3 oci-policy-analyze-python.py --profile CUSTOMER -o ocid1.tenancy.oc1..zzzzzzzzz -lf DEF
# Multiple filters from a cached set of policies (find all policy statements that allow MANAGE and apply to TENANCY)
python3 ./oci-policy-analyze-python.py --profile CUSTOMER -c -lf tenancy -vf manage
Use of 4 filters and -w to create JSON extract:
# Multiple filters from a cached set of policies (find all policy statements that allow MANAGE and apply to TENANCY)
python3 ./oci-policy-analyze-python.py --profile CUSTOMER -c -lf tenancy -sf performancemonitor -vf manage -rf metrics -w
Produces output like this:
[
{
"type": "regular",
"subject": "group 'dbperformancemonitor'",
"verb": "manage",
"resource": "metrics",
"location": "tenancy",
"conditions": "any {target.compartment.name = 'database', target.compartment.name = 'non_production_database', target.compartment.name = 'nonprod_exacs', target.compartment.name = 'production_database', target.compartment.name = 'prod_exacs'}",
"lineage": {
"policy-compartment-ocid": "ocid1.tenancy.oc1..xxx",
"policy-relative-hierarchy": "",
"policy-name": "DBMgmt_User_Policy",
"policy-ocid": "ocid1.policy.oc1..xxx",
"policy-text": "Allow group 'DBPerformanceMonitor' to manage metrics in tenancy where any {target.compartment.name = 'Database', target.compartment.name = 'Non_Production_Database', target.compartment.name = 'NonProd_Exacs', target.compartment.name = 'Production_Database', target.compartment.name = 'Prod_Exacs'}"
}
},
{
"type": "regular",
"subject": "group 'dbperformancemonitor'",
"verb": "manage",
"resource": "metrics",
"location": "tenancy",
"conditions": "any {target.compartment.name = 'database', target.compartment.name = 'non_production_database', target.compartment.name = 'nonprod_exacs', target.compartment.name = 'production_database', target.compartment.name = 'prod_exacs'}",
"lineage": {
"policy-compartment-ocid": "ocid1.tenancy.oc1..xxx",
"policy-relative-hierarchy": "",
"policy-name": "DBPerformanceMonitor_Policy",
"policy-ocid": "ocid1.policy.oc1..xxx",
"policy-text": "Allow group 'DBPerformanceMonitor' to manage metrics in tenancy where any {target.compartment.name = 'Database', target.compartment.name = 'Non_Production_Database', target.compartment.name = 'NonProd_Exacs', target.compartment.name = 'Production_Database', target.compartment.name = 'Prod_Exacs'}"
}
}
]
Script to show metrics history and specifically call out when a metric goes over or under a specific threshold. Alarms that watch multiple metric streams may stay in the FIRING state (not good) for a long time, without providing details of when each stream crossed the threshold set by the alarm (over or under). This script does that. It looks at XX days of history, takes a metrics query and a threshold value (similar to an alarm), then pulls all the data and shows only the points where it exceeds or falls below the threshold.
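The crossing detection itself reduces to a small state machine over each stream's sorted (timestamp, value) points. A minimal sketch, not the script's actual code, and it assumes each stream starts below the threshold:

```python
def threshold_crossings(points, threshold):
    """Return (timestamp, value, direction) events each time the series crosses threshold.

    points: iterable of (timestamp, value) pairs, assumed sorted by time.
    """
    events = []
    above = False  # assumption: the stream starts below the threshold
    for ts, val in points:
        now_above = val > threshold
        if now_above != above:
            events.append((ts, val, "exceeded" if now_above else "went below"))
        above = now_above
    return events
```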
Provide the required params as such:
prompt> python3 ./oci-metrics-alarm-history.py --help
usage: oci-metrics-alarm-history.py [-h] [-v] [-pr PROFILE] -c COMPARTMENTOCID -n NAMESPACE
[-r RESOURCEGROUP] [-d DAYS] -q QUERY -t THRESHOLD
options:
-h, --help show this help message and exit
-v, --verbose increase output verbosity
-pr PROFILE, --profile PROFILE
Config Profile, named
-c COMPARTMENTOCID, --compartmentocid COMPARTMENTOCID
Metrics Compartment OCID
-n NAMESPACE, --namespace NAMESPACE
Metrics Namespace
-r RESOURCEGROUP, --resourcegroup RESOURCEGROUP
Resource Group
-d DAYS, --days DAYS Days of data to analyze
-q QUERY, --query QUERY
Full metric query
-t THRESHOLD, --threshold THRESHOLD
Numeric threshold when crossed (will check value
Example, providing a profile (OCI config) and a complex query:
prompt> python3 ./oci-metrics-alarm-history.py -c ocid1.compartment.oc1..xxx -n oracle_appmgmt -t 95 -r host -d 60 -q 'FilesystemUtilization[6h]{fileSystemName !~ "/*ora002|/*ora003|/*ora004|/*ora005|/*ora006|/*ora007|/*ora008|/*ora009"}.mean()' -pr YYY
Using profile YYY.
Using 60 days of data
Using ocid1.compartment.oc1..xxx / oracle_appmgmt / host / FilesystemUtilization[6h]{fileSystemName !~ "/*ora002|/*ora003|/*ora004|/*ora005|/*ora006|/*ora007|/*ora008|/*ora009"}.mean() / threshold 95
Metrics Query: {
"end_time": "2023-01-24T11:20:29.554533Z",
"namespace": "oracle_appmgmt",
"query": "FilesystemUtilization[6h]{fileSystemName !~ \"/*ora002|/*ora003|/*ora004|/*ora005|/*ora006|/*ora007|/*ora008|/*ora009\"}.mean()",
"resolution": null,
"resource_group": "host",
"start_time": "2022-11-25T11:20:29.554533Z"
}
Metrics Result Size: 480
Host XXX File System /fwr/addr exceeded threshold ( t: 95 / val: 99.35100000000017 ) at 2023-01-13 17:00:00+00:00
Host YYY File System /epy/ora_export exceeded threshold ( t: 95 / val: 97.61499999999987 ) at 2023-01-13 17:00:00+00:00
Host ZZZ File System /u00 exceeded threshold ( t: 95 / val: 97.75488757396457 ) at 2023-01-13 17:00:00+00:00
Host ZZZ File System /u00 went below threshold ( t: 95 / val: 75.79682500000013 ) at 2023-01-17 23:00:00+00:00
Script iterates Regions and Compartments, lists OSS buckets, and formats the approximate size.
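The size formatting and bucket listing might look roughly like this. A sketch only: the Object Storage calls need a valid OCI config, and binary units are an assumption (the actual script's formatting may differ).

```python
def format_size(num_bytes):
    """Format a byte count with human-readable binary units."""
    n = float(num_bytes)
    for unit in ("B", "KiB", "MiB", "GiB", "TiB", "PiB"):
        if n < 1024 or unit == "PiB":
            return f"{n:.1f} {unit}"
        n /= 1024

def bucket_sizes(compartment_ocid, profile="DEFAULT"):
    """Sketch of listing buckets and their approximate size in one region."""
    import oci  # pip3 install oci
    config = oci.config.from_file(profile_name=profile)
    oss = oci.object_storage.ObjectStorageClient(config)
    namespace = oss.get_namespace().data
    for b in oss.list_buckets(namespace, compartment_ocid).data:
        # approximateSize is only returned when explicitly requested via fields
        detail = oss.get_bucket(namespace, b.name, fields=["approximateSize"]).data
        print(b.name, format_size(detail.approximate_size or 0))
```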
Script iterates PDBs in a compartment and prints information for the PDB and CDB if it shows as failed.
Script originally written for a customer. Has its own README.
These scripts use Kafka libraries to produce to and consume from OCI Streaming. Streaming must be enabled on the tenancy, with the necessary policies and permissions, and Stream Pools and Streams must be set up. See the help output:
prompt > python3 ./consume-kafka.py
usage: consume-kafka.py [-h] [-v] -p STREAMPOOL -u USERNAME -a AUTHTOKEN -t TENANCYNAME -s STREAM [-e ENDPOINT]
consume-kafka.py: error: the following arguments are required: -p/--streampool, -u/--username, -a/--authtoken, -t/--tenancyname, -s/--stream
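Those arguments map onto the SASL/PLAIN settings that OCI Streaming's Kafka-compatible endpoint expects. A minimal consumer sketch, assuming kafka-python as the client library and an Ashburn endpoint (the real scripts' library and defaults may differ):

```python
def kafka_username(tenancy_name, username, stream_pool_ocid):
    """Build the SASL/PLAIN username OCI Streaming's Kafka endpoint expects."""
    return f"{tenancy_name}/{username}/{stream_pool_ocid}"

def consume(stream_name, tenancy_name, username, auth_token, stream_pool_ocid,
            endpoint="cell-1.streaming.us-ashburn-1.oci.oraclecloud.com:9092"):
    """Sketch of a consumer using kafka-python (pip3 install kafka-python)."""
    from kafka import KafkaConsumer
    consumer = KafkaConsumer(
        stream_name,
        bootstrap_servers=endpoint,
        security_protocol="SASL_SSL",
        sasl_mechanism="PLAIN",
        sasl_plain_username=kafka_username(tenancy_name, username, stream_pool_ocid),
        sasl_plain_password=auth_token,  # an OCI auth token, not the console password
        auto_offset_reset="earliest",
    )
    for msg in consumer:
        print(msg.value)
```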