Wednesday, October 23, 2013

Editing a PuTTY Session to always point at the correct AWS Instance.



Problem : PuTTY sessions are stored in the registry.

So we want to edit the registry to insert the new public DNS name after we have started the instance. ( Our intent is to start and stop it nightly, so we are lazy about picking up the new DNS name. )

We can get the dns name from the previous step.

Probably the best way to do this is to write and execute a registry .reg file.

Relevant key is
HKEY_CURRENT_USER\Software\SimonTatham\PuTTY\Sessions.


So... if we do something along the lines of exporting the relevant session hive and then substituting in the rest, i.e. like:

aws ec2 describe-instances --filters "Name=instance-id,Values=i-xxxx" --query "Reservations[*].Instances[*].{dz:PublicDnsName}" --output text > pdn1.out
set /p myvar=<pdn1.out
type firstpart.in > update.reg
echo "HostName"="%myvar%" >> update.reg
type lastpart.in >> update.reg
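
For reference, here is a hedged sketch of what the two static fragments might contain. The session name MyAWSBox is just an example, and the values in lastpart.in are whatever your exported session already had:

firstpart.in :

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\SimonTatham\PuTTY\Sessions\MyAWSBox]

lastpart.in ( the remainder of the exported values, e.g. ) :

"PortNumber"=dword:00000016
"Protocol"="ssh"

Then import the assembled file silently with regedit /s update.reg.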

Sub in the relevant instance-id and then run update.reg. Also, don't forget to edit the registry if you want to divert X window tunnels etc. for a VNC desktop; you need the internal IP for that, so that is a slightly different call, but it should not be excessive.

If you wanted to make that easy you could run it up as a .bat with admin rights to avoid the UAC prompt but .. as I said... exercise for the reader !


Using the AWS CLI to get a public hostname



We use this for the next step, amending PuTTY.


Assumptions :
  • We know the instance-id
  • We have a working CLI.
So we use aws ec2 describe-instances.


We use a filter to only get info for our instance.
We use a query to only output the PublicDnsName.

aws ec2 describe-instances --filters "Name=instance-id,Values=i-xxxxxx" --query "Reservations[*].Instances[*].{dz:PublicDnsName}" --output text



C:\Users\Chris\.aws>aws ec2 describe-instances --filters "Name=instance-id,Values=i-xxxxxxxx" --query "Reservations[*].Instances[*].{dz:PublicDnsName}" --output text
ec2-255-255-255-255.eu-west-1.compute.amazonaws.com


So now we have our public DNS name, and with that we can change our PuTTY settings !

Getting the Amazon AWS CLI Running.

Hi folks, we will talk about getting the Amazon AWS command line tools running.

We assume you have signed up with Amazon and given them your credit card etc etc.

First thing to do is download the Windows installer from


http://aws.amazon.com/cli/


https://s3.amazonaws.com/aws-cli/AWSCLI64.msi


This is 5.7 MB currently. So, when you run that:



Accept the license and click Next, Next, Next.




So the next thing to do is to follow the rest of  http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-set-up.html


If we look at our PATH setting now by opening a command window, we can see that the AWS CLI directory has been added to the end of the variable, usually c:\program files\Amazon\AWSCLI.
If we now try 

C:\Users\Chris>aws

usage: aws [options] [parameters]
aws: error: too few arguments

C:\Users\Chris>

it is evident we must do a lot of things to get it all configured to start talking to our instances. 

First thing to do is to look at getting the config file set up. 

This will need to be in a directory .aws, so usually %USERPROFILE%\.aws\config

C:\Users\Chris>mkdir .aws


C:\Users\Chris>cd .aws

C:\Users\Chris\.aws>dir
 Volume in drive C is OS
 Volume Serial Number is F619-A748

 Directory of C:\Users\Chris\.aws

23/10/2013  12:43    <DIR>          .
23/10/2013  12:43    <DIR>          ..
               0 File(s)              0 bytes
               2 Dir(s)  13,599,453,184 bytes free


So what will this config file contain ? Well, this is the important credential and region information.

We see an example in the getting-started guide:

[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
region = us-west-2


OK, so where do we get our values ?

Region should be easy, as it is where you will be doing most of your work; for me that will be eu-west-1. Yours will probably be different.

OK, that was easy. For our access key and secret: we got these when we first signed up for AWS. You can get them from https://portal.aws.amazon.com/gp/aws/securityCredentials

So edit the little config file with the relevant information in it.



 Directory of C:\Users\Chris\.aws

23/10/2013  12:56    <DIR>          .
23/10/2013  12:56    <DIR>          ..
23/10/2013  12:57               137 config
               1 File(s)            137 bytes
               2 Dir(s)  13,595,860,992 bytes free


C:\Users\Chris\.aws>


One point to note: if you are using notepad to edit the config file, it will save it as config.txt by default, which is wrong. The file needs to be called just config, so quote the filename in the Save dialog ( or rename it afterwards ).
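
One way to sidestep the notepad extension trap entirely is to create the file from the command prompt. A minimal sketch, reusing the example keys from the documentation above:

cd /d %USERPROFILE%\.aws
(
echo [default]
echo aws_access_key_id = AKIAIOSFODNN7EXAMPLE
echo aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
echo region = eu-west-1
) > config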

OK so let's give it a try 

C:\Users\Chris\.aws>aws ec2 describe-regions|more

{
    "Regions": [
        {
            "Endpoint": "ec2.eu-west-1.amazonaws.com",
            "RegionName": "eu-west-1"
        },
        {
            "Endpoint": "ec2.sa-east-1.amazonaws.com",
            "RegionName": "sa-east-1"
        },
        {
            "Endpoint": "ec2.us-east-1.amazonaws.com",
            "RegionName": "us-east-1"
        },
        {
            "Endpoint": "ec2.ap-northeast-1.amazonaws.com",
            "RegionName": "ap-northeast-1"
        },
        {
            "Endpoint": "ec2.us-west-2.amazonaws.com",
            "RegionName": "us-west-2"
        },
        {
            "Endpoint": "ec2.us-west-1.amazonaws.com",
-- More  --



OK... all working

In the next post we will show how to grab some value out of this.

Tuesday, August 27, 2013

Puppet Certificate Store.

If you are using Puppet, and in particular if you are using it to install modules from the main Puppet repository, you may run into an issue.

Specifically, this is when puppet module install complains that the Puppet server certificate is not valid.


Environment : Red Hat 6.4, latest Puppet version.

After a lot of messing around with openssl ( which turned out not to be the problem )  we went back to concentrating on Puppet.

Puppet expects to see the list of trusted server certificates in

/etc/pki/tls/cert.pem ( we only found this via strace -f )

if it cannot find it then you will get the error.
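
If you want to reproduce the strace hunt yourself, something along these lines will show the attempted open ( the module name here is just an example ):

strace -f -e trace=open puppet module install puppetlabs-stdlib 2>&1 | grep cert.pem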

In a default setup this cert.pem file doesn't exist and needs to be linked in.

The list of certs is happily available in the certs/ca-bundle.crt file.

So, from /etc/pki/tls :

ln -s certs/ca-bundle.crt cert.pem

and now Puppet module install works just fine.

Regards

Chris.

Monday, June 17, 2013

VMWARE tools on OEL6



Problem : installing VMware Tools on Oracle Enterprise Linux 6.

Symptom : vmware-config-tools cannot find the kernel headers, even though yum states they are installed.
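
The usual mismatch, as a hedged check: on OEL6 the running kernel is typically the UEK one, while the devel package yum installed may be for the Red Hat-compatible kernel. Compare the two:

uname -r
rpm -qa | grep 'kernel.*devel'

If the version from uname -r has no matching devel package, that is the problem.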

Solution :   yum -y install kernel-uek-devel.x86_64


then re-run vmware-config-tools. 

All done

Regards

Chris. 

Cloud Control 12c Miscellaneous issues



Cloud Control refused to start at boot ( neither OMS nor agent started ). This was an upgraded Cloud Control; the upgrade took several restarts of the process, but completed successfully.

After much shell script tracing, the issue turned out to be a messed-up oragchomelist file in /etc; there were several wrong entries in it.

For a non-oms machine the config of this file should be a single line

$AGENT_HOME:$AGENT_INSTANCE_HOME

see Metalink note 1386113.1 for more details.

Further, if this is actually an OMS machine there will be a 2nd line :

$AGENT_HOME:$AGENT_INSTANCE_HOME
$OMS_HOME

where $OMS_HOME=$MIDDLEWARE_HOME/oms , usually.
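
So a healthy /etc/oragchomelist on an OMS machine would look something like this ( the paths here are hypothetical, substitute your own ):

/u01/app/oracle/agent/core/12.1.0.2.0:/u01/app/oracle/agent/agent_inst
/u01/app/oracle/middleware/oms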

Once you set this up properly, OMS and agents will all start up nicely.

Regards

Chris.

Friday, May 17, 2013

Upgrading to EM Cloud Control 12c

It looks like Linux 6.3 targets aren't supported... but they are; there is a bug fix on Metalink.

get what you need ( a .sql file ) from PUC: Linux 6.x Agents are shown as Unsupported in the Enterprise Manager 12c Upgrade Console [ID 1545450.1]




Thursday, May 2, 2013

Connecting to Work Repository in ODI

One problem I ran into when connecting to ODI was being unable to connect to a work repository remotely. After debugging, it came down to the fact that hostname resolution was not what was expected; I was using an IP address to connect, and this was what was causing the issue.


When ODI Studio has a connection created, for the master repository it uses the JDBC URL you type in, which is fine as host:port:sid.

However, when it comes to connecting to the Work repository, the studio goes back to the server and downloads the JDBC URL connection information for it from the Master Repository. This URL has the hostname for the database server embedded, NOT the IP address; if you cannot resolve that hostname, you won't be able to connect to the database server, and thus Work Repository connections will fail.

There are two resolutions

1: Enable name resolution of the db server, either via the hosts file or DNS ( example below )
2: Edit the Repository definition of the URL by clicking into the Work Repository definition from the Master, clicking the plug icon, then JDBC, then editing host= to be host=IP ADDRESS.
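
For option 1, a single hosts file entry on the client machine is enough; the hostname and address below are made up for illustration:

# /etc/hosts on Linux, or %SystemRoot%\System32\drivers\etc\hosts on Windows
10.0.0.15    dbserver01.example.com    dbserver01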


Once either of these are done, then it will work like a top !

The symptom of this issue is ODI-10182


Tuesday, April 2, 2013

Oracle Business Intelligence Blog: OBIEE 11.1.1.7.0 Available


Hi Folks -  OBIEE 11.1.1.7 Now out !   See the bloglink below for more details and some greatest hits.
Oracle Business Intelligence Blog: OBIEE 11.1.1.7.0 Available

Wednesday, March 20, 2013

11g OCM

Hurray

Just been awarded 11g Oracle Certified Master !  ( OCM)



Tuesday, March 12, 2013

Missing Graphs on OBIEE upgrade


Problem
You may upgrade or patch your OBIEE and find graphs missing

If you are upgrading to 11.1.1.6.2 or higher, note that you need Adobe Flash Player 10.1 installed in your browser. In some cases you may not be prompted for the Flash installer/upgrader... so go to Adobe.com, get the latest Flash player, and see if it solves the problem.


Regards

Chris

Using Fully Qualified Table Names in OBIEE

Problem Description

1.  You have created a read-only user on your Oracle production database, to use as a reporting user that OBIEE can connect as.
2.  You've used the Admin Tool to successfully map the tables in
3.  However as soon as you try and do anything more you get an error ORA-00942 table or view does not exist


Why ?

Solution :   OBIEE needs to be told to qualify the table names with the Schema Owner.

Go back to the Connection Pool definition and tick the box "Use Fully Qualified Table Names", and save the repo.

Problem Solved.


Later

Chris.

Friday, March 8, 2013

Adding a Connection Pool to OBIEE

Folks

Some recommendations

If you're adding an Oracle connection pool, the tendency is to use the TNS alias. This implies, though, the following problems:

1: The Admin client needs the TNS entry ( AND the TNS_ADMIN environment variable added to the Windows environment variables, which you may not have Admin access to do on your workstation )... so how to fix that ?

2:  Server needs the TNS entry in $ORACLE_HOME/network/admin, where Oracle_HOME=MW_HOME/Oracle_BI1 - e.g.
/u01/obidev/mw/Oracle_BI1/network/admin

This means you have a pain keeping them in sync, especially as databases get created/added etc.

Easy fix : just use the full connection string in the connection pool data source string:

(DESCRIPTION =    (ADDRESS_LIST =      (ADDRESS = (PROTOCOL = TCP)(HOST = mydbhost.org.com)(PORT= 1521))        )    (CONNECT_DATA =      (SERVICE_NAME = MYSERVICE)    )  )

( all on one line ). Now you don't have to maintain anything ! Yes, a 10g RAC string will be long, but it's a simple copy and paste.


FINALLY : if you are adding a Connection Pool, SAVE the definition and check it IN before importing Metadata.


I ran into a problem on the import that locked the Connection Pool definition as checked OUT and had to bounce nqsserver ( nQSError: 36004 ).

Upgrading OBIEE




Points to watch out for


1.  Follow the instructions

2. Do not forget to upgrade the Catalog; this has caused me more stress than anything else, even for point fixes like .3 to .8

This means editing the Presentation Services ( sawserver ) instanceconfig.xml in a ridiculously long directory like

/u01/obidev/mw/instances/instance1/config/OracleBIPresentationServicesComponent/coreapplication_obips1

and changing UpgradeAndExit to TRUE, bouncing sawserver/Presentation Services, then changing the flag back to FALSE and starting it again.

....   Check all errors in the log file.

If you do not do this, the symptom is that a lot of Dashboards and Analyses do not appear in the system, and it can be very difficult to figure out what is going on.

Regards

Chris.

Restarting OBIEE Quickly.

This is a fast and dirty method, used to restart the BI Server ( nqsserver ).

Relies on the fact that OPMN will start it up again immediately.


Effectively : get the nqsserver pid and feed it to kill -9.


Much faster than doing it through the console.

OS : Linux x86-64 RHEL5


ps -ef|grep obidev|grep nqsserver|grep -v grep|awk '{ print $2}'|xargs kill -9


Replace obidev and nqsserver as appropriate with the owner name and process name - e.g. if bouncing Presentation Services instead, as shown below.
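
For example, to bounce Presentation Services ( sawserver ) instead:

ps -ef|grep obidev|grep sawserver|grep -v grep|awk '{ print $2}'|xargs kill -9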


Monday, February 4, 2013

Connecting MySQL to OBIEE



Tested on OBIEE 11.1.1.6BP6

Similar problem to connecting SQL Server to OBIEE

Assumption : MySQL Community Edition ( i.e. FREE edition )
Assumption : Server OS is Redhat 5 64-bit

How do we connect our client ?

Straightforward. Download latest ODBC driver, set up DSN,  ( remember to give it a meaningful name ),  create connection pool, import metadata, define business and presentation layers.


How do we connect our server ?

1. Problem set :   Data Direct Drivers are shipped to connect OBIEE to MySQL.  Unfortunately these do not work as you get a message "[nQSError: 16001] ODBC error state: S1000 code: 0 message: [DataDirect][ODBC MySQL Wire Protocol driver]Connections to MySQL Community Servers are not supported.  Please contact MySQL to obtain a MySQL Enterprise or Commercial version.."

Which I think is appalling but there you are.

How to solve this ?

Use MySQL native ODBC instead of DataDirect ODBC.

You can get these from

http://dev.mysql.com/downloads/connector/odbc/#downloads

This is an rpm download and install; you need root to do this, so go get the sysadmin to do all that. No reboot and no outage is needed. DO NOT DOWNLOAD THE ANSI DRIVER as it does not work; you want the one called mysql-connector-odbc-5.2.3-1.rhel5.x86_64.rpm.

If it says ANSI it's wrong and you'll get lots of Unicode errors.

2. Problem set 2 : what entries go in odbc.ini ?

AND where, you may ask, is the odbc.ini location set ? ( hint - look at opmn.xml )
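
A quick way to find it ( the opmn.xml path below is the standard 11g instance layout; adjust for your install ):

grep -i odbcini $ORACLE_INSTANCE/config/OPMN/opmn/opmn.xml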

Change host, DSN, uid, pwd, and dbname as needed below.


[LIVE_MYSQL]
Driver       = /usr/lib64/libmyodbc5w.so
Description  = Connector/ODBC 3.51 Driver DSN
SERVER       = svr.myhost.com
PORT         = 3306
USER         = username
Password     = password
Database     = dbname
OPTION       = 3

On its own this doesn't work - it causes nqsserver to crash repeatedly - until the library path is fixed, as in the next step.

3. Solution : add /usr/lib64 to opmn.xml.
For every mention of LIBPATH and PATH, add /usr/lib64 to the beginning.
e.g.
"LD_LIBRARY_PATH" value="/usr/lib64:$ORACLE_HOME/common/ODBC/Merant/6.0SP1/lib$:$ORACLE_HOME/bifoundation/server/bin$:$ORACLE_HOME/bifoundation/web/bin$:$ORACLE_HOME/clients/epm/Essbase/EssbaseRTC/bin$:$ORACLE_HOME/bifoundation/odbc/lib$:$ORACLE_INSTANCE$:$ORACLE_HOME/lib"

Remember there are 8 mentions : 4 LIBPATHs and 4 PATHs.
Restart the whole stack and then, voila, it all works: MySQL data sources appear in Analytics.

Happy days. 

Regards

Chris. 


Monday, January 21, 2013

Oracle Recovery Advisor.

Quite frequently with RMAN you must enter a fairly abstruse set of commands to perform a given action.  RMAN's syntax is not easy to remember, and in a stressful time like having to perform a recovery it can be difficult to remember the correct sequence of commands.

Step forward the Recovery Advisor, which can make this job very easy, of diagnosing a problem and then performing the correct action.

Let's assume someone has by accident removed users01.dbf and we try and start the database.


Let's further assume someone has shutdown the database abort;

( mumble mumble some one reminded me that this only works if you're probably  archivelog mode and you have a proper backup. )
[oracle@oel6u3m1 ~]$ rman target /

Recovery Manager: Release 11.2.0.3.0 - Production on Tue Jan 22 00:42:28 2013

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database (not started)

RMAN> startup

Oracle instance started
database mounted
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of startup command at 01/22/2013 00:42:39
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: '/home/oracle/oradata/orcl/users01.dbf'

RMAN>

As expected, of course. 

Normally here we would go through some sort of custom restore/recover - but is there a less specific fix ?
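
( For comparison, the classic manual route would be something along these lines - which, as we will see, is exactly what the advisor ends up generating anyway: )

RMAN> restore datafile 4;
RMAN> recover datafile 4;
RMAN> sql 'alter database datafile 4 online';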



RMAN> list failure;

List of Database Failures
=========================

Failure ID Priority Status    Time Detected Summary
---------- -------- --------- ------------- -------
262        HIGH     OPEN      22-JAN-13     One or more non-system datafiles are missing

RMAN>


This is the data recovery advisor.  It's correctly figured out that a file is missing. 

Does it know how to fix the problem ? 

RMAN> advise failure
2> ;

List of Database Failures
=========================

Failure ID Priority Status    Time Detected Summary
---------- -------- --------- ------------- -------
262        HIGH     OPEN      22-JAN-13     One or more non-system datafiles are missing

analyzing automatic repair options; this may take some time
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=134 device type=DISK
analyzing automatic repair options complete

Mandatory Manual Actions
========================
no manual actions available

Optional Manual Actions
=======================
1. If file /home/oracle/oradata/orcl/users01.dbf was unintentionally renamed or moved, restore it

Automated Repair Options
========================
Option Repair Description
------ ------------------
1      Restore and recover datafile 4
  Strategy: The repair includes complete media recovery with no data loss
  Repair script: /home/oracle/app/oracle/diag/rdbms/orcl/orcl/hm/reco_2553734102.hm

RMAN>


Yes it does ! 


we can run the generated script - OR - we can ask RMAN to fix the problem without prompting ...

RMAN> repair failure noprompt
2> ;

Strategy: The repair includes complete media recovery with no data loss
Repair script: /home/oracle/app/oracle/diag/rdbms/orcl/orcl/hm/reco_2553734102.hm

contents of repair script:
   # restore and recover datafile
   restore datafile 4;
   recover datafile 4;
   sql 'alter database datafile 4 online';
executing repair script

Starting restore at 22-JAN-13
using channel ORA_DISK_1

channel ORA_DISK_1: restoring datafile 00004
input datafile copy RECID=12 STAMP=805336584 file name=/home/oracle/app/oracle/fast_recovery_area/ORCL/datafile/o1_mf_users_8hvq94p6_.dbf
destination for restore of datafile 00004: /home/oracle/oradata/orcl/users01.dbf
channel ORA_DISK_1: copied datafile copy of datafile 00004
output file name=/home/oracle/oradata/orcl/users01.dbf RECID=0 STAMP=0
Finished restore at 22-JAN-13

Starting recover at 22-JAN-13
using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00004: /home/oracle/oradata/orcl/users01.dbf
channel ORA_DISK_1: reading from backup piece /home/oracle/app/oracle/fast_recovery_area/ORCL/backupset/2013_01_22/o1_mf_nnnd1_INCR_UPDATE_8hvqr9yj_.bkp
channel ORA_DISK_1: piece handle=/home/oracle/app/oracle/fast_recovery_area/ORCL/backupset/2013_01_22/o1_mf_nnnd1_INCR_UPDATE_8hvqr9yj_.bkp tag=INCR_UPDATE
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01

starting media recovery
media recovery complete, elapsed time: 00:00:00

Finished recover at 22-JAN-13

sql statement: alter database datafile 4 online
repair failure complete
database opened

RMAN>


In a lot of cases this could be faster than going through tedious steps to find the problem; remember, the goal is uptime.

Remember: the Data Recovery Advisor may help you out.

Regards

Chris. 

Thursday, January 17, 2013

Cloning an Oracle RDBMS Home



In this case I want to clone just an Oracle 11.2.0.3 Home back to the UAT machine, and then I can run databases against it.

we are following

http://docs.oracle.com/cd/E11882_01/install.112/e24321/app_cloning.htm#sthref782


Step 1.  Zip/Tar up the existing home and transfer and unzip it to the destination location.

In this case the destination is

/u11/app/oracle11t/product/11.2.0.3/dbhome_1
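
A hedged sketch of step 1, where the source path is hypothetical:

# On the source machine
tar czf /tmp/dbhome_1.tgz -C /u01/app/oracle11t/product/11.2.0.3 dbhome_1

# Transfer the tarball, then on the destination
mkdir -p /u11/app/oracle11t/product/11.2.0.3
tar xzf /tmp/dbhome_1.tgz -C /u11/app/oracle11t/product/11.2.0.3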


So this is our starting point. If we wanted to, we could start databases now, but the clone is not finished: the inventory is not done properly, and there are a load of other steps.

Remove all the .ora files in $OH/network/admin on the cloned home.


Let's establish some Parameters ( needed before proceeding )

ORACLE_BASE=/u11/app/oracle11t
ORACLE_HOME=/u11/app/oracle11t/product/11.2.0.3/dbhome_1/
OSDBA_GROUP=dba
OSOPER_GROUP= 
ORACLE_HOME_NAME="CLONED11203"


[oracle11t@vostok bin]$ perl clone.pl ORACLE_HOME=/u11/app/oracle11t/product/11.2.0.3/dbhome_1/ ORACLE_BASE=/u11/app/oracle11t OSDBA_GROUP=dba ORACLE_HOME_NAME="CLONED11203"
./runInstaller -clone -waitForCompletion  "ORACLE_HOME=/u11/app/oracle11t/product/11.2.0.3/dbhome_1/" "ORACLE_BASE=/u11/app/oracle11t" "oracle_install_OSDBA=dba" "ORACLE_HOME_NAME=CLONED11203" -silent -noConfig -nowait
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 39687 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2013-01-17_02-52-40PM. Please wait ...Oracle Universal Installer, Version 11.2.0.2.0 Production
Copyright (C) 1999, 2010, Oracle. All rights reserved.

You can find the log of this install session at:
 /oracle/oraInventory/logs/cloneActions2013-01-17_02-52-40PM.log
.................................................................................................... 100% Done.



Installation in progress (Thursday, January 17, 2013 2:52:53 PM GMT)
..............................................................................                                                  78% Done.
Install successful

Linking in progress (Thursday, January 17, 2013 2:52:59 PM GMT)
Link successful

Setup in progress (Thursday, January 17, 2013 2:53:55 PM GMT)
Setup successful

End of install phases.(Thursday, January 17, 2013 2:53:59 PM GMT)
WARNING:
The following configuration scripts need to be executed as the "root" user.
/u11/app/oracle11t/product/11.2.0.3/dbhome_1//root.sh
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts

The cloning of CLONED11203 was successful.
Please check '/oracle/oraInventory/logs/cloneActions2013-01-17_02-52-40PM.log' for more details.

dbhome_1]# ./root.sh
Check /u11/app/oracle11t/product/11.2.0.3/dbhome_1//install/root__2013-01-17_14-55-11.log for the output of root script


And that's all there is; the Oracle Home should now be usable, registered, and patched. 

Note there is a whole other set of work to clone the database itself, but it is not that difficult with the current tools.

Regards

Chris. 

How to determine open TCP/IP files/sockets of a Linux/Unix Process

Using lsof, a standard Linux tool.

Let's assume we want to see the open TCPV4 sockets ( normal, remote connections ) for a linux process. 
We assume we know the PID. 

Relevant lsof options : 

-i4  ( displays TCPV4 sockets ) 
-p  ( displays files for a particular process ) 
-a  ( uses AND to filter the results. Be careful. If you do not use this then lsof  uses OR in the filters and you will get the wrong answer ) 

OK, let's try it out and then refine.

let's look at the CUPS Daemon for something nice and easy 


[root]# ps -ef|grep cups
root      5160     1  0  2012 ?        00:00:46 cupsd

OK, so now we have something to look at : pid 5160.

[root]# lsof -a -i4 -p 5160
COMMAND  PID USER   FD   TYPE DEVICE SIZE NODE NAME
cupsd   5160 root    4u  IPv4  14303       TCP localhost.localdomain:ipp (LISTEN)
cupsd   5160 root    7u  IPv4  14307       UDP *:ipp


OK so we are seeing hostnames and service names. 

We use -n to convert hostnames to IP addresses


[root]# lsof -a -n -i4 -p 5160
COMMAND  PID USER   FD   TYPE DEVICE SIZE NODE NAME
cupsd   5160 root    4u  IPv4  14303       TCP 127.0.0.1:ipp (LISTEN)
cupsd   5160 root    7u  IPv4  14307       UDP *:ipp

And we use -P to convert network service names to numbers, which for most Oracle applications is much more relevant.


[root]# lsof -a -n -P -i4 -p 5160
COMMAND  PID USER   FD   TYPE DEVICE SIZE NODE NAME
cupsd   5160 root    4u  IPv4  14303       TCP 127.0.0.1:631 (LISTEN)
cupsd   5160 root    7u  IPv4  14307       UDP *:631

So we can see that CUPS is listening locally on port 631. 

Regards, 
Chris Slattery. 





Wednesday, January 16, 2013

Item invalidation in the Oracle result set cache ( Part 1 )




SQL> select id,type,status,namespace,name from v$result_cache_objects;

        3 Dependency Published       SH.CUSTOMERS
       110 Result     Published SQL   select cust_first_name,cust_last_name from customers where cust_id=104489
       109 Result     Published SQL   select cust_first_name,cust_last_name from customers where cust_id=103379
       108 Result     Published SQL   SELECT USERENV('SESSIONID') FROM DUAL
         2 Result     Published SQL   SELECT USERENV('SESSIONID') FROM DUAL
         1 Result     Published SQL   SELECT DECODE('A','A','1','2') FROM DUAL
         0 Result     Published SQL   SELECT USER FROM DUAL
         4 Result     Invalid   SQL   select cust_id from customers




OK all good so what happens if we update the customers table ? 


SQL> select cust_first_name,cust_last_name from customers where cust_id=104489;
Branden              Kennedy

SQL> update customers set cust_first_name='Brandon' where cust_id=104489;

1 row updated.

OK so now what has happened to the cache ? 

SQL> select id,type,status,namespace,name from v$result_cache_objects;
         3 Dependency Published       SH.CUSTOMERS
       110 Result     Published SQL   select cust_first_name,cust_last_name from customers where cust_id=104489
       109 Result     Published SQL   select cust_first_name,cust_last_name from customers where cust_id=103379
       108 Result     Published SQL   SELECT USERENV('SESSIONID') FROM DUAL
         2 Result     Published SQL   SELECT USERENV('SESSIONID') FROM DUAL
         1 Result     Published SQL   SELECT DECODE('A','A','1','2') FROM DUAL
         0 Result     Published SQL   SELECT USER FROM DUAL
         4 Result     Invalid   SQL   select cust_id from customers

STILL SHOWING AS CACHED !

Why ? 

Let's COMMIT ! 

SQL> commit;

Commit complete.

SQL> select id,type,status,namespace,name from v$result_cache_objects;
         3 Dependency Published       SH.CUSTOMERS
       108 Result     Published SQL   SELECT USERENV('SESSIONID') FROM DUAL
         2 Result     Published SQL   SELECT USERENV('SESSIONID') FROM DUAL
         1 Result     Published SQL   SELECT DECODE('A','A','1','2') FROM DUAL
         0 Result     Published SQL   SELECT USER FROM DUAL
         4 Result     Invalid   SQL   select cust_id from customers
       109 Result     Invalid   SQL   select cust_first_name,cust_last_name from customers where cust_id=103379
       110 Result     Invalid   SQL   select cust_first_name,cust_last_name from customers where cust_id=104489

8 rows selected.


So now committing has caused the dependency on CUSTOMERS to invalidate those cached items.

Now if we reissue the SQL we can see 


SQL> select cust_first_name,cust_last_name from customers where cust_id=104489;
Brandon              Kennedy


Execution Plan
----------------------------------------------------------
Plan hash value: 4238351645

-----------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                            |     1 |    20 |     2   (0)| 00:00:01 |
|   1 |  RESULT CACHE                | 39tgvk9kh2k8y33jug4hnv7mr5 |       |       |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| CUSTOMERS                  |     1 |    20 |     2   (0)| 00:00:01 |
|*  3 |    INDEX UNIQUE SCAN         | CUSTOMERS_PK               |     1 |       |     1   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("CUST_ID"=104489)

Result Cache Information (identified by operation id):
------------------------------------------------------

   1 - column-count=2; dependencies=(SH.CUSTOMERS); attributes=(single-row); name="select cust_first_name,cust_last_name from customers where cust_id=104489"


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          3  consistent gets
          0  physical reads
          0  redo size
        622  bytes sent via SQL*Net to client
        524  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
SQL> select cust_first_name,cust_last_name from customers where cust_id=104489;
Brandon              Kennedy


Execution Plan
----------------------------------------------------------
Plan hash value: 4238351645

-----------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                            |     1 |    20 |     2   (0)| 00:00:01 |
|   1 |  RESULT CACHE                | 39tgvk9kh2k8y33jug4hnv7mr5 |       |       |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| CUSTOMERS                  |     1 |    20 |     2   (0)| 00:00:01 |
|*  3 |    INDEX UNIQUE SCAN         | CUSTOMERS_PK               |     1 |       |     1   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("CUST_ID"=104489)

Result Cache Information (identified by operation id):
------------------------------------------------------

   1 - column-count=2; dependencies=(SH.CUSTOMERS); attributes=(single-row); name="select cust_first_name,cust_last_name from customers where cust_id=104489"


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          3  consistent gets
          0  physical reads
          0  redo size
        622  bytes sent via SQL*Net to client
        524  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

We have, as expected, the 3 consistent gets, so it is not using the invalidated cache entry; it has to go back to the buffer cache to rebuild the result.

If we reissue  

SQL> /
Brandon              Kennedy


Execution Plan
----------------------------------------------------------
Plan hash value: 4238351645

-----------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                            |     1 |    20 |     2   (0)| 00:00:01 |
|   1 |  RESULT CACHE                | 39tgvk9kh2k8y33jug4hnv7mr5 |       |       |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| CUSTOMERS                  |     1 |    20 |     2   (0)| 00:00:01 |
|*  3 |    INDEX UNIQUE SCAN         | CUSTOMERS_PK               |     1 |       |     1   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("CUST_ID"=104489)

Result Cache Information (identified by operation id):
------------------------------------------------------

   1 - column-count=2; dependencies=(SH.CUSTOMERS); attributes=(single-row); name="select cust_first_name,cust_last_name from customers where cust_id=104489"


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          0  consistent gets
          0  physical reads
          0  redo size
        622  bytes sent via SQL*Net to client
        524  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed


So now it's caching again. 

Let's look at the result cache info again 



SQL> select id,type,status,namespace,name from v$result_cache_objects
  2  ;
         3 Dependency Published       SH.CUSTOMERS
       112 Result     Published SQL   select cust_first_name,cust_last_name from customers where cust_id=104489
       111 Result     Published SQL   SELECT USERENV('SESSIONID') FROM DUAL
       108 Result     Published SQL   SELECT USERENV('SESSIONID') FROM DUAL
         2 Result     Published SQL   SELECT USERENV('SESSIONID') FROM DUAL
         1 Result     Published SQL   SELECT DECODE('A','A','1','2') FROM DUAL
         0 Result     Published SQL   SELECT USER FROM DUAL
         4 Result     Invalid   SQL   select cust_id from customers
       109 Result     Invalid   SQL   select cust_first_name,cust_last_name from customers where cust_id=103379
       110 Result     Invalid   SQL   select cust_first_name,cust_last_name from customers where cust_id=104489

10 rows selected.

A new cache entry ! Note there are now TWO entries for the cust_id=104489 lookup: one is Invalid and the other is Published ( available for use ! ).

So be very careful about selecting from this table and using the information. 

This brings me to the last item in this post... dependency checks 

 V$RESULT_CACHE_DEPENDENCY will tell us what object dependencies are in the cache. 


SQL> select * from V$RESULT_CACHE_DEPENDENCY;
       112          3      76289


That's not very useful 

let's enhance a little 


select d.result_id,o.owner,o.object_name,o.object_type from V$RESULT_CACHE_DEPENDENCY D, dba_objects o where d.object_no=o.object_id
/


SQL> /

 RESULT_ID OWNER                          OBJECT_NAME                    OBJECT_TYPE
---------- ------------------------------ ------------------------------ -------------------
       112 SH                             CUSTOMERS                      TABLE


That's a little better... so we can see that our cache entry 112 from above depends on the CUSTOMERS table. Which explains why, if we commit a change to the customers table, the cached results become invalid.

That's enough for now.... More soon 



What other things can we look at in the result set cache ?



Well what items are in the result set cache ?

All docs are at

http://docs.oracle.com/cd/E11882_01/server.112/e25513/dynviews_2141.htm#sthref3469


select id,type,status,namespace,name from v$result_cache_objects


        ID TYPE       STATUS    NAMES NAME
---------- ---------- --------- ----- --------------------------------------------------------------------------------------------------------------------------------
         3 Dependency Published       SH.CUSTOMERS
       110 Result     Published SQL   select cust_first_name,cust_last_name from customers where cust_id=104489
       109 Result     Published SQL   select cust_first_name,cust_last_name from customers where cust_id=103379
       108 Result     Published SQL   SELECT USERENV('SESSIONID') FROM DUAL
         2 Result     Published SQL   SELECT USERENV('SESSIONID') FROM DUAL
         1 Result     Published SQL   SELECT DECODE('A','A','1','2') FROM DUAL
         0 Result     Published SQL   SELECT USER FROM DUAL
         4 Result     Invalid   SQL   select cust_id from customers


So we can see some information. We can also change that slightly to see how many times it has been used 

  1* select id,type,pin_count,scan_count,block_count from v$result_cache_objects
SQL> /

        ID TYPE        PIN_COUNT SCAN_COUNT BLOCK_COUNT
---------- ---------- ---------- ---------- -----------
         3 Dependency          0          0           1
       110 Result              0          0           1
       109 Result              0          1           1
       108 Result              0          0           1
         2 Result              0          0           1
         1 Result              0          3           1
         0 Result              0          1           1
         4 Result              0          0         104

8 rows selected.

And this lets us look in a little more detail at the usage of the cache.


What if we want more summary information ? 

select * from V$RESULT_CACHE_STATISTICS
/

  1* select * from V$RESULT_CACHE_STATISTICS
SQL> /
         1 Block Size (Bytes)             1024
         2 Block Count Maximum            2080
         3 Block Count Current            128
         4 Result Size Maximum (Blocks)   104
         5 Create Count Success           7
         6 Create Count Failure           0
         7 Find Count                     5
         8 Invalidation Count             0
         9 Delete Count Invalid           0
        10 Delete Count Valid             0
        11 Hash Chain Length              1
        12 Find Copy Count                5

12 rows selected.


OK All great. 
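
One related tool worth a mention ( not shown above ) is the DBMS_RESULT_CACHE package, which can report on and flush the cache:

SQL> set serveroutput on
SQL> exec dbms_result_cache.memory_report
SQL> exec dbms_result_cache.flush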



Wednesday, January 9, 2013

Looking at Result Set Cache.

Based on our previous post we can now look at execution plans.
( we are using stock SH.CUSTOMERS table )


( Well, actually, I've found out there's a bug in DBMS_XPLAN.DISPLAY_CURSOR: it doesn't display result set caching ! Back to Autotrace; hopefully that'll be OK. )



Annnyway


set long 32000
set linesize 32000
set pagesize 0
 set autotrace on
alter system set result_cache_mode=manual;
select cust_first_name,cust_last_name from customers where cust_id=103379;
set autotrace off
-- select cust_first_name,cust_last_name from customers where cust_id=103379;
-- select * from table(dbms_xplan.display_cursor(format=>'ADVANCED'));
quit;

System altered.

Alana                Yoon


Execution Plan
----------------------------------------------------------
Plan hash value: 4238351645

--------------------------------------------------------------------------------------------
| Id  | Operation                   | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |              |     1 |    20 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| CUSTOMERS    |     1 |    20 |     2   (0)| 00:00:01 |
|*  2 |   INDEX UNIQUE SCAN         | CUSTOMERS_PK |     1 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("CUST_ID"=103379)


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          3  consistent gets
          0  physical reads
          0  redo size
        617  bytes sent via SQL*Net to client
        523  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

[oracle@oel6u3m1 xplain]$

So our nice little plan costs us 3 gets. If we were doing this a bazillion times we would have to probe the buffer cache for the blocks every time. Why not just cache the results of the query ?


set long 32000
set linesize 32000
set pagesize 0
 set autotrace on
alter system set result_cache_mode=force;
select cust_first_name,cust_last_name from customers where cust_id=103379;
set autotrace off
quit;

System altered.

Alana                Yoon


Execution Plan
----------------------------------------------------------
Plan hash value: 4238351645

-----------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                            |     1 |    20 |     2   (0)| 00:00:01 |
|   1 |  RESULT CACHE                | 3d41f60nry3b51vp54prrk2m38 |       |       |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| CUSTOMERS                  |     1 |    20 |     2   (0)| 00:00:01 |
|*  3 |    INDEX UNIQUE SCAN         | CUSTOMERS_PK               |     1 |       |     1   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("CUST_ID"=103379)

Result Cache Information (identified by operation id):
------------------------------------------------------

   1 - column-count=2; dependencies=(SH.CUSTOMERS); attributes=(single-row); name="select cust_first_name,cust_last_name from customers where cust_id=103379"



Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
          0  consistent gets
          0  physical reads
          0  redo size
        617  bytes sent via SQL*Net to client
        523  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed



... If we look we can see that as soon as we turn caching on for results, the consistent gets go to zero !

so instead of a bazillion hits we have zero.

Now again this is still only server side caching, not front-end... let's look at that in a few blog posts.

Relevant Parameters are


result_cache_max_result              integer     5
result_cache_max_size                big integer 2080K
result_cache_mode                    string      FORCE
result_cache_remote_expiration       integer     0




Check into these parameters; each should be considered carefully.
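
One last note: with result_cache_mode back at MANUAL, you can still opt individual statements in via the RESULT_CACHE hint, for example:

alter system set result_cache_mode=manual;
select /*+ RESULT_CACHE */ cust_first_name,cust_last_name from customers where cust_id=103379;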