Tom Kyte

These are the most recently asked questions on Ask Tom

Nested cursor loops and conditional insert and update statements - performance issues

Tue, 2024-11-12 22:06
Hi Tom, Hope you are doing well! I have two cursor loops: cursor C1, and cursor C2 (parameterised on C1's results). Below is pseudocode depicting the situation: <code>
declare
  cursor c1 is
    select distinct a, b, c from table1;        -- fetches 18K+ records
  cursor c2(p_a, p_b, p_c) is
    select * from table1
     where a = p_a and b = p_b and c = p_c;     -- fetches 20K+ records
begin
  for r1 in c1 loop
    -- minor select queries and a few validations ...
    for r2 in c2(r1.a, r1.b, r1.c) loop
      -- actual validations
      if insert_flag = 'Y' then
        insert into table1 ...;
      else
        update table1 ...;
      end if;
    end loop;
  end loop;
end;
</code> I am unable to use BULK COLLECT and FORALL as I have to insert or update conditionally. The performance impact is too high. Question: how can I achieve the above with better performance? Kindly help. Best Regards, Srabanee
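A conditional insert-or-update like this can usually be collapsed into a single set-based MERGE, which removes both loops (and the row-by-row overhead) entirely. Below is a minimal sketch of the idea only; src_query, some_value and the (a, b, c) match key are placeholders standing in for whatever cursors C1/C2 actually read, not the asker's real logic: <code>
-- Sketch: one MERGE replaces the nested loops; both branches run in one pass.
merge into table1 t
using (select a, b, c, some_value from src_query) s
on (t.a = s.a and t.b = s.b and t.c = s.c)
when matched then
  update set t.some_value = s.some_value      -- the "ELSE UPDATE" branch
when not matched then
  insert (a, b, c, some_value)                -- the "insert flag = 'Y'" branch
  values (s.a, s.b, s.c, s.some_value);
</code> If the per-row validations genuinely cannot be expressed in SQL, the other common pattern is to BULK COLLECT into a collection, validate in PL/SQL, and then run a single FORALL over a MERGE instead of separate row-by-row INSERTs and UPDATEs.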
Categories: DBA Blogs

Prepared statement vs regular statement in JDBC

Tue, 2024-11-12 22:06
Tom: I'm not sure whether to use a PreparedStatement or just the regular Statement if a query is going to be executed only once before the statement is closed. I have read through some documentation on this topic and am still not exactly clear on which way is better for the following scenario. I have an EJB (stateless session bean) that needs to retrieve a record from the database and return an Object (created using the resultset getXXX methods). If I use a PreparedStatement, connection.prepareStatement(sql) would have to dip to the database to see if a cached statement is available, or precompile this 'sql' if not. And another dip to the database to execute this prepared statement. Once I have constructed my Object using the resultset, both the resultset and the prepared statement are closed (the connection is returned to the pool). The same steps happen for every request to the EJB (prepareStatement(), execute(), close()). Is there any benefit in using a PreparedStatement for this scenario? In contrast, if I used the regular Statement, my guess would be that connection.createStatement() just returns a new object reference (instead of the database dip in the case of the PreparedStatement). My execute will dip to the database. The only problem I see with this approach is that the SQL statement has to be parsed every time before execution (or will this be cached?). Are there any guidelines on when to use PreparedStatement vs Statement? Thanks, Ram.
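One way to settle the caching question empirically is to tag the statement with a distinctive comment and watch V$SQL from the database side: EXECUTIONS climbing while PARSE_CALLS stays low means the cursor (and its plan) is being reused; PARSE_CALLS roughly equal to EXECUTIONS means the statement is re-parsed on every call. A sketch; the /* my_tag */ marker is just a made-up label, not part of any API: <code>
select sql_id, parse_calls, executions, loads
  from v$sql
 where sql_text like 'select /* my_tag */%';
</code>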
Categories: DBA Blogs

Complicated view does not use /*+ result_cache */

Mon, 2024-11-11 09:26
Hi AskTom Team, I do not know why my <b>view</b> is not using the <b>/*+ result_cache */</b> hint. How can I look for the reason? <code>
select /*+ result_cache */ *
  from (
    WITH pttc as (
      SELECT pttc.pttc_code        AS term_code,
             pttc.pttc_cts_value   AS term_cts_value,
             pttc.pttc_create_date AS term_create_date,
             pttc.pttc_create_user AS term_create_user,
             pttc.pttc_desc        AS term_description,
             pttc.pttc_id          AS term_id,
             pttc.pttc_modify_date AS term_modify_date,
             pttc.pttc_modify_user AS term_modify_user,
             pttc.pttc_pdtc_id     AS term_pdtc_id,
             pttc.pttc_prftc_id    AS term_prftc_id,
             pttc.pttc_sort        AS term_sort,
             pttc.pttc_term_id     AS term_term_id,
             pttc.pttc_valid_from  AS term_valid_from,
             pttc.pttc_valid_to    AS term_valid_to,
             pttc.pttc_wfttc_id    AS term_wfttc_id,
             proc.proc_create_date AS create_date,
             proc.proc_create_user AS create_user,
             proc.proc_desc        AS description,
             proc.proc_id          AS id,
             lngc.lngc_iso_639_1   AS lngc_code,
             lngc.lngc_id,
             proc.proc_modify_date AS modify_date,
             proc.proc_modify_user AS modify_user,
             proc.proc_name        AS name,
             proc.proc_revision_number AS rev,
             proc.proc_sort        AS sort,
             ccsc.ccsc_code        AS state_code,
             ccsc.ccsc_id          AS state_id,
             ccsc.ccsc_name        AS state_name,
             proc.proc_valid_from  AS valid_from,
             proc.proc_valid_to    AS valid_to,
             PROC_EXCHANGE_DATE    AS exchange_date,
             PROC_version          AS version
        FROM cat_proc_type_term pttc
             INNER JOIN cat_proc_type proc ON proc.proc_pttc_id = pttc.pttc_id
             INNER JOIN ctl_language lngc ON proc.proc_lngc_id = lngc.lngc_id
             INNER JOIN cat_ctl_cat_state ccsc ON proc.proc_ccsc_id = ccsc.ccsc_id
             AND TRUNC (SYSDATE) BETWEEN pttc.pttc_valid_from
                 AND NVL (pttc.pttc_valid_to, TO_DATE ('31-12-9999', 'dd-mm-yyyy'))
             AND TRUNC (SYSDATE) BETWEEN proc.proc_valid_from
                 AND NVL (proc.proc_valid_to, TO_DATE ('31-12-9999', 'dd-mm-yyyy'))
             AND ccsc.ccsc_c...
</code>
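Two quick diagnostics usually answer this. First, the query calls TRUNC(SYSDATE), and SYSDATE is on the documented list of constructs that make a result ineligible for the server result cache, so that alone may be the reason. Second, the cache can be inspected directly; a sketch of the usual checks, assuming access to the V$ views and DBMS_RESULT_CACHE: <code>
-- Is the result cache enabled and sized?
select name, value from v$parameter where name like 'result_cache%';

-- Did the statement ever make it into the cache, and with what status?
select type, status, name
  from v$result_cache_objects
 where name like '%result_cache%';

-- Summary report of result cache memory:
set serveroutput on
exec dbms_result_cache.memory_report
</code>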
Categories: DBA Blogs

Logical I/O issues

Mon, 2024-11-11 09:26
Hi Team, We are doing some testing on heavy loads, but we are experiencing strange behavior. We have a process that is doing a lot of updates and inserts, primarily on 2 tables, but some of the aggregate queries have a tendency to turn slow (from being very fast), doing far more logical I/O than seems normal. During testing we are running on quite homogenous data, so we should not see much difference between runs. I have also tested the scripts on similar data and never gotten slow runs. All code is run from PL/SQL here. During the test, we run 5 sessions that each work with one entity, which will have from 2,000 to 400,000 transactions in the run (always the same number for each entity in each run, although this will not be the case when we come to production). For 2,000, 10,000, 20,000 and 40,000 per entity (10,000, 50,000, 100,000 and 200,000 total runs), we have no problems. Most of the select and update scripts run within seconds (even less than a second). When we enter a 500,000 run (100,000 per entity), we start seeing problems. The first 2-3 times the scripts run, they take seconds; the next runs take minutes (for this volume, up to 12-18 minutes). Logical I/O rises from almost 0 to 60,000,000. When we enter a 1,000,000 run (200,000 per entity), things get even worse. Then almost all of the runs take more than 20 minutes, and more often than not some of the runs never end properly, as we have a 1-hour timeout. Here is an example code: <code>
WITH source AS (
  SELECT MAX(lpad(t2.column6, 7, '0') || '0000') AS aggregate_reference,
         t1.column1, t1.column2, t1.column3, t1.column4, t1.column5
    FROM table1 t1
         INNER JOIN table2 t2
            ON  t1.column4 = t2.column4
            AND t1.initiatorbankid = t2.initiatorbankid
            AND t1.column3 = t2.column3
            AND t1.column1 = t2.column1
            AND t1.column2 = t2.column2
            AND t1.column5 = t2.column5
   WHERE t1.column1 <> 'T'
     AND t1.column7 = '2544'
     AND t1.column4 = 2081
   GROUP BY t1.column3, t1.column1, t1.column2, t1.column4, t1.column5
)
SELECT appdata.type_for_reference(pk_column, aggregate_reference)
  BULK COLLECT INTO v_ref
  FROM table1 tx
       INNER JOIN source
          ON  tx.column4 = source.column4
          AND tx.column3 = source.column3
          AND tx.column1 = source.column1
          AND tx.column2 = source.column2
          AND tx.column5 = source.column5;
</code> In a 500,000 run, table1 has 500,000 (100,000 per entity) records and table2 has 200,000 (40,000 per entity) records. The WITH clause would return about 50,000 records (10,000 per entity) and the total statement ab...
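When the same statement flips from seconds to minutes mid-run, the most useful artifact is the row-source statistics of one slow execution, which show exactly which plan step the ~60,000,000 logical reads come from (a plan change once data volume grows, or statistics going stale during the load, are the usual suspects). A sketch, assuming STATISTICS_LEVEL=ALL or a /*+ gather_plan_statistics */ hint on the statement: <code>
-- Right after a slow execution, in the same session:
select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));

-- Or locate the cursor and compare plans/executions:
select sql_id, child_number, plan_hash_value, executions, buffer_gets,
       round(buffer_gets / nullif(executions, 0)) as gets_per_exec
  from v$sql
 where sql_text like 'WITH source AS%';
</code>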
Categories: DBA Blogs

UCP settings

Mon, 2024-11-11 09:26
I'd like to switch from Apache Tomcat datasources to UCP. With Tomcat datasources we use: <code>
Datasource.Initsql=ALTER SESSION SET CURRENT_SCHEMA=XXXXXXX
Datasource.tomcat.jdbcInterceptors=ConnectionPoolLogger(MODULE=XXXX, ACTION=YYYY)
</code> Are there ways to do the same with UCP? Something like: <code>
PoolDataSource.setModule('XXXX')
PoolDataSource.setAction('YYYY')
PoolDataSource.initSQL('ALTER SESSION SET CURRENT_SCHEMA=XXXXXXX')
</code> We are using Oracle Database 19c Enterprise Edition Release 19.0.0.0.0
Categories: DBA Blogs

Pro*C: FOR clause in SELECT statements is no longer an error

Mon, 2024-11-11 09:26
Hi Tom! In Pro*C, when working with pointers to dynamic arrays, you can specify the FOR clause to indicate the size of your dynamic array, like: <code>
EXEC SQL BEGIN DECLARE SECTION;
int* ids = (int*)malloc(sizeof(int) * n); // n is some runtime variable
int nrows = n;
EXEC SQL END DECLARE SECTION;

EXEC SQL FOR :nrows insert into some_table (id) values (:ids);
</code> However, the Pro*C documentation says that you can't use the FOR syntax for SELECT statements, because you will get an error: <code>
PCC-E-0056: FOR clause not allowed on SELECT statement at ...
</code> This sentence (and all other sentences stating the same thing at different points in the document) has been in the Pro*C and Oracle Precompiler docs since at least version 8. However, at least in 19c (and I think in 11 as well, but I'm not fully sure), that's no longer the case, and you can use FOR in SELECT statements without problems: <code>
EXEC SQL BEGIN DECLARE SECTION;
int* output = (int*)malloc(sizeof(int) * n); // n is some runtime variable
int nrows = n;
EXEC SQL END DECLARE SECTION;

EXEC SQL FOR :nrows select id into :output from some_table;
</code> It works just fine and does what I intend without issues. However, I wonder: starting from which version was this restriction relaxed? I have two choices here: assume the FOR clause working in SELECTs is actually a "bug" (it shouldn't work), or that it's a feature and the docs are buggy. I'm inclined toward the latter because there are plenty of places where the Pro*C docs are outdated: - The doc claims you need to use CODE=CPP if you want to use C++-style comments (//...). That's false: Pro*C accepts that kind of comment just fine with PARSE=ALL. I guess the claim dates from the times when Pro*C only cared about C89 and C90 (C++-style comments were added to the C standard in C99), even though that's no longer the case. At some point they improved the C99 support but forgot to update the docs. - The release notes of version 12 say a new option called trim_password was added. However, the option is totally missing from the documented list of options all the way up to 23c (but using the option on the command line "compiles", at least in 19c). - It claims that host variables of type `char` and `char[n]` use the same default mapping demanded by `CHAR_MAP`. False: host variables of type `char` are always mapped to the external type `VARCHAR2` (discovered first while trying to solve a bug in my code, and then confirmed by looking at the type code assigned to each host variable in the sqlcud0 array of the generated code). All of this tells me that I have to rely on my own experiments on top of what the doc says, because the docs are, at the very least, only marginally maintained. But even if my experiments say otherwise, I would like to have at least "something", "somewhere" (some Oracle post, some A...
Categories: DBA Blogs

Create Pluggable Database

Fri, 2024-11-01 12:06
How do I create a pluggable database on Oracle Linux? <code>
Enter user-name: sys as sysdba
Enter password:

Connected to:
Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free
Version 23.5.0.24.07

SQL> alter session set container=cdb$root;
Session altered.

SQL> show con_name
CON_NAME
------------------------------
CDB$ROOT

SQL> CREATE PLUGGABLE DATABASE pdb1
     ADMIN USER pdb1admin IDENTIFIED BY welcome
     ROLES = (dba)
     DEFAULT TABLESPACE users
     DATAFILE '/opt/oracle/oradata/FREE/pdb1users01.dbf' SIZE 250M AUTOEXTEND ON
     FILE_NAME_CONVERT = ('/opt/oracle/oradata/FREE/pdbseed/', '/opt/oracle/oradata/FREE/pdb1/');
CREATE PLUGGABLE DATABASE pdb1
*
ERROR at line 1:
ORA-01017: invalid credential or not authorized; logon denied
Help: https://docs.oracle.com/error-help/db/ora-01017/

SQL> show user
USER is "SYS"

SQL> select * from session_privs;
....
PRIVILEGE
----------------------------------------
EXECUTE DYNAMIC MLE
USE ANY JOB RESOURCE
LOGMINING
CREATE ANY CREDENTIAL
CREATE CREDENTIAL
ALTER LOCKDOWN PROFILE
DROP LOCKDOWN PROFILE
CREATE LOCKDOWN PROFILE
SET CONTAINER
CREATE PLUGGABLE DATABASE
INHERIT ANY REMOTE PRIVILEGES
....
</code> Please help me to solve this problem. Thanks :)
Categories: DBA Blogs

Oracle OSON: Denormalization and Design Patterns?

Fri, 2024-11-01 12:06
Greetings, Mongo JSON/BSON provides what seem like two cool improvements: 1.) Denormalization through embedding documents: https://www.mongodb.com/docs/manual/tutorial/model-embedded-one-to-many-relationships-between-documents/ 2.) Design patterns to optimize access to data: https://www.mongodb.com/blog/post/building-with-patterns-a-summary I would think that with OSON, we can denormalize/embed documents by simply creating multiple JSON columns within a single table. Is there a limit to the number of JSON columns per table? Are there any potential performance problems to watch out for? Regarding Mongo design patterns to optimize access to data, does Oracle OSON have its own design patterns? Finally, if design patterns optimize data access, why hasn't Oracle RDBMS implemented its own design patterns? Thanks, John
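For the embedding question, the relational analogue is simply a table with one or more JSON columns alongside scalar columns. A minimal sketch with made-up names, showing an embedded one-to-many (an order embedding its line items) and a relational read back out via JSON_TABLE; this assumes the native JSON data type (21c and later): <code>
-- Hypothetical schema: an order embeds its line items as a JSON array.
create table orders (
  order_id   number primary key,
  customer   varchar2(100),
  line_items json
);

insert into orders values (
  1, 'John',
  '[{"sku":"A1","qty":2},{"sku":"B7","qty":1}]'
);

-- Unnest the embedded documents relationally:
select o.order_id, jt.sku, jt.qty
  from orders o,
       json_table(o.line_items, '$[*]'
         columns (sku varchar2(10) path '$.sku',
                  qty number       path '$.qty')) jt;
</code>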
Categories: DBA Blogs

Passing a table array as a parameter

Fri, 2024-11-01 12:06
I need to pass an array as a parameter to a stored procedure, like this: <code>
--##############################################################################################
-- P A C K A G E   S P E C
--##############################################################################################
CREATE OR REPLACE PACKAGE TEST_PKG IS

  type chef_type is RECORD (
    p_numero_chef              varchar2(3),
    p_type_infraction_chef     varchar2(1),
    p_code_infraction_chef     varchar2(4),
    p_modele_chef              varchar2(7),
    p_nature_chef              varchar2(6),
    p_reglement_chef           varchar2(20),
    p_article_chef             varchar2(20),
    p_type_date_chef           varchar2(1),
    p_date_infraction_chef     varchar2(8),
    p_date_infraction_fin_chef varchar2(8)
  );

  type chef_array is varray(100) of chef_type;

  PROCEDURE TEST_PROC (
    p_var...
    p_nombre_chef in number,
    p_chef        in chef_array,
    p_var...
  );

--##############################################################################################
-- P A C K A G E   B O D Y
--##############################################################################################
CREATE OR REPLACE PACKAGE BODY TEST_PKG IS

  --############################################################################################
  -- (2) S I C O U R   M E T   A   J O U R   S I C M A
  --############################################################################################
  PROCEDURE TEST_PROC (
    p_var...
    p_nombre_chef in number,
    p_chef        IN chef_array,
    p_var...
  )
  IS
    v_var...
  BEGIN
    FOR i IN 1 .. p_nombre_chef LOOP
      v_numero_chef          := p_chef.p_numero_chef(i);
      v_type_infraction_chef := p_chef.p_type_infraction_chef(i);
      ...
      insert ...
    END LOOP;
  END TEST_PROC;
</code> Error message: <code>
[Error] PLS-00302 (382: 30): PLS-00302: The composant 'P_NUMERO_CHEF' must be declared
[Error] PLS-00302 (387: 39): PLS-00302: The composant 'P_TYPE_INFRACTION_CHEF' must be declared
</code>
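The PLS-00302 here comes from the indexing order: with a collection of records, the subscript goes on the collection, not on the record field. A minimal sketch of the corrected loop body under the declarations above: <code>
FOR i IN 1 .. p_chef.COUNT LOOP
  -- index the varray first, then reference the record field
  v_numero_chef          := p_chef(i).p_numero_chef;
  v_type_infraction_chef := p_chef(i).p_type_infraction_chef;
END LOOP;
</code>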
Categories: DBA Blogs

recover using storage snapshot

Fri, 2024-11-01 12:06
I have a consistent storage snapshot of an Oracle database. I want to restore the snapshot to a mount host and roll the archivelogs forward to a specific point in time or SCN. I always get an error when trying to open the database on the mount host: <code>
SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-01113: file 1 needs media recovery
ORA-01110: data file 1: '+DATA/ORCL/DATAFILE/system.256.1183393031'
</code> What I have done on the production host: take a consistent storage snapshot of the online database including DATA, REDO and ARCHIVE; make database changes; back up archivelogs. On the mount host: restore the storage snapshot; attach the ASM disk groups; restore the archivelogs taken after the storage snapshot was made; mount the database and try to recover it to an SCN, a specific time, or just until cancel. I always get ORA-01113 and ORA-01110. I also tried to recover using a backup controlfile, with the same result. What are the correct steps to roll the archivelogs on top of storage snapshots? Thanks!
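For what it's worth, since 12c RMAN's RECOVER DATABASE has a SNAPSHOT TIME clause intended precisely for crash-consistent storage snapshots taken without BEGIN/END BACKUP: it declares the image a snapshot and requires recovery to proceed past that time before the database can be opened. A sketch only, with placeholder times and quoting that may need adjusting to your NLS settings; this is an assumption about the scenario, not a verified fix: <code>
RMAN> STARTUP MOUNT;
-- roll forward from the snapshot to the desired point in time:
RMAN> RECOVER DATABASE
        UNTIL TIME "to_date('2024-11-01 13:00:00','yyyy-mm-dd hh24:mi:ss')"
        SNAPSHOT TIME '2024-11-01 12:00:00';
RMAN> ALTER DATABASE OPEN RESETLOGS;
</code>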
Categories: DBA Blogs

Can you find where a specific column from a specific table is used in Oracle?

Fri, 2024-11-01 12:06
As a database developer, I try to ensure columns are not given generic names like "TYPE", "OWNER", or "PRIMARY". It seems redundant to a Java developer to name a column PRODUCT_OWNER when it indicates the OWNER of a PRODUCT... after all, in the Java Entity the property is "owner", and it's obviously the Product Owner. The problem comes when the business wants to have multiple Owners, so we have to go find all the code that references OWNER, but just for this table, and we have multiple tables with OWNER columns. Or the column is something like TYPE, and the word TYPE is all over the database objects. Or we want to drop the column and want to see where it's used (this one is easier: drop the column in a test database and see what breaks). There are 100 different cases where, as a database developer, I need to know where a column is used. I know there are ways to find certain things, like in SYS.ALL_DEPENDENCIES, or searching other tables where COLUMN_NAME = 'OWNER', or looking in USER_SOURCE, etc. Some are helpful; others just return a lot of noise. Obviously, the Oracle compiler knows which columns are used where, otherwise the syntactic compile wouldn't work. Is it possible to find this information via a data dictionary view?
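For column references inside stored PL/SQL at least, PL/Scope gets close to this: when code is compiled with identifier and statement collection enabled (SQL identifiers are captured from 12.2 onward), the references land in the *_IDENTIFIERS and *_STATEMENTS views. A sketch; it only covers PL/SQL recompiled with the setting on, not views or client-side code: <code>
alter session set plscope_settings = 'IDENTIFIERS:ALL, STATEMENTS:ALL';
-- recompile the relevant PL/SQL, then:
select object_type, object_name, usage, line, col
  from user_identifiers
 where name = 'OWNER'
   and type = 'COLUMN';
</code>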
Categories: DBA Blogs

How to import data from more than one .csv (Excel) file into the related tables

Fri, 2024-11-01 12:06
Dear Tom, Hi! Thanks for the previous answers. They were as perfect as always. Question: I have around 50 .csv files (Excel files) containing data which needs to be imported into the related tables in the database. I have a script which I run from the command prompt (Windows NT) and which makes use of the sqlldr command. I am creating a .ctl file; for every table I create a single .ctl file. A sample of the script is as follows: <code>
---c1.ctl---
LOAD DATA
INFILE MT_srvc_ctgry_rel.csv
INTO TABLE MT_srvc_ctgry_rel
append
fields terminated by ","
( SRVC_CTGRY_CD,
  SRVC_CD,
  DEL_F,
  DEL_DT DATE 'YYYY-MM-DD',
  UPDT_USER_ID,
  CRT_DT DATE 'YYYY-MM-DD',
  UPDT_DT DATE 'YYYY-MM-DD')
</code> I use the following command to execute the script: <code>
c:\> SQLLDR user1/user1@dbmatch c1.ctl c1.log
</code> In this script, MT_srvc_ctgry_rel is the table name; it is empty and gets its data from the MT_srvc_ctgry_rel.csv file. I am creating such a script for every table because the csv files and, of course, the columns in every table are different. Say I have tables T1, T2, T3 and T4 and files c1.csv, c2.csv, c3.csv, c4.csv containing data. Can you suggest a method through which the data in all 4 tables can be inserted in one go from their respective .csv files? Is there any keyword which can be used to mark all the columns in the table instead of writing every column in parentheses in the script, and is it possible to mention more than one .csv file name in the script? Kindly answer! Thanks & Regards, OP Lalwani
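Two partial shortcuts may help here, sketched with hypothetical file names. A control file may list several INFILE clauses, but they all feed the same INTO TABLE layout, so this only collapses files that share a structure; and the 50 per-table runs are easy to drive from one Windows batch loop. There is, however, no wildcard that expands to "all columns": the field list must be spelled out per table. <code>
-- one table loaded from several same-layout csv files:
LOAD DATA
INFILE 'c1a.csv'
INFILE 'c1b.csv'
APPEND
INTO TABLE t1
FIELDS TERMINATED BY ","
(col1, col2, col3)
</code> and a batch loop to run every control file in one go: <code>
for %%f in (c*.ctl) do sqlldr user1/user1@dbmatch control=%%f log=%%~nf.log
</code>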
Categories: DBA Blogs

Oracle's JSON implementation compared to Mongo

Thu, 2024-10-31 17:46
Greetings, I recently heard that Oracle 23 will support JSON! I am excited about this since I have been taken away from supporting Oracle for the past three years to work on Mongo. As I have been working with Mongo, I've realized a few big weaknesses compared to RDBMS design. I wonder if Oracle's JSON implementation avoids these weaknesses: 1.) Storage overhead from repeating field names in each document. a. Let's say I have a collection that will contain 1000 fields. In this collection we will have 1 billion documents. Each document will be required to add a value for each of the 1000 fields. I calculated the total number of characters used by the 1000 field names to be 10,000. Does Oracle save the field names for the 1000 fields in each document? That would result in 10,000 characters times a billion documents = 10 trillion characters stored! In relational database tables, only 10,000 characters would be stored, since field names live only in the table definition. b. Do you know if Oracle's JSON implementation avoids storing the repeating field names in each document of the same collection? There are cases where we don't need the "flexible schema" option; we want all documents to have the same fields and set data types. 2.) Mongo has a two-replica-set requirement for each Mongo primary server instance. Is Oracle going to require the same, which is to have two standby database servers for each primary server? 3.) Mongo highly recommends having enough memory to hold the whole index and collection working set. The weakness here is that Mongo licensing is based on memory, and Mongo's decision to store the field names per document significantly bloats memory with useless information. Will Oracle continue to license based on CPU for JSON implementations? Thanks, John
Categories: DBA Blogs

Oracle JSON versus Mongo JSON

Thu, 2024-10-31 17:46
Greetings, Has there been any performance comparison between Oracle 23 and Mongo 7 on saving and accessing JSON data? If yes, do you have a link to that test? Thanks, John
Categories: DBA Blogs

High Water Mark

Thu, 2024-10-31 17:46
Hello Tom, I have gone through the Oracle documentation and some of your articles and Q&As about the High Water Mark, and I am still confused by the HWM. To my knowledge, "all Oracle segments have an upper boundary containing the data within the segment. This upper boundary is called the high water mark, or HWM." That means a table cannot contain data beyond the HWM (if that is not correct, please correct me). If this is correct, and we determine the high water mark with the help of this query: <code>
SELECT blocks, empty_blocks, num_rows
  FROM user_tables
 WHERE table_name = <tablename>;
</code> where 1. BLOCKS represents the number of blocks 'ever' used by the segment, and 2. EMPTY_BLOCKS represents only the number of blocks above the HWM, then: Question: What does EMPTY_BLOCKS represent? Question: How do I determine the HWM? Please give an example. Question: For which objects does an HWM exist? Question: How do I determine the real space used by a table (below the HWM)? I was using: <code>
SELECT COUNT(DISTINCT SUBSTR(rowid,15,4)||SUBSTR(rowid,1,8)) "Used" FROM schema.table;
</code> or <code>
SELECT COUNT(DISTINCT DBMS_ROWID.ROWID_BLOCK_NUMBER(rowid)||DBMS_ROWID.ROWID_RELATIVE_FNO(rowid)) "Used" FROM schema.table;
</code> or <code>
SELECT COUNT(DISTINCT SUBSTR(rowid,1,15)) "Used" FROM schema.table;
</code> but all three queries give different results. Could you please also shed some light on this? THX Monica
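One way to see the HWM directly is DBMS_SPACE.UNUSED_SPACE, which reports the blocks allocated to a segment and the unused blocks above the high water mark; the difference is the space below the HWM. A minimal sketch (owner and table name are placeholders): <code>
set serveroutput on
declare
  l_total_blocks  number;  l_total_bytes  number;
  l_unused_blocks number;  l_unused_bytes number;
  l_file_id number; l_block_id number; l_last_block number;
begin
  dbms_space.unused_space(
    segment_owner             => user,
    segment_name              => 'MY_TABLE',   -- placeholder
    segment_type              => 'TABLE',
    total_blocks              => l_total_blocks,
    total_bytes               => l_total_bytes,
    unused_blocks             => l_unused_blocks,
    unused_bytes              => l_unused_bytes,
    last_used_extent_file_id  => l_file_id,
    last_used_extent_block_id => l_block_id,
    last_used_block           => l_last_block);
  dbms_output.put_line('blocks below HWM: ' ||
                       (l_total_blocks - l_unused_blocks));
end;
/
</code>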
Categories: DBA Blogs

External table and SQL macros

Tue, 2024-10-29 10:46
hi, I have a question regarding external tables and macros. We have an external table, and there are a couple of complex SQL expressions (case, regexp, substr, ...) that I would like to 'hide' in a view so we can reuse those expressions in PL/SQL. With a view I cannot pass the location, so I thought a macro could help me out, but so far, no luck. <code>
select banner_full from v$version;

BANNER_FULL
--------------------------------------------------------------------------------
Oracle Database 19c Standard Edition 2 Release 19.0.0.0.0 - Production
Version 19.24.0.0.0

create table test_table (
  a varchar2(10),
  b varchar2(10),
  c varchar2(10)
)
organization external (
  type oracle_loader
  default directory dir_temp
  access parameters (
    fields csv without embedded
    missing field values are null
    reject rows with all null fields
  )
  location ( 'test.csv' )
);

-- this works
select * from test_table external modify ( location('test.csv') );
</code> <code>
create or replace function external_table_macro_test (
  p_tab      in dbms_tf.table_t
 ,p_filename in varchar2
) return clob sql_macro
is
begin
  return q'{select a, case a when 'test' then b end as just_an_example
              from p_tab external modify(location('p_filename'))}';
end external_table_macro_test;
/
</code> <code>
-- this returns an error:
-- ORA-30663: An EXTERNAL MODIFY clause can only be specified for an external or hybrid-partitioned table.
select * from external_table_macro_test(p_tab => test_table, p_filename => 'test.csv');
</code> Even without the filename parameter it fails. Is this combination even possible? thx
Categories: DBA Blogs

Documentation of the lifecycle from dev environment to production, with git, Visual Builder Studio, etc.?

Mon, 2024-10-28 16:26
I would like to know if there is documentation for distribution from a dev to a prod environment? Regards, Eduardo
Categories: DBA Blogs

Unable to connect to Oracle server (VM) from my laptop's command prompt using SQL*Plus

Mon, 2024-10-28 16:26
I have an Oracle 19 server on a cloud VM, and from my laptop I am able to connect to the database using SQL Developer. I have installed the Oracle 19 client on my laptop, and the intention is to connect to Oracle from a Python script. When I try to connect to the database using sqlplus from the command prompt, I get the error "ORA-12547: TNS:lost contact". Kindly let me know how to resolve this. <code>sqlplus userid/pwd@oracleserver:port/servicename</code>
Categories: DBA Blogs

Download files from file server

Mon, 2024-10-28 16:26
Dear all, I would like to download files from a file server and view them in Oracle APEX. So far I have been unable to do so. I have been able to export a file uploaded from an Oracle APEX page to the file server, but I would also like to do the reverse, i.e. read files from the file server and view them in Oracle APEX. Note that I am saving only the file ID and filename in an Oracle table. The objective is to avoid saving files as BLOBs in the database. Thanking you,
Categories: DBA Blogs

EXECUTE IMMEDIATE: dynamically handling the USING clause

Fri, 2024-10-18 00:06
Hi, We have a case which is depicted in the code below. The requirement is that the SQL has three input values in the WHERE clause, which are passed using an EXECUTE IMMEDIATE ... USING clause. There may be cases where the user does not pass one or more of the values, and then the block errors when executed. Is there a way to handle this requirement? <code>
declare
  l_empno  number := &l_empno;
  l_deptno number := &l_deptno;
  l_mgr    number := &l_mgr;
  l_stm    varchar2(1000);
begin
  l_stm := 'select * from emp where empno = :1 and deptno = :2 and mgr = :3';
  execute immediate l_stm using l_empno, l_deptno, to_number(l_mgr);
end;
</code> 1) When all values are passed as input, the PL/SQL block completes successfully. 2) When any one of the values is not passed, it errors. Is there a way to do this? Thanks, Girish
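A common way to make every bind optional while keeping one statement is the NVL pattern: always bind all values and let a NULL bind collapse that predicate. A minimal sketch; note that NVL(:b, col) only matches when the column is NOT NULL, so a NULL-able column such as MGR needs the explicit OR form, and with EXECUTE IMMEDIATE each placeholder occurrence needs its own entry in USING: <code>
declare
  l_empno  number := null;   -- not supplied by the user
  l_deptno number := 20;
  l_mgr    number := null;   -- not supplied by the user
  l_cnt    number;
begin
  execute immediate
    'select count(*) from emp
      where empno  = nvl(:1, empno)
        and deptno = nvl(:2, deptno)
        and (mgr = :3 or :4 is null)'   -- MGR is NULL-able, NVL alone is not enough
    into l_cnt
    using l_empno, l_deptno, l_mgr, l_mgr;
  dbms_output.put_line(l_cnt);
end;
/
</code>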
Categories: DBA Blogs
