sqlldr committing too few rows given readsize, bindsize, rows settings [message #72032] Tue, 25 February 2003 12:43
DK
Messages: 11
Registered: July 2002
Junior Member
I am having a problem with the number of rows sqlldr inserts per commit: right now it is committing only about 21 rows at a time, which is much too slow when I am trying to load millions of rows.

Here is the sqlldr command line that I am using:

sqlldr userid=${USER_NAME}/${USER_PASSWD}@${ORACLE_SID} control=$CONTROL_FILE log=$LOG_FILE errors=999999999999 readsize=10000000 bindsize=10000000 rows=1000000 bad=$BADFILE discard=$DISCARDFILE

As you can see, I have the readsize, bindsize, and rows parameters set very large, yet it is still only loading 21 rows at a time.

Do you have any ideas why this is? Is there anything inherent in the creation of the table that would cause this?

Does anybody know how to set optimal readsize/bindsize parameters?

Thanks,
DK
Re: sqlldr committing too few rows given readsize, bindsize, rows settings [message #72039 is a reply to message #72032] Tue, 25 February 2003 23:58
K.K. Raj kumar
Messages: 33
Registered: July 2002
Member
You have to set the BUFFER parameter to a larger value.

(This value is proportional to the SGA of your instance.)
Re: sqlldr committing too few rows given readsize, bindsize, rows settings [message #74329 is a reply to message #72032] Tue, 16 November 2004 04:49
Despina
Messages: 1
Registered: November 2004
Junior Member
Dear friend,
You are loading an array of rows=1000000, but such an array needs more memory than bindsize=10000000 allows. Try setting rows lower. You are asking it to buffer too many rows at once, and the lack of bind-array memory is what slows it down.
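The arithmetic behind this reply can be sketched as follows. SQL*Loader sizes its bind array as (rows x bytes per row) and silently shrinks the row count until the array fits inside bindsize, which is why a wide row definition (for example a CHAR column with a large declared length) can collapse a requested batch of 1,000,000 rows down to 21. The 476 KB per-row figure below is a hypothetical value chosen so the numbers reproduce the 21-rows-per-commit symptom; it is not stated anywhere in the thread.

```python
def effective_rows(requested_rows: int, bindsize: int, bytes_per_row: int) -> int:
    """Approximate the batch size SQL*Loader actually uses per commit.

    The bind array must fit inside bindsize, so the requested row count
    is capped at bindsize // bytes_per_row (but at least one row).
    """
    return max(1, min(requested_rows, bindsize // bytes_per_row))

# The poster's settings: rows=1000000, bindsize=10000000 (~10 MB).
narrow = effective_rows(1_000_000, 10_000_000, 100)      # narrow rows
wide = effective_rows(1_000_000, 10_000_000, 476_000)    # very wide rows

print(narrow)  # 100000 rows per commit: bindsize, not rows=, is the cap
print(wide)    # 21 rows per commit: matches the symptom in the question
```

So rather than lowering rows, the usual fix is to check the control file for oversized column declarations, or raise bindsize until the desired batch fits.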