Large table help [message #273273]
Tue, 09 October 2007 16:12
zzasteven
Messages: 18  Registered: August 2007
Junior Member
I have a table into which I need to insert about 500,000 new rows every day. The table now holds about 80 million rows, which is about 12 GB of data. The insert used to run fast when the table was small, but now that the table is large it is taking hours.
I tried the insert with the APPEND hint. I also tried a MERGE statement; it did not speed things up. The load needs to run in under 2 hours, but it is taking a second or two per insert, and that time is growing as the table grows.
Is there any way to do a fast insert into a very large table? Or is there some setting we have to set to work with large tables?
Any ideas would be helpful. The insert just selects from one table into another. The table it selects from never grows; it is always about 600,000 rows.
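For context, the usual Oracle idiom for a fast bulk load is a direct-path parallel insert. A minimal sketch, with hypothetical table names, and assuming no triggers or referential constraints that would silently disable direct path:

```sql
-- Sketch only: target_table / source_table / load_flag are placeholders.
-- APPEND loads blocks above the high-water mark and bypasses the buffer
-- cache; parallel DML must be enabled at the session level first.
alter session enable parallel dml;

insert /*+ APPEND PARALLEL(t, 4) */ into target_table t
select *
from   source_table s
where  s.load_flag = 'Y';

commit;  -- direct-path loaded rows are only visible after commit
```

Whether direct path actually kicks in can be checked in the execution plan (a LOAD AS SELECT operation instead of a conventional INSERT).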
Re: Large table help [message #273426 is a reply to message #273285]
Wed, 10 October 2007 07:33
zzasteven
Messages: 18  Registered: August 2007
Junior Member
Like I said, it is a very simple select statement. I have tried different hints to speed it up on big tables, but nothing works.
insert into flpos
  (ACFAN, IPCUS, PRBDTE, PRSHRBAL, PRAUO, PRAUD, PRAOR, PRREGC,
   PRTACFAN, PRRFLN, PRFARDTE, PRIMPSRC, PRDIVOPT)
select /*+ PARALLEL(FWIMPORTRKP9,DEFAULT) */
       aaacfan, aaipcus, aabdte, aashrbal, a_UserName, sysdate,
       osaor, osregc, aatacfan, aarfln, aafardte, aaimpsrc, aadivopt
from   fwimportrkp9
where  aarfln = a_filename
and    errorind = '0'
and    dupind = 'N'
and    aaptpe = 'P'
and    not exists
       (select '1'
        from   flpos
        where  ACFAN    = aaacfan
        and    IPCUS    = aaipcus
        and    prtacfan = aatacfan
        and    PRBDTE   = aabdte);
thanks for all the help...
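[Editorial note on the statement above: one plausible reason the insert slows down as flpos grows is that the correlated NOT EXISTS may be executed as an index probe per source row against an ever-larger flpos index. A hedged sketch of asking the optimizer for a hash anti-join instead, using the HASH_AJ subquery hint that Oracle supported in releases of this era (column names copied from the statement above):]

```sql
-- Sketch only: same statement shape as above, with HASH_AJ inside the
-- NOT EXISTS subquery to request a hash anti-join rather than a
-- per-row index probe into flpos.
insert into flpos
  (ACFAN, IPCUS, PRBDTE, PRSHRBAL, PRAUO, PRAUD, PRAOR, PRREGC,
   PRTACFAN, PRRFLN, PRFARDTE, PRIMPSRC, PRDIVOPT)
select aaacfan, aaipcus, aabdte, aashrbal, a_UserName, sysdate,
       osaor, osregc, aatacfan, aarfln, aafardte, aaimpsrc, aadivopt
from   fwimportrkp9
where  aarfln = a_filename
and    errorind = '0'
and    dupind = 'N'
and    aaptpe = 'P'
and    not exists
       (select /*+ HASH_AJ */ null
        from   flpos
        where  ACFAN    = aaacfan
        and    IPCUS    = aaipcus
        and    prtacfan = aatacfan
        and    PRBDTE   = aabdte);
```

Comparing the execution plan before and after (HASH JOIN ANTI versus a FILTER or nested-loop shape) would confirm whether the hint took effect.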
Re: Large table help [message #273531 is a reply to message #273509]
Wed, 10 October 2007 14:49
zzasteven
Messages: 18  Registered: August 2007
Junior Member
I see what you are saying about selecting from the table we are inserting into. I shall change that. Strange that it ran fast on tables of 40 million. The SQL uses the indexes on the table, but the larger the table, the longer it takes.
I have never heard of moving the data to a temp table and then moving it into the real table. I shall have to try that.
Thanks for your help and input!
Re: Large table help [message #273725 is a reply to message #273531]
Thu, 11 October 2007 09:01
Kevin Meade
Messages: 2103  Registered: December 1999  Location: Connecticut USA
Senior Member
Hope it works.
I am not saying the code is wrong. Heck, I write this kind of code all the time. But when you start moving lots of data, small quirks become bigger issues.
The idea of a temp table is just something a lot of people do when moving data. It seems a common practice to create waypoints in a data transformation process.
When I said temp table, I didn't actually mean a "global temporary table"; any waypoint table would work. I was just thinking of this:
create table work1 ( a number, b number, c number )
/
create table real1 ( a number, b number, c number )
/
insert into work1 select a, b, c from real1
/
insert into real1 select * from work1
/
This takes the heat off the real table. Seems silly, but I bet it makes a difference for you.
Good luck, Kevin
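[Editorial note: applied to the statement earlier in the thread, Kevin's waypoint idea might look like the sketch below. It assumes a pre-created staging table, here called stage_flpos (a hypothetical name), with the same thirteen columns as the insert list on flpos.]

```sql
-- Step 1: do the filtering and the NOT EXISTS check against the big
-- table once, landing the survivors in a small staging table.
truncate table stage_flpos;

insert /*+ APPEND */ into stage_flpos
select aaacfan, aaipcus, aabdte, aashrbal, a_UserName, sysdate,
       osaor, osregc, aatacfan, aarfln, aafardte, aaimpsrc, aadivopt
from   fwimportrkp9
where  aarfln = a_filename
and    errorind = '0'
and    dupind = 'N'
and    aaptpe = 'P'
and    not exists
       (select null
        from   flpos
        where  ACFAN    = aaacfan
        and    IPCUS    = aaipcus
        and    prtacfan = aatacfan
        and    PRBDTE   = aabdte);
commit;

-- Step 2: direct-path load from the small staging table into flpos,
-- now a plain insert/select with no subquery against the target.
insert /*+ APPEND */ into flpos
  (ACFAN, IPCUS, PRBDTE, PRSHRBAL, PRAUO, PRAUD, PRAOR, PRREGC,
   PRTACFAN, PRRFLN, PRFARDTE, PRIMPSRC, PRDIVOPT)
select * from stage_flpos;
commit;
```

This splits the work so the duplicate check and the physical load into the 80-million-row table are separate, measurable steps.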