Snapshots Last Refresh - SQL & PL/SQL

Does anyone know why the LAST_REFRESH column in USER_SNAPSHOTS
always shows 01-JAN-1950 12:00:00 for all snapshots?
Is it a Bug?
Thanks
Thiru 

I believe it is a bug that is fixed in Oracle 9i. At least
that's the response I was given when I opened a TAR against a
similar issue, although in my case the LAST_REFRESH date was
the initial time I created the snapshot and never changed with
subsequent refreshes. Do a Metalink search and you'll find lots
of information as well as "work-arounds".
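For example, one work-around sketch (assuming your version also exposes the USER_MVIEW_REFRESH_TIMES dictionary view; MY_SNAP is a placeholder name):
-- cross-check the buggy USER_SNAPSHOTS.LAST_REFRESH against the per-master refresh times
SELECT name, last_refresh FROM user_snapshots WHERE name = 'MY_SNAP';
SELECT name, master, last_refresh FROM user_mview_refresh_times WHERE name = 'MY_SNAP';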

Related

finding out a particular query

Hi All,
In my working environment, execution of some query caused a threshold value to be reached and the server got restarted.
Now I want to know which queries were executed in a particular window, i.e. between 5pm and 7pm on the 24th of April.
Hi,
We have almost the same issue, please see the following link, it might help you.
SQL statements logging - need an idea
Regards 
Ya, that's OK for the next steps,
but as of now auditing is off, and I want the old query which was already executed,
plus one more piece of information about the same: the user who executed this query.
Thanks for your answer, but I need more detail.
If auditing is off then it is much more likely that it cannot be retrieved anymore, since the DB got restarted.
But try this if you can still see what you're looking for:
describe v$sql or v$sqlarea, then select the columns you need from them, if the statement is still there.
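For example (a sketch, assuming 10g or later where V$SQLAREA exposes LAST_ACTIVE_TIME, run as a user who can read DBA_USERS; the dates are placeholders and default to the current year):
-- statements still cached in the shared pool that were active in the window,
-- with the parsing user; note that a restart clears the shared pool
SELECT u.username, s.last_active_time, s.sql_text
FROM   v$sqlarea s, dba_users u
WHERE  u.user_id = s.parsing_user_id
AND    s.last_active_time BETWEEN TO_DATE('24-APR 17:00','DD-MON HH24:MI')
                              AND TO_DATE('24-APR 19:00','DD-MON HH24:MI')
ORDER  BY s.last_active_time;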
Regards.

Portal reports (some) under monitor tab timed out

Hi,
Some of my reports under the monitor tab are timing out or returning no rows. Portal version 3.0.9.8.3.
I have more than 600,000 rows in the wwlog_activity_log1$ and log2$ tables. They switch every 15 days.
All of the objects related to these reports are valid.
I have 28 indexes in total across both tables.
Can somebody suggest why some of these reports are not working?
Thanks
Janga 
Hi,
Maybe your transaction volume is high; you should try reducing the log interval to less than 15 days. This might help.
Thanks,
Sharmila 
You are right. If I truncate both log tables it works fine, but after the next day I have the same problem again.
Within a day I am getting approximately 18,000 transactions.
I am unable to view reports even for two days.
But I cannot rotate the logs every day.
Is this a BUG, or is there any work-around for this?
Thanks
Janga 
Hi,
This is happening only when I access the report named "What are the most popular folders", and it happens from the second day after I truncate the log tables.
Thanks
Janga 
Looks like this is a bug. Some of the columns being used in this chart are not indexed. Please contact customer support to file a bug for this.
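If you want to verify which columns are covered (a sketch; run as a user who can see the Portal schema's objects):
-- list the indexed columns on the two activity log tables
SELECT i.table_name, i.index_name, c.column_name, c.column_position
FROM   all_indexes i, all_ind_columns c
WHERE  c.index_owner = i.owner
AND    c.index_name  = i.index_name
AND    i.table_name IN ('WWLOG_ACTIVITY_LOG1$','WWLOG_ACTIVITY_LOG2$')
ORDER  BY i.table_name, i.index_name, c.column_position;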
Thanks,
Sharmila

Undo Problem

We are facing an unexpected rise in UNDO usage. We want to know which query is causing the problem and how much undo it is generating.
We are using 10gR2.
Regards 
When faced with something like your situation, you have to ask 'what has changed'?
Have some developers changed some code? Has your application changed in some way? Has someone perhaps scripted some bad SQL?
rgds
Alan 
First, you could run a Statspack or ADDM report; then you will have better insight into what's going on inside your database.
Also query the v$transaction view to see how many active transactions are in your database. And, like the other poster said, find out what has changed recently: why have application users changed their behavior, and why are transactions not being committed in a timely manner?
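For example (a sketch; joins each active transaction to its session):
-- active transactions ordered by the undo blocks they currently hold
SELECT s.sid, s.username, t.start_time, t.used_ublk, t.used_urec
FROM   v$transaction t, v$session s
WHERE  s.taddr = t.addr
ORDER  BY t.used_ublk DESC;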
Has any major UPDATE statement been run in the last three or four days?
Check with all the users what they are updating.
Hi friends, we have got the right SQL query to deal with it (note that v$undostat.MAXQUERYID is the longest-running query of each interval, and the MB figure assumes an 8K undo block size):
SELECT u.maxqueryid AS sql_id,
       TO_CHAR(u.begin_time,'DD-MON-YY HH24:MI') "BEGIN_TIME",
       TO_CHAR(u.end_time,'DD-MON-YY HH24:MI') "END_TIME",
       SUM(u.undoblks*8)/1024 "UNDO IN MB"  -- blocks*8 = KB, /1024 = MB
FROM   v$undostat u, v$sql q
WHERE  u.maxqueryid = q.sql_id
AND    u.begin_time BETWEEN TO_DATE('&BDDDMONYYTIMEhh24mi','DD-MON-YY HH24:MI')
                        AND TO_DATE('&EDDDMONYYTIMEhh24mi','DD-MON-YY HH24:MI')
GROUP  BY u.maxqueryid, TO_CHAR(u.begin_time,'DD-MON-YY HH24:MI'), TO_CHAR(u.end_time,'DD-MON-YY HH24:MI');
cheers
hi,
How many rows are in each table? You are doing a GROUP BY and this may cause a big sort in your temp tablespace. Has the data volume increased in the past few days?
rgds
alan.

why is materialized view data delayed?

Hi,
I am looking after some materialized views which are meant to refresh every night, but we have discovered that even though the refresh occurs every night, data from the source database doesn't appear in the destination until after the next day's refresh.
Ideally we want yesterday's data to be available in the destination database today, but it is only available tomorrow, even though the refresh job took place last night.
Does anyone have an explanation for something like this?
Thanks 
Are you monitoring how long the MVs take to refresh?
Once the refresh has finished (successfully), the data in them will be the data from the underlying query at the start of the refresh job.
If the scenario is anything other than this then further analysis at your end is required. 
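As a first check (a sketch; USER_MVIEWS is the standard dictionary view here):
-- when each mview was last refreshed, how, and whether it is considered stale
SELECT mview_name, last_refresh_type, last_refresh_date, staleness
FROM   user_mviews
ORDER  BY last_refresh_date;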
For example, data was added at 14:00 yesterday, and the refresh job, which happens daily, took place yesterday at 19:00, taking less than 3 minutes. We expect to see the data added yesterday now, but it won't be available until tomorrow - that is, after tonight's refresh.
I can't explain or imagine how this is possible. Can anyone help on this? 
HI,
What is the db version? Well, it sounds difficult to understand.
What happens if you manually refresh the mview? Does the data get reflected (in the manual refresh case) after the refresh completes? Try tracing the session that does the mview refresh at 10046 level and check the generated trace; you might get some clue.
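For example (a sketch; MY_MVIEW is a placeholder for your mview name, and level 12 captures binds and waits):
-- 'F' = fast refresh, 'C' = complete refresh
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
EXEC DBMS_MVIEW.REFRESH('MY_MVIEW', 'F');
ALTER SESSION SET EVENTS '10046 trace name context off';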
Anand 
Not enough information - you've got a very specific problem with no doubt a very specific answer and highly unlikely to be solved with generic discussion.
It absolutely depends on the specifics which, unless you can distill into a test case, will only be solved by investigation that only you can do.
I'd guess that either the underlying query is wrong and it's not selecting the data - that should be easily testable. Are there date predicates?
Also, it sounds unlikely, but a transaction might add a row to table X at T+1, and if it's not committed until T+3, it won't be visible at T+2.

Performance Issue?

In our instances, when we delete any of the elements from a person's element entry screens, it takes quite a while - up to 30 seconds - to save the entries after the deletions. I have not encountered this at any of my previous clients! Does anyone know what the reason for this might be? 
Like any performance issue:
- create a SQL trace file (a sketch follows below)
- tkprof the file (sort on elapsed time)
- identify the SQL that consumes the most resources
- fix it, or let dev fix it (in R12.0.99) 
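A sketch of the first two steps (the trace file name is a placeholder; on 10g you can also use DBMS_MONITOR.SESSION_TRACE_ENABLE to trace another session):
-- in the problem session, around the slow save
ALTER SESSION SET SQL_TRACE = TRUE;
-- ... reproduce the slow delete/save, then:
ALTER SESSION SET SQL_TRACE = FALSE;
and at the OS prompt:
tkprof ora_12345.trc ora_12345.txt sort=prsela,exeela,fchela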
1) Are all users facing this problem? If yes,
check the tablespace as well.
2) Do you have this issue on a particular machine?
3) Check whether it is because of a connectivity issue. 
You can check if there are any active dbms pipes that are not being monitored. Query the v$db_pipes view to get this information.
If this happens in Forms screens, please check if there is any CUSTOM.pll code doing extra validations on the Element Entry form. 
I am sorry for the late update here. I thought I had posted that I would share the results of what you guys suggested later, but apparently I had not.
Well, Soma, there are 8 rows in that view and all look the same (the stats differ, of course). What am I looking for here? 
Thierry,
I created the trace file, and the first time I ran tkprof on it, the first query showed over 45 seconds of elapsed fetch time! I tried to regenerate the trace file by repeating the same process, but I am not getting the same results, in spite of following the same steps and the system taking just as long to save after deleting an element entry and clicking 'Save'. I should have saved the first trace file under a different name before trying to regenerate it; I will try to reproduce the original results. The query was quite simple, with just one column in the select list and one WHERE condition. The select did use the MAX function, though... 
Maddy,
Yes, all users are facing this problem. No, it is not restricted to a particular machine.
What should I be checking for in the tablespace and what connectivity issue might I be looking for here? 
Thierry,
I found the original trace file.
This is the query that took almost 46 seconds to complete!
SELECT MAX(ASSIGNMENT_ID), MAX(ELEMENT_LINK_ID)
FROM   PAY_ELEMENT_ENTRY_VALUES_F_AC
WHERE  ELEMENT_ENTRY_ID = :B1 
Any other suggestions anyone? 
Is it a custom view/table? Do you have custom code that calls this? I checked PAY tables and views and I don't see any seeded table or view named PAY_ELEMENT_ENTRY_VALUES_F_AC.
I checked in an FPK RUP2 env and also eTRM. 
No, it is not a custom object; it is a standard seeded table created when auditing is used. We resolved the issue. When auditing was first enabled, years ago, I think we were on 11.5.9;
at least, we were not on 11.5.10.
The upgrade brought about the performance issue, and then one of the patches provided by Oracle must have somehow affected it further.
We disabled auditing on one of the test instances, and the process which earlier took more than 35 seconds now takes less than a second to run.
We still have to re-enable auditing and see whether that solves the issue or slows the performance down again. 
Once you know the very SQL that takes all the resources, you need the execution path of that query.
v$sql_plan tells you that. Use the hash value of your query (hv in the raw trace file).
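For example (a sketch; :hv is the hash value copied from the hv= entry in the raw trace file):
-- execution plan steps for the cached statement
SELECT id, operation, options, object_name
FROM   v$sql_plan
WHERE  hash_value = :hv
ORDER  BY id;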
good luck!
I would not be surprised if an index on element_entry_id is missing on the audit table. 
Check how many records have accumulated in the audit table; since it's taking a long time, there might be many records. If you do not need all of them, you can probably purge some, which might help improve performance.
At the same time, check whether the audit table has all the required indexes on it; a missing index might be causing the issue. 
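For example (a sketch; run as a user who can see the HR schema's objects):
-- size of the audit table, and whether ELEMENT_ENTRY_ID is indexed
SELECT COUNT(*) FROM pay_element_entry_values_f_ac;
SELECT index_name, column_name
FROM   all_ind_columns
WHERE  table_name = 'PAY_ELEMENT_ENTRY_VALUES_F_AC'
ORDER  BY index_name, column_position;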
will check these and get back to you guys
thanks 
Hey guys,
I didn't have time to do what you suggested after I found the fix... we re-enabled auditing and it's now working fine!
