How to run queries in parallel to improve performance - Data Service Integrator

I am using ALDSP 2.5. My data tables are split 12 ways, based on a hash of a particular column. I have a query to get a piece of data I am looking for; however, this data is split across the 12 tables. So even though my query is the same, I need to run it against 12 tables instead of one. I want to run all 12 queries in parallel instead of one by one, collapse the returned datasets, and return the result to the caller. How can I do this in ALDSP?
To be specific, I will call the operation below to get the data:
declare function ds:SOA_1MIN_POOL_METRIC() as element(tgt:SOA_1MIN_POOL_METRIC_00)*
{
src0:SOA_1MIN_POOL_METRIC(),
src1:SOA_1MIN_POOL_METRIC(),
src2:SOA_1MIN_POOL_METRIC(),
src3:SOA_1MIN_POOL_METRIC(),
src4:SOA_1MIN_POOL_METRIC(),
src5:SOA_1MIN_POOL_METRIC(),
src6:SOA_1MIN_POOL_METRIC(),
src7:SOA_1MIN_POOL_METRIC(),
src8:SOA_1MIN_POOL_METRIC(),
src9:SOA_1MIN_POOL_METRIC(),
src10:SOA_1MIN_POOL_METRIC(),
src11:SOA_1MIN_POOL_METRIC()
};
This function acts as a proxy: it aggregates data from 12 tables.
src0:SOA_1MIN_POOL_METRIC() gets data from the SOA_1MIN_POOL_METRIC_00 table,
src1:SOA_1MIN_POOL_METRIC() gets data from the SOA_1MIN_POOL_METRIC_01 table, and so on.
The data source of each table is different (src0, src1, etc.). How can I run these queries in parallel to improve performance?
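Conceptually, what I want is the plain fan-out/merge pattern. Outside ALDSP it would look something like the following Java sketch; the DataSource list, table names, and selected columns here are only illustrative, not taken from the actual service:

import java.sql.*;
import java.util.*;
import java.util.concurrent.*;
import javax.sql.DataSource;

// Illustrative only: run the same query against 12 hash-partitioned tables in
// parallel and collapse the result sets into a single list.
public class ParallelSliceQuery {
    public static List<long[]> queryAllSlices(List<DataSource> sources)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(sources.size());
        List<Future<List<long[]>>> futures = new ArrayList<>();
        for (int i = 0; i < sources.size(); i++) {
            final DataSource ds = sources.get(i);
            final String sql = "SELECT SOA_METRIC_ID, METRIC_COMPONENT_VALUE0"
                    + " FROM SOA_1MIN_POOL_METRIC_" + String.format("%02d", i);
            futures.add(pool.submit(() -> {
                List<long[]> rows = new ArrayList<>();
                try (Connection c = ds.getConnection();
                     Statement s = c.createStatement();
                     ResultSet rs = s.executeQuery(sql)) {
                    while (rs.next()) {
                        rows.add(new long[] { rs.getLong(1), rs.getLong(2) });
                    }
                }
                return rows;
            }));
        }
        List<long[]> merged = new ArrayList<>();
        for (Future<List<long[]>> f : futures) {
            merged.addAll(f.get());   // blocks until that slice has finished
        }
        pool.shutdown();
        return merged;
    }
}

Within ALDSP itself, the equivalent mechanism is the fn-bea:async function discussed in the replies below.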

See the documentation for the async function if you want to run the queries in parallel. It's not clear that doing so will improve performance, though.
I don't see any predicate (selection/where) in your query, so the function call shown will retrieve every row in all twelve tables.
Many databases provide means for slicing tables; using the database's own slicing feature is likely much more efficient than creating your own.
- Mike 

Thanks Mike.
The async function works; from the log I can see that the queries are executed in parallel.
But the behaviour is confusing: with the same input it sometimes gives the right result, and sometimes (especially when a few other applications are running on the machine) it throws the exception below:
java.lang.IllegalStateException
     at weblogic.xml.query.iterators.BasicMaterializedTokenStream.deRegister(BasicMaterializedTokenStream.java:256)
     at weblogic.xml.query.iterators.BasicMaterializedTokenStream$MatStreamIterator.close(BasicMaterializedTokenStream.java:436)
     at weblogic.xml.query.runtime.core.RTVariable.close(RTVariable.java:54)
     at weblogic.xml.query.runtime.core.RTVariableSync.close(RTVariableSync.java:74)
     at weblogic.xml.query.iterators.FirstOrderIterator.close(FirstOrderIterator.java:173)
     at weblogic.xml.query.iterators.FirstOrderIterator.close(FirstOrderIterator.java:173)
     at weblogic.xml.query.iterators.FirstOrderIterator.close(FirstOrderIterator.java:173)
     at weblogic.xml.query.iterators.FirstOrderIterator.close(FirstOrderIterator.java:173)
     at weblogic.xml.query.runtime.core.IfThenElse.close(IfThenElse.java:99)
     at weblogic.xml.query.runtime.core.CountMapIterator.close(CountMapIterator.java:222)
     at weblogic.xml.query.runtime.core.LetIterator.close(LetIterator.java:140)
     at weblogic.xml.query.runtime.constructor.SuperElementConstructor.prepClose(SuperElementConstructor.java:183)
     at weblogic.xml.query.runtime.constructor.PartMatElemConstructor.close(PartMatElemConstructor.java:251)
     at weblogic.xml.query.runtime.querycide.QueryAssassin.close(QueryAssassin.java:65)
     at weblogic.xml.query.iterators.FirstOrderIterator.close(FirstOrderIterator.java:173)
     at weblogic.xml.query.runtime.core.QueryIterator.close(QueryIterator.java:146)
     at com.bea.ld.server.QueryInvocation.getResult(QueryInvocation.java:462)
     at com.bea.ld.EJBRequestHandler.executeFunction(EJBRequestHandler.java:346)
     at com.bea.ld.ServerBean.executeFunction(ServerBean.java:108)
     at com.bea.ld.Server_ydm4ie_EOImpl.executeFunction(Server_ydm4ie_EOImpl.java:262)
     at com.bea.dsp.dsmediator.client.XmlDataServiceBase.invokeFunction(XmlDataServiceBase.java:312)
     at com.bea.dsp.dsmediator.client.XmlDataServiceBase.invoke(XmlDataServiceBase.java:231)
     at com.ebay.rds.dao.SOAMetricDAO.getMetricAggNumber(SOAMetricDAO.java:502)
     at com.ebay.rds.impl.NexusImpl.getMetricAggNumber(NexusImpl.java:199)
     at com.ebay.rds.impl.NexusImpl.getMetricAggNumber(NexusImpl.java:174)
     at RDSWS.getMetricAggNumber(RDSWS.jws:240)
     at jrockit.reflect.VirtualNativeMethodInvoker.invoke(Ljava.lang.Object;[Ljava.lang.Object;)Ljava.lang.Object;(Unknown Source)
     at java.lang.reflect.Method.invoke(Ljava.lang.Object;[Ljava.lang.Object;I)Ljava.lang.Object;(Unknown Source)
     at com.bea.wlw.runtime.core.dispatcher.DispMethod.invoke(DispMethod.java:371)
Below is my code example. First I get data from all 12 queries, each enclosed in an fn-bea:async call; finally, I do a group-by aggregation over the whole data set. Is it possible that the exception occurs because the aggregation starts before some threads have returned their data?
The $metricName, $serviceName, $opName, and $soaDbRequest values are simply passed in from the operation parameters.
let $METRIC_RESULT := (
        fn-bea:async(
            for $SOA_METRIC in ns20:getMetrics($metricName,$serviceName,$opName,"")
            for $SOA_POOL_METRIC in src0:SOA_1MIN_POOL_METRIC()
            where
            $SOA_POOL_METRIC/SOA_METRIC_ID eq fn-bea:fence($SOA_METRIC/SOA_METRIC_ID)
            and $SOA_POOL_METRIC/CAL_CUBE_ID  ge fn-bea:fence($soaDbRequest/ns16:StartTime)  
            and $SOA_POOL_METRIC/CAL_CUBE_ID lt fn-bea:fence($soaDbRequest/ns16:EndTime )
     
            and ( $SOA_POOL_METRIC/SOA_SERVICE_ID eq fn-bea:fence($soaDbRequest/ns16:ServiceID)
               or (0 eq fn-bea:fence($soaDbRequest/ns16:ServiceID)))
            and ( $SOA_POOL_METRIC/POOL_ID eq fn-bea:fence($soaDbRequest/ns16:PoolID)
               or (0 eq fn-bea:fence($soaDbRequest/ns16:PoolID)))
            and ( $SOA_POOL_METRIC/SOA_USE_CASE_ID eq fn-bea:fence($soaDbRequest/ns16:UseCaseID)
               or (0 eq fn-bea:fence($soaDbRequest/ns16:UseCaseID)))
            and ( $SOA_POOL_METRIC/ROLE_TYPE eq fn-bea:fence($soaDbRequest/ns16:RoleID)
               or (-1 eq fn-bea:fence($soaDbRequest/ns16:RoleID)))
           
            return
            $SOA_POOL_METRIC
            ),
           fn-bea:async(for $SOA_METRIC in ns20:getMetrics($metricName,$serviceName,$opName,"")
            for $SOA_POOL_METRIC in src1:SOA_1MIN_POOL_METRIC()
            where
            $SOA_POOL_METRIC/SOA_METRIC_ID eq fn-bea:fence($SOA_METRIC/SOA_METRIC_ID)
            and $SOA_POOL_METRIC/CAL_CUBE_ID  ge fn-bea:fence($soaDbRequest/ns16:StartTime)  
            and $SOA_POOL_METRIC/CAL_CUBE_ID lt fn-bea:fence($soaDbRequest/ns16:EndTime )
     
            and ( $SOA_POOL_METRIC/SOA_SERVICE_ID eq fn-bea:fence($soaDbRequest/ns16:ServiceID)
               or (0 eq fn-bea:fence($soaDbRequest/ns16:ServiceID)))
            and ( $SOA_POOL_METRIC/POOL_ID eq fn-bea:fence($soaDbRequest/ns16:PoolID)
               or (0 eq fn-bea:fence($soaDbRequest/ns16:PoolID)))
            and ( $SOA_POOL_METRIC/SOA_USE_CASE_ID eq fn-bea:fence($soaDbRequest/ns16:UseCaseID)
               or (0 eq fn-bea:fence($soaDbRequest/ns16:UseCaseID)))
            and ( $SOA_POOL_METRIC/ROLE_TYPE eq fn-bea:fence($soaDbRequest/ns16:RoleID)
               or (-1 eq fn-bea:fence($soaDbRequest/ns16:RoleID)))
           
            return
            $SOA_POOL_METRIC
            ),
            ... (: 12 similar queries in total, one per source :)
        )
        for $Metric_data in $METRIC_RESULT    
        group $Metric_data as $Metric_data_Group        
        by   $Metric_data/ROLE_TYPE as $role_type_id  
        return
        <ns0:RawMetric>
            <ns0:endTime?></ns0:endTime>
            <ns0:target?>{$role_type_id}</ns0:target>
            <ns0:value0>{fn:sum($Metric_data_Group/METRIC_COMPONENT_VALUE0)}</ns0:value0>
            <ns0:value1>{fn:sum($Metric_data_Group/METRIC_COMPONENT_VALUE1)}</ns0:value1>
            <ns0:value2>{fn:sum($Metric_data_Group/METRIC_COMPONENT_VALUE2)}</ns0:value2>
            <ns0:value3>{fn:sum($Metric_data_Group/METRIC_COMPONENT_VALUE3)}</ns0:value3>
        </ns0:RawMetric>
Could you tell me why the result is unstable? Thanks!

Hi..
We ran across a similar issue in October 2007 on that version of ALDSP. Although we were calling web services, it was erratic behaviour with the async function and the stack trace was similar. I'd suggest opening a support case for it; they have a patch available if it is the same issue (our patch was CR349222).

Thanks Mark, this helps.
However, when we verified the async solution the performance did not improve: the total execution time is the sum of all the queries, not the time of the longest query. The data below show all the tables and the SQL execution time for each.
Table Name     SQL Execution Time (ms)     Rows Returned
SOA_1H_POOL_METRIC_04     28     
SOA_1H_POOL_METRIC_02     27     
SOA_1H_POOL_METRIC_00     48     
SOA_1H_POOL_METRIC_03     52     2
SOA_1H_POOL_METRIC_04     27     
SOA_1H_POOL_METRIC_02     27     
SOA_1H_POOL_METRIC_00     47     
SOA_1H_POOL_METRIC_01     497     89
SOA_1H_POOL_METRIC_01     345     24
SOA_1H_POOL_METRIC_03     49     
SOA_1H_POOL_METRIC_05     771     73
SOA_1H_POOL_METRIC_05     612     
SOA_1H_POOL_METRIC_06     29     
SOA_1H_POOL_METRIC_06     29     
SOA_1H_POOL_METRIC_07     955     101
SOA_1H_POOL_METRIC_07     601     
SOA_1H_POOL_METRIC_08     857     86
SOA_1H_POOL_METRIC_08     604     
SOA_1H_POOL_METRIC_09     3615     338
SOA_1H_POOL_METRIC_09     2491     3
SOA_1H_POOL_METRIC_10     315     31
SOA_1H_POOL_METRIC_10     214     2
SOA_1H_POOL_METRIC_11     3502     488
SOA_1H_POOL_METRIC_11     2039     2
Total     17781 ms     864
The total SQL time is 17781 ms, which is very close to the actual data service execution time of 18519 ms; it seems the SQL statements are executed one by one, not in parallel.
But shouldn't we expect the data service execution time to be roughly that of the longest query, i.e. 3615 + 2491 ms, about 6 s (in our case each thread accesses its table twice)?
Could you tell me how the async function works? Is there any special configuration in WebLogic/ALDSP needed to make it work?
By the way, we are testing this data service through a web service invocation.
Again, thanks a ton for your support.

Hi..
We had a few issues with it originally. If you check the link Re: Parallel Webservice Calls, it mentions a bit more on the operation of the async function. In our case it did call the web services in parallel, so ours was a case of the longest individual call being more or less the data service time. We're not calling databases in parallel, so I can't really be specific on those points, but I guess there are other areas to look at as well, such as database connection pooling.
The performance and general stability of the product have improved tremendously in the 3.0 version we're currently running (any chance of you upgrading?).
..Regards
..Mark. 

I suspect the first 5 are being executed asynchronously, with the remainder executing synchronously.
Go to the ALDSP Console and increase "Maximum Threads Per Query" to 12.
If that doesn't help, post again.
But you need to stop and think about what your goals are. If one or two queries use up all the available threads, nothing else will run until those queries are complete. Is that ok for your scenario?
** Note: do not use fn-bea:fence unless you open a case with Customer Support and they tell you to use it. The sole purpose of fence is to block the optimizer; you really don't want to do that, do you?
- Mike
Edited by: mikereiche on Feb 10, 2009 8:58 AM 

Mike, you are right. After I changed "Maximum Threads Per Query" from 5 to 12, the execution time is ~8 s, which is close to the longest query.
I have 6 machines for ALDSP and a lot of client applications using ALDSP, so I will balance the thread resources against response time.
I hate to use fence, but sometimes the optimizer does not do a good job and I have to use fence to make sure I get good SQL. For example, given:
$SOA_POOL_METRIC/CAL_CUBE_ID ge fn-bea:fence($soaDbRequest/ns16:StartTime)
if I remove fence, the generated SQL contains (CAL_CUBE_ID >= ? or CAL_CUBE_ID >= ? or CAL_CUBE_ID >= ? ...). It is actually a single-value input, so I only need one "CAL_CUBE_ID >= ?". Is there a simple way to avoid this "PPK optimization"?
Finally, could you tell me how to get CR349222 for the async function fix?
Thanks a ton for your timely support; I really appreciate it.

if I remove fence, the generated SQL contains (CAL_CUBE_ID >= ? or CAL_CUBE_ID >= ? or CAL_CUBE_ID >= ? ...). It is actually a single-value input, so I only need one "CAL_CUBE_ID >= ?". Is there a simple way to avoid this "PPK optimization"?
1) You might be sure it is a single-value input, but ALDSP (and I) are not so sure; you might be blocking a useful optimization. In ALDSP 3.x, if your query is written such that it is unambiguously always a single value, SQL with only one parameter will be generated. Not so in ALDSP 2.5; there you have to use fence (I believe subscripting with [1] will also work).
2) Even if it is always a single value and ALDSP creates SQL with 20 parameters, they will all be populated with the same value. If this performs worse than SQL with a single parameter and the same value, you need to talk to your database vendor about optimization.
For the patch, open a case with customer support.
- Mike


this is a bug: versioning and deferred constraints

When an update is done, the version of an entity is increased by one. After the update succeeds and the subsequent commit is executed, some constraints may still fail; these are called deferred constraints (Oracle can set that per constraint definition, Informix can delay all checks until commit time).
When a constraint fails in the commit, the increased version of the entity is not rolled back. On a retry of the persist, this causes TopLink to think that someone else has modified the record, when in effect it is using the wrong version number.
So, in the first attempt:
update X set ..., version = 2 where ... and version = 1
The update succeeds but the commit fails. The record still has version = 1, while querying the version field of the entity in question returns version = 2.
The second attempt on the same entity:
update X set ..., version = 3 where ... and version = 2
fails with a "modified" exception.
The workaround is to use callbacks: store the version value upon @PostLoad, update the stored value upon @PostUpdate, and force-set it in @PreUpdate.
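A minimal sketch of that callback workaround, using the entity and field names that appear in the test code later in this thread (the real patch may differ in detail):

import javax.persistence.*;

@Entity
public class Stand {

    @Id
    private String standid;

    @Version
    private Integer lazylock;          // the optimistic-lock column

    @Transient
    private Integer committedVersion;  // last value known to match the database

    @PostLoad
    @PostUpdate
    void rememberCommittedVersion() {
        // Runs after a successful load or update, so the value is trustworthy.
        committedVersion = lazylock;
    }

    @PreUpdate
    void restoreCommittedVersion() {
        // Force-set the version back, undoing an increment left by a failed commit.
        lazylock = committedVersion;
    }
}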
Hello,
Can you show some code on how to reproduce it? Are you using JPA, and if so, how are you persisting/merging the entity and what is its version number pre/post rollback? TopLink should be deferring the merging of changes until after the transaction has committed, so I'm not sure how this would be happening.
Best Regards,
Chris 
Yes, and yes; I use JPA. I even have the patch code to be included in each entity as a workaround (see thread 513936, "retry after rollback").
Below is my unit test. It's a bit contrived, because it's an adaptation of another test and tries to mimic the application behaviour as closely as possible, but logically it does the following:
- load an entity (Stand)
- make two changes, one will fail on a unique constraint
- attempt to persist
- undo the change
- attempt to persist
Stand lStandCarryOver = null;
// load
{
     Stand lStand = Stand.findByPK(1); // this is stand A1
     lStand.setDescription( lStand.getDescription() + "x"); // a change to check if the entity was saved
     lStandCarryOver = lStand;
     System.out.println("!!!1 Stand=" + lStand.getStandid()+ "." + lStand.getLazylock() + " / " + lStand.getDescription());
}
// attempt 1
try
{
     Stand lStand = lStandCarryOver;
     lStand.setStandid("A2"); // this will cause an exception upon commit (deferred constraint)
     EntityManagerFinder.find().getTransaction().begin();
     System.out.println("!!!2 Stand=" + lStand.getStandid()+ "." + lStand.getLazylock() + " / " + lStand.getDescription());
     EntityManagerFinder.find().merge( lStand );
     EntityManagerFinder.find().getTransaction().commit();
}          
catch (Throwable e)
{
     System.out.println("!!!2a Stand=" + lStandCarryOver.getStandid()+ "." + lStandCarryOver.getLazylock() + " / " + lStandCarryOver.getDescription());
     if (EntityManagerFinder.find().getTransaction().isActive()) EntityManagerFinder.find().getTransaction().rollback();
}
System.out.println("!!! ==================");
// attempt 2
try
{
     Stand lStand = lStandCarryOver;
     lStand.setStandid("A1"); // this will remove the exception
     System.out.println("!!!3 Stand=" + lStand.getStandid()+ "." + lStand.getLazylock() + " / " + lStand.getDescription());
     EntityManagerFinder.find().getTransaction().begin();
     EntityManagerFinder.find().merge( lStand );
     EntityManagerFinder.find().getTransaction().commit();
}
catch (Throwable e)
{     
     if (EntityManagerFinder.find().getTransaction().isActive()) EntityManagerFinder.find().getTransaction().rollback();
}
TopLink is set to debug + exceptions.
At the first marker (!!!1) it prints "version".
Marker 2 is never reached.
Marker 2a, however, shows "version + 1".
Marker 3 shows "version + 1" and thus fails with a "someone modified" exception.
It is quite simple to fix: at load time, store the version in a variable; before the begin in attempt 2, set the version back and off you go. This is what my workaround patch does. I added it to all my entities and now my application works.
Hello,
I can't be sure, since I can't tell what is happening in the Stand.findByPK(1) or EntityManagerFinder.find() methods, but it looks like you are using an extended EntityManager to initially find the lStand object. The lStand object initially obtained appears to be a managed object, which means the merge(lStand) line is a no-op; you said it wasn't being reached anyway, since marker 2 for some reason isn't reached either. A database exception causes your transaction to roll back, making any managed entities detached and leaving them in an inconsistent state according to the spec. Section 3.3.2 states:
"In particular, the state of version attributes and generated state (e.g., generated primary keys) may be inconsistent. Instances that were formerly managed by the persistence context (including new instances that were made persistent in that transaction) may therefore not be reusable in the same manner as other detached objects—for example, they may fail when passed to the merge operation."
Another solution would be to clear the em after you have queried on it, before starting the transaction. This would cause the lStand instance you are merging in both modification attempts to start off detached, so the version will not be changed in the first attempt. You should then see the results of the second marker as well, since the merge would then be the first time the em sees any changes to persist.
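In code, that reordering looks roughly like this (a sketch reusing the helper names from the test above):

Stand lStand = Stand.findByPK(1);                     // managed by the open context
lStand.setStandid("A2");                              // will violate the deferred constraint
EntityManagerFinder.find().clear();                   // detach everything first
EntityManagerFinder.find().getTransaction().begin();
EntityManagerFinder.find().merge(lStand);             // merging a detached copy: its version
EntityManagerFinder.find().getTransaction().commit(); // field is untouched if this fails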
Best Regards,
Chris 
findByPK is some out-of-the-book createQuery/getResultList code; nothing special there. The same goes for the entity manager, all standard JPA stuff:
Map<String, Object> cfg = BM.getEntityManagerFactorySettings(url/usr/pwd/...);
EntityManagerFactory lEntityManagerFactory = Persistence.createEntityManagerFactory("reinders", cfg);
The EntityManagerFinder contains some logic for finding the correct EM in the multi-screened application. However, the version used here is a singleton, so there is only one EM.
Yes, the initial entity is a managed one. But the test code is a behavioural copy of the application, so while the merge in the first attempt might be a no-op, in the second attempt it is not. The application of course runs through the same code twice, so I could have coded the test with a for loop.
You mention I wrote that marker 2 is not reached... I just tested that, and it is not correct: attempt 1 fails on the commit.
I'm initially not very pleased with section 3.3.2. Suppose I have a very big business model with all kinds of changes applied to it. Then I instruct a top-level entity to persist and expect all cascaded entities to persist as well. If that fails, I would expect the BM to return to the pre-transaction state, with all changes intact.
Clearing the EM before the begin also seems to solve the problem. Is this a solid solution, considering my previous concern?
Your concern seems to be that you are trying to reuse the persistence context's state after a commit. This is valid if the commit is successful, but not if the transaction rolls back. It would be unwise to trust the state of the context on rollback; a valid optimistic lock exception might cause multiple retries with the same invalid lock value. I believe the common assumption is that since some of the logic used in calculating the changes was based on stale data, the entire transaction should be reprocessed. Larger processes would more frequently pass around detached entities, so that a transaction is only required to merge/persist the changes when done.
Clearing the em is just one option, and it seems to fit how you are using the find method and performing changes. There are other options, though, depending on how you are making the changes: for instance, begin the transaction, find the entity, make changes on it, commit, and repeat as necessary. It all depends on how you are passing around data and determining what needs to change.
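The latter option is the classic short-transaction shape, roughly (again reusing the thread's helper names):

EntityManager em = EntityManagerFinder.find();
em.getTransaction().begin();
Stand lStand = em.find(Stand.class, 1);   // managed within this transaction
lStand.setDescription(lStand.getDescription() + "x");
em.getTransaction().commit();             // changes are flushed here; no merge needed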
Best Regards,
Chris 
Ehhhh. You got me cross-visioned on the first paragraph.
But yes, I do not want any locking going on while the user is just editing in the fat client (so no begin, find, change, commit). Once the user is done finding and altering entities, I want to burst-write to the database, using lazy locking.
Thanks for clearing that up, Chris! 
Right, in the unit test things work now, but when I do this in the application, things freeze up in the commit.
The application adds 120 standversion entities to a new stand entity; the merge reloads the stand and some other data, and then the commit completely blocks (it has been executing that method for 10 minutes now, without database interaction). When only small changes have been made, the code works.
I'll write a unit test emulating this behaviour tomorrow, but it still isn't working right.
It appears to be hanging in the identity hash table, inside cascadeRegisterNewForCreate. More to follow.
It is reproducible in a test case.
The relevant BM: Stand [1:N] Standversion [1:N] Standassign [N:1] Article.
A new Standversion is mostly created by copying another Standversion (usually the previous one) and making changes. This is the code doing exactly that:
// find the original
Standversion lImportFromStandversion = Standversion.findByStandidVersion("A1", 56);
// determine the highest sequence number
int lHighestVersion = lImportFromStandversion.getStand().getHighestVersionOfAllMyStandversions();
if (lHighestVersion == 0) lHighestVersion = 1;
// create new standversion
Standversion lNewStandversion = lImportFromStandversion.cloneShallow();
// initialize the new standversion
GregorianCalendar lNow = new GregorianCalendar();
lNewStandversion.setReplaceDate( new GregorianCalendar( lNow.get(GregorianCalendar.YEAR) + 1, lNow.get(GregorianCalendar.MONTH), lNow.get(GregorianCalendar.DATE) ) );
lNewStandversion.setVersion( lHighestVersion + 1);
lNewStandversion.setPricePerUnit( lImportFromStandversion.getPricePerUnit());
lNewStandversion.setDerrivedfromstandversion(lImportFromStandversion);
     
// copy the assigns from the original to the new
int lCnt = 0;
for (Standassign lImportFromStandassign : new ArrayList<Standassign>( lImportFromStandversion.getStandassignsWhereIAmStandversion() ) ) // The "new ArrayList" forces the whole collection to be read, not lazy loaded. Otherwise this will result in a concurrent modification exception.
{
     // progress
     lCnt++;
     System.out.println( lCnt + "/" + lImportFromStandversion.getStandassignsWhereIAmStandversion().size());
     
     // clone
     Standassign lNewStandassign = lImportFromStandassign.cloneShallow();
     
     // initialize
     lNewStandassign.setChangeIndicator(false);
     lNewStandassign.setStandversion(lNewStandversion);
}
// save
lEntityManager.clear();
lEntityManager.getTransaction().begin();
lEntityManager.merge(lNewStandversion);
lEntityManager.getTransaction().commit();
There are 60 assigns.
Stuff is reloaded in the assign, and then TopLink gets very, very busy in the merge, as in: it doesn't return from the call for 5 minutes, and then:
Exception in thread "main" java.util.MissingResourceException: Can't find bundle for base name oracle.toplink.essentials.exceptions.i18n.DescriptorExceptionResource, locale en_US
     at java.util.ResourceBundle.throwMissingResourceException(ResourceBundle.java:1508)
     at java.util.ResourceBundle.getBundleImpl(ResourceBundle.java:1262)
     at java.util.ResourceBundle.getBundle(ResourceBundle.java:789)
     at oracle.toplink.essentials.exceptions.i18n.ExceptionMessageGenerator.buildMessage(ExceptionMessageGenerator.java:56)
     at oracle.toplink.essentials.exceptions.DescriptorException.targetInvocationWhileConstructorInstantiation(DescriptorException.java:1559)
     at oracle.toplink.essentials.internal.descriptors.InstantiationPolicy.buildNewInstanceUsingDefaultConstructor(InstantiationPolicy.java:139)
     at oracle.toplink.essentials.internal.descriptors.InstantiationPolicy.buildNewInstance(InstantiationPolicy.java:111)
     at oracle.toplink.essentials.internal.descriptors.ObjectBuilder.buildNewInstance(ObjectBuilder.java:324)
     at oracle.toplink.essentials.descriptors.copying.InstantiationCopyPolicy.buildClone(InstantiationCopyPolicy.java:37)
     at oracle.toplink.essentials.descriptors.copying.AbstractCopyPolicy.buildWorkingCopyClone(AbstractCopyPolicy.java:49)
     at oracle.toplink.essentials.internal.descriptors.ObjectBuilder.instantiateWorkingCopyClone(ObjectBuilder.java:2029)
     at oracle.toplink.essentials.internal.sessions.UnitOfWorkImpl.cloneAndRegisterObject(UnitOfWorkImpl.java:641)
     at oracle.toplink.essentials.internal.sessions.UnitOfWorkImpl.registerExistingObject(UnitOfWorkImpl.java:3080)
     at oracle.toplink.essentials.internal.sessions.UnitOfWorkImpl.registerExistingObject(UnitOfWorkImpl.java:3024)
     at oracle.toplink.essentials.internal.sessions.UnitOfWorkImpl.registerExistingObject(UnitOfWorkImpl.java:3044)
     at oracle.toplink.essentials.mappings.CollectionMapping.buildElementClone(CollectionMapping.java:201)
     at oracle.toplink.essentials.mappings.CollectionMapping.buildCloneForPartObject(CollectionMapping.java:162)
     at oracle.toplink.essentials.internal.indirection.UnitOfWorkQueryValueHolder.buildCloneFor(UnitOfWorkQueryValueHolder.java:60)
     at oracle.toplink.essentials.internal.indirection.UnitOfWorkValueHolder.instantiateImpl(UnitOfWorkValueHolder.java:161)
     at oracle.toplink.essentials.internal.indirection.UnitOfWorkValueHolder.instantiate(UnitOfWorkValueHolder.java:232)
     at oracle.toplink.essentials.internal.indirection.DatabaseValueHolder.getValue(DatabaseValueHolder.java:90)
     at oracle.toplink.essentials.indirection.IndirectList.buildDelegate(IndirectList.java:193)
     at oracle.toplink.essentials.indirection.IndirectList.getDelegate(IndirectList.java:315)
     at oracle.toplink.essentials.indirection.IndirectList.size(IndirectList.java:640)
     at oracle.toplink.essentials.internal.queryframework.CollectionContainerPolicy.sizeFor(CollectionContainerPolicy.java:184)
     at oracle.toplink.essentials.internal.indirection.TransparentIndirectionPolicy.getRealAttributeValueFromObject(TransparentIndirectionPolicy.java:252)
     at oracle.toplink.essentials.mappings.ForeignReferenceMapping.getRealAttributeValueFromObject(ForeignReferenceMapping.java:370)
     at oracle.toplink.essentials.mappings.CollectionMapping.getRealAttributeValueFromObject(CollectionMapping.java:559)
     at oracle.toplink.essentials.mappings.CollectionMapping.getRealCollectionAttributeValueFromObject(CollectionMapping.java:574)
     at oracle.toplink.essentials.mappings.CollectionMapping.mergeIntoObject(CollectionMapping.java:740)
     at oracle.toplink.essentials.internal.descriptors.ObjectBuilder.mergeIntoObject(ObjectBuilder.java:2114)
     at oracle.toplink.essentials.internal.sessions.MergeManager.mergeChangesOfCloneIntoWorkingCopy(MergeManager.java:427)
     at oracle.toplink.essentials.internal.sessions.MergeManager.mergeChanges(MergeManager.java:249)
     at oracle.toplink.essentials.mappings.ObjectReferenceMapping.mergeIntoObject(ObjectReferenceMapping.java:360)
     at oracle.toplink.essentials.internal.descriptors.ObjectBuilder.mergeIntoObject(ObjectBuilder.java:2114)
     at oracle.toplink.essentials.internal.sessions.MergeManager.mergeChangesOfCloneIntoWorkingCopy(MergeManager.java:427)
     at oracle.toplink.essentials.internal.sessions.MergeManager.mergeChanges(MergeManager.java:249)
     at oracle.toplink.essentials.mappings.CollectionMapping.mergeIntoObject(CollectionMapping.java:768)
     at oracle.toplink.essentials.internal.descriptors.ObjectBuilder.mergeIntoObject(ObjectBuilder.java:2114)
     at oracle.toplink.essentials.internal.sessions.MergeManager.mergeChangesOfCloneIntoWorkingCopy(MergeManager.java:427)
     at oracle.toplink.essentials.internal.sessions.MergeManager.mergeChanges(MergeManager.java:249)
     at oracle.toplink.essentials.mappings.CollectionMapping.mergeIntoObject(CollectionMapping.java:768)
     at oracle.toplink.essentials.internal.descriptors.ObjectBuilder.mergeIntoObject(ObjectBuilder.java:2114)
     at oracle.toplink.essentials.internal.sessions.MergeManager.mergeChangesOfCloneIntoWorkingCopy(MergeManager.java:427)
     at oracle.toplink.essentials.internal.sessions.MergeManager.mergeChanges(MergeManager.java:249)
     at oracle.toplink.essentials.mappings.ObjectReferenceMapping.mergeIntoObject(ObjectReferenceMapping.java:360)
     at oracle.toplink.essentials.internal.descriptors.ObjectBuilder.mergeIntoObject(ObjectBuilder.java:2114)
     at oracle.toplink.essentials.internal.sessions.MergeManager.mergeChangesOfCloneIntoWorkingCopy(MergeManager.java:427)
     at oracle.toplink.essentials.internal.sessions.MergeManager.mergeChanges(MergeManager.java:249)
     at oracle.toplink.essentials.internal.sessions.UnitOfWorkImpl.mergeCloneWithReferences(UnitOfWorkImpl.java:2709)
     at oracle.toplink.essentials.internal.ejb.cmp3.base.RepeatableWriteUnitOfWork.mergeCloneWithReferences(RepeatableWriteUnitOfWork.java:203)
     at oracle.toplink.essentials.internal.ejb.cmp3.base.EntityManagerImpl.mergeInternal(EntityManagerImpl.java:220)
     at oracle.toplink.essentials.internal.ejb.cmp3.EntityManagerImpl.merge(EntityManagerImpl.java:113)
     at nl.reinders.bm.BMTestToplink.main(BMTestToplink.java:263)
Caused by: java.lang.OutOfMemoryError: Java heap space
     at java.util.Arrays.copyOfRange(Arrays.java:3209)
     at java.lang.String.<init>(String.java:216)
     at java.lang.StringBuilder.toString(StringBuilder.java:430)
     at java.net.URLStreamHandler.parseURL(URLStreamHandler.java:232)
     at sun.net.www.protocol.file.Handler.parseURL(Handler.java:50)
     at java.net.URL.<init>(URL.java:596)
     at java.net.URL.<init>(URL.java:464)
     at sun.misc.URLClassPath$FileLoader.getResource(URLClassPath.java:972)
     at sun.misc.URLClassPath.getResource(URLClassPath.java:168)
     at java.net.URLClassLoader$1.run(URLClassLoader.java:192)
     at java.security.AccessController.doPrivileged(Native Method)
     at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
     at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
     at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:276)
     at java.lang.ClassLoader.loadClass(ClassLoader.java:251)
     at java.util.ResourceBundle$Control.newBundle(ResourceBundle.java:2370)
     at java.util.ResourceBundle.loadBundle(ResourceBundle.java:1400)
     at java.util.ResourceBundle.findBundle(ResourceBundle.java:1365)
     at java.util.ResourceBundle.findBundle(ResourceBundle.java:1294)
     at java.util.ResourceBundle.findBundle(ResourceBundle.java:1294)
     at java.util.ResourceBundle.getBundleImpl(ResourceBundle.java:1236)
     at java.util.ResourceBundle.getBundle(ResourceBundle.java:789)
     at oracle.toplink.essentials.exceptions.i18n.ExceptionMessageGenerator.buildMessage(ExceptionMessageGenerator.java:56)
     at oracle.toplink.essentials.exceptions.DescriptorException.targetInvocationWhileConstructorInstantiation(DescriptorException.java:1559)
     at oracle.toplink.essentials.internal.descriptors.InstantiationPolicy.buildNewInstanceUsingDefaultConstructor(InstantiationPolicy.java:139)
     at oracle.toplink.essentials.internal.descriptors.InstantiationPolicy.buildNewInstance(InstantiationPolicy.java:111)
     at oracle.toplink.essentials.internal.descriptors.ObjectBuilder.buildNewInstance(ObjectBuilder.java:324)
     at oracle.toplink.essentials.descriptors.copying.InstantiationCopyPolicy.buildClone(InstantiationCopyPolicy.java:37)
     at oracle.toplink.essentials.descriptors.copying.AbstractCopyPolicy.buildWorkingCopyClone(AbstractCopyPolicy.java:49)
     at oracle.toplink.essentials.internal.descriptors.ObjectBuilder.instantiateWorkingCopyClone(ObjectBuilder.java:2029)
     at oracle.toplink.essentials.internal.sessions.UnitOfWorkImpl.cloneAndRegisterObject(UnitOfWorkImpl.java:641)
     at oracle.toplink.essentials.internal.sessions.UnitOfWorkImpl.registerExistingObject(UnitOfWorkImpl.java:3080) 
Right, cloneShallow:
          nl.reinders.bm.Standversion lNew = new nl.reinders.bm.Standversion();
          lNew.setStand( getStand() );
          lNew.setDerrivedfromstandversion( getDerrivedfromstandversion() );
          lNew.setVersion( getVersion() );
          lNew.setReplaceDate( getReplaceDate() );
          lNew.setPricePerUnit( getPricePerUnit() );
          lNew.setInifill( getInifill() );
          lNew.setComment( getComment() );
          return lNew;
And:
          nl.reinders.bm.Standassign lNew = new nl.reinders.bm.Standassign();
          lNew.setArticle( getArticle() );
          lNew.setBatchtype( getBatchtype() );
          lNew.setStandversion( getStandversion() );
          lNew.setCode( getCode() );
          lNew.setChangeIndicator( getChangeIndicator() );
          lNew.setEnding( getEnding() );
          lNew.setEnding2( getEnding2() );
          lNew.setAssignean( getAssignean() );
          return lNew; 
This appears to be some kind of recursive thing. The code works fine for a limited number of Standassigns, but somewhere above roughly 10 Standassigns the execution time exceeds 30 seconds.
What is TopLink doing, and how can I make it happier?
Hello,
The message states that it is running out of heap space, which is not uncommon when working with a large number of objects and resources. You can allocate additional memory to the JVM using the -Xmx and -Xms settings (for example, -Xmx512m on the java command line) and/or modify your application to use fewer resources.
In this case, the problem could in part be due to your object tree being very large and interconnected. You mention that you get this problem when 120 new standversions are added to a new stand entity, but not how many Standassigns, Articles, etc. are being added. Merge forces TopLink to go through the entire loaded (vs. not yet loaded) object tree for the object passed in, creating managed instances for all of them. The larger the tree, the longer this will take and the more resources it will consume.
In addition to increasing your memory to something appropriate for your application, I would advise using more indirection: set your 1:N and 1:1 attributes to lazy load, and ensure you only trigger/access lazy attributes when needed. For example, getHighestVersionOfAllMyStandversions() could be done as a query instead of triggering the Standversion->Stand->Standversion relationships in memory. Since your new Standversion references the old one, when you merge the new Standversion the merge/flush/commit processing has to run over all reachable loaded entities: the Stand's entire standversion collection will need to be processed, as well as all their collections of Standassigns and Articles that might have been accessed.
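For example, the in-memory traversal could be replaced with a query along these lines (a sketch; the entity and field names are taken from the thread's code):

// Highest version among a Stand's Standversions, without loading the collection.
Number highest = (Number) em.createQuery(
        "select max(sv.version) from Standversion sv where sv.stand = :stand")
    .setParameter("stand", lStand)
    .getSingleResult();
int lHighestVersion = (highest == null) ? 0 : highest.intValue();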
Best Regards,
Chris 
Understood. Memory is not the real issue indeed; the execution time is.
The test code in effect adds 1 Standversion with 60 Standassigns (the application added 120), so 61 new objects. Each assign refers to one Article and one Batchtype, in this case the same ones as the originals. Adding those 61 objects to the database should not take 5+ minutes; I need to find a way to speed that up.
All the relations are set to lazy load. Reading a Stand results in only one query, and the versions, assigns, articles and batchtypes are not loaded.
I'll give your advice on using a query instead of scanning a go, to see how much improvement it gives. However, I am very worried now: the idea is to have a user "work" the business model and then persist it. There are GUI components like comboboxes that need to load all Stands (because one Stand may refer to another) and a number of complete Standversion lists (because a Standversion is imported from another, and the user can select from which). So even though optimizing that method may prove valuable in the test case, in daily GUI use I'm pretty sure even more objects will have been loaded.
If this behaviour is a given, and 5+ minutes is unacceptable, I must immediately stop the domain-driven approach and revert to plain old JDBC!
Increasing the memory to 512 MB prevents the exception, but the code is still running after 10 minutes at 65% CPU.
My experience tells me that this cannot be right: 10 minutes at 65% is A LOT of CPU power (relatively speaking, of course) on a 3 GHz CPU.
The unit test loads one Standversion with 60 Standassigns, each referring to one Article and one Batchtype: a total of 181 entities. The new Standversion adds 61 new entities, making the total 242. (I replaced the "findHighest" call with a constant.) What in the world can TopLink be doing with 242 entities in memory for (now) 13 minutes?
To further support my feeling: the merge reloads the entities within 1 to 2 seconds, then immediately obtains a new PK for the Standversion, and then nothing happens anymore. Since the 181 entities were just fetched from the DB, TopLink must be very busy with the remaining 61.
15 minutes... I've terminated the application.
This really cannot be right.

parameterNameMismatch error

Exception Description: The parameter name ID in the query's selection criteria does not match any parameter name defined in the query.
Query: ReadObjectQuery
oracle.toplink.exceptions.QueryException.parameterNameMismatch(QueryException.java:890)
We have been getting this intermittent error for over a year. We were on TopLink 10.1.3.1, then migrated to 10.1.3.3, and are now on 10.1.3.4. The error occurs every couple of months on our production server, and when it occurs we have to restart our application.
There are two objects (mappings) that have thrown this error. We've created threads in the past:
Intermittent error assigning a OneToOneMapping
The parameter name [ID] in the query's selection criteria does not match an
Both of these mappings are OneToOneMapping.
Both of these mappings are "Target Foreign Key" aka addTargetForeignKeyFieldName();
One is privately owned the other isn't.
Both use indirection.
I've read several other threads relating to parameterNameMismatch error:
ReadObjectQuery, Query Exception, parameterNameMismatch
Getting Parameter_name_mismatch error
After doing some searching, it seems that most people who've had this problem removed Target Foreign Key to resolve it; they programmed around the problem. If I remove the Target Foreign Key option, then I have to remove my back reference, which I would like to keep. All of my 1000+ functional test cases pass just fine, and the application server works like a champ. It's just that this bug seems to pop up every couple of months, and it requires a restart, which makes us look bad.
What could cause this problem? I'm happy and willing to get my hands dirty; I just need a starting point.
Here is the full stack:
at oracle.toplink.exceptions.QueryException.parameterNameMismatch(QueryException.java:890)
     at oracle.toplink.internal.expressions.ParameterExpression.getValue(ParameterExpression.java:175)
     at oracle.toplink.internal.databaseaccess.DatasourceCall.translateQueryString(DatasourceCall.java:540)
     at oracle.toplink.internal.databaseaccess.DatabaseCall.translate(DatabaseCall.java:865)
     at oracle.toplink.internal.queryframework.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:116)
     at oracle.toplink.internal.queryframework.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:103)
     at oracle.toplink.internal.queryframework.DatasourceCallQueryMechanism.selectOneRow(DatasourceCallQueryMechanism.java:501)
     at oracle.toplink.internal.queryframework.ExpressionQueryMechanism.selectOneRowFromTable(ExpressionQueryMechanism.java:872)
     at oracle.toplink.internal.queryframework.ExpressionQueryMechanism.selectOneRow(ExpressionQueryMechanism.java:847)
     at oracle.toplink.queryframework.ReadObjectQuery.executeObjectLevelReadQuery(ReadObjectQuery.java:415)
     at oracle.toplink.queryframework.ObjectLevelReadQuery.executeDatabaseQuery(ObjectLevelReadQuery.java:812)
     at oracle.toplink.queryframework.DatabaseQuery.execute(DatabaseQuery.java:620)
     at oracle.toplink.queryframework.ObjectLevelReadQuery.execute(ObjectLevelReadQuery.java:780)
     at oracle.toplink.queryframework.ReadObjectQuery.execute(ReadObjectQuery.java:388)
     at oracle.toplink.publicinterface.Session.internalExecuteQuery(Session.java:2089)
     at oracle.toplink.publicinterface.Session.executeQuery(Session.java:993)
     at oracle.toplink.internal.indirection.QueryBasedValueHolder.instantiate(QueryBasedValueHolder.java:62)
     at oracle.toplink.internal.indirection.QueryBasedValueHolder.instantiate(QueryBasedValueHolder.java:55)
     at oracle.toplink.internal.indirection.DatabaseValueHolder.getValue(DatabaseValueHolder.java:61)
     at oracle.toplink.internal.indirection.UnitOfWorkValueHolder.instantiateImpl(UnitOfWorkValueHolder.java:148)
     at oracle.toplink.internal.indirection.UnitOfWorkValueHolder.instantiate(UnitOfWorkValueHolder.java:217)
     at oracle.toplink.internal.indirection.UnitOfWorkQueryValueHolder.setValue(UnitOfWorkQueryValueHolder.java:47) 
I can't see how this could occur, but if it can't be reproduced and only occurs once a month, it will be very difficult to debug. You will need to try to isolate the issue so that it can be reproduced.
It seems that the query runs fine, then suddenly fails. This makes little sense, but seems to indicate the query was somehow changed.
Since it is a non-trivial issue, support is probably your best bet.
You might also wish to try the latest EclipseLink release to see if it still occurs.
If you want to try to debug it, after the error occurs try dumping the mapping's selectionQuery and its selectionCriteria: have they changed? (Dump them on startup too, so you have the originals.) Then you need to find out what is changing them.
Ensure you are never modifying the mapping's selectionCriteria or selectionQuery.
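A dump of that state might look roughly like this (a sketch; the class name and attribute name are placeholders to adapt to your own mapping):

ClassDescriptor descriptor = (ClassDescriptor) session.getDescriptor(Project.class);
OneToOneMapping mapping = (OneToOneMapping)
        descriptor.getMappingForAttributeName("lastNote");
System.out.println("selectionCriteria: " + mapping.getSelectionCriteria());
System.out.println("selectionQuery:    " + mapping.getSelectionQuery());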
---
James : http://www.eclipselink.org 
Thanks for your response. I would like to debug the problem. I would gladly open a ticket, but without the ability to duplicate the bug I'm afraid they wouldn't be able to help.
I've written a JSP page that dumps the selection values for the mappings. We don't use custom selection queries, so that value is null. Is there anything I should be dumping to the page besides these two values? I once got a debugger attached during an error and looked into the problem.
DatasourceCall.java:
line 539 if (parameter instanceof ParameterExpression) {
line 540     value = ((ParameterExpression)parameter).getValue(translationRow, session);
line 541 }
translationRow was null. What could cause this to be null? If the selectionCriteria were null, would translationRow be null? I'm trying to better understand the process so that I can hopefully debug this situation when it occurs.
The mapping's selectionQuery/selectionCriteria will not be null after you log in / initialize descriptors.
The translationRow should never be null; that would be odd.
On Startup I see:
SelectionCriteria: Field T_NOTES.ID Base QUERY OBJECT Parameter T_PROJECTS.LASTNOTEID
Selection Query: null
I wrote a JSP that forcefully removes the selectionCriteria, to simulate breaking it, and a selection query gets created when I do. By the way, I did not get the parameterNameMismatch error when I overrode the selection criteria to null after the descriptors were initialized.
SelectionCriteria: null
Selection Query: SELECT ID, OBJID, REPLIES, TOPNOTEOBJCODE, NOTETEXT, TOPOBJID, ISMESSAGE, EXTREFID, ISTOPARCHIVED, ENTRYDATE, ATTACHOBJCODE, OWNERID, NOTEOBJCODE, PARENTNOTEID, ISPRIVATE, AUDITTYPE, THREADID, SUBJECT, THREADDATE, ATTACHOBJID, CUSTOMERID, ISARCHIVED, INDENT FROM T_NOTES WHERE (ID = ?)
What is the relationship between selectionQuery and selectionCriteria? Are you expecting both the selectionCriteria and the selectionQuery to be null? Could this exception occur if they both were null?
UPDATE
I put this page on an internal server that is used by lots of individuals. After a few hours I came back, everything was still working properly, and I saw this:
selection criteria: Relation operator = Field T_NOTES.ID Base QUERY OBJECT Parameter T_PROJECTS.LASTNOTEID
selection query: SELECT ID, OBJID, REPLIES, TOPNOTEOBJCODE, NOTETEXT, TOPOBJID, ISMESSAGE, EXTREFID, ISTOPARCHIVED, ENTRYDATE, ATTACHOBJCODE, OWNERID, NOTEOBJCODE, PARENTNOTEID, ISPRIVATE, AUDITTYPE, THREADID, SUBJECT, THREADDATE, ATTACHOBJID, CUSTOMERID, ISARCHIVED, INDENT FROM T_NOTES WHERE (ID = ?)
Something filled out the selection query. Does TopLink fill out the selection query based on the selection criteria after it is first used? What causes the selectionQuery to be filled out?
Edited by: Eric Neilsen on Apr 23, 2009 5:26 PM 
I got the error to occur in a debugger. The selection criteria is set, and it's the same; the selection query is null.
In DatabaseQueryMechanism, the query from getQuery() is a ReadObjectQuery.
Here are its attributes:
translationRow = null
queryMechanism=ExpressionQueryMechanism
inMemoryQueryIndirectionPolicy->policy=0
descriptor=RelationalDescriptor DatabaseTable(T_SCORECARDS)
What else could be useful in tracking down this problem? I'm going to look into what sets the translationRow and why it's null. I know the problem stems from this being null; I just don't know what could cause it to be null.
Ok, I'm starting to figure this out, so I figured I'd post my findings.
I have a ReadObjectQuery, and in prepareCustomQuery:
line 545 primaryKeyRow = customQuery.getDescriptor().getObjectBuilder().extractPrimaryKeyRowFromExpression(getSelectionCriteria(), customQuery.getTranslationRow(), customReadQuery.getSession());
line 546 customReadQuery.setTranslationRow(primaryKeyRow);
primaryKeyRow is null. This is how translationRow ends up null!
Looking into why it's null, inside RelationalExpression.extractPrimaryKeyValues:
line 210 } else if (getSecondChild().isParameterExpression()) {
line 211 value = translationRow.get(((ParameterExpression)getSecondChild()).getField());
value is null.
Reason: translationRow, which is a DatabaseRow, contains two vectors. this.fields contains the key I'm looking for, "T_PROJECTS.DELIVERABLESCORECARDID", at index 99, but this.values.elementAt(99) is null.
DatabaseRow.java
line 282 return this.values.elementAt(index);
This is returning null! Why would the fields vector be populated but not the values vector? They are out of sync. What could cause this to happen?
UPDATE
I went up the thread stack to see if the DatabaseRecord was correct further up the stack. It wasn't. getWrappedValueHolder (UnitOfWorkValueHolder, line 129) returns a QueryBasedValueHolder whose DatabaseRecord is missing the value for field T_PROJECTS.DELIVERABLESCORECARDID. I don't know why this got corrupted, but the only way to fix it is to bounce the application.
http-0.0.0.0-8080-6#12 daemon, priority=5, in group 'jboss', status: 'RUNNING'
     at oracle.toplink.internal.indirection.UnitOfWorkValueHolder.instantiateImpl(UnitOfWorkValueHolder.java:129)
     at oracle.toplink.internal.indirection.UnitOfWorkValueHolder.instantiate(UnitOfWorkValueHolder.java:217)
     at oracle.toplink.internal.indirection.DatabaseValueHolder.getValue(DatabaseValueHolder.java:61)
     at oracle.toplink.internal.indirection.BasicIndirectionPolicy.getRealAttributeValueFromObject(BasicIndirectionPolicy.java:199)
     at oracle.toplink.mappings.ForeignReferenceMapping.getRealAttributeValueFromObject(ForeignReferenceMapping.java:508)
     at oracle.toplink.mappings.ObjectReferenceMapping.getRealAttributeValueFromObject(ObjectReferenceMapping.java:261)
     at oracle.toplink.mappings.ObjectReferenceMapping.preDelete(ObjectReferenceMapping.java:622)
     at oracle.toplink.publicinterface.DescriptorQueryManager.preDelete(DescriptorQueryManager.java:844)
     at oracle.toplink.queryframework.DeleteObjectQuery.executeDatabaseQuery(DeleteObjectQuery.java:129)
     at oracle.toplink.queryframework.DatabaseQuery.execute(DatabaseQuery.java:620)
     at oracle.toplink.queryframework.DatabaseQuery.executeInUnitOfWork(DatabaseQuery.java:542)
     at oracle.toplink.queryframework.ObjectLevelModifyQuery.executeInUnitOfWorkObjectLevelModifyQuery(ObjectLevelModifyQuery.java:100)
     at oracle.toplink.queryframework.DeleteObjectQuery.executeInUnitOfWorkObjectLevelModifyQuery(DeleteObjectQuery.java:97)
     at oracle.toplink.queryframework.ObjectLevelModifyQuery.executeInUnitOfWork(ObjectLevelModifyQuery.java:72)
     at oracle.toplink.publicinterface.UnitOfWork.internalExecuteQuery(UnitOfWork.java:2,631)
     at oracle.toplink.publicinterface.Session.executeQuery(Session.java:993)
     at oracle.toplink.publicinterface.Session.executeQuery(Session.java:950)
     at oracle.toplink.internal.sessions.CommitManager.deleteAllObjects(CommitManager.java:314)
     at oracle.toplink.publicinterface.UnitOfWork.commitToDatabase(UnitOfWork.java:1,323)
     at oracle.toplink.publicinterface.UnitOfWork.commitToDatabaseWithChangeSet(UnitOfWork.java:1,416)
     at oracle.toplink.publicinterface.UnitOfWork.commitRootUnitOfWork(UnitOfWork.java:1,164)
     at oracle.toplink.publicinterface.UnitOfWork.commit(UnitOfWork.java:932)
Edited by: Eric Neilsen on Apr 24, 2009 11:37 AM
UPDATE 2:
FYI, this particular instance doesn't have a DELIVERABLESCORECARDID; this is why it is null. It's a privately owned object, and this is the field used as the primary key for that object. So the question is: why is it creating a customQuery and trying to load an object that doesn't exist? I'm still looking into it.
Edited by: Eric Neilsen on Apr 24, 2009 2:02 PM 
The mapping's selectionQuery gets initialized on login, so that seems fine.
The customQuery is always cloned before being prepared, so even if the translationRow were set to null, this should not cause any issue in the mapping's query or in future queries.
It seems there may be a case, when the foreign key value is null, where the customQuery gets a null translationRow, which could cause an exception; that does look like a possible issue. It still does not explain how the mapping's query could be corrupted, though. If you can recreate the issue, please contact Oracle support with it.
A workaround to the issue may be to set the readObjectQuery in the DescriptorQueryManager to null for the descriptor. 
I've put a breakpoint at this instance
at oracle.toplink.mappings.ObjectReferenceMapping.getRealAttributeValueFromObject(ObjectReferenceMapping.java:261)
     at oracle.toplink.mappings.ObjectReferenceMapping.preDelete(ObjectReferenceMapping.java:622)
     at oracle.toplink.publicinterface.DescriptorQueryManager.preDelete(DescriptorQueryManager.java:844)
     at oracle.toplink.queryframework.DeleteObjectQuery.executeDatabaseQuery(DeleteObjectQuery.java:129)
     at oracle.toplink.queryframework.DatabaseQuery.execute(DatabaseQuery.java:620)
     at oracle.toplink.queryframework.DatabaseQuery.executeInUnitOfWork(DatabaseQuery.java:542)
     at oracle.toplink.queryframework.ObjectLevelModifyQuery.executeInUnitOfWorkObjectLevelModifyQuery(ObjectLevelModifyQuery.java:100)
     at oracle.toplink.queryframework.DeleteObjectQuery.executeInUnitOfWorkObjectLevelModifyQuery(DeleteObjectQuery.java:97)
     at oracle.toplink.queryframework.ObjectLevelModifyQuery.executeInUnitOfWork(ObjectLevelModifyQuery.java:72)
     at oracle.toplink.publicinterface.UnitOfWork.internalExecuteQuery(UnitOfWork.java:2,631)
     at oracle.toplink.publicinterface.Session.executeQuery(Session.java:993)
     at oracle.toplink.publicinterface.Session.executeQuery(Session.java:950)
     at oracle.toplink.internal.sessions.CommitManager.deleteAllObjects(CommitManager.java:314)
     at oracle.toplink.publicinterface.UnitOfWork.commitToDatabase(UnitOfWork.java:1,323)
     at oracle.toplink.publicinterface.UnitOfWork.commitToDatabaseWithChangeSet(UnitOfWork.java:1,416)
     at oracle.toplink.publicinterface.UnitOfWork.commitRootUnitOfWork(UnitOfWork.java:1,164)
     at oracle.toplink.publicinterface.UnitOfWork.commit(UnitOfWork.java:932)
Source: ObjectReferenceMapping.java
public Object getRealAttributeValueFromObject(Object object, Session session) {
    Object value = super.getRealAttributeValueFromObject(object, session);
    value = getReferenceDescriptor().getObjectBuilder().unwrapObject(value, session);
    return value;
}
This is what I've found using a debugger:
object: my RKProject instance
deliverableScoreCardID = -2147483648  // this is Integer.MIN_VALUE; we use it to signify a null entry
_deliverableScoreCard = UnitOfWorkQueryValueHolder : not instantiated
     isInstantiated = false
     value = null
     row = DatabaseRecord, size = 108
     query = ReadObjectQuery
          isCustomQueryUsed = true   <-- unexpected
          translationRow = null
This shouldn't be set to true. I don't know how or why it got set, but it shouldn't be true; it should only be set to true if there is a scoreCard attached to this object. If you look at deliverableScoreCardID, it is set to INTNULL, which means there isn't an object in the database. For some reason the UnitOfWorkQueryValueHolder, which isn't instantiated, has the isCustomQueryUsed flag set to true. This needs to be false. Is TopLink caching these queryValueHolders and then reusing them? This looks like some sort of cache corruption issue.
isCustomQueryUsed should be true; that is fine. It means that this mapping's query is by primary key and can obtain a cache hit. That should not be an issue.
Is your mapping still using a targetForeignKey? That is not correct: since you have a foreign key, it must be mapped as a foreign key. If you also want the direct-to-field mapping to map it, make it read-only.
jsutherl wrote:
isCustomQueryUsed should be true; that is fine. It means that this mapping's query is by primary key and can obtain a cache hit. That should not be an issue. Is your mapping still using a targetForeignKey? That is not correct: since you have a foreign key, it must be mapped as a foreign key. If you also want the direct-to-field mapping to map it, make it read-only.

It should only be set to true if there is an object, which is true for some of the projects but not all of them. The projects that don't have a scorecard shouldn't have the flag set. The problem occurs if I delete a project that has a scorecard before deleting a project that doesn't have one: TopLink saves this ReadQuery in the ForeignReferenceMapping field selectionQuery and then uses this particular selectionQuery as the target query for all projects. See line 1108, "ReadQuery targetQuery = this.selectionQuery;". On line 1123 there is the check:
if (targetQuery.isObjectLevelReadQuery() && (sourceQuery.shouldCascadeAllParts() || (this.isPrivateOwned && sourceQuery.shouldCascadePrivateParts()) || (this.cascadeRefresh && sourceQuery.shouldCascadeByMapping())))
If that returns true, the targetQuery is refreshed and everything shakes out. But it isn't returning true.
How should I solve this problem? Should I set shouldCascadePrivateParts, or set cascadeRefresh and shouldCascadeByMapping? Second, I don't even know how to set these; this is all internal, through a delete query that is trying to delete a private object.
I think I'm beginning to understand your issue.
In TopLink the descriptor's query manager has a readObjectQuery, this is a query by primary key that is used by TopLink to allow any query by primary key to use this prepared query and avoid generating the SQL every time. Any time a query is executed by primary key, the original query will be substituted with this one. The isCustomQueryUsed flag caches that this substitution is possible.
In your case, your mapping's query is by primary key, so isCustomQueryUsed should be set to true. However, there seems to be a bug in the execution of the primary key query when the id is null, and this leads to your error.
Please contact support with this issue, or log a bug. Include this post and my name (James Sutherland) in the bug/support request.
One workaround would be to set the readObjectQuery in your descriptor's DescriptorQueryManager to null "after" login.
Another workaround would be to set setShouldBindAllParameters(true) on your mapping's selectionQuery.
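A rough sketch of both, not tested (the descriptor lookup and the attribute name are illustrative, not your exact code):
// After session login:
ClassDescriptor descriptor = session.getDescriptor(RKProject.class);

// Workaround 1: clear the prepared read-object query so the stale
// custom query can no longer be substituted.
descriptor.getQueryManager().setReadObjectQuery(null);

// Workaround 2: force parameter binding on the mapping's selection query.
OneToOneMapping mapping = (OneToOneMapping)
     descriptor.getMappingForAttributeName("deliverableScoreCard"); // illustrative name
mapping.getSelectionQuery().setShouldBindAllParameters(true);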
---
James : http://www.eclipselink.org 
Thanks
I've created the SR 7473990.994
I'm going to look further into those workarounds and will get back to you with my findings. Originally we were binding all parameters, but it was too slow, which makes no sense, because caching of prepared statements should be significantly faster. Maybe it's time we looked into what was making it run so slowly. It sounds like this will solve the parameterNameMismatch error.
Eric 
I figured out a work around, I figured I'll post it for anyone who has this problem.
First, subclass ReadObjectQuery:
public class TargetMappedReadOnlyQuery extends ReadObjectQuery {
     /**
      * This override exists only for performance reasons. With any TARGET_MAPPED
      * mapping there is a TopLink bug where it can cache the wrong "customQuery".
      * By returning false we force TopLink to always check whether a
      * "customQuery" is needed.
      *
      * @return Boolean.FALSE
      */
     @Override
     public Boolean isCustomQueryUsed() {
          return Boolean.FALSE;
     }
}
Then set the selectionQuery, via setSelectionQuery(new TargetMappedReadOnlyQuery()), on your target-mapped OneToOne mapping.
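For example, wired up in a descriptor amendment method it might look roughly like this (a sketch; the amendment hook, the attribute name and the ScoreCard reference class are placeholders):
public static void amendProjectDescriptor(ClassDescriptor descriptor) {
     OneToOneMapping mapping = (OneToOneMapping)
          descriptor.getMappingForAttributeName("deliverableScoreCard"); // placeholder
     ReadObjectQuery query = new TargetMappedReadOnlyQuery();
     query.setReferenceClass(ScoreCard.class); // placeholder target class
     mapping.setSelectionQuery(query);
}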

java.sql.SQLException: Closed Statement

I'm having a problem with my Java code trying to access an Oracle 10g database through the JDBC driver using the Oracle XML Publisher. We generate a number of different reports, but one report consistently fails with the error:
java.sql.SQLException: Closed Statement
It only happens in the production environment, which has significantly more data than the test and development environments. One interesting (!) observation is that the error occurs after 10 minutes, almost to the second, which leads me to think it is a timeout-related problem, but I am aware that this could be a red herring. And finally, the code is being run within a job scheduled using the Quartz Scheduler v1.6.5. Any help would be appreciated.
Many thanks
Lawrence
Here is the stack trace:
-----
java.sql.SQLException: Closed Statement: getMetaData
     at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112)
     at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:146)
     at oracle.jdbc.driver.OracleResultSetImpl.getMetaData(OracleResultSetImpl.java:128)
     at com.mchange.v2.c3p0.impl.NewProxyResultSet.getMetaData(NewProxyResultSet.java:47)
     at oracle.apps.xdo.dataengine.XMLPGEN.processSQLDataSource(Unknown Source)
     at oracle.apps.xdo.dataengine.XMLPGEN.writeData(Unknown Source)
     at oracle.apps.xdo.dataengine.XMLPGEN.writeGroup(Unknown Source)
     at oracle.apps.xdo.dataengine.XMLPGEN.writeGroup(Unknown Source)
     at oracle.apps.xdo.dataengine.XMLPGEN.processSQLDataSource(Unknown Source)
     at oracle.apps.xdo.dataengine.XMLPGEN.writeData(Unknown Source)
     at oracle.apps.xdo.dataengine.XMLPGEN.writeGroupStructure(Unknown Source)
     at oracle.apps.xdo.dataengine.XMLPGEN.processData(Unknown Source)
     at oracle.apps.xdo.dataengine.XMLPGEN.processXML(Unknown Source)
     at oracle.apps.xdo.dataengine.XMLPGEN.writeXML(Unknown Source)
     at oracle.apps.xdo.dataengine.DataProcessor.processDataStructre(Unknown Source)
     at oracle.apps.xdo.dataengine.DataProcessor.processData(Unknown Source)
     at com.tmobile.sett.file.invoice.InvoiceFileFactory.generateXML(InvoiceFileFactory.java:157)
     at com.tmobile.sett.file.invoice.InvoiceFileFactory.generateStatements(InvoiceFileFactory.java:365)
     at com.tmobile.sett.file.invoice.InvoiceFileFactory.generateInvoices(InvoiceFileFactory.java:457)
     at com.tmobile.sett.file.invoice.StatementGeneratorJob.execute(StatementGeneratorJob.java:34)
     at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
     at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:525)
Here is the code:
-----
private void generateXML(String xmlTemplate, String xmlFilename, Hashtable parameters) {
     DataProcessor dataProcessor = new DataProcessor();
     log.debug("generateXML: xmlTemplate {}, xmlFilename {}", xmlTemplate, xmlFilename);
     try {
          // Set Data Template to be executed
          dataProcessor.setDataTemplate(xmlTemplate);
          // Obtain a new Connection from the Pool
          Connection conn = HibernateUtil.getConnection();
          if (conn == null)
               log.debug("conn is null after getCall");
          if (conn.isClosed())
               log.debug("conn is closed after getCall");
          // Set the Connection for the dataProcessor
          dataProcessor.setConnection(conn);
          if (conn.isClosed())
               log.debug("conn is closed after setConnection");
          dataProcessor.setParameters(parameters);
          if (conn.isClosed())
               log.debug("conn is closed after setParameters");
          // Specify the output directory and file for the data file
          dataProcessor.setOutput(xmlFilename);
          if (conn.isClosed())
               log.debug("conn is closed after setOutput");
          // Process the data template
          dataProcessor.processData();
          if (conn.isClosed())
               log.debug("conn is closed after processData");
          // Return the connection to the pool
          conn.close();
     } catch (Exception e) {
          System.err.println("Error generating XML with the following parameters:");
          System.err.println("xmlTemplate=" + xmlTemplate);
          System.err.println("xmlFilename=" + xmlFilename);
          System.err.println("Here's the stack trace");
          e.printStackTrace();
          System.exit(1);
     }
}
That sort of error, as reported here, is often caused by multiple threads using the same statement.
I suppose the note about 10 minutes might be relevant. The only realistic way that would happen is if you did the following:
1. Use statement.
2. Do other work not concerned with statement for 10 minutes.
3. Use statement.
And the simple solution to that is - don't do that. Don't keep a statement active for that long if you are not using it. 
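In code terms, the safe pattern is a sketch like this (names illustrative):
// Create, use and close the statement in one tight scope...
Statement stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery(sql);
while (rs.next()) {
     // ...consuming the results fully here...
}
rs.close();
stmt.close();
// ...then do the long-running unrelated work, and re-create the
// statement later if you need it again, rather than holding it open.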
I have made some progress in understanding the problem...
Basically, your comment about threads made me look at the code from a threads perspective. There are only three objects used: log4j, Oracle XML Publisher and Hibernate. Discounting log4j, the Oracle object is created, opened and closed within the procedure, so I concluded that Hibernate was the most likely cause of the problem (I know, there's an element of educated guesswork in this).
Anyway, I found the Hibernate timeouts (all around 300 seconds) and increased them to 3000 seconds... and lo and behold, the reports are generated successfully, although some of them take over 20 minutes each.
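For reference, the kind of change involved looks roughly like this (a sketch; hibernate.c3p0.timeout is one plausible property given the c3p0 pool in the stack trace, but the exact setting depends on which timeout was raised):
Configuration cfg = new Configuration().configure();
cfg.setProperty("hibernate.c3p0.timeout", "3000"); // seconds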
So, this is a performance problem with the SQL itself:
SELECT service
, reply
, ROUND(SUM(DECODE(op, 3, 0, count))* :uplift,0) x_events
, ROUND(SUM(DECODE(op, 3, count, 0))* :uplift,0) y_events
FROM aggregate
WHERE ref_time >= :period_start
AND ref_time < :period_end
AND feed = 13
AND agreement = :agreement
AND line = 1
GROUP BY service, reply
ORDER BY 1,2
The SQL in question is querying data for a specific month from a monthly partitioned table that contains about 11 million records a month. We currently have 99 partitions.
Executing the SQL in Toad / SQL Developer / SQL*Plus takes around 20 seconds... so why 20 minutes?
Looking in the session browser, I can see that the 20 minute query is looking through all 99 partitions where the 20 second query is only looking at 1 partition for the month in question.
I'm not sure if this is heading off topic for the forum thread, but the question now is...
Why is Oracle using a clearly inefficient execution plan when we are using a query with bound variables, and how do we get it to use a more efficient execution plan? 
Obviously one possibility is that you are not running the same SQL in both places.
Other than that, I think there is a way to use EXPLAIN PLAN via Java, so look for that and examine the result.
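For instance, something along these lines should work against 10g (a sketch; EXPLAIN PLAN accepts the bind placeholders without values, and DBMS_XPLAN is standard):
// sql holds the full statement text shown above, bind placeholders included.
Statement stmt = conn.createStatement();
stmt.execute("EXPLAIN PLAN FOR " + sql);
ResultSet rs = stmt.executeQuery(
     "SELECT plan_table_output FROM TABLE(DBMS_XPLAN.DISPLAY)");
while (rs.next()) {
     System.out.println(rs.getString(1));
}
rs.close();
stmt.close();
Note that the plan EXPLAIN PLAN reports for a statement with binds can differ from the one actually chosen at run time (bind peeking), so comparing against V$SQL_PLAN for the running query is also worthwhile.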

Error running batch with PreparedStatement

Hi.
I'm trying to use a PreparedStatement to insert many rows in a table. This is the code:
public void executeMultiplePreparedStatement(String query,Iterator dati) throws SQLException{
       PreparedStatement Pstatement = connection.prepareStatement(query+";");
      
       while(dati.hasNext()){
         Vector daIns = (Vector) dati.next();
         for(int i=0;i<daIns.size();i++){
           Pstatement.setString(i+1, "'"+(String) daIns.get(i)+"'");
         }
         Pstatement.addBatch();
       }
       Pstatement.executeBatch();
       Pstatement.close();
  }
This is the error I encounter:
java.sql.BatchUpdateException: Batch entry 0 INSERT INTO sys_group_permission (GroupID, PermissionID , Enabled) VALUES ('10','1','Y') was aborted.  Call getNextException to see the cause.
     at org.postgresql.jdbc2.AbstractJdbc2Statement$BatchResultHandler.handleError(AbstractJdbc2Statement.java:2392)
     at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1257)
     at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:334)
     at org.postgresql.jdbc2.AbstractJdbc2Statement.executeBatch(AbstractJdbc2Statement.java:2451)
The query I use is correct; I tried it directly on the db...
What am I doing wrong? Thanks for any help 
Have you disabled the auto-commit option on the connection?
connection.setAutoCommit(false);
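For example, a sketch around the batch code above (commit at the end, roll back on failure):
connection.setAutoCommit(false);
try {
     // build the batch as before, then:
     Pstatement.executeBatch();
     connection.commit();
} catch (SQLException e) {
     connection.rollback();
     throw e;
}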
I hope this will help
baiju 
Good!
It helped me. Thank you 
First, take the hint in the error message: use getNextException to get the underlying error.
Second, GroupID and PermissionID look like entities that are commonly implemented as numeric fields in the database. If that's the case, the best that can happen when you use setString() to pass a numeric value in as a String is that the database will do an implicit conversion for you; however, it might not (I don't know whether postgres does or doesn't), and any conversion errors will likely get treated as SQLExceptions. If they are in fact numeric fields, then you're probably better off converting the values earlier and using setInt() (or the appropriate set method for the numeric type).
Third, if you're using batching, then you really, really, really, REALLY don't want to be calling prepareStatement() over and over for every row. Batching is designed to reduce two things: the query parse (think of it as SQL compilation) and the back-and-forth over the network. prepareStatement() IS the query parse, and it goes to the database each and every time it's called (if the driver is implemented that way, which is very common; a parse error must result in a SQLException, and the only way to know whether there's going to be an error is to shuttle over to the DB and parse). You may be throwing away as much as 95% of the benefits of batching. If the SQL may vary between calls, you should cache your PreparedStatements somehow; if you're really just trying to stream rows into a single table, then you should move the PreparedStatement up and out of this method, outside the loop that's producing the rows.
Fourth, if you can't figure out what's going on any other way, try executing your PreparedStatement with bound values outside a batching context; if that works but batching doesn't, it's possible that your driver/database doesn't support batching or has a bug. 
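Putting the first three points together, a sketch (assuming GroupID and PermissionID really are numeric):
// One prepare, typed setters, and getNextException in the handler.
PreparedStatement ps = connection.prepareStatement(
     "INSERT INTO sys_group_permission (GroupID, PermissionID, Enabled) VALUES (?, ?, ?)");
try {
     while (dati.hasNext()) {
          Vector row = (Vector) dati.next();
          ps.setInt(1, Integer.parseInt((String) row.get(0)));
          ps.setInt(2, Integer.parseInt((String) row.get(1)));
          ps.setString(3, (String) row.get(2)); // no hand-added quotes
          ps.addBatch();
     }
     ps.executeBatch();
} catch (BatchUpdateException bue) {
     SQLException next = bue.getNextException(); // the real cause
     if (next != null) next.printStackTrace();
     throw bue;
} finally {
     ps.close();
}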
In fact, as you can see I call prepareStatement only once...
why do you say that I call it each time I add a row? 
Beats me. I apparently misread the code. 
Ok. Thanks for your answer ! 
Here is the code:
// here I call (once) prepareStatement()
PreparedStatement Pstatement = connection.prepareStatement(query+";");
      
// here I set the parameters
       while(dati.hasNext()){
         Vector daIns = (Vector) dati.next();
         for(int i=0;i<daIns.size();i++){
           Pstatement.setString(i+1, "'"+(String) daIns.get(i)+"'");
         }//for
         Pstatement.addBatch();
       }//while
       Pstatement.executeBatch();

May you explain why this beats you? Thanks 
"beats me" is American slang. It means more-or-less "I have no idea at all".
In other words, I misread the original code and made mistaken statements about it. (The theoretical point is still true; it just doesn't apply to your actual code.)

No columns in table error when committing via CachedRowSetDataProvider

Okay, another issue that's had me scratching what's left of my hair over the past couple of days.
I've got a CachedRowSetDataProvider tied to a CachedRowSetXImpl. This rowset pulls all data back from a table (command = select all FIRST_NAME, LAST_NAME from t_person). I want to insert/append a new row into that table. I'm doing something like the following:
RowKey rk = t_rolesDataProvider.appendRow();
t_rolesDataProvider.setCursorRow(rk);
t_rolesDataProvider.setValue("FIRST_NAME", fName);
t_rolesDataProvider.setValue("LAST_NAME", lName);
t_rolesDataProvider.commitChanges();
When it gets to the .commitChanges() line, I get the following error:
java.sql.SQLException: No columns in table: t_person
I've seen this reported a couple other times but haven't been able to find anything that helps me with my issue (even though what I'm doing seems much simpler than the other reports I've seen).
I've also reviewed several of the blogs regarding how you're supposed to be able to do this (insertableColumns and all that) but if there are any other resources, I'd appreciate them as well.
Here's the kicker: it works fine when I'm pointing to a MySQL database, but when I switch over to an Oracle database, I start getting these errors.
It only seems to be an issue when I go to insert, update or delete data. I can display data from these same tables just fine with either driver/database.
Please help!
Thanks,
TJ 
Hi TJ,
After executing your application, please check your App Server log file.
There may be an SQL error occurring: <installdir>/SunAppServer8/domains/creator/logs/server.log
If the SQL error isn't helpful, could you post your schema, including the data types?
Unfortunately, Oracle types will differ from MySQL types.
Thanks,
John 
I am also getting the same error.
It was working fine until I upgraded to JSC2 Update 1. I am using MySQL.
[#|2006-07-14T19:59:48.741+0100|WARNING|sun-appserver-pe8.2|javax.enterprise.system.stream.err|_ThreadID=17;|
java.sql.SQLException: No columns in table: user
at com.sun.sql.rowset.internal.CachedRowSetXWriter.initSQLStatements(CachedRowSetXWriter.java:1251)
at com.sun.sql.rowset.internal.CachedRowSetXWriter.writeData(CachedRowSetXWriter.java:327)
at com.sun.sql.rowset.CachedRowSetXImpl.acceptChanges(CachedRowSetXImpl.java:1036)
at com.sun.data.provider.impl.CachedRowSetDataProvider.commitChanges(CachedRowSetDataProvider.java:833) 
Further to this: I am using the same data provider to get data, and that works fine, but when committing changes for an insert it gives the error that there are no columns in the table. The server log doesn't help either; below is the server log for the same.
[#|2006-07-15T15:12:35.354+0100|INFO|sun-appserver-pe8.2|javax.enterprise.system.stream.out|_ThreadID=21;|
Reader executing query ps=0 SELECT ALL USER.Id,
USER.Username,
USER.Email,
USER.First_Name,
USER.Last_Name,
USER.Password,
USER.Creation_Date,
USER.Last_Login_Date,
USER.Activated,
USER.Activation_Date,
USER.Security_Question,
USER.Answer,
USER.Currently_Paid,
USER.Free_Trial_Period,
USER.Old_Email,
USER.Receive_Notification,
USER.Role
FROM USER
No Params|#]
[#|2006-07-15T15:12:36.486+0100|INFO|sun-appserver-pe8.2|javax.enterprise.system.stream.out|_ThreadID=21;|
Reader executing finished|#]
[#|2006-07-15T15:12:36.526+0100|WARNING|sun-appserver-pe8.2|javax.enterprise.system.stream.err|_ThreadID=21;|
java.sql.SQLException: No columns in table: user
at com.sun.sql.rowset.internal.CachedRowSetXWriter.initSQLStatements(CachedRowSetXWriter.java:1251)
at com.sun.sql.rowset.internal.CachedRowSetXWriter.writeData(CachedRowSetXWriter.java:327)
at com.sun.sql.rowset.CachedRowSetXImpl.acceptChanges(CachedRowSetXImpl.java:1036)
at com.sun.data.provider.impl.CachedRowSetDataProvider.commitChanges(CachedRowSetDataProvider.java:833)
Any suggestions?
Thanks. 
Hi *,
I have exactly the same issue using the PostgreSQL JDBC driver (postgresql-8.2dev-503).
Has anyone made progress regarding this issue/bug? If it is a bug at all... 
I don't know if my experience will help you, but just in case
I hit the same problem you are facing. I am using MySQL 4.1 and 5.0
Lately (I believe after upgrading to SJSC 2.1, but I am not sure), some code that used to work well started crashing (after small modifications and recompilation) with the "No columns in table" error. I tracked the error down to the trivial situation of mapping a rowset to a table and just doing a commitChanges(), without any manipulation at all. Reading the table was OK.
After some investigation, I found that the problem seems to be (in my case) related to lowercase-uppercase conversions (bad conversions). A rowset to a MySQL table named "user" was created using "USER" instead. I just had to rename all the references to USER in the creator-managed code related to the rowset.
I would consider this a bug, so if anybody in the support team reads this, please consider filing a problem report.
Regards and hope it helps
/luis 
This might be a long shot, but have you tried prefixing the column name with the table name in your insert? i.e.
t_rolesDataProvider.setValue("tablename.FIRST_NAME", fName);
When I was working with inserts, I always had to specify the table name. 
I also get the error "No columns in table" when committing.
I've tried everything, but it won't update. What am I doing wrong?
Do we have to update every column in a row, or can we update just some of the columns?
I get this error even when I try to appendRow or update a row that exists. 
Got this to work now. It was my datatype in my tables that was causing the problem. Once I changed from ntext to nvarchar, everything worked just fine. 
I am also having this error inserting into a MySQL table. I am updating a view, but the view is of the base table, so it should be updatable. When I debug this, I see that the "FieldKeys" are fully populated, and I see nothing irregular inside the DataProvider.
getPDataProvider().cursorLast();  // added to try and populate columns
if ( getPDataProvider().canAppendRow() ) {
    try
    {
        getSessionBean1().getPRowSet().execute(); // added to try and populate columns
        RowKey rowKey = getPDataProvider().appendRow();
        getPDataProvider().setCursorRow(rowKey);
        getPDataProvider().setValue("partys.PartyId"        , rowKey, (new Integer(0)));
        getPDataProvider().setValue("partys.GameId"         , rowKey, (new Integer(this.getSessionBean1().getSelectedGameId() )));
        getPDataProvider().setValue("partys.HostId"         , rowKey, (new Integer(this.getSessionBean1().getCurrentHostId()  )));
        getPDataProvider().setValue("partys.PartyName"      , rowKey, "Enter Party Name Here" );
        getPDataProvider().setValue("partys.StatusId"       , rowKey, (new Integer(0)));
        getPDataProvider().setValue("partys.PartPassCode"   , rowKey, "Enter Password" );
        getPDataProvider().setValue("partys.PersonalMessage", rowKey, "We're having a Party" );
        getPDataProvider().commitChanges();
    } catch (Exception e) {
        error("Cannot create new record: " + e);
        return ("stay");
    }
}

Has anyone found this problem?
[#|2006-10-28T10:26:35.296-0400|WARNING|sun-appserver-pe8.2|javax.enterprise.system.stream.err|_ThreadID=18;|
java.sql.SQLException: No columns in table: partys
     at com.sun.sql.rowset.internal.CachedRowSetXWriter.initSQLStatements(CachedRowSetXWriter.java:1251)
     at com.sun.sql.rowset.internal.CachedRowSetXWriter.writeData(CachedRowSetXWriter.java:327)
     at com.sun.sql.rowset.CachedRowSetXImpl.acceptChanges(CachedRowSetXImpl.java:1036)
     at com.sun.data.provider.impl.CachedRowSetDataProvider.commitChanges(CachedRowSetDataProvider.java:833)
     at vehicleincidentreportapplication.WWM_Players.createANewPartyButton_action(WWM_Players.java:631)
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
     at java.lang.reflect.Method.invoke(Method.java:585)
     at com.sun.faces.el.MethodBindingImpl.invoke(MethodBindingImpl.java:126)
     at com.sun.faces.application.ActionListenerImpl.processAction(ActionListenerImpl.java:72)
     at com.sun.rave.web.ui.appbase.faces.ActionListenerImpl.processAction(ActionListenerImpl.java:57)
     at javax.faces.component.UICommand.broadcast(UICommand.java:312)
     at javax.faces.component.UIViewRoot.broadcastEvents(UIViewRoot.java:267)
     at javax.faces.component.UIViewRoot.processApplication(UIViewRoot.java:381)
     at com.sun.faces.lifecycle.InvokeApplicationPhase.execute(InvokeApplicationPhase.java:75)
     at com.sun.faces.lifecycle.LifecycleImpl.phase(LifecycleImpl.java:221)
     at com.sun.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:90)
     at javax.faces.webapp.FacesServlet.service(FacesServlet.java:197)
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
I solved this problem.
Verify the TableName property of the RowSet.
Don't set the creator (schema) before the table name.
Write only the table name, without the creator.
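For example (a sketch; the rowset name is illustrative):
partysRowSet.setTableName("partys");           // just the table name
// not: partysRowSet.setTableName("creator.partys");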
I hope it will be useful to you.
Gabriele 
Gabriele, would you please write the code? I'm a little bit stuck here :) 
OK, after almost 3 days, this is how I made it work.
Besides specifying the column names, you need to set the table names and schemas for each column. Which seems odd, I know, especially since all of that could already be set up.
So, I have a db file called PRT001 with 4 columns; one of them is an autoincrement field, and I don't want to set up anything there.
prt001RowSet.setCommand("SELECT * FROM PRT001");
prt001RowSet.setTableName("PRT001");
prt001RowSet.setSchemaName("RLMFILES");
prt001RowSet.setColumnNames(new String[] {"PRT001ID", "CPY001", "COD001", "DES001"});
prt001RowSet.setColumnTableNames(new String[] {"PRT001", "PRT001", "PRT001", "PRT001"});
prt001RowSet.setColumnSchemaNames(new String[] {"RLMFILES", "RLMFILES", "RLMFILES", "RLMFILES"});
prt001RowSet.setInsertableColumns(new boolean[] {false, true, true, true});
prt001RowSet.setUpdatableColumns(new boolean[] {false, true, true, true});
As you can see, on the insertable columns I left the first one as false; that's the autoincrement column.
Hope that works for you too. 
Switching between an Oracle and a MySQL database can cause problems because of table and column name capitalization.
For example:
        employeeRowSet.setTableName("EMPLOYEE");
is different from
        employeeRowSet.setTableName("employee");
