I'm working with the UltraTree and we have a table with self-referencing data that forms a hierarchy.
According to the following KB article, one step in loading the data is hiding many of the "root-level nodes": by default they are positioned (and visible) at the root of the tree, even though they reference a valid parent that lives elsewhere in the tree.
"If you were to run your application at this point, you will notice that not only does the UltraTree display all of the nodes in the correct hierarchy, but it also displays all of the nodes on the root level as well. This is because the UltraTree does not intuitively know not to show the unnecessary data. To work around this problem, you need to handle the UltraTree’s InitializeDataNode event and hide the unnecessary root-level nodes."
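For reference, our handler follows the KB's pattern and looks roughly like this (a minimal sketch; the column names `ID` and `ParentID` and the pre-built ID set are placeholders for our actual schema, not the KB's exact code):

```csharp
// Assumes a flat self-referencing table with columns "ID" and "ParentID".
// allIds is built once from the table before binding, so the lookup is O(1).
private HashSet<object> allIds;

private void ultraTree1_InitializeDataNode(object sender, InitializeDataNodeEventArgs e)
{
    DataRowView row = e.Node.ListObject as DataRowView;
    if (row == null)
        return;

    object parentId = row["ParentID"];

    // A root-level node that references a valid parent is a duplicate of a
    // node that already appears deeper in the hierarchy, so hide it.
    if (e.Node.Parent == null && parentId != DBNull.Value && allIds.Contains(parentId))
        e.Node.Visible = false;
}
```

Setting `Visible = false` here, one node at a time, is exactly the step the profiler shows as expensive.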
The main problem I'm encountering when loading lots of data is the amount of CPU time spent setting UltraTreeNode.Visible to false. This takes a long time for a large tree. The profiler shows most of the time being spent inside internal UltraTree code, during the InitializeDataNode event.
Given that I'm waiting so long just to "hide the unnecessary root-level nodes", I suspect I've reached the limit of the number of nodes the UltraTree can support. We are loading tens of thousands of nodes.
Can someone verify that this is the practical limit of what the UltraTree supports? Has anyone tried to load trees with more than 10,000 items? Is there any way to improve the performance of changing the visibility of these nodes? Would it be possible to do this on background threads? (I'm assuming that would introduce UI or concurrency errors...)
Any help would be appreciated.
Thank you for contacting Infragistics. I personally have not seen any issues with binding over 10,000 records. Please review and modify the attached sample so I can review it. Thank you.
Thank you for the sample. I've modified it to include sample data whose initialization is CPU-intensive and takes a long time. That initialization work hides the "root-level nodes" that have a parent relationship, as explained in the KB article I referred to earlier.
If you monitor the startup of the application with the "diagnostic tools" in Visual Studio (via CPU profiling), you should see that the vast majority of the initialization work is related to hiding the "root-level nodes". Ideally this could be made faster (possibly by initializing node visibility proactively, and/or by using background threads).
Please let me know if you have trouble with this sample.
I cannot upload my sample here. Please include instructions for doing that.
I get the error:
An error occurred. Please try again or contact your administrator.
I wasn't able to upload my file here, so I uploaded it to a support case. Hopefully the results of that case will be reported back here for the benefit of others.
The support case is CAS-205474-S0Z8D0.
One way around this would be to use two different tables.
It's simple to create a DataSet with a single table that references itself, but then every row exists at the root level, and you have to somehow hide the non-root nodes that appear there.
But you don't have to do it that way. You could create a root-level table (using a query) that contains only the root-level rows you want. Then create a relationship from that root-level table to the child table (which contains ALL the rows), and a second relationship from the child table back to itself.
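As a rough sketch of the two-table approach (table and column names are placeholders, and the constraint-free DataRelation overload is used so one child column can participate in both relationships):

```csharp
// All rows, including non-roots, live in the "Nodes" table.
DataTable nodes = GetAllRows();            // assumed columns: ID, ParentID
nodes.TableName = "Nodes";
DataTable roots = nodes.Clone();
roots.TableName = "Roots";

// Copy only the true root rows (no parent) into the root-level table.
foreach (DataRow row in nodes.Select("ParentID IS NULL"))
    roots.ImportRow(row);

DataSet ds = new DataSet();
ds.Tables.Add(roots);
ds.Tables.Add(nodes);

// Roots -> their children, then children -> their own children, recursively.
ds.Relations.Add("RootChildren", roots.Columns["ID"], nodes.Columns["ParentID"], false);
ds.Relations.Add("Children", nodes.Columns["ID"], nodes.Columns["ParentID"], false);

// Bind the tree to the root-level table only; no nodes need to be hidden,
// so the per-node Visible = false work disappears entirely.
ultraTree1.SetDataBinding(ds, "Roots");
```

The point of the design is that the expensive hiding step is replaced by filtering once, up front, at the data layer.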